path | concatenated_notebook
---|---
Data Analysis-Pandas/Data Analysis-Pandas-2/Project_.ipynb | ###Markdown
Project : Holiday weatherThere is nothing I like better than taking a holiday. In this project I am going to use the historic weather data from the Weather Underground for London to try to predict two good weather weeks to take off as holiday. Of course the weather in the summer of 2016 may be very different to 2014 but it should give some indication of when would be a good time to take a summer break. Getting the dataWeather Underground keeps historical weather data collected in many airports around the world. Right-click on the following URL and choose 'Open Link in New Window' (or similar, depending on your browser):http://www.wunderground.com/historyWhen the new page opens start typing 'LHR' in the 'Location' input box and when the pop up menu comes up with the option 'LHR, United Kingdom' select it and then click on 'Submit'. When the next page opens with London Heathrow data, click on the 'Custom' tab and select the time period From: 1 January 2014 to: 31 December 2014 and then click on 'Get History'. The data for that year should then be displayed further down the page. You can copy each month's data directly from the browser to a text editor like Notepad or TextEdit, to obtain a single file with as many months as you wish.Weather Underground has changed the way it provides data in the past and may do so again in the future. I have therefore collated the whole 2014 data in the provided 'London_2014.csv' file which can be found in the project folder. Now load the CSV file into a dataframe making sure that any extra spaces are skipped:
###Code
import warnings
warnings.simplefilter('ignore', FutureWarning)
from datetime import datetime

import pandas as pd
london = pd.read_csv('London_2014.csv', skipinitialspace=True)
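# A quick sanity check: display the first few rows to confirm the file loaded
# and that skipinitialspace removed the stray spaces from the column names.
london.head()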
###Output
_____no_output_____
###Markdown
Cleaning the dataFirst we need to clean up the data. I'm not going to make use of `'WindDirDegrees'` in my analysis, but you might in yours so we'll rename `'WindDirDegrees<br />'` to `'WindDirDegrees'`.
###Code
london = london.rename(columns={'WindDirDegrees<br />' : 'WindDirDegrees'})
###Output
_____no_output_____
###Markdown
remove the `<br />` HTML line breaks from the values in the `'WindDirDegrees'` column.
###Code
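# str.rstrip strips any of the listed characters from the right of each string,
# which is enough here to remove the trailing '<br />' from these numeric values.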
london['WindDirDegrees'] = london['WindDirDegrees'].str.rstrip('<br />')
###Output
_____no_output_____
###Markdown
and change the values in the `'WindDirDegrees'` column to `float64`:
###Code
london['WindDirDegrees'] = london['WindDirDegrees'].astype('float64')
###Output
_____no_output_____
###Markdown
We definitely need to change the values in the `'GMT'` column into values of the `datetime64` date type.
###Code
london['GMT'] = pd.to_datetime(london['GMT'])
###Output
_____no_output_____
###Markdown
We also need to change the index from the default to the `datetime64` values in the `'GMT'` column so that it is easier to pull out rows between particular dates and display more meaningful graphs:
###Code
london.index = london['GMT']
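# With a datetime index, pulling out rows between two dates is straightforward,
# e.g. (purely as an illustration) the first week of February:
london.loc[datetime(2014, 2, 1) : datetime(2014, 2, 7)]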
###Output
_____no_output_____
###Markdown
Finding a summer breakAccording to meteorologists, summer extends for the whole months of June, July, and August in the northern hemisphere and the whole months of December, January, and February in the southern hemisphere. So as I'm in the northern hemisphere I'm going to create a dataframe that holds just those months using the `datetime` index, like this:
###Code
summer = london.loc[datetime(2014,6,1) : datetime(2014,8,31)]
###Output
_____no_output_____
###Markdown
I now look for the days with warm temperatures.
###Code
summer[summer['Mean TemperatureC'] >= 25]
###Output
_____no_output_____
###Markdown
Summer 2014 was rather cool in London: there are no days with a mean temperature of 25 Celsius or higher. Best to see a graph of the temperature and look for the warmest period. So next we tell Jupyter to display any graph created inside this notebook:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's plot the `'Mean TemperatureC'` for the summer:
###Code
summer['Mean TemperatureC'].plot(grid=True, figsize=(10,5))
###Output
_____no_output_____
###Markdown
Looking at the graph, the second half of July looks good for mean temperatures over 20 degrees C, so let's put precipitation on the graph too:
###Code
summer[['Mean TemperatureC', 'Precipitationmm']].plot(grid=True, figsize=(10,5))
###Output
_____no_output_____
###Markdown
The second half of July is still looking good, with just a couple of peaks showing heavy rain. Let's have a closer look by just plotting mean temperature and precipitation for July.
###Code
july = summer.loc[datetime(2014,7,1) : datetime(2014,7,31)]
july[['Mean TemperatureC', 'Precipitationmm']].plot(grid=True, figsize=(10,5))
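# As a rough check on the candidate fortnight, summarise 16-31 July; the exact
# dates are an illustrative assumption, not a recommendation from the analysis.
fortnight = july.loc[datetime(2014, 7, 16) : datetime(2014, 7, 31)]
print(fortnight['Mean TemperatureC'].mean(), fortnight['Precipitationmm'].sum())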
###Output
_____no_output_____ |
site/zh-cn/beta/tutorials/keras/feature_columns.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classify structured data View on tensorflow.google.cn Run in Google Colab View source on GitHub Download notebook Note: This documentation was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are accurate and reflect the latest [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, please join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-zh-cn). This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). We will use [Keras](https://tensorflow.google.cn/guide/keras) to define the model, and [feature columns](https://tensorflow.google.cn/guide/feature_columns) as a bridge to map from the columns in the CSV to the features used to train the model. This tutorial contains complete code to: * Load a CSV file using [Pandas](https://pandas.pydata.org/). * Build an input pipeline to batch and shuffle the rows using [tf.data](https://tensorflow.google.cn/guide/datasets). * Map from columns in the CSV to the features used to train the model with feature columns. * Build, train, and evaluate a model with Keras. The dataset We will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. There are a few hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task. Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice there are both numeric and categorical columns.
>Column | Description | Feature Type | Data Type
>------------|--------------------|----------------------|-----------------
>Age | Age in years | Numerical | integer
>Sex | (1 = male; 0 = female) | Categorical | integer
>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer
>Trestbpd | Resting blood pressure (in mm Hg on admission) | Numerical | integer
>Chol | Serum cholesterol in mg/dl | Numerical | integer
>FBS | (Fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer
>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer
>Thalach | Maximum heart rate achieved | Numerical | integer
>Exang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer
>Oldpeak | ST depression induced by exercise relative to rest | Numerical | integer
>Slope | The slope of the peak exercise ST segment | Numerical | float
>CA | Number of major vessels (0-3) colored by fluoroscopy | Numerical | integer
>Thal | 3 = normal; 6 = fixed defect; 7 = reversible defect | Categorical | string
>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer
Import TensorFlow and other libraries
###Code
!pip install sklearn
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Use Pandas to create a dataframe [Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset from a URL, and load it into a dataframe.
###Code
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
###Output
_____no_output_____
###Markdown
Split the dataframe into train, validation, and test sets The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
_____no_output_____
###Markdown
Create an input pipeline using tf.data Next, we will wrap the dataframe with [tf.data](https://tensorflow.google.cn/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to the features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this tutorial.
###Code
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('target')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5 # a small batch size is used for demonstration
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Understand the input pipeline Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
###Output
_____no_output_____
###Markdown
We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Demonstrate several types of feature columns TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column
# and to transform a batch of data
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric columns The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A [numeric column](https://tensorflow.google.cn/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
###Code
age = feature_column.numeric_column("age")
demo(age)
###Output
_____no_output_____
###Markdown
In the heart disease dataset, most columns from the dataframe are numeric. Bucketized columns Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://tensorflow.google.cn/api_docs/python/tf/feature_column/bucketized_column). Notice the one-hot values below describe which age range each row matches.
###Code
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
###Output
_____no_output_____
###Markdown
Categorical columns In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. Categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with the age buckets). The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://tensorflow.google.cn/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://tensorflow.google.cn/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
###Code
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
_____no_output_____
###Markdown
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets. Embedding columns Suppose instead of having just a few possible strings, we had thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://tensorflow.google.cn/api_docs/python/tf/feature_column/embedding_column) represents the data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned. Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.
###Code
# Notice the input to the embedding column is the categorical column we previously created
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
_____no_output_____
###Markdown
Hashed feature columns Another way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://tensorflow.google.cn/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode the string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space. Key point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can still work well for some datasets.
###Code
thal_hashed = feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
###Output
_____no_output_____
###Markdown
Crossed feature columns Combining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
###Output
_____no_output_____
###Markdown
Choose which columns to use We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. the mechanics) needed to work with feature columns. We have selected a few columns to train our model somewhat arbitrarily. Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
feature_columns = []
# numeric columns
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
feature_columns.append(feature_column.numeric_column(header))
# bucketized columns
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# categorical columns
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding columns
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed columns
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
Create a new feature layer Now that we have defined our feature columns, we will use a [DenseFeatures](https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them into our Keras model.
###Code
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns work. We will now create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, compile, and train the model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'],
run_eagerly=True)
model.fit(train_ds,
validation_data=val_ds,
epochs=5)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____ |
examples/notebooks/ets.ipynb | ###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook gives a very brief introduction to these models and shows how they can be used with statsmodels. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7 Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
import pandas as pd
import matplotlib.pyplot as plt

oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]). Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
fit.summary()
fit_heuristic.summary()
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
fit._rank
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook gives a very brief introduction to these models and shows how they can be used with statsmodels. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]). Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
fit.summary()
fit_heuristic.summary()
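# A quick numeric comparison of the two fits (a small illustrative addition):
# their maximised log-likelihoods.
print(fit.llf, fit_heuristic.llf)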
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
fit._rank
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is a bit lower than the one using a heuristic for the initial states.Additionally, we see that $\beta$ (`smoothing_trend`) is at the boundary of the default parameter bounds, and therefore it's not possible to estimate confidence intervals for $\beta$.
###Code
fit.summary()
fit_heuristic.summary()
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
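# The forecast method gives point forecasts only; a horizon of 17 quarters is
# chosen here simply to mirror the prediction range above.
print(fit.forecast(steps=17))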
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil)
fit = model.fit(maxiter=10000)
oil.plot(label="data")
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label="R fit", linestyle="--")
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. It is possible to only use a heuristic for the initial values:
###Code
model_heuristic = ETSModel(oil, initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is fractionally lower than the one using a heuristic for the initial states.
###Code
print(fit.summary())
print(fit_heuristic.summary())
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
print(fit.summary())
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
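# The simulated paths can also be summarised into an empirical 95% band
# (2.5% / 97.5% percentiles across the 100 repetitions plotted above):
sim_band = simulated.quantile([0.025, 0.975], axis=1).T
sim_band.head()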
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil)
fit = model.fit(maxiter=10000)
oil.plot(label="data")
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label="R fit", linestyle="--")
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. It is possible to only use a heuristic for the initial values:
###Code
model_heuristic = ETSModel(oil, initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is fractionally lower than the one using a heuristic for the initial states.
###Code
print(fit.summary())
print(fit_heuristic.summary())
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
print(fit.summary())
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
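# predict covers both in-sample and out-of-sample point predictions over the
# same range (a small illustrative addition):
print(fit.predict(start="2014", end="2020").head())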
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams["figure.figsize"] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091,
130.8284,
141.2871,
154.2278,
162.7409,
192.1665,
240.7997,
304.2174,
384.0046,
429.6622,
359.3169,
437.2519,
468.4008,
424.4353,
487.9794,
509.8284,
506.3473,
340.1842,
240.2589,
219.0328,
172.0747,
252.5901,
221.0711,
276.5188,
271.1480,
342.6186,
428.3558,
442.3946,
432.7851,
437.2497,
437.2092,
445.3641,
453.1950,
454.4096,
422.3789,
456.0371,
440.3866,
425.1944,
486.2052,
500.4291,
521.2759,
508.9476,
488.8889,
509.8706,
456.7229,
473.8166,
525.9509,
549.8338,
542.3405,
]
oil = pd.Series(oildata, index=pd.date_range("1965", "2013", freq="AS"))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil)
fit = model.fit(maxiter=10000)
oil.plot(label="data")
fit.fittedvalues.plot(label="statsmodels fit")
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label="R fit", linestyle="--")
plt.legend()
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. It is possible to only use a heuristic for the initial values:
###Code
model_heuristic = ETSModel(oil, initialization_method="heuristic")
fit_heuristic = model_heuristic.fit()
oil.plot(label="data")
fit.fittedvalues.plot(label="estimated")
fit_heuristic.fittedvalues.plot(label="heuristic", linestyle="--")
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label="with R params", linestyle=":")
plt.legend()
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is fractionally lower than the one using a heuristic for the initial states.
###Code
print(fit.summary())
print(fit_heuristic.summary())
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300,
19.14849600,
25.31769200,
27.59143700,
32.07645600,
23.48796100,
28.47594000,
35.12375300,
36.83848500,
25.00701700,
30.72223000,
28.69375900,
36.64098600,
23.82460900,
29.31168300,
31.77030900,
35.17787700,
19.77524400,
29.60175000,
34.53884200,
41.27359900,
26.65586200,
28.27985900,
35.19115300,
42.20566386,
24.64917133,
32.66733514,
37.25735401,
45.24246027,
29.35048127,
36.34420728,
41.78208136,
49.27659843,
31.27540139,
37.85062549,
38.83704413,
51.23690034,
31.83855162,
41.32342126,
42.79900337,
55.70835836,
33.40714492,
42.31663797,
45.15712257,
59.57607996,
34.83733016,
44.84168072,
46.97124960,
60.01903094,
38.37117851,
46.97586413,
50.73379646,
61.64687319,
39.29956937,
52.67120908,
54.33231689,
66.83435838,
40.87118847,
51.82853579,
57.49190993,
65.25146985,
43.06120822,
54.76075713,
59.83447494,
73.25702747,
47.69662373,
61.09776802,
66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel("Australian Tourists")
# fit in statsmodels
model = ETSModel(
austourists,
error="add",
trend="add",
seasonal="add",
damped_trend=True,
seasonal_periods=4,
)
fit = model.fit()
# fit with R params
params_R = [
0.35445427,
0.03200749,
0.39993387,
0.97999997,
24.01278357,
0.97770147,
1.76951063,
-0.50735902,
-6.61171798,
5.34956637,
]
fit_R = model.smooth(params_R)
austourists.plot(label="data")
plt.ylabel("Australian Tourists")
fit.fittedvalues.plot(label="statsmodels fit")
fit_R.fittedvalues.plot(label="R fit", linestyle="--")
plt.legend()
print(fit.summary())
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start="2014", end="2020")
df = pred.summary_frame(alpha=0.05)
df
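# The other helpers listed above work directly on the fitted results object.
# A small sketch; the 8-step horizon is an arbitrary illustrative choice.
point_forecasts = fit.forecast(steps=8)  # out-of-sample point forecasts
in_and_out_of_sample = fit.predict(start="2014", end="2020")  # in- and out-of-sample
print(point_forecasts)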
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:, i].plot(label="_", color="gray", alpha=0.1)
df["mean"].plot(label="mean prediction")
df["pi_lower"].plot(linestyle="--", color="tab:blue", label="95% interval")
df["pi_upper"].plot(linestyle="--", color="tab:blue", label="_")
pred.endog.plot(label="data")
plt.legend()
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook gives a very brief introduction to these models and shows how they can be used with statsmodels. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
fit.summary()
fit_heuristic.summary()
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
fit._rank
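# A short out-of-sample forecast from the fitted seasonal model above
# (4 quarters ahead; the horizon is an arbitrary illustrative choice):
print(fit.forecast(steps=4))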
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is a bit lower than the one using a heuristic for the initial states.Additionally, we see that $\beta$ (`smoothing_trend`) is at the boundary of the default parameter bounds, and therefore it's not possible to estimate confidence intervals for $\beta$.
###Code
print(fit.summary())
print(fit_heuristic.summary())
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
print(fit.summary())
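# The fitted smoothing parameters can also be read off fit.params. Treating its
# layout as identical to the params_R vector above (alpha, beta, gamma, phi,
# followed by the initial states) is an assumption based on that vector.
alpha_hat, beta_hat, gamma_hat, phi_hat = fit.params[:4]
print('alpha=%.4f, beta=%.4f, gamma=%.4f, phi=%.4f' % (alpha_hat, beta_hat, gamma_hat, phi_hat))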
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook gives a very brief introduction to these models and shows how they can be used with statsmodels. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
fit.summary()
fit_heuristic.summary()
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
fit._rank
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and George Athanasopoulos. *Forecasting: principles and practice*, 3rd edition, OTexts, 2019. https://www.otexts.org/fpp3/7
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams['figure.figsize'] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091, 130.8284, 141.2871, 154.2278,
162.7409, 192.1665, 240.7997, 304.2174,
384.0046, 429.6622, 359.3169, 437.2519,
468.4008, 424.4353, 487.9794, 509.8284,
506.3473, 340.1842, 240.2589, 219.0328,
172.0747, 252.5901, 221.0711, 276.5188,
271.1480, 342.6186, 428.3558, 442.3946,
432.7851, 437.2497, 437.2092, 445.3641,
453.1950, 454.4096, 422.3789, 456.0371,
440.3866, 425.1944, 486.2052, 500.4291,
521.2759, 508.9476, 488.8889, 509.8706,
456.7229, 473.8166, 525.9509, 549.8338,
542.3405
]
oil = pd.Series(oildata, index=pd.date_range('1965', '2013', freq='AS'))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil, error='add', trend='add', damped_trend=True)
fit = model.fit(maxiter=10000)
oil.plot(label='data')
fit.fittedvalues.plot(label='statsmodels fit')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label='R fit', linestyle='--')
plt.legend();
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. Additionally it is possible to only use a heuristic for the initial values. In this case this leads to better agreement with the R implementation.
###Code
model_heuristic = ETSModel(oil, error='add', trend='add', damped_trend=True,
initialization_method='heuristic')
fit_heuristic = model_heuristic.fit()
oil.plot(label='data')
fit.fittedvalues.plot(label='estimated')
fit_heuristic.fittedvalues.plot(label='heuristic', linestyle='--')
plt.ylabel("Annual oil production in Saudi Arabia (Mt)");
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label='with R params', linestyle=':')
plt.legend();
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is a bit lower than the one using a heuristic for the initial states.Additionally, we see that $\beta$ (`smoothing_trend`) is at the boundary of the default parameter bounds, and therefore it's not possible to estimate confidence intervals for $\beta$.
###Code
fit.summary()
fit_heuristic.summary()
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300, 19.14849600, 25.31769200, 27.59143700,
32.07645600, 23.48796100, 28.47594000, 35.12375300,
36.83848500, 25.00701700, 30.72223000, 28.69375900,
36.64098600, 23.82460900, 29.31168300, 31.77030900,
35.17787700, 19.77524400, 29.60175000, 34.53884200,
41.27359900, 26.65586200, 28.27985900, 35.19115300,
42.20566386, 24.64917133, 32.66733514, 37.25735401,
45.24246027, 29.35048127, 36.34420728, 41.78208136,
49.27659843, 31.27540139, 37.85062549, 38.83704413,
51.23690034, 31.83855162, 41.32342126, 42.79900337,
55.70835836, 33.40714492, 42.31663797, 45.15712257,
59.57607996, 34.83733016, 44.84168072, 46.97124960,
60.01903094, 38.37117851, 46.97586413, 50.73379646,
61.64687319, 39.29956937, 52.67120908, 54.33231689,
66.83435838, 40.87118847, 51.82853579, 57.49190993,
65.25146985, 43.06120822, 54.76075713, 59.83447494,
73.25702747, 47.69662373, 61.09776802, 66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel('Australian Tourists');
# fit in statsmodels
model = ETSModel(austourists, error="add", trend="add", seasonal="add",
damped_trend=True, seasonal_periods=4)
fit = model.fit()
# fit with R params
params_R = [
0.35445427, 0.03200749, 0.39993387, 0.97999997, 24.01278357,
0.97770147, 1.76951063, -0.50735902, -6.61171798, 5.34956637
]
fit_R = model.smooth(params_R)
austourists.plot(label='data')
plt.ylabel('Australian Tourists')
fit.fittedvalues.plot(label='statsmodels fit')
fit_R.fittedvalues.plot(label='R fit', linestyle='--')
plt.legend();
fit.summary()
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start='2014', end='2020')
df = pred.summary_frame(alpha=0.05)
df
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:,i].plot(label='_', color='gray', alpha=0.1)
df["mean"].plot(label='mean prediction')
df["pi_lower"].plot(linestyle='--', color='tab:blue', label='95% interval')
df["pi_upper"].plot(linestyle='--', color='tab:blue', label='_')
pred.endog.plot(label='data')
plt.legend()
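# Empirical 95% band from the simulated paths, for comparison with the
# analytical interval plotted above (quantiles across the 100 repetitions):
sim_lower = simulated.quantile(0.025, axis=1)
sim_upper = simulated.quantile(0.975, axis=1)
print(pd.concat([sim_lower, sim_upper], axis=1, keys=['sim_lower', 'sim_upper']).head())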
###Output
_____no_output_____
###Markdown
ETS modelsThe ETS models are a family of time series models with an underlying state space model consisting of a level component, a trend component (T), a seasonal component (S), and an error term (E).This notebook shows how they can be used with `statsmodels`. For a more thorough treatment we refer to [1], chapter 8 (free online resource), on which the implementation in statsmodels and the examples used in this notebook are based.`statsmodels` implements all combinations of:- additive and multiplicative error model- additive and multiplicative trend, possibly dampened- additive and multiplicative seasonalityHowever, not all of these methods are stable. Refer to [1] and references therein for more info about model stability.[1] Hyndman, Rob J., and Athanasopoulos, George. *Forecasting: principles and practice*, 3rd edition, OTexts, 2021. https://otexts.com/fpp3/expsmooth.html
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
from statsmodels.tsa.exponential_smoothing.ets import ETSModel
plt.rcParams["figure.figsize"] = (12, 8)
###Output
_____no_output_____
###Markdown
Simple exponential smoothingThe simplest of the ETS models is also known as *simple exponential smoothing*. In ETS terms, it corresponds to the (A, N, N) model, that is, a model with additive errors, no trend, and no seasonality. The state space formulation of Holt's method is:\begin{align}y_{t} &= y_{t-1} + e_t\\l_{t} &= l_{t-1} + \alpha e_t\\\end{align}This state space formulation can be turned into a different formulation, a forecast and a smoothing equation (as can be done with all ETS models):\begin{align}\hat{y}_{t|t-1} &= l_{t-1}\\l_{t} &= \alpha y_{t-1} + (1 - \alpha) l_{t-1}\end{align}Here, $\hat{y}_{t|t-1}$ is the forecast/expectation of $y_t$ given the information of the previous step. In the simple exponential smoothing model, the forecast corresponds to the previous level. The second equation (smoothing equation) calculates the next level as weighted average of the previous level and the previous observation.
###Code
oildata = [
111.0091,
130.8284,
141.2871,
154.2278,
162.7409,
192.1665,
240.7997,
304.2174,
384.0046,
429.6622,
359.3169,
437.2519,
468.4008,
424.4353,
487.9794,
509.8284,
506.3473,
340.1842,
240.2589,
219.0328,
172.0747,
252.5901,
221.0711,
276.5188,
271.1480,
342.6186,
428.3558,
442.3946,
432.7851,
437.2497,
437.2092,
445.3641,
453.1950,
454.4096,
422.3789,
456.0371,
440.3866,
425.1944,
486.2052,
500.4291,
521.2759,
508.9476,
488.8889,
509.8706,
456.7229,
473.8166,
525.9509,
549.8338,
542.3405,
]
oil = pd.Series(oildata, index=pd.date_range("1965", "2013", freq="AS"))
oil.plot()
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
###Output
_____no_output_____
###Markdown
The plot above shows annual oil production in Saudi Arabia in million tonnes. The data are taken from the R package `fpp2` (companion package to prior version [1]).Below you can see how to fit a simple exponential smoothing model using statsmodels's ETS implementation to this data. Additionally, the fit using `forecast` in R is shown as comparison.
###Code
model = ETSModel(oil)
fit = model.fit(maxiter=10000)
oil.plot(label="data")
fit.fittedvalues.plot(label="statsmodels fit")
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
# obtained from R
params_R = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params_R).fittedvalues
yhat.plot(label="R fit", linestyle="--")
plt.legend()
###Output
_____no_output_____
###Markdown
By default the initial states are considered to be fitting parameters and are estimated by maximizing log-likelihood. It is possible to only use a heuristic for the initial values:
###Code
model_heuristic = ETSModel(oil, initialization_method="heuristic")
fit_heuristic = model_heuristic.fit()
oil.plot(label="data")
fit.fittedvalues.plot(label="estimated")
fit_heuristic.fittedvalues.plot(label="heuristic", linestyle="--")
plt.ylabel("Annual oil production in Saudi Arabia (Mt)")
# obtained from R
params = [0.99989969, 0.11888177503085334, 0.80000197, 36.46466837, 34.72584983]
yhat = model.smooth(params).fittedvalues
yhat.plot(label="with R params", linestyle=":")
plt.legend()
###Output
_____no_output_____
###Markdown
The fitted parameters and some other measures are shown using `fit.summary()`. Here we can see that the log-likelihood of the model using fitted initial states is fractionally lower than the one using a heuristic for the initial states.
###Code
print(fit.summary())
print(fit_heuristic.summary())
###Output
_____no_output_____
###Markdown
Holt-Winters' seasonal methodThe exponential smoothing method can be modified to incorporate a trend and a seasonal component. In the additive Holt-Winters' method, the seasonal component is added to the rest. This model corresponds to the ETS(A, A, A) model, and has the following state space formulation:\begin{align}y_t &= l_{t-1} + b_{t-1} + s_{t-m} + e_t\\l_{t} &= l_{t-1} + b_{t-1} + \alpha e_t\\b_{t} &= b_{t-1} + \beta e_t\\s_{t} &= s_{t-m} + \gamma e_t\end{align}
###Code
austourists_data = [
30.05251300,
19.14849600,
25.31769200,
27.59143700,
32.07645600,
23.48796100,
28.47594000,
35.12375300,
36.83848500,
25.00701700,
30.72223000,
28.69375900,
36.64098600,
23.82460900,
29.31168300,
31.77030900,
35.17787700,
19.77524400,
29.60175000,
34.53884200,
41.27359900,
26.65586200,
28.27985900,
35.19115300,
42.20566386,
24.64917133,
32.66733514,
37.25735401,
45.24246027,
29.35048127,
36.34420728,
41.78208136,
49.27659843,
31.27540139,
37.85062549,
38.83704413,
51.23690034,
31.83855162,
41.32342126,
42.79900337,
55.70835836,
33.40714492,
42.31663797,
45.15712257,
59.57607996,
34.83733016,
44.84168072,
46.97124960,
60.01903094,
38.37117851,
46.97586413,
50.73379646,
61.64687319,
39.29956937,
52.67120908,
54.33231689,
66.83435838,
40.87118847,
51.82853579,
57.49190993,
65.25146985,
43.06120822,
54.76075713,
59.83447494,
73.25702747,
47.69662373,
61.09776802,
66.05576122,
]
index = pd.date_range("1999-03-01", "2015-12-01", freq="3MS")
austourists = pd.Series(austourists_data, index=index)
austourists.plot()
plt.ylabel("Australian Tourists")
# fit in statsmodels
model = ETSModel(
austourists,
error="add",
trend="add",
seasonal="add",
damped_trend=True,
seasonal_periods=4,
)
fit = model.fit()
# fit with R params
params_R = [
0.35445427,
0.03200749,
0.39993387,
0.97999997,
24.01278357,
0.97770147,
1.76951063,
-0.50735902,
-6.61171798,
5.34956637,
]
fit_R = model.smooth(params_R)
austourists.plot(label="data")
plt.ylabel("Australian Tourists")
fit.fittedvalues.plot(label="statsmodels fit")
fit_R.fittedvalues.plot(label="R fit", linestyle="--")
plt.legend()
print(fit.summary())
###Output
_____no_output_____
###Markdown
PredictionsThe ETS model can also be used for predicting. There are several different methods available:- `forecast`: makes out of sample predictions- `predict`: in sample and out of sample predictions- `simulate`: runs simulations of the statespace model- `get_prediction`: in sample and out of sample predictions, as well as prediction intervalsWe can use them on our previously fitted model to predict from 2014 to 2020.
###Code
pred = fit.get_prediction(start="2014", end="2020")
df = pred.summary_frame(alpha=0.05)
df
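# Width of the 95% prediction interval at each horizon, summarised from the
# columns produced above; an illustrative check of how the uncertainty grows
# with the forecast horizon.
interval_width = df["pi_upper"] - df["pi_lower"]
print(interval_width.tail())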
###Output
_____no_output_____
###Markdown
In this case the prediction intervals were calculated using an analytical formula. This is not available for all models. For these other models, prediction intervals are calculated by performing multiple simulations (1000 by default) and using the percentiles of the simulation results. This is done internally by the `get_prediction` method.We can also manually run simulations, e.g. to plot them. Since the data ranges until end of 2015, we have to simulate from the first quarter of 2016 to the first quarter of 2020, which means 17 steps.
###Code
simulated = fit.simulate(anchor="end", nsimulations=17, repetitions=100)
for i in range(simulated.shape[1]):
simulated.iloc[:, i].plot(label="_", color="gray", alpha=0.1)
df["mean"].plot(label="mean prediction")
df["pi_lower"].plot(linestyle="--", color="tab:blue", label="95% interval")
df["pi_upper"].plot(linestyle="--", color="tab:blue", label="_")
pred.endog.plot(label="data")
plt.legend()
###Output
_____no_output_____ |
Q1 PartA&B&C Codes/MiniProj_RNN_Adam_MSE_Q1_PartA_Pytorch.ipynb | ###Markdown
**Data Preprocessing**
###Code
DATA_DIR = "Beijing-Pollution-DataSet/"
from pandas import read_csv, DataFrame, concat
from datetime import datetime
from numpy import array
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.utils.data
import matplotlib.pyplot as plt
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
n_vars = 1 if type(data) is list else data.shape[1]
df = DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
# split a multivariate sequence into samples
def split_sequences(sequences, n_steps, n_samples=12000, start_from=0):
X, y = list(), list()
for i in range(start_from, (start_from + n_samples)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the dataset
# if end_ix > len(sequences):
# break
# gather input and output parts of the pattern
seq_x = sequences[i:end_ix, :]
seq_y = sequences[end_ix, 0]
y.append(seq_y)
X.append(seq_x)
return array(X), array(y)
# load dataset
DATA_DIR = "Beijing-Pollution-DataSet/"
data = np.load(DATA_DIR + 'polution_dataSet.npy')
scaled_data = data
# specify the number of lag hours
n_hours = 11
n_features = 8
# frame as supervised learning
# reframed = series_to_supervised(scaled_data, n_hours, 1)
# print("Reframed Shape: ", reframed.shape)
# # split into train and test sets
# values = reframed.values
# n_train_hours = 12000 #365 * 24
# train = values[:n_train_hours, :]
# test = values[n_train_hours:n_train_hours+3000, :]
# # split into input and outputs
# n_obs = n_hours * n_features
# train_X, train_y = train[:, :n_obs], train[:, -n_features]
# test_X, test_y = test[:, :n_obs], test[:, -n_features]
# print("Train X shape : => ", train_X.shape, len(train_X), ", Train y Shape :=> ", train_y.shape)
# # reshape input to be 3D [samples, timesteps, features]
# train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
# test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))
# print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# convert dataset into input/output
n_timesteps = 11
dataset = data
train_X, train_y = split_sequences(dataset, n_timesteps, n_samples=15000, start_from=0)
valid_X, valid_y = split_sequences(dataset, n_timesteps, n_samples=3000, start_from=15000)
test_loader_X = torch.utils.data.DataLoader(dataset=(train_X), batch_size=20, shuffle=False)
# train_X = torch.tensor(train_X, dtype=torch.float32)
# train_y = torch.tensor(train_y, dtype=torch.float32)
print("Train X Shape :=> ", train_X.shape)
print("Train Y Shape :=> ", train_y.shape)
print("####################################")
print("Test X Shape :=> ", valid_X.shape)
print("Test Y Shape :=> ", valid_y.shape)
class RNN(torch.nn.Module):
def __init__(self, n_features=8, n_output=1, seq_length=11, n_hidden_layers=233, n_layers=1):
super(RNN, self).__init__()
self.n_features = n_features
self.seq_len = seq_length
self.n_output = n_output
self.n_hidden = n_hidden_layers # number of hidden states
self.n_layers = n_layers # number of LSTM layers (stacked)
# define RNN with specified parameters
# bath_first means that the first dim of the input and output will be the batch_size
self.rnn = nn.RNN(input_size=self.n_features,
hidden_size=self.n_hidden,
num_layers=self.n_layers,
batch_first=True)
# last, fully connected layer
self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, self.n_output)
def forward(self, x, hidden):
# hidden_state = torch.zeros(self.n_layers, x.size(0), self.n_hidden).requires_grad_()
# cell_state = torch.zeros(self.n_layers, x.size(0), self.n_hidden).requires_grad_()
batch_size = x.size(0)
rnn_out, hidden = self.rnn(x, hidden)
# print(rnn_out.shape)
rnn_out = rnn_out.contiguous().view(batch_size, -1)
# lstm_out(with batch_first = True) is
# (batch_size,seq_len,num_directions * hidden_size)
# for following linear layer we want to keep batch_size dimension and merge rest
# .contiguous() -> solves tensor compatibility error
# x = lstm_out.contiguous().view(batch_size, -1)
out = self.l_linear(rnn_out)
return out, hidden
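# Quick shape sanity check on a dummy batch (illustrative only): a batch of
# 4 windows with 11 time steps and 8 features should yield one output per
# window and a hidden state of shape (n_layers, batch, n_hidden).
_check_model = RNN(n_features=8, n_output=1, seq_length=11, n_hidden_layers=233, n_layers=1)
_check_out, _check_hidden = _check_model(torch.zeros(4, 11, 8), None)
print(_check_out.shape, _check_hidden.shape)  # torch.Size([4, 1]) torch.Size([1, 4, 233])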
torch.manual_seed(13)
model = RNN(n_features=8, n_output=1, seq_length=11, n_hidden_layers=233, n_layers=1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0003)
model = model#.to(device)
criterion = criterion#.to(device)
for p in model.parameters():
print(p.numel())
import time
start_time = time.time()
hidden = None
hidden_test = None
epochs = 100
model.train()
batch_size = 200
running_loss_history = []
val_running_loss_history = []
for epoch in range(epochs):
running_loss = 0.0
val_running_loss = 0.0
model.train()
for b in range(0, len(train_X), batch_size):
inpt = train_X[b:b+batch_size, :, :]
target = train_y[b:b+batch_size]
# print("Input Shape :=> ", inpt.shape)
x_batch = torch.tensor(inpt, dtype=torch.float32)
y_batch = torch.tensor(target, dtype=torch.float32)
output, hidden = model(x_batch, hidden)
hidden = hidden.data
loss = criterion(output.view(-1), y_batch)
running_loss += loss.item()
loss.backward()
optimizer.step()
optimizer.zero_grad()
else:
        with torch.no_grad(): # temporarily sets all requires_grad flags to False
model.eval()
for b in range(0, len(valid_X), batch_size):
inpt = valid_X[b:b+batch_size, :, :]
target = valid_y[b:b+batch_size]
x_batch_test = torch.tensor(inpt, dtype=torch.float32)
y_batch_test = torch.tensor(target, dtype=torch.float32)
# model.init_hidden(x_batch_test.size(0))
output_test, hidden_test = model(x_batch_test, hidden_test)
hidden_test = hidden_test.data
loss_valid = criterion(output_test.view(-1), y_batch_test)
val_running_loss += loss_valid.item()
val_epoch_loss = val_running_loss / len(valid_X)
val_running_loss_history.append(val_epoch_loss)
epoch_loss = running_loss / len(train_X)
running_loss_history.append(epoch_loss)
print('step : ' , epoch , ' Train loss : ' , epoch_loss, ', Valid Loss : => ', val_epoch_loss)
print("***->>>-----------------------------------------------<<<-***")
total_time = time.time() - start_time
print("===========================================================")
print("*********************************************************")
print("The total Training Time is Equal with ==> : {0} Sec.".format(total_time))
print("*********************************************************")
print("===========================================================")
f, ax = plt.subplots(1, 1, figsize=(10, 7))
plt.title("Train & Valid Loss - RNN", fontsize=18)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.plot(running_loss_history, label='Train')
plt.plot(val_running_loss_history, label='Validation')
plt.legend()
plt.show()
test_x, test_y = split_sequences(dataset, n_timesteps, n_samples=100, start_from=20500)
model.eval()
test_x = torch.tensor(test_x, dtype=torch.float32)
test_y = torch.tensor(test_y, dtype=torch.float32)
res, hid = model(test_x, None)
loss_test = criterion(res.view(-1), test_y)
future = 100
window_size = 11
# preds = dataset[15000:15100, 0].tolist()
# print(len(preds))
# print(preds)
# for i in range (future):
# # seq = torch.FloatTensor(preds[-window_size:])
# with torch.no_grad():
# # seq = torch.tensor(seq, dtype=torch.float32).view(1, 11, 8)
# # model.hidden = (torch.zeros(1, 1, model.hidden_size),
# # torch.zeros(1, 1, model.hidden_size))
# preds.append(model(seq))
# print(preds[11:])
fig = plt.figure(figsize=(20, 7))
plt.title("Beijing Polution Prediction - RNN", fontsize=18)
plt.ylabel('Polution')
plt.xlabel('Num data')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
fig.autofmt_xdate()
plt.plot(test_y, label="Real")
plt.plot(res.detach().numpy(), label="Prediction")
plt.legend()
plt.show()
test_x, test_y = split_sequences(dataset, n_timesteps, n_samples=3000, start_from=18000)
model.eval()
test_running_loss = 0
with torch.no_grad(): # temporarily sets all requires_grad flags to False
model.eval()
for b in range(0, len(test_x), batch_size):
inpt = test_x[b:b+batch_size, :, :]
target = test_y[b:b+batch_size]
x_batch_test = torch.tensor(inpt, dtype=torch.float32)
y_batch_test = torch.tensor(target, dtype=torch.float32)
# model.init_hidden(x_batch_test.size(0))
output_test, hidden_test = model(x_batch_test, hidden_test)
hidden_test = hidden_test.data
loss_test = criterion(output_test.view(-1), y_batch_test)
test_running_loss += loss_test.item()
test_epoch_loss = test_running_loss / len(test_x)
print("##########################################################")
print(">>>>---------------------------------------------------<<<<")
print(">>>>----------***************************--------------<<<<")
print("**** Test Loss :==>>> ", test_epoch_loss)
print(">>>>----------***************************--------------<<<<")
print(">>>>---------------------------------------------------<<<<")
print("##########################################################")
# split a multivariate sequence into samples
def split_sequences12(sequences, n_steps, n_samples=12000, start_from=0):
X, y = list(), list()
j = 0
for i in range(start_from, (start_from + n_samples)):
# find the end of this pattern
end_ix = j*12 + n_steps + start_from
# check if we are beyond the dataset
# gather input and output parts of the pattern
j = j + 1
seq_x = sequences[end_ix-11:end_ix, :]
seq_y = sequences[end_ix, 0]
y.append(seq_y)
X.append(seq_x)
print("End :=> ", end_ix)
return array(X), array(y)
x_12, y_12 = split_sequences12(sequences=dataset, n_steps=11, n_samples=200, start_from=18000)
x_12 = torch.tensor(x_12, dtype=torch.float32)
x_12.shape
model.eval()
x_12 = x_12.clone().detach() #torch.tensor(x_12.clone().detach(), dtype=torch.float32)
res_12, hid = model(x_12, None)
fig = plt.figure(figsize=(12, 4))
plt.title("Beijing Polution Prediction", fontsize=18)
plt.ylabel('Polution')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
fig.autofmt_xdate()
# plt.plot(data[15000:15100, 0])
plt.plot(y_12, label="Real")
# plt.plot(preds[12:])
print(res_12.shape)
plt.plot(res_12.detach().numpy(), label="Prediction")
plt.legend()
plt.show()
df_y = DataFrame(y_12)
df_y.columns = ['Real Values']
df_y['Predicted Values'] = res_12.detach().numpy()
# dataset.index.name = 'date'
pd.set_option("max_rows", None)
df_y.to_csv('Predict_every12Hour_RNN_ADAM_MSE.csv')
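# A simple aggregate error for the 12-hourly predictions above (illustrative
# only): mean absolute error between the two columns of df_y.
mae_12h = np.mean(np.abs(df_y['Real Values'].values - df_y['Predicted Values'].values))
print("MAE on 12-hourly predictions:", mae_12h)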
df_y
###Output
_____no_output_____ |
d200316_tf/ac_tf_input.ipynb | ###Markdown
Investigating AC TF Input filesThis notebook investigates the TF Input files that were created during the sprint at MPIK on 24/03/2020. There seems to be a problem with the scaling during the transformation from TF Input file to TF file. I will begin by checking the TF input files against the old TF Input file for the module
###Code
import fitsio
from target_calib import TFInputArrayReader
import numpy as np
from matplotlib import pyplot as plt
%matplotlib widget
paths = {
"old": "/Users/Jason/Downloads/tempdata/CHEC-S_tf_data/SN0067/TFInput_File_SN0067_180213.tcal",
"T20": "/Users/Jason/Downloads/tempdata/mpik_tf/tfinputs/TFInput_File_SN0067_20.tcal",
"T25": "/Users/Jason/Downloads/tempdata/mpik_tf/tfinputs/TFInput_File_SN0067_25.tcal",
"T30": "/Users/Jason/Downloads/tempdata/mpik_tf/tfinputs/TFInput_File_SN0067_30.tcal",
"T35": "/Users/Jason/Downloads/tempdata/mpik_tf/tfinputs/TFInput_File_SN0067_35.tcal",
"T40": "/Users/Jason/Downloads/tempdata/mpik_tf/tfinputs/TFInput_File_SN0067_40.tcal",
"T45": "/Users/Jason/Downloads/tempdata/mpik_tf/tfinputs/TFInput_File_SN0067_45.tcal"
}
def read_tf_fitsio(path):
with fitsio.FITS(path) as file:
header = file[0].read_header()
n_pixels = int(header['TM'] * header['PIX'])
n_cells = int(header['CELLS'])
n_amplitudes = int(header['PNTS'])
data = file["DATA"].read(columns="CELLS").reshape((n_pixels, n_cells, n_amplitudes))
amplitudes = file["AMPLITUDES"].read(columns="CELLS").astype('float64')
return data, amplitudes
fig, ax = plt.subplots()
for key, path in paths.items():
tf, amplitudes = read_tf_fitsio(path)
ax.plot(amplitudes, tf[0, 0], label=key)
ax.set_xlabel("Input Amplitude (V)")
ax.set_ylabel("Pedestal-subtracted ADC")
ax.legend(loc='best')
###Output
/Users/jason/opt/anaconda3/envs/cta/lib/python3.7/site-packages/ipykernel_launcher.py:1: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
"""Entry point for launching an IPython kernel.
###Markdown
The input amplitude appears to be incorrectly calibrated by a scale factor. The lab setup should be checked (e.g. load setting for the pulse generator). Check to see if TargetCalib can be used instead of fitsio
###Code
def read_tf_targetcalib(path):
    reader = TFInputArrayReader(path)
tf = np.array(reader.GetTFInput())
amplitudes = np.array(reader.GetAmplitudes())
return tf, amplitudes
fig, ax = plt.subplots()
for key, path in paths.items():
    tf, amplitudes = read_tf_targetcalib(path)
ax.plot(amplitudes, tf[0, 0], label=key)
ax.set_xlabel("Input Amplitude (V)")
ax.set_ylabel("Pedestal-subtracted ADC")
ax.legend(loc='best')
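# Rough estimate of the apparent amplitude scale factor between the old
# calibration file and one of the new ones (illustrative only; assumes both
# files tabulate the same number of amplitude points):
_, amp_old = read_tf_fitsio(paths["old"])
_, amp_new = read_tf_fitsio(paths["T40"])
if amp_old.size == amp_new.size:
    scale = np.polyfit(amp_new, amp_old, 1)[0]
    print("Approximate scale factor (old vs new):", scale)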
###Output
_____no_output_____ |
Python For Finance Risk And Return/07 - Beta.ipynb | ###Markdown
Beta- Beta is a measure of a stock's volatility in relation to the overall market.- S&P 500 Index has a beta of 1.0- High-beta stocks are supposed to be riskier but provide higher return potential.- Low-beta stocks pose less risk but also lower returns. Formula- $Beta = \frac{Covariance}{Variance}$ Interpretation- Beta above 1: stock is more volatile than the market, but expects higher return- Beta below 1: stock with lower volatility, and expects less return Resources- Beta https://www.investopedia.com/investing/beta-know-risk/
###Code
import numpy as np
import pandas_datareader as pdr
import datetime as dt
import pandas as pd
from sklearn.linear_model import LinearRegression
tickers = ['AAPL', 'MSFT', 'TWTR', 'IBM', '^GSPC']
start = dt.datetime(2015, 12, 1)
end = dt.datetime(2021, 1, 1)
data = pdr.get_data_yahoo(tickers, start, end, interval="m")
data = data['Adj Close']
log_returns = np.log(data/data.shift())
log_returns
cov = log_returns.cov()
var = log_returns['^GSPC'].var()
var
cov.loc['AAPL', '^GSPC']/var
cov.loc['^GSPC']/var
X = log_returns['^GSPC'].iloc[1:].to_numpy().reshape(-1, 1)
Y = log_returns['AAPL'].iloc[1:].to_numpy().reshape(-1, 1)
lin_regr = LinearRegression()
lin_regr.fit(X, Y)
lin_regr.coef_[0, 0]
import matplotlib.pyplot as plt
%matplotlib notebook
fig, ax = plt.subplots()
ax.scatter(X, Y)
###Output
_____no_output_____ |
notebooks/1-basics/PY0101EN-1-3-Expressions.ipynb | ###Markdown
Expression and Variables in Python Welcome! This notebook will teach you the basics of the Python programming language. By the end of this notebook, you'll know to interpret variables and solve expressions by applying mathematical operations. Table of Contents Expressions and Variables Expressions Exercise: Expressions Variables Exercise: Expression and Variables in Python Estimated time needed: 10 min Expression and Variables Expressions Expressions in Python can include operations among compatible types (e.g., integers and floats). For example, basic arithmetic operations like adding multiple numbers:
###Code
# Addition operation expression
43 + 60 + 16 + 41
###Output
_____no_output_____
###Markdown
We can perform subtraction operations using the minus operator. In this case the result is a negative number:
###Code
# Subtraction operation expression
50 - 60
###Output
_____no_output_____
###Markdown
We can do multiplication using an asterisk:
###Code
# Multiplication operation expression
5 * 5
###Output
_____no_output_____
###Markdown
We can also perform division with the forward slash:
###Code
# Division operation expression
25 / 5
# Division operation expression
25 / 6
###Output
_____no_output_____
###Markdown
As shown above, 25 / 6 does not produce a whole number. We can use the double slash for integer division, where the result is rounded down to the nearest whole number (floor division):
###Code
# Integer division operation expression
25 // 5
# Integer division operation expression
25 // 6
###Output
_____no_output_____
###Markdown
Exercise: Expression Let's write an expression that calculates how many hours there are in 160 minutes:
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:160/60 Or 160//60--> Python follows well accepted mathematical conventions when evaluating mathematical expressions. In the following example, Python adds 30 to the result of the multiplication (i.e., 120).
###Code
# Mathematical expression
30 + 2 * 60
###Output
_____no_output_____
###Markdown
And just like mathematics, expressions enclosed in parentheses have priority. So the following multiplies 32 by 60.
###Code
# Mathematical expression
(30 + 2) * 60
###Output
_____no_output_____
###Markdown
Variables Just like with most programming languages, we can store values in variables, so we can use them later on. For example:
###Code
# Store value into variable
x = 43 + 60 + 16 + 41
###Output
_____no_output_____
###Markdown
To see the value of x in a Notebook, we can simply place it on the last line of a cell:
###Code
# Print out the value in variable
x
###Output
_____no_output_____
###Markdown
We can also perform operations on x and save the result to a new variable:
###Code
# Use another variable to store the result of the operation between variable and value
y = x / 60
y
###Output
_____no_output_____
###Markdown
If we save a value to an existing variable, the new value will overwrite the previous value:
###Code
# Overwrite variable with new value
x = x / 60
x
###Output
_____no_output_____
###Markdown
It's a good practice to use meaningful variable names, so you and others can read the code and understand it more easily:
###Code
# Name the variables meaningfully
total_min = 43 + 42 + 57 # Total length of albums in minutes
total_min
# Name the variables meaningfully
total_hours = total_min / 60 # Total length of albums in hours
total_hours
###Output
_____no_output_____
###Markdown
In the cells above we added the length of three albums in minutes and stored it in total_min. We then divided it by 60 to calculate total length total_hours in hours. You can also do it all at once in a single expression, as long as you use parenthesis to add the albums length before you divide, as shown below.
###Code
# Complicated expression
total_hours = (43 + 42 + 57) / 60 # Total hours in a single expression
total_hours
###Output
_____no_output_____
###Markdown
If you'd rather have total hours as an integer, you can of course replace the floating point division with integer division (i.e., //), as sketched in the next cell. Exercise: Expression and Variables in Python What is the value of x where x = 3 + 2 * 2?
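A minimal sketch of that integer-division variant, reusing the album lengths from above:
###Code
# Total length of albums in whole hours, using floor division
total_hours_int = (43 + 42 + 57) // 60 # 142 // 60 = 2
###Output
_____no_output_____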
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:7--> What is the value of y where y = (3 + 2) * 2?
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- Your answer is below:10--> What is the value of z where z = x + y?
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
###Output
_____no_output_____ |
modelLiteMaker.ipynb | ###Markdown
###Code
!pip install tflite_model_maker
!unzip /content/PlantDoc.v1-resize-416x416.tfrecord\ \(1\).zip
# Model Maker imports for the EfficientDet-Lite object detector
from tflite_model_maker import model_spec
from tflite_model_maker import object_detector
from tflite_model_maker.config import ExportFormat
tspec = model_spec.get('efficientdet_lite4')
!git clone https://github.com/tensorflow/models.git
%cd /content/models/research/object_detection/
!protoc object_detection/protos/*.proto --python_out=.
# Install TensorFlow Object Detection API.
!cp object_detection/packages/tf2/setup.py .
!python -m pip install --use-feature=2020-resolver .
from utils import label_map_util
category_index = label_map_util.get_label_map_dict('/content/train/leaves_label_map.pbtxt', use_display_name=True)
inv_map = {v: k for k, v in category_index.items()}
# Load the training TFRecord (3000 examples) with the inverted label map
train = object_detector.DataLoader('/content/train/leaves.tfrecord', 3000, inv_map)
!unzip /content/PlantDoc.v1-resize-416x416.tfrecord.zip
# Fine-tune the whole EfficientDet-Lite4 model, then export it in TFLite format
model = object_detector.create(train, model_spec=tspec, epochs=10, batch_size=8, train_whole_model=True)
model.export(export_dir='/content', export_format=[ExportFormat.TFLITE])
###Output
_____no_output_____ |
Landry_collab/4_Landry_VAMPIRE_workflow-regiondependency.ipynb | ###Markdown
VAMPIRE WORKFLOW Purpose: To split tile scans, pick training and testing image sets, and in the future run the full VAMPIRE workflow *Step 1: Import necessary packages*
###Code
import shutil, os
from glob import glob
import numpy as np
import pandas as pd
from skimage import io
import matplotlib.pyplot as plt
from PIL import Image
from numpy.linalg import inv
import image_slicer
from sklearn.model_selection import train_test_split
%matplotlib inline
###Output
_____no_output_____
###Markdown
*Step 2: User Inputs* Manual Steps:1. Move/Download the images for testing and training into a new folder2. Rename images to ensure they include the condition somewhere in them3. Add a folder for each of your stains into the folder created in step 14. Input the name of that folder into 'folder_location' below5. Input the names of the nuclear stain into 'stain1' and the cell stain into 'stain2' below6. Insert your testing conditions into 'condition1' and 'condition2' below7. Insert the number of slices that you want to split each image into in 'slice_number'8. Add a folder labeled 'train' to your desktop9. Add a folder labeled 'test' to your desktop10. Within folder 'test' create a folder for each of your conditions
###Code
#file names should be in the current working directory
folder_location = '/Users/hhelmbre/Desktop/Kate_images'
stain1 = 'dapi'
stain2 = 'iba'
#conditions are our four regions
conditions = np.arange(1, 15, 1)
conditions
file_type_init = '.tif'
file_type_new = '.png'
slice_number = 4
random_state_num = 12
def folder_cleaner(folder, image_type):
k=0
for files in folder:
if image_type in str(files):
k+=1
else:
folder = np.delete(folder, np.argwhere(folder == str(files)))
return folder
###Output
_____no_output_____
###Markdown
*Step 3: Split the image(s) into separate stain channels*
###Code
arr = os.listdir(folder_location)
file_list = np.asarray(arr)
file_list = folder_cleaner(file_list, file_type_init)
file_list
for files in file_list:
im=io.imread(str(folder_location + '/' + files))
channel1 = im[0, :, :]
channel2= im[1, :, :]
filename = files.replace(file_type_init, "")
channel1 = Image.fromarray(np.uint16(channel1))
channel1.save(str(folder_location + '/' + filename + '_' + stain1 + file_type_new))
channel2 = Image.fromarray(np.uint16(channel2))
channel2.save(str(folder_location + '/' + filename + '_' + stain2 + file_type_new))
###Output
_____no_output_____
###Markdown
*Step 4: Slice the images into tiles*
###Code
arr = os.listdir(folder_location)
file_list = np.asarray(arr)
file_list = folder_cleaner(file_list, file_type_new)
for files in file_list:
image_slicer.slice(str(folder_location + '/' + files), slice_number)
###Output
_____no_output_____
###Markdown
*Step 5: Move the DAPI and Iba images into their own folders*
###Code
arr = os.listdir(folder_location)
file_list1 = np.asarray(arr)
file_list1 = folder_cleaner(file_list1, file_type_new)
for tiled_images in file_list1:
conditional = str(str(tiled_images)[-5].isdigit())
if conditional == 'True':
if stain1 in tiled_images:
shutil.move(str(folder_location + '/' + tiled_images), str(folder_location + '/' + stain1 + '/' + tiled_images))
elif stain2 in tiled_images:
shutil.move(str(folder_location + '/' + tiled_images), str(folder_location + '/' + stain2 + '/' + tiled_images))
else:
pass
###Output
_____no_output_____
###Markdown
*Step 6: Choose training and testing data sets*
###Code
arr = os.listdir(str(folder_location + '/' + stain1))
file_list_train = np.asarray(arr)
file_list_train = folder_cleaner(file_list_train, file_type_new)
X_train, X_test= train_test_split(file_list_train, test_size=0.20, random_state=random_state_num)
###Output
_____no_output_____
###Markdown
*Step 7: Move the testing and training DAPI data sets into test and train folders*
###Code
for names in file_list_train:
if names in X_train[:]:
shutil.move(str(folder_location + '/'+ stain1 + '/' + names), '/Users/hhelmbre/Desktop/train')
else:
shutil.move(str(folder_location + '/' + stain1 + '/' + names), '/Users/hhelmbre/Desktop/test')
###Output
_____no_output_____
###Markdown
*Step 8: Rename the DAPI and Iba datasets according to the VAMPIRE naming convention*
###Code
arr_train1 = os.listdir('/Users/hhelmbre/Desktop/train')
file_list_train1 = np.asarray(arr_train1)
file_list_train1 = folder_cleaner(file_list_train1, file_type_new)
arr_stain2 = os.listdir(str(folder_location + '/' + stain2))
file_list_stain2 = np.asarray(arr_stain2)
file_list_stain2 = folder_cleaner(file_list_stain2, file_type_new)
im_number= 1
for names in file_list_train1:
dapi_name = str(names)
if im_number < 10:
os.rename(str('/Users/hhelmbre/Desktop/train/' + names), str('/Users/hhelmbre/Desktop/train/' + 'xy' + '0' + str(im_number) + 'c2.png'))
else:
os.rename(str('/Users/hhelmbre/Desktop/train/' + names), str('/Users/hhelmbre/Desktop/train/' + 'xy' + str(im_number) + 'c2.png'))
iba_name = dapi_name.replace(stain1, stain2)
if im_number < 10:
os.rename(str(folder_location + '/' + stain2 + '/' + iba_name), str('/Users/hhelmbre/Desktop/train/' + 'xy' + '0' + str(im_number) + 'c1.png'))
else:
os.rename(str(folder_location + '/' + stain2 + '/' + iba_name), str('/Users/hhelmbre/Desktop/train/' + 'xy' + str(im_number) + 'c1.png'))
im_number +=1
###Output
_____no_output_____
###Markdown
*Step 9: Split the test group into the appropriate conditions*
###Code
arr_test = os.listdir('/Users/hhelmbre/Desktop/test')
file_list_test = np.asarray(arr_test)
file_list_test = folder_cleaner(file_list_test, file_type_new)
for test_images in file_list_test:
if condition1 in test_images:
shutil.move(str('/Users/hhelmbre/Desktop/test/' + test_images), str('/Users/hhelmbre/Desktop/test/' + condition1 + '/' + test_images))
elif condition2 in test_images:
shutil.move(str('/Users/hhelmbre/Desktop/test/' + test_images), str('/Users/hhelmbre/Desktop/test/' + condition2 + '/' + test_images))
elif condition3 in test_images:
shutil.move(str('/Users/hhelmbre/Desktop/test/' + test_images), str('/Users/hhelmbre/Desktop/test/' + condition3 + '/' + test_images))
elif condition4 in test_images:
shutil.move(str('/Users/hhelmbre/Desktop/test/' + test_images), str('/Users/hhelmbre/Desktop/test/' + condition4 + '/' + test_images))
else:
pass
###Output
_____no_output_____
###Markdown
*Step 10: Rename the test images and collect their corresponding Iba images*
###Code
arr_test_condition1 = os.listdir(str('/Users/hhelmbre/Desktop/test/' + condition1))
file_list_test_condition1 = np.asarray(arr_test_condition1)
file_list_test_condition1 = folder_cleaner(file_list_test_condition1, file_type_new)
arr_test_condition2 = os.listdir(str('/Users/hhelmbre/Desktop/test/' + condition2))
file_list_test_condition2 = np.asarray(arr_test_condition2)
file_list_test_condition2 = folder_cleaner(file_list_test_condition2, file_type_new)
arr_test_condition3 = os.listdir(str('/Users/hhelmbre/Desktop/test/' + condition3))
file_list_test_condition3 = np.asarray(arr_test_condition3)
file_list_test_condition3 = folder_cleaner(file_list_test_condition3, file_type_new)
arr_test_condition4 = os.listdir(str('/Users/hhelmbre/Desktop/test/' + condition4))
file_list_test_condition4 = np.asarray(arr_test_condition4)
file_list_test_condition4 = folder_cleaner(file_list_test_condition4, file_type_new)
im_number = 1
for names in file_list_test_condition1:
dapi_name = str(names)
if im_number < 10:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition1 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition1 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
else:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition1 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition1 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
iba_name = dapi_name.replace(stain1, stain2)
if im_number < 10:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition1 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
else:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition1 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
im_number +=1
im_number= 1
for names in file_list_test_condition2:
dapi_name = str(names)
if im_number < 10:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition2 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition2 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
else:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition2 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition2 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
iba_name = dapi_name.replace(stain1, stain2)
if im_number < 10:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition2 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
else:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition2 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
im_number +=1
im_number= 1
for names in file_list_test_condition3:
dapi_name = str(names)
if im_number < 10:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition3 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition3 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
else:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition3 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition3 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
iba_name = dapi_name.replace(stain1, stain2)
if im_number < 10:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition3 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
else:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition3 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
im_number +=1
im_number= 1
for names in file_list_test_condition4:
dapi_name = str(names)
if im_number < 10:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition4 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition4 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
else:
os.rename(str('/Users/hhelmbre/Desktop/test/' + condition4 + '/'+ names), str('/Users/hhelmbre/Desktop/test/' + condition4 + '/' + 'xy' + '0' + str(im_number) + 'c2.png'))
iba_name = dapi_name.replace(stain1, stain2)
if im_number < 10:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition4 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
else:
os.rename(str(folder_location + '/' + stain2 + '/'+ iba_name), str('/Users/hhelmbre/Desktop/test/' + condition4 + '/' + 'xy' + '0' + str(im_number) + 'c1.png'))
im_number +=1
###Output
_____no_output_____ |
tests/integration/common/test_notebook.ipynb | ###Markdown
Test Notebook
###Code
print('Hello World')
###Output
_____no_output_____ |
Tocic Comment Classification.ipynb | ###Markdown
Imports
###Code
# Import required modules
import numpy as np
import pandas as pd
import re
import string
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
import nltk
from nltk.tokenize import sent_tokenize,word_tokenize
from nltk.corpus import stopwords
from tqdm import tqdm
nltk.download('punkt')
nltk.download('stopwords')
stop = stopwords.words('english')
from sklearn import feature_extraction, model_selection, naive_bayes, pipeline, manifold, preprocessing, metrics
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from tensorflow import keras
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, GRU, Embedding, Dropout, Activation, BatchNormalization, SpatialDropout1D, CuDNNLSTM
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model, Sequential
from keras import initializers, regularizers, constraints, optimizers, layers, callbacks
# Import Data
from google.colab import drive
drive.mount('/content/drive')
# Change PATH to folder
DATA_PATH = "drive/MyDrive/nlp_project/"
train = pd.read_csv(DATA_PATH + "train.csv")
test = pd.read_csv(DATA_PATH + "test.csv")
test_labels = pd.read_csv(DATA_PATH + "test_labels.csv")
###Output
Mounted at /content/drive
###Markdown
Dataset Exploration
###Code
train.head()
# Example comment
train["comment_text"].values[0]
print("Trining data shape:", train.shape)
print("Testing data shape:",test.shape)
# Check for NaNs in the training data
train.isnull().any()
# Check for NaNs in the testing data
test.isnull().any()
###Output
_____no_output_____
###Markdown
Merging Test Files and Removing rows with -1 labels
###Code
# Merge test data with test labels and drop all rows with label as -1
concatenated_test = pd.merge(test, test_labels)
concat_cols = concatenated_test[ (concatenated_test['toxic'] == -1) & (concatenated_test['severe_toxic'] == -1) & (concatenated_test['obscene'] == -1) & (concatenated_test['threat'] == -1) & (concatenated_test['insult'] == -1) & (concatenated_test['identity_hate'] == -1)].index
test = concatenated_test.drop(concat_cols, inplace = False)
test.drop(['id'], inplace = True, axis = 1)
test_y = test[['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']].copy()
###Output
_____no_output_____
###Markdown
Data Preprocessing: Declaring the Punctuation Bank
###Code
punctuations = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
###Output
_____no_output_____
###Markdown
Method to Remove Numbers
###Code
def number_cleaning(x):
if bool(re.search(r'\d', x)):
x = re.sub('[0-9]{5,}', '#####', x)
x = re.sub('[0-9]{4}', '####', x)
x = re.sub('[0-9]{3}', '###', x)
x = re.sub('[0-9]{2}', '##', x)
return x
###Output
_____no_output_____
###Markdown
Method to clean text of punctuations
###Code
def text_cleaning(x):
x = str(x).lower()
for punctuation in punctuations:
if punctuation in x:
x = x.replace(punctuation, '')
return x
# Applying the preprocessing functions on both training and testing set
train['comment_text'] = train['comment_text'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)]))
train['comment_text'] = train['comment_text'].apply(text_cleaning)
train['comment_text'] = train['comment_text'].apply(number_cleaning)
test['comment_text'] = test['comment_text'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)]))
test['comment_text'] = test['comment_text'].apply(text_cleaning)
test['comment_text'] = test['comment_text'].apply(number_cleaning)
###Output
_____no_output_____
###Markdown
Baseline Model Using TF-IDF for word/sentence embedding
###Code
tfidf_vectorizer = TfidfVectorizer(max_features=9000, ngram_range=(1,2))
train_X = tfidf_vectorizer.fit_transform(train['comment_text'])
test_X = tfidf_vectorizer.transform(test['comment_text'])  # reuse the vocabulary fitted on the training data
###Output
_____no_output_____
###Markdown
Model Building & Training - Logistic Regression
###Code
logistic_toxic = LogisticRegression(random_state=0)
logistic_toxic.fit(train_X,train['toxic'])
logistic_severetoxic = LogisticRegression(random_state=0)
logistic_severetoxic.fit(train_X, train['severe_toxic'])
logistic_obscene = LogisticRegression(random_state=0)
logistic_obscene.fit(train_X, train['obscene'])
logistic_threat = LogisticRegression(random_state=0)
logistic_threat.fit(train_X, train['threat'])
logistic_insult = LogisticRegression(random_state=0)
logistic_insult.fit(train_X, train['insult'])
logistic_identityhate = LogisticRegression(random_state=0)
logistic_identityhate.fit(train_X, train['identity_hate'])
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
logistic_toxic_predicted = logistic_toxic.predict(test_X)
logistic_severetoxic_predicted = logistic_severetoxic.predict(test_X)
logistic_obscene_predicted = logistic_obscene.predict(test_X)
logistic_threat_predicted = logistic_threat.predict(test_X)
logistic_insult_predicted = logistic_insult.predict(test_X)
logistic_identityhate_predicted = logistic_identityhate.predict(test_X)
stk1 = np.column_stack((logistic_toxic_predicted,logistic_severetoxic_predicted))
stk2 = np.column_stack((stk1, logistic_obscene_predicted))
stk3 = np.column_stack((stk2, logistic_threat_predicted))
stk4 = np.column_stack((stk3, logistic_insult_predicted))
stk5 = np.column_stack((stk4, logistic_identityhate_predicted))
###Output
_____no_output_____
###Markdown
Baseline Metrics
###Code
logistic_accuracy = metrics.accuracy_score(test_y, stk5)
logistic_precision = metrics.precision_score(test_y, stk5, average='macro')
logistic_f1score = metrics.f1_score(test_y, stk5, average='macro')
logistic_recall = metrics.recall_score(test_y, stk5, average='macro')
print("Accuracy:", logistic_accuracy)
print("Precision:", logistic_precision)
print("Recall:", logistic_recall)
print("F1 Score:", logistic_f1score)
###Output
Accuracy: 0.895589108756135
Precision: 0.03573210963982707
Recall: 0.001642134665125069
F1 Score: 0.0031046445885034826
###Markdown
Proposed Models
###Code
# Helper Function
def print_graph(history):
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
def convertToBinary(x):
if x >= 0.5:
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Model 1 - Keras Inbuilt Embedding + Single Layer LSTM Model 2 - Keras Inbuilt Embedding + Single Layer Bidirectional LSTM
###Code
tokenizer = Tokenizer(num_words=20000) #maximum features
tokenizer.fit_on_texts(list(train['comment_text']))
train_x_tokenized = tokenizer.texts_to_sequences(train["comment_text"])
test_x_tokenized = tokenizer.texts_to_sequences(test["comment_text"])
train_X = pad_sequences(train_x_tokenized, maxlen=200)
test_X = pad_sequences(test_x_tokenized, maxlen=200)
labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
train_Y = train[labels].copy().to_numpy()
test_Y = test_y.to_numpy()
#Embedding Parameters
maximum_features = 20000
embedding_size = 128
###Output
_____no_output_____
###Markdown
Keras Inbuilt Embedding + Single Layer LSTM
###Code
def lstm_model_structure():
model = Sequential()
model.add(Embedding(maximum_features, embedding_size))
model.add(LSTM(60, return_sequences=True,name='lstm_layer'))
model.add(GlobalMaxPool1D())
model.add(Dropout(0.1))
model.add(Dense(50, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(6, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
lstm_model = lstm_model_structure()
lstm_model.summary()
history_lstm_model = lstm_model.fit(train_X,train_Y, batch_size=32, epochs=2, validation_split=0.1)
print_graph(history_lstm_model)
lstm_predicted = lstm_model.predict(test_X)
vector_func = np.vectorize(convertToBinary)
metrics.accuracy_score(test_Y, vector_func(lstm_predicted))
###Output
_____no_output_____
###Markdown
Keras Inbuilt Embedding + Single Layer Bidirectional LSTM
###Code
def bidirectional_model_structure():
model = Sequential()
model.add(Embedding(maximum_features, embedding_size))
model.add(Bidirectional(LSTM(60, return_sequences=True,name='bidirectional_lstm_layer')))
model.add(GlobalMaxPool1D())
model.add(Dropout(0.1))
model.add(Dense(50, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(6, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
bidirectional_lstm_model = bidirectional_model_structure()
bidirectional_lstm_model.summary()
history_bidirectional_lstm_model = bidirectional_lstm_model.fit(train_X,train_Y, batch_size=32, epochs=2, validation_split=0.1)
print_graph(history_bidirectional_lstm_model)
bidirectional_lstm_predicted = bidirectional_lstm_model.predict(test_X)
metrics.accuracy_score(test_Y, vector_func(bidirectional_lstm_predicted))
###Output
_____no_output_____
###Markdown
Model 3 - Glove Embedding + Bidirectional LSTM
###Code
# Load GloVe Embeddings
def load_glove_index():
EMBEDDING_FILE = DATA_PATH + 'glove.840B.300d.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')[:300]
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE))
return embeddings_index
embeddings_index = load_glove_index()
print("Words in GloVe: ", len(embeddings_index))
embed_size = 300
max_features = 20000
maxlen = 100
X_train = pad_sequences(train_x_tokenized, maxlen=maxlen)
X_test = pad_sequences(test_x_tokenized, maxlen=maxlen)
y_train = train_Y
y_test = test_Y
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
emb_mean,emb_std
word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=True)(inp)
x = Bidirectional(LSTM(50, return_sequences=True,dropout=0.1, recurrent_dropout=0.1))(x)
x = Bidirectional(LSTM(50, return_sequences=True,dropout=0.1, recurrent_dropout=0.1))(x)
x = GlobalMaxPool1D()(x)
x = BatchNormalization()(x)
x = Dense(50, activation="relu")(x)
#x = BatchNormalization()(x)
x = Dropout(0.1)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
import keras.backend as K
def loss(y_true, y_pred):
return K.binary_crossentropy(y_true, y_pred)
model.compile(loss=loss, optimizer='nadam', metrics=['accuracy'])
def schedule(ind):
a = [0.002,0.003, 0.01]
return a[ind]
lr = callbacks.LearningRateScheduler(schedule)
import tensorflow as tf
y_train = tf.cast(y_train, tf.float32)
history = model.fit(X_train, y_train, batch_size=256,validation_split=0.2, epochs=3, callbacks=[lr])
model.save(DATA_PATH + "glove_bilstm")
model_load = keras.models.load_model(DATA_PATH + "glove_bilstm", custom_objects={"loss":loss})
y_test = tf.cast(y_test, tf.float32)
model_load.evaluate(X_test, y_test, batch_size=256)
print_graph(history)
###Output
_____no_output_____ |
ConvNet_Sinais_PyTorch.ipynb | ###Markdown
Part 1: Reading and Preparing the Data. Importing Libraries. In the code cell below we import all the main libraries (Python modules) that we will use in this exercise.
###Code
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
Reading the Data. The code below loads the input images and their corresponding desired output categories.
###Code
# Baixa as entradas X.npy
!gdown https://drive.google.com/uc?id=1oSRay8phFA91RJoGH0tMmj86LBovKj73
# Baixa as saídas desejadas Y.npy
!gdown https://drive.google.com/uc?id=1_BQLcsgcYYsubtv4M80BVm4BEknrTOr7
# Leitura dos dados
X = np.load('X.npy')
Y = np.load('Y.npy')
# Reordena as categorias na ordem correta
# (por motivo que desconheço, os dados
# originais estavam com as classes fora
# de ordem -- consistentes e organizadas,
# mas fora de ordem)
cats = [9,0,7,6,1,8,4,3,2,5]
Y[:,cats] = Y[:,range(10)]
###Output
_____no_output_____
###Markdown
Shuffling and Splitting the Data. Next we shuffle the samples, keeping the correspondence between inputs and their respective desired outputs, and then set aside part of the samples for training and another part for validation.
###Code
def split_and_shuffle(X, Y, perc = 0.1):
''' Esta função embaralha os pares de entradas
e saídas desejadas, e separa os dados de
treinamento e validação
'''
# Total de amostras
tot = len(X)
# Emabaralhamento dos índices
indexes = np.arange(tot)
np.random.shuffle(indexes)
# Calculo da quantidade de amostras de
# treinamento
n = int((1 - perc)*tot)
Xt = X[indexes[:n]]
Yt = Y[indexes[:n]]
Xv = X[indexes[n:]]
Yv = Y[indexes[n:]]
return Xt, Yt, Xv, Yv
# Aqui efetivamente realizamos a separação
# e embaralhamento
Xt, Yt, Xv, Yv = split_and_shuffle(X, Y)
# Transforma os arrays do NumPy em
# tensores do PyTorch
Xt = torch.from_numpy(Xt)
Yt = torch.from_numpy(Yt)
Xv = torch.from_numpy(Xv)
Yv = torch.from_numpy(Yv)
# Adiciona dimensão dos canais
# (único canal, imagem monocromática)
Xt = Xt.unsqueeze(1)
Xv = Xv.unsqueeze(1)
print('Dados de treinamento:')
print('Xt', Xt.size(), 'Yt', Yt.size())
print()
print('Dados de validação:')
print('Xv', Xv.size(), 'Yv', Yv.size())
###Output
Dados de treinamento:
Xt torch.Size([1855, 1, 64, 64]) Yt torch.Size([1855, 10])
Dados de validação:
Xv torch.Size([207, 1, 64, 64]) Yv torch.Size([207, 10])
###Markdown
Inspecting the Data. Now we display a few samples of the data to check that the preparation done so far is still consistent.
###Code
def show_sample(X, Y, n=3):
''' Essa função mostra algumas
amostras aleatórias
'''
for i in range(n):
k = np.random.randint(0,len(X))
print('Mostrando', int(torch.argmax(Y[k,:])))
plt.imshow(X[k,0,:,:], cmap='gray')
plt.show()
show_sample(Xt, Yt)
###Output
Mostrando 6
###Markdown
Part 2: Designing the Neural Network. For this first part of the exercise you will implement a convolutional neural network as shown in the figure below. First examine the figure carefully, trying to understand each stage of the network. It is very similar to the network we implemented in class, available [here](https://colab.research.google.com/drive/1bT8jyS0qyScFLi_mA6c1Fbv9uixrNRO3?usp=sharing). Consider the formula below, where $w_i$ is the width of the input image, $p$ the padding size (if there is no padding, $p$=0), $k$ the kernel width and $s$ the stride. The formula computes the width $w_o$ of the output feature map after the convolution; the same formula can also be used for the height. $w_o = \frac{w_i + 2p - k}{s}+1$ In the network of the figure above, the layers are: 1. `conv1`: convolutional layer with a 6x6 kernel, 5 output channels, no padding, stride 2 and ReLU activation. 2. `pool1`: 2x2 _max-pooling_ layer with stride 2. 3. `conv2`: convolutional layer with a 3x3 kernel, 8 output channels, no padding, stride 1 and ReLU activation. 4. `drp1`: dropout of 25%. 5. `pool2`: 2x2 _max-pooling_ layer with stride 2. 6. `lin1`: feedforward layer that receives the flattened data and produces the outputs. The final activation is _softmax_, but it is implemented inside the loss function, so it does not need to be considered here. Based on the information and the figure above, and using the formula, considering that the input has 1 channel, width 64 and height 64 (1x64x64), define the values of `N1`, `N2`, `N3`, `N4`, `N5`, `N6`, `N7`, `N8`, `N9`, `N10`, `N11`, `N12` as indicated in the figure. Fill in the values in the code below.
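As a worked example of the formula applied to `conv1`: with $w_i = 64$, $p = 0$, $k = 6$ and $s = 2$ we get $w_o = \frac{64 + 0 - 6}{2} + 1 = 30$, so `conv1` outputs a 5x30x30 feature map, and the 2x2, stride-2 max-pooling of `pool1` then reduces it to 5x15x15.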
###Code
# Para cada uma das variáveis abaixo
# substitua None pelo valor inteiro
# correto.
N1 = 5
N2 = 30
N3 = 4500
N4 = 5
N5 = 15
N6 = 1125
N7 = 8
N8 = 13
N9 = 1352
N10 = 8
N11 = 6
N12 = 288
N13 = 10
###Output
_____no_output_____
###Markdown
Self-check of the code so far
###Code
ok.check('avalia01.py')
###Output
_____no_output_____
###Markdown
Part 3: Code for the Neural Network. Create below a class named `ConvNet`. This class must derive from `nn.Module`. If you are unsure how to start, review the code developed in class [here](https://colab.research.google.com/drive/1bT8jyS0qyScFLi_mA6c1Fbv9uixrNRO3?usp=sharing). In this class you will define a convolutional network with the following layers: 1. You will call the first layer `self.conv1`. It must receive the input image and apply a convolution with a 6x6 kernel and stride 2. The output must have 5 channels. 2. The second layer must be a _max-pooling_ layer over a 2x2 window with stride 2. You will call it `self.pool1`. 3. You will call the third layer `self.conv2`. It must be a convolution with a 3x3 kernel, producing 8 output channels. 4. Next you will take the output of the third layer and apply _dropout_ with p=25%. You will call this _dropout_ layer `self.drp1`. 5. After the _dropout_, add another _max-pooling_ layer identical to the one used in the second layer, with a 2x2 window and stride 2. You will call it `self.pool2`. 6. Now the data will be flattened. Add a _feed-forward_ layer named `self.lin1` that receives the flattened data and produces the outputs.
###Code
# Escreva aqui o código da classe que
# implementará sua rede neural
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(1, 5, kernel_size= 6, stride= 2) # 1x64x64 -> 5x30x30
self.pool1 = nn.MaxPool2d(2,2) # 5x30x30 -> 5x15x15
self.conv2 = nn.Conv2d(5, 8, kernel_size= 3) # 5x15x15 -> 8x13x13
self.drp1 = nn.Dropout2d(0.25) #1/4 do total dos neuronios serão dropados
self.pool2 = nn.MaxPool2d(2, 2) #8x6x6
self.lin1 = nn.Linear(288,10)
def forward(self, x):
x = self.conv1(x)
x = torch.relu(x)
x = self.pool1(x)
x = self.conv2(x)
x = self.drp1(x)
x = torch.relu(x)
x = self.pool2(x)
x = x.view(-1, 288)
x = self.lin1(x)
return x
###Output
_____no_output_____
###Markdown
The code cell below will create an object of the class you just wrote and print a summary of its layers. Check that the layers `conv1`, `pool1`, `conv2`, `drp1`, `pool2` and `lin1` are present, with the parameters requested in the statement. Remember that `conv1` and the _max-pooling_ layers `pool1` and `pool2` must use stride 2.
###Code
cnn = ConvNet()
print(cnn)
###Output
ConvNet(
(conv1): Conv2d(1, 5, kernel_size=(6, 6), stride=(2, 2))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(5, 8, kernel_size=(3, 3), stride=(1, 1))
(drp1): Dropout2d(p=0.25, inplace=False)
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(lin1): Linear(in_features=288, out_features=10, bias=True)
)
###Markdown
Self-check of the code so far
###Code
ok.check('avalia02.py')
###Output
_____no_output_____
###Markdown
Part 4: Training. Now you will implement the code to train the network. To make this easier, some parts of that code are already implemented below. The function `evaluate(x, y_hat)` will be used to check the accuracy of the network for a pair of inputs `x` and corresponding desired outputs `y_hat`. Note that the variable holding the network object must be named `cnn` for this function to work.
###Code
def evaluate(x, y_hat):
''' Calcula a acurácia da ConvNet (variável cnn)
para o par de entradas e saídas desejadas
x, y_hat. Aqui assume-se que y_hat está
originalmente no formato one-hot. Tanto
x quanto y_hat devem ser lotes, não amostras
individuais.
'''
y = cnn(x).argmax(dim=1)
y_hat = y_hat.argmax(dim=1)
return 100*float((y == y_hat).sum()) / len(y)
###Output
_____no_output_____
###Markdown
Below we create the object `opt`, which will be the Adam optimizer with learning rate 0.0001, and the cross-entropy loss function in the object `loss`.
###Code
opt = optim.Adam(cnn.parameters(), lr=0.0001)
loss = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Next we move the data and the network to the GPU so that training is a bit faster. This part is optional; it depends on having a GPU available with enough memory to hold all the data. Testing on Google Colab, I was able to allocate everything without problems.
###Code
# Movemos tudo para a GPU
# (essa parte é opcional)
gpu = torch.device("cuda:0")
cnn = cnn.to(gpu)
Xt = Xt.to(gpu, dtype=torch.float)
Yt = Yt.to(gpu, dtype=torch.long)
Xv = Xv.to(gpu, dtype=torch.float)
Yv = Yv.to(gpu, dtype=torch.long)
###Output
_____no_output_____
###Markdown
Now complete the code below yourself, filling in the missing commands in the indicated places according to the instructions.
###Code
# Laço de treinamento para 2001
# épocas
for j in range(2001):
# Faremos o treinamento em lotes de
# tamanho igual a 128 amostras
for i in range(0,len(Yt),128):
# Separa o lote de entradas
x = Xt[i:i+128,:,:,:]
# Separa o lote de saídas desejadas
# já transformando de one-hot para
# índice das colunas.
y_hat = Yt[i:i+128,:].argmax(dim=1)
# Zera o gradiente do otimizador
opt.zero_grad()
# Calcula a saída da rede neural
y = cnn(x)
# Calcula o erro
e = loss(y, y_hat)
# Calcula o gradiente usando
# backpropagation
e.backward()
# Realiza um passo de atualização
# dos parâmetros da rede neural
# usando o otimizador.
opt.step()
# A cada 200 épocas imprimimos o
# erro do último lote e a acurácia
# nos dados de treinamento
if not (j % 200):
print(float(e), evaluate(Xt, Yt))
###Output
2.3067433834075928 10.080862533692722
0.37847277522087097 83.07277628032345
0.25036564469337463 86.73854447439354
0.16156786680221558 89.81132075471699
0.12945868074893951 91.91374663072776
0.0885118618607521 93.09973045822102
0.05358064919710159 94.55525606469003
0.03123493865132332 95.14824797843666
0.05437462404370308 95.36388140161725
0.04947707802057266 95.95687331536388
0.02707962691783905 97.30458221024259
###Markdown
After training the network, we can turn off the _dropout_ layer and show the result on the validation data.
###Code
cnn.eval() # desliga dropout
# Não modifique essa célula.
ac = evaluate(Xv, Yv)
print('Acurácia de', ac, '%')
###Output
Acurácia de 91.30434782608695 %
###Markdown
Self-check of the code so far
###Code
ok.check('avalia03.py')
###Output
_____no_output_____
###Markdown
Part 5: Examining the Results. Finally, we can now evaluate the trained network on the validation data. The function below picks 5 random samples from the validation data and applies your network to them, showing the image, the computed output and the desired output.
###Code
def random_sample_cnn(X, Y):
''' Essa função testa a rede convolucional
mostrando a imagem de entrada, a saída
calculada, e a saída esperada, para
5 amostras aleatórias.
'''
for _ in range(5):
idx = np.random.randint(0, len(Yv))
x = Xv[idx:idx+1,:,:,:]
y = int(cnn(x).argmax(dim=1))
y_hat = int(Yv[idx:idx+1,:].argmax(dim=1))
print('y =', y, 'y_hat =', y_hat)
x = x.cpu()
plt.imshow(x[0,0,:,:], cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Below, finally, the results:
###Code
# Aqui examinamos alguns exemplos
# aleatórios nos dados de validação
random_sample_cnn(Xv, Yv)
###Output
y = 1 y_hat = 1
|
transformer-based-model-python-code-generator/src/END_NLP_CAPSTONE_PROJECT_English_Python_Code_Transformer_6_0.ipynb | ###Markdown
Model output with min_freq = 1, and with the encoder and decoder hidden dimension increased from 512 to 1024.
###Code
# ! pip install datasets transformers
from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
from io import BytesIO
from google.colab import drive
drive.mount('/content/drive')
! cp "/content/drive/My Drive/NLP/english_python_data_modified.txt" english_python_data_modified.txt
# ! cp '/content/drive/My Drive/NLP/cornell_movie_dialogs_corpus.zip' .
import torch
import torch.nn as nn
import torch.optim as optim
import torchtext
# from torchtext.data import Field, BucketIterator, TabularDataset
from torchtext.legacy.data import Field, BucketIterator, TabularDataset
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import spacy
import numpy as np
import random
import math
import time
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Downloading the File
###Code
import requests
import os
import datetime
!wget "https://drive.google.com/u/0/uc?id=1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO&export=download"
os.rename("uc?id=1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO&export=download","english_python_data.txt")
###Output
--2021-03-13 10:32:26-- https://drive.google.com/u/0/uc?id=1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO&export=download
Resolving drive.google.com (drive.google.com)... 74.125.137.102, 74.125.137.113, 74.125.137.100, ...
Connecting to drive.google.com (drive.google.com)|74.125.137.102|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://doc-14-3o-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/a1hgh24lqmhf6binukd63r1p38j51lkt/1615631475000/02008525212197398114/*/1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO?e=download [following]
Warning: wildcards not supported in HTTP.
--2021-03-13 10:32:27-- https://doc-14-3o-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/a1hgh24lqmhf6binukd63r1p38j51lkt/1615631475000/02008525212197398114/*/1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO?e=download
Resolving doc-14-3o-docs.googleusercontent.com (doc-14-3o-docs.googleusercontent.com)... 172.217.3.33, 2607:f8b0:4026:801::2001
Connecting to doc-14-3o-docs.googleusercontent.com (doc-14-3o-docs.googleusercontent.com)|172.217.3.33|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1122316 (1.1M) [text/plain]
Saving to: ‘uc?id=1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO&export=download’
uc?id=1rHb0FQ5z5Zpa 100%[===================>] 1.07M --.-KB/s in 0.009s
2021-03-13 10:32:27 (116 MB/s) - ‘uc?id=1rHb0FQ5z5ZpaY2HpyFGY6CeyDG0kTLoO&export=download’ saved [1122316/1122316]
###Markdown
Reading the File and Separating Out English Text and Python Code
###Code
# https://stackoverflow.com/questions/31786823/print-lines-between-two-patterns-in-python/31787181
fastq_filename = "english_python_data_modified.txt"
fastq = open(fastq_filename) # fastq is the file object
for line in fastq:
if line.startswith("#") or line.isalpha():
print(line.replace("@", ">"))
def generate_df(filename):
with open(filename) as file_in:
newline = '\n'
lineno = 0
lines = []
Question = []
Answer = []
Question_Ind =-1
mystring = "NA"
revised_string = "NA"
Initial_Answer = False
# you may also want to remove whitespace characters like `\n` at the end of each line
for line in file_in:
lineno = lineno +1
if line in ['\n', '\r\n']:
pass
else:
linex = line.rstrip() # strip trailing spaces and newline
# if string[0].isdigit()
if linex.startswith('# '): ## to address question like " # write a python function to implement linear extrapolation"
if Initial_Answer:
Answer.append(revised_string)
revised_string = "NA"
mystring = "NA"
Initial_Answer = True
Question.append(linex.strip('# '))
# Question_Ind = Question_Ind +1
elif linex.startswith('#'): ## to address question like "#24. Python Program to Find Numbers Divisible by Another Number"
linex = linex.strip('#')
# print(linex)
# print(f"amit:{len(linex)}:LineNo:{lineno}")
if (linex[0].isdigit()): ## stripping first number which is 2
# print("Amit")
linex = linex.strip(linex[0])
if (linex[0].isdigit()): ## stripping 2nd number which is 4
linex = linex.strip(linex[0])
if (linex[0]=="."):
linex = linex.strip(linex[0])
if (linex[0].isspace()):
linex = linex.strip(linex[0]) ## stripping out empty space
if Initial_Answer:
Answer.append(revised_string)
revised_string = "NA"
mystring = "NA"
Initial_Answer = True
Question.append(linex)
else:
# linex = '\n'.join(linex)
if (mystring == "NA"):
mystring = f"{linex}{newline}"
revised_string = mystring
# print(f"I am here:{mystring}")
else:
mystring = f"{linex}{newline}"
if (revised_string == "NA"):
revised_string = mystring
# print(f"I am here revised_string:{revised_string}")
else:
revised_string = revised_string + mystring
# print(f"revised_string:{revised_string}")
# Answer.append(string)
lines.append(linex)
Answer.append(revised_string)
return Question, Answer
Question, Answer = generate_df("english_python_data_modified.txt")
print(f"Length of Question:{len(Question)}")
print(f"Length of Answer:{len(Answer)}")
# Answer[0]
# num1 = 1.5\nnum2 = 6.3\nsum = num1 + num2\nprint(f'Sum: {sum}')\n\n\n
# with open("english_emp.txt") as file_in:
# newline = '\n'
# lines = []
# Question = []
# Answer = []
# Question_Ind =-1
# mystring = "NA"
# revised_string = "NA"
# Initial_Answer = False
# # you may also want to remove whitespace characters like `\n` at the end of each line
# for line in file_in:
# linex = line.rstrip() # strip trailing spaces and newline
# if linex.startswith('# '):
# if Initial_Answer:
# # print(f"Answer:{Answer}")
# Answer.append(revised_string)
# revised_string = "NA"
# mystring = "NA"
# Initial_Answer = True
# Question.append(linex.strip('# '))
# Question_Ind = Question_Ind +1
# else:
# # linex = '\n'.join(linex)
# if (mystring == "NA"):
# mystring = f"{linex}{newline}"
# revised_string = mystring
# # print(f"I am here:{mystring}")
# else:
# mystring = f"{linex}{newline}"
# if (revised_string == "NA"):
# revised_string = mystring
# # print(f"I am here revised_string:{revised_string}")
# else:
# revised_string = revised_string + mystring
# # print(f"revised_string:{revised_string}")
# # Answer.append(string)
# lines.append(linex)
# Answer.append(revised_string)
# print(Question[1])
## do some random check
print(f"Question[0]:\n{Question[0]}")
print(f"Answer[0]:\n{Answer[0]}")
## do some random check
print(f"Question[1]:\n {Question[1]}")
print(f"Answer[1]:\n {Answer[1]}")
## do some random check
print(f"Question[4849]:\n{Question[4849]}")
print(f"Answer[4849]:\n{Answer[4849]}")
###Output
Question[4849]:
write a program to Binary Right Shift a number
Answer[4849]:
c = a >> 2
print("Binary Right Shift", c)
###Markdown
Converting into dataframe and dumping into CSV
###Code
import pandas as pd
df_Question = pd.DataFrame(Question, columns =['Question'])
df_Answer = pd.DataFrame(Answer,columns =['Answer'])
frames = [df_Question, df_Answer]
combined_question_answer = pd.concat(frames,axis=1)
combined_question_answer.head(2)
combined_question_answer.to_csv("combined_question_answer_from_df.csv",index=False)
combined_question_answer['AnswerLen'] = combined_question_answer['Answer'].astype(str).map(len)
combined_question_answer.size
combined_question_answer.head(2)
combined_question_answer_df = combined_question_answer[combined_question_answer['AnswerLen'] < 495]
combined_question_answer_df.size
combined_question_answer_df = combined_question_answer_df.drop(['AnswerLen'], axis=1)
combined_question_answer_df.head(2)
from sklearn.model_selection import train_test_split
train_combined_question_answer, val_combined_question_answer = train_test_split(combined_question_answer_df, test_size=0.2)
train_combined_question_answer.to_csv("train_combined_question_answer.csv",index=False)
val_combined_question_answer.to_csv("val_combined_question_answer.csv",index=False)
###Output
_____no_output_____
###Markdown
Downloading spacy and tokenization
###Code
!python -m spacy download en
spacy_en = spacy.load('en')
###Output
_____no_output_____
###Markdown
Defining Iterator and Tokenization
###Code
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
###Output
_____no_output_____
###Markdown
###Code
def tokenize_en_python(text):
"""
Tokenizes English text from a string into a list of strings
"""
return [tokenize(text)]
TEXT = Field(tokenize = tokenize_en,
eos_token = '<eos>',
init_token = '<sos>',
# lower = True,
batch_first = True)
fields = [("Question", TEXT), ("Answer", TEXT)]
!wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
!python collect_env.py
!python -c "import torchtext; print(\"torchtext version is \", torchtext.__version__)"
###Output
--2021-03-13 10:32:33-- https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15203 (15K) [text/plain]
Saving to: ‘collect_env.py.1’
collect_env.py.1 0%[ ] 0 --.-KB/s
collect_env.py.1 100%[===================>] 14.85K --.-KB/s in 0s
2021-03-13 10:32:33 (94.6 MB/s) - ‘collect_env.py.1’ saved [15203/15203]
Collecting environment information...
PyTorch version: 1.8.0+cu101
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.12.0
Python version: 3.7 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 11.0.221
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.8.0+cu101
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.9.0
[pip3] torchvision==0.9.0+cu101
[conda] Could not collect
torchtext version is 0.9.0
###Markdown
An example question-answer pair from this dataset (an English description and its corresponding Python code) is printed by the sample cell below.
###Code
train_data, valid_data = TabularDataset.splits(path=f'/content',
train='train_combined_question_answer.csv', validation='val_combined_question_answer.csv',
format='csv', skip_header=True, fields=fields)
print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
#print(f'Number of testing examples: {len(test_data)}')
# a sample of the preprocessed data
print(train_data[0].Question, train_data[0].Answer)
TEXT.build_vocab(train_data, min_freq = 1)
print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
BATCH_SIZE = 32
train_iterator, valid_iterator = BucketIterator.splits(
(train_data, valid_data),
batch_size = BATCH_SIZE,
device = device,
sort_key=lambda x: len(x.Question),
shuffle=True,
sort_within_batch=False,
repeat=False)
###Output
_____no_output_____
###Markdown
 Transformer Model Architecture
###Code
class Encoder(nn.Module):
def __init__(self,
input_dim,
hid_dim,
n_layers,
n_heads,
pf_dim,
dropout,
device,
max_length = 500):
super().__init__()
self.device = device
self.tok_embedding = nn.Embedding(input_dim, hid_dim)
self.pos_embedding = nn.Embedding(max_length, hid_dim)
self.layers = nn.ModuleList([EncoderLayer(hid_dim,
n_heads,
pf_dim,
dropout,
device)
for _ in range(n_layers)])
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)
def forward(self, src, src_mask):
#src = [batch size, src len]
#src_mask = [batch size, 1, 1, src len]
batch_size = src.shape[0]
src_len = src.shape[1]
pos = torch.arange(0, src_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)
#pos = [batch size, src len]
src = self.dropout((self.tok_embedding(src) * self.scale) + self.pos_embedding(pos))
#src = [batch size, src len, hid dim]
for layer in self.layers:
src = layer(src, src_mask)
#src = [batch size, src len, hid dim]
return src
class EncoderLayer(nn.Module):
def __init__(self,
hid_dim,
n_heads,
pf_dim,
dropout,
device):
super().__init__()
self.self_attn_layer_norm = nn.LayerNorm(hid_dim)
self.ff_layer_norm = nn.LayerNorm(hid_dim)
self.self_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.positionwise_feedforward = PositionwiseFeedforwardLayer(hid_dim,
pf_dim,
dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src, src_mask):
#src = [batch size, src len, hid dim]
#src_mask = [batch size, 1, 1, src len]
#self attention
_src, _ = self.self_attention(src, src, src, src_mask)
#dropout, residual connection and layer norm
src = self.self_attn_layer_norm(src + self.dropout(_src))
#src = [batch size, src len, hid dim]
#positionwise feedforward
_src = self.positionwise_feedforward(src)
#dropout, residual and layer norm
src = self.ff_layer_norm(src + self.dropout(_src))
#src = [batch size, src len, hid dim]
return src
###Output
_____no_output_____
###Markdown

###Code
class MultiHeadAttentionLayer(nn.Module):
def __init__(self, hid_dim, n_heads, dropout, device):
super().__init__()
assert hid_dim % n_heads == 0
self.hid_dim = hid_dim
self.n_heads = n_heads
self.head_dim = hid_dim // n_heads
self.fc_q = nn.Linear(hid_dim, hid_dim)
self.fc_k = nn.Linear(hid_dim, hid_dim)
self.fc_v = nn.Linear(hid_dim, hid_dim)
self.fc_o = nn.Linear(hid_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([self.head_dim])).to(device)
def forward(self, query, key, value, mask = None):
batch_size = query.shape[0]
#query = [batch size, query len, hid dim]
#key = [batch size, key len, hid dim]
#value = [batch size, value len, hid dim]
Q = self.fc_q(query)
K = self.fc_k(key)
V = self.fc_v(value)
#Q = [batch size, query len, hid dim]
#K = [batch size, key len, hid dim]
#V = [batch size, value len, hid dim]
Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
#Q = [batch size, n heads, query len, head dim]
#K = [batch size, n heads, key len, head dim]
#V = [batch size, n heads, value len, head dim]
energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale
#energy = [batch size, n heads, query len, key len]
if mask is not None:
energy = energy.masked_fill(mask == 0, -1e10)
attention = torch.softmax(energy, dim = -1)
#attention = [batch size, n heads, query len, key len]
x = torch.matmul(self.dropout(attention), V)
#x = [batch size, n heads, query len, head dim]
x = x.permute(0, 2, 1, 3).contiguous()
#x = [batch size, query len, n heads, head dim]
x = x.view(batch_size, -1, self.hid_dim)
#x = [batch size, query len, hid dim]
x = self.fc_o(x)
#x = [batch size, query len, hid dim]
return x, attention
class PositionwiseFeedforwardLayer(nn.Module):
def __init__(self, hid_dim, pf_dim, dropout):
super().__init__()
self.fc_1 = nn.Linear(hid_dim, pf_dim)
self.fc_2 = nn.Linear(pf_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
#x = [batch size, seq len, hid dim]
x = self.dropout(torch.relu(self.fc_1(x)))
#x = [batch size, seq len, pf dim]
x = self.fc_2(x)
#x = [batch size, seq len, hid dim]
return x
###Output
_____no_output_____
###Markdown

###Code
class Decoder(nn.Module):
def __init__(self,
output_dim,
hid_dim,
n_layers,
n_heads,
pf_dim,
dropout,
device,
max_length = 500):
super().__init__()
self.device = device
self.tok_embedding = nn.Embedding(output_dim, hid_dim)
self.pos_embedding = nn.Embedding(max_length, hid_dim)
self.layers = nn.ModuleList([DecoderLayer(hid_dim,
n_heads,
pf_dim,
dropout,
device)
for _ in range(n_layers)])
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)
def forward(self, trg, enc_src, trg_mask, src_mask):
#trg = [batch size, trg len]
#enc_src = [batch size, src len, hid dim]
#trg_mask = [batch size, 1, trg len, trg len]
#src_mask = [batch size, 1, 1, src len]
batch_size = trg.shape[0]
trg_len = trg.shape[1]
pos = torch.arange(0, trg_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)
#pos = [batch size, trg len]
trg = self.dropout((self.tok_embedding(trg) * self.scale) + self.pos_embedding(pos))
#trg = [batch size, trg len, hid dim]
for layer in self.layers:
trg, attention = layer(trg, enc_src, trg_mask, src_mask)
#trg = [batch size, trg len, hid dim]
#attention = [batch size, n heads, trg len, src len]
output = self.fc_out(trg)
#output = [batch size, trg len, output dim]
return output, attention
class DecoderLayer(nn.Module):
def __init__(self,
hid_dim,
n_heads,
pf_dim,
dropout,
device):
super().__init__()
self.self_attn_layer_norm = nn.LayerNorm(hid_dim)
self.enc_attn_layer_norm = nn.LayerNorm(hid_dim)
self.ff_layer_norm = nn.LayerNorm(hid_dim)
self.self_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.encoder_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.positionwise_feedforward = PositionwiseFeedforwardLayer(hid_dim,
pf_dim,
dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, trg, enc_src, trg_mask, src_mask):
#trg = [batch size, trg len, hid dim]
#enc_src = [batch size, src len, hid dim]
#trg_mask = [batch size, 1, trg len, trg len]
#src_mask = [batch size, 1, 1, src len]
#self attention
_trg, _ = self.self_attention(trg, trg, trg, trg_mask)
#dropout, residual connection and layer norm
trg = self.self_attn_layer_norm(trg + self.dropout(_trg))
#trg = [batch size, trg len, hid dim]
#encoder attention
_trg, attention = self.encoder_attention(trg, enc_src, enc_src, src_mask)
# query, key, value
#dropout, residual connection and layer norm
trg = self.enc_attn_layer_norm(trg + self.dropout(_trg))
#trg = [batch size, trg len, hid dim]
#positionwise feedforward
_trg = self.positionwise_feedforward(trg)
#dropout, residual and layer norm
trg = self.ff_layer_norm(trg + self.dropout(_trg))
#trg = [batch size, trg len, hid dim]
#attention = [batch size, n heads, trg len, src len]
return trg, attention
###Output
_____no_output_____
###Markdown
Example of the combined target mask built by `make_trg_mask` below, for a length-5 target whose last two tokens are padding: the causal (lower-triangular) mask ANDed with the padding mask gives
10000
11000
11100
11100
11100
###Code
class Seq2Seq(nn.Module):
def __init__(self,
encoder,
decoder,
src_pad_idx,
trg_pad_idx,
device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.src_pad_idx = src_pad_idx
self.trg_pad_idx = trg_pad_idx
self.device = device
def make_src_mask(self, src):
#src = [batch size, src len]
src_mask = (src != self.src_pad_idx).unsqueeze(1).unsqueeze(2)
#src_mask = [batch size, 1, 1, src len]
return src_mask
def make_trg_mask(self, trg):
#trg = [batch size, trg len]
trg_pad_mask = (trg != self.trg_pad_idx).unsqueeze(1).unsqueeze(2)
#trg_pad_mask = [batch size, 1, 1, trg len]
trg_len = trg.shape[1]
trg_sub_mask = torch.tril(torch.ones((trg_len, trg_len), device = self.device)).bool()
#trg_sub_mask = [trg len, trg len]
trg_mask = trg_pad_mask & trg_sub_mask
#trg_mask = [batch size, 1, trg len, trg len]
return trg_mask
def forward(self, src, trg):
#src = [batch size, src len]
#trg = [batch size, trg len]
src_mask = self.make_src_mask(src)
trg_mask = self.make_trg_mask(trg)
#src_mask = [batch size, 1, 1, src len]
#trg_mask = [batch size, 1, trg len, trg len]
enc_src = self.encoder(src, src_mask)
#enc_src = [batch size, src len, hid dim]
output, attention = self.decoder(trg, enc_src, trg_mask, src_mask)
#output = [batch size, trg len, output dim]
#attention = [batch size, n heads, trg len, src len]
return output, attention
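# Concrete sketch of the combined target mask illustrated in the markdown cell
# above: a single length-5 target whose last two tokens are padding. The pad
# index (1) and the token ids are made up purely for this illustration.
_demo = Seq2Seq(None, None, src_pad_idx=1, trg_pad_idx=1, device='cpu')
_demo_mask = _demo.make_trg_mask(torch.tensor([[5, 7, 9, 1, 1]]))
# _demo_mask[0, 0] is the 5 x 5 boolean matrix
# 1 0 0 0 0
# 1 1 0 0 0
# 1 1 1 0 0
# 1 1 1 0 0
# 1 1 1 0 0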
INPUT_DIM = len(TEXT.vocab)
OUTPUT_DIM = len(TEXT.vocab)
HID_DIM = 512
ENC_LAYERS = 3
DEC_LAYERS = 3
ENC_HEADS = 8
DEC_HEADS = 8
ENC_PF_DIM = 1024
DEC_PF_DIM = 1024
ENC_DROPOUT = 0.1
DEC_DROPOUT = 0.1
enc = Encoder(INPUT_DIM,
HID_DIM,
ENC_LAYERS,
ENC_HEADS,
ENC_PF_DIM,
ENC_DROPOUT,
device)
dec = Decoder(OUTPUT_DIM,
HID_DIM,
DEC_LAYERS,
DEC_HEADS,
DEC_PF_DIM,
DEC_DROPOUT,
device)
SRC_PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
TRG_PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = Seq2Seq(enc, dec, SRC_PAD_IDX, TRG_PAD_IDX, device).to(device)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
def initialize_weights(m):
if hasattr(m, 'weight') and m.weight.dim() > 1:
nn.init.xavier_uniform_(m.weight.data)
model.apply(initialize_weights);
LEARNING_RATE = 0.0005
optimizer = torch.optim.Adam(model.parameters(), lr = LEARNING_RATE)
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
Question = batch.Question
Answer = batch.Answer
optimizer.zero_grad()
output, _ = model(Question, Answer[:,:-1])
#output = [batch size, trg len - 1, output dim]
#trg = [batch size, trg len]
output_dim = output.shape[-1]
output = output.contiguous().view(-1, output_dim)
Answer = Answer[:,1:].contiguous().view(-1)
#output = [batch size * trg len - 1, output dim]
#trg = [batch size * trg len - 1]
loss = criterion(output, Answer)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
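# Note on the slicing above (added for clarity): the decoder is fed
# Answer[:, :-1] (everything except the final <eos>) while the loss is computed
# against Answer[:, 1:] (everything except the leading <sos>), so each position
# is trained to predict the *next* token -- standard teacher forcing for a
# Transformer decoder.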
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
Question = batch.Question
Answer = batch.Answer
output, _ = model(Question, Answer[:,:-1])
#output = [batch size, trg len - 1, output dim]
#trg = [batch size, trg len]
output_dim = output.shape[-1]
output = output.contiguous().view(-1, output_dim)
Answer = Answer[:,1:].contiguous().view(-1)
#output = [batch size * trg len - 1, output dim]
#trg = [batch size * trg len - 1]
loss = criterion(output, Answer)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Model Training
###Code
N_EPOCHS = 25
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), '/content/drive/My Drive/NLP/tut6-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
model.load_state_dict(torch.load('/content/drive/My Drive/NLP/tut6-model.pt'))
test_loss = evaluate(model, valid_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
# | Test Loss: 2.119 | Test PPL: 8.325 |
###Output
| Test Loss: 2.166 | Test PPL: 8.723 |
###Markdown
Model Validations
###Code
def translate_sentence(sentence, src_field, trg_field, model, device, max_len = 50):
model.eval()
if isinstance(sentence, str):
nlp = spacy.load('en_core_web_sm')  # English tokeniser for raw string inputs; the 'de' German model originally here did not match this English-question dataset
tokens = [token.text.lower() for token in nlp(sentence)]
else:
tokens = [token.lower() for token in sentence]
tokens = [src_field.init_token] + tokens + [src_field.eos_token]
src_indexes = [src_field.vocab.stoi[token] for token in tokens]
src_tensor = torch.LongTensor(src_indexes).unsqueeze(0).to(device)
src_mask = model.make_src_mask(src_tensor)
with torch.no_grad():
enc_src = model.encoder(src_tensor, src_mask)
trg_indexes = [trg_field.vocab.stoi[trg_field.init_token]]
for i in range(max_len):
trg_tensor = torch.LongTensor(trg_indexes).unsqueeze(0).to(device)
trg_mask = model.make_trg_mask(trg_tensor)
with torch.no_grad():
output, attention = model.decoder(trg_tensor, enc_src, trg_mask, src_mask)
pred_token = output.argmax(2)[:,-1].item()
trg_indexes.append(pred_token)
if pred_token == trg_field.vocab.stoi[trg_field.eos_token]:
break
trg_tokens = [trg_field.vocab.itos[i] for i in trg_indexes]
return trg_tokens[1:], attention
def display_attention(sentence, translation, attention, n_heads = 8, n_rows = 4, n_cols = 2):
assert n_rows * n_cols == n_heads
fig = plt.figure(figsize=(15,25))
for i in range(n_heads):
ax = fig.add_subplot(n_rows, n_cols, i+1)
_attention = attention.squeeze(0)[i].cpu().detach().numpy()
cax = ax.matshow(_attention, cmap='bone')
ax.tick_params(labelsize=12)
ax.set_xticklabels(['']+['<sos>']+[t.lower() for t in sentence]+['<eos>'],
rotation=45)
ax.set_yticklabels(['']+translation)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
example_idx = 11
src = vars(train_data.examples[example_idx])['Question']
trg = vars(train_data.examples[example_idx])['Answer']
print(f'src = {src}')
print(f'trg = {trg}')
# translation, attention = translate_sentence(Question, TEXT, TEXT, model, device)
# print(f'predicted trg = {translation}')
# display_attention(src, translation, attention)
example_idx = 1
src = vars(valid_data.examples[example_idx])['Question']
trg = vars(valid_data.examples[example_idx])['Answer']
print(f'src = {src}')
print(f'trg = {trg}')
translation, attention = translate_sentence(src, TEXT, TEXT, model, device)
print(f'predicted trg = {translation}')
example_idx = 19
src = vars(valid_data.examples[example_idx])['Question']
trg = vars(valid_data.examples[example_idx])['Answer']
print(f'src = {src}')
print(f'trg = {trg}')
translation, attention = translate_sentence(src, TEXT, TEXT, model, device)
print(f'predicted trg = {translation}')
example_idx = 39
src = vars(valid_data.examples[example_idx])['Question']
trg = vars(valid_data.examples[example_idx])['Answer']
# print(f'src = {src}')
listToStr = ' '.join([str(elem) for elem in src])
print(f'src: {listToStr}')
# print(f'trg = {trg}')
listToStr = ' '.join([str(elem) for elem in trg])
print(f'Target:\n{listToStr}')
translation, attention = translate_sentence(src, TEXT, TEXT, model, device)
print(f'predicted trg = {translation}')
# # output = []
# for x in translation:
# output.append(x)
# # print(x)
# print(output)
listToStr = ' '.join([str(elem) for elem in translation])
print(listToStr)
###Output
def compute_lcm(x , y ) :
if x > y :
greater = y
else :
greater = y
while(True ) :
if((greater % x = 0 ) and ( greater % y = = 0 ) ) :
break
greater +
###Markdown
Model Prediction to generate Python Code on 25 Random Python Questions
###Code
import random
# Pick 25 random example indices from the validation set
randomlist = random.sample(range(0, len(valid_data)), 25)
for ele in randomlist:
example_idx = ele
src = vars(valid_data.examples[example_idx])['Question']
trg = vars(valid_data.examples[example_idx])['Answer']
# print(f'src = {src}')
listToStr = ' '.join([str(elem) for elem in src])
print(f'Question: {listToStr}')
listToStr = ' '.join([str(elem) for elem in trg])
print(f'Source Python:\n{listToStr}')
print(f'\n')
# print(f'\n')
# print(f'trg = {trg}')
listToStr = ' '.join([str(elem) for elem in trg])
translation, attention = translate_sentence(src, TEXT, TEXT, model, device)
listToStr = ' '.join([str(elem) for elem in translation])
listToStrx = listToStr.replace('<eos>', '')
print(f'Target Python:\n{listToStrx}')
print('#########################################################################################################')
print('#########################################################################################################')
import random
# Pick 25 random example indices from the validation set
randomlist = random.sample(range(0, len(valid_data)), 25)
for ele in randomlist:
example_idx = ele
src = vars(valid_data.examples[example_idx])['Question']
trg = vars(valid_data.examples[example_idx])['Answer']
# print(f'src = {src}')
listToStr = ' '.join([str(elem) for elem in src])
print(f'Question: {listToStr}')
listToStr = ' '.join([str(elem) for elem in trg])
print(f'Source Python:\n{listToStr}')
print(f'\n')
# print(f'\n')
# print(f'trg = {trg}')
listToStr = ' '.join([str(elem) for elem in trg])
translation, attention = translate_sentence(src, TEXT, TEXT, model, device)
listToStr = ' '.join([str(elem) for elem in translation])
listToStrx = listToStr.replace('<eos>', '')
print(f'Target Python:\n{listToStrx}')
print('#########################################################################################################')
print('#########################################################################################################')
import random
# Pick 25 random example indices from the validation set
randomlist = random.sample(range(0, len(valid_data)), 25)
for ele in randomlist:
example_idx = ele
src = vars(valid_data.examples[example_idx])['Question']
trg = vars(valid_data.examples[example_idx])['Answer']
# print(f'src = {src}')
listToStr = ' '.join([str(elem) for elem in src])
print(f'Question: {listToStr}')
listToStr = ' '.join([str(elem) for elem in trg])
print(f'Source Python:\n{listToStr}')
print(f'\n')
# print(f'\n')
# print(f'trg = {trg}')
listToStr = ' '.join([str(elem) for elem in trg])
translation, attention = translate_sentence(src, TEXT, TEXT, model, device)
listToStr = ' '.join([str(elem) for elem in translation])
listToStrx = listToStr.replace('<eos>', '')
print(f'Target Python:\n{listToStrx}')
print('#########################################################################################################')
print('#########################################################################################################')
###Output
Question: Python Program to Illustrate Different Set Operations
Source Python:
NA
Target Python:
a = 60
b = 13
c = a ^ b
print("XOR " , c )
#########################################################################################################
#########################################################################################################
Question: write a python program to add two lists using map and lambda
Source Python:
nums1 = [ 1 , 2 , 3 ]
nums2 = [ 4 , 5 , 6 ]
result = map(lambda x , y : x + y , nums1 , nums2 )
print(list(result ) )
Target Python:
test_list = [ ( 5 , 6 ) , ( 1 , ( 3 ) , ( 5 , ( 6 , ( 7 ) , ( 7 ) ]
print("The original list is : " + str(test_list )
res = [ sub for sub in test_list if
#########################################################################################################
#########################################################################################################
Question: Python3 code to demonstrate working of Extract String till Numeric Using isdigit ( ) + index ( ) + loop
Source Python:
test_str = " geeks4geeks is best "
print("The original string is : " + str(test_str ) )
temp = 0
for chr in test_str :
if chr.isdigit ( ) :
temp = test_str.index(chr )
print("Extracted String : " + str(test_str[0 : temp ] ) )
1 .
Target Python:
res = [ ]
for sub in test_list :
for ele in test_list :
if ( ele ) :
res.append(val )
print(res )
#########################################################################################################
#########################################################################################################
Question: Write a function to return the torque when a force f is applied at angle thea and distance for axis of rotation to place force applied is r
Source Python:
def cal_torque(force : float , theta : float , r : float)->float :
import math
return force*r*math.sin(theta )
Target Python:
def cal_speed(distance : float , time : float)->float :
return distance / time
#########################################################################################################
#########################################################################################################
Question: Write a python program using a list comprehension to square each odd number in a list . The list is input by a sequence of comma - separated numbers .
Source Python:
values = raw_input ( )
numbers = [ x for x in values.split ( " , " ) if int(x)%2!=0 ]
print ( " , " .join(numbers ) )
Target Python:
NA
#########################################################################################################
#########################################################################################################
Question: Find if all elements in a list are identical
Source Python:
listOne = [ 20 , 20 , 20 , 20 ]
print("All element are duplicate in listOne : " , listOne.count(listOne[0 ] ) = = len(listOne ) )
Target Python:
test_list = [ ( 1 , 5 ) , ( 1 ) , ( 3 ) , ( 1 ) , ( 3 ) ]
res = [ ]
for i , ( i ) for i , j in test_list :
for i ) ] )
#########################################################################################################
#########################################################################################################
Question: Write a function that splits the elements of string
Source Python:
def split_elements(s : str , seperator)- > list :
return s.split(seperator )
Target Python:
def list_to_dict(list1 , list2 ) :
return dict(zip(list1 , list2 ) )
#########################################################################################################
#########################################################################################################
Question: write a python program to print element with maximum values from a list
Source Python:
list1 = [ " gfg " , " best " , " for " , " geeks " ]
s= [ ]
for i in list1 :
count=0
for j in i :
if j in ( ' a','e','i','o','u ' ) :
count = count+1
s.append(count )
print(s )
if count== max(s ) :
print(list1[s.index(max(s ) ) ] )
Target Python:
test_list = [ ( 1 , 5 ) , ( 1 ) , ( 3 ) , ( 1 ) , ( 3 ) ]
res = [ ( a , ( a , b ) for i , b ) for i , b in test_list ]
#########################################################################################################
#########################################################################################################
Question: Write a program which can filter ( ) to make a list whose elements are even number between 1 and 20 ( both included ) .
Source Python:
evenNumbers = filter(lambda x : x%2==0 , range(1,21 ) )
print evenNumbers
Target Python:
NA
#########################################################################################################
#########################################################################################################
Question: Stella octangula numbers : n ( 2n2 − 1 ) , with n ≥ 0 .
Source Python:
def stella_octangula_number(n ) :
if n > = 0 :
return n*(2**n - 1 )
Target Python:
def compute_gcd(x , y ) :
while(y ) :
x , x % y = = y
return x
#########################################################################################################
#########################################################################################################
Question: convert string to intern string
Source Python:
def str_to_intern_str(a ) :
import sys
b = sys.intern(a )
if a is b :
print('Sentence is interned ' )
else :
raise ValueError('This should not happen ' )
Target Python:
str1 = " Hello ! It is a Good thing "
substr1 = "
substr2 = "
substr2 = " bad "
replaced_str = str1.replace(substr1 , substr2 )
print("String after replace : " + str(replaced_str ) ) )
#########################################################################################################
#########################################################################################################
Question: Python3 code to demonstrate Shift from Front to Rear in List using insert ( ) + pop ( )
Source Python:
test_list = [ 1 , 4 , 5 , 6 , 7 , 8 , 9 , 12 ]
print ( " The original list is : " + str(test_list ) )
test_list.insert(len(test_list ) - 1 , test_list.pop(0 ) )
print ( " The list after shift is : " + str(test_list ) )
Target Python:
test_list = [ ( 5 , 6 ) , ( 1 ) , ( 3 ) , ( 1 ) ]
print("The original list is : " + str(test_list )
res = [ sub ) for ele in test_list :
for ele in sub :
res
#########################################################################################################
#########################################################################################################
Question: Write a python function to return minimum sum of factors of a number
Source Python:
def findMinSum(num ) :
sum = 0
i = 2
while(i * i < = num ) :
while(num % i = = 0 ) :
sum + = i
num /= i
i + = 1
sum + = num
return sum
Target Python:
def compute_gcd(x , y ) :
while(y ) :
x , x % y = = y
return x
#########################################################################################################
#########################################################################################################
Question: printing result
Source Python:
print("Top N keys are : " + str(res ) )
Target Python:
print("The original dictionary is : " + str(test_dict ) )
#########################################################################################################
#########################################################################################################
Question: Convert dictionary to JSON
Source Python:
import json
person_dict = { ' name ' : ' Bob ' ,
' age ' : 12 ,
' children ' : None
}
person_json = json.dumps(person_dict )
print(person_json )
Target Python:
str1 = " Hello ! "
str2 = "
print("Original String : " )
print("Maximum length of consecutive 0 ’s : " )
#########################################################################################################
#########################################################################################################
Question: Write a python function to get the volume of a prism with base area & height as input
Source Python:
def prism_volume(base_area , height ) :
volume = base_area * height
return volume
Target Python:
def compound_interest(principle , rate , time ) :
Amount = principle * ( pow((1 + rate / 100 ) , time ) , time )
CI = Amount - principle
CI = Amount - principle
print("Compound interest is " , CI )
#########################################################################################################
#########################################################################################################
Question: Rotate dictionary by K
Source Python:
NA
Target Python:
test_list = [ ( ' gfg ' , ' ) , ( ' 5 ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' ' '
#########################################################################################################
#########################################################################################################
Question: Write a program to check and print whether a number is palindrome or not
Source Python:
num = 12321
temp = num
rev = 0
while num > 0 :
dig = num % 10
rev = rev*10 + dig
num//=10
if temp==rev :
print("The number is a palindrome ! " )
else :
print("The number is n't a palindrome ! " )
Target Python:
def sumDigits(no ) :
return 0 if no = 0 else int(no % 10 ) + sumDigits(int(no / 10 )
n = 1234511
print(sumDigits(n ) )
#########################################################################################################
#########################################################################################################
Question: write a program to sort Dictionary by key - value Summation and print it
Source Python:
test_dict = { 3 : 5 , 1 : 3 , 4 : 6 , 2 : 7 , 8 : 1 }
Target Python:
res = [ ]
for sub in test_list :
for ele in test_list :
if ( ele ) :
res.append(val )
print(res )
#########################################################################################################
#########################################################################################################
Question: 7 write a python function to return every second number from a list
Source Python:
def every_other_number(lst ) :
return lst[::2 ]
Target Python:
def swapList(newList ) :
size = len(newList )
temp = newList[0 ]
newList[size - 1 ]
return newList
#########################################################################################################
#########################################################################################################
Question: write a function that acts like a ReLU function for a 1D array
Source Python:
def relu_list(input_list : list)->list :
return [ ( lambda x : x if x > = 0 else 0)(x ) for x in input_list ]
Target Python:
def printList ( ) :
li = list ( )
for i in range(1,21 ) :
li.append(i**2 )
print li[5 : ]
#########################################################################################################
#########################################################################################################
Question: Write Python Program to Print Table of a Given Number
Source Python:
n = int(input("Enter the number to print the tables for : " ) )
for i in range(1,11 ) :
print(n,"x",i,"=",n*i )
Target Python:
num = 16
if num < 0 :
print("Enter a positive number " )
else :
sum = 0
# use while loop to iterate until zero
while(num > 0 ) :
sum + = num -= 1
print("The sum is "
#########################################################################################################
#########################################################################################################
Question: Write a function to calculate the gravitational force between two objects of mass m1 and m2 and distance of r between them
Source Python:
def cal_gforce(mass1 : float , mass2 : float , distance : float)->float :
g = 6.674*(10)**(-11 )
return ( g*mass1*mass2)/(distance**2 )
Target Python:
def cal_speed(distance : float , time : float)->float :
return distance / time
#########################################################################################################
#########################################################################################################
Question: 88 write a program which prints all permutations of [ 1,2,3 ]
Source Python:
import itertools
print(list(itertools.permutations([1 , 2 , 3 ] ) ) )
Target Python:
import random
print random.sample(range(100 ) , 5 )
#########################################################################################################
#########################################################################################################
Question: write a python function to check if the given structure is a instance of list or dictionary
Source Python:
def check_insst(obj ) :
if isinstance(obj , list ) :
return " list "
elif isinstance(obj , dict ) :
return " dict "
else :
return " unknown "
check_insst ( { } )
Target Python:
def newlist(lst ) :
return list(filter(None , lst ) )
#########################################################################################################
#########################################################################################################
|
src/.ipynb_checkpoints/Untitled-checkpoint.ipynb | ###Markdown
---
###Code
pics2 = []
n = datetime.datetime.now()
for i in range(50):
ii = str(i)
a = np.fromfile('data/results/ress'+ii+'.raw',dtype=np.float32)
a = np.array(a)
model = PathModel(a)
img = cv2.imread('data/pics/'+ii+'.png')
res = draw_lane(img.copy(),model)
pics2.append(res)
print(datetime.datetime.now()-n)
8/50
def update_lane_line_data(points, off, is_ghost):
# print(points)
# print(off)
pvd = Pvd()
for i in range(MODEL_PATH_MAX_VERTICES_CNT // 2):
px = float(i)
py = points[i] - off
p_car_space = np.array([px, py, 0, 1])
p_full_frame = car_space_to_full_frame(p_car_space)
x = p_full_frame[0]
y = p_full_frame[1]
# print(x,y)
if x<0 or y<0:
continue
pvd.add_pt(x,y)
pvd.cnt += 1
for i in range(MODEL_PATH_MAX_VERTICES_CNT // 2, 0, -1):
px = float(i)
if is_ghost:
py = points[i]-off
else:
py = points[i]+off
p_car_space = np.array([px, py, 0, 1])
p_full_frame = car_space_to_full_frame(p_car_space)
x = p_full_frame[0]
y = p_full_frame[1]
if x<0 or y<0:
continue
pvd.add_pt(x,y)
pvd.cnt += 1
return pvd
MODEL_PATH_MAX_VERTICES_CNT=98
# rgb_height = 1748/2
# rgb_width = 2328/2
# Full-frame camera intrinsic matrix. Re-enabled (it was commented out) because
# car_space_to_full_frame below references intrinsic_matrix and was otherwise broken.
intrinsic_matrix = np.array([
    [910., 0., 582.],
    [0., 910., 437.],
    [0., 0., 1.]
])
eon_focal_length=910
medmodel_zoom = 1.
MEDMODEL_INPUT_SIZE = (512, 256)
MEDMODEL_YUV_SIZE = (MEDMODEL_INPUT_SIZE[0], MEDMODEL_INPUT_SIZE[1] * 3 // 2)
MEDMODEL_CY = 47.6
intrinsics = np.array(
[[ eon_focal_length / medmodel_zoom, 0. , 0.5 * MEDMODEL_INPUT_SIZE[0]],
[ 0. , eon_focal_length / medmodel_zoom, MEDMODEL_CY],
[ 0. , 0. , 1.]])
def car_space_to_full_frame(car_space_projective):
extrinsic = np.array([[ 9.86890774e-03, -9.99951124e-01, -6.39820937e-04, 0.00000000e+00],
[-6.46961704e-02, -5.42101086e-20, -9.97905016e-01, 1.22000003e+00],
[ 9.97856200e-01, 9.88962594e-03, -6.46930113e-02, 0.00000000e+00]])
ep = extrinsic.dot(car_space_projective)
# print(ep.shape)
kep = intrinsic_matrix.dot(ep)
p_image = np.array([kep[0]/kep[2], kep[1]/kep[2], 1])
return p_image
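# Quick sanity-check sketch (added here, not part of the original notebook):
# project a single car-space point 10 m straight ahead at road height.
# The homogeneous point is [forward, left, up, 1]; the result is a pixel
# coordinate (u, v, 1) and should land inside the full frame
# (roughly 1164 x 874 pixels here).
_p = car_space_to_full_frame(np.array([10.0, 0.0, 0.0, 1.0]))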
def update_lane_line_data(points, off, is_ghost):
pvd = {
'cnt':0,
'v':[]
}
for i in range(MODEL_PATH_MAX_VERTICES_CNT // 2):
px = float(i)
py = points[i] - off
p_car_space = np.array([px, py, 0, 1])
p_full_frame = car_space_to_full_frame(p_car_space)
temp = {
'x':p_full_frame[0],
'y':p_full_frame[1],
}
if temp['x']<0 or temp['y']<0:
continue
# if not (px >= 0 and px <= rgb_width and py >= 0 and py <= rgb_height):
# continue
pvd['v'].append(temp)
pvd['cnt'] += 1
for i in range(MODEL_PATH_MAX_VERTICES_CNT // 2, 0, -1):
px = float(i)
if is_ghost:
py = points[i]-off
else:
py = points[i]+off
p_car_space = np.array([px, py, 0, 1])
p_full_frame = car_space_to_full_frame(p_car_space)
temp = {
'x':p_full_frame[0],
'y':p_full_frame[1],
}
if temp['x']<0 or temp['y']<0:
continue
pvd['v'].append(temp)
pvd['cnt'] += 1
return pvd
def update(p):
p1 = update_lane_line_data(p['points'],0.025*p['prob'], False)
var = min(p['std'], 0.7)
p2 = update_lane_line_data(p['points'],-var, True)
p3 = update_lane_line_data(p['points'],var, True)
return p1,p2,p3
def draw(line,c):
l = line['v'][1:]
for j in range(1,len(l)):
pt1 = (int(l[j-1]['x']),int(l[j-1]['y']))
pt2 = (int(l[j]['x']),int(l[j]['y']))
cv2.line(img, pt1, pt2, c, 4)
def draw(line,c):
height = 894
width = 1164
img2 = np.zeros((height, width, 3), dtype=np.uint8)  # blank canvas to draw the lane polyline on
l = line['v'][1:]
for j in range(1,len(l)):
pt1 = (int(l[j-1]['x']),int(l[j-1]['y']))
pt2 = (int(l[j]['x']),int(l[j]['y']))
# print(pt1,pt2)
cv2.line(img2, pt1, pt2, c, 10)
return img2
# extrinsic = get_view_frame_from_road_frame(0.21, -0.0, 0.23, 1.22)
# extrinsic = get_view_frame_from_road_frame(0.25, -0.0, 0.13, 1.55)
# extrinsic = get_view_frame_from_road_frame(0,0,0,1.2)
extrinsic = np.array([[ 9.86890774e-03, -9.99951124e-01, -6.39820937e-04, 0.00000000e+00],
[-6.46961704e-02, -5.42101086e-20, -9.97905016e-01, 1.22000003e+00],
[ 9.97856200e-01, 9.88962594e-03, -6.46930113e-02, 0.00000000e+00]])
p1,p2,p3 = update(model.path)
lp1,lp2,lp3 = update(model.left_lane)
rp1,rp2,rp3 = update(model.right_lane)
# img = cv2.imread('data/29/preview.png')
h = 700
img = cv2.resize(img,(512,256))
# draw(lp1,(255,0,0))
# draw(rp1,(0,0,255))
# imshow(img)
img.shape[:-1]
l1 = draw(lp1,(1,1,1))
l2 = draw(lp2,(1,1,1))
l3 = draw(lp3,(1,1,1))
left = cv2.resize((l1+l2+l3),(img.shape[1],img.shape[0]))
ll1 = draw(lp1,(255,0,0))
ll2 = draw(lp2,(255,0,0))
ll3 = draw(lp3,(255,0,0))
leftt = cv2.resize((ll1+ll2+ll3),(img.shape[1],img.shape[0]))
r1 = draw(rp1,(1,1,1))
r2 = draw(rp2,(1,1,1))
r3 = draw(rp3,(1,1,1))
right = cv2.resize((r1+r2+r3),(img.shape[1],img.shape[0]))
rr1 = draw(rp1,(0,0,255))
rr2 = draw(rp2,(0,0,255))
rr3 = draw(rp3,(0,0,255))
rightt = cv2.resize((rr1+rr2+rr3),(img.shape[1],img.shape[0]))
imshow(img-(left+right)*img+(leftt+rightt))
imshow(cv2.imread('data/pics/0.png'))
import time
from IPython.display import clear_output
pngs = []
for i in range(50):
# clear_output()
pngs.append(cv2.imread('data/pics/'+str(i)+'.png'))
# imshow(a)
# time.sleep(0.1)
for i in range(50):
clear_output()
imshow(pics[i])
# time.sleep(0.1)
from PIL import Image, ImageDraw
pics = []
for i in range(50):
pics.append(Image.open('data/pics/'+str(i)+'.png'))
from IPython.display import Image, display
X = Image(url='test.gif')
display(X)
from IPython.display import Image, display
X = Image(url='test2.gif')
display(X)
from PIL import Image, ImageDraw
imgs2 = [Image.fromarray(cv2.cvtColor(cv2.resize(i,(i.shape[1]//2,i.shape[0]//2)),cv2.COLOR_BGR2RGB)) for i in pics2]
imgs2[0].save('test2.gif', format='GIF', append_images=imgs2[1:], save_all=True, duration=70, loop=0)
imgs2[0].save('test2.gif', format='GIF', append_images=imgs2[1:], save_all=True, duration=100, loop=0)
###Output
_____no_output_____
###Markdown
Data Augmentation
###Code
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc, ndimage
import keras
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
# from preprocessing import ImageDataGenerator
%matplotlib inline
gen = ImageDataGenerator(rotation_range=10,
width_shift_range=0,
height_shift_range=0,
shear_range=0.15,
zoom_range=0.1,
channel_shift_range=10,
horizontal_flip=True)
test_img= np.expand_dims(plt.imread(os.path.join(img_path,img_1)),0)
plt.imshow(test_img[0])
plt.show()
print(test_img.shape)
aug_iter = gen.flow(test_img)
plt.imshow(next(aug_iter)[0].astype(np.uint8))
plt.show()
aug_images = [next(aug_iter)[0]]
###Output
_____no_output_____
###Markdown
Convert to Parquet
###Code
import os
import numpy as np
import pyarrow
import pyarrow.parquet as pq
import matplotlib.pyplot as plt
from utils import convert_labels
data_path = os.path.join(os.getcwd(), '..', 'data', 'raw')
img_path = os.path.join(data_path, 'images')
label_path = os.path.join(data_path, 'labels')
os.listdir(label_path)
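# The cells above only set up the paths; a minimal sketch of the conversion
# itself follows. The 'image'/'label' column names and the single-parquet-file
# layout are assumptions for illustration -- the project's own convert_labels
# helper presumably handles the real label format.
import pandas as pd  # assumed available alongside pyarrow
records = [{'image': os.path.join(img_path, f),
            'label': os.path.join(label_path, os.path.splitext(f)[0] + '.txt')}
           for f in os.listdir(img_path)]
table = pyarrow.Table.from_pandas(pd.DataFrame(records))
pq.write_table(table, os.path.join(data_path, 'dataset.parquet'))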
###Output
_____no_output_____
###Markdown
Youtube AnalyticsThis project aims at analysing the trending videos from Youtube's trending list for Great Britain. Great Britain was chosen over other demographics because the titles consist mainly of English letters and numbers, which gives us, as proficient English speakers, a better understanding of the data. Instead of analysing everything at once, this visual breakdown is divided into several smaller parts to remain beginner friendly and accessible to those who may not have in-depth knowledge of the domain or this dataset. Importing necessary modules
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Brief overview of the DatasetTo be able to work with the dataset, we must have a brief overview of what it holds. For this project, the dataset is located in the `../data` directory. It consists of two parts:* A CSV file named `GBvideos.csv`* A JSON file named `GB_category_id.json`These two files constitute our dataset for analysing some of the most popular British Youtube content. Structure of Data
###Code
df = pd.read_csv('../data/GBvideos.csv')
print("Dimension: {}".format(df.shape))
df.head()
###Output
Dimension: (38916, 16)
###Markdown
The above representation gives us an overview of what the data from the CSV file looks like. We have a CSV file with 38916 rows and 16 columns, each consisting of data extracted using the official Youtube V3 API (this API is now deprecated and only caters to previously subscribed users). Data AttributesWe can see 16 distinct columns, each with unique attributes. Let's see what these attributes are:
###Code
df.columns
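# Sketch (added for illustration): the JSON half of the dataset maps numeric
# category ids to human-readable names. The 'items'/'snippet'/'title' keys below
# follow Youtube's standard category listing, and the 'category_id' column name
# is assumed from the usual trending-videos schema -- worth verifying against
# the df.columns output above.
import json
with open('../data/GB_category_id.json') as f:
    categories = json.load(f)
id_to_name = {int(item['id']): item['snippet']['title'] for item in categories['items']}
df['category_name'] = df['category_id'].map(id_to_name)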
###Output
_____no_output_____
###Markdown
Validation 1
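For context, the longitudinal modulus printed below follows the rule of mixtures, $E_1 = E_{f1}V_f + E_m(1-V_f) = 233(0.61) + 4.62(0.39) \approx 143.9$, matching the first entry of `E`; the transverse and shear values come from whichever micromechanics relations `array_geometry=2` selects in this project's `Lamina` implementation.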
###Code
E_f = np.array([233, 23.1, 23.1])
v_f = np.array([0.40, 0.20, 0.20])
G_f = np.array([8.27, 8.96, 8.96])
alpha_f = np.array([-0.54, 10.10, 10.10])
mat_f = Material(E_f, v_f, G_f, alpha_f)
V_f = 0.61
E_m = 4.62
v_m = 0.36
G_m = 0
alpha_m = 41.4
mat_m = Material(E_m, v_m, G_m, alpha_m)
layer_1 = Lamina(mat_fiber=mat_f, mat_matrix=mat_m, Vol_fiber=V_f, array_geometry=2)
E, v, G = layer_1.get_lamina_properties()
a, b = layer_1.get_lamina_expansion_properties()
print(E)
print(v)
print(G)
print(a)
###Output
[143.9318 12.03631579 12.03631579]
[0.58919839 0.2624 0.2624 ]
[3.78691416 4.54567129 4.54567129]
[-1.49770933e-02 2.81195600e+01 2.81195600e+01]
###Markdown
Validation 2
###Code
E = np.array([19.2, 1.56, 1.56])
v = np.array([0.59, 0.24, 0.24])
G = np.array([0.49, 0.82, 0.82])
lam = Laminate()
mat = Material(E, v, G)
layer_1 = Lamina(mat_composite=mat)
print(layer_1.matrices.S)
print(layer_1.matrices.C)
###Output
[[ 0.05208333 -0.0125 -0.0125 0. 0. 0. ]
[-0.0125 0.64102564 -0.37820513 0. 0. 0. ]
[-0.0125 -0.37820513 0.64102564 0. 0. 0. ]
[ 0. 0. 0. 2.04081633 0. 0. ]
[ 0. 0. 0. 0. 1.2195122 0. ]
[ 0. 0. 0. 0. 0. 1.2195122 ]]
[[19.6485623 0.93450479 0.93450479 0. 0. 0. ]
[ 0.93450479 2.43745102 1.45631895 0. 0. 0. ]
[ 0.93450479 1.45631895 2.43745102 0. 0. 0. ]
[ 0. 0. 0. 0.49 0. 0. ]
[ 0. 0. 0. 0. 0.82 0. ]
[ 0. 0. 0. 0. 0. 0.82 ]]
###Markdown
Validation 3
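The quantity checked here is the total lamina strain under combined mechanical, thermal and moisture loading, $\varepsilon = S\,\sigma + \alpha\,\Delta T + \beta\,\Delta M$; reading the code below, $\Delta T = 10$ and $\Delta M = 0.6$, which is why `alpha * 10` and `beta * 0.6` are added to the mechanically induced strain.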
###Code
E = np.array([163, 14.1, 14.1]) * 1e9
v = np.array([0.45, 0.24, 0.24])
G = np.array([3.6, 4.8, 4.8]) * 1e9
alpha = np.array([-0.018, 24.3, 24.3, 0, 0, 0]) * 1e-6
beta = np.array([150, 4870, 4870, 0, 0, 0]) * 1e-6
sigma = create_tensor_3D(50, -50, -5, 0, 0, -3) * 1e6
lam = Laminate()
mat = Material(E, v, G)
layer_1 = Lamina(mat_composite=mat)
lam.add_lamina(layer_1)
layer_1.apply_stress(sigma)
e_thermal = alpha * 10
e_moisture = beta * 0.6
print((layer_1.local_state.strain + e_thermal + e_moisture))
###Output
[ 0.00047755 -0.00029514 0.00433252 0. 0. -0.000625 ]
###Markdown
Validation 4
###Code
E = np.array([163, 14.1, 14.1]) * 1e9
v = np.array([0.45, 0.24, 0.24])
G = np.array([3.6, 4.8, 4.8]) * 1e9
alpha = np.array([-0.018, 24.3, 24.3, 0, 0, 0]) * 1e-6
beta = np.array([150, 4870, 4870, 0, 0, 0]) * 1e-6
lam = Laminate()
mat = Material(E, v, G)
layer_1 = Lamina(mat_composite=mat)
lam.add_lamina(layer_1)
epsilon = np.array([4.0e-4, -3.5e-3, 1.2e-3, 0, 0, -6e-4])
e_thermal = alpha * -30
e_moisture = beta * 0.6
e_total = create_tensor_3D(*(epsilon - e_thermal - e_moisture))
layer_1.apply_strain(e_total)
print(layer_1.local_state.stress * 1e-6)
###Output
[ 9.47654588 -108.19637856 -62.49293028 0. 0.
-2.88 ]
###Markdown
Validation 5
###Code
E = np.array([163, 14.1, 14.1]) * 1e9
v = np.array([0.45, 0.24, 0.24])
G = np.array([3.6, 4.8, 4.8]) * 1e9
alpha = np.array([-0.018, 24.3, 24.3, 0, 0, 0]) * 1e-6
beta = np.array([150, 4870, 4870, 0, 0, 0]) * 1e-6
lam = Laminate()
mat = Material(E, v, G)
layer_1 = Lamina(mat_composite=mat)
lam.add_lamina(layer_1)
epsilon = create_tensor_3D(4.0e-4, -3.5e-3, 1.2e-3, 0, 0, -6e-4)
layer_1.apply_strain(epsilon)
print(layer_1.matrices.C_reduced.dot(np.array([4.0e-4, -3.5e-3, -6e-4])) * 1e-6)
print(layer_1.local_state.stress * 1e-6)
###Output
[ 53.62318161 -48.23674327 -2.88 ]
[ 51.99071907 -50.3710594 -4.66761113 0. 0.
-2.88 ]
###Markdown
Validation 6
###Code
E = np.array([100, 20, 20])
v = np.array([0.40, 0.18, 0.18])
G = np.array([4, 5, 5])
lam = Laminate()
mat = Material(E, v, G)
layer_1 = Lamina(mat_composite=mat)
layer_2 = Lamina(mat_composite=mat)
lam.add_lamina(layer_1, 45)
lam.add_lamina(layer_2, -30)
print(lam.get_lamina(1).matrices.T_2D)
print(lam.get_lamina(1).matrices.S_bar_reduced)
print(lam.get_lamina(1).matrices.Q_bar_reduced)
print('-'*50)
print(lam.get_lamina(2).matrices.T_2D)
print(lam.get_lamina(2).matrices.S_bar_reduced)
print(lam.get_lamina(2).matrices.Q_bar_reduced)
###Output
[[ 0.5 0.5 1. ]
[ 0.5 0.5 -1. ]
[-0.5 0.5 0. ]]
[[ 0.0641 -0.0359 -0.02 ]
[-0.0359 0.0641 -0.02 ]
[-0.02 -0.02 0.0636]]
[[37.007408 27.007408 20.13044529]
[27.007408 37.007408 20.13044529]
[20.13044529 20.13044529 28.38392785]]
--------------------------------------------------
[[ 0.75 0.25 -0.8660254]
[ 0.25 0.75 0.8660254]
[ 0.4330127 -0.4330127 0.5 ]]
[[ 0.045575 -0.027375 0.04685197]
[-0.027375 0.065575 -0.01221096]
[ 0.04685197 -0.01221096 0.0977 ]]
[[ 62.98383525 21.16142604 -27.55901479]
[ 21.16142604 22.72294468 -7.30793923]
[-27.55901479 -7.30793923 22.53794589]]
###Markdown
For 2D Material Properties: $E_1=170\ \mathrm{GPa}$, $E_2=9\ \mathrm{GPa}$, $\nu_{12}=0.27$, $G_{12}=4.4\ \mathrm{GPa}$
###Code
E = np.array([170, 9, 1])
v = np.array([0, 0.27, 0.27])
G = np.array([1, 4.4, 4.4])
lam = Laminate()
mat = Material(E, v, G)
layer_1 = Lamina(mat_composite=mat)
layer_2 = Lamina(mat_composite=mat)
lam.add_lamina(layer_1, 45)
lam.add_lamina(layer_2, -30)
# print(lam.get_lamina(1).matrices.T_2D)
# print(lam.get_lamina(1).matrices.S_bar_reduced)
# print(lam.get_lamina(1).matrices.Q_bar_reduced)
# # print(lam.get_lamina(1).local_state.stress)
# print(lam.get_lamina(2).matrices.T_2D)
###Output
_____no_output_____
###Markdown
Opening file
###Code
import re
import pandas as pd
import numpy as np
import json
import os
import math
os.chdir("../data/")
df=pd.read_csv("Satellite Data.csv")
df.to_csv('PaperPoints.csv',columns=['Id','Ti', 'CC'], index=False)
linksdf=pd.DataFrame()
for refs in df['RId']:
if pd.isnull(refs):
continue
for ref in refs.split('|'):
if df['Id'].str.contains(ref, na=False).any():  # .any() collapses the boolean Series; a bare Series in an if raises ValueError
print("contained"+ref)
#linksdf.append(df['Id'])
print(linksdf)
###Output
_____no_output_____
###Markdown
Get the Face Embeddings for different models
###Code
cap = cv2.VideoCapture(0)
faces = 0
frames = 0
max_faces = 10
max_bbox = np.zeros(4)
model = DeepFace.build_model("Facenet")
while faces < max_faces:
ret, frame = cap.read()
frames += 1
dtString = str(datetime.now().microsecond)
# if not (os.path.exists(path)):
# os.makedirs(path)
if frames % 3 == 0:
try:
img = functions.preprocess_face(frame, target_size= (model.input_shape[1], model.input_shape[2]), detector_backend= 'mtcnn', enforce_detection= False)
embedding = model.predict(img)[0].tolist()
# print(len(embedding))
faces += 1
except Exception as e:
print(e)
continue
cv2.imshow("Face detection", frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
model.summary()
t = (1, 160, 160, 3)
s = t[1], t[2], t[3]
s
type(img)
img1 = np.reshape(img, (img.shape[1], img.shape[2], img.shape[3]))  # drop the batch dimension for display
img1.shape
type(img1)
plt.imshow(img1)
embedding = model.predict(img)[0].tolist()
len(embedding)
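# Sketch (added for illustration): once two embeddings have been collected, a
# cosine similarity is one simple way to compare faces -- higher means "more
# likely the same person". The 0.7 threshold is an illustrative guess, not a
# tuned value, and embedding_1 / embedding_2 are hypothetical variables.
def cosine_similarity(a, b):
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
# same_person = cosine_similarity(embedding_1, embedding_2) > 0.7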
g = cv2.imread(path + '25181.jpg')
cv2.imshow("adad", g)
print(g)
faces = FaceDetector.detect_faces(face_detector, 'mtcnn', img, align=False)  # pass the backend positionally: a positional argument may not follow a keyword argument
###Output
_____no_output_____
###Markdown
To detect and extract Face crops
###Code
detector_backend = 'mtcnn'
face_detector = FaceDetector.build_model(detector_backend)
cap = cv2.VideoCapture(0)
faces1= 0
frames = 0
max_faces = 50
max_bbox = np.zeros(4)
while faces1 < max_faces:
ret, frame = cap.read()
frames += 1
dtString = str(datetime.now().microsecond)
if not (os.path.exists(path)):
os.makedirs(path)
if frames % 3== 0:
faces = FaceDetector.detect_faces(face_detector, detector_backend, frame, align=False)
print(frame.shape)
print()
for face, (x, y, w, h) in faces:
# plt.imshow(face)
cv2.rectangle(frame, (x,y), (x+w,y+h), (67,67,67), 3)
cv2.imwrite(os.path.join(path, "{}.jpg".format(dtString)), face)
print('Face detected')
faces1 += 1
cv2.imshow("Face detection", frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
(480, 640, 3)
Face detected
###Markdown
Features: age, gender, facial expression, race
###Code
from deepface import DeepFace
cap = cv2.VideoCapture(0)
faces = 0
frames = 0
max_faces = 10
max_bbox = np.zeros(4)
model = DeepFace.build_model("Facenet")
while faces < max_faces:
ret, frame = cap.read()
frames += 1
dtString = str(datetime.now().microsecond)
# if not (os.path.exists(path)):
# os.makedirs(path)
if frames % 1 == 0:
try:
img = functions.preprocess_face(frame, target_size= (model.input_shape[1], model.input_shape[2]), detector_backend= 'mtcnn', enforce_detection= False)
obj = DeepFace.analyze(img)
embedding = model.predict(img)[0].tolist()
print(len(embedding))
print(obj)
faces += 1
except Exception as e:
print(e)
continue
cv2.imshow("Face detection", frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
break
cap.release()
cv2.destroyAllWindows()
pwd
%cd ..
%cd ..
###Output
C:\Users\91992\DLCVNLP\CV_Projects\Deepface_facerecog
|
PennyLane/Data Reuploading Classifier/4_QConv2ent_QFC2 LR Decay.ipynb | ###Markdown
Loading Raw Data
###Code
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[:, 0:27, 0:27]
x_test = x_test[:, 0:27, 0:27]
x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0
x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0
print(x_train_flatten.shape, y_train.shape)
print(x_test_flatten.shape, y_test.shape)
x_train_0 = x_train_flatten[y_train == 0]
x_train_1 = x_train_flatten[y_train == 1]
x_train_2 = x_train_flatten[y_train == 2]
x_train_3 = x_train_flatten[y_train == 3]
x_train_4 = x_train_flatten[y_train == 4]
x_train_5 = x_train_flatten[y_train == 5]
x_train_6 = x_train_flatten[y_train == 6]
x_train_7 = x_train_flatten[y_train == 7]
x_train_8 = x_train_flatten[y_train == 8]
x_train_9 = x_train_flatten[y_train == 9]
x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9]
print(x_train_0.shape)
print(x_train_1.shape)
print(x_train_2.shape)
print(x_train_3.shape)
print(x_train_4.shape)
print(x_train_5.shape)
print(x_train_6.shape)
print(x_train_7.shape)
print(x_train_8.shape)
print(x_train_9.shape)
x_test_0 = x_test_flatten[y_test == 0]
x_test_1 = x_test_flatten[y_test == 1]
x_test_2 = x_test_flatten[y_test == 2]
x_test_3 = x_test_flatten[y_test == 3]
x_test_4 = x_test_flatten[y_test == 4]
x_test_5 = x_test_flatten[y_test == 5]
x_test_6 = x_test_flatten[y_test == 6]
x_test_7 = x_test_flatten[y_test == 7]
x_test_8 = x_test_flatten[y_test == 8]
x_test_9 = x_test_flatten[y_test == 9]
x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9]
print(x_test_0.shape)
print(x_test_1.shape)
print(x_test_2.shape)
print(x_test_3.shape)
print(x_test_4.shape)
print(x_test_5.shape)
print(x_test_6.shape)
print(x_test_7.shape)
print(x_test_8.shape)
print(x_test_9.shape)
###Output
(980, 729)
(1135, 729)
(1032, 729)
(1010, 729)
(982, 729)
(892, 729)
(958, 729)
(1028, 729)
(974, 729)
(1009, 729)
###Markdown
Selecting the datasetOutput: X_train, Y_train, X_test, Y_test
###Code
num_sample = 200
n_class = 4
mult_test = 0.25
X_train = x_train_list[0][:num_sample, :]
X_test = x_test_list[0][:int(mult_test*num_sample), :]
Y_train = np.zeros((n_class*X_train.shape[0],), dtype=int)
Y_test = np.zeros((n_class*X_test.shape[0],), dtype=int)
for i in range(n_class-1):
X_train = np.concatenate((X_train, x_train_list[i+1][:num_sample, :]), axis=0)
Y_train[num_sample*(i+1):num_sample*(i+2)] = int(i+1)
X_test = np.concatenate((X_test, x_test_list[i+1][:int(mult_test*num_sample), :]), axis=0)
Y_test[int(mult_test*num_sample*(i+1)):int(mult_test*num_sample*(i+2))] = int(i+1)
print(X_train.shape, Y_train.shape)
print(X_test.shape, Y_test.shape)
###Output
(800, 729) (800,)
(200, 729) (200,)
###Markdown
Dataset Preprocessing
###Code
X_train = X_train.reshape(X_train.shape[0], 27, 27)
X_test = X_test.reshape(X_test.shape[0], 27, 27)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Quantum
###Code
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer
qml.enable_tape()
from tensorflow.keras.utils import to_categorical
# Set a random seed
np.random.seed(2020)
# Define output labels as quantum state vectors
# def density_matrix(state):
# """Calculates the density matrix representation of a state.
# Args:
# state (array[complex]): array representing a quantum state vector
# Returns:
# dm: (array[complex]): array representing the density matrix
# """
# return state * np.conj(state).T
label_0 = [[1], [0]]
label_1 = [[0], [1]]
def density_matrix(state):
"""Calculates the density matrix representation of a state.
Args:
state (array[complex]): array representing a quantum state vector
Returns:
dm: (array[complex]): array representing the density matrix
"""
return np.outer(state, np.conj(state))
#state_labels = [label_0, label_1]
state_labels = np.loadtxt('./tetra_states.txt', dtype=np.complex_)
dm_labels = [density_matrix(state_labels[i]) for i in range(4)]
len(dm_labels)
dm_labels
n_qubits = 4 # number of class
dev_fc = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_fc)
def q_fc(params, inputs):
"""A variational quantum circuit representing the DRC.
Args:
params (array[float]): array of parameters
inputs = [x, y]
x (array[float]): 1-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)]
dev_conv = qml.device("default.qubit", wires=9)
@qml.qnode(dev_conv)
def q_conv(conv_params, inputs):
"""A variational quantum circuit representing the Universal classifier + Conv.
Args:
params (array[float]): array of parameters
x (array[float]): 2-d input vector
y (array[float]): single output state density matrix
Returns:
float: fidelity between output state and input
"""
# layer iteration
for l in range(len(conv_params[0])):
# RY layer
# height iteration
for i in range(3):
# width iteration
for j in range(3):
qml.RY((conv_params[0][l][3*i+j] * inputs[i, j] + conv_params[1][l][3*i+j]), wires=(3*i+j))
# entangling layer
for i in range(9):
if i != (9-1):
qml.CNOT(wires=[i, i+1])
return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1) @ qml.PauliZ(2) @ qml.PauliZ(3) @ qml.PauliZ(4) @ qml.PauliZ(5) @ qml.PauliZ(6) @ qml.PauliZ(7) @ qml.PauliZ(8))
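# Added note: the expectation value of the 9-qubit Pauli-Z product above is a
# single scalar in [-1, 1]; it is used below as the output "pixel" of this
# 3 x 3 convolution window.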
a = np.zeros((2, 1, 9))
q_conv(a, X_train[0, 0:3, 0:3])
a = np.zeros((2, 1, 9))
q_fc(a, X_train[0, 0, 0:9])
tetra_class = np.loadtxt('./tetra_class_label.txt')
binary_class = np.array([[1, 0], [0, 1]])
square_class = np.array(np.loadtxt('./square_class_label.txt', dtype=np.complex_), dtype=float)
class_labels = tetra_class
class_labels
n_class = 4
temp = np.zeros((len(Y_train), n_class))
for i in range(len(Y_train)):
temp[i, :] = class_labels[Y_train[i]]
Y_train = temp
temp = np.zeros((len(Y_test), n_class))
for i in range(len(Y_test)):
temp[i, :] = class_labels[Y_test[i]]
Y_test = temp
Y_test
from keras import backend as K
# Alpha Custom Layer
class class_weights(tf.keras.layers.Layer):
def __init__(self):
super(class_weights, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(1, n_class), dtype="float32"),
trainable=True,
)
def call(self, inputs):
return (inputs * self.w)
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
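# Output-size arithmetic for the layers below (added for clarity): with a 3 x 3
# window and stride 2,
#   quantum conv 1: (27 - 3)/2 + 1 = 13  -> 13 x 13 feature map
#   quantum conv 2: (13 - 3)/2 + 1 = 6   ->  6 x 6 feature map
#   2 x 2 max-pool:  6 / 2        = 3    ->  3 x 3 = 9 values into the quantum FC layer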
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
# Quantum FC Layer, trainable params = 18*L*n_class + 2, output size = 2
num_fc_layer = 2
q_fc_layer_0 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=n_class)(reshape_layer_3)
# Alpha Layer
alpha_layer_0 = class_weights()(q_fc_layer_0)
model = tf.keras.Model(inputs=X, outputs=alpha_layer_0)
model(X_train[0:32, :, :])
import keras.backend as K
# def custom_loss(y_true, y_pred):
# return K.sum(((y_true.shape[1]-2)*y_true+1)*K.square(y_true-y_pred))/len(y_true)
def custom_loss(y_true, y_pred):
loss = K.square(y_true-y_pred)
#class_weights = y_true*(weight_for_1-weight_for_0) + weight_for_0
#loss = loss * class_weights
return K.sum(loss)/len(y_true)
for i in range(10):
print(0.1* ((0.95)**i))
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.1,
decay_steps=int(len(X_train)/32),
decay_rate=0.95,
staircase=True)
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(opt, loss='mse', metrics=["accuracy"])
cp_val_acc = tf.keras.callbacks.ModelCheckpoint(filepath="./Model/4_QConv2ent_QFC_LRDecay_valacc.hdf5",
monitor='val_accuracy', verbose=1, save_weights_only=True, save_best_only=True, mode='max')
cp_val_loss = tf.keras.callbacks.ModelCheckpoint(filepath="./Model/4_QConv2ent_QFC_LRDecay_valloss.hdf5",
monitor='val_loss', verbose=1, save_weights_only=True, save_best_only=True, mode='min')
H = model.fit(X_train, Y_train, epochs=20, batch_size=32, initial_epoch=18,
validation_data=(X_test, Y_test), verbose=1,
callbacks=[cp_val_acc, cp_val_loss])
model.summary()
# best first 10 epochs
H.history
# epoch 11-13
H.history
# epoch 14-15
H.history
# epoch 16-18
H.history
# epoch 19-20
H.history
# best first 10 epochs weights
model.weights
# 20 epochs weights (best val acc)
model.weights
###Output
_____no_output_____
###Markdown
Exploring the results
###Code
X_train = np.concatenate((x_train_list[0][:20, :], x_train_list[1][:20, :]), axis=0)
Y_train = np.zeros((X_train.shape[0],), dtype=int)
Y_train[20:] += 1
X_train.shape, Y_train.shape
X_test = np.concatenate((x_test_list[0][:20, :], x_test_list[1][:20, :]), axis=0)
Y_test = np.zeros((X_test.shape[0],), dtype=int)
Y_test[20:] += 1
X_test.shape, Y_test.shape
X_train = X_train.reshape(X_train.shape[0], 27, 27)
X_test = X_test.reshape(X_test.shape[0], 27, 27)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
First Layer
###Code
qconv_1_weights = np.array([[[ 2.2775786 , 0.5692359 , -1.3423119 , -0.5417412 ,
-0.02558044, 0.05552492, 0.68753076, -1.0091343 ,
1.5005509 ],
[ 1.1272193 , -0.20396537, -1.0141615 , 0.51830167,
-0.06443349, 0.43985152, 0.14942138, -0.09139597,
-0.848188 ]],
[[ 0.16573486, 0.45735574, -0.7883569 , 0.6720633 ,
0.00878196, -0.06765157, -0.13890953, -0.22267656,
0.7158553 ],
[-0.08998799, 0.0277558 , -0.38429782, -0.46371996,
0.03086979, -0.3737983 , 0.24834684, -0.26080084,
-0.5305297 ]]])
qconv_1_weights.shape
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
qconv1_model = tf.keras.Model(inputs=X, outputs=reshape_layer_1)
qconv1_model(X_train[0:1])
qconv1_model.get_layer('Quantum_Conv_Layer_1').set_weights([qconv_1_weights])
qconv1_model.weights
preprocessed_img_train = qconv1_model(X_train)
preprocessed_img_test = qconv1_model(X_test)
data_train = preprocessed_img_train.numpy().reshape(-1, 13*13)
np.savetxt('./4_QConv2ent_QFC2_LRDecay-Filter1_Image_Train.txt', data_train)
data_test = preprocessed_img_test.numpy().reshape(-1, 13*13)
np.savetxt('./4_QConv2ent_QFC2_LRDecay-Filter1_Image_Test.txt', data_test)
print(data_train.shape, data_test.shape)
###Output
(800, 169) (200, 169)
###Markdown
Second Layer
###Code
qconv_2_weights = np.array([[[ 1.1693882e-03, 6.2681824e-01, 1.0461473e+00, 1.6218431e+00,
6.3077182e-01, 1.0981085e-01, -2.2929375e+00, 1.4420069e+00,
4.2860335e-01],
[ 5.3585139e-03, 3.5323524e-01, 1.1388476e+00, -4.8413089e-01,
-5.7266551e-01, -4.0522391e-01, -2.0937469e+00, -2.5532886e-01,
-2.9869470e-01]],
[[-7.6449532e-03, 7.9749459e-01, 4.8039538e-01, -4.2923185e-01,
7.1820688e-01, -6.5161633e-01, -9.1815329e-01, -3.1984165e-01,
-1.5801352e+00],
[ 6.8552271e-03, -2.0065814e-01, 6.1129004e-01, -1.8278420e-02,
-4.7626549e-01, 2.6897669e-01, -1.0094500e+00, -9.0352833e-02,
1.8626230e+00]]])
qconv_2_weights.shape
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 2
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 2
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
qconv2_model = tf.keras.Model(inputs=X, outputs=reshape_layer_2)
qconv2_model(X_train[0:1])
qconv2_model.get_layer('Quantum_Conv_Layer_1').set_weights([qconv_1_weights])
qconv2_model.get_layer('Quantum_Conv_Layer_2').set_weights([qconv_2_weights])
qconv2_model.weights
preprocessed_img_train = qconv2_model(X_train)
preprocessed_img_test = qconv2_model(X_test)
data_train = preprocessed_img_train.numpy().reshape(-1, 6*6)
np.savetxt('./2_QConv2ent_QFC-Filter2_Image_Train.txt', data_train)
data_test = preprocessed_img_test.numpy().reshape(-1, 6*6)
np.savetxt('./2_QConv2ent_QFC-Filter2_Image_Test.txt', data_test)
print(data_train.shape, data_test.shape)
###Output
(40, 36) (40, 36)
###Markdown
Quantum States
###Code
q_fc_weights = np.array([[[-0.37493795, 0.28872567, 0.25326616, 2.3205736 ,
0.17077611, -0.09203133, 0.16455732, -0.46178114,
1.8485489 ],
[-0.02452541, -0.5649712 , -0.20143943, 1.8506535 ,
-1.0290856 , 0.7255949 , 0.66575605, -0.10246853,
1.5756156 ]],
[[ 0.2782273 , 0.37753746, -0.4796371 , -1.0230453 ,
-0.1992439 , 0.12077603, -0.1110618 , 0.41521144,
-0.22446293],
[ 0.07413091, 0.7279123 , 0.18484522, 0.7462162 ,
0.3220253 , 0.19055723, 0.20813133, 1.7572886 ,
0.7828762 ]]])
q_fc_weights.shape
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
np.argmax(pred_train, axis=1)
np.argmax(pred_test, axis=1)
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 1
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 1
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
maxpool_model = tf.keras.Model(inputs=X, outputs=reshape_layer_3)
maxpool_model(X_train[0:1])
maxpool_model.get_layer('Quantum_Conv_Layer_1').set_weights([qconv_1_weights])
maxpool_model.get_layer('Quantum_Conv_Layer_2').set_weights([qconv_2_weights])
maxpool_train = maxpool_model(X_train)
maxpool_test = maxpool_model(X_test)
maxpool_train.shape, maxpool_test.shape
n_qubits = 1 # number of class
dev_state = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_state)
def q_fc_state(params, inputs):
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
#return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)]
return qml.expval(qml.Hermitian(density_matrix(state_labels[0]), wires=[0]))
q_fc_state(np.zeros((2,1,9)), maxpool_train[0])
q_fc_state(q_fc_weights, maxpool_train[0])
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(q_fc_weights, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
q_fc_state(q_fc_weights, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./2_QConv2ent_QFC-State_Train.txt', train_state)
np.savetxt('./2_QConv2ent_QFC-State_Test.txt', test_state)
###Output
_____no_output_____
###Markdown
Random Starting State
###Code
# Input image, size = 27 x 27
X = tf.keras.Input(shape=(27,27), name='Input_Layer')
# Specs for Conv
c_filter = 3
c_strides = 2
# First Quantum Conv Layer, trainable params = 18*L, output size = 13 x 13
num_conv_layer_1 = 1
q_conv_layer_1 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_1, 9)}, output_dim=(1), name='Quantum_Conv_Layer_1')
size_1 = int(1+(X.shape[1]-c_filter)/c_strides)
q_conv_layer_1_list = []
# height iteration
for i in range(size_1):
# width iteration
for j in range(size_1):
temp = q_conv_layer_1(X[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_1_list += [temp]
concat_layer_1 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_1_list)
reshape_layer_1 = tf.keras.layers.Reshape((size_1, size_1))(concat_layer_1)
# Second Quantum Conv Layer, trainable params = 18*L, output size = 6 x 6
num_conv_layer_2 = 1
q_conv_layer_2 = qml.qnn.KerasLayer(q_conv, {"conv_params": (2, num_conv_layer_2, 9)}, output_dim=(1), name='Quantum_Conv_Layer_2')
size_2 = int(1+(reshape_layer_1.shape[1]-c_filter)/c_strides)
q_conv_layer_2_list = []
# height iteration
for i in range(size_2):
# width iteration
for j in range(size_2):
temp = q_conv_layer_2(reshape_layer_1[:, 2*i:2*(i+1)+1, 2*j:2*(j+1)+1])
temp = tf.keras.layers.Reshape((1,))(temp)
q_conv_layer_2_list += [temp]
concat_layer_2 = tf.keras.layers.Concatenate(axis=1)(q_conv_layer_2_list)
reshape_layer_2 = tf.keras.layers.Reshape((size_2, size_2, 1))(concat_layer_2)
# Max Pooling Layer, output size = 9
max_pool_layer = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=None, name='Max_Pool_Layer')(reshape_layer_2)
reshape_layer_3 = tf.keras.layers.Reshape((9,))(max_pool_layer)
# Quantum FC Layer, trainable params = 18*L*n_class + 2, output size = 2
num_fc_layer = 1
q_fc_layer_0 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, 9)}, output_dim=2)(reshape_layer_3)
# Alpha Layer
alpha_layer_0 = class_weights()(q_fc_layer_0)
model_random = tf.keras.Model(inputs=X, outputs=alpha_layer_0)
model_maxpool_random = tf.keras.Model(inputs=X, outputs=reshape_layer_3)
model_random(X_train[0:1])
model_random.weights
random_weights = np.array([[[-0.4163184 , 0.29198825, 0.49920654, 0.33594978,
-0.49212807, 0.00343066, 0.30105686, -0.15320912,
-0.5011647 ]],
[[ 0.41882396, 0.17975801, 0.3508029 , 0.37545007,
-0.37378743, 0.39107925, -0.3128681 , -0.22416279,
-0.00185567]]])
maxpool_train = model_maxpool_random(X_train)
maxpool_test = model_maxpool_random(X_test)
maxpool_train.shape, maxpool_test.shape
n_qubits = 1 # number of class
dev_state = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_state)
def q_fc_state(params, inputs):
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
#return [qml.expval(qml.Hermitian(density_matrix(state_labels[i]), wires=[i])) for i in range(n_qubits)]
return qml.expval(qml.Hermitian(density_matrix(state_labels[0]), wires=[0]))
q_fc_state(np.zeros((2,1,9)), maxpool_train[0])
q_fc_state(random_weights, maxpool_train[21])
train_state = np.zeros((len(X_train), 2), dtype=np.complex_)
test_state = np.zeros((len(X_test), 2), dtype=np.complex_)
for i in range(len(train_state)):
q_fc_state(random_weights, maxpool_train[i])
temp = np.flip(dev_state._state)
train_state[i, :] = temp
q_fc_state(random_weights, maxpool_test[i])
temp = np.flip(dev_state._state)
test_state[i, :] = temp
# sanity check
print(((np.conj(train_state) @ density_matrix(state_labels[0])) * train_state)[:, 0] > 0.5)
print(((np.conj(test_state) @ density_matrix(state_labels[0])) * test_state)[:, 0] > 0.5)
np.savetxt('./2_QConv2ent_QFC-RandomState_Train.txt', train_state)
np.savetxt('./2_QConv2ent_QFC-RandomState_Test.txt', test_state)
###Output
_____no_output_____
###Markdown
Finish
###Code
first_10_epoch = H.history
first_10_epoch
# initial 10 epoch
model.get_weights()
###Output
_____no_output_____ |
06-01-hidden-markov-models.ipynb | ###Markdown
Hidden Markov Models To demonstrate how hidden Markov models work, imagine that we have a very noisy sensor on a robot (dangerbot-9000) that is walking around looking for danger. It needs to give a signal when a bad thing is nearby, but the signal is so noisy that it might give a lot of false positives. If we see a spike from our sensor, is this due to noise or due to a state change? In our example, we will see sensor output $s$ over time. This sensor output depends on the `danger` state $x_i$. Sensor output is independent over time given the state $x$ (which we cannot measure), but the state $x$ is not independent over time: if danger is nearby, it will lurk around for a while. Because of that structure, even though we have a very noisy sensor, we may be able to filter out some false positives. Working out the ExampleLet's use this (somewhat silly) example to learn about a cool tool in the python ecosystem: `pomegranate`. It has some of the bayesian algorithms that `sklearn` is missing.
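Before diving into the code, it may help to see the structure just described written out; this is the standard hidden Markov model factorisation, shown purely for reference: $$ p(x_{1:T}, s_{1:T}) = p(x_1) \prod_{t=2}^{T} p(x_t \mid x_{t-1}) \prod_{t=1}^{T} p(s_t \mid x_t) $$ i.e. the sensor readings $s_t$ are conditionally independent given the hidden danger states, while the states themselves form a Markov chain.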
###Code
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
from pomegranate import NormalDistribution, HiddenMarkovModel
###Output
_____no_output_____
###Markdown
Let's say that $x=1$ if there is a bad thing nearby. We'll define some probabilities. First we define the emission distributions: $$ p(s|x=0) \sim N(0,2) $$ $$ p(s|x=1) \sim N(2,3) $$ Next we'll define the transition probabilities $$ p(x_i \mid x_{i-1}) $$ Then we can define the model with just a little bit of code. With these probabilities defined, we will use the HMM to infer $p(x_t \mid s_{1:t})$
###Code
dists = [NormalDistribution(0, 2), NormalDistribution(2, 3)]
trans_mat = np.array([[0.99, 0.01],
[0.20, 0.80]])
starts = np.array([0.99, 0.01])
model = HiddenMarkovModel.from_matrix(trans_mat, dists, starts)
###Output
_____no_output_____
###Markdown
Next we'll suppose that this is our sensor data.
###Code
data = [0, 0, 0, 0, 0, 1, 4, 5, 4, 1, 0, 0, 0]
###Output
_____no_output_____
###Markdown
We can apply the model against this data to label points where we had danger nearby.
###Code
model.predict_proba(data)
model.predict(data)
###Output
_____no_output_____
###Markdown
Exercise 1 Take this code and play with it. Try to see what happens if: - the two output distributions are very much alike - the sensor input doesn't have consecutive high values - the transition probabilities are very homogeneous (a sketch of the first variation is given below) Fancy Notes Note that pomegranate is flexible. We aren't limited to mere normal distributions. Let's use the Poisson distribution now instead.
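As a starting point for Exercise 1, here is a minimal sketch of the first variation (emission distributions that are very much alike); the means 0 and 0.5 are arbitrary illustrative choices, and `trans_mat`, `starts` and `data` are the objects defined above:

```python
# Sketch for Exercise 1: nearly identical emission distributions.
# With so little separation between the states, the posterior is driven
# mostly by the transition structure rather than by the observations.
similar_dists = [NormalDistribution(0, 2), NormalDistribution(0.5, 2)]
similar_model = HiddenMarkovModel.from_matrix(trans_mat, similar_dists, starts)
similar_model.predict_proba(data)  # compare against the original model's output
```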
###Code
from pomegranate import PoissonDistribution
###Output
_____no_output_____
###Markdown
Let's come up with another use case to demonstrate this functionality of pomegranate. Granted, it'll be silly, but it'll serve the purpose of explaining. GDD FoodtruckLet's pretend that we have a foodtruck that can be in two states: `grill-on` and `grill-off`. If the grill is on, more people will drop by to have a look at what is cooking. If the grill is off, fewer people will drop by. The problem is that people might drop by due to chance. We know that if the grill is on, we tend to see 3 people drop by every minute while if the grill is off we tend to see 1 person every minute. We're using a deep learning algorithm that detects the number of people, so the sensor might be biased a bit too. Can we use hidden Markov models to help us? Application The example is silly, but it helps explore different distributions in the API.
###Code
dists = [PoissonDistribution(1), PoissonDistribution(4)]
trans_mat = np.array([[0.9, 0.1],
[0.2, 0.8]])
starts = np.array([0.5, 0.5])
model = HiddenMarkovModel.from_matrix(trans_mat, dists, starts)
###Output
_____no_output_____
###Markdown
Let's generate some data to see how the model might respond.
###Code
data = 3*np.sin(np.linspace(0, np.pi*2, 500))
data += np.random.normal(0, 0.3, data.shape)
data += np.abs(data.min())
data = np.round(data)
data[200:215] = 2.5
plt.plot(data);
###Output
_____no_output_____
###Markdown
This is the (potentially noisy) signal that comes in. Note that we only supply positive values since the Poisson distribution doesn't handle negative numbers. So given the noisy signal of the number of people, can we estimate when the grill was turned on?
###Code
probs = model.predict_proba(data)
plt.plot(probs[:, 1])
###Output
_____no_output_____
###Markdown
The hidden Markov model seems to be able to filter out some of the noise. You can also see that it starts doubting a little bit around index 200, but it doesn't see enough evidence to consider the state to have changed. In practice this smoothing really depends on the transition matrix, which you may want to consider a hyperparameter during training; a quick sketch of this sensitivity follows below, just before we load the weather data. Final Example: Weather Let's come up with something that is a little bit less arbitrary. Let's try to predict the season with just the temperature data. We'll import the data first.
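As mentioned above, here is a minimal sketch of the transition-matrix sensitivity; the 0.6/0.4 values are arbitrary illustrative choices, and `dists`, `starts` and `data` are the Poisson objects and counts defined above:

```python
# Sketch: a much less 'sticky' transition matrix gives the model little reason to smooth,
# so the state probabilities should track the raw counts far more closely.
jumpy_trans_mat = np.array([[0.6, 0.4],
                            [0.4, 0.6]])
jumpy_model = HiddenMarkovModel.from_matrix(jumpy_trans_mat, dists, starts)
plt.plot(jumpy_model.predict_proba(data)[:, 1]);
```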
###Code
import pandas as pd
df_weather = pd.read_csv('data/clean_weather.csv')
df_weather = df_weather.iloc[365*3:365*10]
def assign_season(dataf):
return (dataf
.assign(winter = lambda d: d['month'].isin([1,2,3]))
.assign(spring = lambda d: d['month'].isin([4,5,6]))
.assign(summer = lambda d: d['month'].isin([7,8,9]))
.assign(fall = lambda d: d['month'].isin([10,11,12]))
.assign(truth = lambda d: np.round((d['month']+1)/3) % 4))
def summarise(dataf):
return {'nrow': dataf.shape[0]}
###Output
_____no_output_____
###Markdown
We've taken a subset of the temperatures in Eindhoven. You can see the temperatures below.
###Code
plt.plot(df_weather['max_temp']);
df_weather = df_weather.pipe(assign_season)
###Output
_____no_output_____
###Markdown
After assigning the proper seasons we can compute some summary statistics per season.
###Code
(df_weather
.groupby(['truth', 'winter', 'spring', 'summer', 'fall'])
.agg({'max_temp': ['mean', 'std', 'count']})
.reset_index())
###Output
_____no_output_____
###Markdown
Exercise Use this information to train a hidden Markov model in pomegranate. AnswerFeel free to play with the starter template below.
###Code
dists = [NormalDistribution(250, 50),
NormalDistribution(250, 50),
NormalDistribution(250, 50),
NormalDistribution(250, 50)]
trans_mat = np.array([[0.99, 0.01, 0.0, 0.0],
[0.0, 0.99, 0.01, 0.0],
[0.0, 0.0, 0.99, 0.01],
[0.001, 0.0, 0.0, 0.99]])
starts = np.array([0.25, 0.25, 0.25, 0.25])
model = HiddenMarkovModel.from_matrix(trans_mat, dists, starts)
###Output
_____no_output_____
###Markdown
With this model defined, lets check out how it works.
###Code
temperatures = df_weather['max_temp'][:365*2]
truth = df_weather['truth'][:365*2]
plt.plot(temperatures)
probs = model.predict_proba(temperatures)
for i in range(probs.shape[1]):
plt.plot(probs[:, i])
###Output
_____no_output_____
###Markdown
So what's so special about this? It still looks like the performance isn't that great. Well, imagine the model's performance without the time component.
###Code
def best_naive(temp, verbose=False):
if verbose:
print(temp, [_.log_probability(temp) for _ in dists])
return np.argmax([_.log_probability(temp) for _ in dists])
naive_pred = [best_naive(t) for t in temperatures]
plt.plot(naive_pred)
###Output
_____no_output_____
###Markdown
Indeed, when we compare the accuracies, the hidden Markov model comes out ahead.
###Code
np.mean(np.round(naive_pred) == truth), np.mean(model.predict(temperatures) == truth)
###Output
_____no_output_____ |
surveys/2015-12-notebook-ux/analysis/prep/3b_integration_themes.ipynb | ###Markdown
Response Themes for "What tools and applications, if any, would you like to see more tightly integrated with Jupyter Notebook?"* Goal: Extract theme keywords from `integrations` responses.* Data: Output from 2_clean_survey.ipynb notebook (`survey_short_columns.csv`)* Strawman process from [1_ux_survey_review.ipynb](1_ux_survey_review.ipynb):> Moving forward, here's a semi-automatic procedure we can follow for identifying themes across questions:> 1. Take a random sample of question responses> 2. Write down common theme keywords> 3. Search back through the responses using the theme keywords> 4. Expand the set of keywords with other words seen in the search results> 5. Repeat for all themes and questions> Later, we can use a fully automated topic modeling approach to validate our manually generated themes.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Make sure the samples come up the same for anyone that re-runs this.
###Code
rs = np.random.RandomState(123)
pd.set_option('max_colwidth', 1000)
df = pd.read_csv('survey_short_columns.csv')
def show(series):
'''Make random samples easier to read.'''
for i, value in enumerate(series):
print('{}) {}'.format(i, value), end='\n\n')
###Output
_____no_output_____
###Markdown
Concat all three integration response columns into one. We don't care about order at the moment.
###Code
responses = pd.concat([df.integrations_1, df.integrations_2, df.integrations_3])
assert len(responses) == len(df) * 3
###Output
_____no_output_____
###Markdown
For later ref, to keep the notebook code generic for other questions.
###Code
column = 'integrations'
responses.isnull().value_counts()
responses = responses.dropna()
###Output
_____no_output_____
###Markdown
Initial SamplesI ran the sampling code below 6 times and manually built up the initial set of keywords seen commonly across them. I formed groups of conceptually related keywords. Then I tried to assign a simple label to each group.
###Code
show(responses.sample(20, random_state=rs))
themes = {
'version': ['git', 'version control'],
'language' : ['fortran', 'julia', 'scala', 'latex', 'julia', 'cross language'],
'feature' : ['interactive', 'dashboards', 'extensions', 'web app', 'animation', 'image', 'data source', '3d',
'timeline', 'gantt', 'repl', 'editor', 'profile', 'console', 'collab', 'debug',
'terminal', 'management', 'file browser', 'diagram', 'wiki',
'test', 'offline', 'error', 'blog', 'database', 'script'],
'app_lib_service' : ['d3', 'web app', 'shiny', 'animation',
'scikit-learn', 'matplotlib', 'spark', 'pandas',
'flake', 'pep8', 'pdf', 'numpy', 'sphinx', 'seaborn', 'plotly', 'nosebook', 'vtk', 'vispy',
'graphviz', 'netlogo', 'vim', 'emacs', 'sublime', 'biquery', 'pycharm'],
'hosting' : ['cloud', 'saas', 'deploy', 'host', 'docker', 'google', 'aws'],
'other': ['hfg']
}
###Output
_____no_output_____
###Markdown
Coverage ImprovementI next ran the code below to associate the theme labels with the responses. I then iterated on running the code below to find responses without labels. I expanded the list of keywords and themes in order to improve coverage.
###Code
import re
def keywords_or(text, keywords):
for keyword in keywords:
if re.search('(^|\W+){}'.format(keyword), text, re.IGNORECASE):
return True
return False
def tag_themes(responses, themes):
tagged = responses.to_frame()
tagged['themes'] = ''
for theme, keywords in themes.items():
results = responses.map(lambda text: keywords_or(text, keywords))
tagged.loc[results, 'themes'] += theme + ','
print(theme, results.sum())
return tagged
tagged = tag_themes(responses, themes)
tagged.themes.str.count(',').value_counts()
tagged[tagged.themes.str.len() == 0].sample(20, random_state=rs)
themes = {
'version': ['git(\W|$)', 'version(ing)?\Wcontrol', 'd?vcs', 'mercurial', 'hg', 'history'],
'language' : ['fortran', 'julia', 'scala', 'latex', 'i?julia', 'cross language', 'sql', 'R(\W|$)', 'C(\W|$)',
'java', 'sas', 'node', 'jdk', 'polyglot', 'bash', 'python(3|2)?', 'perl', 'awk', 'js',
'clojure', 'cling', 'ruby', 'rust', 'php', 'haskell', 'lua', 'golang'],
'feature' : ['interactiv(e|ity)', 'dashboard', 'extensions', 'web app', 'animation', 'image', 'data source', '3d',
'timeline', 'gantt', 'repl', 'editor', 'profil(e|ing)', 'console', 'collab', 'debug',
'terminal', 'management', 'file browse', 'file manage', 'diagram', 'wiki',
'test', 'offline', 'error', 'blog', 'database', 'script', 'slides', 'env(ironment)? vars',
'bibliography', 'command\W?line', 'memory', 'refactor', 'spreadsheet', 'completion', 'comment',
'co-author', 'customiz', 'orchestrat', 'widgets', 'them(e|ing)', 'warning', 'lint', 'outline', 'fold',
'video', 'progress', 'presentation', 'slide', 'gis', 'spell\W?check', 'native', 'notification',
'citation', 'keyboard', 'variable', 'physics', 'documentation', 'schedul', 'calendar', 'api(\W|$)',
'xml', 'backup', 'writing', 'languages', 'views', 'navigation', 'file system', 'share',
'exploration', 'grid', 'install', 'plugin', 'search', 'visualization', 'auto ?complet(e|ion)', 'grading',
'table of content', 'load balanc', 'clipboard', 'imports', 'caching', 'math', 'footnote', 'modeling'
'preview', 'code editing', 'cluster', 'visuali(s|z)ation', 'index', 'pagebreaks', 'mobile', 'skins',
'styles', 'reports', 'warehouse', 'proprietary', 'state', 'full screen',
'app creation', 'graphs', 'chart(s|ing)', 'plot(ting)', 'large data', 'web ?hook', 'deep learn',
'shortcut', 'diffing', 'production', 'geology', 'diff/merge', 'sandbox', 'document edit',
'graphical', 'collaps(e|ing)', 'modules', 'hide cell', 'without code', 'hidden cells', 'remote kernel',
'object inspector', 'converter', 'instruments', 'cprofile', 'figures', 'ides?(\W|$)', 'web app'],
'app_lib_service' : ['d3', 'shiny', 'animation',
'scikit', 'matplotlib', 'spark', 'pandas',
'flake', 'pep8', 'pdf', 'numpy', 'sphinx', 'seaborn', 'plot(\.)?ly', 'nosebook', 'vtk', 'vispy',
'graphviz', 'netlogo', 'vim', 'emacs', 'sublime', 'biquery', 'pycharm', 'pelican', 'wordpress',
'pandoc', 'rstudio', 'gpilab', 'nbconvert', '(ana)?conda', 'htop', 'zsh', 'beaker', 'evernote',
'rodeo', 'spyder', 'posgres', 'tableau', 'idea', 'bokeh', 'three.js', 'pyspark', 'jedi',
'nose', 'bibtex', 'excel', 'graphvis', 'atom', 'electron', 'tensorflow', 'sage', 'pygdb',
'gui', 'mayavi', 'rvm', 'finder', 'npm', 'django', 'octave', 'geojson', 'qt', 'hive', 'impala',
'docrepr', 'pip', 'pdb', 'nbgrader', 'scrapy', 'nbdiff', 'zeppelin', 'gmail', 'pyflakes',
'jupyter\W?hub', 'visual studio', 'rise(\W|$)', 'xcode', 'eslint', 'hdf', 'hadoop', 'binder',
'fenics', 'alteryx', 'venv', 'mathjax', 'tern(\W|$)', 'dill', 'moodle', 'gvim', 'sparql',
'atlassian', 'doit', 'matlab', 'swift', 'xplot', 'reveal', 'virtualenv', 'mp4',
'phantomx', 'thebe', 'tmpnb', 'line_profiler', 'netbeans', 'webgl', 'travis', 'synapse.org',
'python\W?anywhere', 'sage', 'gephi', 'sumatra', 'cdh', 'yt', 'ffmpeg', 'scipy', 'trinket',
'ipython', 'markdown', 'stack overflow', 'ros(\W|$)', 'mysql', 'bbedit', 'neovim', 'dropbox',
'nbmerge', 'ggvis', 'pyside', 'eclipse', 'torch', 'slack', 'pycuda', 'theano', 'slurm',
'artview', 'nbviewer', 'flask', 'pylint', 'stata', 'expect', 'ipyparall', 'cookiecutter',
'intellij', 'stash', 'cantor', 'wakari', 'gnuplot', 'tex(\W|$)', 'live_reveal', 'html',
'coursera', 'opencv', 'selenium', 'hfg', 'hue', 'unittest', 'org-mode', 'github'],
'platform' : ['cloud', 'saas', 'deploy', 'host', 'docker', 'google', 'aws', 'ios', 'windows', 'gnome', 'os x',
'openbsd']
}
###Output
_____no_output_____
###Markdown
Precision CheckI then studied a sample of responses for each theme to see if there were major inaccuracies in their application (e.g., string matches that are too fuzzy).
###Code
tagged = tag_themes(responses, themes)
tagged.themes.str.count(',').value_counts()
from IPython.display import display, clear_output
###Output
_____no_output_____
###Markdown
I've commented out this code so that the notebook re-runs top to bottom without getting stuck at this interactive prompt. Uncomment it if you want to poke through samples of the tagged responses.
###Code
# for key in themes:
# clear_output()
# display(tagged[tagged.themes.str.contains(key)].sample(10))
# if input('Showing `{}`. Type Enter to continue, "q" to stop.'.format(key)) == 'q':
# break
###Output
_____no_output_____
###Markdown
Keyword Frequencies
###Code
import matplotlib
import seaborn
counts = {}
for theme, keywords in themes.items():
for keyword in keywords:
hits = responses.map(lambda text: keywords_or(text, [keyword]))
counts[keyword] = hits.sum()
hist = pd.Series(counts).sort_values()
ax = hist[-30:].plot.barh(figsize=(8, 8))
_ = ax.set_xlabel('Mentions')
###Output
_____no_output_____
###Markdown
PersistI save off the themes and keywords to a DataFrame with the same index as the original so that the entries can be tagged.
###Code
themes_df = tagged.themes.to_frame()
themes_df = themes_df.rename(columns={'themes' : column+'_themes'})
themes_df[column+'_keywords'] = ''
for theme, keywords in themes.items():
for keyword in keywords:
results = responses.map(lambda text: keywords_or(text, [keyword]))
themes_df.loc[results, column+'_keywords'] += keyword + ','
themes_df[column+'_themes'] = themes_df[column+'_themes'].str.rstrip(',')
themes_df[column+'_keywords'] = themes_df[column+'_keywords'].str.rstrip(',')
###Output
_____no_output_____
###Markdown
Up above, I merged the three response fields for the question into one common pool, which means we can have duplicate index values in the themes DataFrame. We need to squash these down and remove duplicates.
###Code
def union(group_df):
'''Gets the set union of themes and keywords for a given DataFrame.'''
themes = group_df[column+'_themes'].str.cat(sep=',')
themes = list(set(themes.split(',')))
themes = ','.join(theme for theme in themes if theme)
keywords = group_df[column+'_keywords'].str.cat(sep=',')
keywords = list(set(keywords.split(',')))
keywords = ','.join(keyword for keyword in keywords if keyword)
return pd.Series([themes, keywords], index=[column+'_themes', column+'_keywords'])
###Output
_____no_output_____
###Markdown
We group by the index and union the themes and keywords.
###Code
themes_df = themes_df.groupby(themes_df.index).apply(union)
themes_df.head(5)
###Output
_____no_output_____
###Markdown
The themes DataFrame should have as many rows as there are non-null responses in the original DataFrame.
###Code
assert len(themes_df) == len(df[[column+'_1', column+'_2', column+'_3']].dropna(how='all'))
themes_df.to_csv(column + '_themes.csv', sep=';')
###Output
_____no_output_____ |
notebooks/18b-compr_modresults_run03.ipynb | ###Markdown
model results
###Code
scores = {}
for model, model_dir in model_dirs.items():
res = pd.read_csv(os.path.join(model_dir, 'model_results.csv'))
scores.update({model: res.loc[res.model=='all','score_test'].values})
scores = pd.DataFrame.from_dict(scores)
scores.median(axis=0)
xlab_dict = {'elasticnet': 'elastic net',
'lm': 'linear regression',
'rf': 'random forest',
'rf_boruta': 'random forest\niter select+boruta'}
df = pd.melt(scores)
ax = sns.boxplot(data=df, x='variable', y='value', color='steelblue')
ax.set(ylabel='Score (on test)', xlabel='Models', xticklabels=[xlab_dict[n] for n in model_dirs.keys()],
title='Model using all features')
###Output
_____no_output_____
###Markdown
train vs test for linear models
###Code
res = pd.read_csv(os.path.join(model_dirs['elasticnet'], 'model_results.csv'))
res.loc[res.model=='all',['score_train', 'score_test']].describe()
df = pd.melt(res.loc[res.model=='all',['score_train', 'score_test']])
ax = sns.boxplot(data=df, x='variable',y='value')
ax.set_yscale('symlog')
ax.set(ylabel='Score', xlabel='',xticklabels=['Train','Test'], title='Elastic net',
ylim=[-1,1.2], yticks=[-1,-0.5,0,0.5,1])
res = pd.read_csv(os.path.join(model_dirs['lm'], 'model_results.csv'))
res.loc[res.model=='all',['score_train', 'score_test']].describe()
df = pd.melt(res.loc[res.model=='all',['score_train', 'score_test']])
ax = sns.boxplot(data=df, x='variable',y='value')
ax.set_yscale('symlog')
ax.set(ylabel='Score', xlabel='',xticklabels=['Train','Test'], title='Linear regression',
ylim=[-1,1.2], yticks=[-1,-0.5,0,0.5,1])
###Output
_____no_output_____
###Markdown
Although the linear models look OK, they overfitted quite substantially. Next we'll look at the results after feature selection, so as to minimize overfitting in these linear models (the random forests don't overfit, so we'll look directly at their test scores). anlyz aggRes stats_score aggregates the scores, without excluding negative scores. feat stats_score aggregates the scores, and excludes negative scores
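To illustrate the difference between the two aggregation conventions on a hypothetical scores series (not data from this run):

```python
import pandas as pd
s = pd.Series([-0.2, 0.1, 0.3, 0.5])  # hypothetical scores
print(s.median())                      # aggregation including negative scores
print(s[s > 0].median())               # aggregation excluding negative scores
```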
###Code
scores_fl = {}
scores_rd = {}
scores_rd10 = {}
for model, model_dir in model_dirs.items():
res = pd.read_csv(os.path.join(model_dir, 'anlyz/stats_score_aggRes/stats_score.csv'), index_col=0)
scores_fl.update({model: res.loc[res.index=='50%', 'full'].values})
scores_rd.update({model: res.loc[res.index=='50%', 'reduced'].values})
scores_rd10.update({model: res.loc[res.index=='50%', 'reduced10feat'].values})
scores_fl = pd.DataFrame.from_dict(scores_fl)
scores_rd = pd.DataFrame.from_dict(scores_rd)
scores_rd10 = pd.DataFrame.from_dict(scores_rd10)
df = pd.concat([scores_fl,
scores_rd,
scores_rd10], axis=0)
df = df.T
df.columns = ['full', 'reduced', 'reduced_top10feat']
df
xlab_dict = {'elasticnet': 'elastic net',
'lm': 'linear regression',
'rf': 'random forest',
'rf_boruta': 'random forest\niter select+boruta'}
ax = sns.barplot(scores_rd10.columns, scores_rd10.values[0], color='steelblue')
ax.set(ylabel='Score (median)', xlabel='Model',xticklabels=[xlab_dict[n] for n in model_dirs.keys()],
title='Reduced model with top 10 features')
###Output
_____no_output_____
###Markdown
anlyz_filtered
###Code
scores_fl = {}
scores_rd = {}
scores_rd10 = {}
for model, model_dir in model_dirs.items():
res = pd.read_csv(os.path.join(model_dir, 'anlyz_filtered/stats_score_aggRes/stats_score.csv'), index_col=0)
scores_fl.update({model: res.loc[res.index=='mean', 'full'].values})
scores_rd.update({model: res.loc[res.index=='mean', 'reduced'].values})
scores_rd10.update({model: res.loc[res.index=='mean', 'reduced10feat'].values})
scores_fl = pd.DataFrame.from_dict(scores_fl)
scores_rd = pd.DataFrame.from_dict(scores_rd)
scores_rd10 = pd.DataFrame.from_dict(scores_rd10)
df = pd.concat([scores_fl.mean(),
scores_rd.mean(),
scores_rd10.mean()], axis=1)
df.columns = ['full', 'reduced', 'reduced_top10feat']
df
###Output
_____no_output_____ |
Homework/Day_27_HW.ipynb | ###Markdown
Assignment: Today we learned about 2 distributions: the Discrete Uniform Distribution and the Bernoulli Distribution. Let's use the problem in this assignment to review today's content! If we flip a coin 100 times, how large is the probability of getting heads exactly 50 times? (Hint: first think about which distribution this is, then compute it using Python.)
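For reference, this is a binomial probability, and assuming a fair coin it can also be evaluated directly from the probability mass function: $$ P(X = 50) = \binom{100}{50} (0.5)^{50} (0.5)^{50} \approx 0.0796 $$ i.e. roughly 7.96%, which the `scipy` calculation below should reproduce.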
###Code
# library
import numpy as np
import pandas as pd
from scipy import stats
import math
import statistics
import matplotlib.pyplot as plt
# Binomial Distribution
p = 0.5 # assume the probability that the coin lands heads is 50%
n = 100 # repeat the Bernoulli trial 100 times
r = 50 # we want the probability of exactly 50 heads
# probability mass function of the binomial distribution
probs = stats.binom.pmf(r, n, p)
print( '{:.2%}'.format( probs ) )
p = 0.5 # assume the probability that the coin lands heads is 50%
n = 100 # repeat the Bernoulli trial 100 times
r = np.arange(0,101) # all possible numbers of heads
plt.figure( figsize=(10,5) )
plt.bar( r, stats.binom.pmf(r, n, p) )
plt.ylabel( 'P(X=x)' )
plt.xlabel( 'x' )
plt.title( 'Binomial(n=100,p=0.5)' )
plt.show( )
###Output
_____no_output_____ |
assignments/0126--ENV_pre-class-assignment.ipynb | ###Markdown
[Link to this document's Jupyter Notebook](./0126--ENV_pre-class-assignment.ipynb) In order to successfully complete this assignment you must do the required reading, watch the provided videos and complete all instructions. The embedded survey form must be entirely filled out and submitted on or before **11:59pm on Tuesday January 26**. Students must come to class the next day prepared to discuss the material covered in this assignment. --- Pre-Class Assignment: Navigating Shared Clusters (HPC) Goals for today's pre-class assignment 1. [Create XSEDE Account](Create-XSEDE-Account)2. [Finding software on the HPCC](Finding-software-on-the-HPCC)3. [Assignment wrap up](Assignment-wrap-up) --- 1. Create XSEDE AccountThe National Science Foundation invests quite a bit of money into providing computing resources to researchers. The Extreme Science and Engineering Discovery Environment (XSEDE) is a single virtual system that scientists can use to interactively share computing resources and expertise. Here is a short video that describes XSEDE.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo("PBUIBJHZzD4",width=640,height=360)
###Output
_____no_output_____
###Markdown
As part of this course we have obtained access to compute resources on these National Systems. In order to use these resources you will need an XSEDE portal account. Please sign up for an account here:https://portal.xsede.org//guestProvide your portal ID to the instructor using the Google form below. &9989; **QUESTION:** What is your XSEDE Portal Account UserID? Put your Portal ID Here. --- 2. Finding software on the HPCCThe following video describes the ```PATH``` environment variable and shows you how it can be changed to add software installed in a new location.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo("2OXvoXejZcw",width=640,height=360)
###Output
_____no_output_____
###Markdown
Commands used in the above video``` ls clear which python cd env echo $PATH echo $PATH | sed -r "s/:/\\n/g" export PATH=~/anaconda3/bin:$PATH ssh dev-intel16 nano ~/.bashrc``` &9989; **QUESTION:** What is the ```which``` command used for? Put your answer to the above question here. &9989; **QUESTION:** The PATH environment variable is a set of system folders separated by a colon (:). What command would you use to add the ```/mnt/research/mygroup/bin``` folder to the end of your path? Put your answer to the above question here. The following video shows some basics on how to use the ```module``` command that is available on the HPCC.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo("lXYpQeU3j-0",width=640,height=360)
###Output
_____no_output_____
###Markdown
Commands used in the above video``` ssh dev-intel16 clear who | wc -l module list module spider MATLAB module unload MATLAB module load MATLAB/2018b which matlab module swap MATLAB/2018b MATLAB/2018a module avail module unload gnu module load intel module show intel module purge``` &9989; **QUESTION:** Use the ```module spider``` command to search modules. What versions of libpng are available on the HPCC? (**Note**: if the modules have changed since the last time you used the ```module spider``` command it may need to rebuild its database, which can take a few seconds). Put the answer to the above question here. --- 3. Assignment wrap upPlease fill out the form that appears when you run the code below. **You must completely fill this out in order to receive credits for the assignment!**[Direct Link to Google Form](https://cmse.msu.edu/cmse401-pc-survey)If you have trouble with the embedded form, please make sure you log on with your MSU google account at [googleapps.msu.edu](https://googleapps.msu.edu) and then click on the direct link above. &9989; **Assignment-Specific QUESTION:** What is your XSEDE Portal Account UserID? Put your answer to the above question here &9989; **QUESTION:** Summarize what you did in this assignment. Put your answer to the above question here &9989; **QUESTION:** What questions do you have, if any, about any of the topics discussed in this assignment after working through the jupyter notebook? Put your answer to the above question here &9989; **QUESTION:** How well do you feel this assignment helped you to achieve a better understanding of the above mentioned topic(s)? Put your answer to the above question here &9989; **QUESTION:** What was the **most** challenging part of this assignment for you? Put your answer to the above question here &9989; **QUESTION:** What was the **least** challenging part of this assignment for you? Put your answer to the above question here &9989; **QUESTION:** What kind of additional questions or support, if any, do you feel you need to have a better understanding of the content in this assignment? Put your answer to the above question here &9989; **QUESTION:** Do you have any further questions or comments about this material, or anything else that's going on in class? Put your answer to the above question here &9989; **QUESTION:** Approximately how long did this pre-class assignment take? Put your answer to the above question here
###Code
from IPython.display import HTML
HTML(
"""
<iframe
src="https://cmse.msu.edu/cmse401-pc-survey"
width="100%"
height="500px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
###Output
_____no_output_____ |
finmarketpy_examples/finmarketpy_notebooks/backtest_example.ipynb | ###Markdown
Backtesting a simple trend following strategySaeed Amen - [email protected], we demonstrate how to develop a trading strategy in finmarketpy (https://www.github.com/cuemacro/finmarketpy). In this example, we show how to do a backtest of a simple trend following strategy using the `Backtest` class. The trading strategy involves buying spot when it is above the 200D simple moving average and selling spot when it is below the 200D simple moving average.First, let's do all the imports.
###Code
# for backtest and loading data
from finmarketpy.backtest import BacktestRequest, Backtest
from findatapy.market import Market, MarketDataRequest, MarketDataGenerator
from findatapy.util.fxconv import FXConv
# for logging
from findatapy.util.loggermanager import LoggerManager
# for signal generation
from finmarketpy.economics import TechIndicator, TechParams
# for plotting
from chartpy import Chart, Style
###Output
_____no_output_____
###Markdown
Create a logger.
###Code
# housekeeping
logger = LoggerManager().getLogger(__name__)
import datetime
###Output
_____no_output_____
###Markdown
Let's load up market data. Note you will need to type in your Quandl API key below (or set it as an environment variable before running this Jupyter notebook). You can get a free API key from Quandl.com, once you sign up for a free account.
###Code
try:
import os
QUANDL_API_KEY = os.environ['QUANDL_API_KEY']
except:
QUANDL_API_KEY = 'TYPE_YOUR_KEY_HERE'
# pick USD crosses in G10 FX
# note: we are calculating returns from spot (it is much better to use to total return
# indices for FX, which include carry)
logger.info("Loading asset data...")
tickers = ['EURUSD', 'USDJPY', 'GBPUSD', 'AUDUSD', 'USDCAD',
'NZDUSD', 'USDCHF', 'USDNOK', 'USDSEK']
vendor_tickers = ['FRED/DEXUSEU', 'FRED/DEXJPUS', 'FRED/DEXUSUK', 'FRED/DEXUSAL', 'FRED/DEXCAUS',
'FRED/DEXUSNZ', 'FRED/DEXSZUS', 'FRED/DEXNOUS', 'FRED/DEXSDUS']
md_request = MarketDataRequest(
start_date="01 Jan 1989", # start date
finish_date=datetime.date.today(), # finish date
freq='daily', # daily data
data_source='quandl', # use Quandl as data source
tickers=tickers, # ticker (findatapy)
fields=['close'], # which fields to download
vendor_tickers=vendor_tickers, # ticker (Quandl)
vendor_fields=['close'], # which Bloomberg fields to download
cache_algo='internet_load_return',
quandl_api_key=QUANDL_API_KEY) # how to return data
market = Market(market_data_generator=MarketDataGenerator())
asset_df = market.fetch_market(md_request)
spot_df = asset_df
###Output
2020-11-12 14:10:06,928 - __main__ - INFO - Loading asset data...
2020-11-12 14:10:07,875 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:07,875 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:07,880 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:07,880 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:10,429 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['EURUSD.close']
2020-11-12 14:10:10,838 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:11,443 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['AUDUSD.close']
2020-11-12 14:10:11,558 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:12,412 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['USDJPY.close']
2020-11-12 14:10:12,418 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['GBPUSD.close']
2020-11-12 14:10:12,420 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:12,421 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:17,725 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['USDCHF.close']
2020-11-12 14:10:17,785 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['USDCAD.close']
2020-11-12 14:10:18,042 - findatapy.market.datavendorweb - INFO - Request Quandl data
2020-11-12 14:10:18,899 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['NZDUSD.close']
2020-11-12 14:10:19,286 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['USDNOK.close']
2020-11-12 14:10:20,604 - findatapy.market.datavendorweb - INFO - Completed request from Quandl for ['USDSEK.close']
2020-11-12 14:10:20,749 - findatapy.market.ioengine - INFO - Pushed MarketDataRequest_577__abstract_curve_key-None__category-None__category_key-backtest_default-cat_quandl_daily_NYC__cut-NYC__data_source-quandl__environment-backtest__expiry_date-NaT__fields-close__finish_date-2020-11-12 00:00:00__freq-daily__freq_mult-1__gran_freq-None__push_to_cache-True__resample-None__resample_how-last__start_date-1989-01-01 00:00:00__tickers-EURUSD_USDJPY_GBPUSD_AUDUSD_USDCAD_NZDUSD_USDCHF_USDNOK_USDSEK__trade_side-trade__vendor_fields-close__vendor_tickers-FRED_DEXUSEU_FRED_DEXJPUS_FRED_DEXUSUK_FRED_DEXUSAL_FRED_DEXCAUS_FRED_DEXUSNZ_FRED_DEXSZUS_FRED_DEXNOUS_FRED_DEXSDUS to Redis
###Markdown
Let's define all the parameters for the backtest, start/finish dates, technical indicator we'll use etc.
###Code
backtest = Backtest()
br = BacktestRequest()
fxconv = FXConv()
# get all asset data
br.start_date = "02 Jan 1990"
br.finish_date = datetime.datetime.utcnow()
br.spot_tc_bp = 0 # 2.5 bps bid/ask spread
br.ann_factor = 252
# have vol target for each signal
br.signal_vol_adjust = True
br.signal_vol_target = 0.05
br.signal_vol_max_leverage = 3
br.signal_vol_periods = 60
br.signal_vol_obs_in_year = 252
br.signal_vol_rebalance_freq = 'BM'
br.signal_vol_resample_freq = None
tech_params = TechParams();
tech_params.sma_period = 200;
indicator = 'SMA'
###Output
_____no_output_____
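The `signal_vol_*` settings above configure volatility targeting of the signal. As a rough sketch of the general idea only (finmarketpy's internal implementation may differ in detail), the leverage applied to a signal is typically something like:

```python
# Illustrative sketch of signal volatility targeting -- NOT finmarketpy's actual code.
import numpy as np

def vol_target_leverage(returns, target_vol=0.05, periods=60, obs_in_year=252, max_leverage=3):
    # annualised rolling realised volatility of a pandas Series of returns
    realised_vol = returns.rolling(periods).std() * np.sqrt(obs_in_year)
    # scale exposure towards the vol target, capped at the maximum leverage
    return (target_vol / realised_vol).clip(upper=max_leverage)
```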
###Markdown
Calculate the technical indicator and the trading signal.
###Code
logger.info("Running backtest...")
# use technical indicator to create signals
# (we could obviously create whatever function we wanted for generating the signal dataframe)
tech_ind = TechIndicator()
tech_ind.create_tech_ind(spot_df, indicator, tech_params);
signal_df = tech_ind.get_signal()
###Output
2020-11-12 16:19:03,860 - __main__ - INFO - Running backtest...
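As a rough cross-check of the rule described in the introduction, the same trend signal can be sketched in plain pandas (this is not finmarketpy's implementation; `signal_df` above additionally goes through vol adjustment):

```python
# Illustrative sketch only: +1 when spot is above its 200D SMA, -1 when below.
import numpy as np

sma_naive = spot_df.rolling(tech_params.sma_period).mean()
naive_signal_df = np.sign(spot_df - sma_naive).shift(1)  # lag a day to avoid look-ahead
# naive_signal_df.tail()  # should broadly agree in sign with signal_df
```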
###Markdown
Run the backtest using the market data, signal etc.
###Code
contract_value_df = None
# use the same data for generating signals
backtest.calculate_trading_PnL(br, asset_df, signal_df, contract_value_df, run_in_parallel=False)
port = backtest.portfolio_cum()
port.columns = [indicator + ' = ' + str(tech_params.sma_period) + ' ' + str(backtest.portfolio_pnl_desc()[0])]
signals = backtest.portfolio_signal()
# print the last positions (we could also save as CSV etc.)
print(signals.tail(1))
###Output
2020-11-12 16:19:04,941 - finmarketpy.backtest.backtestengine - INFO - Calculating trading P&L...
2020-11-12 16:19:04,982 - finmarketpy.backtest.backtestengine - INFO - Cumulative index calculations
2020-11-12 16:19:04,993 - finmarketpy.backtest.backtestengine - INFO - Completed cumulative index calculations
EURUSD.close SMA Signal USDJPY.close SMA Signal \
Date
2020-11-06 0.08321 -0.102708
GBPUSD.close SMA Signal AUDUSD.close SMA Signal \
Date
2020-11-06 0.062994 0.060006
USDCAD.close SMA Signal NZDUSD.close SMA Signal \
Date
2020-11-06 -0.093499 0.057194
USDCHF.close SMA Signal USDNOK.close SMA Signal \
Date
2020-11-06 -0.085228 -0.042613
USDSEK.close SMA Signal
Date
2020-11-06 -0.063243
###Markdown
Finally display the portfolio cumulative index.
###Code
style = Style()
style.title = "FX trend strategy"
style.source = 'Quandl'
style.scale_factor = 1
style.file_output = 'fx-trend-example.png'
Chart().plot(port, style=style)
###Output
_____no_output_____
samples/notebooks/csharp/Docs/Math-and-LaTeX.ipynb | ###Markdown
[this doc on github](https://github.com/dotnet/interactive/tree/main/samples/notebooks/csharp/Docs) Math and LaTeX Math content and LaTeX are supported
###Code
(LaTeXString)@"\begin{align}
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}"
(MathString)@"H← 60 + \frac{30(B−R)}{Vmax−Vmin} , if Vmax = G"
###Output
_____no_output_____
###Markdown
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/csharp/Docs) Math and LaTeX Math content and LaTeX are supported
###Code
(LaTeXString)@"\begin{align}
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}"
(MathString)@"H← 60 + \frac{30(B−R)}{Vmax−Vmin} , if Vmax = G"
###Output
_____no_output_____
###Markdown
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/csharp/Docs) Math and LaTeX Math content and LaTeX are supported
###Code
(LaTeXString)@"\begin{align}
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}"
(MathString)@"H← 60 + \frac{30(B−R)}{Vmax−Vmin} , if Vmax = G"
###Output
_____no_output_____
###Markdown
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/csharp/Docs) Math and LaTeX Math content and LaTeX are supported
###Code
(LaTeXString)@"\begin{align}
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}"
(MathString)@"H← 60 + \frac{30(B−R)}{Vmax−Vmin} , if Vmax = G"
###Output
_____no_output_____
###Markdown
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/csharp/Docs) Math and LaTeX Math content and LaTeX are supported
###Code
(LaTeXString)@"\begin{align}
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}"
(MathString)@"H← 60 + \frac{30(B−R)}{Vmax−Vmin} , if Vmax = G"
###Output
_____no_output_____ |
pytorch tutorial for bot-trading algorithms using reinforcement learing.ipynb | ###Markdown
Hello World for PytorchThe tensors in pytorch are creatd by `torch.tensor([...], dtype = ...)`. The operations of tensors are same as the ones in numpy.
###Code
import torch
########### broadcasting example ###########
a = torch.rand(5, 2)
b = torch.rand(1, 2)
print(b*a)
############### dot product ################
a = torch.rand(5)
b = torch.rand(5)
print(torch.dot(a, b))
########### matrix multiplication ##########
a = torch.rand(5, 4)
b = torch.rand(4, 3)
print(torch.matmul(a, b))
###Output
tensor([[0.0165, 0.0875],
[0.0512, 0.1382],
[0.1086, 0.3146],
[0.0998, 0.0288],
[0.0075, 0.4139]])
tensor(1.5872)
tensor([[1.5601, 1.8872, 1.8746],
[0.9561, 0.8141, 0.6230],
[0.9546, 0.8443, 0.4169],
[1.3867, 0.9729, 1.1332],
[1.3957, 1.9708, 1.6355]])
###Markdown
Differention on pytorch is done using a built in internal engine called `torch.autograd`. The independent variables are set to required grad. In the following example, $y = {\bf w}^t {\bf x} + b$. The gradietns at $x$ and $b$ are given by `x.grad` and `b.grad`.
###Code
b = torch.rand(1, requires_grad = True)
x = torch.rand(5, requires_grad = True)
w = torch.rand(5)
y = torch.dot(w, x) + b
y.backward()
print(b.grad, x.grad)
###Output
tensor([1.]) tensor([0.8710, 0.2257, 0.1938, 0.7747, 0.3650])
###Markdown
**Exercise 1.** Construct a linear regression using gradient descent. First initialize the parameters $w$ and $b$. Then generate random data for $x$ and $y$.
###Code
# first create the parameters for linear regression
k = 5
b = torch.tensor(8, dtype = torch.float)
w = torch.rand(k)
# generate the train data
x_train = torch.rand(100, k)*100
y_train = torch.matmul(x_train, w) + b + torch.rand(100)*2
# define the loss function
def loss(y_1, y_2): return torch.sum((y_1 - y_2)**2)
W = torch.rand(k, 1, requires_grad = True)
B = torch.rand(1, requires_grad = True)
lr = torch.tensor([0.01], dtype = torch.float)
epochs = 1000
LOSS = []
for i in range(epochs+1):
y_hat = torch.matmul(x_train, W) + B
l = loss(torch.reshape(y_train, y_hat.shape), y_hat)
l.backward()
LOSS.append(l)
# normaize the gradients
n_factor = torch.sqrt(torch.sum(W.grad.data**2) + B.grad.data**2)
W_dir = W.grad.data/n_factor
B_dir = B.grad.data/n_factor
W.data = W.data - lr*W_dir
B.data = B.data - lr*B_dir
if i%int(epochs/10) == 0 : print("Epoch :", i, "\t LOSS :", round(LOSS[-1].data.item(), 2))
W.grad.data.zero_()
B.grad.data.zero_()
print(torch.reshape(W, w.shape).data, w)
print(B, b)
###Output
tensor([0.0168, 0.6260, 0.8029, 0.7137, 0.8705]) tensor([0.0010, 0.6104, 0.7901, 0.7014, 0.8607])
tensor([6.2963], requires_grad=True) tensor(8.)
###Markdown
We can use one layer Neural Network with $10$ units to approximate it as well
###Code
W1 = torch.rand(k, 10, requires_grad = True)
B1 = torch.rand(1, requires_grad = True)
W2 = torch.rand(10, 1, requires_grad = True)
B2 = torch.rand(1, requires_grad = True)
lr = torch.tensor([0.01], dtype = torch.float)
epochs = 1000
LOSS = []
for i in range(epochs+1):
y_hat = torch.matmul(torch.matmul(x_train, W1) + B1, W2) + B2
l = loss(torch.reshape(y_train, y_hat.shape), y_hat)
l.backward()
LOSS.append(l)
n1_factor = torch.sqrt(torch.sum(W1.grad.data**2) + B1.grad.data**2)
W1_dir = W1.grad.data/n1_factor
B1_dir = B1.grad.data/n1_factor
n2_factor = torch.sqrt(torch.sum(W2.grad.data**2) + B2.grad.data**2)
W2_dir = W2.grad.data/n2_factor
B2_dir = B2.grad.data/n2_factor
W1.data = W1.data - lr*W1_dir
B1.data = B1.data - lr*B1_dir
W2.data = W2.data - lr*W2_dir
B2.data = B2.data - lr*B2_dir
if i%int(epochs/10) == 0 : print("Epoch :", i, "\t LOSS :", round(LOSS[-1].data.item(), 2))
W1.grad.data.zero_()
B1.grad.data.zero_()
W2.grad.data.zero_()
B2.grad.data.zero_()
###Output
Epoch : 0 LOSS : 50178656.0
Epoch : 100 LOSS : 1845363.25
Epoch : 200 LOSS : 1149.38
Epoch : 300 LOSS : 857.78
Epoch : 400 LOSS : 855.26
Epoch : 500 LOSS : 853.42
Epoch : 600 LOSS : 851.55
Epoch : 700 LOSS : 849.69
Epoch : 800 LOSS : 847.83
Epoch : 900 LOSS : 845.99
Epoch : 1000 LOSS : 844.14
|
Notebooks/Use_PY_in_Calculus.ipynb | ###Markdown
Use PY in Calculus What is Function我們可以將函數(functions)看作一台機器,當我們向這台機器輸入「x」時,它將輸出「f(x)」這台機器所能接受的所有輸入的集合被稱為定義域(domian),其所有可能的輸出的集合被稱為值域(range)。函數的定義域和值域都十分重要,當我們知道一個函數的定義域,就不會將不合適的`x`扔給這個函數;知道了定義域就可以判斷一個值是否可能是這個函數所輸出的。 多項式(polynomials):$f(x) = x^3 - 5^2 +9$因為這是個三次函數,當 $x\rightarrow \infty$ 時,$f(x) \rightarrow -\infty$,當 $x\rightarrow \infty$ 時,$f(x) \rightarrow \infty$ 因此,這個函數的定義域和值域都屬於實數集$R$。
###Code
def f(x):
return x**3 - 5*x**2 + 9
print(f(1), f(2))
###Output
5 -3
###Markdown
通常,我們會繪製函數圖像來幫助我們來理解函數的變化
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-10,10,num = 1000)
y = f(x)
plt.plot(x,y)
###Output
_____no_output_____
###Markdown
指數函數(Exponential Functions)$exp(x) = e^x$domain is $(-\infty,\infty)$,range is $(0,\infty)$。在 py 中,我們可以利用歐拉常數 $e$ 定義指數函數:
###Code
def exp(x):
return np.e**x
print("exp(2) = e^2 = ",exp(2))
###Output
exp(2) = e^2 = 7.3890560989306495
###Markdown
或者可以使用 `numpy` 自帶的指數函數:`np.e**x`
###Code
def eexp(x):
return np.e**(x)
print("exp(2) = e^2 = ",eexp(2))
plt.plot(x,exp(x))
###Output
_____no_output_____
###Markdown
當然,數學課就會講的更加深入$e^x$的定義式應該長成這樣:$\begin{align*}\sum_{k=0}^{\infty}\frac{x^k}{k!}\end{align*}$ 至於為什麼他會長成這樣,會在後面提及。這個式子應該怎麼在`python`中實現呢?
###Code
def eeexp(x):
sum = 0
for k in range(100):
sum += float(x**k)/np.math.factorial(k)
return sum
print("exp(2) = e^2 = ",eeexp(2))
###Output
exp(2) = e^2 = 7.389056098930649
###Markdown
對數函數(Logarithmic Function)$log_e(x) = ln(x)$*高中教的 $ln(x)$ 在大學和以後的生活中經常會被寫成 $log(x)$*對數函數其實就是指數函數的反函數,即,定義域為$(0,\infty)$,值域為$(-\infty,\infty)$。`numpy` 為我們提供了以$2,e,10$ 為底數的對數函數:
###Code
x = np.linspace(1,10,1000,endpoint = False)
y1 = np.log2(x)
y2 = np.log(x)
y3 = np.log10(x)
plt.plot(x,y1,'red',x,y2,'yellow',x,y3,'blue')
###Output
_____no_output_____
###Markdown
三角函數(Trigonometric functions)三角函數是常見的關於角的函數,三角函數在研究三角形和園等集合形狀的性質時,有很重要的作用,也是研究週期性現象的基礎工具;常見的三角函數有:正弦(sin),餘弦(cos)和正切(tan),當然,以後還會用到如餘切,正割,餘割等。
###Code
x = np.linspace(-10, 10, 10000)
a = np.sin(x)
b = np.cos(x)
c = np.tan(x)
# d = np.log(x)
plt.figure(figsize=(8,4))
plt.plot(x,a,label='$sin(x)$',color='green',linewidth=0.5)
plt.plot(x,b,label='$cos(x)$',color='red',linewidth=0.5)
plt.plot(x,c,label='$tan(x)$',color='blue',linewidth=0.5)
# plt.plot(x,d,label='$log(x)$',color='grey',linewidth=0.5)
plt.xlabel('Time(s)')
plt.ylabel('Volt')
plt.title('PyPlot')
plt.xlim(0,10)
plt.ylim(-5,5)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
複合函數(composition)函數 $f$ 和 $g$ 複合,$f \circ g = f(g(x))$,可以理解為先把$x$ 輸入給 $g$ 函數,獲得 $g(x)$ 後在輸入函數 $f$ 中,最後得出:$f(g(x))$* 幾個函數符合後仍然為一個函數* 任何函數都可以看成若干個函數的複合形式* $f\circ g(x)$ 的定義域與 $g(x)$ 相同,但是值域不一定與 $f(x)$ 相同例:$f(x) = x^2, g(x) = x^2 + x, h(x) = x^4 +2x^2\cdot x + x^2$
###Code
def f(x):
return x**2
def g(x):
return x**2+x
def h(x):
return f(g(x))
print("f(1) equals",f(1),"g(1) equals",g(1),"h(1) equals",h(1))
x = np.array(range(-10,10))
y = np.array([h(i) for i in x])
plt.scatter(x,y,)
###Output
f(1) equals 1 g(1) equals 2 h(1) equals 4
###Markdown
逆函數(Inverse Function)給定一個函數$f$,其逆函數 $f^{-1}$ 是一個與 $f$ 進行複合後 $f\circ f^{-1}(x) = x$ 的特殊函數函數與其反函數圖像一定是關於 $y = x$ 對稱的
###Code
def w(x):
return x**2
def inv(x):
return np.sqrt(x)
x = np.linspace(0,2,100)
plt.plot(x,w(x),'r',x,inv(x),'b',x,x,'g-.')
###Output
_____no_output_____
###Markdown
高階函數(Higher Order Function)我们可以不局限于将数值作为函数的输入和输出,函数本身也可以作为输入和输出,在給出例子之前,插一段話:這裡介紹一下在 `python`中十分重要的一個表達式:`lambda`,`lambda`本身就是一行函數,他們在其他語言中被稱為匿名函數,如果你不想在程序中對一個函數使用兩次,你也許會想到用 `lambda` 表達式,他們和普通函數完全一樣。原型:`lambda` 參數:操作(參數)
###Code
add = lambda x,y: x+y
print(add(3,5))
###Output
8
###Markdown
這裡,我們給出 高階函數 的例子:
###Code
def horizontal_shift(f,H):
return lambda x: f(x-H)
###Output
_____no_output_____
###Markdown
上面定義的函數 `horizontal_shift(f,H)`。接受的輸入是一個函數 $f$ 和一個實數 $H$,然後輸出一個新的函數,新函數是將 $f$ 沿著水平方向平移了距離 $H$ 以後得到的。
###Code
x = np.linspace(-10,10,1000)
shifted_g = horizontal_shift(g,2)
plt.plot(x,g(x),'b',x,shifted_g(x),'r')
###Output
_____no_output_____
###Markdown
以高階函數的觀點去看,函數的複合就等於將兩個函數作為輸入給複合函數,然後由其產生一個新的函數作為輸出。所以複合函數又有了新的定義:
###Code
def composite(f,g):
return lambda x: f(g(x))
h3 = composite(f,g)
print (sum (h(x) == h3(x)) == len(x))
###Output
True
###Markdown
歐拉公式(Euler's Formula)在前面給出了指數函數的多項式形式:$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \dots = \sum_{k = 0}^{\infty}\frac{x^k}{k!}$ 接下來,我們不僅不去解釋上面的式子是怎麼來的,而且還要喪心病狂地扔給讀者:三角函數:$\begin{align*} &sin(x) = \frac{x}{1!}-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}\dots = \sum_{k=0}^{\infty}(-1)^k\frac{x^{(2k+1)}}{(2k+1)!} \\ &cos(x) = \frac{x^0}{0!}-\frac{x^2}{2!}+\frac{x^4}{4!}-\dots =\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k}}{2k!}\end{align*}$在中學,我們曾經學過虛數 `i` (Imaginary Number)的概念,這裡我們對其來源和意義暫不討論,只是簡單回顧一下其基本的運算規則:$i^0 = 1, i^1 = i, i^2 = -1 \dots$將 $ix$ 帶入指數函數的公式中,得:$\begin{align*}e^{ix} &= \frac{(ix)^0}{0!} + \frac{(ix)^1}{1!} + \frac{(ix)^2}{2!} + \dots \\ &= \frac{i^0 x^0}{0!} + \frac{i^1 x^1}{1!} + \frac{i^2 x^2}{2!} + \dots \\ &= 1\frac{x^0}{0!} + i\frac{x^i}{1!} -1\frac{x^2}{2!} -i\frac{x^3}{3!} \dots \\ &=(\frac{x^0}{0!}-\frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \dots ) + i(\frac{x^1}{1!} -\frac{x^3}{3!} + \frac{x^5}{5!}-\frac{x^7}{7!} + \dots \\&cos(x) + isin(x)\end{align*}$此時,我們便可以獲得著名的歐拉公式:$e^{ix} = cos(x) + isin(x)$ 令,$x = \pi$時,$\Rightarrow e^{i\pi} + 1 = 0$歐拉公式在三角函數、圓周率、虛數以及自然指數之間建立的橋樑,在很多領域都扮演著重要的角色。 如果你對偶啦公式的正確性感到疑惑,不妨在`Python`中驗證一下:
###Code
import math
import numpy as np
a = np.sin(x)
b = np.cos(x)
x = np.pi
# the imaginary number in Numpy is 'j';
lhs = math.e**(1j*x)
rhs = b + (0+1j)*a
if(lhs == rhs):
print(bool(1))
else:
print(bool(0))
###Output
True
###Markdown
這裡給大家介紹一個很好的 `Python` 庫:`sympy`,如名所示,它是符號數學的 `Python` 庫,它的目標是稱為一個全功能的計算機代數系統,同時保證代碼簡潔、易於理解和拓展;所以,我們也可以通過 `sympy` 來展開 $e^x$ 來看看它的結果是什麼🙂
###Code
import sympy
z =sympy.Symbol('z',real = True)
sympy.expand(sympy.E**(sympy.I*z),complex = True)
###Output
_____no_output_____
###Markdown
將函數寫成多項式形式有很多的好處,多項式的微分和積分都相對容易。這是就很容易證明這個公式了:$\frac{d}{dx}e^x = e^x \frac{d}{dx}sin(x) = cos(x)\frac{d}{dx}cos(x) = -sin(x)$ 喔,對了,這一章怎麼能沒有圖呢?收尾之前來一發吧: 我也不知道这是啥 🤨
###Code
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d
x,y=np.mgrid[-2:2:20j,-2:2:20j]
z=x*np.exp(-x**2-y**2)
ax=plt.subplot(111,projection='3d')
ax.plot_surface(x,y,z,rstride=2,cstride=1,cmap=plt.cm.coolwarm,alpha=0.8)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
###Output
_____no_output_____
###Markdown
泰勒級數 泰勒級數(Taylor Series)在前幾章的預熱之後,讀者可能有這樣的疑問,是否任何函數都可以寫成友善的多項式形式呢? 到目前為止,我們介紹的$e^x$, $sin(x)$, $cos(x)$ 都可以用多項式進行表達。其實,這些多項式實際上就是這些函數在 $x=0$ 處展開的泰勒級數。下面我們給出函數 $f(x)$ 在$x=0$ 處展開的泰勒級數的定義:$\begin{align*}f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \dots = \sum^{\infty}{k = 0} \frac{f^{(k)}(0)}{k!}x^k \end{align*}$其中:$f^{(k)}(0)$ 表示函數 $f$ 在 $k$ 次導函數在 $x=0$ 的取值。我們知道 $e^x$ 無論計算多少次導數結果出來都是 $e^x$即,$exp(x) = exp'(x)=exp''(x)=exp'''(x)=exp'''(x) = \dots$因而,根據上面的定義展開:$\begin{align*}exp(x) &= exp(0) + \frac{exp'(0)}{1!}+\frac{exp''(0)}{2!}x^2 +\frac{exp'''(0)}{3!}x^3 + \dots \\ &=1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots \\&=\sum_{k=0}^{\infty}\frac{x^k}{k!}\end{align*}$ 多項式近似(Polynomial Approximation)泰勒級數,可以把非常複雜的函數變成無限項的和的形式。通常,我們可以只計算泰勒級數的前幾項和,就可以獲得原函數的局部近似了。在做這樣的多項式近似時,我們所計算的項越多,則近似的結果越精確。下面,開始使用 `python` 做演示
###Code
import sympy as sy
import numpy as np
from sympy.functions import sin,cos
import matplotlib.pyplot as plt
plt.style.use("ggplot")
# Define the variable and the function to approximate
x = sy.Symbol('x')
f = sin(x)
# Factorial function
def factorial(n):
if n <= 0:
return 1
else:
return n*factorial(n-1)
# Taylor approximation at x0 of the function 'function'
def taylor(function,x0,n):
i = 0
p = 0
while i <= n:
p = p + (function.diff(x,i).subs(x,x0))/(factorial(i))*(x-x0)**i
i += 1
return p
# Plot results
def plot():
x_lims = [-5,5]
x1 = np.linspace(x_lims[0],x_lims[1],800)
y1 = []
# Approximate up until 10 starting from 1 and using steps of 2
for j in range(1,10,2):
func = taylor(f,0,j)
print('Taylor expansion at n='+str(j),func)
for k in x1:
y1.append(func.subs(x,k))
plt.plot(x1,y1,label='order '+str(j))
y1 = []
# Plot the function to approximate (sine, in this case)
plt.plot(x1,np.sin(x1),label='sin of x')
plt.xlim(x_lims)
plt.ylim([-5,5])
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.title('Taylor series approximation')
plt.show()
plot()
###Output
Taylor expansion at n=1 x
Taylor expansion at n=3 -x**3/6 + x
Taylor expansion at n=5 x**5/120 - x**3/6 + x
Taylor expansion at n=7 -x**7/5040 + x**5/120 - x**3/6 + x
Taylor expansion at n=9 x**9/362880 - x**7/5040 + x**5/120 - x**3/6 + x
###Markdown
展開點(Expansion Point)上述的式子,都是在 $x=0$ 進行的,我們會發現多項式近似只在 $x=0$ 處較為準確。但,這不代表,我們可以在別的點進行多項式近似,如$x=a$ :$f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots $ 極限 極限(Limits)函數的極限,描述的是輸入值在接近一個特定值時函數的表現。定義: 我們如果要稱函數 $f(x)$ 在 $x = a$ 處的極限為 $L$,即:$lim_{x\rightarrow a} f(x) = L$,則需要:對任意一個 $\epsilon > 0$,我們要能找到一個 $\delta > 0$ 使的當 $x$ 的取值滿足:$0<|x-a|<\delta$時,$|f(x)-L|<\epsilon$
###Code
import sympy
x = sympy.Symbol('x',real = True)
f = lambda x: x**x-2*x-6
y = f(x)
print(y.limit(x,2))
###Output
-6
###Markdown
函數的連續性極限可以用來判斷一個函數是否為連續函數。當極限$\begin{align*}\lim_{x\rightarrow a} f(x)= f(a)\end{align*}$時,稱函數$f(x)$在點$ x = a$ 處為連續的。當一個函數在其定義域中任意一點均為連續,則稱該函數是連續函數。 泰勒級數用於極限計算我們在中學的時候,學習過關於部分極限的計算,這裡不再贅述。泰勒級數也可以用於計算一些形式比較複雜的函數的極限。這裡,僅舉一個例子:$\begin{align*} \lim_{x\rightarrow 0}\frac{sin(X)}{x} &= lim_{x\rightarrow 0} \frac{\frac{x}{1!}-\frac{x^3}{3!}\dots }{x} \\ &= \lim_{x\rightarrow 0} \frac{x(1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+\dots}{x} \\ &= \lim_{x\rightarrow 0} 1 -\frac{x^2}{3!} + \frac{x^4}{5!}-\frac{x^6}{7!}+\dots \\& = 1 \end{align*}$ 洛必達法則(l'Hopital's rule)在高中,老師就教過的一個神奇的法則:如果我們在求極限的時候,所求極限是無窮的,那我們可以試一下使用洛必達法則,哪些形式呢:$\frac{0}{0}, \frac{\infty}{\infty}, \frac{\infty}{0}$等等。**這裡,我們要注意一個前提條件:上下兩個函數都是連續函數才可以使用洛必達法則**這裡我們用 $\frac{0}{0}$ 作為一個例子:$\begin{align*}\lim_{x \rightarrow a}\frac{f'(x)}{g'(x)} \\= \lim_{x \rightarrow a}\frac{f'(x)}{g'(x)} \end{align*}$若此時,分子分母還都是$0$的話,再次重複:$\begin{align*}\lim_{x \rightarrow a}\frac{f''(x)}{g''(x)}\end{align*}$ 大$O$記法(Big-O Notation)*這個我在網上能找到的資料很少,大多是算法的時間複雜度相關的資料*算法複雜度的定義:> We denote an algorithm has a complexity of O(g(n))if there exists a constants > $c \in R^+$, suchthat $t(n)\leq c\cdot g(n), \forall n\geq 0$.> > 這裡的$n$是算法的輸入大小(input size),可以看作變量的個數等等。> > 方程$t$在這裡指算法的“時間”,也可以看作執行基本算法需要的步驟等等。> > 方程$g$在這裡值得是任意函數。*我們也可以將這個概念用在函數上:*我們已經見過了很多函數,在比較這兩個函數時,我們可能會知道,隨著輸入值$x$的增加或者減少,兩個函數的輸出值,兩個函數的輸出值增長或者減少的速度究竟是誰快誰慢,哪一個函數最終會遠遠甩開另一個。通過繪製函數圖像,我們可以得到一些之直觀的感受:
###Code
import numpy as np
import matplotlib.pyplot as plt
m= range(1,7)
fac = [np.math.factorial(i) for i in m] #fac means factorial#
exponential = [np.e**i for i in m]
polynomial = [i**3 for i in m]
logarithimic = [np.log(i) for i in m]
plt.plot(m,fac,'black',m,exponential,'blue',m,polynomial,'green',m,logarithimic,'red')
plt.show()
###Output
_____no_output_____
###Markdown
根據上面的圖,我們可以看出$x \rightarrow \infty$ 時,$x! > e^x > x^3 > ln(x)$ ,想要證明的話,我們需要去極限去算(用洛必達法則)。$\begin{align*}\lim_{x\rightarrow \infty}\frac{e^x}{x^3} = \infty \end{align*}$ 可以看出,趨於無窮時,分子遠大於分母,反之同理。我們可以用 `sympy` 來算一下這個例子:
###Code
import sympy
import numpy as np
x = sympy.Symbol('x',real = True)
f = lambda x: np.e**x/x**3
y = f(x)
print(y.limit(x,oo))
###Output
oo
###Markdown
為了描述這種隨著輸入$x\rightarrow \infty$或$x \rightarrow 0$時,函數的表現,我們如下定義大$O$記法:若我們稱函數$f(x)$在$x\rightarrow 0$時,時$O(g(x))$,則需要找到一個常數$C$,對於所有足夠小的$x$均有$|f(x)|若我們稱函數$f(x)$在$x\rightarrow 0$時是$O(g(x))$,則需要找一個常數$C$,對於所有足夠大的$x$均有$|f(x)|大$O$記法之所以得此名稱,是因為函數的增長速率很多時候被稱為函數的階(**Order**)下面舉一個例子:當$x\rightarrow \infty$時,$x\sqrt{1+x^2}$是$O(x^2)$
###Code
import sympy
import numpy as np
import matplotlib.pyplot as plt
x = sympy.Symbol('x',real = True)
xvals = np.linspace(0,100,1000)
f = x*sympy.sqrt(1+x**2)
g = 2*x**2
y1 = [f.evalf(subs = {x:xval}) for xval in xvals]
y2 = [g.evalf(subs = {x:xval}) for xval in xvals]
plt.plot(xvals[:10],y1[:10],'r',xvals[:10],y2[:10],'b')
plt.show()
plt.plot(xvals,y1,'r',xvals,y2,'b')
plt.show()
###Output
_____no_output_____
###Markdown
導數 割線(Secent Line)曲線的格線是指與弧線由兩個公共點的直線。
###Code
import numpy as np
from sympy.abc import x
import matplotlib.pyplot as plt
# function
f = x**3-3*x-6
# the tengent line at x=6
line = 106*x-428
d4 = np.linspace(5.9,6.1,100)
domains = [d3]
# define the plot funtion
def makeplot(f,l,d):
plt.plot(d,[f.evalf(subs={x:xval}) for xval in d],'b',\
d,[l.evalf(subs={x:xval}) for xval in d],'r')
for i in range(len(domains)):
# draw the plot and the subplot
plt.subplot(2, 2, i+1)
makeplot(f,line,domains[i])
plt.show()
###Output
_____no_output_____
###Markdown
切線(Tangent Line)中學介紹導數的時候,通常會舉兩個例子,其中一個是幾何意義上的例子:對於函數關於某一點進行球道,得到的是函數在該點處切線的斜率。選中函數圖像中的某一點,然後不斷地將函數圖放大,當我們將鏡頭拉至足夠近後便會發現函數圖看起來像一條直線,這條直線就是切線。
###Code
import numpy as np
from sympy.abc import x
import matplotlib.pyplot as plt
# function
f = x**3-2*x-6
# the tengent line at x=6
line = 106*x-438
d1 = np.linspace(2,10,1000)
d2 = np.linspace(4,8,1000)
d3 = np.linspace(5,7,1000)
d4 = np.linspace(5.9,6.1,100)
domains = [d1,d2,d3,d4]
# define the plot funtion
def makeplot(f,l,d):
plt.plot(d,[f.evalf(subs={x:xval}) for xval in d],'b',\
d,[l.evalf(subs={x:xval}) for xval in d],'r')
for i in range(len(domains)):
# draw the plot and the subplot
plt.subplot(2, 2, i+1)
makeplot(f,line,domains[i])
plt.show()
###Output
_____no_output_____
###Markdown
另一個例子就是:對路程的時間函數 $s(t)$ 求導可以得到速度的時間函數 $v(t)$,再進一步求導可以得到加速度的時間函數 $a(t)$。這個比較好理解,因為函數真正關心的是:當我們稍稍改變一點函數的輸入值時,函數的輸出值有怎樣的變化。 導數(Derivative)導數的定義如下:定義一:$\begin{align*}f'(a) = \frac{df}{dx}\mid_{x=a} = \lim_{x\rightarrow 0} \frac{f(x)-f(a)}{x-a}\end{align*}$若該極限不存在,則函數在 $x=a$ 處的導數也不存在。定義二:$\begin{align*}f'(a) = \frac{df}{dx}\mid_{x=a} = \lim_{h\rightarrow 0} \frac{f(a+h)-f(a)}{h}\end{align*}$以上两个定义都是耳熟能详的定义了,这里不多加赘述。**定義三**:函數$f(x)$在$x=a$處的導數$f'(a)$是滿足如下條件的常數$C$:對於在$a$附近輸入值的微笑變化$h$有,$f(a+h)=f(a) + Ch + O(h^2)$ 始終成立,也就是說導數$C$是輸出值變化中一階項的係數。$\begin{align*} \lim_{h\rightarrow 0} \frac{f(a+h)-f(a)}{h} = \lim_{h\rightarrow 0} C + O(h) = C \end{align*}$ 下面具一個例子,求$cos(x)$在$x=a$處的導數:$\begin{align*} cos(a+h) &= cos(a)cos(h) - sin(a)sin(h)\\&=cos(a)(a+O(h^2)) - sin(a)(h+O(h^3))\\&=cos(a)-sin(a)h+O(h^2)\end{align*}$因此,$\frac{d}{dx}cos(x)\mid_{x=a} = -sin(a)$
###Code
import numpy as np
from sympy.abc import x
f = lambda x: x**3-2*x-6
def derivative(f,h=0.00001):#define the 'derivative' function
return lambda x: float(f(x+h)-f(x))/h
fprime = derivative(f)
print (fprime(6))
#use sympy's defult derivative function
from sympy.abc import x
f = x**3-2*x-6
print(f.diff())
print(f.diff().evalf(subs={x:6}))
###Output
3*x**2 - 2
106.000000000000
###Markdown
線性近似(Linear approximation)定義:就是用線性函數去對普通函數進行近似。依據導數的定義三,我們有:$f(a+h) = f(a) + f'(a)h + O(h^2)$ 如果,我們將高階項去掉,就獲得了$f(a+h)$的線性近似式了:$f(a+h) = \approx f(a) + f'(a)h$ 舉個例子,用線性逼近去估算:$\begin{align*} \sqrt{255} &= \sqrt {256-1} \approx \sqrt{256} + \frac{1}{2\sqrt{256}(-1)} \\ &=16-\frac{1}{32} \\ &=15 \frac{31}{32} \end{align*}$ 牛頓迭代法(Newton's Method)**它是一種用於在實數域和複數域上近似求解方程的方法:使用函數$f(x)$的泰勒級數的前面幾項來尋找$f(X)=0$的根。**首先,選擇一個接近函數$f(x)$零點的$x_0$,計算對應的函數值$f(x_0)$和切線的斜率$f'(x_0)$;然後計算切線和$x$軸的交點$x_1$的$x$座標:$ 0 = (x_1 - x_0)\cdot f'(x_0) + f(x_0)$;通常來說,$x_1$ 會比 $x_0$ 更接近方程$f(X)=0$的解。因此, 我們現在會利用$x_1$去開始新一輪的迭代。公式如下:$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
###Code
from sympy.abc import x
def mysqrt(c, x = 1, maxiter = 10, prt_step = False):
for i in range(maxiter):
x = 0.5*(x+ c/x)
if prt_step == True:
# 在输出时,{0}和{1}将被i+1和x所替代
print ("After {0} iteration, the root value is updated to {1}".format(i+1,x))
return x
print (mysqrt(2,maxiter =4,prt_step = True))
###Output
After 1 iteration, the root value is updated to 1.5
After 2 iteration, the root value is updated to 1.4166666666666665
After 3 iteration, the root value is updated to 1.4142156862745097
After 4 iteration, the root value is updated to 1.4142135623746899
1.4142135623746899
###Markdown
我們可以通過畫圖,更加了解牛頓法
###Code
import numpy as np
import matplotlib.pyplot as plt
f = lambda x: x**2-2*x-4
l1 = lambda x: 2*x-8
l2 = lambda x: 6*x-20
x = np.linspace(0,5,100)
plt.plot(x,f(x),'black')
plt.plot(x[30:80],l1(x[30:80]),'blue', linestyle = '--')
plt.plot(x[66:],l2(x[66:]),'blue', linestyle = '--')
l = plt.axhline(y=0,xmin=0,xmax=1,color = 'black')
l = plt.axvline(x=2,ymin=2.0/18,ymax=6.0/18, linestyle = '--')
l = plt.axvline(x=4,ymin=6.0/18,ymax=10.0/18, linestyle = '--')
plt.text(1.9,0.5,r"$x_0$", fontsize = 18)
plt.text(3.9,-1.5,r"$x_1$", fontsize = 18)
plt.text(3.1,1.3,r"$x_2$", fontsize = 18)
plt.plot(2,0,marker = 'o', color = 'r' )
plt.plot(2,-4,marker = 'o', color = 'r' )
plt.plot(4,0,marker = 'o', color = 'r' )
plt.plot(4,4,marker = 'o', color = 'r' )
plt.plot(10.0/3,0,marker = 'o', color = 'r' )
plt.show()
###Output
_____no_output_____
###Markdown
下面舉一個例子,$f(x) = x^2 -2x -4 = 0$的解,從$x_0 = 4$ 的初始猜測值開始,找到$x_0$的切線:$y=2x-8$,找到與$x$軸的交點$(4,0)$,將此點更新為新解:$x_1 = 4$,如此循環。
###Code
def NewTon(f, s = 1, maxiter = 100, prt_step = False):
for i in range(maxiter):
# 相较于f.evalf(subs={x:s}),subs()是更好的将值带入并计算的方法。
s = s - f.subs(x,s)/f.diff().subs(x,s)
if prt_step == True:
print("After {0} iteration, the solution is updated to {1}".format(i+1,s))
return s
from sympy.abc import x
f = x**2-2*x-4
print(NewTon(f, s = 2, maxiter = 4, prt_step = True))
###Output
After 1 iteration, the solution is updated to 4
After 2 iteration, the solution is updated to 10/3
After 3 iteration, the solution is updated to 68/21
After 4 iteration, the solution is updated to 3194/987
3194/987
###Markdown
另外,我們可以使用`sympy`,它可以幫助我們運算
###Code
import sympy
from sympy.abc import x
f = x**2-2*x-4
print(sympy.solve(f,x))
###Output
[1 + sqrt(5), -sqrt(5) + 1]
###Markdown
優化 高階導數(Higher Derivatives)在之前,我們講過什麼是高階導數,這裡在此提及,高階導數的遞歸式的定義為:函數$f(x)$的$n$階導數$f^{(n)}(x)$(或記為$\frac{d^n}{dx^n}(f)$為:$f^{(n)}(x) = \frac{d}{dx}f^{(n-1}(x)$如果將求導$\frac{d}{dx}$看作一個運算符,則相當於反覆對運算的結果使用$n$次運算符:$(\frac{d}{dx})^n \ f=\frac{d^n}{dx^n}f$
###Code
from sympy.abc import x
from sympy.abc import y
import matplotlib.pyplot as plt
f = x**2*y-2*x*y
print(f.diff(x,2)) #the second derivatives of x
print(f.diff(x).diff(x))# the different writing of the second derivatives of x
print(f.diff(x,y)) # we first get the derivative of x , then get the derivative of y
###Output
2*y
2*y
2*(x - 1)
###Markdown
优化问题(Optimization Problem)在微積分中,優化問題常常指的是算最大面積,最大體積等,現在給出一個例子:
###Code
plt.figure(1, figsize=(4,4))
plt.axis('off')
plt.axhspan(0,1,0.2,0.8,ec="none")
plt.axhspan(0.2,0.8,0,0.2,ec="none")
plt.axhspan(0.2,0.8,0.8,1,ec="none")
plt.axhline(0.2,0.2,0.8,linewidth = 2, color = 'black')
plt.axhline(0.8,0.17,0.23,linewidth = 2, color = 'black')
plt.axhline(1,0.17,0.23,linewidth = 2, color = 'black')
plt.axvline(0.2,0.8,1,linewidth = 2, color = 'black')
plt.axhline(0.8,0.17,0.23,linewidth = 2, color = 'black')
plt.axhline(1,0.17,0.23,linewidth = 2, color = 'black')
plt.text(0.495,0.22,r"$l$",fontsize = 18,color = "black")
plt.text(0.1,0.9,r"$\frac{4-1}{2}$",fontsize = 18,color = "black")
plt.show()
###Output
_____no_output_____
###Markdown
用一張給定邊長$4$的正方形紙來一個沒有蓋的紙盒,設這個紙盒的底部邊長為$l$,紙盒的高為$\frac{4-l}{2}$,那麼紙盒的體積為:$V(l) = l^2\frac{4-l}{2}$我們會希望之道,怎麼樣得到$ max\{V_1, V_2, \dots V_n\}$ ;優化問題就是在滿足條件下,使得目標函數(objective function)得到最大值(或最小)。
###Code
import numpy as np
import matplotlib.pyplot as plt
l = np.linspace(0,4,100)
V = lambda l: 0.5*l**2*(4-l) # the 'l' is the charcter 'l', not the number'one' as '1'
plt.plot(l,V(l))
plt.vlines(2.7,0,5, colors = "c", linestyles = "dashed")
plt.show()
###Output
_____no_output_____
###Markdown
通過觀察可得,在$l$的值略大於$2.5$的位置(虛線),獲得最大體積。 關鍵點(Critical Points)通過導數一節,我們知道一個函數在某一處的導數是代表了在輸入後函數值所發生的相對應的變化。因此,如果在給定一個函數$f$,如果知道點$x=a$處函數的導數不為$0$,則在該點處稍微改變函數的輸入值,函數值會發生變化,這表明函數在該點的函數值,既不是局部最大值(local maximum),也不是局部最小值(local minimum);相反,如果函數$f$在點$x=a$處函數的導數為$0$,或者該點出的導數不存在則稱這個點為關鍵點(critical Plints)要想知道一個$f'(a)=0$的關鍵處,函數值$f(a)$是一個局部最大值還是局部最小值,可以使用二次導數測試:1. 如果 $f''(a) > 0$, 則函數$f$在$a$處的函數值是局部最小值;2. 如果 $f''(a) < 0$, 則函數$f$在$a$處的函數值是局部最大值;3. 如果 $f''(a) = 0$, 則無結論。二次函數測試在中學課本中,大多是要求不求甚解地記憶的規則,其實理解起來非常容易。二次導數測試中涉及到函數在某一點處的函數值、一次導數和二次導數,於是我們可以利用泰勒級數:$f(x)$在$x=a$的泰勒級數:$f(x) = f(a) + f'(a)(x-a) + \frac{1}{2}f''(a)(x-a)^2 + \dots$因為$a$是關鍵點,$f'(a)$ = 0, 因而:$f(x) = f(a) + \frac{1}{2}f''(a)(x-a)^2 + O(x^3)$ 表明$f''(a) \neq 0$時,函數$f(x)$在$x=a$附近的表現近似於二次函數,二次項的係數$\frac{1}{2}f''(a)$決定了函數值在該點的表現。回到剛才那題:求最大體積,現在,我們就可以求了:
###Code
import sympy
from sympy.abc import l
V = 0.5*l**2*(4-l)
# first derivative
print(V.diff(l))
# the domain of first derivative is (-oo,oo),so, the critical point is the root of V'(1) = 0
cp = sympy.solve(V.diff(l),l)
print(str(cp))
#after finding out the critical point, we can calculate the second derivative
for p in cp:
print(int(V.diff(l,2).subs(l,p)))
# known that whenl=2.666..., we get the maximum V
###Output
-0.5*l**2 + 1.0*l*(-l + 4)
[0.0, 2.66666666666667]
4
-4
###Markdown
線性迴歸(Linear Regression)二維平面上有$n$個數據點,$p_i = (x_i,y_i)$,現在嘗試找到一條經過原點的直線$y=ax$,使得所有數據點到該直線的殘差(數據點和回歸直線之間的水平距離)的平方和最小。
###Code
import numpy as np
import matplotlib.pyplot as plt
# Set seed of random function to ensure reproducibility of simulation data
np.random.seed(123)
# Randomly generate some data with errors
x = np.linspace(0,10,10)
res = np.random.randint(-5,5,10)
y = 3*x + res
# Solve the coefficient of the regression line
a = sum(x*y)/sum(x**2)
# 绘图
plt.plot(x,y,'o')
plt.plot(x,a*x,'red')
for i in range(len(x)):
plt.axvline(x[i],min((a*x[i]+5)/35.0,(y[i]+5)/35.0),\
max((a*x[i]+5)/35.0,(y[i]+5)/35.0),linestyle = '--',\
color = 'black')
plt.show()
###Output
_____no_output_____
###Markdown
要找到這樣一條直線,實際上是一個優化問題:$\min_a Err(a) = \sum_i(y_i - ax_i)^2$要找出函數$Err(a)$的最小值,首先計算一次導函數:$\frac{dErr}{da} = \sum_i 2(y_i-ax_i)(-x_i)$,因此,$a = \frac{\sum_i x_iy_i}{\sum_i x_i^2}$ 是能夠使得函數值最小的輸入。這也是上面`python`代碼中,求解回歸線斜率所用的計算方式。如果,我們不限定直線一定經過原點,即,$y=ax+b$,則變量變成兩個:$a$和$b$:$\min_a Err(a,b) = \sum_i(y_i - ax_i-b)^2$這個問題就是多元微積分中所要分析的問題了,這裡給出一種`python`中的解法:
###Code
import numpy as np
import matplotlib.pyplot as plt
# 设定好随机函数种子,确保模拟数据的可重现性
np.random.seed(123)
# 随机生成一些带误差的数据
x = np.linspace(0,10,10)
res = np.random.randint(-5,5,10)
y = 3*x + res
# 求解回归线的系数
a = sum(x*y)/sum(x**2)
slope, intercept = np.polyfit(x,y,1)
# 绘图
plt.plot(x,y,'o')
plt.plot(x,a*x,'red',linestyle='--')
plt.plot(x,slope*x+intercept, 'blue')
for i in range(len(x)):
plt.axvline(x[i],min((a*x[i]+5)/35.0,(y[i]+5)/35.0),\
max((a*x[i]+5)/35.0,(y[i]+5)/35.0),linestyle = '--',\
color = 'black')
plt.show()
###Output
_____no_output_____
###Markdown
積分與微分(Integration and Differentiation) 積分積分時微積分中一個一個核心概念,通常會分為**定積分和不定積分**兩種。 定積分(Integral)也被稱為**黎曼積分(Riemann integral)**,直觀地說,對於一個給定的正實數值函數$f(x)$,$f(x)$在一個實數區間$[a,b]$上的定積分:$\int_a^b f(x) dx$ 可以理解成在$O-xy$坐標平面上,由曲線$(x,f(x))$,直線$x=a, x=b$以及$x$軸圍成的面積。
###Code
x = np.linspace(0, 5, 100)
y = np.sqrt(x)
plt.plot(x, y)
plt.fill_between(x, y, interpolate=True, color='b', alpha=0.5)
plt.xlim(0,5)
plt.ylim(0,5)
plt.show()
###Output
_____no_output_____
###Markdown
**黎曼積分**的核心思想就是試圖通過無限逼近來確定這個積分值。同時請注意,如果$f(x)$取負值,則相應的面積值$S$也取負值。這裡不給出詳細的證明和分析。不太嚴格的講,黎曼積分就是當分割的月來月“精細”的時候,黎曼河去想的極限。下面的圖就是展示,如何通過“矩形逼近”來證明。(這裡不提及勒貝格積分 Lebesgue integral)
###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
def func(x):
return -x**3 - x**2 + 5
a, b = 2, 9 # integral limits
x = np.linspace(-5, 5)
y = func(x)
ix = np.linspace(-5, 5,10)
iy = func(ix)
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2, zorder=5)
plt.bar(ix, iy, width=1.1, color='b', align='edge', ec='olive', ls='-', lw=2,zorder=5)
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')
ax.spines['left'].set_visible(True)
ax.spines['right'].set_visible(True)
ax.xaxis.set_major_locator(ticker.IndexLocator(base=1, offset=0))
plt.xlim(-6,6)
plt.ylim(-100,100)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
def func(x):
return -x**3 - x**2 + 5
a, b = 2, 9 # integral limits
x = np.linspace(-5, 5)
y = func(x)
ix = np.linspace(-5, 5,20)
iy = func(ix)
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2, zorder=5)
plt.bar(ix, iy, width=1.1, color='b', align='edge',ec='olive', ls='-', lw=2,zorder=5)
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')
ax.spines['left'].set_visible(True)
ax.spines['right'].set_visible(True)
ax.xaxis.set_major_locator(ticker.IndexLocator(base=1, offset=0))
plt.xlim(-6,6)
plt.ylim(-100,100)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
def func(x):
n = 10
return n / (n ** 2 + x ** 3)
a, b = 2, 9 # integral limits
x = np.linspace(0, 11)
y = func(x)
x2 = np.linspace(1, 12)
y2 = func(x2-1)
ix = np.linspace(1, 10, 10)
iy = func(ix)
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2, zorder=15)
plt.plot(x2, y2, 'g', linewidth=2, zorder=15)
plt.bar(ix, iy, width=1, color='r', align='edge', ec='olive', ls='--', lw=2,zorder=10)
plt.ylim(ymin=0)
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_major_locator(ticker.IndexLocator(base=1, offset=1))
plt.show()
###Output
_____no_output_____
###Markdown
不定積分(indefinite integral)如果,我們將求導看作一個高階函數,輸入進去的一個函數,求導後成為一個新的函數。那麼不定積分可以視作求導的「反函數」,$F'(x) = f(x)$ ,則$\int f(x)dx = F(x) + C$,寫成類似於反函數之間的複合的形式有:$\int((\frac{d}{dx}F(x))dx) = F(x) + C, \ \ C \in R$即,在微積分中,一個函數$f = f$的不定積分,也稱為**原函數**或**反函數**,是一個導數等於$ f=f $的函數$ f = F $,即,$f = F' = f$。不定積分和定積分之間的關係,由 微積分基本定理 確定。$\int f(x) dx = F(x) + C$ 其中$f = F$ 是 $f = f$的不定積分。這樣,許多函數的定積分的計算就可以簡便的通過求不定積分來進行了。這裡介紹`python`中的實現方法
###Code
print(a.integrate())
print(sympy.integrate(sympy.E**t+3*t**2))
###Output
t**3 - 3*t
t**3 + exp(t)
###Markdown
常微分方程(Ordinary Differential Equations,ODE)我們觀察一輛行駛的汽車,假設我們發現函數$a(t)$能夠很好地描述這輛汽車在各個時刻的加速度,因為對速度的時間函數(v-t)求導可以得到加速度的時間函數(a-t),如果我們希望根據$a(t)$求出$v(t)$,很自然就會得出下面的方程:$\frac{dv}{dt}=a(t)$;如果我們能夠找到一個函數滿足:$\frac{dv}{dt} = a(t)$,那麼$v(t)$就是上面房車的其中一個解,因為常數項求導的結果是$0$,那麼$\forall C \in R$,$v(t)+C$也都是這個方程的解,因此,常微分方程的解就是$set \ = \{v(t) + C\}$ 在得到這一系列的函數後,我們只需要知道任意一個時刻裡汽車行駛的速度,就可以解出常數項$C$,從而得到最終想要的一個速度時間函數。如果我們沿用「導數是函數在某一個位置的切線斜率」這一種解讀去看上面的方正,就像是我們知道了一個函數在各個位置的切線斜率,反過來曲球這個函數一樣。
###Code
import sympy
t = sympy.Symbol('t')
c = sympy.Symbol('c')
domain = np.linspace(-3,3,100)
v = t**3-3*t-6
a = v.diff()
for p in np.linspace(-2,2,20):
slope = a.subs(t,p)
intercept = sympy.solve(slope*p+c-v.subs(t,p),c)[0]
lindomain = np.linspace(p-1,p+1,20)
plt.plot(lindomain,slope*lindomain+intercept,'red',linewidth = 1)
plt.plot(domain,[v.subs(t,i) for i in domain],linewidth = 2)
###Output
_____no_output_____
###Markdown
旋轉體(Rotator)分割法是微積分中的第一步,簡單的講,就是講研究對象的一小部分座位單元,放大了仔細研究,找出特徵,然後在總結整體規律。普遍連說,有兩種分割方式:直角坐標系分割和極座標分割。 直角坐標系分割對於直角坐標系分割,我們已經很熟悉了,上面講到的“矩陣逼近”其實就是沿著$x$軸分割成$n$段$\{\Delta x_i\}$,即。在直角坐標系下分割,是按照自變量進行分割。*當然,也可以沿著$y$軸進行分割。(勒貝格積分)* 極坐標分割同樣的,極座標也是按照自變量進行分割。這是由函數的影射關係決定的,一直自變量,通過函數運算,就可以得到函數值。從圖形上看,這樣分割可以是的每個分割單元“不規則的邊”的數量最小,最好是只有一條。所以,在實際問題建模時,重要的是選取合適的坐標系。[](https://i.loli.net/2018/06/13/5b1ff2e2bbee6.png) 近似近似,是微積分中重要的一部,通過近似將分割出來的不規則的“單元”近似成一個規則的”單元“。跟上面一樣,我們無法直接計算曲線圍成的面積,但是可以用一個**相似**的矩形去替代。1. Riemann 的定義的例子:在待求解的是區間$[a, b]$上曲線與$x$軸圍成的面積,因此套用的是平面的面積公式:$S_i = h_i \times w_i = f(\xi) \times \Delta x_i$2. 極坐標系曲線積分待求解的是在區間$[\theta_1, \theta_2]$上曲線與原點圍成的面積,因此套用的圓弧面積公式:$S_i = \frac{1}{2}\times r_i^2 \times \Delta \theta_i = \frac{1}{2} \times [f(\xi_i)^2 \times \Delta \theta_i$3. 平面曲線長度平面曲線在微觀上近似為一段“斜線”,那麼,它遵循的是“勾股定理”了,即“Pythagoras 定理”:$\Delta l_i = \sqrt{(\Delta x_i)^2 + (\Delta y_i)^2} = \sqrt{1 + (\frac{\Delta y_i}{\Delta x_i}^2 \Delta x_i}$4. 極坐標曲線長度$dl = \sqrt{(dx)^2 + (dy)^2 } = \sqrt{ \frac{d^2[r(\theta)\times cos(\theta)]}{d\theta^2} + \frac{d^2[r(\theta)\times sin(\theta)]}{d\theta^2} d\theta } = \sqrt{ r^2(\theta) + r'^2(\theta)}d\theta$我們不能直接用弧長公式,弧長公式的推導用了$\pi$,而$\pi$本身就是一個近似值 求和前面幾步都是在微觀層面進行的,只有通過“求和”(Remann 和)才能回到宏觀層面:$\lim_{\lambda \rightarrow 0^+}\sum_{i = 0}^n F_i$ 其中,$F_i$ 表示各種圍觀單元的公式。 例題:求(lemniscate)$\rho^2 = 2a^2 cos(2\theta)$ 圍成的平民啊區域的面積。
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
alpha = 1
theta = np.linspace(0, 2*np.pi, num=1000)
x = alpha * np.sqrt(2) * np.cos(theta) / (np.sin(theta)**2 + 1)
y = alpha * np.sqrt(2) * np.cos(theta) * np.sin(theta) / (np.sin(theta)**2 + 1)
plt.plot(x, y)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
這是一個對稱圖形,只需要計算其中的四分之一區域面積即可
###Code
from sympy import *
t, a = symbols('t a')
f = a ** 2 * cos(2 * t)
4 * integrate(f, (t, 0, pi / 4))
###Output
_____no_output_____ |
sample_comp/.ipynb_checkpoints/08_IR_crossmatch-checkpoint.ipynb | ###Markdown
Cross-Match 2MASS & WISE Catalogues
###Code
import os, glob, getpass, sys, warnings
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table, join, vstack, hstack, Column, MaskedColumn, unique
from astropy.utils.exceptions import AstropyWarning
from astropy import units as u
user = getpass.getuser()
sys.path.append('/Users/' + user + '/Dropbox/my_python_packages')
path = '../'
# Path to data =================================
warnings.simplefilter('ignore', AstropyWarning)
path_0 = path + 'sample_control/'
path_1 = path + 'sample_clusters/cl_'
path_2 = path + 'sample_gaia/'
path_control = path_0 + 'OPH___control_sample.vot'
path_gaia = path_2 + 'gaia_sample_cleaned.vot'
path_entire = 'tab_3.vot'
# Read Data ====================================
sample_gaia = Table.read(path_gaia, format = 'votable')
sample_control = Table.read(path_control, format = 'votable')
sample_entire = Table.read(path_entire, format = 'votable')
sample_common = sample_entire[sample_entire['DOH'] == 'YYY']
# Sanity Check =================================
print(f'N_Elements of Common Sample: {len(sample_common)}')
print(f'N_Elements of Entire Sample: {len(sample_entire)}')
print(f'N_Elements of Control Sample: {len(sample_control)}')
print(f'N_Elements of Gaia Sample: {len(sample_gaia)}')
#Read Gaia * IR catalogues =====================
warnings.simplefilter('ignore', AstropyWarning)
sample_t = Table.read('sample_common_x_2mass-result.vot') # Gaia Server [2MASS] * Sample Common
sample_w = Table.read('sample_common_x_wise-result.vot') # Gaia Server [WISE] * Sample Common
print('Gaia-Dawnloaded ==================')
print(f'2MASS/WISE * Gaia N_els: {len(sample_t)} {len(sample_w)}')
# Remove Masked Elements =======================
sample_t = sample_t[sample_t['ph_qual'].mask == False]
sample_w = sample_w[sample_w['ph_qual'].mask == False]
print()
print('Removing Masked Elements =========')
print(f'2MASS/WISE * Gaia N_els: {len(sample_t)} {len(sample_w)}')
# Convert Quality Flag to string ===============
sample_t['ph_qual'] = [inp.decode('utf-8') for inp in sample_t['ph_qual']]
sample_w['ph_qual'] = [inp.decode('utf-8') for inp in sample_w['ph_qual']]
sample_w['cc_flag'] = [inp.decode('utf-8') for inp in sample_w['cc_flag']]
# Rename for later =============================
sample_t['2MASS_ID'] = [inp.decode('utf-8') for inp in sample_t['original_ext_source_id']]
sample_t.remove_columns(['original_ext_source_id', 'ra', 'dec']) # To avoid duplicated Ra, Dec
# Merge WISE & 2MASS catalogues ================
merged = join(sample_w, sample_t, keys='source_id')
# Create new columns ===========================
merged['Ks_flag'] = [inp[-1:] for inp in merged['ph_qual_2']] # Extract Ks Quality Flags for later (see below)
merged['W1_flag'] = [inp[0:1] for inp in merged['ph_qual_1']] # Extract W1 Quality Flags for later (see below)
merged['W2_flag'] = [inp[1:2] for inp in merged['ph_qual_1']] # Extract W2 Quality Flags for later (see below)
merged['W3_flag'] = [inp[2:3] for inp in merged['ph_qual_1']] # Extract W3 Quality Flags for later (see below)
merged['W4_flag'] = [inp[3:4] for inp in merged['ph_qual_1']] # Extract W4 Quality Flags for later (see below)
print('Merged Sample ==========')
print(f'MERGED N_els: {len(merged):10.0f}')
# Clean sample =================================
els_1_1 = (merged['W1_flag'] == 'A') | (merged['W1_flag'] == 'B')
els_1_2 = (merged['W2_flag'] == 'A') | (merged['W2_flag'] == 'B')
els_1_3 = (merged['W3_flag'] == 'A') | (merged['W3_flag'] == 'B')
els_1_4 = (merged['W4_flag'] == 'A') | (merged['W4_flag'] == 'B')
els_1 = els_1_1 & els_1_2 & els_1_3 & els_1_4 # Photometry Quality Flag
els_2 = merged['ext_flag'] <2 # Extended Source Flag
els_3 = merged['cc_flag'] == '0000' # Artifact Flag
merged_cl = merged[els_1 & els_2 & els_3]
print('CLEANED Merged Sample =============')
print(f'MERGED N_els: {len(merged_cl):10.0f}')
# Sanity Check for 2MASS photometry ============
for inp in merged_cl['ph_qual_2']:
if inp != 'AAA': print('QFlag != AAA')
# Find Control sample elements =================
sample_control['control'] = ['Y'] * len(sample_control) # Add Column
merged_cl = join(merged_cl, sample_control['control', 'source_id'], keys='source_id', join_type='left')
merged_cl['control'][merged_cl['control'].mask == True] = 'N'
inp = len(merged_cl[ merged_cl['control'] == 'Y'])
print(f'Control Sources in 2MASS & WISE: {inp}')
merged_cl[0:3]
# Include SIMBAD References count ==============
# Neeed to identify the NEW discs
simbad = Table.read('simbad.xml')
simbad = simbad['TYPED_ID', 'MAIN_ID', 'NB_REF']
simbad['source_id'] = [np.int(inp[9:].decode('utf-8')) for inp in simbad['TYPED_ID']]
merged_cl = join(merged_cl, simbad, keys='source_id', join_type='left')
merged_cl['NB_REF'][merged_cl['NB_REF'].mask == True] = 0
# Remove flag cols =============================
merged_cl.remove_columns(['ext_flag', 'cc_flag', 'Ks_flag', 'W1_flag', 'W2_flag', 'W3_flag', 'W4_flag', 'TYPED_ID','MAIN_ID'])
# Save Table ===================================
merged_cl.write('08_IR_crossmatch.vot', format = 'votable', overwrite = True)
#Export for WISE verification ==================
file = '08_IR_crossmatch_WISE_check.txt' # Input file for IPAC/WISE webpage. WISE (.fits) maps for each source are downloaded from here.
merged_cl['artifact'] = ['N'] * len(merged_cl)
merged_cl.sort('ra')
merged_cl['ra', 'dec', 'source_id', 'artifact'].write(file, format ='ipac', overwrite = True)
merged_cl[0:3]
# Quick Sanity Check ====
len(merged_cl), len(merged_cl[merged_cl['control'] == 'Y']), len(merged_cl[merged_cl['control'] == 'N']), len(merged_cl[merged_cl['NB_REF'] == 0])
###Output
_____no_output_____ |
site/ja/tutorials/keras/classification.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中のモデルが不正確であるほど大きな値となる関数です。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
Likewise, the training set contains 60,000 labels:
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
Each label is an integer between 0 and 9:
###Code
train_labels
###Output
_____no_output_____
###Markdown
The test set contains 10,000 images, each again 28×28 pixels:
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
And the test set contains 10,000 labels:
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
Preprocess the data

The data must be preprocessed before training the network. If you inspect the first image, you can see that the pixel values fall in the range 0 to 255:
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
Scale these values to the range 0 to 1 before feeding them to the neural network; to do so, divide the pixel values by 255. It is important that the **training set** and the **test set** are preprocessed in the same way:
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
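###Markdown
As a quick sanity check (an optional step that is not part of the original tutorial), you can confirm that the pixel values now lie between 0 and 1:
###Code
# Optional check: after dividing by 255, pixel values should lie in [0, 1]
print(train_images.min(), train_images.max())
print(test_images.min(), test_images.max())
###Output
_____no_output_____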
###Markdown
Let's display the first 25 images from the **training set** with their class names, to confirm that the data is in the correct format before building and training the network:
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
Build the model

To build a neural network, you first define the model's layers and then compile the model.

Set up the layers

The basic building block of a neural network is the **layer**. Layers extract "representations" from the data fed into them, and those representations are expected to be more "meaningful" for the problem at hand. Most deep learning models consist of chaining together simple layers. Most layers, such as `tf.keras.layers.Dense`, have parameters that are learned during training.
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
The first layer in this network, `tf.keras.layers.Flatten`, transforms each image from a two-dimensional array (28×28 pixels) into a one-dimensional array of 28×28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up side by side. It has no parameters to learn; it only reformats the data.

After the pixels are flattened, the network continues with two `tf.keras.layers.Dense` layers. These are densely connected, or fully connected, layers of neurons. The first `Dense` layer has 128 nodes (or neurons). The second and final layer is a 10-node **softmax** layer, which returns an array of 10 probabilities that sum to 1. Each node outputs the probability that the current image belongs to one of the 10 classes.

Compile the model

Before the model is ready for training, a few more settings need to be added. They are added during the model's **compile** step:

* **Loss function** — measures how accurate the model is during training. Minimizing this function steers the model in the right direction.
* **Optimizer** — determines how the model is updated based on the data it sees and the value of the loss function.
* **Metrics** — used to monitor the training and testing steps. The example below uses *accuracy*, the fraction of images that are correctly classified.
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
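###Markdown
If you want to see how many parameters each layer will learn, Keras can print a model summary. This is an optional inspection step, not part of the original tutorial: the `Flatten` layer should report 0 parameters, the first `Dense` layer 784*128 + 128 = 100,480, and the output layer 128*10 + 10 = 1,290.
###Code
# Optional: inspect layer output shapes and parameter counts
model.summary()
###Output
_____no_output_____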
###Markdown
Train the model

Training the neural network requires the following steps:

1. Feed the training data to the model — in this example, the `train_images` and `train_labels` arrays.
2. The model learns the association between images and labels.
3. Have the model make predictions on the test set — in this example, the `test_images` array — and then check the predictions against the `test_labels` array.

To start training, call the `model.fit` method, so named because it "fits" the model to the training data:
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy are displayed. This model reaches an accuracy of about 0.88 (that is, 88%) on the training data.

Evaluate accuracy

Next, compare how the model performs on the test dataset:
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
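###Markdown
To see the gap discussed below more directly, you can also evaluate the model on the training set and compare the two accuracies. This comparison is an optional addition, not part of the original notebook:
###Code
# Optional: compare training accuracy with test accuracy to gauge overfitting
train_loss, train_acc = model.evaluate(train_images, train_labels, verbose=2)
print('\nTrain accuracy:', train_acc, ' Test accuracy:', test_acc)
###Output
_____no_output_____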
###Markdown
As you can see, the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of **overfitting**: the phenomenon where a machine learning model performs worse on new data than it did during training.

Make predictions

With the model trained, you can use it to make predictions about images:
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
These are the model's predictions for each image in the test set. Let's look at the first prediction:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
A prediction is an array of 10 numbers representing the model's "confidence" that the image corresponds to each of the 10 articles of clothing. Let's see which label has the highest confidence:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
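###Markdown
To turn that index into a human-readable class name, you can look it up in the `class_names` list defined earlier. This small helper is an optional addition, not part of the original notebook:
###Code
# Optional: map the predicted label index to its class name and confidence
predicted_index = np.argmax(predictions[0])
print(class_names[predicted_index], predictions[0][predicted_index])
###Output
_____no_output_____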
###Markdown
So the model is most confident that this image is an ankle boot, `class_names[9]`. Let's check the test label to see whether this is correct:
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
We can graph all 10 channels (the full set of class predictions):
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
Let's look at the 0th image together with its prediction and the prediction array:
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Let's display several images along with their predictions. Correct predictions are labelled in blue and incorrect predictions in red. The number shows the percentage (out of 100) for the predicted label. Note that the model can be wrong even when it looks confident.
###Code
# Plot the first X test images with their predicted labels and true labels.
# Correct predictions are shown in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
Finally, use the trained model to make a prediction for a single image.
###Code
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` models are built to make predictions on a **batch**, or collection, of examples at once. So even when using a single image, you need to add it to a list:
###Code
# Add the image to a batch where it is the only member
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
Now make the prediction:
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
The return value of `model.predict` is a list of lists, one for each image in the batch. Grab the prediction for our (only) image in the batch:
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
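###Markdown
As a final optional step (not part of the original notebook), you can print the predicted class name and confidence for this single image and compare it with the true label:
###Code
# Optional: report the single-image prediction in a readable form
single_index = np.argmax(predictions_single[0])
print('Predicted:', class_names[single_index],
      'confidence: {:.2f}'.format(predictions_single[0][single_index]))
print('True label:', class_names[test_labels[0]])
###Output
_____no_output_____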
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
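As a small optional numeric check (not part of the original guide), the dtype and value range of the raw pixels can be printed directly; they should fall within the 0 to 255 range described above:
###Code
# Optional check (not in the original guide): dtype and value range of the raw images.
print(train_images.dtype)
print(train_images.min(), train_images.max())
###Output
_____no_output_____
###Markdown
The same range can be seen by plotting the first image with a colorbar: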
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
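Before compiling, the architecture described above can be inspected with `model.summary()`; this is a small optional addition, not part of the original tutorial. The Flatten layer should report zero trainable parameters, with all weights held by the two Dense layers.
###Code
# Optional (not in the original guide): list each layer and its parameter count.
model.summary()
###Output
_____no_output_____
###Markdown
The model is then compiled with the settings described above: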
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
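Before doing that, here is an optional aside (not part of the original guide): since the final layer is a softmax, the ten confidence values should sum to approximately 1.
###Code
# Optional check (not in the original guide): the ten values should sum to about 1.
np.sum(predictions[0])
###Output
_____no_output_____
###Markdown
Now, the label with the highest confidence: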
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 TensorFlow.org で表示 Run in Google Colab GitHub でソースを表示 ノートブックをダウンロード このガイドでは、スニーカーやシャツなど、身に着けるものの画像を分類するニューラルネットワークのモデルをトレーニングします。すべての詳細を理解できなくても問題ありません。ここでは、完全な TensorFlow プログラムについて概説し、細かいところはその過程において見ていきます。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Fashion MNIST データセットをインポートする このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) データセットを使用します。このデータセットには、10 カテゴリの 70,000 のグレースケール画像が含まれています。次のように、画像は低解像度(28 x 28 ピクセル)で個々の衣料品を示しています。 図 1. Fashion-MNIST サンプル (作成者:Zalando、MIT ライセンス)Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNIST を使うのは、目先を変える意味もありますが、普通の MNIST よりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確認するために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000 枚の画像を使用してネットワークをトレーニングし、10,000 枚の画像を使用して、ネットワークが画像の分類をどの程度正確に学習したかを評価します。Tensor Flow から直接 Fashion MNIST にアクセスできます。Tensor Flow から直接 [Fashion MNIST データをインポートして読み込みます](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/fashion_mnist/load_data)。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
読み込んだデータセットは、NumPy 配列になります。

- `train_images` と `train_labels` の 2 つの配列は、モデルのトレーニングに使用される*トレーニング用データセット*です。
- モデルは、*テストセット*、`test_images`および`test_labels` 配列に対してテストされます。

画像は 28×28 の NumPy 配列から構成されています。それぞれのピクセルの値は 0 から 255 の間です。*ラベル*は、0 から 9 までの整数の配列です。それぞれの数字が下表のように、衣料品の*クラス*に対応しています。

| Label | Class |
|-------|-------|
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |

画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルのトレーニングを行う前に、データセットの形式を見てみましょう。下記のように、トレーニング用データセットには 28 × 28 ピクセルの画像が 60,000 含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、トレーニング用データセットには 60,000 のラベルが含まれています。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0 から 9 までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000 の画像が含まれます。画像は 28 × 28 ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには 10,000 のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークをトレーニングする前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は 0 から 255 の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
これらの値をニューラルネットワークモデルに供給する前に、0 から 1 の範囲にスケーリングします。これを行うには、値を 255 で割ります。*トレーニングセット*と*テストセット*を同じ方法で前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルのレイヤーを定義し、その後モデルをコンパイルします。 レイヤーの設定ニューラルネットワークの基本的な構成要素は、[*レイヤー*](https://www.tensorflow.org/api_docs/python/tf/keras/layers)です。レイヤーは、レイヤーに入力されたデータから表現を抽出します。 これらの表現は解決しようとする問題に有用であることが望まれます。ディープラーニングモデルのほとんどは、単純なレイヤーの積み重ねで構成されています。`tf.keras.layers.Dense` のようなレイヤーのほとんどには、トレーニング中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)  # returns raw logits, as described below
])
###Output
_____no_output_____
###Markdown
このネットワークの最初のレイヤーは、`tf.keras.layers.Flatten` です。このレイヤーは、画像を(28 × 28 ピクセルの)2 次元配列から、28×28=784 ピクセルの、1 次元配列に変換します。このレイヤーが、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。このレイヤーには学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは 2 つの `tf.keras.layers.Dense` レイヤーとなります。これらのレイヤーは、密結合あるいは全結合されたニューロンのレイヤーとなります。最初の `Dense` レイヤーには、128 個のノード(あるはニューロン)があります。最後のレイヤーでもある 2 番めのレイヤーは、長さが 10 のロジット配列を返します。それぞれのノードは、今見ている画像が 10 個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルのトレーニングの準備が整う前に、さらにいくつかの設定が必要です。これらは、モデルの[*コンパイル*](https://www.tensorflow.org/api_docs/python/tf/keras/Modelcompile)ステップ中に追加されます。- [*損失関数*](https://www.tensorflow.org/api_docs/python/tf/keras/losses) —これは、トレーニング中のモデルの正解率を測定します。この関数を最小化して、モデルを正しい方向に「操縦」する必要があります。- [*オプティマイザ*](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers) —これは、モデルが表示するデータとその損失関数に基づいてモデルが更新される方法です。- [*指標*](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) —トレーニングとテストの手順を監視するために使用されます。次の例では、正しく分類された画像の率である正解率を使用しています。
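As a brief optional aside (not part of the original guide), `from_logits=True` tells the loss function that it receives raw logits and should apply the softmax internally before computing the cross-entropy. A minimal sketch with made-up values:
###Code
# Minimal sketch with assumed example values (not from the original guide).
# The loss accepts raw logits and applies softmax internally.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
example_logits = tf.constant([[2.0, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
example_label = tf.constant([0])
print(loss_fn(example_label, example_logits).numpy())
###Output
_____no_output_____
###Markdown
With that in mind, the model is compiled as follows: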
###Code
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークモデルのトレーニングには、次の手順が必要です。1. モデルトレーニング用データを投入します。この例では、トレーニングデータは `train_images` および `train_labels` 配列にあります。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます。この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。4. 予測が `test_labels` 配列のラベルと一致することを確認します。 モデルに投入するトレーニングを開始するには、[`model.fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Modelfit) メソッドを呼び出します。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルのトレーニングの進行とともに、損失値と正解率が表示されます。このモデルの場合、トレーニング用データでは 0.91 (すなわち 91%) の正解率に達します。 正解率を評価する次に、モデルがテストデータセットでどのように機能するかを比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、トレーニング用データセットでの正解率よりも少し低くなります。このトレーニング時の正解率とテスト時の正解率の差は、**過適合**の一例です。過適合とは、新しいデータに対する機械学習モデルの性能が、トレーニング時と比較して低下する現象です。過適合モデルは、トレーニングデータセットのノイズと詳細を「記憶」するため、新しいデータでのモデルのパフォーマンスに悪影響を及ぼします。詳細については、以下を参照してください。- [過適合のデモ](https://www.tensorflow.org/tutorials/keras/overfit_and_underfitdemonstrate_overfitting)- [過適合を防ぐためのストラテジー](https://www.tensorflow.org/tutorials/keras/overfit_and_underfitstrategies_to_prevent_overfitting) 予測するトレーニングされたモデルを使用して、いくつかの画像に関する予測を行うことができます。モデルの線形出力は、[ロジット](https://developers.google.com/machine-learning/glossarylogits)です。ソフトマックスレイヤーをアタッチして、ロジットを解釈しやすい確率に変換します。
###Code
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
このモデルは、この画像が、アンクルブーツ、`class_names[9]`である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
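###Markdown
Beyond this single example, several predictions can be compared against the true labels at once; this is an optional check, not part of the original guide.
###Code
# Optional check (not in the original guide): predicted class indices vs. true labels
# for the first five test images.
print(np.argmax(predictions[:5], axis=1))
print(test_labels[:5])
###Output
_____no_output_____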
###Markdown
これをグラフ化して、10 クラスの予測の完全なセットを確認します。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
予測を検証するトレーニングされたモデルを使用して、いくつかの画像に関する予測を行うことができます。 0 番目の画像、予測、および予測配列を見てみましょう。 正しい予測ラベルは青で、間違った予測ラベルは赤です。 数値は、予測されたラベルのパーセンテージ (/100) を示します。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
いくつかの画像をそれらの予測とともにプロットしてみましょう。確信度が高い場合でも、モデルが間違っていることがあることに注意してください。
###Code
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
トレーニングされたモデルを使用する最後に、トレーニング済みモデルを使って 1 つの画像に対する予測を行います。
###Code
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中のバッチあるいは「集まり」についてまとめて予測を行うように最適化されています。そのため、1 つの画像を使う場合でも、リスト化する必要があります。
###Code
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
`tf.keras.Model.predict` は、リストのリストを返します。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが) 予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
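###Markdown
Mapping that index back to a human-readable class name is a small optional addition, not part of the original guide:
###Code
# Optional (not in the original guide): readable class name for the predicted index.
class_names[np.argmax(predictions_single[0])]
###Output
_____no_output_____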
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 TensorFlow.org で表示 Run in Google Colab GitHub でソースを表示 ノートブックをダウンロード このガイドでは、スニーカーやシャツなど、身に着けるものの画像を分類するニューラルネットワークのモデルをトレーニングします。すべての詳細を理解できなくても問題ありません。ここでは、完全な TensorFlow プログラムについて概説し、細かいところはその過程において見ていきます。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
# TensorFlow and tf.keras
import tensorflow as tf
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Fashion MNIST データセットをインポートする このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) データセットを使用します。このデータセットには、10 カテゴリの 70,000 のグレースケール画像が含まれています。次のように、画像は低解像度(28 x 28 ピクセル)で個々の衣料品を示しています。 図 1. Fashion-MNIST サンプル (作成者:Zalando、MIT ライセンス) Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNIST を使うのは、目先を変える意味もありますが、普通の MNIST よりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確認するために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000 枚の画像を使用してネットワークをトレーニングし、10,000 枚の画像を使用して、ネットワークが画像の分類をどの程度正確に学習したかを評価します。Tensor Flow から直接 Fashion MNIST にアクセスできます。Tensor Flow から直接 [Fashion MNIST データをインポートして読み込みます](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/fashion_mnist/load_data)。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
読み込んだデータセットは、NumPy 配列になります。- `train_images` と `train_labels` の 2 つの配列は、モデルのトレーニングに使用される*トレーニング用データセット*です。- モデルは、*テストセット*、`test_images`および`test_labels` 配列に対してテストされます。画像は 28×28 の NumPy 配列から構成されています。それぞれのピクセルの値は 0 から 255 の間です。*ラベル*は、0 から 9 までの整数の配列です。それぞれの数字が下表のように、衣料品の*クラス*に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルのトレーニングを行う前に、データセットの形式を見てみましょう。下記のように、トレーニング用データセットには 28 × 28 ピクセルの画像が 60,000 含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、トレーニング用データセットには 60,000 のラベルが含まれています。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0 から 9 までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000 の画像が含まれます。画像は 28 × 28 ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには 10,000 のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークをトレーニングする前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は 0 から 255 の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
これらの値をニューラルネットワークモデルに供給する前に、0 から 1 の範囲にスケーリングします。これを行うには、値を 255 で割ります。*トレーニングセット*と*テストセット*を同じ方法で前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルのレイヤーを定義し、その後モデルをコンパイルします。 レイヤーの設定ニューラルネットワークの基本的な構成要素は、[*レイヤー*](https://www.tensorflow.org/api_docs/python/tf/keras/layers)です。レイヤーは、レイヤーに入力されたデータから表現を抽出します。 これらの表現は解決しようとする問題に有用であることが望まれます。ディープラーニングモデルのほとんどは、単純なレイヤーの積み重ねで構成されています。`tf.keras.layers.Dense` のようなレイヤーのほとんどには、トレーニング中に学習されるパラメータが存在します。
###Code
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
このネットワークの最初のレイヤーは、`tf.keras.layers.Flatten` です。このレイヤーは、画像を(28 × 28 ピクセルの)2 次元配列から、28×28=784 ピクセルの、1 次元配列に変換します。このレイヤーが、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。このレイヤーには学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは 2 つの `tf.keras.layers.Dense` レイヤーとなります。これらのレイヤーは、密結合あるいは全結合されたニューロンのレイヤーとなります。最初の `Dense` レイヤーには、128 個のノード(あるはニューロン)があります。最後のレイヤーでもある 2 番めのレイヤーは、長さが 10 のロジット配列を返します。それぞれのノードは、今見ている画像が 10 個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルのトレーニングの準備が整う前に、さらにいくつかの設定が必要です。これらは、モデルの[*コンパイル*](https://www.tensorflow.org/api_docs/python/tf/keras/Modelcompile)ステップ中に追加されます。- [*損失関数*](https://www.tensorflow.org/api_docs/python/tf/keras/losses) —これは、トレーニング中のモデルの正解率を測定します。この関数を最小化して、モデルを正しい方向に「操縦」する必要があります。- [*オプティマイザ*](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers) —これは、モデルが表示するデータとその損失関数に基づいてモデルが更新される方法です。- [*指標*](https://www.tensorflow.org/api_docs/python/tf/keras/metrics) —トレーニングとテストの手順を監視するために使用されます。次の例では、正しく分類された画像の率である正解率を使用しています。
###Code
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークモデルのトレーニングには、次の手順が必要です。1. モデルトレーニング用データを投入します。この例では、トレーニングデータは `train_images` および `train_labels` 配列にあります。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます。この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。4. 予測が `test_labels` 配列のラベルと一致することを確認します。 モデルに投入するトレーニングを開始するには、[`model.fit`](https://www.tensorflow.org/api_docs/python/tf/keras/Modelfit) メソッドを呼び出します。
###Code
model.fit(train_images, train_labels, epochs=10)
###Output
_____no_output_____
###Markdown
モデルのトレーニングの進行とともに、損失値と正解率が表示されます。このモデルの場合、トレーニング用データでは 0.91 (すなわち 91%) の正解率に達します。 正解率を評価する次に、モデルがテストデータセットでどのように機能するかを比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、トレーニング用データセットでの正解率よりも少し低くなります。このトレーニング時の正解率とテスト時の正解率の差は、**過適合**の一例です。過適合とは、新しいデータに対する機械学習モデルの性能が、トレーニング時と比較して低下する現象です。過適合モデルは、トレーニングデータセットのノイズと詳細を「記憶」するため、新しいデータでのモデルのパフォーマンスに悪影響を及ぼします。詳細については、以下を参照してください。- [過適合のデモ](https://www.tensorflow.org/tutorials/keras/overfit_and_underfitdemonstrate_overfitting)- [過適合を防ぐためのストラテジー](https://www.tensorflow.org/tutorials/keras/overfit_and_underfitstrategies_to_prevent_overfitting) 予測するトレーニングされたモデルを使用して、いくつかの画像に関する予測を行うことができます。モデルの線形出力は、[ロジット](https://developers.google.com/machine-learning/glossarylogits)です。ソフトマックスレイヤーをアタッチして、ロジットを解釈しやすい確率に変換します。
###Code
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
このモデルは、この画像が、アンクルブーツ、`class_names[9]`である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
これをグラフ化して、10 クラスの予測の完全なセットを確認します。
###Code
def plot_image(i, predictions_array, true_label, img):
true_label, img = true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
true_label = true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
予測を検証するトレーニングされたモデルを使用して、いくつかの画像に関する予測を行うことができます。 0 番目の画像、予測、および予測配列を見てみましょう。 正しい予測ラベルは青で、間違った予測ラベルは赤です。 数値は、予測されたラベルのパーセンテージ (/100) を示します。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
いくつかの画像をそれらの予測とともにプロットしてみましょう。確信度が高い場合でも、モデルが間違っていることがあることに注意してください。
###Code
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
トレーニングされたモデルを使用する最後に、トレーニング済みモデルを使って 1 つの画像に対する予測を行います。
###Code
# Grab an image from the test dataset.
img = test_images[1]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中のバッチあるいは「集まり」についてまとめて予測を行うように最適化されています。そのため、1 つの画像を使う場合でも、リスト化する必要があります。
###Code
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = probability_model.predict(img)
print(predictions_single)
plot_value_array(1, predictions_single[0], test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
`tf.keras.Model.predict` は、リストのリストを返します。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが) 予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
はじめてのニューラルネットワーク:分類問題の初歩 View on TensorFlow.org Run in Google Colab View source on GitHub Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja)にご連絡ください。 このガイドでは、スニーカーやシャツなど、身に着けるものの写真を分類するニューラルネットワークのモデルを訓練します。すべての詳細を理解できなくても問題ありません。TensorFlowの全体を早足で掴むためのもので、詳細についてはあとから見ていくことになります。このガイドでは、TensorFlowのモデルを構築し訓練するためのハイレベルのAPIである [tf.keras](https://www.tensorflow.org/guide/keras)を使用します。
###Code
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow と tf.keras のインポート
import tensorflow as tf
from tensorflow import keras
# ヘルパーライブラリのインポート
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
###Output
_____no_output_____
###Markdown
ファッションMNISTデータセットのロード このガイドでは、[Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist)を使用します。Fashion MNISTには10カテゴリーの白黒画像70,000枚が含まれています。それぞれは下図のような1枚に付き1種類の衣料品が写っている低解像度(28×28ピクセル)の画像です。 <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> Figure 1. Fashion-MNIST samples (by Zalando, MIT License). Fashion MNISTは、画像処理のための機械学習での"Hello, World"としてしばしば登場する[MNIST](http://yann.lecun.com/exdb/mnist/) データセットの代替として開発されたものです。MNISTデータセットは手書きの数字(0, 1, 2 など)から構成されており、そのフォーマットはこれから使うFashion MNISTと全く同じです。Fashion MNISTを使うのは、目先を変える意味もありますが、普通のMNISTよりも少しだけ手応えがあるからでもあります。どちらのデータセットも比較的小さく、アルゴリズムが期待したとおりに機能するかどうかを確かめるために使われます。プログラムのテストやデバッグのためには、よい出発点になります。ここでは、60,000枚の画像を訓練に、10,000枚の画像を、ネットワークが学習した画像分類の正確性を評価するのに使います。TensorFlowを使うと、下記のようにFashion MNISTのデータを簡単にインポートし、ロードすることが出来ます。
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
ロードしたデータセットは、NumPy配列になります。* `train_images` と `train_labels` の2つの配列は、モデルの訓練に使用される**訓練用データセット**です。* 訓練されたモデルは、 `test_images` と `test_labels` 配列からなる**テスト用データセット**を使ってテストします。画像は28×28のNumPy配列から構成されています。それぞれのピクセルの値は0から255の間の整数です。**ラベル**(label)は、0から9までの整数の配列です。それぞれの数字が下表のように、衣料品の**クラス**(class)に対応しています。 Label Class 0 T-shirt/top 1 Trouser 2 Pullover 3 Dress 4 Coat 5 Sandal 6 Shirt 7 Sneaker 8 Bag 9 Ankle boot 画像はそれぞれ単一のラベルに分類されます。データセットには上記の**クラス名**が含まれていないため、後ほど画像を出力するときのために、クラス名を保存しておきます。
###Code
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
データの観察モデルの訓練を行う前に、データセットのフォーマットを見てみましょう。下記のように、訓練用データセットには28×28ピクセルの画像が60,000枚含まれています。
###Code
train_images.shape
###Output
_____no_output_____
###Markdown
同様に、訓練用データセットには60,000個のラベルが含まれます。
###Code
len(train_labels)
###Output
_____no_output_____
###Markdown
ラベルはそれぞれ、0から9までの間の整数です。
###Code
train_labels
###Output
_____no_output_____
###Markdown
テスト用データセットには、10,000枚の画像が含まれます。画像は28×28ピクセルで構成されています。
###Code
test_images.shape
###Output
_____no_output_____
###Markdown
テスト用データセットには10,000個のラベルが含まれます。
###Code
len(test_labels)
###Output
_____no_output_____
###Markdown
データの前処理ネットワークを訓練する前に、データを前処理する必要があります。最初の画像を調べてみればわかるように、ピクセルの値は0から255の間の数値です。
###Code
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
ニューラルネットワークにデータを投入する前に、これらの値を0から1までの範囲にスケールします。そのためには、画素の値を255で割ります。**訓練用データセット**と**テスト用データセット**は、同じように前処理することが重要です。
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
**訓練用データセット**の最初の25枚の画像を、クラス名付きで表示してみましょう。ネットワークを構築・訓練する前に、データが正しいフォーマットになっていることを確認します。
###Code
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
モデルの構築ニューラルネットワークを構築するには、まずモデルの階層を定義し、その後モデルをコンパイルします。 層の設定ニューラルネットワークを形作る基本的な構成要素は**層**(layer)です。層は、入力されたデータから「表現」を抽出します。それらの「表現」は、今取り組もうとしている問題に対して、より「意味のある」ものであることが期待されます。ディープラーニングモデルのほとんどは、単純な層の積み重ねで構成されています。`tf.keras.layers.Dense` のような層のほとんどには、訓練中に学習されるパラメータが存在します。
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
###Output
_____no_output_____
###Markdown
このネットワークの最初の層は、`tf.keras.layers.Flatten` です。この層は、画像を(28×28ピクセルの)2次元配列から、28×28=784ピクセルの、1次元配列に変換します。この層が、画像の中に積まれているピクセルの行を取り崩し、横に並べると考えてください。この層には学習すべきパラメータはなく、ただデータのフォーマット変換を行うだけです。ピクセルが1次元化されたあと、ネットワークは2つの `tf.keras.layers.Dense` 層となります。これらの層は、密結合あるいは全結合されたニューロンの層となります。最初の `Dense` 層には、128個のノード(あるはニューロン)があります。最後の層でもある2番めの層は、10ノードの**softmax**層です。この層は、合計が1になる10個の確率の配列を返します。それぞれのノードは、今見ている画像が10個のクラスのひとつひとつに属する確率を出力します。 モデルのコンパイルモデルが訓練できるようになるには、いくつかの設定を追加する必要があります。それらの設定は、モデルの**コンパイル**(compile)時に追加されます。* **損失関数**(loss function) —訓練中にモデルがどれくらい正確かを測定します。この関数の値を最小化することにより、訓練中のモデルを正しい方向に向かわせようというわけです。* **オプティマイザ**(optimizer)—モデルが見ているデータと、損失関数の値から、どのようにモデルを更新するかを決定します。* **メトリクス**(metrics) —訓練とテストのステップを監視するのに使用します。下記の例では*accuracy* (正解率)、つまり、画像が正しく分類された比率を使用しています。
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
モデルの訓練ニューラルネットワークの訓練には次のようなステップが必要です。1. モデルに訓練用データを投入します—この例では `train_images` と `train_labels` の2つの配列です。2. モデルは、画像とラベルの対応関係を学習します。3. モデルにテスト用データセットの予測(分類)を行わせます—この例では `test_images` 配列です。その後、予測結果と `test_labels` 配列を照合します。 訓練を開始するには、`model.fit` メソッドを呼び出します。モデルを訓練用データに "fit"(適合)させるという意味です。
###Code
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
モデルの訓練の進行とともに、損失値と正解率が表示されます。このモデルの場合、訓練用データでは0.88(すなわち88%)の正解率に達します。 正解率の評価次に、テスト用データセットに対するモデルの性能を比較します。
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
ご覧の通り、テスト用データセットでの正解率は、訓練用データセットでの正解率よりも少し低くなります。この訓練時の正解率とテスト時の正解率の差は、**過学習**(over fitting)の一例です。過学習とは、新しいデータに対する機械学習モデルの性能が、訓練時と比較して低下する現象です。 予測するモデルの訓練が終わったら、そのモデルを使って画像の分類予測を行うことが出来ます。
###Code
predictions = model.predict(test_images)
###Output
_____no_output_____
###Markdown
これは、モデルがテスト用データセットの画像のひとつひとつを分類予測した結果です。最初の予測を見てみましょう。
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
予測結果は、10個の数字の配列です。これは、その画像が10の衣料品の種類のそれぞれに該当するかの「確信度」を表しています。どのラベルが一番確信度が高いかを見てみましょう。
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
というわけで、このモデルは、この画像が、アンクルブーツ、`class_names[9]` である可能性が最も高いと判断したことになります。これが正しいかどうか、テスト用ラベルを見てみましょう。
###Code
test_labels[0]
###Output
_____no_output_____
###Markdown
10チャンネルすべてをグラフ化してみることができます。
###Code
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
###Output
_____no_output_____
###Markdown
0番目の画像と、予測、予測配列を見てみましょう。
###Code
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
予測の中のいくつかの画像を、予測値とともに表示してみましょう。正しい予測は青で、誤っている予測は赤でラベルを表示します。数字は予測したラベルのパーセント(100分率)を示します。自信があるように見えても間違っていることがあることに注意してください。
###Code
# X個のテスト画像、予測されたラベル、正解ラベルを表示します。
# 正しい予測は青で、間違った予測は赤で表示しています。
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, predictions, test_labels)
plt.show()
###Output
_____no_output_____
###Markdown
最後に、訓練済みモデルを使って1枚の画像に対する予測を行います。
###Code
# テスト用データセットから画像を1枚取り出す
img = test_images[0]
print(img.shape)
###Output
_____no_output_____
###Markdown
`tf.keras` モデルは、サンプルの中の**バッチ**(batch)あるいは「集まり」について予測を行うように作られています。そのため、1枚の画像を使う場合でも、リスト化する必要があります。
###Code
# 画像を1枚だけのバッチのメンバーにする
img = (np.expand_dims(img,0))
print(img.shape)
###Output
_____no_output_____
###Markdown
そして、予測を行います。
###Code
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
_ = plt.xticks(range(10), class_names, rotation=45)
###Output
_____no_output_____
###Markdown
`model.predict` メソッドの戻り値は、リストのリストです。リストの要素のそれぞれが、バッチの中の画像に対応します。バッチの中から、(といってもバッチの中身は1つだけですが)予測を取り出します。
###Code
np.argmax(predictions_single[0])
###Output
_____no_output_____ |
4_Zip Your Project Files and Submit.ipynb | ###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 317151 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1569550 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 36%)
adding: model.py (deflated 66%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 297748 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1535013 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 3_Inference.html (deflated 35%)
adding: model.py (deflated 67%)
adding: 2_Training.html (deflated 83%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 319549 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1464347 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
updating: 2_Training.html (deflated 83%)
updating: 3_Inference.html (deflated 36%)
updating: model.py (deflated 68%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 319153 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1373394 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 3_Inference.html (deflated 37%)
adding: 2_Training.html (deflated 83%)
adding: model.py (deflated 67%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 304413 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1510161 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 3_Inference.html (deflated 35%)
adding: model.py (deflated 69%)
adding: 2_Training.html (deflated 83%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 325387 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1360367 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 3_Inference.html (deflated 37%)
adding: model.py (deflated 68%)
adding: 2_Training.html (deflated 83%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 321801 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 2131309 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 33%)
adding: model.py (deflated 65%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
_____no_output_____
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
_____no_output_____
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 324632 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1368203 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 37%)
adding: model.py (deflated 67%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 319569 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1306541 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
updating: 3_Inference.html (deflated 38%)
updating: 2_Training.html (deflated 83%)
updating: model.py (deflated 67%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 323196 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1746006 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 3_Inference.html (deflated 35%)
adding: 2_Training.html (deflated 83%)
adding: model.py (deflated 65%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 373903 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1455097 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: model.py (deflated 67%)
adding: 3_Inference.html (deflated 37%)
adding: 2_Training.html (deflated 83%)
###Markdown
Submit Your ProjectAfter creating and downloading your zip file, click on the `Submit` button and follow the instructions for submitting your `project2.zip` file. Congratulations on completing this project and I hope you enjoyed it!
###Code
print('Done!')
###Output
Done!
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 326844 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1408336 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 3_Inference.html (deflated 37%)
adding: model.py (deflated 65%)
adding: 2_Training.html (deflated 83%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 329206 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1254226 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: model.py (deflated 67%)
adding: 3_Inference.html (deflated 39%)
adding: 2_Training.html (deflated 83%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 332079 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1084893 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: model.py (deflated 67%)
adding: 3_Inference.html (deflated 41%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 377408 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1449122 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 37%)
adding: model.py (deflated 67%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 380859 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1597333 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
updating: model.py (deflated 74%)
updating: 3_Inference.html (deflated 36%)
updating: 2_Training.html (deflated 83%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 319590 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1282648 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
updating: 2_Training.html (deflated 83%)
updating: 3_Inference.html (deflated 38%)
updating: model.py (deflated 66%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 346546 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1644793 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 37%)
adding: model.py (deflated 72%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 316630 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1415511 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: model.py (deflated 66%)
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 37%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 327297 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1308324 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: model.py (deflated 67%)
adding: 3_Inference.html (deflated 38%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 308541 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1486095 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
updating: 2_Training.html (deflated 83%)
updating: model.py (deflated 64%)
updating: 3_Inference.html (deflated 35%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 333522 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1589423 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 36%)
adding: model.py (deflated 62%)
###Markdown
Project SubmissionWhen you are ready to submit your project, meaning you have checked the [rubric](https://review.udacity.com/!/rubrics/1427/view) and made sure that you have completed all tasks and answered all questions. Then you are ready to compress your files and submit your solution!The following steps assume:1. All cells have been *run* in Notebooks 2 and 3 (and that progress has been saved).2. All questions in those notebooks have been answered.3. Your architecture in `model.py` is your best tested architecture.Please make sure all your work is saved before moving on. You do not need to change any code in these cells; this code is to help you submit your project, only.---The first thing we'll do, is convert your notebooks into `.html` files; these files will save the output of each cell and any code/text that you have modified and saved in those notebooks. Note that the first notebooks are not included because their contents will not affect your project review.
###Code
!jupyter nbconvert "2_Training.ipynb"
!jupyter nbconvert "3_Inference.ipynb"
###Output
[NbConvertApp] Converting notebook 2_Training.ipynb to html
[NbConvertApp] Writing 318137 bytes to 2_Training.html
[NbConvertApp] Converting notebook 3_Inference.ipynb to html
[NbConvertApp] Writing 1450268 bytes to 3_Inference.html
###Markdown
Zip the project filesNext, we'll zip all these notebook files and your `model.py` file into one compressed archive named `project2.zip`.After completing this step you should see this zip file appear in your home directory, where you can download it as seen in the image below, by selecting it from the list and clicking **Download**. This step may take a minute or two to complete.
###Code
!!apt-get -y update && apt-get install -y zip
!zip project2.zip -r . [email protected]
###Output
adding: 2_Training.html (deflated 83%)
adding: 3_Inference.html (deflated 37%)
adding: model.py (deflated 66%)
|
create_ndArray.ipynb | ###Markdown
0. Creating Ndarray

numpy.array(object, dtype=None, copy=True, order=None, subok=False, ndmin=0)

Parameters:

object : array_like. An array-like is any Python object that np.array can convert to an ndarray. From the source code, we can infer that the array_like object can be:
1. a NumPy array, or
2. a NumPy scalar, or
3. a Python scalar, or
4. any object which supports the PEP 3118 buffer interface, or
5. any object that supports the __array_struct__ or __array_interface__ interface, or
6. any object that supplies the __array__ function, or
7. any object that can be treated as a list of lists.

To sum up: an array, any object exposing the array interface, an object whose __array__ method returns an array, or any (nested) sequence.

dtype : data-type, optional. The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects in the sequence. This argument can only be used to 'upcast' the array. For downcasting, use the .astype(t) method.

copy : bool, optional. If true (default), then the object is copied. Otherwise, a copy will only be made if __array__ returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (dtype, order, etc.).

order : {'K', 'A', 'C', 'F'}, optional. Specify the memory layout of the array. If object is not an array, the newly created array will be in C order (row major) unless 'F' is specified, in which case it will be in Fortran order (column major). If object is an array, the following holds.

| order | no copy | copy=True |
|-------|---------|-----------|
| 'K' | unchanged | F & C order preserved, otherwise most similar order |
| 'A' | unchanged | F order if input is F and not C, otherwise C order |
| 'C' | C order | C order |
| 'F' | F order | F order |

When copy=False and a copy is made for other reasons, the result is the same as if copy=True, with some exceptions for A, see the Notes section. The default order is 'K'.

subok : bool, optional. If True, then sub-classes will be passed through, otherwise the returned array will be forced to be a base-class array (default).

ndmin : int, optional. Specifies the minimum number of dimensions that the resulting array should have. Ones will be pre-pended to the shape as needed to meet this requirement.

Returns: out : ndarray. An array object satisfying the specified requirements. When order is 'A' and object is an array in neither 'C' nor 'F' order, and a copy is forced by a change in dtype, then the order of the result is not necessarily 'C' as expected. This is likely a bug.
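As a quick, minimal sketch of how a few of these parameters behave (the array values below are arbitrary examples chosen for illustration):

```python
import numpy as np

# order controls the memory layout of the new array
a = np.array([[1, 2, 3], [4, 5, 6]], dtype=float, order='F')
print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])  # False True

# ndmin pads the shape with leading ones until the requested number of dimensions is reached
b = np.array([1, 2, 3], ndmin=3)
print(b.shape)  # (1, 1, 3)

# copy=False avoids a copy when the input is already a compatible ndarray
c = np.array(a, copy=False)
print(np.shares_memory(a, c))  # True
```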
###Code
import numpy as np

np.array([[1,2,3],[2,3,4]], dtype=int, copy=True, order='C', subok=False, ndmin=3)  # ndmin=3 pads the shape to (1, 2, 3)
np.array([(1,2),(3,4)], dtype=[('a','<i4'),('b','<i4')])  # structured array with two int32 fields 'a' and 'b'
np.array(np.mat('1 2; 3 4'))  # matrix object converted to a plain ndarray (only this last value is displayed by Jupyter)
###Output
_____no_output_____
###Markdown
Check internal memory layout: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.ndarray.html

1. Creating Series

class pandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)

One-dimensional ndarray with axis labels (including time series). Labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index. Statistical methods from ndarray have been overridden to automatically exclude missing data (currently represented as NaN).

Operations between Series (+, -, /, *, **) align values based on their associated index values; they need not be the same length. The result index will be the sorted union of the two indexes.

Parameters:

data : array-like, dict, or scalar value. Contains data stored in Series.

index : array-like or Index (1d). Values must be hashable and have the same length as data. Non-unique index values are allowed. Will default to RangeIndex(len(data)) if not provided. If both a dict and index sequence are used, the index will override the keys found in the dict. If data is an ndarray, index must be the same length as data. If no index is passed, one will be created having values [0, ..., len(data) - 1].

dtype : numpy.dtype or None. If None, dtype will be inferred.

copy : boolean, default False. Copy input data.
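As a small illustration of the index alignment described above (the labels and values are arbitrary):

```python
import pandas as pd

s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s2 = pd.Series([10, 20, 30], index=['b', 'c', 'd'])

# the result index is the sorted union of the two indexes;
# labels present in only one Series produce NaN
print(s1 + s2)
```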
###Code
import numpy as np
import pandas as pd

pd.Series(3)  # Series from a scalar
s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])  # Series from an ndarray with an explicit index
s.index
pd.Series({'a': 1., 'b': 2, 'c': 3})  # Series from a dict: the keys become the index
###Output
_____no_output_____
###Markdown
Note NaN (not a number) is the standard missing data marker used in pandas
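For example, passing an index that contains a label missing from the dict produces NaN at that label (a minimal sketch):

```python
import pandas as pd

# 'd' is not a key in the dict, so the resulting Series holds NaN at label 'd'
pd.Series({'a': 1., 'b': 2., 'c': 3.}, index=['a', 'b', 'c', 'd'])
```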
###Code
pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])
###Output
_____no_output_____
###Markdown
Series is ndarray-like. Series acts very similarly to an ndarray, and is a valid argument to most NumPy functions. However, things like slicing also slice the index.

2. Creating DataFrame

pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False)

Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure.

Parameters:

data : numpy ndarray (structured or homogeneous), dict, or DataFrame. Dict can contain Series, arrays, constants, or list-like objects.

index : Index or array-like. Index to use for the resulting frame. Will default to np.arange(n) if no indexing information is part of the input data and no index is provided.

columns : Index or array-like. Column labels to use for the resulting frame. Will default to np.arange(n) if no column labels are provided.

dtype : dtype, default None. Data type to force. Only a single dtype is allowed. If None, infer.

copy : boolean, default False. Copy data from inputs. Only affects DataFrame / 2d ndarray input.

1. The most straightforward way is to create it from a NumPy array.
###Code
a=pd.DataFrame(np.array([[10, 11], [20, 21]]))
a
b=pd.DataFrame(a)
b
###Output
_____no_output_____
###Markdown
Each row of the array forms a row in the DataFrame object. 2. A DataFrame can also be initialized by passing a list of Series objects.
###Code
df1 = pd.DataFrame([pd.Series(np.arange(10, 15)),
pd.Series(np.arange(15, 20))])
df1
###Output
_____no_output_____
###Markdown
The dimensions of a DataFrame object can be determined using its .shape property.
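For example, a self-contained sketch using the same small array as below:

```python
import numpy as np
import pandas as pd

# a 2-row, 2-column DataFrame has shape (rows, columns) == (2, 2)
df = pd.DataFrame(np.array([[10, 11], [20, 21]]))
print(df.shape)  # (2, 2)
```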
###Code
df = pd.DataFrame(np.array([[10, 11], [20, 21]]),
columns=['a', 'b'],
index=['r1', 'r2'])
df
###Output
_____no_output_____
###Markdown
The names of the columns of a DataFrame can be accessed with its .columns property: 3. A DataFrame object can also be created by passing a dictionary containing one or more Series objects, where the dictionary keys contain the column names and each series is one column of data:
###Code
s1 = pd.Series(np.arange(1, 6, 1))
s2 = pd.Series(np.arange(6, 11, 1))
df3=pd.DataFrame({'c1': s1, 'c2': s2})
df3
###Output
_____no_output_____
###Markdown
When the DataFrame is created, each series in the dictionary is aligned with each other by the index label, as it is added to the DataFrame object.
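To make the alignment visible, here is a minimal sketch with deliberately mismatched indexes (not part of the original example):

```python
import pandas as pd

s1 = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s2 = pd.Series([4, 5, 6], index=['b', 'c', 'd'])

# rows are aligned by index label; missing positions are filled with NaN
print(pd.DataFrame({'c1': s1, 'c2': s2}))
```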
###Code
df=pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
df
###Output
_____no_output_____
###Markdown
Selecting columns of a DataFrameSelecting the data in specific columns of a DataFrame is performed by using the [] operator. This can be passed either as a single object, or a list of objects.
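A minimal sketch of both forms (a single label versus a list of labels), using the same small frame as above:

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})

print(df['col1'])            # a single label returns a Series
print(df[['col2', 'col1']])  # a list of labels returns a DataFrame with those columns
```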
###Code
df
df['col1']
###Output
_____no_output_____ |
Algorithms-spectral-embedding.ipynb | ###Markdown
Algorithms: spectral embedding This work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import networkx as nx
### for computing partial eigenvectors
import scipy.linalg as LA
###Output
_____no_output_____
###Markdown
Spectral embedding

Let $G$ be a graph and $L$ its Laplacian matrix. The **spectral embedding** uses the eigenvectors of $L$ to generate positions for each vertex. Therefore, the spectral embedding is also called the **Laplacian embedding**.

Algorithm

Let $G$ be a graph on $n$ vertices and $L$ its Laplacian matrix. Let $d$ be the target dimension. Suppose the eigenvalues of $L$ are $\{\lambda_0=0, \lambda_1, \ldots, \lambda_{n-1}\}$ and the corresponding eigenvectors are $\{{\bf v}_0=\frac{1}{\sqrt{n}}{\bf 1}, {\bf v}_1, \ldots, {\bf v}_{n-1}\}$. (We may assume these eigenvectors are of length $1$, and they are mutually orthogonal.) Create a matrix $Y = \begin{bmatrix} | & \cdots & | \\ {\bf v}_1 & \cdots & {\bf v}_d \\ | & \cdots & | \\ \end{bmatrix} = \begin{bmatrix} - & {\bf y}_0 & - \\ \vdots & \vdots & \vdots \\ - & {\bf y}_{n-1} & - \\ \end{bmatrix}$. Then assign ${\bf y}_i$ to be the position for vertex $i$.

Properties of the spectral embedding

The $Y$ matrix has the properties below.
* $Y^\top Y=I$: column vectors are of unit length and are mutually orthogonal.
* $\operatorname{tr}(Y^\top LY) = \sum_{ij\in E(G)}\|{\bf y}_i-{\bf y}_j\|_2^2$ is the sum of the squared edge lengths.
* The chosen $Y$ minimizes $\operatorname{tr}(Y^\top LY)$ subject to $Y^\top Y=I$.

**Conclusion**: The spectral embedding tends to put adjacent vertices together.

Pseudocode

**Input**: a graph `g` and a target dimension `d`
**Output**: a dictionary {i: position of vertex i as an array}; the position is given by the spectral embedding onto $\mathbb{R}^d$

```Python
L = the Laplacian matrix of g
compute the eigenvectors v1, ..., vd
Y = the array whose columns are v1, ..., vd
pos = {i: Y[i] for i in range(g.order())}
```

Exercise

Let `g = nx.path_graph(10)`. Use `nx.laplacian_matrix` to find the Laplacian matrix of `g`.
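A direct Python translation of this pseudocode might look like the following minimal sketch (the function name `laplacian_embedding_sketch` is my own label; a fuller worked version appears in the sample code at the end of this notebook):

```python
import networkx as nx
import scipy.linalg as LA

def laplacian_embedding_sketch(g, d):
    """Return {i: position of vertex i in R^d} given by the spectral embedding."""
    L = nx.laplacian_matrix(g).toarray()
    # eigenvectors v1, ..., vd, skipping the constant eigenvector v0
    # (newer SciPy releases replace the eigvals= keyword with subset_by_index=)
    lam, Y = LA.eigh(L, eigvals=(1, d))
    return {i: Y[i] for i in range(g.order())}

# example usage
g = nx.path_graph(10)
pos = laplacian_embedding_sketch(g, 2)
```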
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseLet `g = nx.path_graph(10)`. The output of `nx.laplacian_matrix` is a sparse matrix. Use `.toarray` to transform it to an array.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseLet `L = 5*np.eye(5) - np.ones((5,5))`. Use `LA.eigh` to find the eigenvalues and the eigenvectors of `L`.Note: The module `LA` comes from ```Pythonimport scipy.linalg as LA```
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseLet `L = 5*np.eye(5) - np.ones((5,5))` and `d = 2`. The function `LA.eigh` has a keyword `eigvals` that allows you to compute certain eigenvalues and eigenvectors only. Use them to find `Y` whose columns are `v1, ..., vd`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseWrite a function `spectral_embedding(g, d)` that returns the dictionary `{i: position of vertex i of g in Rd}`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseLet `g = nx.path_graph(10)` and `pos = spectral_embedding(g, 2)`. Draw the graph by `nx.draw` using this posision. Compare the drawing with ```Pythonnx.draw(g, pos=nx.spectral_layout(g))```
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseRun the code below.```Pythong = nx.path_graph(10)pos = spectral_embedding(g, 2)Y = np.vstack(list(pos.values()))x,y = Y.T```Use `plt.scatter` to plot the vertices with respect to the positions in `pos`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseRun the code below.```Pythong = nx.path_graph(10)pos = spectral_embedding(g, 2)Y = np.vstack(list(pos.values()))x,y = Y.T```Go through a `for` loop on `g.edges()` and use `plt.plot` plot each edge of `g`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseGiven a NetworkX graph object `g`. Embed the graph onto $\mathbb{R}^2$ and use matplotlib to draw the graph.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
ExerciseGiven a NetworkX graph object `g`. Embed the graph onto $\mathbb{R}^3$ and use matplotlib to draw the graph.You will need the following settings.```Pythonfrom mpl_toolkits import mplot3dfig = plt.figure()ax = plt.axes(projection='3d')```After the settings, `ax.scatter(x, y, z)` and `ax.plot(x, y, z)` can be used to draw a 3D graph.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise The term **spectral clustering** is (more or less) the same as [data to graph](Algorithms-data-to-graph.ipynb) + [spectral embedding](Algorithms-spectral-embedding.ipynb) + [$k$-mean clustering](Algorithms-k-mean-clustering.ipynb). Create a function `spectral_clustering(X, e, d, k)` that returns `y` whose entries are in 0, ..., k-1 and indicates the belonging groups. Here `e` is used for the epsilon ball algorithm as the threshold, while `d` is used for spectral embedding as the target dimension.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Sample code for the spectral embedding onto $\mathbb{R}^2$ and $\mathbb{R}^3$
###Code
def Laplacian_2_embedding(g, draw=True):
"""
Input:
g: NetworkX graph object
draw: draw the graph when draw == True
Output:
a dictionary {i: position of vertex i as an array}
the position is given by
the Laplacian embedding onto R^2
This function works only when the graph is
labeled by {0, 1, ..., g.order() - 1}
"""
n = g.order()
L = nx.laplacian_matrix(g).toarray()
lam, Y = LA.eigh(L, eigvals=(1,2))
x,y = Y.T
### create pos
pos = {i: Y[i] for i in range(n)}
if draw:
fig = plt.figure()
ax = plt.axes()
### plot points
ax.scatter(x, y, s=50, zorder=3)
### add vertex labels
for i in range(n):
ax.annotate(i, (x[i], y[i]), zorder=4)
### add lines
for i,j in g.edges():
ax.plot([x[i],x[j]], [y[i],y[j]], 'c')
return pos
def Laplacian_3_embedding(g, draw=True):
"""
Same function as Laplacian_2_embedding
except the graph is embedded
to R^3
"""
n = g.order()
L = nx.laplacian_matrix(g).toarray()
lam, Y = LA.eigh(L, eigvals=(1,3))
x,y,z = Y.T
### create pos
pos = {i: Y[i] for i in range(n)}
if draw:
fig = plt.figure()
ax = plt.axes(projection='3d')
### plot points
ax.scatter(x, y, z, s=50, zorder=3)
### add vertex labels
for i in range(n):
ax.text(x[i], y[i], z[i], i, zorder=4)
### add lines
for i,j in g.edges():
ax.plot([x[i],x[j]], [y[i],y[j]], [z[i],z[j]], 'c')
# fig.savefig('spectral_embedding.png')
return pos
g = nx.path_graph(10)
pos = Laplacian_2_embedding(g)
fig = plt.figure(figsize=(3,3))
nx.draw(g, pos=pos)
pos = Laplacian_3_embedding(g)
pos
###Output
_____no_output_____ |
functions_lesson.ipynb | ###Markdown
Python Functions- Context: what are functions? why are they helpful? - They are reusable pieces of code - Takes an input to produce an output. - Abstractions: (Print function) allow us to abstract away. Using Functions Vocab Run/invoke/call Start of function Argument Value passed to the function Return Value Result of evaluating the function call expression.
###Code
1 + 2
int("123") # int = the function call; argument = the string "123"; return value = the integer 123
###Output
_____no_output_____
###Markdown
We've already used built-in functions Mini Exercise -- Using Functions Take a look at this code snippet: max([1, 2, 3]) What is the function name? Max Where is the function invocation? On [1, 2, 3] What is the return value? 3 Take a look at this code snippet: type(max([1, 2, 3])) What will the output be? Why? Take a look at this code snippet: type(max) What will the output be? Why? What is the difference between the two code blocks below? print print() What other built in functions do you know?
###Code
max([1,2,3])
type(max([1,2,3]))
type(max)
print # refrences the print function
print() #invoking the function/ calling th function
###Output
###Markdown
Function signature e.g. max(l: list[int]) -> int What are the signatures for - print() - print(x) -> None - cannot store a value in print- range() - takes in two arguments, both int, and returns a list(range) of ints - range(start: int, stop: int[, step: int]) -> list[int] Other built-in functions- max()- min()- range()- print()- type() Defining Functions Vocab Function Definition Function Name Argument Parameter Function Body
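For reference, a signature such as `increment(n: int) -> int` can be written down directly in Python with optional type hints; a minimal sketch (the hints are not enforced at runtime):

```python
def increment(n: int) -> int:
    return n + 1

assert increment(3) == 4
```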
###Code
def increment(n):
return n + 1
increment(2) # calling the function
# 2 is the argument to the invocation of the increment function.
###Output
_____no_output_____
###Markdown
Mini Exercise -- Defining Functions What is the difference between calling and defining a function? What is the difference between the two code blocks below? def increment(n): return n + 1 def increment(n): print(n + 1) Create a function named nonzero. It should accept a number and return true if the number is anythong other than zero, false otherwise. Use your nonzero function in combination with the built-in input function and an if statement to prompt the user for a number and print a message displaying whether or not the number is zero. Transfer the work you have done into a function named explain_nonzero. Calling this function whould prompt the user and display the message as before.
###Code
def increment(n):
print(n + 1)
increment(3)
def increment(n):
return n + 1
increment(3)
def nonezero(n):
if n != 0:
return True
else:
return False
nonezero(2)
while True:
n = int(input("Please enter a number: "))
if n != 0:
print(f"{n} is not equal to 0")
elif n == 0:
print(f"{n} is equal to 0")
else:
print("Not a number")
Continue = input("More numbers? ")
if not Continue.lower().startswith("y"):
break
def explain_nonezero():
n = int(input("Please enter a number: "))
if n != 0:
print(f"{n} is not equal to 0")
elif n == 0:
print(f"{n} is equal to 0")
else:
print("Not a number")
explain_nonezero() # must use this to call a def funtion
#rather than just running it as before
def increment(n):
return n + 1
assert increment(3) == 4
def increment(n):
print(n + 1)
assert increment(3) == 4
# We see the 4 printed but the function did not return the value 4 so we get an assertion error
###Output
4
###Markdown
- what happens if we omit the return keyword? - the function does not return a value. - the function call expression evaluates to None- when is this useful?- for side effects - 'square_and_double(x)': Produces a Value - 'insert_book_into_database(book)': Has a side effect Default Parameter Values and Keyword Arguments
###Code
# sayhello(name: str) -> str  (function signature)
def sayhello(name="Easley"):
return f"Hello, {name}!"
# The name parameter has a default value of "Easley"
sayhello() # Calls the function so it prints
sayhello("class") # can update the name value
# More customization
def sayhello(name="Easley", greeting="Hello"):
return f"{greeting}, {name}!"
sayhello()
sayhello("Class", "Good Afternoon") # defined by their position
sayhello(greeting="Salutations") # defined as keyword
sayhello(greeting="Salutations", name="Class")
###Output
_____no_output_____
###Markdown
- positional arguments parameter defined by position or order- keyword arguments parameter defined by keyword Function Scope- defining variables inside/outside of functions- defines where a variable can be referenced Vocab Scope Global Local
###Code
# NB. function names and variables are very generic here because the concept is very generic
def f():
x = 123 # variable x has a local scope (b/c its inside the body of the function)
f()
print(x) # does not print because x is not in outside world
x = 123 # (x variable is golbaly defined variable)
def f():
print(x)
f()
###Output
123
###Markdown
Why would yo use a global scope vs local scope? Which is preferd?- short answer: prefer local scope, ise global sparingly when a variable needs to be refrenced from within multiple functions.
###Code
x = 123
def f(x):
return x + 1
print(f(12)) # only prints f() (Only calling from the local scope)
x = 123
def f(x):
return x + 1
print(f(12), x) # Also prints x (calling from local scope and global scope)
###Output
13 123
###Markdown
Mini Exercise -- Function Scope What is the difference between local and global scope? Which is preferred? Take a look at the cell below this one. Before running it, think about what you would expect to happen. Explain step by step how the python code is executing.
###Code
def changeit(x):
x = x + 1
x = 42
print(x)
changeit(x) # there is not chnage for changeit() because x is local to the function while the golbal x is unchnaged.
print(x)
def changeit(x):
return x + 1
x = 42
print(x)
print(changeit(x)) #This is how you would get the changeit(x) to print
print(x)
###Output
42
43
42
###Markdown
- perfere local scope - avoid re-assigning global variables Function Scope Example```pythondef fill_nulls(df): return df.fillna(0) def drop_outliers(df): outlier_cutoff = 3 return df[df.zscore().abs() < 3] def prep_dataframe(df): df = fill_nulls(df) df = drop_outliers(df) return ``` [Data Prep example](https://github.com/CodeupClassroom/darden-nlp-exercises/blob/main/nlp_prepare.py). The specifics here aren't important right now, just pay attention to the overall shape of functions and how local scope is used. Lambda Functions- A function as an expression- used for "throw away", or one-off, functions
###Code
# Useful in pandas and dataframes
def increment(n):
return n + 1
# same as
increment = lambda n: n + 1
###Output
_____no_output_____
###Markdown
**Use case**: sorting (min, max too)Python doesn't know how to compare dictionaries, but it does know how to compare strings or numbers
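Since `min` and `max` accept the same `key=` argument as `sorted`, here is a quick self-contained sketch (the small `students` list is just an illustrative subset):

```python
students = [
    {"name": "Ada Lovelace", "grade": 87},
    {"name": "Christine Darden", "grade": 99},
    {"name": "Annie Easley", "grade": 94},
]

# key= tells min/max which value to compare for each dictionary
best = max(students, key=lambda s: s["grade"])
worst = min(students, key=lambda s: s["grade"])
print(best["name"], worst["name"])  # Christine Darden Ada Lovelace
```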
###Code
students = [
{"name": "Ada Lovelace", "grade": 87},
{"name": "Thomas Bayes", "grade": 89},
{"name": "Christine Darden", "grade": 99},
{"name": "Annie Easley", "grade": 94},
{"name": "Marie Curie", "grade": 97},
]
sorted([3, 1, 5, 100, -4])
# sort by name
sorted(students, key=lambda s: s["name"])
# sort by grade
sorted(students, key=lambda s: s["grade"]) # defaults low to high
sorted(students, key=lambda s: -s["grade"]) # sort from high to low
###Output
_____no_output_____
###Markdown
Mini Exercise -- Lambda Functions & Sorting Write the code necessary to sort the list of student dictionaries by student last name. Hints: You will need to write a function that takes in a student dictionary and returns just the last name. You can use the .split string method to seperate the first name and the last name.
###Code
student = {'name': 'Ada Lovelace', 'grade': 87}
student['name'].split(' ')[-1]
sorted(students, key=lambda s: s["name"].split(" ")[-1])
###Output
_____no_output_____
###Markdown
Python Functions- Context: what are functions? why are they helpful? Using Functions Vocab Run/invoke/call Argument Return Value We've already used built-in functions Mini Exercise -- Using Functions Take a look at this code snippet: max([1, 2, 3]) What is the function name? Where is the function invocation? What is the return value? Take a look at this code snippet: type(max([1, 2, 3])) What will the output be? Why? Take a look at this code snippet: type(max) What will the output be? Why? What is the difference between the two code blocks below? print print() What other built in functions do you know?
###Code
max([1, 2, 3])
# max
# produce highest number from list
# 3
type(max([1, 2, 3]))
# type produces the data type
type(max)
# describes the built in function
print
print()
###Output
###Markdown
Defining Functions Vocab Function Definition Function Name Argument Parameter Function Body
###Code
def increment(n):
return n + 1
###Output
_____no_output_____
###Markdown
Default Parameter Values and Keyword Arguments Mini Exercise -- Defining Functions What is the difference between calling and defining a function? What is the difference between the two code blocks below? def increment(n): return n + 1 def increment(n): print(n + 1) Create a function named nonzero. It should accept a number and return true if the number is anythong other than zero, false otherwise. Use your nonzero function in combination with the built-in input function and an if statement to prompt the user for a number and print a message displaying whether or not the number is zero. Transfer the work you have done into a function named explain_nonzero. Calling this function whould prompt the user and display the message as before.
###Code
#1 What is the difference between calling and defining a function?
#2 What is the difference between the two code blocks below?
def increment(n):
return n + 1
def increment(n):
print(n + 1)
#3 Create a function named nonzero. It should accept a number and return true if the number is anythong other than zero, false otherwise.
#4 Use your nonzero function in combination with the built-in input function and an if statement to prompt the user for a number and print a message displaying whether or not the number is zero.
#5 Transfer the work you have done into a function named explain_nonzero. Calling this function whould prompt the user and display the message as before.
def sayhello(name="Easley"):
return f"Hello, {name}!"
###Output
_____no_output_____
###Markdown
Function Scope- defining variables inside/outside of functions- defines where a variable can be referenced Vocab Scope Global Local
###Code
# NB. function names and variables are very generic here because the concept is very generic
def f():
x = 123
f()
print(x)
x = 123
def f():
print(x)
f()
x = 123
def f(x):
return x + 1
print(f(12))
###Output
_____no_output_____
###Markdown
Mini Exercise -- Function Scope What is the difference between local and global scope? Which is preferred? Take a look at the cell below this one. Before running it, think about what you would expect to happen. Explain step by step how the python code is executing.
###Code
def changeit(x):
x = x + 1
x = 42
print(x)
changeit(x)
print(x)
###Output
_____no_output_____
###Markdown
Function Scope Example

```python
def fill_nulls(df):
    return df.fillna(0)

def drop_outliers(df):
    outlier_cutoff = 3
    return df[df.zscore().abs() < 3]

def prep_dataframe(df):
    df = fill_nulls(df)
    df = drop_outliers(df)
    return df
```

[Data Prep example](https://github.com/CodeupClassroom/darden-nlp-exercises/blob/main/nlp_prepare.py). The specifics here aren't important right now, just pay attention to the overall shape of functions and how local scope is used. Lambda Functions- A function as an expression- used for "throw away", or one-off, functions
###Code
def increment(n):
return n + 1
# same as
increment = lambda n: n + 1
###Output
_____no_output_____
###Markdown
**Use case**: sorting (min, max too)Python doesn't know how to compare dictionaries, but it does know how to compare strings or numbers
###Code
students = [
{"name": "Ada Lovelace", "grade": 87},
{"name": "Thomas Bayes", "grade": 89},
{"name": "Christine Darden", "grade": 99},
{"name": "Annie Easley", "grade": 94},
{"name": "Marie Curie", "grade": 97},
]
# sort by name
sorted(students, key=lambda s: s["name"])
# sort by grade
sorted(students, key=lambda s: s["grade"])
###Output
_____no_output_____
###Markdown
Python Functions- Context: what are functions? why are they helpful? - reusable pieces of code - accepts inputs and produce outputs - abstraction Using Functions Vocab Run/invoke/call Argument Return Value
###Code
1 + 1
int("123")
###Output
_____no_output_____
###Markdown
We've already used built-in functions Mini Exercise -- Using Functions Take a look at this code snippet: max([1, 2, 3]) What is the function name? Where is the function invocation? What is the return value? Take a look at this code snippet: type(max([1, 2, 3])) What will the output be? Why? Take a look at this code snippet: type(max) What will the output be? Why? What is the difference between the two code blocks below? print print() What other built in functions do you know? 1. - function name: max - funct invocation is `max([1, 2, 3])` - return value: integer value 32. the output would be int b'c the max of the list is a number3. the output would be function(?) b'c that's what it is. - `builtin_function_or_method` 4. The difference between `print` and `print()` is: - `print`: is referring to the function - `print()`: is calling/running/invoking the function5. Other built in functions: - `min` - `avg` - `
###Code
type(max)
###Output
_____no_output_____
###Markdown
Function Signature: The type and quantity of the function arguments plus the function's return type. e.g.

```python
# not executable python code
max(l: list[int]) -> int
```

What are the signatures of the `print` function and the `range` function?

```python
print(x) -> None
```

```python
range(start: int, end: int) -> list[int]
```

```python
range(start: int, end: int[, step: int]) -> list[int]
```
###Code
return_value = print('hey there!')
type(return_value)
###Output
_____no_output_____
###Markdown
Defining Functions Vocab Function Definition Function Name: usually verb Argument Parameter Function Body
###Code
# n is the parameter
def increment(n):
return n + 1 # body: everything indented
increment(2)
# 2 is the argument to the invocation of the increment function
###Output
_____no_output_____
###Markdown
Mini Exercise -- Defining Functions What is the difference between calling and defining a function? What is the difference between the two code blocks below? def increment(n): return n + 1 def increment(n): print(n + 1) Create a function named nonzero. It should accept a number and return true if the number is anythong other than zero, false otherwise. Use your nonzero function in combination with the built-in input function and an if statement to prompt the user for a number and print a message displaying whether or not the number is zero. Transfer the work you have done into a function named explain_nonzero. Calling this function whould prompt the user and display the message as before. ```pythonincrement(n: int) -> int``` 1. Defining a function is like creating the rule to follow, whereas calling a definition runs it.2. ```pythondef increment(n): return n + 1````return` will...```pythondef increment(n): print(n + 1)````print` is an action occurring.3. ```pythonnonzero(x: int) -> bool```
###Code
def nonzero(n):
return n != 0
nonzero(123)
###Output
_____no_output_____
###Markdown
4.
###Code
user_input = int(input("Please enter a number: "))
if nonzero(user_input):
print("That is not zero!")
else:
print("That is zero!")
###Output
please enter a number: 5
That is not zero!
###Markdown
5.
###Code
#nothing will happen as an output b'c we are JUST defining the funct here
def explain_nonzero():
user_input = int(input("Please enter a number: "))
if nonzero(user_input):
print("That is not zero!")
else:
print("That is zero!")
#calling the funct will result in an output
explain_nonzero()
###Output
Please enter a number: 0
That is zero!
###Markdown
- What happens if we omit the `return` keyword? the function doesn't return a value the function call expression evaluates to `None` - When is this useful? For side effects. - `square_and_double()`: produces a value - `insert_book_into_database(book)`: has a side effect - `fill_nulls_with_zero(colum)`: produces a value -- a new column with nulls filled in - `launch_the_missles()`: has a side effect
###Code
def increment(n):
print(n + 1)
assert increment(3) == 4
assert increment(1000) == 1001
###Output
4
###Markdown
Default Parameter Values and Keyword Arguments
###Code
#sayhello(name: str) -> str
def sayhello(name="Easley"):
return f"Hello, {name}!"
#the name parameter has a default value of easley
#passing an argument for `name` is optional
sayhello()
def sayhello(name="Easley", greeting="Hello"):
return f"{greeting}, {name}!"
sayhello()
def sayhello(name="Easley", greeting="Hello"):
return f"{greeting}, {name}!"
sayhello("class", "Good afternoon")
###Output
_____no_output_____
###Markdown
positional arguments: parameter defined by position, or order keyword arguments: parameter defined by keyword
###Code
sayhello(greeting='Salutations')
###Output
_____no_output_____
###Markdown
Function Scope- defining variables inside/outside of functions- defines where a variable can be referenced Vocab Scope Global Local
###Code
# NB. function names and variables are very generic here because the concept is very generic
def f():
x = 123
# local scope: b'c it's inside of the funct
#only exists inside of the function
#doesn't exist outside of the funct
f()
print(x)
x = 123
# globally scoped b'c it's outside of the funct
def f():
print(x)
#we can access a variable defined outside the function,
#but not the other way around
f()
###Output
123
###Markdown
Prefer local scope, use global sparingly when a variable NEEDS to be referenced from within multiple functions. Harder to "mess up" by accidentally deleting data that is unnecessary for one function, but needed for another. - avoid re-assigning global variables
###Code
#global var x
x = 123
#local var. x
def f(x):
return x + 1
#funtion f is invoked where x=12 this is the local var. x
print(f(12))
print(x)
print(f(x))
###Output
13
123
124
###Markdown
Mini Exercise -- Function Scope What is the difference between local and global scope? Which is preferred? Take a look at the cell below this one. Before running it, think about what you would expect to happen. Explain step by step how the python code is executing. 2. I predict: - 42 - 43 - 42 I was incorrect, `changeit(x)` type is `NoneType`
###Code
def changeit(x): #x is defined as a parameter
x = x + 1#local
x = 42 #global
print(x)
changeit(x)
print(x)
type(changeit(x))
def changeit(x): #x is defined as a parameter
x = x + 1 #local
x = 42 #global
print(x)
changeit(x)
print(x)
###Output
42
###Markdown
Function Scope Example

```python
def fill_nulls(df):
    return df.fillna(0)

def drop_outliers(df):
    outlier_cutoff = 3
    return df[df.zscore().abs() < 3]

def prep_dataframe(df):
    df = fill_nulls(df)
    df = drop_outliers(df)
    return df
```

[Data Prep example](https://github.com/CodeupClassroom/darden-nlp-exercises/blob/main/nlp_prepare.py). The specifics here aren't important right now, just pay attention to the overall shape of functions and how local scope is used. Lambda Functions- A function as an expression- used for "throw away", or one-off, functions
###Code
def increment(n):
return n + 1
# same as
increment = lambda n: n + 1
# lambda is limited to a single expression
###Output
_____no_output_____
###Markdown
**Use case**: sorting (min, max too)Python doesn't know how to compare dictionaries, but it does know how to compare strings or numbers
###Code
students = [
{"name": "Ada Lovelace", "grade": 87},
{"name": "Thomas Bayes", "grade": 89},
{"name": "Christine Darden", "grade": 99},
{"name": "Annie Easley", "grade": 94},
{"name": "Marie Curie", "grade": 97},
]
sorted([3, 1, 5, 100, -4])
sorted(students)
#TypeError: '<' not supported between instances of 'dict' and 'dict'
#can't compare dictionaries, since has lots/variety of data in it
# sort by name
sorted(students, key=lambda s: s["name"])
#key maps one element to a value that can be compared
# sort by grade
sorted(students, key=lambda s: s["grade"])
###Output
_____no_output_____
###Markdown
Mini Exercise -- Lambda Functions & Sorting Write the code necessary to sort the list of student dictionaries by student last name. Hints: You will need to write a function that takes in a student dictionary and returns just the last name. You can use the .split string method to seperate the first name and the last name.
###Code
student = {'name': 'Ada Lovelace', 'grade': 87}
student['name'].split(' ')[-1]
sorted(students, key = lambda s: s['name'].split(" ")[-1])
###Output
_____no_output_____
###Markdown
Python Functions- Context: what are functions? why are they helpful? -Reusable pieces of code -Accept inputs and produce outputs -Abstraction Using Functions Vocab Run/invoke/call Argument Return Value
###Code
1 + 1
int("123")
###Output
_____no_output_____
###Markdown
We've already used built-in functions Mini Exercise -- Using Functions Take a look at this code snippet: max([1, 2, 3]) What is the function name? Where is the function invocation? What is the return value? Take a look at this code snippet: type(max([1, 2, 3])) What will the output be? Why? Take a look at this code snippet: type(max) What will the output be? Why? What is the difference between the two code blocks below? print print() What other built in functions do you know?
###Code
max([1,2,3])
# Function Name : max
# Function Invocation : the whole call max([1,2,3]), which returns the max value
# Return Value : 3
type(max([1,2,3]))
# Output : int, because type() reports the data type of the value returned by max
type(max)
# Function Signature: The type and quantity of the function arguments plus the function's return type.
# e.g. (not executable python code)
# max(l: list[int]) -> int
# Signatures of the print and range functions:
# range(start: int, stop: int) -> list[int]
# range takes in two arguments, both integers, and returns a list (range) of ints
###Output
_____no_output_____
###Markdown
Defining Functions Vocab Function Definition Function Name Argument Parameter Function Body
###Code
def increment(n):
return n + 1
###Output
_____no_output_____
###Markdown
Mini Exercise -- Defining Functions What is the difference between calling and defining a function? What is the difference between the two code blocks below? def increment(n): return n + 1 def increment(n): print(n + 1) Create a function named nonzero. It should accept a number and return true if the number is anythong other than zero, false otherwise. Use your nonzero function in combination with the built-in input function and an if statement to prompt the user for a number and print a message displaying whether or not the number is zero. Transfer the work you have done into a function named explain_nonzero. Calling this function whould prompt the user and display the message as before.
###Code
# 1: Calling a function runs its body with the supplied arguments and produces a return value.
#    Defining a function names it and declares its parameters; nothing runs until it is called.
# 2: the first version returns n + 1, the second only prints n + 1 and returns None:
def function_that_prints():
print ("I printed")
def function_that_returns():
return "I returned"
f1 = function_that_prints()
f2 = function_that_returns()
print ("Now let us see what the values of f1 and f2 are")
print (f1)
print (f2)
# nonzero(x: int) -> bool
def nonzero(x):
return x != 0
nonzero(123)
user_input = int(input("Please enter a number: "))
if nonzero(user_input):
print("that is not a zero!")
else:
print("That is zero!")
# Then, wrapped in a function:
def explain_nonzero():
user_input = int(input("Please enter a number: "))
if nonzero(user_input):
print("that is not a zero!")
else:
print("That is zero!")
explain_nonzero()
def increment(n):
return n + 1
increment(1000)
assert increment(3) == 4
assert increment(1_000) == 1_001
###Output
_____no_output_____
###Markdown
Default Parameter Values and Keyword Arguments
###Code
#sayhello(name: str) -> str
def sayhello(name="Easley"):
return f"Hello, {name}!"
#the name parameter has a default value of "Easley"
sayhello()
def sayhello(name="Easley" , greeting ="Hello"):
return f"{greeting}, {name}!"
sayhello("Class", "Good Afternoon")
###Output
_____no_output_____
###Markdown
--Positional arguments: parameter defined by position, or by order--Keyword arguments: parameter defined by keyword Function Scope- defining variables inside/outside of functions- defines where a variable can be referenced Vocab Scope Global Local
###Code
# NB. function names and variables are very generic here because the concept is very generic
def f():
x = 123
f()
print(x)
x = 123
def f():
print(x)
f()
x = 123
def f(x):
return x + 1
print(f(12))
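# a minimal extra sketch (not covered above): a function can only modify a
# module-level variable if it opts in with the `global` keyword
counter = 0
def bump():
    global counter
    counter += 1
bump()
print(counter)  # 1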
###Output
_____no_output_____
###Markdown
Mini Exercise -- Function Scope What is the difference between local and global scope? Which is preferred? Take a look at the cell below this one. Before running it, think about what you would expect to happen. Explain step by step how the python code is executing.
###Code
def changeit(x):
x = x + 1
x = 42
print(x)
changeit(x)
print(x)
###Output
42
42
###Markdown
Function Scope Example
```python
def fill_nulls(df):
    return df.fillna(0)

def drop_outliers(df):
    outlier_cutoff = 3
    return df[df.zscore().abs() < outlier_cutoff]

def prep_dataframe(df):
    df = fill_nulls(df)
    df = drop_outliers(df)
    return df
```
[Data Prep example](https://github.com/CodeupClassroom/darden-nlp-exercises/blob/main/nlp_prepare.py). The specifics here aren't important right now, just pay attention to the overall shape of functions and how local scope is used. Lambda Functions- A function as an expression- used for "throw away", or one-off, functions
###Code
def increment(n):
return n + 1
# same as
increment = lambda n: n + 1
###Output
_____no_output_____
###Markdown
**Use case**: sorting (min, max too)Python doesn't know how to compare dictionaries, but it does know how to compare strings or numbers
###Code
students = [
{"name": "Ada Lovelace", "grade": 87},
{"name": "Thomas Bayes", "grade": 89},
{"name": "Christine Darden", "grade": 99},
{"name": "Annie Easley", "grade": 94},
{"name": "Marie Curie", "grade": 97},
]
# sort by name
sorted(students, key=lambda s: s["name"])
# sort by grade
sorted(students, key=lambda s: s["grade"])
###Output
_____no_output_____
###Markdown
Mini Exercise -- Lambda Functions & Sorting Write the code necessary to sort the list of student dictionaries by student last name. Hints: You will need to write a function that takes in a student dictionary and returns just the last name. You can use the .split string method to separate the first name and the last name.
###Code
sorted(students, key=lambda s: s["name"].split(" ")[-1])
###Output
_____no_output_____ |
_solved/pandas_03_selecting_data.ipynb | ###Markdown
03 - Pandas: Indexing and selecting data> *DS Data manipulation, analysis and visualisation in Python* > *December, 2017*> *© 2016, Joris Van den Bossche and Stijn Van Hoey. Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*---
###Code
import pandas as pd
# redefining the example objects
# series
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
# dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
###Output
_____no_output_____
###Markdown
Setting the index to the country names:
###Code
countries = countries.set_index('country')
countries
###Output
_____no_output_____
###Markdown
Selecting data ATTENTION!: One of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. We now have to distinguish between: selection by **label** (using the row and column names) selection by **position** (using integers) `data[]` provides some convenience shortcuts For a DataFrame, basic indexing selects the columns (cfr. the dictionaries of pure python)Selecting a **single column**:
###Code
countries['area'] # single []
###Output
_____no_output_____
###Markdown
or multiple **columns**:
###Code
countries[['area', 'population']] # double [[]]
###Output
_____no_output_____
###Markdown
But, slicing or boolean indexing accesses the **rows**:
###Code
countries['France':'Netherlands']
countries[countries['population'] > 50]
###Output
_____no_output_____
###Markdown
NOTE: Unlike slicing in numpy, the end label is **included**! REMEMBER: So as a summary, `[]` provides the following convenience shortcuts: **Series**: selecting a **label**: `s[label]` **DataFrame**: selecting a single or multiple **columns**: `df['col']` or `df[['col1', 'col2']]` **DataFrame**: slicing or filtering the **rows**: `df['row_label1':'row_label2']` or `df[mask]` Systematic indexing with `loc` and `iloc` When using `[]` like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes: * `loc`: selection by label* `iloc`: selection by positionBoth `loc` and `iloc` use the following pattern: `df.loc[ , ]`.This 'selection of the rows / columns' can be: a single label, a list of labels, a slice or a boolean mask. Selecting a single element:
###Code
countries.loc['Germany', 'area']
###Output
_____no_output_____
###Markdown
But the row or column indexer can also be a list, slice, boolean array (see next section), ..
###Code
countries.loc['France':'Germany', ['area', 'population']]
###Output
_____no_output_____
###Markdown
---Selecting by position with `iloc` works similar as **indexing numpy arrays**:
###Code
countries.iloc[0:2,1:3]
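# iloc also accepts lists of positions (extra illustration, not part of the original notebook):
countries.iloc[[0, 2], [0, 1]]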
###Output
_____no_output_____
###Markdown
The different indexing methods can also be used to **assign data**:
###Code
countries2 = countries.copy()
countries2.loc['Belgium':'Germany', 'population'] = 10
countries2
###Output
_____no_output_____
###Markdown
REMEMBER: Advanced indexing with **loc** and **iloc** **loc**: select by label: `df.loc[row_indexer, column_indexer]` **iloc**: select by position: `df.iloc[row_indexer, column_indexer]` Boolean indexing (filtering) Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and is comparable to boolean indexing in numpy. The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
###Code
countries['area'] > 100000
countries[countries['area'] > 100000]
###Output
_____no_output_____
###Markdown
EXERCISE: Add the population density as a column to the DataFrame. Note: the population column is expressed in millions.
###Code
countries['density'] = countries['population']*1000000 / countries['area']
###Output
_____no_output_____
###Markdown
EXERCISE: Select the capital and the population column of those countries where the density is larger than 300
###Code
countries.loc[countries['density'] > 300, ['capital', 'population']]
###Output
_____no_output_____
###Markdown
EXERCISE: Add a column 'density_ratio' with the ratio of the population density to the average population density for all countries.
###Code
countries['density_ratio'] = countries['density'] / countries['density'].mean()
countries
###Output
_____no_output_____
###Markdown
EXERCISE: Change the capital of the UK to Cambridge
###Code
countries.loc['United Kingdom', 'capital'] = 'Cambridge'
countries
###Output
_____no_output_____
###Markdown
EXERCISE: Select all countries whose population density is between 100 and 300 people/km²
###Code
countries[(countries['density'] > 100) & (countries['density'] < 300)]
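# roughly equivalent, using the Series.between helper
# (note: between is inclusive at the bounds by default, unlike the strict < / > above)
countries[countries['density'].between(100, 300)]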
###Output
_____no_output_____
###Markdown
Some other essential methods: `isin` and `string` methods The `isin` method of Series is very useful to select rows that may contain certain values:
###Code
s = countries['capital']
s.isin?
s.isin(['Berlin', 'London'])
###Output
_____no_output_____
###Markdown
This can then be used to filter the dataframe with boolean indexing:
###Code
countries[countries['capital'].isin(['Berlin', 'London'])]
###Output
_____no_output_____
###Markdown
Let's say we want to select all data for which the capital starts with a 'B'. In Python, when having a string, we could use the `startswith` method:
###Code
string = 'Berlin'
string.startswith('B')
###Output
_____no_output_____
###Markdown
In pandas, these are available on a Series through the `str` namespace:
###Code
countries['capital'].str.startswith('B')
###Output
_____no_output_____
###Markdown
For an overview of all string methods, see: http://pandas.pydata.org/pandas-docs/stable/api.htmlstring-handling EXERCISE: Select all countries that have capital names with more than 7 characters
###Code
countries[countries['capital'].str.len() > 7]
###Output
_____no_output_____
###Markdown
EXERCISE: Select all countries that have capital names that contain the character sequence 'am'
###Code
countries[countries['capital'].str.contains('am')]
###Output
_____no_output_____
###Markdown
Pitfall: chained indexing (and the 'SettingWithCopyWarning')
###Code
countries.loc['Belgium', 'capital'] = 'Ghent'
countries
countries['capital']['Belgium'] = 'Antwerp'
countries
countries[countries['capital'] == 'Antwerp']['capital'] = 'Brussels'
countries
countries.loc[countries['capital'] == 'Antwerp', 'capital'] = 'Brussels'
countries
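# or work on an explicit copy if you don't want to touch the original
# (illustrative sketch; the uppercase change is just an example):
countries_copy = countries.copy()
countries_copy['capital'] = countries_copy['capital'].str.upper()
countries_copy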
###Output
_____no_output_____
###Markdown
REMEMBER!What to do when encountering the *value is trying to be set on a copy of a slice from a DataFrame* error? Use `loc` instead of chained indexing **if possible**! Or `copy` explicitly if you don't want to change the original data. Exercises using the Titanic dataset
###Code
df = pd.read_csv("../data/titanic.csv")
df.head()
###Output
_____no_output_____
###Markdown
EXERCISE: Select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers.
###Code
males = df[df['Sex'] == 'male']
df.loc[df['Sex'] == 'male', 'Age'].mean()
df.loc[df['Sex'] == 'female', 'Age'].mean()
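# a preview of the groupby shortcut mentioned in the next markdown cell:
df.groupby('Sex')['Age'].mean()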
###Output
_____no_output_____
###Markdown
We will later see an easier way to calculate both averages at the same time with groupby. EXERCISE: How many passengers older than 70 were on the Titanic?
###Code
len(df[df['Age'] > 70])
(df['Age'] > 70).sum()
###Output
_____no_output_____
###Markdown
[OPTIONAL] more exercises For the quick ones among you, here are some more exercises with some larger dataframe with film data. These exercises are based on the [PyCon tutorial of Brandon Rhodes](https://github.com/brandon-rhodes/pycon-pandas-tutorial/) (so all credit to him!) and the datasets he prepared for that. You can download these data from here: [`titles.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKajNMa1pfSzN6Q3M) and [`cast.csv`](https://drive.google.com/open?id=0B3G70MlBnCgKal9UYTJSR2ZhSW8) and put them in the `/data` folder.
###Code
cast = pd.read_csv('../data/cast.csv')
cast.head()
titles = pd.read_csv('../data/titles.csv')
titles.head()
###Output
_____no_output_____
###Markdown
EXERCISE: How many movies are listed in the titles dataframe?
###Code
len(titles)
###Output
_____no_output_____
###Markdown
EXERCISE: What are the earliest two films listed in the titles dataframe?
###Code
titles.sort_values('year').head(2)
###Output
_____no_output_____
###Markdown
EXERCISE: How many movies have the title "Hamlet"?
###Code
len(titles[titles['title'] == 'Hamlet'])
###Output
_____no_output_____
###Markdown
EXERCISE: List all of the "Treasure Island" movies from earliest to most recent.
###Code
titles[titles.title == 'Treasure Island'].sort_values('year')
###Output
_____no_output_____
###Markdown
EXERCISE: How many movies were made from 1950 through 1959?
###Code
len(titles[(titles['year'] >= 1950) & (titles['year'] <= 1959)])
len(titles[titles['year'] // 10 == 195])
###Output
_____no_output_____
###Markdown
EXERCISE: How many roles in the movie "Inception" are NOT ranked by an "n" value?
###Code
inception = cast[cast['title'] == 'Inception']
len(inception[inception['n'].isnull()])
inception['n'].isnull().sum()
###Output
_____no_output_____
###Markdown
EXERCISE: But how many roles in the movie "Inception" did receive an "n" value?
###Code
len(inception[inception['n'].notnull()])
###Output
_____no_output_____
###Markdown
EXERCISE: Display the cast of the "Titanic" (the most famous 1997 one) in their correct "n"-value order, ignoring roles that did not earn a numeric "n" value.
###Code
titanic = cast[(cast['title'] == 'Titanic') & (cast['year'] == 1997)]
titanic = titanic[titanic['n'].notnull()]
titanic.sort_values('n')
###Output
_____no_output_____
###Markdown
EXERCISE: List the supporting roles (having n=2) played by Brad Pitt in the 1990s, in order by year.
###Code
brad = cast[cast['name'] == 'Brad Pitt']
brad = brad[brad['year'] // 10 == 199]
brad = brad[brad['n'] == 2]
brad.sort_values('year')
###Output
_____no_output_____ |
Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/Initialization.ipynb | ###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[0. 0. 0.]
[0. 0. 0.]]
b1 = [[0.]
[0.]]
W2 = [[0. 0.]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Lets look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 3  # note: the exercise asks for a scale of 10; this run experiments with 3, so the printed values differ from the expected output below
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 5.36588542 1.30952955 0.2894924 ]
[-5.59047811 -0.83216461 -1.06427694]]
b1 = [[0.]
[0.]]
W2 = [[-0.24822444 -1.88100203]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
Cost after iteration 1000: 0.6829649461107691
Cost after iteration 2000: 0.6814768019281778
Cost after iteration 3000: 0.6803099452276381
Cost after iteration 4000: 0.6789866081452782
Cost after iteration 5000: 0.6773376250016052
Cost after iteration 6000: 0.6752047403645899
Cost after iteration 7000: 0.6722307451226063
Cost after iteration 8000: 0.6676351796924785
Cost after iteration 9000: 0.6593430789561638
Cost after iteration 10000: 0.6397918584021834
Cost after iteration 11000: 0.5449259561922309
Cost after iteration 12000: 0.23445483911115195
Cost after iteration 13000: 0.1540675335592854
Cost after iteration 14000: 0.126306376623725
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
###Code
#compare X and parameter values
from scipy import stats
stats.describe(train_X.flatten())
# xmean=np.mean(train_X)
# print (xmean)
print (parameters)
allw=np.concatenate((parameters['W1'].flatten(), parameters['W2'].flatten(), parameters['W3'].flatten()))
stats.describe(allw)
###Output
_____no_output_____
###Markdown
4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
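For intuition, here is a tiny sketch (not part of the graded exercise; `fan_in` is just an illustrative name) comparing the two scale factors for a layer with 10 input units:
```python
import numpy as np

fan_in = 10
xavier_scale = np.sqrt(1. / fan_in)  # ~0.316
he_scale = np.sqrt(2. / fan_in)      # ~0.447
print(xavier_scale, he_scale)
```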
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[0. 0. 0.]
[0. 0. 0.]]
b1 = [[0.]
[0.]]
W2 = [[0. 0.]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[0.]
[0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/yenlow/deep-learning-coursera/Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/yenlow/deep-learning-coursera/Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] |
| **b1** | [[ 0.] [ 0.] [ 0.] [ 0.]] |
| **W2** | [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization

Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well-chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well-chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error

To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
/Users/dkovalen/Documents/venv_ML/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
You would like a classifier to separate the blue dots from the red dots.

1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
- *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

**Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
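(Optional) Once the three initializers below are implemented, a short loop like the following could compare them side by side. This is a hedged sketch that assumes `model()` and `predict()` behave as they are used elsewhere in this notebook, with `predict()` returning the 0/1 prediction vector.
###Code
# Hedged sketch (run only after the three initializers are implemented):
# train with each setting and report train/test accuracy.
for init in ["zeros", "random", "he"]:
    params = model(train_X, train_Y, initialization=init, print_cost=False)
    train_acc = np.mean(predict(train_X, train_Y, params) == train_Y)
    test_acc  = np.mean(predict(test_X,  test_Y,  params) == test_Y)
    print(init, "- train accuracy:", train_acc, "- test accuracy:", test_acc)
###Output
_____no_output_____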
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization

There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
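For concreteness, `model()` above uses `layers_dims = [X.shape[0], 10, 5, 1]`, which for the 2-dimensional planar inputs is `[2, 10, 5, 1]`. A tiny sketch of the shapes your function has to produce (illustration only):
###Code
# Hedged sketch: parameter shapes for layers_dims = [2, 10, 5, 1].
layers_dims = [2, 10, 5, 1]
for l in range(1, len(layers_dims)):
    print("W" + str(l), (layers_dims[l], layers_dims[l - 1]),
          " b" + str(l), (layers_dims[l], 1))
###Output
_____no_output_____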
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[0. 0. 0.]
[0. 0. 0.]]
b1 = [[0.]
[0.]]
W2 = [[0. 0.]]
b2 = [[0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 0. 0. 0.] [ 0. 0. 0.]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[ 0. 0.]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is no more powerful than a linear classifier such as logistic regression.

**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.

3 - Random initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
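Before implementing the random initialization, here is a small optional check (a sketch, not part of the graded code) of the symmetry claim above: with all-zero weights, every hidden unit in a layer receives exactly the same gradient, so gradient descent can never make the units differ. The tiny network and variable names below are illustrative only.
###Code
# Hedged sketch: with zero weights, all rows of dW1 are identical (here even zero),
# so the hidden units stay identical forever and the cost never moves.
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0]])         # 2 features, 2 examples
Y = np.array([[1, 0]])
W1, b1 = np.zeros((3, 2)), np.zeros((3, 1))    # 3 hidden units
W2, b2 = np.zeros((1, 3)), np.zeros((1, 1))

Z1 = W1 @ X + b1
A1 = np.maximum(0, Z1)                         # ReLU
A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))         # sigmoid output (0.5 everywhere)

dZ2 = A2 - Y
dA1 = W2.T @ dZ2
dZ1 = dA1 * (Z1 > 0)
dW1 = dZ1 @ X.T / X.shape[1]

print(dW1)                                     # every row is the same
###Output
_____no_output_____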
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[0.]
[0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[-0.82741481 -6.27000677]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/Users/dkovalen/Documents/courserea/Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/init_utils.py:141: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/Users/dkovalen/Documents/courserea/Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/init_utils.py:141: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

**In summary**:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

4 - He initialization

Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` whereas He initialization would use `sqrt(2./layers_dims[l-1])`.)

**Exercise**: Implement the following function to initialize your parameters with He initialization.

**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] |
| **b1** | [[ 0.] [ 0.] [ 0.] [ 0.]] |
| **W2** | [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization

Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well-chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well-chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error

To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots.

1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
- *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

**Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization

There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 0. 0. 0.] [ 0. 0. 0.]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[ 0. 0.]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is no more powerful than a linear classifier such as logistic regression.

**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.

3 - Random initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*10
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[-0.82741481 -6.27000677]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

**In summary**:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

4 - He initialization

Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` whereas He initialization would use `sqrt(2./layers_dims[l-1])`.)

**Exercise**: Implement the following function to initialize your parameters with He initialization.

**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
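As an optional aside (a sketch, not graded code), the only difference between the Xavier and He variants mentioned above is the constant under the square root, so the He scale is simply $\sqrt{2}$ times the Xavier scale for the same layer sizes:
###Code
# Hedged sketch: Xavier vs He scale factors for the layer sizes used by model().
import numpy as np

layers_dims = [2, 10, 5, 1]
for l in range(1, len(layers_dims)):
    fan_in = layers_dims[l - 1]
    print("layer", l,
          " Xavier scale:", np.sqrt(1.0 / fan_in),
          " He scale:",     np.sqrt(2.0 / fan_in))
###Output
_____no_output_____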
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*np.sqrt(2/layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] |
| **b1** | [[ 0.] [ 0.] [ 0.] [ 0.]] |
| **W2** | [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization

Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well-chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well-chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error

To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots.

1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
- *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

**Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization

There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 0. 0. 0.] [ 0. 0. 0.]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[ 0. 0.]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is no more powerful than a linear classifier such as logistic regression.

**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.

3 - Random initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[-0.82741481 -6.27000677]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

**In summary**:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

4 - He initialization

Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` whereas He initialization would use `sqrt(2./layers_dims[l-1])`.)

**Exercise**: Implement the following function to initialize your parameters with He initialization.

**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] |
| **b1** | [[ 0.] [ 0.] [ 0.] [ 0.]] |
| **W2** | [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization

Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well-chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well-chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error

To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots.

1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.
- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values.
- *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

**Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization

There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use `np.zeros((..,..))` with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**:

| Parameter | Value |
| --- | --- |
| **W1** | [[ 0. 0. 0.] [ 0. 0. 0.]] |
| **b1** | [[ 0.] [ 0.]] |
| **W2** | [[ 0. 0.]] |
| **b2** | [[ 0.]] |

Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]}=1$ for every layer; the network is no more powerful than a linear classifier such as logistic regression.

**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.

3 - Random initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*10
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
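To get a rough feel for why $\sqrt{2/n^{[l-1]}}$ is a sensible scale, the short sketch below (hypothetical layer sizes and data, plain numpy) compares the spread of the pre-activations $Z = WA$ under the \*10 scaling used above and under He scaling:
###Code
# Hypothetical layer: 10 units feeding 10 units, 1000 made-up examples.
import numpy as np
np.random.seed(2)
n_prev, n_curr, m = 10, 10, 1000
A_prev = np.random.randn(n_prev, m)                             # stand-in activations
W_large = np.random.randn(n_curr, n_prev) * 10                  # the *10 scaling from above
W_he = np.random.randn(n_curr, n_prev) * np.sqrt(2. / n_prev)   # He scaling
print("std of Z with *10 weights:", np.std(np.dot(W_large, A_prev)))
print("std of Z with He weights :", np.std(np.dot(W_he, A_prev)))
###Output
_____no_output_____
###Markdown
The \*10 weights give pre-activations with a standard deviation around 30, versus roughly 1.4 with He scaling: the large weights drive the output sigmoid deep into saturation and let activations grow from layer to layer, while He-scaled weights keep them at order one.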
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*np.sqrt(2./layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization There are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initialization To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
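Before writing the graded function, here is a tiny standalone illustration (made-up numbers, not part of the assignment) of why even small random weights are enough to break the symmetry:
###Code
# Two hidden units with small random weights already respond differently to the same inputs.
import numpy as np
np.random.seed(1)
W1_rand = np.random.randn(2, 3) * 0.01          # small random weights (hypothetical scale)
X_toy = np.random.randn(3, 5)                   # made-up inputs
A1_toy = np.maximum(0, np.dot(W1_rand, X_toy))  # ReLU activations of the two units
print(A1_toy[0])                                # the two rows differ...
print(A1_toy[1])                                # ...so their gradients differ too
###Output
_____no_output_____
###Markdown
Because the two rows of $W^{[1]}$ differ, the two units get different activations and different gradients, and gradient descent can push them toward different functions. The graded function below deliberately uses a much larger \*10 scale in order to expose a separate problem.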
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization There are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
_____no_output_____
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initialization To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
_____no_output_____
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization There are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initialization To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
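For concreteness, the per-example loss here is $\mathcal{L}(a^{[3]}, y) = -y\log(a^{[3]}) - (1-y)\log(1-a^{[3]})$, so a saturated but wrong prediction ($a^{[3]} \to 0$ with $y=1$, or $a^{[3]} \to 1$ with $y=0$) sends the loss to infinity. And for the weight matrices fed by this notebook's hidden layers of 10 and 5 units, the He factors work out to $\sqrt{2/10} \approx 0.45$ and $\sqrt{2/5} \approx 0.63$ (Xavier would give $\approx 0.32$ and $\approx 0.45$), more than an order of magnitude smaller than the \*10 scaling tried above.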
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initialization There are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
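One small side note about the hint above (not part of the graded exercise): `np.zeros` takes the whole shape as a single tuple argument, hence the double parentheses.
###Code
# np.zeros expects the shape as one tuple argument.
import numpy as np
print(np.zeros((2, 3)).shape)   # (2, 3) -- correct: the shape is the tuple (2, 3)
# np.zeros(2, 3) would raise a TypeError, because the 3 would be read as the dtype argument
###Output
_____no_output_____
###Markdown
The same pattern applies to every `np.zeros((.., ..))` call in the graded functions below.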
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.
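Before implementing the random initialization below, here is a small self-contained numpy sketch of the symmetry argument above. The layer sizes and data are made up for illustration and none of the `init_utils` helpers are used: with all-zero weights the hidden units compute identical activations, the sigmoid output is 0.5 for every example (so the cost sits at log 2, roughly 0.693, exactly the value printed above), and every hidden unit receives the same gradient, so the units can never become different.

```python
import numpy as np

# Tiny 2 -> 3 -> 1 network with all parameters set to zero (illustrative shapes only)
X = np.random.randn(2, 5)                 # 5 made-up examples
Y = (np.random.rand(1, 5) > 0.5) * 1.0    # made-up 0/1 labels
W1, b1 = np.zeros((3, 2)), np.zeros((3, 1))
W2, b2 = np.zeros((1, 3)), np.zeros((1, 1))

Z1 = W1 @ X + b1                          # every row is identical (all zeros)
A1 = np.maximum(0, Z1)                    # ReLU -> still all zeros
A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))    # sigmoid output is exactly 0.5 everywhere

dZ2 = A2 - Y                              # gradient of the cross-entropy w.r.t. Z2
dW2 = dZ2 @ A1.T / X.shape[1]             # zero, because A1 is zero
dZ1 = (W2.T @ dZ2) * (Z1 > 0)             # zero, because W2 is zero and the ReLU is inactive
dW1 = dZ1 @ X.T / X.shape[1]              # every row identical (here exactly zero)

print("A2 =", A2)                                     # all 0.5 -> cost = log(2) ~ 0.693
print("rows of dW1 identical:", np.allclose(dW1, dW1[0]), "| dW1 all zero:", not dW1.any())
```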
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*10
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
###Markdown
If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
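If you did want to avoid the `inf`, a common remedy is to clip the sigmoid output away from exactly 0 and 1 before taking the log. The sketch below is only an illustration of that idea; it is not the `compute_loss` provided by `init_utils`.

```python
import numpy as np

def stable_cross_entropy(a3, Y, eps=1e-12):
    """Cross-entropy loss with the activations clipped away from exactly 0 and 1."""
    m = Y.shape[1]
    a3 = np.clip(a3, eps, 1 - eps)        # avoids log(0) -> inf
    logprobs = Y * np.log(a3) + (1 - Y) * np.log(1 - a3)
    return -np.sum(logprobs) / m

# A saturated output of exactly 0 on a positive example now gives a large but finite loss
print(stable_cross_entropy(np.array([[0.0, 1.0]]), np.array([[1.0, 1.0]])))
```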
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
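As a quick sanity check of the hint above, the standalone sketch below (with a made-up layer width, independent of the assignment code) compares the spread of pre-activations produced by the `*10` scaling with the He factor `sqrt(2./fan_in)`. The former grows with the width of the previous layer, while the latter keeps the variance roughly constant, which is why He initialization behaves much better with ReLU layers.

```python
import numpy as np

rng = np.random.RandomState(0)
fan_in = 500                               # made-up width of the previous layer
a_prev = rng.randn(fan_in, 1000)           # pretend activations for 1000 examples

W_large = rng.randn(100, fan_in) * 10                      # "random" (too large) initialization
W_he    = rng.randn(100, fan_in) * np.sqrt(2. / fan_in)    # He initialization

print("std of pre-activations, *10 scaling:", (W_large @ a_prev).std())  # roughly 10*sqrt(500) ~ 224
print("std of pre-activations, He scaling :", (W_he @ a_prev).std())     # roughly sqrt(2) ~ 1.41
```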
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*(np.sqrt(2/layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print("predictions_train = " + str(predictions_train))
print("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: inf
###Markdown
If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
###Code
print(predictions_train)
print(predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import random
import torch
import numpy as np
random_seed = 40
torch.manual_seed(random_seed)
torch.cuda.manual_seed(random_seed)
# torch.cuda.manual_seed_all(random_seed) # if use multi-GPU
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(random_seed)
random.seed(random_seed)
###Output
_____no_output_____
###Markdown
1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import pandas as pd
bos = load_boston()
bos.keys()
df = pd.DataFrame(bos.data)
df.columns = bos.feature_names
df['Price'] = bos.target
df.head()
data = df[df.columns[:-1]]
data = data.apply(
lambda x: (x - x.mean()) / x.std()
)
data['Price'] = df.Price
X = data.drop('Price', axis=1).to_numpy()
Y = data['Price'].to_numpy()
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
n_train = X_train.shape[0]
X_train = torch.tensor(X_train, dtype=torch.float)
X_test = torch.tensor(X_test, dtype=torch.float)
Y_train = torch.tensor(Y_train, dtype=torch.float).view(-1, 1)
Y_test = torch.tensor(Y_test, dtype=torch.float).view(-1, 1)
# from torch.utils.data import DataLoader, TensorDataset
# datasets = TensorDataset(X_train, Y_train)
# train_set = DataLoader(datasets, batch_size=10, shuffle=True)
# datasets = TensorDataset(X_test, Y_test)
# test_set = DataLoader(datasets, batch_size=10, shuffle=True)
from torch.autograd import Variable
# torch can only train on Variable, so convert them to Variable
x, y = Variable(X_train), Variable(Y_train)
def training(net, X_train, Y_train, X_test, Y_test, batch_size, patience=5000, learning_rate = 0.1, best_loss = 1e06):
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
iter = 0
while(best_loss>1e-06):
for i in range(len(X_train)//batch_size):
inputs = Variable(X_train)
labels = Variable(Y_train)
# Clear gradients w.r.t. parameters
optimizer.zero_grad()
# Forward pass to get output/logits
out = net(inputs)
# Calculate the MSE loss for this regression task
loss = criterion(out, labels)
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
iter += 1
# Calculate Accuracy
correct = 0
total = 0
# Iterate through test dataset
for j in range(len(X_test)//batch_size):
inputs = Variable(X_test)
labels = Variable(Y_test)
# Forward pass only to get logits/output
outputs = net(inputs)
val_loss = criterion(outputs, labels)
# Total number of labels
total += labels.size(0)
# "Correct" predictions: exact float equality between continuous outputs and targets,
# so this count is effectively always zero (hence the Accuracy: 0.0 printed below)
correct += (outputs.type(torch.FloatTensor).cpu() == labels.type(torch.FloatTensor)).sum()
accuracy = 100. * correct.item() / total
# Print Loss
if best_loss > val_loss.item():
p = patience
best_loss = val_loss.item()
print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, val_loss.item(), accuracy))
else:
p -= 1
if p == 0:
break
###Output
_____no_output_____
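The cells below apply the three initialization schemes to a single `nn.Linear` regression model trained with `training()`. If you wanted to mirror the 3-layer network of the original assignment, a sketch like the following shows how the same `nn.init` calls extend to a deeper model; the function name, layer sizes and `init` flag values here are illustrative assumptions and are not used elsewhere in this notebook.

```python
import torch.nn as nn

def build_mlp(in_features, init="he", hidden=(10, 5)):
    """Small ReLU MLP whose Linear layers are initialized with zeros, large normals, or He."""
    dims = (in_features,) + hidden + (1,)
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    net = nn.Sequential(*layers)
    for m in net:
        if isinstance(m, nn.Linear):
            if init == "zeros":
                nn.init.zeros_(m.weight)
            elif init == "random":
                nn.init.normal_(m.weight, mean=0.0, std=10.0)          # deliberately too large
            elif init == "he":
                nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
            nn.init.zeros_(m.bias)
    return net

# e.g. net = build_mlp(X_train.shape[1], init="he"); training(net, X_train, Y_train, X_test, Y_test, 12)
```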
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. See https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.constant_ and https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.zeros_. Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
import torch.nn as nn
import torch.nn.functional as F
w_num = X_train.shape[1]
net = nn.Sequential(
nn.Linear(w_num, 1)
)
nn.init.constant_(net[0].weight, val=0)
nn.init.constant_(net[0].bias, val=0)
batch_size = 12
training(net, X_train, Y_train, X_test, Y_test, batch_size)
###Output
Iteration: 1. Loss: 422.0935974121094. Accuracy: 0.0
Iteration: 2. Loss: 331.7978515625. Accuracy: 0.0
Iteration: 3. Loss: 252.31092834472656. Accuracy: 0.0
Iteration: 4. Loss: 186.39996337890625. Accuracy: 0.0
Iteration: 5. Loss: 136.51019287109375. Accuracy: 0.0
Iteration: 6. Loss: 99.80962371826172. Accuracy: 0.0
Iteration: 7. Loss: 73.47848510742188. Accuracy: 0.0
Iteration: 8. Loss: 55.1191291809082. Accuracy: 0.0
Iteration: 9. Loss: 42.69768524169922. Accuracy: 0.0
Iteration: 10. Loss: 34.540130615234375. Accuracy: 0.0
Iteration: 11. Loss: 29.339153289794922. Accuracy: 0.0
Iteration: 12. Loss: 26.117141723632812. Accuracy: 0.0
Iteration: 13. Loss: 24.174522399902344. Accuracy: 0.0
Iteration: 14. Loss: 23.03174591064453. Accuracy: 0.0
Iteration: 15. Loss: 22.373502731323242. Accuracy: 0.0
Iteration: 16. Loss: 22.000469207763672. Accuracy: 0.0
Iteration: 17. Loss: 21.79120635986328. Accuracy: 0.0
Iteration: 18. Loss: 21.674148559570312. Accuracy: 0.0
Iteration: 19. Loss: 21.608360290527344. Accuracy: 0.0
Iteration: 20. Loss: 21.570945739746094. Accuracy: 0.0
Iteration: 21. Loss: 21.54930877685547. Accuracy: 0.0
Iteration: 22. Loss: 21.53658103942871. Accuracy: 0.0
Iteration: 23. Loss: 21.528968811035156. Accuracy: 0.0
Iteration: 24. Loss: 21.524368286132812. Accuracy: 0.0
Iteration: 25. Loss: 21.521577835083008. Accuracy: 0.0
Iteration: 26. Loss: 21.51988410949707. Accuracy: 0.0
Iteration: 27. Loss: 21.51886749267578. Accuracy: 0.0
Iteration: 28. Loss: 21.518260955810547. Accuracy: 0.0
Iteration: 29. Loss: 21.517908096313477. Accuracy: 0.0
Iteration: 30. Loss: 21.517698287963867. Accuracy: 0.0
Iteration: 31. Loss: 21.51758575439453. Accuracy: 0.0
Iteration: 32. Loss: 21.51751708984375. Accuracy: 0.0
Iteration: 33. Loss: 21.51748275756836. Accuracy: 0.0
Iteration: 34. Loss: 21.5174617767334. Accuracy: 0.0
Iteration: 35. Loss: 21.517459869384766. Accuracy: 0.0
Iteration: 40. Loss: 21.517457962036133. Accuracy: 0.0
Iteration: 55. Loss: 21.5174560546875. Accuracy: 0.0
Iteration: 58. Loss: 21.517454147338867. Accuracy: 0.0
Iteration: 62. Loss: 21.517452239990234. Accuracy: 0.0
Iteration: 76. Loss: 21.5174503326416. Accuracy: 0.0
Iteration: 110. Loss: 21.51744842529297. Accuracy: 0.0
Iteration: 124. Loss: 21.517446517944336. Accuracy: 0.0
Iteration: 136. Loss: 21.517444610595703. Accuracy: 0.0
Iteration: 137. Loss: 21.51744270324707. Accuracy: 0.0
Iteration: 198. Loss: 21.517440795898438. Accuracy: 0.0
Iteration: 200. Loss: 21.51702880859375. Accuracy: 0.0
Iteration: 201. Loss: 21.464902877807617. Accuracy: 0.0
Iteration: 211. Loss: 21.39563751220703. Accuracy: 0.0
Iteration: 231. Loss: 21.39488983154297. Accuracy: 0.0
Iteration: 756. Loss: 21.370689392089844. Accuracy: 0.0
Iteration: 1022. Loss: 21.3678035736084. Accuracy: 0.0
Iteration: 1462. Loss: 21.3602237701416. Accuracy: 0.0
Iteration: 4101. Loss: 21.356830596923828. Accuracy: 0.0
###Markdown
3 - Random initializationTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. You can choose which type of random distribution you would like to use. See https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.uniform_ and https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.normal_
###Code
import torch.nn as nn
import torch.nn.functional as F
w_num = X_train.shape[1]
net = nn.Sequential(
nn.Linear(w_num, 1)
)
nn.init.normal_(net[0].weight, mean=0, std=0.1)
nn.init.constant_(net[0].bias, val=0)
batch_size = 12
training(net, X_train, Y_train, X_test, Y_test, batch_size)
###Output
Iteration: 1. Loss: 422.0658874511719. Accuracy: 0.0
Iteration: 2. Loss: 331.5777282714844. Accuracy: 0.0
Iteration: 3. Loss: 252.1563262939453. Accuracy: 0.0
Iteration: 4. Loss: 186.18951416015625. Accuracy: 0.0
Iteration: 5. Loss: 136.3537139892578. Accuracy: 0.0
Iteration: 6. Loss: 99.70662689208984. Accuracy: 0.0
Iteration: 7. Loss: 73.39952087402344. Accuracy: 0.0
Iteration: 8. Loss: 55.06049728393555. Accuracy: 0.0
Iteration: 9. Loss: 42.65496063232422. Accuracy: 0.0
Iteration: 10. Loss: 34.5093879699707. Accuracy: 0.0
Iteration: 11. Loss: 29.317346572875977. Accuracy: 0.0
Iteration: 12. Loss: 26.101869583129883. Accuracy: 0.0
Iteration: 13. Loss: 24.163951873779297. Accuracy: 0.0
Iteration: 14. Loss: 23.024532318115234. Accuracy: 0.0
Iteration: 15. Loss: 22.368633270263672. Accuracy: 0.0
Iteration: 16. Loss: 21.997209548950195. Accuracy: 0.0
Iteration: 17. Loss: 21.789033889770508. Accuracy: 0.0
Iteration: 18. Loss: 21.672712326049805. Accuracy: 0.0
Iteration: 19. Loss: 21.607419967651367. Accuracy: 0.0
Iteration: 20. Loss: 21.570329666137695. Accuracy: 0.0
Iteration: 21. Loss: 21.548921585083008. Accuracy: 0.0
Iteration: 22. Loss: 21.536331176757812. Accuracy: 0.0
Iteration: 23. Loss: 21.528810501098633. Accuracy: 0.0
Iteration: 24. Loss: 21.524276733398438. Accuracy: 0.0
Iteration: 25. Loss: 21.52151870727539. Accuracy: 0.0
Iteration: 26. Loss: 21.519855499267578. Accuracy: 0.0
Iteration: 27. Loss: 21.51884651184082. Accuracy: 0.0
Iteration: 28. Loss: 21.518245697021484. Accuracy: 0.0
Iteration: 29. Loss: 21.517898559570312. Accuracy: 0.0
Iteration: 30. Loss: 21.5176944732666. Accuracy: 0.0
Iteration: 31. Loss: 21.517578125. Accuracy: 0.0
Iteration: 32. Loss: 21.517518997192383. Accuracy: 0.0
Iteration: 33. Loss: 21.517478942871094. Accuracy: 0.0
Iteration: 34. Loss: 21.51746368408203. Accuracy: 0.0
Iteration: 35. Loss: 21.517459869384766. Accuracy: 0.0
Iteration: 36. Loss: 21.517457962036133. Accuracy: 0.0
Iteration: 38. Loss: 21.5174560546875. Accuracy: 0.0
Iteration: 56. Loss: 21.517454147338867. Accuracy: 0.0
Iteration: 61. Loss: 21.517452239990234. Accuracy: 0.0
Iteration: 79. Loss: 21.5174503326416. Accuracy: 0.0
Iteration: 106. Loss: 21.51744842529297. Accuracy: 0.0
Iteration: 109. Loss: 21.517446517944336. Accuracy: 0.0
Iteration: 122. Loss: 21.517444610595703. Accuracy: 0.0
Iteration: 139. Loss: 21.51744270324707. Accuracy: 0.0
Iteration: 199. Loss: 21.51287841796875. Accuracy: 0.0
Iteration: 216. Loss: 21.489089965820312. Accuracy: 0.0
Iteration: 237. Loss: 21.4804744720459. Accuracy: 0.0
Iteration: 305. Loss: 21.442903518676758. Accuracy: 0.0
Iteration: 336. Loss: 21.404712677001953. Accuracy: 0.0
Iteration: 814. Loss: 21.38812828063965. Accuracy: 0.0
Iteration: 1484. Loss: 21.3671932220459. Accuracy: 0.0
Iteration: 2622. Loss: 21.356639862060547. Accuracy: 0.0
Iteration: 2938. Loss: 21.354843139648438. Accuracy: 0.0
Iteration: 5391. Loss: 21.352184295654297. Accuracy: 0.0
###Markdown
4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization. See https://pytorch.org/docs/stable/nn.init.html#torch.nn.init.xavier_normal_
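Note that the cell below uses `nn.init.xavier_normal_`, matching the linked documentation page. If you specifically want PyTorch's built-in He initializer, the corresponding call is `nn.init.kaiming_normal_`; here is a minimal sketch following the same single-layer pattern as the other cells:

```python
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(X_train.shape[1], 1)
)
# He (Kaiming) initialization: weights ~ N(0, 2/fan_in), biases set to zero
nn.init.kaiming_normal_(net[0].weight, nonlinearity='relu')
nn.init.constant_(net[0].bias, val=0)
```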
###Code
import torch.nn as nn
import torch.nn.functional as F
w_num = X_train.shape[1]
net = nn.Sequential(
nn.Linear(w_num, 1)
)
nn.init.xavier_normal_(net[0].weight, gain=1.0)
nn.init.constant_(net[0].bias, val=0)
batch_size = 12
training(net, X_train, Y_train, X_test, Y_test, batch_size)
###Output
Iteration: 1. Loss: 424.1647033691406. Accuracy: 0.0
Iteration: 2. Loss: 332.1796875. Accuracy: 0.0
Iteration: 3. Loss: 251.9687042236328. Accuracy: 0.0
Iteration: 4. Loss: 185.84878540039062. Accuracy: 0.0
Iteration: 5. Loss: 136.1215362548828. Accuracy: 0.0
Iteration: 6. Loss: 99.51551818847656. Accuracy: 0.0
Iteration: 7. Loss: 73.26202392578125. Accuracy: 0.0
Iteration: 8. Loss: 54.966861724853516. Accuracy: 0.0
Iteration: 9. Loss: 42.59405517578125. Accuracy: 0.0
Iteration: 10. Loss: 34.4719352722168. Accuracy: 0.0
Iteration: 11. Loss: 29.295969009399414. Accuracy: 0.0
Iteration: 12. Loss: 26.090869903564453. Accuracy: 0.0
Iteration: 13. Loss: 24.159215927124023. Accuracy: 0.0
Iteration: 14. Loss: 23.02326202392578. Accuracy: 0.0
Iteration: 15. Loss: 22.369064331054688. Accuracy: 0.0
Iteration: 16. Loss: 21.99831199645996. Accuracy: 0.0
Iteration: 17. Loss: 21.790246963500977. Accuracy: 0.0
Iteration: 18. Loss: 21.673789978027344. Accuracy: 0.0
Iteration: 19. Loss: 21.608253479003906. Accuracy: 0.0
Iteration: 20. Loss: 21.570940017700195. Accuracy: 0.0
Iteration: 21. Loss: 21.54933738708496. Accuracy: 0.0
Iteration: 22. Loss: 21.536602020263672. Accuracy: 0.0
Iteration: 23. Loss: 21.528987884521484. Accuracy: 0.0
Iteration: 24. Loss: 21.524381637573242. Accuracy: 0.0
Iteration: 25. Loss: 21.52158546447754. Accuracy: 0.0
Iteration: 26. Loss: 21.519886016845703. Accuracy: 0.0
Iteration: 27. Loss: 21.518869400024414. Accuracy: 0.0
Iteration: 28. Loss: 21.518260955810547. Accuracy: 0.0
Iteration: 29. Loss: 21.517902374267578. Accuracy: 0.0
Iteration: 30. Loss: 21.517698287963867. Accuracy: 0.0
Iteration: 31. Loss: 21.5175838470459. Accuracy: 0.0
Iteration: 32. Loss: 21.51752281188965. Accuracy: 0.0
Iteration: 33. Loss: 21.517484664916992. Accuracy: 0.0
Iteration: 34. Loss: 21.5174617767334. Accuracy: 0.0
Iteration: 36. Loss: 21.517457962036133. Accuracy: 0.0
Iteration: 56. Loss: 21.5174560546875. Accuracy: 0.0
Iteration: 58. Loss: 21.517454147338867. Accuracy: 0.0
Iteration: 62. Loss: 21.517452239990234. Accuracy: 0.0
Iteration: 90. Loss: 21.5174503326416. Accuracy: 0.0
Iteration: 105. Loss: 21.517446517944336. Accuracy: 0.0
Iteration: 106. Loss: 21.517444610595703. Accuracy: 0.0
Iteration: 136. Loss: 21.51744270324707. Accuracy: 0.0
Iteration: 199. Loss: 21.517438888549805. Accuracy: 0.0
Iteration: 202. Loss: 21.516569137573242. Accuracy: 0.0
Iteration: 203. Loss: 21.491273880004883. Accuracy: 0.0
Iteration: 213. Loss: 21.40636444091797. Accuracy: 0.0
Iteration: 420. Loss: 21.39154815673828. Accuracy: 0.0
Iteration: 598. Loss: 21.38630485534668. Accuracy: 0.0
Iteration: 812. Loss: 21.371023178100586. Accuracy: 0.0
Iteration: 997. Loss: 21.360458374023438. Accuracy: 0.0
###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per thousands)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros(shape=(layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros(shape=(layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[0. 0. 0.]
[0. 0. 0.]]
b1 = [[0.]
[0.]]
W2 = [[0. 0.]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad: the cost does not decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, lets intialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are intialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10, !TEN!) and your biases to !ZEROS!. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running several times your code gives you always the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros(shape=(layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
_____no_output_____
###Markdown
If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully intializing with small random values does better. The important question is: how small should be these random values be? Lets find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
# parameters['W' + str(l)] = None
# parameters['b' + str(l)] = None
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * \
np.sqrt(2./layers_dims[l-1])
parameters['b' + str(l)] = np.zeros(shape=(layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
BookExercises/DeepLearningwithPython/.ipynb_checkpoints/Chapter2_Mathematics-checkpoint.ipynb | ###Markdown
Tensor Slicing
###Code
my_slice = train_images[10:100, :, :]   # images 10..99 (train_images is assumed to be the 28x28 digit set loaded in an earlier cell)
print(my_slice.shape)
my_slice = train_images[:, 14:, 14:]    # bottom-right 14x14 corner of every image
plt.imshow(my_slice[4], cmap=plt.cm.binary)
plt.show()
my_slice = train_images[:, 7:-7, 7:-7]  # central 14x14 patch, cropping 7 pixels from each border
plt.imshow(my_slice[4], cmap=plt.cm.binary)
plt.show()
###Output
(90, 28, 28)
|
mkdocs_jupyter/tests/mkdocs/docs/variational-inference.ipynb | ###Markdown
Variational Inference Intro to Bayesian Networks Random VariablesRandom Variables are simply variables whose values are uncertain. Eg -1. In case of flipping a coin $n$ times, a random variable $X$ can be the number of heads shown up.2. In a COVID-19 pandemic situation, a random variable can be the number of patients found positive with the virus daily. Probability DistributionsProbability Distributions govern the amount of uncertainty of random variables. They have a math function with which they assign probabilities to different values taken by random variables. The associated math function is called the probability density function (pdf). For simplicity, let's denote any random variable as $X$ and its corresponding pdf as $P\left (X\right )$. Eg - The following figure shows the probability distribution for the number of heads when an unbiased coin is flipped 5 times. Bayesian NetworksBayesian Networks are graph based representations to account for randomness while modelling our data. The nodes of the graph are random variables and the connections between nodes denote the direct influence from parent to child. Bayesian Network ExampleLet's say a student is taking a class during school. The `difficulty` of the class and the `intelligence` of the student together directly influence the student's `grades`. And the `grades` affect his/her acceptance to the university. Also, the `intelligence` factor influences the student's `SAT` score. Keep this example in mind. More formally, Bayesian Networks represent a joint probability distribution over all the nodes of the graph -$P\left (X_1, X_2, X_3, ..., X_n\right )$ or $P\left (\bigcap_{i=1}^{n}X_i\right )$ where $X_i$ is a random variable. Also, Bayesian Networks follow the local Markov property, by which every node in the graph is independent of its **non-descendants** given its **parents**. In this way, the joint probability distribution can be decomposed as -$$P\left (X_1, X_2, X_3, ..., X_n\right ) = \prod_{i=1}^{n} P\left (X_i | Par\left (X_i\right )\right )$$ Extra: Proof of decomposition First, let's recall conditional probability, $$P\left (A|B\right ) = \frac{P\left (A, B\right )}{P\left (B\right )}$$ The above equation is so derived because of the reduction of the sample space of $A$ when $B$ has already occurred. Now, adjusting terms - $$P\left (A, B\right ) = P\left (A|B\right )*P\left (B\right )$$ This equation is called the chain rule of probability. Let's generalize this rule for Bayesian Networks. The ordering of names of nodes is such that parent(s) of nodes lie above them (Breadth First Ordering). $$P\left (X_1, X_2, X_3, ..., X_n\right ) = P\left (X_n, X_{n-1}, X_{n-2}, ..., X_1\right )\\ = P\left (X_n|X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) * P \left (X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) \left (Chain Rule\right )\\ = P\left (X_n|X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) * P \left (X_{n-1}|X_{n-2}, X_{n-3}, X_{n-4}, ..., X_1\right ) * P \left (X_{n-2}, X_{n-3}, X_{n-4}, ..., X_1\right )$$ Applying the chain rule repeatedly, we get the following equation - $$P\left (\bigcap_{i=1}^{n}X_i\right ) = \prod_{i=1}^{n} P\left (X_i \mid \bigcap_{j=1}^{i-1}X_j\right )$$ Keep the above equation in mind. Let's bring back the Markov property. To bring some intuition behind the Markov property, let's reuse the Bayesian Network Example. If we say the student scored very good grades, then it is highly likely the student gets an acceptance letter to the university. No matter how difficult the class was, how intelligent the student was, and no matter what his/her SAT score was. 
The key thing to note here is that by observing the node's parents, the influence of non-descendants on the node gets eliminated. Now, the equation becomes - $$P\left (\bigcap_{i=1}^{n}X_i\right ) = \prod_{i=1}^{n} P\left (X_i | Par\left (X_i\right )\right )$$ Bingo, with the above equation, we have proved the Factorization Theorem in Probability. The decomposition of the running [Bayesian Network Example](bayesian-network-example) can be written as -$$P\left (Difficulty, Intelligence, Grade, SAT, Acceptance Letter\right ) = P\left (Difficulty\right )*P\left (Intelligence\right )*P\left (Grade|Difficulty, Intelligence\right )*P\left (SAT|Intelligence\right )*P\left (Acceptance Letter|Grade\right )$$ Why care about Bayesian Networks Bayesian Networks allow us to determine the distribution of parameters given the data (Posterior Distribution). The whole idea is to model the underlying data generative process and estimate unobservable quantities. Regarding this, Bayes' formula can be written as -$$P\left (\theta | D\right ) = \frac{P\left (D|\theta\right ) * P\left (\theta\right )}{P\left (D\right )}$$$\theta$ = Parameters of the model$P\left (\theta\right )$ = Prior Distribution over the parameters$P\left (D|\theta\right )$ = Likelihood of the data$P\left (\theta|D\right )$ = Posterior Distribution$P\left (D\right )$ = Probability of Data. This term is calculated by marginalising out the effect of parameters.$$P\left (D\right ) = \int P\left (D, \theta\right ) d\left (\theta\right )\\P\left (D\right ) = \int P\left (D|\theta\right ) P\left (\theta\right ) d\left (\theta\right )$$So, the Bayes formula becomes -$$P\left (\theta | D\right ) = \frac{P\left (D|\theta\right ) * P\left (\theta\right )}{\int P\left (D|\theta\right ) P\left (\theta\right ) d\left (\theta\right )}$$The devil is in the denominator. The integration over all the parameters is **intractable**. So we resort to sampling and optimization techniques. Intro to Variational Inference Information Variational Inference has its origin in Information Theory. So first, let's understand the basic terms - Information and Entropy. Simply put, **Information** quantifies how useful the data is. It is related to Probability Distributions as -$$I = -\log \left (P\left (X\right )\right )$$The negative sign in the formula has a highly intuitive meaning. In words, it signifies that whenever the probability of certain events is high, the related information is low, and vice versa. For example -1. Consider the statement - It never snows in deserts. The probability of this statement being true is significantly high because we already know that it is hardly possible to snow in deserts. So, the related information is very small.2. Now consider - There was a snowfall in the Sahara Desert in late December 2019. Wow, that's great news because some unlikely event occurred (its probability was low). In turn, the information is high. Entropy Entropy quantifies how much **average** Information is present in the occurrence of events. It is denoted by $H$. It is named Differential Entropy in the case of a real continuous domain.$$H = E_{P\left (X\right )} \left [-\log\left (P\left (X\right )\right )\right ]\\H = -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$$ Entropy of Normal Distribution As an exercise, let's calculate the entropy of the Normal Distribution. Let's denote $\mu$ as the mean and $\sigma$ as the standard deviation of the Normal Distribution. 
Remember the results; we will need them later.$$X \sim Normal\left (\mu, \sigma^2\right )\\P_X\left (x\right ) = \frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\\H = -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$$Only expanding $\log\left (P_X\left (x\right )\right )$ -$$H = -\int_X P_X\left (x\right ) \log\left (\frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\right ) dx\\H = -\frac{1}{2}\int_X P_X\left (x\right ) \log\left (\frac{1}{2 \pi {\sigma}^2}\right )dx - \int_X P_X\left (x\right ) \log\left (e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\right ) dx\\H = \frac{1}{2}\log \left ( 2 \pi {\sigma}^2 \right)\int_X P_X\left (x\right ) dx + \frac{1}{2{\sigma}^2} \int_X \left ( x-\mu \right)^2 P_X\left (x\right ) dx$$Identifying terms -$$\int_X P_X\left (x\right ) dx = 1\\\int_X \left ( x-\mu \right)^2 P_X\left (x\right ) dx = \sigma^2$$Substituting back, the entropy becomes -$$H = \frac{1}{2}\log \left ( 2 \pi {\sigma}^2 \right) + \frac{1}{2\sigma^2} \sigma^2\\H = \frac{1}{2}\left ( \log \left ( 2 \pi {\sigma}^2 \right) + 1 \right )$$ KL divergence This mathematical tool serves as the backbone of Variational Inference. The Kullback–Leibler (KL) divergence measures the dissimilarity between two probability distributions. Let's say we have two probability distributions $P$ and $Q$; then the KL divergence quantifies how similar these distributions are. Mathematically, it is just the difference between entropy-like terms of the probability distributions. In terms of notation, $KL(Q||P)$ represents the KL divergence with respect to $Q$ against $P$.$$KL(Q||P) = H_P - H_Q\\= -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx + \int_X Q_X\left (x\right ) \log\left (Q_X\left (x\right )\right ) dx$$Changing $-\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$ to $-\int_X Q_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$ as the KL divergence is with respect to $Q$.$$= -\int_X Q_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx + \int_X Q_X\left (x\right ) \log\left (Q_X\left (x\right )\right ) dx\\= \int_X Q_X\left (x \right) \log \left( \frac{Q_X\left (x \right)}{P_X\left (x \right)} \right) dx$$Remember? We were stuck on the Bayesian equation because of the denominator term, but now we can estimate the posterior distribution $p(\theta|D)$ by another distribution $q(\theta)$ over all the parameters of the model.$$KL(q(\theta)||p(\theta|D)) = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta|D)} \right) d\theta\\$$ Note If two distributions are similar, then their entropies are similar, which implies the KL divergence between the two distributions will be smaller. And vice versa. In Variational Inference, the whole idea is to minimize the KL divergence so that our approximating distribution $q(\theta)$ can be made similar to $p(\theta|D)$. Extra: What are latent variables? If you go about exploring any paper talking about Variational Inference, then most certainly the papers mention latent variables instead of parameters. The parameters are fixed quantities for the model whereas latent variables are unobserved quantities of the model conditioned on parameters. Also, we model parameters by probability distributions. For simplicity, let's keep the running terminology of parameters only. Evidence Lower Bound There is again an issue with the KL divergence formula, as it still involves the posterior term, i.e. $p(\theta|D)$. 
Let's get rid of it -$$KL(q(\theta)||p(\theta|D)) = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta|D)} \right) d\theta\\KL = \int q(\theta) \log \left( \frac{q(\theta) p(D)}{p(\theta, D)} \right) d\theta\\KL = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta, D)} \right) d\theta + \int q(\theta) \log \left(p(D) \right) d\theta\\KL + \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta = \log \left(p(D) \right) \int q(\theta) d\theta\\$$Identifying terms -$$\int q(\theta) d\theta = 1$$So, substituting back, our running equation becomes -$$KL + \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta = \log \left(p(D) \right)$$The term $\int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta$ is called the Evidence Lower Bound (ELBO). The right side of the equation, $\log \left(p(D) \right)$, is constant. Observe Minimizing the KL divergence is equivalent to maximizing the ELBO. Also, the ELBO does not depend on the posterior distribution. Also,$$ELBO = \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta\\ELBO = E_{q(\theta)}\left [\log \left( \frac{p(\theta, D)}{q(\theta)} \right) \right]\\ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + E_{q(\theta)} \left [-\log(q(\theta)) \right]$$The term $E_{q(\theta)} \left [-\log(q(\theta)) \right]$ is the entropy of $q(\theta)$. Our running equation becomes -$$ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + H_{q(\theta)}$$ Mean Field ADVI So far, the whole crux of the story is - to approximate the posterior, maximize the ELBO term. ADVI = Automatic Differentiation Variational Inference. I think the term **Automatic Differentiation** deals with maximizing the ELBO (or minimizing the negative ELBO) using any autograd differentiation library. Coming to Mean Field ADVI (MF ADVI), we simply assume that the parameters of the approximating distribution $q(\theta)$ are independent and posit Normal distributions over all parameters in **transformed** space to maximize the ELBO. Transformed Space To freely optimize the ELBO, without caring about matching the **support** of the model parameters, we **transform** the support of the parameters to the Real Coordinate Space. In other words, we optimize the ELBO in the transformed/unconstrained/unbounded space, which automatically maps to minimization of the KL divergence in the original space. In terms of notation, let's denote a transformation over parameters $\theta$ as $T$ and the transformed parameters as $\zeta$. Mathematically, $\zeta=T(\theta)$. Also, since we are approximating by Normal Distributions, $q(\zeta)$ can be written as -$$q(\zeta) = \prod_{i=1}^{k} N(\zeta_i; \mu_i, \sigma^2_i)$$Now, the transformed joint probability distribution of the model becomes -$$p\left (D, \zeta \right) = p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right |\\$$ Extra: Proof of transformation equation To simplify notation, let's use $Y=T(X)$ instead of $\zeta=T(\theta)$. After reaching the results, we will put the values back. Also, let's denote the cumulative distribution function (cdf) as $F$. 
There are two cases with respect to the properties of the function $T$.Case 1 - When $T$ is an increasing function $$F_Y(y) = P(Y \leq y) = P(T(X) \leq y)\\ = P\left(X \leq T^{-1}(y) \right) = F_X\left(T^{-1}(y) \right)\\ F_Y(y) = F_X\left(T^{-1}(y) \right)$$Let's differentiate both sides with respect to $y$ - $$\frac{\mathrm{d} (F_Y(y))}{\mathrm{d} y} = \frac{\mathrm{d} (F_X\left(T^{-1}(y) \right))}{\mathrm{d} y}\\ P_Y(y) = P_X\left(T^{-1}(y) \right) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}$$Case 2 - When $T$ is a decreasing function $$F_Y(y) = P(Y \leq y) = P(T(X) \leq y)\\ = P\left(X \geq T^{-1}(y) \right)\\ = 1-P\left(X < T^{-1}(y) \right) = 1-P\left(X \leq T^{-1}(y) \right) = 1-F_X\left(T^{-1}(y) \right)\\ F_Y(y) = 1-F_X\left(T^{-1}(y) \right)$$Let's differentiate both sides with respect to $y$ - $$\frac{\mathrm{d} (F_Y(y))}{\mathrm{d} y} = \frac{\mathrm{d} (1-F_X\left(T^{-1}(y) \right))}{\mathrm{d} y}\\ P_Y(y) = (-1) P_X\left(T^{-1}(y) \right) (-1) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}\\ P_Y(y) = P_X\left(T^{-1}(y) \right) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}$$Combining both results - $$P_Y(y) = P_X\left(T^{-1}(y) \right) \left | \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y} \right |$$Now comes the role of Jacobians to deal with multivariate parameters $X$ and $Y$. $$J_{T^{-1}}(Y) = \begin{vmatrix} \frac{\partial (T_1^{-1})}{\partial y_1} & ... & \frac{\partial (T_1^{-1})}{\partial y_k}\\ . & & .\\ . & & .\\ \frac{\partial (T_k^{-1})}{\partial y_1} & ... &\frac{\partial (T_k^{-1})}{\partial y_k} \end{vmatrix}$$Concluding - $$P(Y) = P(T^{-1}(Y)) |det J_{T^{-1}}(Y)|\\P(Y) = P(X) |det J_{T^{-1}}(Y)| $$Substituting $\theta$ for $X$ and $\zeta$ for $Y$, we get - $$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\$$ ELBO in transformed Space Let's bring back the equation formed at [ELBO](evidence-lower-bound). Expressing the ELBO in terms of $\zeta$ -$$ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + H_{q(\theta)}\\ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + H_{q(\zeta)}$$Since we are optimizing the ELBO with factorized Normal Distributions, let's bring back the results of [Entropy of Normal Distribution](entropy-of-normal-distribution). Our running equation becomes -$$ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + H_{q(\zeta)}\\ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + \frac{1}{2}\left ( \log \left ( 2 \pi {\sigma}^2 \right) + 1 \right )$$ Success The above ELBO equation is the final one which needs to be optimized. Let's Code
###Code
# Imports
%matplotlib inline
import numpy as np
import scipy as sp
import pandas as pd
import tensorflow as tf
from scipy.stats import expon, uniform
import arviz as az
import pymc3 as pm
import matplotlib.pyplot as plt
import tensorflow_probability as tfp
from pprint import pprint
plt.style.use("seaborn-darkgrid")
from tensorflow_probability.python.mcmc.transformed_kernel import (
make_transform_fn, make_transformed_log_prob)
tfb = tfp.bijectors
tfd = tfp.distributions
dtype = tf.float32
# Plot functions
def plot_transformation(theta, zeta, p_theta, p_zeta):
fig, (const, trans) = plt.subplots(nrows=2, ncols=1, figsize=(6.5, 12))
const.plot(theta, p_theta, color='blue', lw=2)
const.set_xlabel(r"$\theta$")
const.set_ylabel(r"$P(\theta)$")
const.set_title("Constrained Space")
trans.plot(zeta, p_zeta, color='blue', lw=2)
trans.set_xlabel(r"$\zeta$")
trans.set_ylabel(r"$P(\zeta)$")
trans.set_title("Transfomed Space");
###Output
_____no_output_____
###Markdown
Transformed Space Example-1Transformation of Standard Exponential Distribution$$P_X(x) = e^{-x}$$The support of Exponential Distribution is $x>=0$. Let's use **log** transformation to map the support to real number line. Mathematically, $\zeta=\log(\theta)$. Now, let's bring back our transformed joint probability distribution equation -$$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\P(\zeta) = P(e^{\zeta}) * e^{\zeta}$$Converting this directly into Python code -
###Code
theta = np.linspace(0, 5, 100)
zeta = np.linspace(-5, 5, 100)
dist = expon()
p_theta = dist.pdf(theta)
p_zeta = dist.pdf(np.exp(zeta)) * np.exp(zeta)
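# Added sanity check: the change of variables should preserve total probability,
# so numerically integrating p_zeta over this grid should give roughly 1
# (a small amount of mass lies outside [-5, 5]).
print("area under p_zeta:", np.sum(p_zeta) * (zeta[1] - zeta[0]))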
plot_transformation(theta, zeta, p_theta, p_zeta)
###Output
_____no_output_____
###Markdown
Transformed Space Example-2Transformation of Uniform Distribution (with support $0<=x<=1$)$$P_X(x) = 1$$Let's use **logit** or **inverse sigmoid** transformation to map the support to real number line. Mathematically, $\zeta=logit(\theta)$.$$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\P(\zeta) = P(sig(\zeta)) * sig(\zeta) * (1-sig(\zeta))$$where $sig$ is the sigmoid function.Converting this directly into Python code -
###Code
theta = np.linspace(0, 1, 100)
zeta = np.linspace(-5, 5, 100)
dist = uniform()
p_theta = dist.pdf(theta)
sigmoid = sp.special.expit
p_zeta = dist.pdf(sigmoid(zeta)) * sigmoid(zeta) * (1-sigmoid(zeta))
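# Added cross-check: for a Uniform(0, 1) prior, the logit-transformed density is
# exactly the standard logistic pdf, so the two curves should coincide.
from scipy.stats import logistic
print("matches logistic pdf:", np.allclose(p_zeta, logistic.pdf(zeta)))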
plot_transformation(theta, zeta, p_theta, p_zeta)
###Output
_____no_output_____
###Markdown
Mean Field ADVI ExampleInfer $\mu$ and $\sigma$ for Normal distribution.
###Code
# Generating data
mu = 12
sigma = 2.2
data = np.random.normal(mu, sigma, size=200)
# Defining the model
model = tfd.JointDistributionSequential([
# sigma_prior
tfd.Exponential(1, name='sigma'),
# mu_prior
tfd.Normal(loc=0, scale=10, name='mu'),
# likelihood
lambda mu, sigma: tfd.Normal(loc=mu, scale=sigma)
])
print(model.resolve_graph())
# Let's generate joint log probability
joint_log_prob = lambda *x: model.log_prob(x + (data,))
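# Note (added): *x will be the (sigma, mu) values drawn from the surrogate;
# appending the observed `data` evaluates the joint density p(sigma, mu, data).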
# Build Mean Field ADVI
def build_mf_advi():
parameters = model.sample(1)
parameters.pop()
dists = []
for i, parameter in enumerate(parameters):
shape = parameter[0].shape
loc = tf.Variable(
tf.random.normal(shape, dtype=dtype),
name=f'meanfield_{i}_loc',
dtype=dtype
)
scale = tfp.util.TransformedVariable(
tf.fill(shape, value=tf.constant(0.02, dtype=dtype)),
tfb.Softplus(), # For positive values of scale
name=f'meanfield_{i}_scale'
)
approx_parameter = tfd.Normal(loc=loc, scale=scale)
dists.append(approx_parameter)
return tfd.JointDistributionSequential(dists)
meanfield_advi = build_mf_advi()
###Output
_____no_output_____
###Markdown
TFP handles transformations differently as it transforms unconstrained space to match the support of distributions.
###Code
unconstraining_bijectors = [
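    # Note (added): order follows the model's sampling order above -
    # Exp maps the real line to sigma's positive support, Identity leaves mu as is.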
tfb.Exp(),
tfb.Identity()
]
posterior = make_transformed_log_prob(
joint_log_prob,
unconstraining_bijectors,
direction='forward',
enable_bijector_caching=False
)
opt = tf.optimizers.Adam(learning_rate=.1)
@tf.function(autograph=False)
def run_approximation():
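    # Note (added): fit_surrogate_posterior minimizes the negative ELBO,
    # estimating the expectation with `sample_size` Monte Carlo draws from the
    # surrogate at each of the `num_steps` Adam steps.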
elbo_loss = tfp.vi.fit_surrogate_posterior(
posterior,
surrogate_posterior=meanfield_advi,
optimizer=opt,
sample_size=200,
num_steps=10000)
return elbo_loss
elbo_loss = run_approximation()
plt.plot(elbo_loss, color='blue')
plt.xlabel("No of iterations")
plt.ylabel("Negative ELBO")
plt.show()
graph_info = model.resolve_graph()
approx_param = dict()
free_param = meanfield_advi.trainable_variables
for i, (rvname, param) in enumerate(graph_info[:-1]):
approx_param[rvname] = {"mu": free_param[i*2].numpy(),
"sd": free_param[i*2+1].numpy()}
print(approx_param)
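# Note (added): these are the raw variational parameters in the unconstrained
# space, so the printed "sd" is the pre-softplus scale variable and the "mu" for
# sigma lives on the log scale (tf.exp of it should land near the true sigma).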
###Output
{'sigma': {'mu': 0.82331234, 'sd': -0.6924289}, 'mu': {'mu': 11.906398, 'sd': 1.6057507}}
###Markdown
Variational Inference Intro to Bayesian Networks Random VariablesRandom Variables are simply variables whose values are uncertain. Eg -1. In case of flipping a coin $n$ times, a random variable $X$ can be number of heads shown up.2. In COVID-19 pandemic situation, random variable can be number of patients found positive with virus daily. Probability DistributionsProbability Distributions governs the amount of uncertainty of random variables. They have a math function with which they assign probabilities to different values taken by random variables. The associated math function is called probability density function (pdf). For simplicity, let's denote any random variable as $X$ and its corresponding pdf as $P\left (X\right )$. Eg - Following figure shows the probability distribution for number of heads when an unbiased coin is flipped 5 times. Bayesian NetworksBayesian Networks are graph based representations to acccount for randomness while modelling our data. The nodes of the graph are random variables and the connections between nodes denote the direct influence from parent to child. Bayesian Network ExampleLet's say a student is taking a class during school. The `difficulty` of the class and the `intelligence` of the student together directly influence student's `grades`. And the `grades` affects his/her acceptance to the university. Also, the `intelligence` factor influences student's `SAT` score. Keep this example in mind.More formally, Bayesian Networks represent joint probability distribution over all the nodes of graph -$P\left (X_1, X_2, X_3, ..., X_n\right )$ or $P\left (\bigcap_{i=1}^{n}X_i\right )$ where $X_i$ is a random variable. Also Bayesian Networks follow local Markov property by which every node in the graph is independent on its **non-descendants** given its **parents**. In this way, the joint probability distribution can be decomposed as -$$P\left (X_1, X_2, X_3, ..., X_n\right ) = \prod_{i=1}^{n} P\left (X_i | Par\left (X_i\right )\right )$$ Extra: Proof of decomposition First, let's recall conditional probability, $$P\left (A|B\right ) = \frac{P\left (A, B\right )}{P\left (B\right )}$$ The above equation is so derived because of reduction of sample space of $A$ when $B$ has already occured. Now, adjusting terms - $$P\left (A, B\right ) = P\left (A|B\right )*P\left (B\right )$$ This equation is called chain rule of probability. Let's generalize this rule for Bayesian Networks. The ordering of names of nodes is such that parent(s) of nodes lie above them (Breadth First Ordering). $$P\left (X_1, X_2, X_3, ..., X_n\right ) = P\left (X_n, X_{n-1}, X_{n-2}, ..., X_1\right )\\ = P\left (X_n|X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) * P \left (X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) \left (Chain Rule\right )\\ = P\left (X_n|X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) * P \left (X_{n-1}|X_{n-2}, X_{n-3}, X_{n-4}, ..., X_1\right ) * P \left (X_{n-2}, X_{n-3}, X_{n-4}, ..., X_1\right )$$ Applying chain rule repeatedly, we get the following equation - $$P\left (\bigcap_{i=1}^{n}X_i\right ) = \prod_{i=1}^{n} P\left (X_i | P\left (\bigcap_{j=1}^{i-1}X_j\right )\right )$$ Keep the above equation in mind. Let's bring back Markov property. To bring some intuition behind Markov property, let's reuse Bayesian Network Example. If we say, the student scored very good grades, then it is highly likely the student gets acceptance letter to university. No matter how difficult the class was, how much intelligent the student was, and no matter what his/her SAT score was. 
The key thing to note here is by observing the node's parent, the influence by non-descendants towards the node gets eliminated. Now, the equation becomes - $$P\left (\bigcap_{i=1}^{n}X_i\right ) = \prod_{i=1}^{n} P\left (X_i | Par\left (X_i\right )\right )$$ Bingo, with the above equation, we have proved Factorization Theorem in Probability. The decomposition of running [Bayesian Network Example](bayesian-network-example) can be written as -$$P\left (Difficulty, Intelligence, Grade, SAT, Acceptance Letter\right ) = P\left (Difficulty\right )*P\left (Intelligence\right )*\left (Grade|Difficulty, Intelligence\right )*P\left (SAT|Intelligence\right )*P\left (Acceptance Letter|Grade\right )$$ Why care about Bayesian NetworksBayesian Networks allow us to determine the distribution of parameters given the data (Posterior Distribution). The whole idea is to model the underlying data generative process and estimate unobservable quantities. Regarding this, Bayes formula can be written as -$$P\left (\theta | D\right ) = \frac{P\left (D|\theta\right ) * P\left (\theta\right )}{P\left (D\right )}$$$\theta$ = Parameters of the model$P\left (\theta\right )$ = Prior Distribution over the parameters$P\left (D|\theta\right )$ = Likelihood of the data$P\left (\theta|D\right )$ = Posterior Distribution$P\left (D\right )$ = Probability of Data. This term is calculated by marginalising out the effect of parameters.$$P\left (D\right ) = \int P\left (D, \theta\right ) d\left (\theta\right )\\P\left (D\right ) = \int P\left (D|\theta\right ) P\left (\theta\right ) d\left (\theta\right )$$So, the Bayes formula becomes -$$P\left (\theta | D\right ) = \frac{P\left (D|\theta\right ) * P\left (\theta\right )}{\int P\left (D|\theta\right ) P\left (\theta\right ) d\left (\theta\right )}$$The devil is in the denominator. The integration over all the parameters is **intractable**. So we resort to sampling and optimization techniques. Intro to Variational Inference InformationVariational Inference has its origin in Information Theory. So first, let's understand the basic terms - Information and Entropy . Simply, **Information** quantifies how much useful the data is. It is related to Probability Distributions as -$$I = -\log \left (P\left (X\right )\right )$$The negative sign in the formula has high intuitive meaning. In words, it signifies whenever the probability of certain events is high, the related information is less and vica versa. For example -1. Consider the statement - It never snows in deserts. The probability of this statement being true is significantly high because we already know that it is hardly possible to snow in deserts. So, the related information is very small.2. Now consider - There was a snowfall in Sahara Desert in late December 2019. Wow, thats a great news because some unlikely event occured (probability was less). In turn, the information is high. EntropyEntropy quantifies how much **average** Information is present in occurence of events. It is denoted by $H$. It is named Differential Entropy in case of Real Continuous Domain.$$H = E_{P\left (X\right )} \left [-\log\left (P\left (X\right )\right )\right ]\\H = -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$$ Entropy of Normal DistributionAs an exercise, let's calculate entropy of Normal Distribution. Let's denote $\mu$ as mean nd $\sigma$ as standard deviation of Normal Distribution. 
Remember the results, we will need them further.$$X \sim Normal\left (\mu, \sigma^2\right )\\P_X\left (x\right ) = \frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\\H = -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$$Only expanding $\log\left (P_X\left (x\right )\right )$ -$$H = -\int_X P_X\left (x\right ) \log\left (\frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\right ) dx\\H = -\frac{1}{2}\int_X P_X\left (x\right ) \log\left (\frac{1}{2 \pi {\sigma}^2}\right )dx - \int_X P_X\left (x\right ) \log\left (e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\right ) dx\\H = \frac{1}{2}\log \left ( 2 \pi {\sigma}^2 \right)\int_X P_X\left (x\right ) dx + \frac{1}{2{\sigma}^2} \int_X \left ( x-\mu \right)^2 P_X\left (x\right ) dx$$Identifying terms -$$\int_X P_X\left (x\right ) dx = 1\\\int_X \left ( x-\mu \right)^2 P_X\left (x\right ) dx = \sigma^2$$Substituting back, the entropy becomes -$$H = \frac{1}{2}\log \left ( 2 \pi {\sigma}^2 \right) + \frac{1}{2\sigma^2} \sigma^2\\H = \frac{1}{2}\left ( \log \left ( 2 \pi {\sigma}^2 \right) + 1 \right )$$ KL divergenceThis mathematical tool serves as the backbone of Variational Inference. Kullback–Leibler (KL) divergence measures the mutual information between two probability distributions. Let's say, we have two probability distributions $P$ and $Q$, then KL divergence quantifies how much similar these distributions are. Mathematically, it is just the difference between entropies of probabilities distributions. In terms of notation, $KL(Q||P)$ represents KL divergence with respect to $Q$ against $P$.$$KL(Q||P) = H_P - H_Q\\= -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx + \int_X Q_X\left (x\right ) \log\left (Q_X\left (x\right )\right ) dx$$Changing $-\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$ to $-\int_X Q_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$ as the KL divergence is with respect to $Q$.$$= -\int_X Q_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx + \int_X Q_X\left (x\right ) \log\left (Q_X\left (x\right )\right ) dx\\= \int_X Q_X\left (x \right) \log \left( \frac{Q_X\left (x \right)}{P_X\left (x \right)} \right) dx$$Remember? We were stuck upon Bayesian Equation because of denominator term but now, we can estimate the posterior distribution $p(\theta|D)$ by another distribution $q(\theta)$ over all the parameters of the model.$$KL(q(\theta)||p(\theta|D)) = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta|D)} \right) d\theta\\$$ Note If two distributions are similar, then their entropies are similar, implies the KL divergence with respect to two distributions will be smaller. And vica versa. In Variational Inference, the whole idea is to minimize KL divergence so that our approximating distribution $q(\theta)$ can be made similar to $p(\theta|D)$. Extra: What are latent variables? If you go about exploring any paper talking about Variational Inference, then most certainly, the papers mention about latent variables instead of parameters. The parameters are fixed quantities for the model whereas latent variables are unobserved quantities of the model conditioned on parameters. Also, we model parameters by probability distributions. For simplicity, let's consider the running terminology of parameters only. Evidence Lower BoundThere is again an issue with KL divergence formula as it still involves posterior term i.e. $p(\theta|D)$. 
Let's get rid of it -$$KL(q(\theta)||p(\theta|D)) = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta|D)} \right) d\theta\\KL = \int q(\theta) \log \left( \frac{q(\theta) p(D)}{p(\theta, D)} \right) d\theta\\KL = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta, D)} \right) d\theta + \int q(\theta) \log \left(p(D) \right) d\theta\\KL + \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta = \log \left(p(D) \right) \int q(\theta) d\theta\\$$Identifying terms -$$\int q(\theta) d\theta = 1$$So, substituting back, our running equation becomes -$$KL + \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta = \log \left(p(D) \right)$$The term $\int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta$ is called Evidence Lower Bound (ELBO). The right side of the equation $\log \left(p(D) \right)$ is constant. Observe Minimizing the KL divergence is equivalent to maximizing the ELBO. Also, the ELBO does not depend on posterior distribution. Also,$$ELBO = \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta\\ELBO = E_{q(\theta)}\left [\log \left( \frac{p(\theta, D)}{q(\theta)} \right) \right]\\ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + E_{q(\theta)} \left [-\log(q(\theta)) \right]$$The term $E_{q(\theta)} \left [-\log(q(\theta)) \right]$ is entropy of $q(\theta)$. Our running equation becomes -$$ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + H_{q(\theta)}$$ Mean Field ADVISo far, the whole crux of the story is - To approximate the posterior, maximize the ELBO term. ADVI = Automatic Differentiation Variational Inference. I think the term **Automatic Differentiation** deals with maximizing the ELBO (or minimizing the negative ELBO) using any autograd differentiation library. Coming to Mean Field ADVI (MF ADVI), we simply assume that the parameters of approximating distribution $q(\theta)$ are independent and posit Normal distributions over all parameters in **transformed** space to maximize ELBO. Transformed SpaceTo freely optimize ELBO, without caring about matching the **support** of model parameters, we **transform** the support of parameters to Real Coordinate Space. In other words, we optimize ELBO in transformed/unconstrained/unbounded space which automatically maps to minimization of KL divergence in original space. In terms of notation, let's denote a transformation over parameters $\theta$ as $T$ and the transformed parameters as $\zeta$. Mathematically, $\zeta=T(\theta)$. Also, since we are approximating by Normal Distributions, $q(\zeta)$ can be written as -$$q(\zeta) = \prod_{i=1}^{k} N(\zeta_k; \mu_k, \sigma^2_k)$$Now, the transformed joint probability distribution of the model becomes -$$p\left (D, \zeta \right) = p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right |\\$$ Extra: Proof of transformation equation To simplify notations, let's use $Y=T(X)$ instead of $\zeta=T(\theta)$. After reaching the results, we will put the values back. Also, let's denote cummulative distribution function (cdf) as $F$. 
There are two cases which respect to properties of function $T$.Case 1 - When $T$ is an increasing function $$F_Y(y) = P(Y <= y) = P(T(X) <= y)\\ = P\left(X <= T^{-1}(y) \right) = F_X\left(T^{-1}(y) \right)\\ F_Y(y) = F_X\left(T^{-1}(y) \right)$$Let's differentiate with respect to $y$ both sides - $$\frac{\mathrm{d} (F_Y(y))}{\mathrm{d} y} = \frac{\mathrm{d} (F_X\left(T^{-1}(y) \right))}{\mathrm{d} y}\\ P_Y(y) = P_X\left(T^{-1}(y) \right) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}$$Case 2 - When $T$ is a descreasing function $$F_Y(y) = P(Y = T^{-1}(y) \right)\\ = 1-P\left(X < T^{-1}(y) \right) = 1-P\left(X <= T^{-1}(y) \right) = 1-F_X\left(T^{-1}(y) \right)\\ F_Y(y) = 1-F_X\left(T^{-1}(y) \right)$$Let's differentiate with respect to $y$ both sides - $$\frac{\mathrm{d} (F_Y(y))}{\mathrm{d} y} = \frac{\mathrm{d} (1-F_X\left(T^{-1}(y) \right))}{\mathrm{d} y}\\ P_Y(y) = (-1) P_X\left(T^{-1}(y) \right) (-1) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}\\ P_Y(y) = P_X\left(T^{-1}(y) \right) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}$$Combining both results - $$P_Y(y) = P_X\left(T^{-1}(y) \right) \left | \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y} \right |$$Now comes the role of Jacobians to deal with multivariate parameters $X$ and $Y$. $$J_{T^{-1}}(Y) = \begin{vmatrix} \frac{\partial (T_1^{-1})}{\partial y_1} & ... & \frac{\partial (T_1^{-1})}{\partial y_k}\\ . & & .\\ . & & .\\ \frac{\partial (T_k^{-1})}{\partial y_1} & ... &\frac{\partial (T_k^{-1})}{\partial y_k} \end{vmatrix}$$Concluding - $$P(Y) = P(T^{-1}(Y)) |det J_{T^{-1}}(Y)|\\P(Y) = P(X) |det J_{T^{-1}}(Y)| $$Substitute $X$ as $\theta$ and $Y$ as $\zeta$, we will get - $$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\$$ ELBO in transformed SpaceLet's bring back the equation formed at [ELBO](evidence-lower-bound). Expressing ELBO in terms of $\zeta$ -$$ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + H_{q(\theta)}\\ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + H_{q(\zeta)}$$Since, we are optimizing ELBO by factorized Normal Distributions, let's bring back the results of [Entropy of Normal Distribution](entropy-of-normal-distribution). Our running equation becomes -$$ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + H_{q(\zeta)}\\ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + \frac{1}{2}\left ( \log \left ( 2 \pi {\sigma}^2 \right) + 1 \right )$$ Success The above ELBO equation is the final one which needs to be optimized. Let's Code
###Code
# Imports
%matplotlib inline
import numpy as np
import scipy as sp
import pandas as pd
import tensorflow as tf
from scipy.stats import expon, uniform
import arviz as az
import pymc3 as pm
import matplotlib.pyplot as plt
import tensorflow_probability as tfp
from pprint import pprint
plt.style.use("seaborn-darkgrid")
from tensorflow_probability.python.mcmc.transformed_kernel import (
make_transform_fn, make_transformed_log_prob)
tfb = tfp.bijectors
tfd = tfp.distributions
dtype = tf.float32
# Plot functions
def plot_transformation(theta, zeta, p_theta, p_zeta):
fig, (const, trans) = plt.subplots(nrows=2, ncols=1, figsize=(6.5, 12))
const.plot(theta, p_theta, color='blue', lw=2)
const.set_xlabel(r"$\theta$")
const.set_ylabel(r"$P(\theta)$")
const.set_title("Constrained Space")
trans.plot(zeta, p_zeta, color='blue', lw=2)
trans.set_xlabel(r"$\zeta$")
trans.set_ylabel(r"$P(\zeta)$")
trans.set_title("Transfomed Space");
###Output
_____no_output_____
###Markdown
Transformed Space Example-1Transformation of Standard Exponential Distribution$$P_X(x) = e^{-x}$$The support of Exponential Distribution is $x>=0$. Let's use **log** transformation to map the support to real number line. Mathematically, $\zeta=\log(\theta)$. Now, let's bring back our transformed joint probability distribution equation -$$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\P(\zeta) = P(e^{\zeta}) * e^{\zeta}$$Converting this directly into Python code -
###Code
theta = np.linspace(0, 5, 100)
zeta = np.linspace(-5, 5, 100)
dist = expon()
p_theta = dist.pdf(theta)
p_zeta = dist.pdf(np.exp(zeta)) * np.exp(zeta)
plot_transformation(theta, zeta, p_theta, p_zeta)
###Output
_____no_output_____
###Markdown
Transformed Space Example-2Transformation of Uniform Distribution (with support $0<=x<=1$)$$P_X(x) = 1$$Let's use **logit** or **inverse sigmoid** transformation to map the support to real number line. Mathematically, $\zeta=logit(\theta)$.$$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\P(\zeta) = P(sig(\zeta)) * sig(\zeta) * (1-sig(\zeta))$$where $sig$ is the sigmoid function.Converting this directly into Python code -
###Code
theta = np.linspace(0, 1, 100)
zeta = np.linspace(-5, 5, 100)
dist = uniform()
p_theta = dist.pdf(theta)
sigmoid = sp.special.expit
p_zeta = dist.pdf(sigmoid(zeta)) * sigmoid(zeta) * (1-sigmoid(zeta))
plot_transformation(theta, zeta, p_theta, p_zeta)
###Output
_____no_output_____
###Markdown
Mean Field ADVI ExampleInfer $\mu$ and $\sigma$ for Normal distribution.
###Code
# Generating data
mu = 12
sigma = 2.2
data = np.random.normal(mu, sigma, size=200)
# Defining the model
model = tfd.JointDistributionSequential([
# sigma_prior
tfd.Exponential(1, name='sigma'),
# mu_prior
tfd.Normal(loc=0, scale=10, name='mu'),
# likelihood
lambda mu, sigma: tfd.Normal(loc=mu, scale=sigma)
])
print(model.resolve_graph())
# Let's generate joint log probability
joint_log_prob = lambda *x: model.log_prob(x + (data,))
# Build Mean Field ADVI
def build_mf_advi():
parameters = model.sample(1)
parameters.pop()
dists = []
for i, parameter in enumerate(parameters):
shape = parameter[0].shape
loc = tf.Variable(
tf.random.normal(shape, dtype=dtype),
name=f'meanfield_{i}_loc',
dtype=dtype
)
scale = tfp.util.TransformedVariable(
tf.fill(shape, value=tf.constant(0.02, dtype=dtype)),
tfb.Softplus(), # For positive values of scale
name=f'meanfield_{i}_scale'
)
approx_parameter = tfd.Normal(loc=loc, scale=scale)
dists.append(approx_parameter)
return tfd.JointDistributionSequential(dists)
meanfield_advi = build_mf_advi()
###Output
_____no_output_____
###Markdown
TFP handles transformations differently as it transforms unconstrained space to match the support of distributions.
###Code
unconstraining_bijectors = [
tfb.Exp(),
tfb.Identity()
]
posterior = make_transformed_log_prob(
joint_log_prob,
unconstraining_bijectors,
direction='forward',
enable_bijector_caching=False
)
opt = tf.optimizers.Adam(learning_rate=.1)
@tf.function(autograph=False)
def run_approximation():
elbo_loss = tfp.vi.fit_surrogate_posterior(
posterior,
surrogate_posterior=meanfield_advi,
optimizer=opt,
sample_size=200,
num_steps=10000)
return elbo_loss
elbo_loss = run_approximation()
plt.plot(elbo_loss, color='blue')
plt.xlabel("No of iterations")
plt.ylabel("Negative ELBO")
plt.show()
graph_info = model.resolve_graph()
approx_param = dict()
free_param = meanfield_advi.trainable_variables
for i, (rvname, param) in enumerate(graph_info[:-1]):
approx_param[rvname] = {"mu": free_param[i*2].numpy(),
"sd": free_param[i*2+1].numpy()}
print(approx_param)
###Output
{'sigma': {'mu': 0.82331234, 'sd': -0.6924289}, 'mu': {'mu': 11.906398, 'sd': 1.6057507}}
###Markdown
Variational Inference Intro to Bayesian Networks Random VariablesRandom Variables are simply variables whose values are uncertain. Eg -1. In case of flipping a coin $n$ times, a random variable $X$ can be number of heads shown up.2. In COVID-19 pandemic situation, random variable can be number of patients found positive with virus daily. Probability DistributionsProbability Distributions governs the amount of uncertainty of random variables. They have a math function with which they assign probabilities to different values taken by random variables. The associated math function is called probability density function (pdf). For simplicity, let's denote any random variable as $X$ and its corresponding pdf as $P\left (X\right )$. Eg - Following figure shows the probability distribution for number of heads when an unbiased coin is flipped 5 times. Bayesian NetworksBayesian Networks are graph based representations to acccount for randomness while modelling our data. The nodes of the graph are random variables and the connections between nodes denote the direct influence from parent to child. Bayesian Network ExampleLet's say a student is taking a class during school. The `difficulty` of the class and the `intelligence` of the student together directly influence student's `grades`. And the `grades` affects his/her acceptance to the university. Also, the `intelligence` factor influences student's `SAT` score. Keep this example in mind.More formally, Bayesian Networks represent joint probability distribution over all the nodes of graph -$P\left (X_1, X_2, X_3, ..., X_n\right )$ or $P\left (\bigcap_{i=1}^{n}X_i\right )$ where $X_i$ is a random variable. Also Bayesian Networks follow local Markov property by which every node in the graph is independent on its **non-descendants** given its **parents**. In this way, the joint probability distribution can be decomposed as -$$P\left (X_1, X_2, X_3, ..., X_n\right ) = \prod_{i=1}^{n} P\left (X_i | Par\left (X_i\right )\right )$$ Extra: Proof of decomposition First, let's recall conditional probability, $$P\left (A|B\right ) = \frac{P\left (A, B\right )}{P\left (B\right )}$$ The above equation is so derived because of reduction of sample space of $A$ when $B$ has already occured. Now, adjusting terms - $$P\left (A, B\right ) = P\left (A|B\right )*P\left (B\right )$$ This equation is called chain rule of probability. Let's generalize this rule for Bayesian Networks. The ordering of names of nodes is such that parent(s) of nodes lie above them (Breadth First Ordering). $$P\left (X_1, X_2, X_3, ..., X_n\right ) = P\left (X_n, X_{n-1}, X_{n-2}, ..., X_1\right )\\ = P\left (X_n|X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) * P \left (X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) \left (Chain Rule\right )\\ = P\left (X_n|X_{n-1}, X_{n-2}, X_{n-3}, ..., X_1\right ) * P \left (X_{n-1}|X_{n-2}, X_{n-3}, X_{n-4}, ..., X_1\right ) * P \left (X_{n-2}, X_{n-3}, X_{n-4}, ..., X_1\right )$$ Applying chain rule repeatedly, we get the following equation - $$P\left (\bigcap_{i=1}^{n}X_i\right ) = \prod_{i=1}^{n} P\left (X_i | P\left (\bigcap_{j=1}^{i-1}X_j\right )\right )$$ Keep the above equation in mind. Let's bring back Markov property. To bring some intuition behind Markov property, let's reuse Bayesian Network Example. If we say, the student scored very good grades, then it is highly likely the student gets acceptance letter to university. No matter how difficult the class was, how much intelligent the student was, and no matter what his/her SAT score was. 
The key thing to note here is by observing the node's parent, the influence by non-descendants towards the node gets eliminated. Now, the equation becomes - $$P\left (\bigcap_{i=1}^{n}X_i\right ) = \prod_{i=1}^{n} P\left (X_i | Par\left (X_i\right )\right )$$ Bingo, with the above equation, we have proved Factorization Theorem in Probability. The decomposition of running [Bayesian Network Example](bayesian-network-example) can be written as -$$P\left (Difficulty, Intelligence, Grade, SAT, Acceptance Letter\right ) = P\left (Difficulty\right )*P\left (Intelligence\right )*\left (Grade|Difficulty, Intelligence\right )*P\left (SAT|Intelligence\right )*P\left (Acceptance Letter|Grade\right )$$ Why care about Bayesian NetworksBayesian Networks allow us to determine the distribution of parameters given the data (Posterior Distribution). The whole idea is to model the underlying data generative process and estimate unobservable quantities. Regarding this, Bayes formula can be written as -$$P\left (\theta | D\right ) = \frac{P\left (D|\theta\right ) * P\left (\theta\right )}{P\left (D\right )}$$$\theta$ = Parameters of the model$P\left (\theta\right )$ = Prior Distribution over the parameters$P\left (D|\theta\right )$ = Likelihood of the data$P\left (\theta|D\right )$ = Posterior Distribution$P\left (D\right )$ = Probability of Data. This term is calculated by marginalising out the effect of parameters.$$P\left (D\right ) = \int P\left (D, \theta\right ) d\left (\theta\right )\\P\left (D\right ) = \int P\left (D|\theta\right ) P\left (\theta\right ) d\left (\theta\right )$$So, the Bayes formula becomes -$$P\left (\theta | D\right ) = \frac{P\left (D|\theta\right ) * P\left (\theta\right )}{\int P\left (D|\theta\right ) P\left (\theta\right ) d\left (\theta\right )}$$The devil is in the denominator. The integration over all the parameters is **intractable**. So we resort to sampling and optimization techniques. Intro to Variational Inference InformationVariational Inference has its origin in Information Theory. So first, let's understand the basic terms - Information and Entropy . Simply, **Information** quantifies how much useful the data is. It is related to Probability Distributions as -$$I = -\log \left (P\left (X\right )\right )$$The negative sign in the formula has high intuitive meaning. In words, it signifies whenever the probability of certain events is high, the related information is less and vica versa. For example -1. Consider the statement - It never snows in deserts. The probability of this statement being true is significantly high because we already know that it is hardly possible to snow in deserts. So, the related information is very small.2. Now consider - There was a snowfall in Sahara Desert in late December 2019. Wow, thats a great news because some unlikely event occured (probability was less). In turn, the information is high. EntropyEntropy quantifies how much **average** Information is present in occurence of events. It is denoted by $H$. It is named Differential Entropy in case of Real Continuous Domain.$$H = E_{P\left (X\right )} \left [-\log\left (P\left (X\right )\right )\right ]\\H = -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$$ Entropy of Normal DistributionAs an exercise, let's calculate entropy of Normal Distribution. Let's denote $\mu$ as mean nd $\sigma$ as standard deviation of Normal Distribution. 
Remember the results, we will need them further.$$X \sim Normal\left (\mu, \sigma^2\right )\\P_X\left (x\right ) = \frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\\H = -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$$Only expanding $\log\left (P_X\left (x\right )\right )$ -$$H = -\int_X P_X\left (x\right ) \log\left (\frac{1}{\sigma \sqrt{2 \pi}} e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\right ) dx\\H = -\frac{1}{2}\int_X P_X\left (x\right ) \log\left (\frac{1}{2 \pi {\sigma}^2}\right )dx - \int_X P_X\left (x\right ) \log\left (e^{ - \frac{1}{2} \left ({\frac{x- \mu}{ \sigma}}\right )^2}\right ) dx\\H = \frac{1}{2}\log \left ( 2 \pi {\sigma}^2 \right)\int_X P_X\left (x\right ) dx + \frac{1}{2{\sigma}^2} \int_X \left ( x-\mu \right)^2 P_X\left (x\right ) dx$$Identifying terms -$$\int_X P_X\left (x\right ) dx = 1\\\int_X \left ( x-\mu \right)^2 P_X\left (x\right ) dx = \sigma^2$$Substituting back, the entropy becomes -$$H = \frac{1}{2}\log \left ( 2 \pi {\sigma}^2 \right) + \frac{1}{2\sigma^2} \sigma^2\\H = \frac{1}{2}\left ( \log \left ( 2 \pi {\sigma}^2 \right) + 1 \right )$$ KL divergenceThis mathematical tool serves as the backbone of Variational Inference. Kullback–Leibler (KL) divergence measures the mutual information between two probability distributions. Let's say, we have two probability distributions $P$ and $Q$, then KL divergence quantifies how much similar these distributions are. Mathematically, it is just the difference between entropies of probabilities distributions. In terms of notation, $KL(Q||P)$ represents KL divergence with respect to $Q$ against $P$.$$KL(Q||P) = H_P - H_Q\\= -\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx + \int_X Q_X\left (x\right ) \log\left (Q_X\left (x\right )\right ) dx$$Changing $-\int_X P_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$ to $-\int_X Q_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx$ as the KL divergence is with respect to $Q$.$$= -\int_X Q_X\left (x\right ) \log\left (P_X\left (x\right )\right ) dx + \int_X Q_X\left (x\right ) \log\left (Q_X\left (x\right )\right ) dx\\= \int_X Q_X\left (x \right) \log \left( \frac{Q_X\left (x \right)}{P_X\left (x \right)} \right) dx$$Remember? We were stuck upon Bayesian Equation because of denominator term but now, we can estimate the posterior distribution $p(\theta|D)$ by another distribution $q(\theta)$ over all the parameters of the model.$$KL(q(\theta)||p(\theta|D)) = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta|D)} \right) d\theta\\$$ Note If two distributions are similar, then their entropies are similar, implies the KL divergence with respect to two distributions will be smaller. And vica versa. In Variational Inference, the whole idea is to minimize KL divergence so that our approximating distribution $q(\theta)$ can be made similar to $p(\theta|D)$. Extra: What are latent variables? If you go about exploring any paper talking about Variational Inference, then most certainly, the papers mention about latent variables instead of parameters. The parameters are fixed quantities for the model whereas latent variables are unobserved quantities of the model conditioned on parameters. Also, we model parameters by probability distributions. For simplicity, let's consider the running terminology of parameters only. Evidence Lower BoundThere is again an issue with KL divergence formula as it still involves posterior term i.e. $p(\theta|D)$. 
Let's get rid of it -$$KL(q(\theta)||p(\theta|D)) = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta|D)} \right) d\theta\\KL = \int q(\theta) \log \left( \frac{q(\theta) p(D)}{p(\theta, D)} \right) d\theta\\KL = \int q(\theta) \log \left( \frac{q(\theta)}{p(\theta, D)} \right) d\theta + \int q(\theta) \log \left(p(D) \right) d\theta\\KL + \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta = \log \left(p(D) \right) \int q(\theta) d\theta\\$$Identifying terms -$$\int q(\theta) d\theta = 1$$So, substituting back, our running equation becomes -$$KL + \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta = \log \left(p(D) \right)$$The term $\int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta$ is called Evidence Lower Bound (ELBO). The right side of the equation $\log \left(p(D) \right)$ is constant. Observe Minimizing the KL divergence is equivalent to maximizing the ELBO. Also, the ELBO does not depend on posterior distribution. Also,$$ELBO = \int q(\theta) \log \left( \frac{p(\theta, D)}{q(\theta)} \right) d\theta\\ELBO = E_{q(\theta)}\left [\log \left( \frac{p(\theta, D)}{q(\theta)} \right) \right]\\ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + E_{q(\theta)} \left [-\log(q(\theta)) \right]$$The term $E_{q(\theta)} \left [-\log(q(\theta)) \right]$ is entropy of $q(\theta)$. Our running equation becomes -$$ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + H_{q(\theta)}$$ Mean Field ADVISo far, the whole crux of the story is - To approximate the posterior, maximize the ELBO term. ADVI = Automatic Differentiation Variational Inference. I think the term **Automatic Differentiation** deals with maximizing the ELBO (or minimizing the negative ELBO) using any autograd differentiation library. Coming to Mean Field ADVI (MF ADVI), we simply assume that the parameters of approximating distribution $q(\theta)$ are independent and posit Normal distributions over all parameters in **transformed** space to maximize ELBO. Transformed SpaceTo freely optimize ELBO, without caring about matching the **support** of model parameters, we **transform** the support of parameters to Real Coordinate Space. In other words, we optimize ELBO in transformed/unconstrained/unbounded space which automatically maps to minimization of KL divergence in original space. In terms of notation, let's denote a transformation over parameters $\theta$ as $T$ and the transformed parameters as $\zeta$. Mathematically, $\zeta=T(\theta)$. Also, since we are approximating by Normal Distributions, $q(\zeta)$ can be written as -$$q(\zeta) = \prod_{i=1}^{k} N(\zeta_k; \mu_k, \sigma^2_k)$$Now, the transformed joint probability distribution of the model becomes -$$p\left (D, \zeta \right) = p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right |\\$$ Extra: Proof of transformation equation To simplify notations, let's use $Y=T(X)$ instead of $\zeta=T(\theta)$. After reaching the results, we will put the values back. Also, let's denote cummulative distribution function (cdf) as $F$. 
There are two cases which respect to properties of function $T$.Case 1 - When $T$ is an increasing function $$F_Y(y) = P(Y <= y) = P(T(X) <= y)\\ = P\left(X <= T^{-1}(y) \right) = F_X\left(T^{-1}(y) \right)\\ F_Y(y) = F_X\left(T^{-1}(y) \right)$$Let's differentiate with respect to $y$ both sides - $$\frac{\mathrm{d} (F_Y(y))}{\mathrm{d} y} = \frac{\mathrm{d} (F_X\left(T^{-1}(y) \right))}{\mathrm{d} y}\\ P_Y(y) = P_X\left(T^{-1}(y) \right) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}$$Case 2 - When $T$ is a descreasing function $$F_Y(y) = P(Y = T^{-1}(y) \right)\\ = 1-P\left(X < T^{-1}(y) \right) = 1-P\left(X <= T^{-1}(y) \right) = 1-F_X\left(T^{-1}(y) \right)\\ F_Y(y) = 1-F_X\left(T^{-1}(y) \right)$$Let's differentiate with respect to $y$ both sides - $$\frac{\mathrm{d} (F_Y(y))}{\mathrm{d} y} = \frac{\mathrm{d} (1-F_X\left(T^{-1}(y) \right))}{\mathrm{d} y}\\ P_Y(y) = (-1) P_X\left(T^{-1}(y) \right) (-1) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}\\ P_Y(y) = P_X\left(T^{-1}(y) \right) \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y}$$Combining both results - $$P_Y(y) = P_X\left(T^{-1}(y) \right) \left | \frac{\mathrm{d} (T^{-1}(y))}{\mathrm{d} y} \right |$$Now comes the role of Jacobians to deal with multivariate parameters $X$ and $Y$. $$J_{T^{-1}}(Y) = \begin{vmatrix} \frac{\partial (T_1^{-1})}{\partial y_1} & ... & \frac{\partial (T_1^{-1})}{\partial y_k}\\ . & & .\\ . & & .\\ \frac{\partial (T_k^{-1})}{\partial y_1} & ... &\frac{\partial (T_k^{-1})}{\partial y_k} \end{vmatrix}$$Concluding - $$P(Y) = P(T^{-1}(Y)) |det J_{T^{-1}}(Y)|\\P(Y) = P(X) |det J_{T^{-1}}(Y)| $$Substitute $X$ as $\theta$ and $Y$ as $\zeta$, we will get - $$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\$$ ELBO in transformed SpaceLet's bring back the equation formed at [ELBO](evidence-lower-bound). Expressing ELBO in terms of $\zeta$ -$$ELBO = E_{q(\theta)}\left [\log \left(p(\theta, D) \right) \right] + H_{q(\theta)}\\ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + H_{q(\zeta)}$$Since, we are optimizing ELBO by factorized Normal Distributions, let's bring back the results of [Entropy of Normal Distribution](entropy-of-normal-distribution). Our running equation becomes -$$ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + H_{q(\zeta)}\\ELBO = E_{q(\zeta)}\left [\log \left(p\left (D, T^{-1}\left (\zeta \right) \right) \left | det J_{T^{-1}}(\zeta) \right | \right) \right] + \frac{1}{2}\left ( \log \left ( 2 \pi {\sigma}^2 \right) + 1 \right )$$ Success The above ELBO equation is the final one which needs to be optimized. Let's Code
###Code
# Imports
%matplotlib inline
import numpy as np
import scipy as sp
import pandas as pd
import tensorflow as tf
from scipy.stats import expon, uniform
import arviz as az
import pymc3 as pm
import matplotlib.pyplot as plt
import tensorflow_probability as tfp
from pprint import pprint
plt.style.use("seaborn-darkgrid")
from tensorflow_probability.python.mcmc.transformed_kernel import (
make_transform_fn, make_transformed_log_prob)
tfb = tfp.bijectors
tfd = tfp.distributions
dtype = tf.float32
# Plot functions
def plot_transformation(theta, zeta, p_theta, p_zeta):
fig, (const, trans) = plt.subplots(nrows=2, ncols=1, figsize=(6.5, 12))
const.plot(theta, p_theta, color='blue', lw=2)
const.set_xlabel(r"$\theta$")
const.set_ylabel(r"$P(\theta)$")
const.set_title("Constrained Space")
trans.plot(zeta, p_zeta, color='blue', lw=2)
trans.set_xlabel(r"$\zeta$")
trans.set_ylabel(r"$P(\zeta)$")
trans.set_title("Transfomed Space");
###Output
_____no_output_____
###Markdown
Transformed Space Example-1Transformation of Standard Exponential Distribution$$P_X(x) = e^{-x}$$The support of Exponential Distribution is $x>=0$. Let's use **log** transformation to map the support to real number line. Mathematically, $\zeta=\log(\theta)$. Now, let's bring back our transformed joint probability distribution equation -$$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\P(\zeta) = P(e^{\zeta}) * e^{\zeta}$$Converting this directly into Python code -
###Code
theta = np.linspace(0, 5, 100)
zeta = np.linspace(-5, 5, 100)
dist = expon()
p_theta = dist.pdf(theta)
p_zeta = dist.pdf(np.exp(zeta)) * np.exp(zeta)
plot_transformation(theta, zeta, p_theta, p_zeta)
###Output
_____no_output_____
###Markdown
Transformed Space Example-2Transformation of Uniform Distribution (with support $0<=x<=1$)$$P_X(x) = 1$$Let's use **logit** or **inverse sigmoid** transformation to map the support to real number line. Mathematically, $\zeta=logit(\theta)$.$$P(\zeta) = P(T^{-1}(\zeta)) |det J_{T^{-1}}(\zeta)|\\P(\zeta) = P(sig(\zeta)) * sig(\zeta) * (1-sig(\zeta))$$where $sig$ is the sigmoid function.Converting this directly into Python code -
###Code
theta = np.linspace(0, 1, 100)
zeta = np.linspace(-5, 5, 100)
dist = uniform()
p_theta = dist.pdf(theta)
sigmoid = sp.special.expit
p_zeta = dist.pdf(sigmoid(zeta)) * sigmoid(zeta) * (1-sigmoid(zeta))
plot_transformation(theta, zeta, p_theta, p_zeta)
###Output
_____no_output_____
###Markdown
Mean Field ADVI Example: Infer $\mu$ and $\sigma$ for a Normal distribution.
###Code
# Generating data
mu = 12
sigma = 2.2
data = np.random.normal(mu, sigma, size=200)
# Defining the model
model = tfd.JointDistributionSequential([
# sigma_prior
tfd.Exponential(1, name='sigma'),
# mu_prior
tfd.Normal(loc=0, scale=10, name='mu'),
# likelihood
lambda mu, sigma: tfd.Normal(loc=mu, scale=sigma)
])
print(model.resolve_graph())
# Let's generate joint log probability
joint_log_prob = lambda *x: model.log_prob(x + (data,))
# Build Mean Field ADVI
def build_mf_advi():
parameters = model.sample(1)
parameters.pop()
dists = []
for i, parameter in enumerate(parameters):
shape = parameter[0].shape
loc = tf.Variable(
tf.random.normal(shape, dtype=dtype),
name=f'meanfield_{i}_loc',
dtype=dtype
)
scale = tfp.util.TransformedVariable(
tf.fill(shape, value=tf.constant(0.02, dtype=dtype)),
tfb.Softplus(), # For positive values of scale
name=f'meanfield_{i}_scale'
)
approx_parameter = tfd.Normal(loc=loc, scale=scale)
dists.append(approx_parameter)
return tfd.JointDistributionSequential(dists)
meanfield_advi = build_mf_advi()
###Output
_____no_output_____
###Markdown
TFP handles these transformations in the opposite direction: the unconstraining bijectors below map the unconstrained real space onto the support of each distribution, which is why `direction='forward'` is passed to `make_transformed_log_prob`.
###Code
unconstraining_bijectors = [
tfb.Exp(),
tfb.Identity()
]
posterior = make_transformed_log_prob(
joint_log_prob,
unconstraining_bijectors,
direction='forward',
enable_bijector_caching=False
)
opt = tf.optimizers.Adam(learning_rate=.1)
@tf.function(autograph=False)
def run_approximation():
elbo_loss = tfp.vi.fit_surrogate_posterior(
posterior,
surrogate_posterior=meanfield_advi,
optimizer=opt,
sample_size=200,
num_steps=10000)
return elbo_loss
elbo_loss = run_approximation()
plt.plot(elbo_loss, color='blue')
plt.xlabel("No of iterations")
plt.ylabel("Negative ELBO")
plt.show()
graph_info = model.resolve_graph()
approx_param = dict()
free_param = meanfield_advi.trainable_variables
for i, (rvname, param) in enumerate(graph_info[:-1]):
approx_param[rvname] = {"mu": free_param[i*2].numpy(),
"sd": free_param[i*2+1].numpy()}
print(approx_param)
###Output
{'sigma': {'mu': 0.82331234, 'sd': -0.6924289}, 'mu': {'mu': 11.906398, 'sd': 1.6057507}}
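###Markdown
Note that the `sd` values printed above are the raw trainable variables, i.e. the pre-softplus parameters of the `TransformedVariable` scales (which is why one of them is negative). A minimal sketch of recovering the actual standard deviations of the approximating Normals, assuming the loc/scale variable ordering used above:
###Code
# Apply the same Softplus bijector used when the scales were defined
# to map the raw variables back to positive standard deviations.
for rvname, param in approx_param.items():
    actual_sd = tf.nn.softplus(param["sd"]).numpy()
    print(rvname, "loc:", param["mu"], "scale:", actual_sd)
###Output
_____no_output_____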
|
dev/encoding/vqgan-jax-encoding-with-captions.ipynb | ###Markdown
vqgan-jax-encoding-with-captions Notebook based on [vqgan-jax-reconstruction](https://colab.research.google.com/drive/1mdXXsMbV6K_LTvCh3IImRsFIWcKU5m1w?usp=sharing) by @surajpatil.We process a `tsv` file with `image_file` and `caption` fields, and add a `vqgan_indices` column with indices extracted from a VQGAN-JAX model.
###Code
import io
import requests
from PIL import Image
import numpy as np
import pandas as pd  # used below when writing the encoded batches to TSV
from tqdm import tqdm
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode
from torch.utils.data import Dataset, DataLoader
import jax
from jax import pmap
###Output
_____no_output_____
###Markdown
VQGAN-JAX model
###Code
from vqgan_jax.modeling_flax_vqgan import VQModel
###Output
_____no_output_____
###Markdown
We'll use a VQGAN trained by using Taming Transformers and converted to a JAX model.
###Code
model = VQModel.from_pretrained("flax-community/vqgan_f16_16384")
###Output
_____no_output_____
###Markdown
Dataset We use Luke Melas-Kyriazi's `dataset.py` which reads image paths and captions from a tsv file that contains both. We only need the images for encoding.
###Code
from dalle_mini.dataset import *
cc12m_images = '/data/CC12M/images'
cc12m_list = '/data/CC12M/images-list-clean.tsv'
# cc12m_list = '/data/CC12M/images-10000.tsv'
cc12m_output = '/data/CC12M/images-encoded.tsv'
image_size = 256
def image_transform(image):
s = min(image.size)
r = image_size / s
s = (round(r * image.size[1]), round(r * image.size[0]))
image = TF.resize(image, s, interpolation=InterpolationMode.LANCZOS)
image = TF.center_crop(image, output_size = 2 * [image_size])
image = torch.unsqueeze(T.ToTensor()(image), 0)
image = image.permute(0, 2, 3, 1).numpy()
return image
dataset = CaptionDataset(
images_root=cc12m_images,
captions_path=cc12m_list,
image_transform=image_transform,
image_transform_type='torchvision',
include_captions=False
)
len(dataset)
###Output
_____no_output_____
###Markdown
Encoding
###Code
def encode(model, batch):
# print("jitting encode function")
_, indices = model.encode(batch)
return indices
def superbatch_generator(dataloader, num_tpus):
iter_loader = iter(dataloader)
for batch in iter_loader:
superbatch = [batch.squeeze(1)]
try:
for b in range(num_tpus-1):
batch = next(iter_loader)
if batch is None:
break
# Skip incomplete last batch
if batch.shape[0] == dataloader.batch_size:
superbatch.append(batch.squeeze(1))
except StopIteration:
pass
superbatch = torch.stack(superbatch, axis=0)
yield superbatch
import os
def encode_captioned_dataset(dataset, output_tsv, batch_size=32, num_workers=16):
if os.path.isfile(output_tsv):
print(f"Destination file {output_tsv} already exists, please move away.")
return
num_tpus = 8
dataloader = DataLoader(dataset, batch_size=batch_size, num_workers=num_workers)
superbatches = superbatch_generator(dataloader, num_tpus=num_tpus)
p_encoder = pmap(lambda batch: encode(model, batch))
# We save each superbatch to avoid reallocation of buffers as we process them.
# We keep the file open to prevent excessive file seeks.
with open(output_tsv, "w") as file:
iterations = len(dataset) // (batch_size * num_tpus)
for n in tqdm(range(iterations)):
superbatch = next(superbatches)
encoded = p_encoder(superbatch.numpy())
encoded = encoded.reshape(-1, encoded.shape[-1])
# Extract fields from the dataset internal `captions` property, and save to disk
start_index = n * batch_size * num_tpus
end_index = (n+1) * batch_size * num_tpus
paths = dataset.captions["image_file"][start_index:end_index].values
captions = dataset.captions["caption"][start_index:end_index].values
encoded_as_string = list(map(lambda item: np.array2string(item, separator=',', max_line_width=50000, formatter={'int':lambda x: str(x)}), encoded))
batch_df = pd.DataFrame.from_dict({"image_file": paths, "caption": captions, "encoding": encoded_as_string})
batch_df.to_csv(file, sep='\t', header=(n==0), index=None)
encode_captioned_dataset(dataset, cc12m_output, batch_size=64, num_workers=16)
###Output
4%|██▋ | 621/16781 [07:09<3:02:46, 1.47it/s] |
.ipynb_checkpoints/sql_for_data_analysis3-checkpoint.ipynb | ###Markdown
**SQL AGGREGATIONS** We connect to MySQL server and workbench and make analysis with the parch-and-posey database. This course is the practicals of the course **SQL for Data Analysis** at Udacity.
###Code
# Install mySQL connector
!pip install mysql-connector-python
# we import some required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pprint import pprint
import time
print('Done!')
###Output
_____no_output_____
###Markdown
**Next, we create a connection to the parch-and-posey DataBase in MySQL Work-Bench**
###Code
import mysql
from mysql.connector import Error
from getpass import getpass
try:
connection = mysql.connector.connect(host='localhost',
database='parch_and_posey',
user=input('Enter UserName:'),
password=getpass('Enter Password:'))
if connection.is_connected():
db_Info = connection.get_server_info()
print("Connected to MySQL Server version ", db_Info)
cursor = connection.cursor()
cursor.execute("select database();")
record = cursor.fetchone()
print("You're connected to database: ", record)
except Error as e:
print("Error while connecting to MySQL", e)
# Let's see the tables in parch-and-posey DB
# let's run the show tables command
cursor.execute('show tables')
out = cursor.fetchall()
out
###Output
_____no_output_____
###Markdown
Let's see the first 3 rows of each of the tables in the parch-and-posey database. First, we define a method that converts a SELECT query into a DataFrame.
###Code
def query_to_df(query):
st = time.time()
# Assert Every Query ends with a semi-colon
try:
assert query.endswith(';')
except AssertionError:
return 'ERROR: Query Must End with ;'
# so we never have more than 20 rows displayed
pd.set_option('display.max_rows', 20)
df = None
# Process the query
cursor.execute(query)
columns = cursor.description
result = []
for value in cursor.fetchall():
tmp = {}
for (index,column) in enumerate(value):
tmp[columns[index][0]] = [column]
result.append(tmp)
# Create a DataFrame from all results
for ind, data in enumerate(result):
if ind >= 1:
x = pd.DataFrame(data)
df = pd.concat([df, x], ignore_index=True)
else:
df = pd.DataFrame(data)
print(f'Query ran for {time.time()-st} secs!')
return df
# 1. For the accounts table
query = 'SELECT * FROM accounts LIMIT 3;'
query_to_df(query)
# 2. For the orders table
query = 'SELECT * FROM orders LIMIT 3;'
query_to_df(query)
# 3. For the sales_reps table
query = 'SELECT * FROM sales_reps LIMIT 3;'
query_to_df(query)
# 4. For the web_events table
query = 'SELECT * FROM web_events LIMIT 3;'
query_to_df(query)
# 5. For the region table
query = 'SELECT * FROM region LIMIT 3;'
query_to_df(query)
###Output
_____no_output_____
###Markdown
**In essential, row-level data are useful for initial exploratory data analysis, when we're trying to get a feel of the data... But as we search for answers, aggregate-data which are often done along columns, become more useful...** Nulls:NULLs are a datatype that specifies where no data exists in SQL. They are often ignored in our aggregation functions* Notice that NULLs are different than a zero - they are cells where data does not exist.* When identifying NULLs in a WHERE clause, we write IS NULL or IS NOT NULL. We don't use =, because NULL isn't considered a value in SQL. Rather, it is a property of the data.**NULLs - Expert Tip*** There are two common ways in which you are likely to encounter NULLs:* NULLs frequently occur when performing a LEFT or RIGHT JOIN. You saw in the last lesson - when some rows in the left table of a left join are not matched with rows in the right table, those rows will contain some NULL values in the result set.* NULLs can also occur from simply missing data in our database. **COUNT the Number of Rows in each Table**Try your hand at finding the number of rows in each table.
###Code
for table in ['orders','accounts','web_events','region','sales_reps']:
query = f'SELECT COUNT(*) AS row_count FROM {table};'
ans = query_to_df(query)
print(f'Table {table}:')
print(ans)
print()
###Output
_____no_output_____
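###Markdown
As a quick illustration of the NULL notes above (an extra check, not one of the course questions): `COUNT(*)` counts every row, while `COUNT(column)` skips NULLs, so comparing the two — or filtering with `IS NULL` — is a fast way to spot missing data. Here we check the `primary_poc` column of `accounts`; in this dataset it may well contain no NULLs at all.
###Code
query_to_df(
    'SELECT COUNT(*) total_rows, COUNT(primary_poc) non_null_poc, \
    COUNT(*) - COUNT(primary_poc) null_poc FROM accounts;'
)
###Output
_____no_output_____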
###Markdown
COUNT:* Note that unlike other aggregations, `COUNT` can be used in columns of Non-Numerical values. Same too for `MIN` and `MAX` clauses.* Notice that `COUNT` does not consider rows that have `NULL` values. Therefore, this can be useful for quickly identifying which rows have missing data. SUM:* Unlike `COUNT`, you can only use `SUM` on numeric columns. However, `SUM` will ignore NULL values, as do the other aggregation functions you will see in the upcoming lessons. Aggregation Reminder:An important thing to remember: aggregators only aggregate vertically - the values of a column. If you want to perform a calculation across rows, you would do this with simple arithmetic. Aggregation Questionfind the solution for each of the following questions. If you get stuck or want to check your answers, you can find the answers at the top of the next concept. Q1: Find the total amount of poster_qty paper ordered in the orders table.
###Code
query = 'SELECT SUM(poster_qty) FROM orders;'
query_to_df(query)
###Output
_____no_output_____
###Markdown
Q2: Find the total amount of standard_qty paper ordered in the orders table.
###Code
query = 'SELECT SUM(standard_qty) FROM orders;'
query_to_df(query)
###Output
_____no_output_____
###Markdown
Q4. Find the total dollar amount of sales using the total_amt_usd in the orders table.
###Code
query_to_df('SELECT SUM(total_amt_usd) FROM orders;')
###Output
_____no_output_____
###Markdown
Q5. Find the total amount spent on standard_amt_usd and gloss_amt_usd paper for each order in the orders table. This should give a dollar amount for each order in the table.
###Code
query_to_df(
'SELECT id, (standard_amt_usd + gloss_amt_usd) tot_amt_usd FROM orders;'
)
###Output
_____no_output_____
###Markdown
Q6. Find the standard_amt_usd per unit of standard_qty paper. Your solution should use both an aggregation and a mathematical operator.
###Code
query_to_df(
'SELECT (SUM(standard_amt_usd) / SUM(standard_qty)) \
standard_unit_usd FROM orders;'
)
###Output
_____no_output_____
###Markdown
Min and MaxNotice that `MIN` and `MAX` are aggregators that again ignore `NULL` values. Expert TipFunctionally, MIN and MAX are similar to COUNT in that they can be used on non-numerical columns. Depending on the column type, MIN will return the lowest number, earliest date, or non-numerical value as early in the alphabet as possible. As you might suspect, MAX does the opposite—it returns the highest number, the latest date, or the non-numerical value closest alphabetically to “Z.” AVG:Similar to other software `AVG` returns the mean of the data - that is the sum of all of the values in the column divided by the number of values in a column. This aggregate function again ignores the `NULL` values in both the numerator and the denominator.If you want to count NULLs as zero, you will need to use SUM and COUNT. However, this is probably not a good idea if the NULL values truly just represent unknown values for a cell. MEDIAN - Expert TipOne quick note that a median might be a more appropriate measure of center for this data, but finding the median happens to be a pretty difficult thing to get using SQL alone — so difficult that finding a median is occasionally asked as an interview question. Questions: MIN, MAX, & AVERAGEAnswer the following questions. 1. When was the earliest order ever placed? You only need to return the date.
###Code
query_to_df(
'SELECT MIN(occurred_at) earliest_order FROM orders;'
)
###Output
_____no_output_____
###Markdown
2. Try performing the same query as in question 1 without using an aggregation function.
###Code
query_to_df(
'SELECT occurred_at earliest_order FROM orders ORDER BY earliest_order LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
3. When did the most recent (latest) web_event occur?
###Code
query_to_df(
'SELECT MAX(occurred_at) latest_event FROM web_events;'
)
###Output
_____no_output_____
###Markdown
4. Try to perform the result of the previous query without using an aggregation function.
###Code
query_to_df(
'SELECT occurred_at FROM web_events ORDER BY occurred_at DESC LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
5. Find the mean (AVERAGE) amount spent per order on each paper type, as well as the mean amount of each paper type purchased per order. Your final answer should have 6 values - one for each paper type for the average number of sales, as well as the average amount.
###Code
query_to_df(
    'SELECT AVG(standard_amt_usd) avg_standard_usd, \
    AVG(standard_qty) avg_standard_qty, \
    AVG(gloss_amt_usd) avg_gloss_usd, \
    AVG(gloss_qty) avg_gloss_qty, \
    AVG(poster_amt_usd) avg_poster_usd, \
    AVG(poster_qty) avg_poster_qty \
    FROM orders;'
)
###Output
_____no_output_____
###Markdown
6: Via the video, you might be interested in how to calculate the MEDIAN. Though this is more advanced than what we have covered so far try finding - what is the MEDIAN total_usd spent on all orders?
###Code
query_to_df(
'SELECT * FROM \
(SELECT total_amt_usd FROM orders ORDER BY total_amt_usd LIMIT 3457) \
AS tot_amt ORDER BY total_amt_usd DESC LIMIT 2;'
)
###Output
_____no_output_____
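###Markdown
Since the subquery above returns the two middle rows (the hard-coded `LIMIT 3457` assumes the row count of this particular orders table), the median itself is just their average. A small sketch that wraps the previous query to return a single number:
###Code
query_to_df(
    'SELECT AVG(total_amt_usd) median_total_usd FROM \
    (SELECT * FROM \
    (SELECT total_amt_usd FROM orders ORDER BY total_amt_usd LIMIT 3457) \
    AS tot_amt ORDER BY total_amt_usd DESC LIMIT 2) AS middle_two;'
)
###Output
_____no_output_____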
###Markdown
GROUP BY:* `GROUP BY` can be used to aggregate data within subsets of the data. For example, grouping for different accounts, different regions, or different sales representatives.* Any column in the `SELECT` statement that is not within an aggregator must be in the `GROUP BY` clause.* The `GROUP BY` always goes between `WHERE` and `ORDER BY`.* `ORDER BY` works like SORT in spreadsheet software. GROUP BY - Expert Tip:SQL evaluates the aggregations before the `LIMIT` clause. If you don’t `group by` any columns, you’ll get a 1-row result—no problem there. If you `group by` a column with enough unique values that it exceeds the `LIMIT` number, the aggregates will be calculated, and then some rows will simply be omitted from the results.This is actually a nice way to do things because you know you’re going to get the correct aggregates. If SQL cuts the table down to 100 rows, then performed the aggregations, your results would be substantially different. So the default style of `Group by` before `LIMIT` which usally comes last is ok. GROUP BY QUIZ:Now that we've been introduced to `JOINs`, `GROUP BY`, and aggregate functions, the real power of SQL starts to come to life. Try some of the below to put your skills to the test!One part that can be difficult to recognize is when it might be easiest to use an aggregate or one of the other SQL functionalities. Try some of the below to see if you can differentiate to find the easiest solution. Q1Which account (by name) placed the earliest order? Your solution should have the account name and the date of the order.
###Code
query_to_df(
'SELECT a.name acct_name, o.occurred_at date from accounts a JOIN \
orders o ON a.id = o.account_id ORDER BY date LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
Q2Find the total sales in usd for each account. You should include two columns - the total sales for each company's orders in usd and the company name.
###Code
query_to_df(
'SELECT SUM(o.total_amt_usd) total_sales_usd, a.name acct_name FROM orders o \
JOIN accounts a ON o.account_id = a.id GROUP BY acct_name;'
)
###Output
_____no_output_____
###Markdown
Q3Via what channel did the most recent (latest) web_event occur, which account was associated with this web_event? Your query should return only three values - the date, channel, and account name.
###Code
query_to_df(
'SELECT w.occurred_at date, w.channel channel, a.name acct_name FROM \
web_events w JOIN accounts a ON w.account_id = a.id ORDER BY date DESC LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
Q4Find the total number of times each type of channel from the web_events was used. Your final table should have two columns - the channel and the number of times the channel was used.
###Code
query_to_df(
'SELECT w.channel channel, COUNT(w.channel) count FROM web_events w GROUP BY \
channel;'
)
# Aggregating with DISTINCT...
query_to_df(
'SELECT DISTINCT w.channel channel, COUNT(w.channel) count FROM web_events w \
GROUP BY channel;'
)
###Output
_____no_output_____
###Markdown
Q5Who was the primary contact associated with the earliest web_event?
###Code
query_to_df(
'SELECT a.primary_poc FROM accounts a JOIN web_events w ON a.id = \
w.account_id ORDER BY w.occurred_at LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
Q6What was the smallest order placed by each account in terms of total usd. Provide only two columns - the account name and the total usd. Order from smallest dollar amounts to largest.
###Code
query_to_df(
'SELECT a.name acct_name, MIN(o.total_amt_usd) min_order_usd FROM accounts \
a JOIN orders o ON a.id = o.account_id GROUP BY acct_name ORDER BY \
min_order_usd;'
)
###Output
_____no_output_____
###Markdown
Q7Find the number of sales reps in each region. Your final table should have two columns - the region and the number of sales_reps. Order from fewest reps to most reps.
###Code
query_to_df(
'SELECT r.name region, COUNT(s.name) sales_reps_count FROM region r JOIN \
sales_reps s ON r.id = s.region_id GROUP BY region ORDER BY sales_reps_count;'
)
###Output
_____no_output_____
###Markdown
I need to reconfirm the distinct channels in web_evnts again...
###Code
query_to_df(
'SELECT DISTINCT(w.channel) distinct_channels FROM web_events w ORDER BY \
distinct_channels;'
)
###Output
_____no_output_____
###Markdown
**GROUP BY PART 2*** We can `GROUP BY` multiple columns at once. This is often useful to aggregate across a number of different segments.* The order of columns listed in the `ORDER BY` clause does make a difference. You are ordering the columns from left to right. But it makes no difference in `GROUP BY` Clause**GROUP BY - Expert Tips*** The order of column names in your `GROUP BY` clause doesn’t matter—the results will be the same regardless. If we run the same query and reverse the order in the `GROUP BY` clause, you can see we get the same results.* As with `ORDER BY`, we can substitute numbers for column names in the `GROUP BY` clause. It’s generally recommended to do this only when you’re grouping many columns, or if something else is causing the text in the `GROUP BY` clause to be excessively long.* A reminder here that any column that is not within an aggregation must show up in your `GROUP BY` statement. If you forget, you will likely get an error. However, in the off chance that your query does work, you might not like the results! GROUP BY Part II Q1For each account, determine the average amount of each type of paper they purchased across their orders. Your result should have four columns - one for the account name and one for the average quantity purchased for each of the paper types for each account.
###Code
query_to_df(
'SELECT a.name acct_name, AVG(o.standard_qty) ave_standard_qty, AVG(o.poster_qty) \
ave_poster_qty, AVG(o.gloss_qty) ave_gloss_qty FROM accounts a JOIN orders o ON a.id \
= o.account_id GROUP BY acct_name;'
)
###Output
_____no_output_____
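###Markdown
As a side note on the expert tips above, the same query can be written with an ordinal position in the `GROUP BY` clause — a sketch that should return the same result as the previous cell:
###Code
query_to_df(
    'SELECT a.name acct_name, AVG(o.standard_qty) ave_standard_qty, AVG(o.poster_qty) \
    ave_poster_qty, AVG(o.gloss_qty) ave_gloss_qty FROM accounts a JOIN orders o ON a.id \
    = o.account_id GROUP BY 1;'
)
###Output
_____no_output_____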
###Markdown
Q2For each account, determine the average amount spent per order on each paper type. Your result should have four columns - one for the account name and one for the average amount spent on each paper type.
###Code
query_to_df(
'SELECT a.name acct_name, AVG(o.standard_amt_usd) ave_standard_usd, AVG(o.poster_amt_usd) \
ave_poster_usd, AVG(o.gloss_amt_usd) ave_gloss_usd FROM accounts a JOIN orders o ON a.id \
= o.account_id GROUP BY acct_name;'
)
###Output
_____no_output_____
###Markdown
Q3Determine the number of times a particular channel was used in the web_events table for each sales rep. Your final table should have three columns - the name of the sales rep, the channel, and the number of occurrences. Order your table with the highest number of occurrences first.
###Code
query_to_df(
'SELECT s.name sales_rep, w.channel channels, COUNT(w.channel) count FROM \
sales_reps s JOIN accounts a ON s.id = a.sales_rep_id JOIN web_events w ON \
w.account_id = a.id GROUP BY sales_rep, channels ORDER BY sales_rep, count DESC;'
)
# Aggregating with DISTINCT
query_to_df(
'SELECT DISTINCT s.name sales_rep, w.channel channels, COUNT(w.channel) count FROM \
sales_reps s JOIN accounts a ON s.id = a.sales_rep_id JOIN web_events w ON \
w.account_id = a.id GROUP BY sales_rep, channels ORDER BY sales_rep, count DESC;'
)
###Output
_____no_output_____
###Markdown
Q4Determine the number of times a particular channel was used in the web_events table for each region. Your final table should have three columns - the region name, the channel, and the number of occurrences. Order your table with the highest number of occurrences first.
###Code
query_to_df(
'SELECT r.name region, w.channel channels, COUNT(w.channel) count FROM \
region r JOIN sales_reps s ON r.id = s.region_id JOIN accounts a ON s.id = \
a.sales_rep_id JOIN web_events w ON w.account_id = a.id GROUP BY region, \
channels ORDER BY region, count DESC;'
)
###Output
_____no_output_____
###Markdown
**Distinct** * `DISTINCT` is always used in `SELECT` statements, and it provides the unique rows for all columns written in the `SELECT` statement. Therefore, you only use `DISTINCT` once in any particular `SELECT` statement. * You could write: ```SELECT DISTINCT column1, column2, column3 FROM table1;``` which would return the unique (or DISTINCT) rows across all three columns. * You could not write: ```SELECT DISTINCT column1, DISTINCT column2, DISTINCT column3 FROM table1;``` * You can think of DISTINCT the same way you might think of the statement "unique". **DISTINCT - Expert Tip** It’s worth noting that using `DISTINCT`, particularly in aggregations, can slow your queries down quite a bit. Q1 Distinct: Use DISTINCT to test if there are any accounts associated with more than one region.
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, COUNT(r.name) count FROM \
accounts a JOIN sales_reps s ON a.sales_rep_id = s.id JOIN region r on \
s.region_id = r.id GROUP BY acct_name ORDER BY count DESC;'
)
###Output
_____no_output_____
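###Markdown
Related to the expert tip above, `DISTINCT` can also be used inside an aggregate. A quick sketch (not one of the course questions) counting how many distinct channels each account has used:
###Code
query_to_df(
    'SELECT a.name acct_name, COUNT(DISTINCT w.channel) distinct_channels FROM \
    accounts a JOIN web_events w ON a.id = w.account_id GROUP BY acct_name \
    ORDER BY distinct_channels DESC LIMIT 10;'
)
###Output
_____no_output_____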
###Markdown
Q2Have any sales reps worked on more than one account? Answer using Distinct
###Code
query_to_df(
'SELECT DISTINCT s.name sales_rep, COUNT(a.name) count \
FROM sales_reps s JOIN accounts a on s.id = a.sales_rep_id GROUP BY sales_rep \
ORDER BY count DESC;'
)
###Output
_____no_output_____
###Markdown
**Having** **HAVING - Expert Tip**: HAVING is the “clean” way to filter a query that has been aggregated, but this is also commonly done using a subquery. Essentially, any time you want to perform a `WHERE` on an element of your query that was created by an aggregate, you need to use `HAVING` instead. **WHERE vs HAVING** 1. `WHERE` subsets the returned data based on a logical condition. 2. `WHERE` appears after the `FROM`, `JOIN` and `ON` clauses but before the `GROUP BY`. 3. `HAVING` appears after the `GROUP BY` clause but before the `ORDER BY`. 4. `HAVING` is like `WHERE` but it works on logical statements involving aggregations. Q: How many of the sales reps have more than 5 accounts that they manage?
###Code
query_to_df(
'SELECT COUNT(*) num_reps FROM\
(SELECT DISTINCT s.name sales_rep, COUNT(a.name) count FROM sales_reps s JOIN \
accounts a on s.id = a.sales_rep_id GROUP BY sales_rep HAVING count > 5 \
ORDER BY count) AS t1;'
)
###Output
_____no_output_____
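###Markdown
To make the WHERE vs HAVING distinction above concrete: the aggregate filter (`count > 5`) has to live in `HAVING`, while any row-level filter goes in `WHERE` before the `GROUP BY`. A small sketch (a hypothetical variation on the previous query) that keeps only sales reps whose names start with "S" and who manage more than 5 accounts:
###Code
query_to_df(
    'SELECT s.name sales_rep, COUNT(a.name) count FROM sales_reps s JOIN \
    accounts a ON s.id = a.sales_rep_id WHERE s.name LIKE "S%" \
    GROUP BY sales_rep HAVING count > 5 ORDER BY count DESC;'
)
###Output
_____no_output_____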
###Markdown
QHow many accounts have more than 20 orders?
###Code
query_to_df(
'SELECT COUNT(*) num_accts FROM \
(SELECT DISTINCT a.name acct_name, COUNT(o.account_id) orders FROM accounts a JOIN \
orders o ON a.id = o.account_id GROUP BY acct_name HAVING orders > 20 \
ORDER BY orders) AS t1;'
)
###Output
_____no_output_____
###Markdown
QWhich account has the most orders?
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, COUNT(o.account_id) orders FROM accounts a \
JOIN orders o ON a.id = o.account_id GROUP BY acct_name ORDER BY orders DESC\
LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
QHow many accounts spent more than 30,000 usd total across all orders?
###Code
query_to_df(
'SELECT COUNT(*) total_accts_over_30k FROM \
(SELECT DISTINCT a.name acct_name, SUM(o.total_amt_usd) sum_total FROM accounts \
a JOIN orders o on a.id=o.account_id GROUP BY acct_name HAVING sum_total > \
30000 ORDER BY 2) AS t1;'
)
###Output
_____no_output_____
###Markdown
QWhich accounts spent less than 1,000 usd total across all orders?
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, SUM(o.total_amt_usd) total_spent FROM \
accounts a JOIN orders o ON a.id=o.account_id GROUP BY acct_name HAVING \
total_spent < 1000 ORDER BY total_spent DESC;'
)
###Output
_____no_output_____
###Markdown
QWhich account has spent the most with us?
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, SUM(o.total_amt_usd) max_total_spent FROM \
accounts a JOIN orders o ON a.id=o.account_id GROUP BY acct_name ORDER BY \
max_total_spent DESC LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
QWhich account has spent the least with us?
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, SUM(o.total_amt_usd) min_total_spent FROM \
accounts a JOIN orders o ON a.id=o.account_id GROUP BY acct_name ORDER BY \
min_total_spent LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
QWhich accounts used facebook as a channel to contact customers more than 6 times?
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, w.channel channels, COUNT(w.channel) count \
FROM accounts a JOIN web_events w ON a.id=w.account_id WHERE w.channel LIKE \
"%facebook%" GROUP BY acct_name, channels HAVING count > 6 ORDER BY count;'
)
# Query can be written with only HAVING like so...
query_to_df(
'SELECT a.id, a.name, w.channel, COUNT(*) use_of_channel FROM accounts a \
JOIN web_events w ON a.id = w.account_id GROUP BY a.id, a.name, w.channel \
HAVING COUNT(*) > 6 AND w.channel LIKE "%facebook%" ORDER BY use_of_channel;'
)
###Output
_____no_output_____
###Markdown
QWhich account used facebook most as a channel?
###Code
query_to_df(
'SELECT DISTINCT a.name acct_name, w.channel channels, COUNT(w.channel) count \
FROM accounts a JOIN web_events w ON a.id=w.account_id WHERE w.channel LIKE \
"%facebook%" GROUP BY 1, 2 ORDER BY 3 DESC LIMIT 1;'
)
###Output
_____no_output_____
###Markdown
QWhich channel was most frequently used by most accounts?
###Code
query_to_df(
'SELECT a.name acct_name, w.channel channels, COUNT(w.channel) count \
FROM accounts a JOIN web_events w ON a.id=w.account_id GROUP BY acct_name, \
channels ORDER BY count DESC LIMIT 10;'
)
# End the connection after running notebook
if connection.is_connected():
cursor.close()
connection.close()
print(f'Closing MySQL Connection to {record} Database')
###Output
_____no_output_____ |
docs/notebooks/thomson.ipynb | ###Markdown
Thomson Scattering: Spectral Density [thomson]: ../diagnostics/thomson.rst[spectral-density]: ../api/plasmapy.diagnostics.thomson.spectral_density.rstspectral-density[sheffield]: https://www.sciencedirect.com/book/9780123748775/plasma-scattering-of-electromagnetic-radiationThe [thomson.spectral_density][spectral-density] function calculates the [spectral density function S(k,w)][sheffield], which is one of several terms that determine the scattered power spectrum for the Thomson scattering of a probe laser beam by a plasma. In particular, this function calculates $S(k,w)$ for the case of a plasma consisting of one or more ion species and a neutralizing electron fluid under the assumption that all of the ion species and the electron fluid have Maxwellian velocity distribution functions. In this regime, the spectral density is given by the equation:\begin{equation}S(k,\omega) = \frac{2\pi}{k} \bigg |1 - \frac{\chi_e}{\epsilon} \bigg |^2 f_{e0}\bigg ( \frac{\omega}{k} \bigg ) + \sum_i \frac{2\pi Z_i}{k} \bigg | \frac{\chi_e}{\epsilon} \bigg |^2 f_{i0, i} \bigg ( \frac{\omega}{k} \bigg )\end{equation}where $\chi_e$ is the electron component susceptibility of the plasma and $\epsilon = 1 + \chi_e + \sum_i \chi_i$ is the total plasma dielectric function (with $\chi_i$ being the ion component of the susceptibility), $Z_i$ is the charge of each ion, $k$ is the scattering wavenumber, $\omega$ is the scattering frequency, and the functions $f_{e0}$ and $f_{i0,i}$ are the Maxwellian velocity distributions for the electrons and ion species respectively.Thomson scattering can be either non-collective (the scattered spectrum is a linear sum of the light scattered by individual particles) or collective (the scattered spectrum is dominated by scattering off of collective plasma waves). The [thomson.spectral_density][spectral-density] function can be used in both cases. These regimes are delineated by the dimensionless constant $\alpha$:\begin{equation}\alpha = \frac{1}{k \lambda_{De}}\end{equation}where $\lambda_{De}$ is the Debye length. $\alpha > 1$ corresponds to collective scattering, while $\alpha < 1$ corresponds to non-collective scattering. Depending on which of these regimes applies, fitting the scattered spectrum can provide the electron (and sometimes ion) density and temperature. Doppler shifting of the spectrum can also provide a measurement of the drift velocity of each plasma species.For a detailed explanation of the underlying physics (and derivations of these expressions), see ["Plasma Scattering of Electromagnetic Radiation" by Sheffield et al.][sheffield]
###Code
%matplotlib inline
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
from plasmapy.diagnostics import thomson
###Output
_____no_output_____
###Markdown
Construct parameters that define the Thomson diagnostic setup, the probing beam and scattering collection. These parameters will be used for all examples.
###Code
# The probe wavelength can in theory be anything, but in practice harmonics of the 1064 nm Nd:YAG fundamental
# are typically used (532 nm corresponds to a frequency-doubled probe beam from such a laser).
probe_wavelength = 532*u.nm
# Array of wavelengths over which to calculate the spectral distribution
wavelengths = np.arange(probe_wavelength.value-60, probe_wavelength.value+60, 0.01)*u.nm
# The scattering geometry is defined by unit vectors for the orientation of the probe laser beam (probe_n) and
# the path from the scattering volume (where the measurement is made) to the detector (scatter_n).
# These can be setup for any experimental geometry.
probe_vec = np.array([1, 0, 0])
scattering_angle = np.deg2rad(63)
scatter_vec = np.array([np.cos(scattering_angle), np.sin(scattering_angle), 0])
###Output
_____no_output_____
###Markdown
In order to calculate the scattered spectrum, we must also include some information about the plasma. For this plot we'll allow the ``fract``, ``ion_species``, ``fluid_vel``, and ``ion_vel`` keywords to keep their default values, describing a single-species H+ plasma at rest in the laboratory frame.
###Code
ne = 2e17*u.cm**-3
Te = 12*u.eV
Ti = 10*u.eV
alpha, Skw = thomson.spectral_density(wavelengths, probe_wavelength,
ne, Te, Ti, probe_vec=probe_vec,
scatter_vec=scatter_vec)
fig, ax = plt.subplots()
ax.plot(wavelengths, Skw, lw=2)
ax.set_xlim(probe_wavelength.value-10, probe_wavelength.value+10)
ax.set_ylim(0, 1e-13)
ax.set_xlabel('$\lambda$ (nm)')
ax.set_ylabel('S(k,w)')
ax.set_title('Thomson Scattering Spectral Density')
###Output
_____no_output_____
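###Markdown
As a rough cross-check of the scattering regime, we can estimate $\alpha = 1/(k \lambda_{De})$ directly from the definitions above. This is only a sketch: it uses the approximation $k \approx 2 k_{probe} \sin(\theta/2)$ for nearly elastic scattering and computes the Debye length from astropy constants, so it should agree with the `alpha` returned by `spectral_density` only to within that approximation.
###Code
from astropy.constants import e, eps0

# Electron Debye length (Te is already expressed as an energy in eV)
lambda_De = np.sqrt(eps0 * Te / (ne * e.si ** 2)).to(u.m)
# Scattering wavenumber magnitude for nearly elastic scattering: |k| ~ 2 k_probe sin(theta/2)
k = (4 * np.pi / probe_wavelength * np.sin(scattering_angle / 2)).to(1 / u.m)
alpha_estimate = (1 / (k * lambda_De)).to(u.dimensionless_unscaled)
print(alpha_estimate, alpha)
###Output
_____no_output_____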
###Markdown
Example Cases in Different Scattering Regimes We will now consider several example cases in different scattering regimes. In order to facilitate this, we'll set up each example as a dictionary of plasma parameters: A single-species, stationary hydrogen plasma with a density and temperature that result in a scattering spectrum dominated by scattering off of single electrons.
###Code
non_collective = {
'name': 'Non-Collective Regime',
'ne': 5e15*u.cm**-3,
'Te': 40*u.eV,
'Ti': np.array([10])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in weakly collective scattering (scattering parameter $\alpha$ approaching 1)
###Code
weakly_collective = {
'name': 'Weakly Collective Regime',
'ne': 2e17*u.cm**-3,
'Te': 20*u.eV,
'Ti': np.array([10])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in a spectrum dominated by multi-particle scattering, including scattering off of ions.
###Code
collective = {
'name': 'Collective Regime',
'ne': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': np.array([4])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example above, except that now the electron fluid has a substantial drift velocity parallel to the probe laser and the ions have a drift (relative to the electrons) at an angle.
###Code
drifts = {
'name': 'Drift Velocities',
'ne': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': np.array([10])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([700, 0, 0])*u.km/u.s,
'ion_vel': np.array([[-600, -100, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example, except that now the plasma consists of 25% He+1 and 75% C+5
###Code
two_species = {
'name': 'Two Ion Species',
'ne': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': np.array([10, 50])*u.eV,
'fract': np.array([.25, .75]),
'ion_species': ['He-4 1+', 'C-12 5+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0],[0, 0, 0]])*u.km/u.s,
}
examples = [non_collective, weakly_collective, collective, drifts, two_species]
###Output
_____no_output_____
###Markdown
For each example, plot the spectral distribution function over a large range to show the broad electron scattering feature (top row) and a narrow range around the probe wavelength to show the ion scattering feature (bottom row)
###Code
fig, ax = plt.subplots(ncols=len(examples), nrows=2, figsize=[25,10])
fig.subplots_adjust( wspace=0.4, hspace=0.4)
lbls = 'abcdefg'
for i, x in enumerate(examples):
alpha, Skw = thomson.spectral_density(wavelengths, probe_wavelength,
x['ne'], x['Te'], x['Ti'], fract=x['fract'],
ion_species=x['ion_species'], fluid_vel=x['fluid_vel'],
probe_vec=probe_vec, scatter_vec=scatter_vec)
ax[0][i].axvline(x=probe_wavelength.value, color='red') # Mark the probe wavelength
ax[0][i].plot(wavelengths, Skw)
ax[0][i].set_xlim(probe_wavelength.value-15, probe_wavelength.value+15)
ax[0][i].set_ylim(0, 1e-13)
ax[0][i].set_xlabel('$\lambda$ (nm)')
ax[0][i].set_title( lbls[i] + ') ' + x['name'] + '\n$\\alpha$={:.4f}'.format(alpha))
ax[1][i].axvline(x=probe_wavelength.value, color='red') # Mark the probe wavelength
ax[1][i].plot(wavelengths, Skw)
ax[1][i].set_xlim(probe_wavelength.value-1, probe_wavelength.value+1)
ax[1][i].set_ylim(0, 1.1*np.max(Skw.value))
ax[1][i].set_xlabel('$\lambda$ (nm)')
###Output
_____no_output_____
###Markdown
Thomson Scattering: Spectral Density [thomson]: ../diagnostics/thomson.rst[spectral-density]: ../api/plasmapy.diagnostics.thomson.spectral_density.rstspectral-density[sheffield]: https://www.sciencedirect.com/book/9780123748775/plasma-scattering-of-electromagnetic-radiationThe [thomson.spectral_density][spectral-density] function calculates the [spectral density function S(k,w)][sheffield], which is one of several terms that determine the scattered power spectrum for the Thomson scattering of a probe laser beam by a plasma. In particular, this function calculates $S(k,w)$ for the case of a plasma consisting of one or more ion species and electron populations under the assumption that all of the ion species and the electron fluid have Maxwellian velocity distribution functions and that the combined plasma is quasi-neutral. In this regime, the spectral density is given by the equation:\begin{equation}S(k,\omega) = \sum_e \frac{2\pi}{k} \bigg |1 - \frac{\chi_e}{\epsilon} \bigg |^2 f_{e0,e}\bigg ( \frac{\omega}{k} \bigg ) + \sum_i \frac{2\pi Z_i}{k} \bigg | \frac{\chi_e}{\epsilon} \bigg |^2 f_{i0, i} \bigg ( \frac{\omega}{k} \bigg )\end{equation}where $\chi_e$ is the electron component susceptibility of the plasma and $\epsilon = 1 + \sum_e \chi_e + \sum_i \chi_i$ is the total plasma dielectric function (with $\chi_i$ being the ion component of the susceptibility), $Z_i$ is the charge of each ion, $k$ is the scattering wavenumber, $\omega$ is the scattering frequency, and the functions $f_{e0,e}$ and $f_{i0,i}$ are the Maxwellian velocity distributions for the electrons and ion species respectively.Thomson scattering can be either non-collective (the scattered spectrum is a linear sum of the light scattered by individual particles) or collective (the scattered spectrum is dominated by scattering off of collective plasma waves). The [thomson.spectral_density][spectral-density] function can be used in both cases. These regimes are delineated by the dimensionless constant $\alpha$:\begin{equation}\alpha = \frac{1}{k \lambda_{De}}\end{equation}where $\lambda_{De}$ is the Debye length. $\alpha > 1$ corresponds to collective scattering, while $\alpha < 1$ corresponds to non-collective scattering. Depending on which of these regimes applies, fitting the scattered spectrum can provide the electron (and sometimes ion) density and temperature. Doppler shifting of the spectrum can also provide a measurement of the drift velocity of each plasma species.For a detailed explanation of the underlying physics (and derivations of these expressions), see ["Plasma Scattering of Electromagnetic Radiation" by Sheffield et al.][sheffield]
###Code
%matplotlib inline
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
from plasmapy.diagnostics import thomson
###Output
_____no_output_____
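###Markdown
As a quick, hedged illustration of the regime parameter defined above, the next cell estimates $\alpha = 1/(k \lambda_{De})$ directly from a textbook Debye length and the elastic-scattering approximation $k \approx 2 k_{probe} \sin(\theta/2)$. The density and temperature values (and the variable names ending in `_demo`) are assumptions chosen only for this sketch and are not part of the example that follows.
###Code
import astropy.constants as const
import astropy.units as u
import numpy as np

# Illustrative, assumed plasma parameters (not taken from the cells below)
ne_demo = 2e17 * u.cm ** -3
Te_demo = 10 * u.eV  # electron temperature expressed as an energy, k_B * T_e

# Electron Debye length: lambda_De = sqrt(eps0 * k_B T_e / (n_e * e^2))
lambda_De = np.sqrt(const.eps0 * Te_demo / (ne_demo * const.e.si ** 2)).to(u.m)

# Scattering wavenumber, neglecting the small frequency shift: k ~ 2 * k_probe * sin(theta / 2)
k_probe = 2 * np.pi / (532 * u.nm).to(u.m)
k_scatter = 2 * k_probe * np.sin(np.deg2rad(63) / 2)

alpha_demo = (1 / (k_scatter * lambda_De)).decompose()
print(f"lambda_De ~ {lambda_De.value:.3e} m, alpha ~ {alpha_demo.value:.2f}")
###Output
_____no_output_____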
###Markdown
Construct parameters that define the Thomson diagnostic setup, the probing beam and scattering collection. These parameters will be used for all examples.
###Code
# The probe wavelength can in theory be anything, but in practice integer frequency multiples of the Nd:YAG wavelength
# 1064 nm are used (532 corresponds to a frequency-doubled probe beam from such a laser).
probe_wavelength = 532 * u.nm
# Array of wavelengths over which to calculate the spectral distribution
wavelengths = (
np.arange(probe_wavelength.value - 60, probe_wavelength.value + 60, 0.01) * u.nm
)
# The scattering geometry is defined by unit vectors for the orientation of the probe laser beam (probe_n) and
# the path from the scattering volume (where the measurement is made) to the detector (scatter_n).
# These can be set up for any experimental geometry.
probe_vec = np.array([1, 0, 0])
scattering_angle = np.deg2rad(63)
scatter_vec = np.array([np.cos(scattering_angle), np.sin(scattering_angle), 0])
###Output
_____no_output_____
###Markdown
In order to calculate the scattered spectrum, we must also include some information about the plasma. For this plot we'll allow the ``ifract``, ``efract``, ``ion_species``, ``electron_vel``, and ``ion_vel`` keywords to keep their default values, describing a single-species H+ plasma at rest in the laboratory frame.
###Code
ne = 2e17 * u.cm ** -3
Te = 12 * u.eV
Ti = 10 * u.eV
alpha, Skw = thomson.spectral_density(
wavelengths,
probe_wavelength,
ne,
Te,
Ti,
probe_vec=probe_vec,
scatter_vec=scatter_vec,
)
fig, ax = plt.subplots()
ax.plot(wavelengths, Skw, lw=2)
ax.set_xlim(probe_wavelength.value - 10, probe_wavelength.value + 10)
ax.set_ylim(0, 1e-13)
ax.set_xlabel("$\lambda$ (nm)")
ax.set_ylabel("S(k,w)")
ax.set_title("Thomson Scattering Spectral Density")
###Output
_____no_output_____
###Markdown
Example Cases in Different Scattering RegimesWe will now consider several example cases in different scattering regimes. In order to facilitate this, we'll set up each example as a dictionary of plasma parameters: A single-species, stationary hydrogen plasma with a density and temperature that results in a scattering spectrum dominated by scattering off of single electrons.
###Code
non_collective = {
"name": "Non-Collective Regime",
"n": 5e15 * u.cm ** -3,
"Te": 40 * u.eV,
"Ti": np.array([10]) * u.eV,
"ion_species": ["H+"],
"electron_vel": np.array([[0, 0, 0]]) * u.km / u.s,
"ion_vel": np.array([[0, 0, 0]]) * u.km / u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in weakly collective scattering (scattering parameter $\alpha$ approaching 1)
###Code
weakly_collective = {
"name": "Weakly Collective Regime",
"n": 2e17 * u.cm ** -3,
"Te": 20 * u.eV,
"Ti": 10 * u.eV,
"ion_species": ["H+"],
"electron_vel": np.array([[0, 0, 0]]) * u.km / u.s,
"ion_vel": np.array([[0, 0, 0]]) * u.km / u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in a spectrum dominated by multi-particle scattering, including scattering off of ions.
###Code
collective = {
"name": "Collective Regime",
"n": 5e17 * u.cm ** -3,
"Te": 10 * u.eV,
"Ti": 4 * u.eV,
"ion_species": ["H+"],
"electron_vel": np.array([[0, 0, 0]]) * u.km / u.s,
"ion_vel": np.array([[0, 0, 0]]) * u.km / u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example above, except that now the electron fluid has a substantial drift velocity parallel to the probe laser and the ions have a drift (relative to the electrons) at an angle.
###Code
drifts = {
"name": "Drift Velocities",
"n": 5e17 * u.cm ** -3,
"Te": 10 * u.eV,
"Ti": 10 * u.eV,
"ion_species": ["H+"],
"electron_vel": np.array([[700, 0, 0]]) * u.km / u.s,
"ion_vel": np.array([[-600, -100, 0]]) * u.km / u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example, except that now the plasma consists of 25% He+1 and 75% C+5, and two electron populations exist with different temperatures.
###Code
two_species = {
"name": "Two Ion and Electron Components",
"n": 5e17 * u.cm ** -3,
"Te": np.array([50, 10]) * u.eV,
"Ti": np.array([10, 50]) * u.eV,
"efract": np.array([0.5, 0.5]),
"ifract": np.array([0.25, 0.75]),
"ion_species": ["He-4 1+", "C-12 5+"],
"electron_vel": np.array([[0, 0, 0], [0, 0, 0]]) * u.km / u.s,
"ion_vel": np.array([[0, 0, 0], [0, 0, 0]]) * u.km / u.s,
}
examples = [non_collective, weakly_collective, collective, drifts, two_species]
###Output
_____no_output_____
###Markdown
For each example, plot the spectral distribution function over a large range to show the broad electron scattering feature (top row) and a narrow range around the probe wavelength to show the ion scattering feature (bottom row)
###Code
fig, ax = plt.subplots(ncols=len(examples), nrows=2, figsize=[25, 10])
fig.subplots_adjust(wspace=0.4, hspace=0.4)
lbls = "abcdefg"
for i, x in enumerate(examples):
alpha, Skw = thomson.spectral_density(
wavelengths,
probe_wavelength,
x["n"],
x["Te"],
x["Ti"],
ifract=x.get("ifract"),
efract=x.get("efract"),
ion_species=x["ion_species"],
electron_vel=x["electron_vel"],
probe_vec=probe_vec,
scatter_vec=scatter_vec,
)
ax[0][i].axvline(x=probe_wavelength.value, color="red") # Mark the probe wavelength
ax[0][i].plot(wavelengths, Skw)
ax[0][i].set_xlim(probe_wavelength.value - 15, probe_wavelength.value + 15)
ax[0][i].set_ylim(0, 1e-13)
ax[0][i].set_xlabel("$\lambda$ (nm)")
ax[0][i].set_title(lbls[i] + ") " + x["name"] + "\n$\\alpha$={:.4f}".format(alpha))
ax[1][i].axvline(x=probe_wavelength.value, color="red") # Mark the probe wavelength
ax[1][i].plot(wavelengths, Skw)
ax[1][i].set_xlim(probe_wavelength.value - 1, probe_wavelength.value + 1)
ax[1][i].set_ylim(0, 1.1 * np.max(Skw.value))
ax[1][i].set_xlabel("$\lambda$ (nm)")
###Output
_____no_output_____
###Markdown
Thomson Scattering: Spectral Density [thomson]: ../diagnostics/thomson.rst[spectral-density]: ../api/plasmapy.diagnostics.thomson.spectral_density.rstspectral-density[sheffield]: https://www.sciencedirect.com/book/9780123748775/plasma-scattering-of-electromagnetic-radiationThe [thomson.spectral_density][spectral-density] function calculates the [spectral density function S(k,w)][sheffield], which is one of several terms that determine the scattered power spectrum for the Thomson scattering of a probe laser beam by a plasma. In particular, this function calculates $S(k,w)$ for the case of a plasma consisting of one or more ion species and a neutralizing electron fluid under the assumption that all of the ion species and the electron fluid have Maxwellian velocity distribution functions. In this regime, the spectral density is given by the equation:\begin{equation}S(k,\omega) = \frac{2\pi}{k} \bigg |1 - \frac{\chi_e}{\epsilon} \bigg |^2 f_{e0}\bigg ( \frac{\omega}{k} \bigg ) + \sum_i \frac{2\pi Z_i}{k} \bigg | \frac{\chi_e}{\epsilon} \bigg |^2 f_{i0, i} \bigg ( \frac{\omega}{k} \bigg )\end{equation}where $\chi_e$ is the electron component susceptibility of the plasma and $\epsilon = 1 + \chi_e + \sum_i \chi_i$ is the total plasma dielectric function (with $\chi_i$ being the ion component of the susceptibility), $Z_i$ is the charge of each ion, $k$ is the scattering wavenumber, $\omega$ is the scattering frequency, and the functions $f_{e0}$ and $f_{i0,i}$ are the Maxwellian velocity distributions for the electrons and ion species respectively.Thomson scattering can be either non-collective (the scattered spectrum is a linear sum of the light scattered by individual particles) or collective (the scattered spectrum is dominated by scattering off of collective plasma waves). The [thomson.spectral_density][spectral-density] function can be used in both cases. These regimes are delineated by the dimensionless constant $\alpha$:\begin{equation}\alpha = \frac{1}{k \lambda_{De}}\end{equation}where $\lambda_{De}$ is the Debye length. $\alpha > 1$ corresponds to collective scattering, while $\alpha < 1$ corresponds to non-collective scattering. Depending on which of these regimes applies, fitting the scattered spectrum can provide the electron (and sometimes ion) density and temperature. Doppler shifting of the spectrum can also provide a measurement of the drift velocity of each plasma species.For a detailed explanation of the underlying physics (and derivations of these expressions), see ["Plasma Scattering of Electromagnetic Radiation" by Sheffield et al.][sheffield]
###Code
%matplotlib inline
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
import warnings
from plasmapy.diagnostics import thomson
from plasmapy.utils.exceptions import ImplicitUnitConversionWarning
###Output
_____no_output_____
###Markdown
Construct parameters that define the Thomson diagnostic setup, the probing beam and scattering collection. These parameters will be used for all examples.
###Code
# The probe wavelength can in theory be anything, but in practice integer frequency multiples of the Nd:YAG wavelength
# 1064 nm are used (532 corresponds to a frequency-doubled probe beam from such a laser).
probe_wavelength = 532*u.nm
# Array of wavelengths over which to calculate the spectral distribution
wavelengths = np.arange(probe_wavelength.value-60, probe_wavelength.value+60, 0.01)*u.nm
# The scattering geometry is defined by unit vectors for the orientation of the probe laser beam (probe_n) and
# the path from the scattering volume (where the measurement is made) to the detector (scatter_n).
# These can be set up for any experimental geometry.
probe_vec = np.array([1, 0, 0])
scattering_angle = np.deg2rad(63)
scatter_vec = np.array([np.cos(scattering_angle), np.sin(scattering_angle), 0])
###Output
_____no_output_____
###Markdown
In order to calculate the scattered spectrum, we must also include some information about the plasma. For this plot we'll allow the ``fract``, ``ion_species``, ``fluid_vel``, and ``ion_vel`` keywords to keep their default values, describing a single-species H+ plasma at rest in the laboratory frame.
###Code
ne = 2e17*u.cm**-3
Te = 12*u.eV
Ti = 10*u.eV
# This warning filter catches an ImplicitUnitConversionWarning that results from specifying
# temperatures in eV instead of Kelvin.
with warnings.catch_warnings():
warnings.simplefilter("ignore", ImplicitUnitConversionWarning)
alpha, Skw = thomson.spectral_density(wavelengths, probe_wavelength,
ne, Te, Ti, probe_vec=probe_vec,
scatter_vec=scatter_vec)
fig, ax = plt.subplots()
ax.plot(wavelengths, Skw, lw=2)
ax.set_xlim(probe_wavelength.value-10, probe_wavelength.value+10)
ax.set_ylim(0, 1e-13)
ax.set_xlabel('$\lambda$ (nm)')
ax.set_ylabel('S(k,w)')
ax.set_title('Thomson Scattering Spectral Density')
###Output
_____no_output_____
###Markdown
Example Cases in Different Scattering RegimesWe will now consider several example cases in different scattering regimes. In order to facilitate this, we'll set up each example as a dictionary of plasma parameters: A single-species, stationary hydrogen plasma with a density and temperature that results in a scattering spectrum dominated by scattering off of single electrons.
###Code
non_collective = {
'name': 'Non-Collective Regime',
'ne': 5e15*u.cm**-3,
'Te': 40*u.eV,
'Ti': np.array([10])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in weakly collective scattering (scattering parameter $\alpha$ approaching 1)
###Code
weakly_collective = {
'name': 'Weakly Collective Regime',
'ne': 2e17*u.cm**-3,
'Te': 20*u.eV,
'Ti': np.array([10])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in a spectrum dominated by multi-particle scattering, including scattering off of ions.
###Code
collective = {
'name': 'Collective Regime',
'ne': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': np.array([4])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example above, except that now the electron fluid has a substantial drift velocity parallel to the probe laser and the ions have a drift (relative to the electrons) at an angle.
###Code
drifts = {
'name': 'Drift Velocities',
'ne': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': np.array([10])*u.eV,
'fract': np.array([1]),
'ion_species': ['H+'],
'fluid_vel': np.array([700, 0, 0])*u.km/u.s,
'ion_vel': np.array([[-600, -100, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example, except that now the plasma consists of 25% He+1 and 75% C+5
###Code
two_species = {
'name': 'Two Ion Species',
'ne': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': np.array([10, 50])*u.eV,
'fract': np.array([.25, .75]),
'ion_species': ['He-4 1+', 'C-12 5+'],
'fluid_vel': np.array([0, 0, 0])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0],[0, 0, 0]])*u.km/u.s,
}
examples = [non_collective, weakly_collective, collective, drifts, two_species]
###Output
_____no_output_____
###Markdown
For each example, plot the spectral distribution function over a large range to show the broad electron scattering feature (top row) and a narrow range around the probe wavelength to show the ion scattering feature (bottom row)
###Code
fig, ax = plt.subplots(ncols=len(examples), nrows=2, figsize=[25,10])
fig.subplots_adjust( wspace=0.4, hspace=0.4)
lbls = 'abcdefg'
for i, x in enumerate(examples):
with warnings.catch_warnings():
warnings.simplefilter("ignore", ImplicitUnitConversionWarning)
alpha, Skw = thomson.spectral_density(wavelengths, probe_wavelength,
x['ne'], x['Te'], x['Ti'], fract=x['fract'],
ion_species=x['ion_species'], fluid_vel=x['fluid_vel'],
probe_vec=probe_vec, scatter_vec=scatter_vec)
ax[0][i].axvline(x=probe_wavelength.value, color='red') # Mark the probe wavelength
ax[0][i].plot(wavelengths, Skw)
ax[0][i].set_xlim(probe_wavelength.value-15, probe_wavelength.value+15)
ax[0][i].set_ylim(0, 1e-13)
ax[0][i].set_xlabel('$\lambda$ (nm)')
ax[0][i].set_title( lbls[i] + ') ' + x['name'] + '\n$\\alpha$={:.4f}'.format(alpha))
ax[1][i].axvline(x=probe_wavelength.value, color='red') # Mark the probe wavelength
ax[1][i].plot(wavelengths, Skw)
ax[1][i].set_xlim(probe_wavelength.value-1, probe_wavelength.value+1)
ax[1][i].set_ylim(0, 1.1*np.max(Skw.value))
ax[1][i].set_xlabel('$\lambda$ (nm)')
###Output
_____no_output_____
###Markdown
Thomson Scattering: Spectral Density [thomson]: ../diagnostics/thomson.rst[spectral-density]: ../api/plasmapy.diagnostics.thomson.spectral_density.rstspectral-density[sheffield]: https://www.sciencedirect.com/book/9780123748775/plasma-scattering-of-electromagnetic-radiationThe [thomson.spectral_density][spectral-density] function calculates the [spectral density function S(k,w)][sheffield], which is one of several terms that determine the scattered power spectrum for the Thomson scattering of a probe laser beam by a plasma. In particular, this function calculates $S(k,w)$ for the case of a plasma consisting of one or more ion species and electron populations under the assumption that all of the ion species and the electron fluid have Maxwellian velocity distribution functions and that the combined plasma is quasi-neutral. In this regime, the spectral density is given by the equation:\begin{equation}S(k,\omega) = \sum_e \frac{2\pi}{k} \bigg |1 - \frac{\chi_e}{\epsilon} \bigg |^2 f_{e0,e}\bigg ( \frac{\omega}{k} \bigg ) + \sum_i \frac{2\pi Z_i}{k} \bigg | \frac{\chi_e}{\epsilon} \bigg |^2 f_{i0, i} \bigg ( \frac{\omega}{k} \bigg )\end{equation}where $\chi_e$ is the electron component susceptibility of the plasma and $\epsilon = 1 + \sum_e \chi_e + \sum_i \chi_i$ is the total plasma dielectric function (with $\chi_i$ being the ion component of the susceptibility), $Z_i$ is the charge of each ion, $k$ is the scattering wavenumber, $\omega$ is the scattering frequency, and the functions $f_{e0,e}$ and $f_{i0,i}$ are the Maxwellian velocity distributions for the electrons and ion species respectively.Thomson scattering can be either non-collective (the scattered spectrum is a linear sum of the light scattered by individual particles) or collective (the scattered spectrum is dominated by scattering off of collective plasma waves). The [thomson.spectral_density][spectral-density] function can be used in both cases. These regimes are delineated by the dimensionless constant $\alpha$:\begin{equation}\alpha = \frac{1}{k \lambda_{De}}\end{equation}where $\lambda_{De}$ is the Debye length. $\alpha > 1$ corresponds to collective scattering, while $\alpha < 1$ corresponds to non-collective scattering. Depending on which of these regimes applies, fitting the scattered spectrum can provide the electron (and sometimes ion) density and temperature. Doppler shifting of the spectrum can also provide a measurement of the drift velocity of each plasma species.For a detailed explanation of the underlying physics (and derivations of these expressions), see ["Plasma Scattering of Electromagnetic Radiation" by Sheffield et al.][sheffield]
###Code
%matplotlib inline
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
from plasmapy.diagnostics import thomson
###Output
_____no_output_____
###Markdown
Construct parameters that define the Thomson diagnostic setup, the probing beam and scattering collection. These parameters will be used for all examples.
###Code
# The probe wavelength can in theory be anything, but in practice integer frequency multiples of the Nd:YAG wavelength
# 1064 nm are used (532 corresponds to a frequency-doubled probe beam from such a laser).
probe_wavelength = 532*u.nm
# Array of wavelengths over which to calculate the spectral distribution
wavelengths = np.arange(probe_wavelength.value-60, probe_wavelength.value+60, 0.01)*u.nm
# The scattering geometry is defined by unit vectors for the orientation of the probe laser beam (probe_n) and
# the path from the scattering volume (where the measurement is made) to the detector (scatter_n).
# These can be set up for any experimental geometry.
probe_vec = np.array([1, 0, 0])
scattering_angle = np.deg2rad(63)
scatter_vec = np.array([np.cos(scattering_angle), np.sin(scattering_angle), 0])
###Output
_____no_output_____
###Markdown
In order to calculate the scattered spectrum, we must also include some information about the plasma. For this plot we'll allow the ``ifract``, ``efract``, ``ion_species``, ``electron_vel``, and ``ion_vel`` keywords to keep their default values, describing a single-species H+ plasma at rest in the laboratory frame.
###Code
ne = 2e17*u.cm**-3
Te = 12*u.eV
Ti = 10*u.eV
alpha, Skw = thomson.spectral_density(wavelengths, probe_wavelength,
ne, Te, Ti, probe_vec=probe_vec,
scatter_vec=scatter_vec)
fig, ax = plt.subplots()
ax.plot(wavelengths, Skw, lw=2)
ax.set_xlim(probe_wavelength.value-10, probe_wavelength.value+10)
ax.set_ylim(0, 1e-13)
ax.set_xlabel('$\lambda$ (nm)')
ax.set_ylabel('S(k,w)')
ax.set_title('Thomson Scattering Spectral Density')
###Output
_____no_output_____
###Markdown
Example Cases in Different Scattering RegimesWe will now consider several example cases in different scattering regimes. In order to facilitate this, we'll set up each example as a dictionary of plasma parameters: A single-species, stationary hydrogen plasma with a density and temperature that results in a scattering spectrum dominated by scattering off of single electrons.
###Code
non_collective = {
'name': 'Non-Collective Regime',
'n': 5e15*u.cm**-3,
'Te': 40*u.eV,
'Ti': np.array([10])*u.eV,
'ion_species': ['H+'],
'electron_vel': np.array([[0, 0, 0]])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in weakly collective scattering (scattering parameter $\alpha$ approaching 1)
###Code
weakly_collective = {
'name': 'Weakly Collective Regime',
'n': 2e17*u.cm**-3,
'Te': 20*u.eV,
'Ti': 10*u.eV,
'ion_species': ['H+'],
'electron_vel': np.array([[0, 0, 0]])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A single-species, stationary hydrogen plasma with a density and temperature that result in a spectrum dominated by multi-particle scattering, including scattering off of ions.
###Code
collective = {
'name': 'Collective Regime',
'n': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': 4*u.eV,
'ion_species': ['H+'],
'electron_vel': np.array([[0, 0, 0]])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example above, except that now the electron fluid has a substantial drift velocity parallel to the probe laser and the ions have a drift (relative to the electrons) at an angle.
###Code
drifts = {
'name': 'Drift Velocities',
'n': 5e17*u.cm**-3,
'Te': 10*u.eV,
'Ti': 10*u.eV,
'ion_species': ['H+'],
'electron_vel': np.array([[700, 0, 0]])*u.km/u.s,
'ion_vel': np.array([[-600, -100, 0]])*u.km/u.s,
}
###Output
_____no_output_____
###Markdown
A case identical to the collective example, except that now the plasma consists of 25% He+1 and 75% C+5, and two electron populations exist with different temperatures.
###Code
two_species = {
'name': 'Two Ion and Electron Components',
'n': 5e17*u.cm**-3,
'Te': np.array([50, 10])*u.eV,
'Ti': np.array([10, 50])*u.eV,
'efract': np.array([0.5, 0.5]),
'ifract': np.array([.25, .75]),
'ion_species': ['He-4 1+', 'C-12 5+'],
'electron_vel': np.array([[0, 0, 0],[0, 0, 0]])*u.km/u.s,
'ion_vel': np.array([[0, 0, 0],[0, 0, 0]])*u.km/u.s,
}
examples = [non_collective, weakly_collective, collective, drifts, two_species]
###Output
_____no_output_____
###Markdown
For each example, plot the spectral distribution function over a large range to show the broad electron scattering feature (top row) and a narrow range around the probe wavelength to show the ion scattering feature (bottom row)
###Code
fig, ax = plt.subplots(ncols=len(examples), nrows=2, figsize=[25,10])
fig.subplots_adjust( wspace=0.4, hspace=0.4)
lbls = 'abcdefg'
for i, x in enumerate(examples):
alpha, Skw = thomson.spectral_density(wavelengths, probe_wavelength,
x['n'], x['Te'], x['Ti'], ifract=x.get('ifract'), efract=x.get('efract'),
ion_species=x['ion_species'], electron_vel=x['electron_vel'],
probe_vec=probe_vec, scatter_vec=scatter_vec)
ax[0][i].axvline(x=probe_wavelength.value, color='red') # Mark the probe wavelength
ax[0][i].plot(wavelengths, Skw)
ax[0][i].set_xlim(probe_wavelength.value-15, probe_wavelength.value+15)
ax[0][i].set_ylim(0, 1e-13)
ax[0][i].set_xlabel('$\lambda$ (nm)')
ax[0][i].set_title( lbls[i] + ') ' + x['name'] + '\n$\\alpha$={:.4f}'.format(alpha))
ax[1][i].axvline(x=probe_wavelength.value, color='red') # Mark the probe wavelength
ax[1][i].plot(wavelengths, Skw)
ax[1][i].set_xlim(probe_wavelength.value-1, probe_wavelength.value+1)
ax[1][i].set_ylim(0, 1.1*np.max(Skw.value))
ax[1][i].set_xlabel('$\lambda$ (nm)')
###Output
_____no_output_____ |
Notebooks/Spectrum_Normalizations.ipynb | ###Markdown
Spectrum Continuum Normalization Aim: - To perform a Chi^2 comparison between PHOENIX ACES spectra and my CRIRES observations. Problem: - The normalization of the observed spectra - Differences in the continuum normalization affect the chi^2 comparison when using mixed models of two different spectra. Proposed Solution: - equation (1) from [Passegger 2016](https://arxiv.org/pdf/1601.01877.pdf) F_obs = F_obs * (cont_fit model / cont_fit observation) where cont_fit is a linear fit to the spectrum. This takes out any linear trends in the continua and corrects the amplitude of the continuum. In this notebook I outline what I currently do, showing an example.
###Code
import copy
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
%matplotlib inline
#%matplotlib auto
###Output
_____no_output_____
###Markdown
The observations were originally automatically continuum-normalized in the IRAF extraction pipeline. I believe the continuum is no longer quite at 1 here due to the division by the telluric spectra.
###Code
# Observation
obs = fits.getdata("/home/jneal/.handy_spectra/HD211847-1-mixavg-tellcorr_1.fits")
plt.plot(obs["wavelength"], obs["flux"])
plt.hlines(1, 2111, 2124, linestyle="--")
plt.title("CRIRES spectra")
plt.xlabel("Wavelength (nm)")
plt.show()
###Output
_____no_output_____
###Markdown
The two PHOENIX ACES spectra here are the first best guess of the two spectral components.
###Code
# Models
wav_model = fits.getdata("/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/WAVE_PHOENIX-ACES-AGSS-COND-2011.fits")
wav_model /= 10 # nm
host = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte05700-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
old_companion = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte02600-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
companion = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte02300-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits"
host_f = fits.getdata(host)
comp_f = fits.getdata(companion)
plt.plot(wav_model, host_f, label="Host")
plt.plot(wav_model, comp_f, label="Companion")
plt.title("Phoenix spectra")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
mask = (2000 < wav_model) & (wav_model < 2200)
wav_model = wav_model[mask]
host_f = host_f[mask]
comp_f = comp_f[mask]
plt.plot(wav_model, host_f, label="Host")
plt.plot(wav_model, comp_f, label="Companion")
plt.title("Phoenix spectra")
plt.legend()
plt.xlabel("Wavelength (nm)")
plt.show()
###Output
_____no_output_____
###Markdown
Current NormalizationI then continuum normalize the Phoenix spectrum locally around my observations by fitting an **exponential** to the continuum like so.- Split the spectrum into 50 bins- Take the median of the 20 highest points in each bin.- Fit an exponential- Evaluate it at the original wavelength values- Divide the original by the fit
###Code
def get_continuum_points(wave, flux, splits=50, top=20):
"""Get continuum points along a spectrum.
This splits a spectrum into "splits" number of bins and calculates
the median wavelength and flux of the upper "top" number of flux
values.
"""
# Shorten array until can be evenly split up.
remainder = len(flux) % splits
if remainder:
# Nonzero remainder needs this slicing
wave = wave[:-remainder]
flux = flux[:-remainder]
wave_shaped = wave.reshape((splits, -1))
flux_shaped = flux.reshape((splits, -1))
s = np.argsort(flux_shaped, axis=-1)[:, -top:]
s_flux = np.array([ar1[s1] for ar1, s1 in zip(flux_shaped, s)])
s_wave = np.array([ar1[s1] for ar1, s1 in zip(wave_shaped, s)])
wave_points = np.median(s_wave, axis=-1)
flux_points = np.median(s_flux, axis=-1)
assert len(flux_points) == splits
return wave_points, flux_points
def continuum(wave, flux, splits=50, method='scalar', plot=False, top=20):
"""Fit continuum of flux.
top: is number of top points to take median of continuum.
"""
org_wave = wave[:]
org_flux = flux[:]
# Get continuum value in chunked sections of spectrum.
wave_points, flux_points = get_continuum_points(wave, flux, splits=splits, top=top)
poly_num = {"scalar": 0, "linear": 1, "quadratic": 2, "cubic": 3}
if method == "exponential":
z = np.polyfit(wave_points, np.log(flux_points), deg=1, w=np.sqrt(flux_points))
p = np.poly1d(z)
norm_flux = np.exp(p(org_wave)) # Un-log the y values.
else:
z = np.polyfit(wave_points, flux_points, poly_num[method])
p = np.poly1d(z)
norm_flux = p(org_wave)
if plot:
plt.subplot(211)
plt.plot(wave, flux)
plt.plot(wave_points, flux_points, "x-", label="points")
plt.plot(org_wave, norm_flux, label='norm_flux')
plt.legend()
plt.subplot(212)
plt.plot(org_wave, org_flux / norm_flux)
plt.title("Normalization")
plt.xlabel("Wavelength (nm)")
plt.show()
return norm_flux
#host_cont = local_normalization(wav_model, host_f, splits=50, method="exponential", plot=True)
host_continuum = continuum(wav_model, host_f, splits=50, method="exponential", plot=True)
host_cont = host_f / host_continuum
#comp_cont = local_normalization(wav_model, comp_f, splits=50, method="exponential", plot=True)
comp_continuum = continuum(wav_model, comp_f, splits=50, method="exponential", plot=True)
comp_cont = comp_f / comp_continuum
###Output
_____no_output_____
###Markdown
Above, the top panel is the unnormalized spectrum, with the median points in orange and the green line the continuum fit. The bottom plot is the continuum-normalized result
###Code
plt.plot(wav_model, comp_cont, label="Companion")
plt.plot(wav_model, host_cont-0.3, label="Host")
plt.title("Continuum Normalized (with -0.3 offset)")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
plt.plot(wav_model[20:200], comp_cont[20:200], label="Companion")
plt.plot(wav_model[20:200], host_cont[20:200], label="Host")
plt.title("Continuum Normalized - close up")
plt.xlabel("Wavelength (nm)")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Combining SpectraI then mix the models using a combination of the two spectra. In this case with NO RV shifts.
###Code
def mix(h, c, alpha):
return (h + c * alpha) / (1 + alpha)
mix1 = mix(host_cont, comp_cont, 0.01) # 1% of the companion spectra
mix2 = mix(host_cont, comp_cont, 0.05) # 5% of the companion spectra
# plt.plot(wav_model[20:100], comp_cont[20:100], label="comp")
plt.plot(wav_model[20:100], host_cont[20:100], label="host")
plt.plot(wav_model[20:100], mix1[20:100], label="mix 1%")
plt.plot(wav_model[20:100], mix2[20:100], label="mix 5%")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The companion is cooler, so there are many more deep lines present in the spectrum. Even a small contribution of the companion spectrum reduces the continuum of the mixed spectrum considerably. When I compare these mixed spectra to my observations:
###Code
mask = (wav_model > np.min(obs["wavelength"])) & (wav_model < np.max(obs["wavelength"]))
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.plot(obs["wavelength"], obs["flux"], label="obs")
#plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
# Zoomed in
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.plot(obs["wavelength"], obs["flux"], label="obs")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.xlim([2112, 2117])
plt.ylim([0.9, 1.1])
plt.title("Zoomed")
plt.show()
###Output
_____no_output_____
###Markdown
As you can see here, my observations are above the continuum most of the time. What I have noticed is that this drastically affects the chi-squared result, as the favoured mixed model is then the one with the smallest alpha. I am thinking of renormalizing my observations by implementing equation (1) from [Passegger 2016](https://arxiv.org/pdf/1601.01877.pdf) *(Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models)* F_obs = F_obs * (continuum_fit model / continuum_fit observation) They fit a linear function to the continuum of the observation and computed spectra to account for *"slight differences in the continuum level and possible linear trends between the already normalized spectra."* - One difference is that they say they normalize the **average** flux of the spectra to unity. Would this make a difference in this method? Questions- Would this be the correct approach to take to solve this? - Should I renormalize the observations first as well?- Am I treating the cooler M-dwarf spectra correctly in this approach? Attempting the Passegger method
###Code
from scipy.interpolate import interp1d
# mix1_norm = continuum(wav_model, mix1, splits=50, method="linear", plot=False)
# mix2_norm = local_normalization(wav_model, mix2, splits=50, method="linear", plot=False)
obs_continuum = continuum(obs["wavelength"], obs["flux"], splits=20, method="linear", plot=True)
linear1 = continuum(wav_model, mix1, splits=50, method="linear", plot=True)
linear2 = continuum(wav_model, mix2, splits=50, method="linear", plot=False)
obs_renorm1 = obs["flux"] * (interp1d(wav_model, linear1)(obs["wavelength"]) / obs_continuum)
obs_renorm2 = obs["flux"] * (interp1d(wav_model, linear2)(obs["wavelength"]) / obs_continuum)
# Just a scalar
# mix1_norm = local_normalization(wav_model, mix1, splits=50, method="scalar", plot=False)
# mix2_norm = local_normalization(wav_model, mix2, splits=50, method="scalar", plot=False)
obs_scalar = continuum(obs["wavelength"], obs["flux"], splits=20, method="scalar", plot=False)
scalar1 = continuum(wav_model, mix1, splits=50, method="scalar", plot=True)
scalar2 = continuum(wav_model, mix2, splits=50, method="scalar", plot=False)
print(scalar2)
obs_renorm_scalar1 = obs["flux"] * (interp1d(wav_model, scalar1)(obs["wavelength"]) / obs_scalar)
obs_renorm_scalar2 = obs["flux"] * (interp1d(wav_model, scalar2)(obs["wavelength"]) / obs_scalar)
plt.plot(obs["wavelength"], obs_scalar, label="scalar observed")
plt.plot(obs["wavelength"], obs_continuum, label="linear observed")
plt.plot(obs["wavelength"], interp1d(wav_model, scalar1)(obs["wavelength"]), label="scalar 1%")
plt.plot(obs["wavelength"], interp1d(wav_model, linear1)(obs["wavelength"]), label="linear 1%")
plt.plot(obs["wavelength"], interp1d(wav_model, scalar2)(obs["wavelength"]), label="scalar 5%")
plt.plot(obs["wavelength"], interp1d(wav_model, linear2)(obs["wavelength"]), label="linear 5%")
plt.title("Linear and Scalar continuum renormalizations.")
plt.legend()
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm1, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar1, label="scalar norm")
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.legend()
plt.title("1% model")
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm1, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar1, label="scalar norm")
plt.plot(wav_model[mask], mix1[mask], label="mix 1%")
plt.legend()
plt.title("1% model, zoom")
plt.xlim([2120, 2122])
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm2, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar2, label="scalar norm")
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.legend()
plt.title("5% model")
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
plt.plot(obs["wavelength"], obs["flux"], label="obs", alpha =0.6)
plt.plot(obs["wavelength"], obs_renorm2, label="linear norm")
plt.plot(obs["wavelength"], obs_renorm_scalar2, label="scalar norm")
plt.plot(wav_model[mask], mix2[mask], label="mix 5%")
plt.legend()
plt.title("5% model zoomed")
plt.xlim([2120, 2122])
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.show()
###Output
_____no_output_____
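###Markdown
As a quick look at the last question above (Passegger et al. normalize the **average** flux to unity), here is a minimal sketch using the arrays already defined in this notebook; the names `obs_mean_norm` and `mix2_mean_norm` are new and purely illustrative.
###Code
# Illustrative only: normalize the average flux to unity and compare with the fitted continua above.
obs_mean_norm = obs["flux"] / np.mean(obs["flux"])
mix2_mean_norm = mix2 / np.mean(mix2[mask])

plt.plot(obs["wavelength"], obs_mean_norm, label="obs / mean(obs)", alpha=0.6)
plt.plot(wav_model[mask], mix2_mean_norm[mask], label="mix 5% / mean(mix 5%)")
plt.hlines(1, 2111, 2124, linestyle="--", alpha=0.2)
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.title("Average-flux normalization (illustrative)")
plt.show()
###Output
_____no_output_____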
###Markdown
In this example, for the 5% companion spectrum, there is a bit of difference between the linear and scalar normalizations, with a larger difference at the longer wavelengths (more orange visible above the red). The faint blue is the spectrum before the renormalization. Range of phoenix spectra
###Code
wav_model = fits.getdata("/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/WAVE_PHOENIX-ACES-AGSS-COND-2011.fits")
wav_model /= 10 # nm
temps = [2300, 3000, 4000, 5000]
mask1 = (1000 < wav_model) & (wav_model < 3300)
masked_wav1 = wav_model[mask1]
for temp in temps[::-1]:
file = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte0{0}-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits".format(temp)
host_f = fits.getdata(file)
plt.plot(masked_wav1, host_f[mask1], label="Teff={}".format(temp))
plt.title("Phoenix spectra")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
mask = (2000 < wav_model) & (wav_model < 2300)
masked_wav = wav_model[mask]
for temp in temps[::-1]:
file = "/home/jneal/Phd/data/PHOENIX-ALL/PHOENIX/Z-0.0/lte0{0}-4.50-0.0.PHOENIX-ACES-AGSS-COND-2011-HiRes.fits".format(temp)
host_f = fits.getdata(file)
host_f = host_f[mask]
plt.plot(masked_wav, host_f, label="Teff={}".format(temp))
plt.title("Phoenix spectra")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
# Observations
for chip in range(1,5):
obs = fits.getdata("/home/jneal/.handy_spectra/HD211847-1-mixavg-tellcorr_{}.fits".format(chip))
plt.plot(obs["wavelength"], obs["flux"], label="chip {}".format(chip))
plt.hlines(1, 2111, 2165, linestyle="--")
plt.title("CRIRES spectrum HD211847")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
# Observations
for chip in range(1,5):
obs = fits.getdata("/home/jneal/.handy_spectra/HD30501-1-mixavg-tellcorr_{}.fits".format(chip))
plt.plot(obs["wavelength"], obs["flux"], label="chip {}".format(chip))
plt.hlines(1, 2111, 2165, linestyle="--")
plt.title("CRIRES spectrum HD30501")
plt.xlabel("Wavelength (nm)")
plt.legend()
plt.show()
###Output
_____no_output_____ |
recognition_gap/recognition_gap_experiment_JOV.ipynb | ###Markdown
Recognition gap: Search for MIRCs This notebook contains the code for the main experiment of the third case study "recognition gap" in "The Notorious Difficulty of Comparing Human and Machine Perception" (Funke, Borowski et al. 2020): We implement a search algorithm for a deep convolutional neural network to identify MIRCs (minimal recognizable configurations). The procedure is very similar to the human experiment performed by Ullman et al. (2016). Libraries, packages, ...
###Code
# basic imports
import os
# standard libraries
import numpy as np
import math
from PIL import Image
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import csv
import time
# torch imports
import torch
# custom imports
import configuration_for_experiment as config
import utils.pytorchnet_bagnets as pytorchnet_bagnets
import utils.data_in as data_in
import utils.data_out as data_out
import utils.search as search
###Output
_____no_output_____
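###Markdown
The search itself lives in `utils.search` and is not reproduced here. Purely as an illustration of the idea (and not the code used for the results), the next cell sketches how one descendant step of an Ullman-style MIRC search could look, assuming the five-descendant scheme described by Ullman et al. (2016): four corner crops and one resolution-reduced version, each reducing size or resolution by roughly 20%. The function and the dummy image are hypothetical.
###Code
import numpy as np
from PIL import Image

def descendants(patch, crop_frac=0.8, scale_frac=0.8):
    """Illustrative only: four corner crops at crop_frac of the size, plus one
    resolution-reduced version (downsampled and upsampled back)."""
    w, h = patch.size
    cw, ch = int(round(w * crop_frac)), int(round(h * crop_frac))
    corners = [(0, 0), (w - cw, 0), (0, h - ch), (w - cw, h - ch)]
    crops = [patch.crop((x, y, x + cw, y + ch)) for x, y in corners]
    low_res = patch.resize((max(1, int(w * scale_frac)), max(1, int(h * scale_frac))),
                           Image.BILINEAR).resize((w, h), Image.BILINEAR)
    return crops + [low_res]

# Usage sketch with a random dummy patch (a stand-in for a real image crop)
dummy = Image.fromarray(np.uint8(np.random.rand(64, 64, 3) * 255))
print([d.size for d in descendants(dummy)])
###Output
_____no_output_____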
###Markdown
Set device
###Code
# set device on GPU if available, else CPU
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Load data
###Code
# get data_loader
data_loader = data_in.get_data_loader(config.Ullman_or_ImageNet)
###Output
_____no_output_____
###Markdown
Load model
###Code
model = pytorchnet_bagnets.bagnet33(pretrained=True).to(DEVICE)
model.avg_pool = False
model.eval()
torch.set_grad_enabled(False)
###Output
_____no_output_____
###Markdown
Directories for output data
###Code
exp_dir = data_out.make_exp_dir(
config.Ullman_or_ImageNet,
config.list_as_one_class,
config.start_idx,
config.stop_idx,
config.descendent_specifier)
###Output
_____no_output_____
###Markdown
Do it! Search MIRCs - and while you're at it, also sub-MIRCs
###Code
write_or_append = "w"
start = time.time()
# loop through all images in data_loader
# note that this data_loader was slightly modified and that it returns a
# list of target(s) and the path to the image file
for number_IN, (image_from_loader, target, path) in enumerate(data_loader):
# only perform the search if the images are from Ullman et al.
# or if the images are in the specified range (start_idx and stop_idx)
if (
# and ("suit" in path[0])) # TODO
((config.Ullman_or_ImageNet == "Ullman"))
or ((config.Ullman_or_ImageNet == "ImageNet") and (number_IN >= config.start_idx) and (number_IN < config.stop_idx))
):
print("\nnumber_IN", number_IN)
# all classes as one class
if config.list_as_one_class:
target_list = target
search.perform_MIRC_search(
image_from_loader,
target_list,
path,
model,
DEVICE,
config.Ullman_or_ImageNet,
config.descendent_specifier,
exp_dir,
write_or_append)
write_or_append = "a"
# individual elements as separate classes
else:
for target_i in target:
target_list = [target_i]
search.perform_MIRC_search(
image_from_loader,
target_list,
path,
model,
DEVICE,
config.Ullman_or_ImageNet,
config.descendent_specifier,
exp_dir,
write_or_append)
write_or_append = "a"
print("done")
stop = time.time()
print(f"time {stop-start}")
###Output
_____no_output_____ |
Applied Math/Y1S4/Lineair programmeren/Opgave 16.ipynb | ###Markdown
**Helper functions**: A few functions that are useful whilst working with simplex tableaus.
###Code
# Add the column and row numbers to the matrix for easier indexing.
# It preserves the original naming.
colrownames <- function(M) {
cn=colnames(M)
cn=paste(cn, ' (', c(1:length(M[1,])), ')', sep='')
colnames(M)=cn
rn=rownames(M)
rn=paste(rn, ' (', c(1:length(M[,1])), ')', sep='')
rownames(M)=rn
M
}
# Returns the pivot element for a given column.
find.pivot <- function(M, col) {
rhs.index=length(M[,1])
row=M[,rhs.index]/M[,col]
min(row[row>0 & !is.na(row) & row!=Inf])
}
###Output
_____no_output_____
###Markdown
LP model **Decision variables*** $x_1$ : Oil type 1* $x_2$ : Oil type 2* $x_3$ : Oil type 3**LP problem**$\max 12x_1 + 15x_2 + 20x_3 \\x_1 + 4x_2 + 8x_3 \leq 1000 \\ 0.5x_1+2x_2+0.25x_3 \leq 1500 \\7x_1+6x_2+x_3\leq 1400 \\3x_1+5x_2+4x_3 \leq 2000$**Canonical form**$A+A_5=0 \\d-12x_1-15x_2-20x_3=0$ Solving
###Code
A=c(1,0,0,0,0,0,0,0)
d=c(0,1,0,0,0,0,0,0)
x1=c(-1,-12,1,.5,7,3,1,0)
x2=c(0,-15,4,2,6,5,0,0)
x3=c(0,-20,8,.25,1,4,0,1)
s1=c(0,0,1,0,0,0,0,0)
s2=c(0,0,0,1,0,0,0,0)
s3=c(0,0,0,0,1,0,0,0)
s4=c(0,0,0,0,0,1,0,0)
a5=c(0,0,0,0,0,0,1,0)
s5=c(1,0,0,0,0,0,-1,0)
s6=c(0,0,0,0,0,0,0,1)
RHS=c(-50,0,1000,1500,1400,2000,50,10)
M=cbind(A,d,x1,x2,x3,s1,s2,s3,s4,a5,s5,s6,RHS)
rownames(M)=c('A','d','s1','s2','s3','s4','s5','s6')
M=colrownames(M)
# Iteration 2
M[1,]=M[1,]+M[7,]
M[2,]=M[2,]+12*M[7,]
M[3,]=M[3,]-M[7,]
M[4,]=M[4,]-1/2*M[7,]
M[5,]=M[5,]-7*M[7,]
M[6,]=M[6,]-3*M[7,]
# Iteration 3
M[2,]=M[2,]+20*M[8,]
M[3,]=M[3,]-8*M[8,]
M[4,]=M[4,]-1/4*M[8,]
M[5,]=M[5,]-M[8,]
M[6,]=M[6,]-4*M[8,]
# Iteration 4
M[5,]=M[5,]/6
M[2,]=M[2,]+15*M[5,]
M[3,]=M[3,]-4*M[5,]
M[4,]=M[4,]-2*M[5,]
M[6,]=M[6,]-5*M[5,]
M
###Output
_____no_output_____
###Markdown
The solution is $(x_1,x_2,x_3)=(50, 173.33, 10)$. 42.a _If next week, because of stricter environmental legislation, at most 4 tonnes of oil type 3 may be produced, what would the optimal solution and maximum profit be?_ The question concerns the constraint $x_3 \leq 10$, which now becomes $x_3 \leq 4$. In canonical form this is $x_3 + s_3 = 10$. We have to subtract $k$ from it, so $s_3^*=s_3-k$.
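As a worked check (standard right-hand-side sensitivity analysis): lowering that right-hand side by $k = 10 - 4 = 6$ changes the optimal RHS column by $-k$ times the column of the corresponding slack variable (column 12 of `M`, labelled `s6`, with the RHS in column 13), so the next cell computes `M[,13] - 6*M[,12]`.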
###Code
M[,13]-6*M[,12]
###Output
_____no_output_____
###Markdown
The new solution is $(x_1,x_2,x_3)=(50, 174.33, 4)$ with a maximum profit of €3295. 42.b _Suppose the factory wants to make at least 170 tonnes of oil type 1 next week. What would the optimal solution and maximum profit be?_
###Code
M[,13]-120*M[,11]
###Output
_____no_output_____
###Markdown
42.c _Suppose the factory wants to make at least 200 tonnes of oil type 1 next week. What would the optimal solution and maximum profit be?_
###Code
M[,13]-150*M[,11]
###Output
_____no_output_____ |
_notebooks/2021-06-25-Allcorrect-Games.ipynb | ###Markdown
"Allcorrect Games"> "We tackle an NLP multiclass classification challenge for a localization company"- toc: true- badges: true- comments: true- categories: [fastpages, jupyter]- hide: false Problem Statement- Allcorrect Games is looking to improve the speed at which they identify potential customers.- The current bottleneck is manually labeling reviews into 4 categories- We will attempt to resolve this using machine learning Data Description- id - unique identifier- mark - our RL, YL, L+, or L- label - RL – localization request; - L+ – good localization; - L- – bad localization; - YL – localization exists.- review - The reviews to be classified Plan:1. Examine the data for insights and errors2. Clean up the reviews for processing and vectorization.3. Examine class balance and test sampling methods.4. Experiment with different vectorization methods and examine our results via Logistic Regression5. Use a more advanced model and compare results Solution Import and examine the data
###Code
#collapse
import warnings
warnings.filterwarnings("ignore")
import math
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format = 'png'
# the next line provides graphs of better quality on HiDPI screens
%config InlineBackend.figure_format = 'retina'
plt.style.use('seaborn')
from tqdm.auto import tqdm
tqdm.pandas()
import re
import spacy
import torch
import transformers
from sklearn.metrics import classification_report,accuracy_score
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from catboost import Pool, CatBoostClassifier
#hide
STATE = 12345
USE_GPU = True
SOURCE_FILE = 'C:/Users/The Ogre/datascience/allcorrectgames/reviews.xlsx'
#hide
torch.cuda.is_available()
df_reviews = pd.read_excel(SOURCE_FILE, engine='openpyxl')
df_reviews.info()
df_reviews.head()
df_reviews['mark'].value_counts()
#collapse
df_reviews['mark'].value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
- The category labels need cleaning up; we need to make them all uppercase
###Code
df_reviews['id'].duplicated().value_counts()
df_reviews[df_reviews['review'].str.len() <= 5]['review'].value_counts()
###Output
_____no_output_____
###Markdown
- Removing these reviews. They are not long enough to have any value and appear to be errors in the initial algorithm gathering the reviews
###Code
df_reviews['review'].duplicated().value_counts()
df_reviews[df_reviews['review'].duplicated()]['review'].value_counts().head()
###Output
_____no_output_____
###Markdown
- Duplicates are a natural occurrence in the data set, so we are going to leave them
###Code
df_reviews = df_reviews[df_reviews['review'].str.len() > 5]
df_reviews.reset_index(drop=True, inplace=True)
df_reviews['mark'] = df_reviews['mark'].str.upper()
#collapse
df_reviews['mark'].value_counts().plot(kind='bar')
df_reviews['mark'].value_counts()
###Output
_____no_output_____
###Markdown
- All labels have been properly corrected
###Code
#collapse
contractions = {
"ain't": "am not",
"aren't": "are not",
"can't": "cannot",
"can't've": "cannot have",
"'cause": "because",
"could've": "could have",
"couldn't": "could not",
"couldn't've": "could not have",
"didn't": "did not",
"doesn't": "does not",
"don't": "do not",
"hadn't": "had not",
"hadn't've": "had not have",
"hasn't": "has not",
"haven't": "have not",
"he'd": "he would",
"he'd've": "he would have",
"he'll": "he will",
"he'll've": "he will have",
"he's": "he is",
"how'd": "how did",
"how'd'y": "how do you",
"how'll": "how will",
"how's": "how does",
"i'd": "i would",
"i'd've": "i would have",
"i'll": "i will",
"i'll've": "i will have",
"i'm": "i am",
"i've": "i have",
"isn't": "is not",
"it'd": "it would",
"it'd've": "it would have",
"it'll": "it will",
"it'll've": "it will have",
"it's": "it is",
"let's": "let us",
"ma'am": "madam",
"mayn't": "may not",
"might've": "might have",
"mightn't": "might not",
"mightn't've": "might not have",
"must've": "must have",
"mustn't": "must not",
"mustn't've": "must not have",
"needn't": "need not",
"needn't've": "need not have",
"o'clock": "of the clock",
"oughtn't": "ought not",
"oughtn't've": "ought not have",
"shan't": "shall not",
"sha'n't": "shall not",
"shan't've": "shall not have",
"she'd": "she would",
"she'd've": "she would have",
"she'll": "she will",
"she'll've": "she will have",
"she's": "she is",
"should've": "should have",
"shouldn't": "should not",
"shouldn't've": "should not have",
"so've": "so have",
"so's": "so is",
"that'd": "that would",
"that'd've": "that would have",
"that's": "that is",
"there'd": "there would",
"there'd've": "there would have",
"there's": "there is",
"they'd": "they would",
"they'd've": "they would have",
"they'll": "they will",
"they'll've": "they will have",
"they're": "they are",
"they've": "they have",
"to've": "to have",
"wasn't": "was not",
" u ": " you ",
" ur ": " your ",
" n ": " and ",
"won't": "would not",
'dis ': 'this ',
'bak ': 'back ',
'brng': 'bring'}
def cont_to_exp(x):
if type(x) is str:
for key in contractions:
value = contractions[key]
x = x.replace(key, value)
return x
else:
return x
def clear_text(text):
text = text.lower()
text = re.sub(r"[^a-z']+", " ", text)
return " ".join(text.split())
df_reviews['review_norm'] = df_reviews['review'].progress_apply(clear_text)
df_reviews['review_norm'] = df_reviews['review_norm'].progress_apply(cont_to_exp)
df_reviews['review_norm'].head()
###Output
_____no_output_____
###Markdown
- Reviews are now all lower case and contractions have been removed to simplify vectorization Sampling- Both upsampling and downsampling were attempted on this dataset- No improvement to results were achieved so the code was removed to declutter Split the Data
###Code
train, test = train_test_split(df_reviews,
test_size=0.25,
stratify = df_reviews['mark'],
random_state=STATE)
X_train = train.drop(['id', 'review', 'mark'], axis=1)
y_train = train['mark']
X_test = test.drop(['id', 'review', 'mark'], axis=1)
y_test = test['mark']
display(X_train.shape[0])
X_test.shape[0]
y_train.value_counts()
y_test.value_counts()
###Output
_____no_output_____
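###Markdown
As referenced in the sampling note above, here is a hedged sketch (not the removed original code) of how the upsampling experiment could be reproduced on the training split; `train_balanced` is a new, purely illustrative name and is not used by the models below.
###Code
# Illustrative only: duplicate minority-class training rows (with replacement) until each
# class matches the majority class 'RL', then shuffle. As noted above, this did not help.
train_balanced = pd.concat(
    [
        train[train['mark'] == label].sample(
            n=train['mark'].value_counts().max(), replace=True, random_state=STATE
        )
        for label in train['mark'].unique()
    ]
).sample(frac=1, random_state=STATE)
train_balanced['mark'].value_counts()
###Output
_____no_output_____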
###Markdown
Logistic Regression Model Count Vectorizer
###Code
corpus = X_train['review_norm']
count_vect = CountVectorizer(stop_words='english', ngram_range=(2,3), max_features=30000)
X_train_1 = count_vect.fit_transform(corpus)
corpus = X_test['review_norm']
X_test_1 = count_vect.transform(corpus)
grid={
"penalty":["l2"],
"fit_intercept": [True, False],
"random_state": [STATE],
"solver": ["newton-cg", "lbfgs", "liblinear", "sag", "saga"],
"max_iter": [1000],
"multi_class": ["ovr", "multinomial"],
"n_jobs": [-1],
}
model_lr = LogisticRegression()
lr_cv=GridSearchCV(model_lr,grid,cv=5)
lr_cv.fit(X_train_1 ,y_train)
print("Tuned Hyperparameters:", lr_cv.best_params_)
print("Accuracy:", lr_cv.best_score_)
###Output
Tuned Hyperparameters: {'fit_intercept': True, 'max_iter': 1000, 'multi_class': 'multinomial', 'n_jobs': -1, 'penalty': 'l2', 'random_state': 12345, 'solver': 'saga'}
Accuracy: 0.8754731556585554
###Markdown
- This cell can take hours to run- If significant changes to preprocessing or data, please run again
###Code
model_lr = LogisticRegression(**lr_cv.best_params_)
model_lr.fit(X_train_1, y_train)
pred = model_lr.predict(X_test_1)
print(classification_report(y_test,pred))
print(accuracy_score(y_test,pred))
###Output
precision recall f1-score support
L+ 0.65 0.19 0.30 166
L- 0.80 0.52 0.63 1369
RL 0.89 0.98 0.94 10670
YL 0.58 0.17 0.27 741
accuracy 0.88 12946
macro avg 0.73 0.47 0.53 12946
weighted avg 0.86 0.88 0.86 12946
0.8791132396106905
###Markdown
- These are strong results for Logistic Regression- Not enough unique data for the smaller categories it seems Tfidf Vectorizer
###Code
corpus = X_train['review_norm']
tfidf_vect= TfidfVectorizer(stop_words='english', ngram_range=(2,3), max_features=30000)
X_train_2 = tfidf_vect.fit_transform(corpus)
corpus = X_test['review_norm']
X_test_2 = tfidf_vect.transform(corpus)
model_lr_2 = LogisticRegression()
lr_cv=GridSearchCV(model_lr_2,grid,cv=5)
lr_cv.fit(X_train_2 ,y_train)
print("Tuned Hyperparameters:", lr_cv.best_params_)
print("Accuracy:", lr_cv.best_score_)
###Output
Tuned Hyperparameters: {'fit_intercept': True, 'max_iter': 1000, 'multi_class': 'multinomial', 'n_jobs': -1, 'penalty': 'l2', 'random_state': 12345, 'solver': 'saga'}
Accuracy: 0.8592764259044676
###Markdown
- This cell can take hours to run- Results are hardcoded to avoid rerunning hours of calculations- If significant changes to preprocessing or data, please run again
###Code
model_lr_2 = LogisticRegression(**lr_cv.best_params_)
model_lr_2.fit(X_train_2, y_train)
pred = model_lr_2.predict(X_test_2)
print(classification_report(y_test,pred))
print(accuracy_score(y_test,pred))
###Output
precision recall f1-score support
L+ 0.60 0.05 0.10 166
L- 0.84 0.44 0.58 1369
RL 0.87 0.99 0.93 10670
YL 0.84 0.05 0.09 741
accuracy 0.87 12946
macro avg 0.79 0.38 0.42 12946
weighted avg 0.86 0.87 0.83 12946
0.869457747566816
###Markdown
- Significantly worse results on the smaller categories- Count Vectorizer is the clear winner here Catboost Model
###Code
text_features = ['review_norm']
train_pool = Pool(
X_train,
y_train,
text_features=text_features,
feature_names=list(X_train)
)
valid_pool = Pool(
X_test,
y_test,
text_features=text_features,
feature_names=list(X_train)
)
catboost_params = {
'iterations': 5000,
'learning_rate': 0.03,
'eval_metric': 'TotalF1',
'task_type': 'GPU' if USE_GPU else 'CPU',
'early_stopping_rounds': 2000,
'use_best_model': True,
'verbose': 500,
'random_state': STATE
}
#collapse-output
model_cb = CatBoostClassifier(**catboost_params)
model_cb.fit(train_pool, eval_set=valid_pool)
pred = model_cb.predict(X_test)
print(classification_report(y_test,pred))
print(accuracy_score(y_test,pred))
###Output
precision recall f1-score support
L+ 0.65 0.23 0.35 166
L- 0.80 0.79 0.79 1369
RL 0.94 0.98 0.96 10670
YL 0.77 0.43 0.55 741
accuracy 0.92 12946
macro avg 0.79 0.61 0.66 12946
weighted avg 0.91 0.92 0.91 12946
0.9181986714042948
###Markdown
- Very strong results for the RL category- The trend appears to be that the model's accuracy in a category increases with the number of data points Experiment on Multiple Model Usage Phase 1 - Localization Requests
###Code
reviews_set_1 = df_reviews.copy()
# Collapse every non-RL label into 'YL' (vectorised .loc avoids chained-assignment warnings)
reviews_set_1.loc[reviews_set_1['mark'] != 'RL', 'mark'] = 'YL'
reviews_set_1['mark'].value_counts()
X1 = reviews_set_1.drop(['id', 'review', 'mark'], axis=1)
y1 = reviews_set_1['mark']
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, test_size=0.25, stratify=y1, random_state=STATE)
train_pool_2 = Pool(
X1_train,
y1_train,
text_features=text_features,
feature_names=list(X1_train)
)
valid_pool_2 = Pool(
X1_test,
y1_test,
text_features=text_features,
feature_names=list(X1_train)
)
#collapse-output
model1 = CatBoostClassifier(**catboost_params)
model1.fit(train_pool_2, eval_set=valid_pool_2)
pred1 = model1.predict(X1_test)
print(classification_report(y1_test,pred1))
print(accuracy_score(y1_test,pred1))
###Output
precision recall f1-score support
RL 0.95 0.97 0.96 10670
YL 0.84 0.75 0.79 2276
accuracy 0.93 12946
macro avg 0.89 0.86 0.87 12946
weighted avg 0.93 0.93 0.93 12946
0.9304032133477522
###Markdown
Phase 2 - Localization Reviews
###Code
reviews_set_2 = df_reviews.copy()
reviews_set_2 = reviews_set_2[reviews_set_2['mark'] != 'RL']
reviews_set_2['mark'].value_counts()
X2 = reviews_set_2.drop(['id', 'review', 'mark'], axis=1)
y2 = reviews_set_2['mark']
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.25, stratify=y2, random_state=STATE)
train_pool_3 = Pool(
X2_train,
y2_train,
text_features=text_features,
feature_names=list(X2_train)
)
valid_pool_3 = Pool(
X2_test,
y2_test,
text_features=text_features,
feature_names=list(X2_train)
)
#collapse-output
model2 = CatBoostClassifier(**catboost_params)
model2.fit(train_pool_3, eval_set=valid_pool_3)
pred2 = model2.predict(X2_test)
print(classification_report(y2_test,pred2))
print(accuracy_score(y2_test,pred2))
###Output
precision recall f1-score support
L+ 0.69 0.31 0.43 166
L- 0.89 0.95 0.92 1370
YL 0.85 0.86 0.86 741
accuracy 0.87 2277
macro avg 0.81 0.71 0.74 2277
weighted avg 0.87 0.87 0.86 2277
0.873078612209047
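Chaining the two phases at inference time could look roughly like this (a sketch only; it assumes the `model1` and `model2` classifiers trained above and a dataframe with the same `review_norm` feature):

```python
import pandas as pd

def predict_two_phase(model_phase1, model_phase2, X):
    """Phase 1 separates RL from the rest; phase 2 refines the non-RL rows."""
    phase1_pred = pd.Series(model_phase1.predict(X).ravel(), index=X.index)
    final_pred = phase1_pred.copy()
    non_rl = phase1_pred != 'RL'
    if non_rl.any():
        final_pred.loc[non_rl] = model_phase2.predict(X.loc[non_rl]).ravel()
    return final_pred

# Hypothetical usage on the phase-1 hold-out split (true 4-class labels would be
# needed to score it):
# combined_pred = predict_two_phase(model1, model2, X1_test)
```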
|
python_dashboard/Projet Open Data.ipynb | ###Markdown
AUTOMOBILE RISK
###Code
riqueAutoData = pd.read_csv("risque auto.csv",encoding="latin1",sep=";")
# Create the risk node
risqueAutoLabel = db.labels.create("RISQUE_AUTOMOBILE")
risqueAutoNode = db.nodes.create(name="RISQUE AUTOMOBILE")
risqueAutoLabel.add(risqueAutoNode)
# Create the 4 main variables
principalesVariables = ["Assuré", "Environnement","Comportement","Entourage"]
for attribut in principalesVariables:
query = "CREATE (PRINCIPALES: variablesPRINCIPALESAuto {name:{name}, nom:{nom}})"
results = db.query(query, params={"name":attribut, "nom":attribut},returns=(client.Node, str, client.Node))
for attribut in principalesVariables:
q = 'MATCH (u:RISQUE_AUTOMOBILE {name:"RISQUE AUTOMOBILE"}), (r:variablesPRINCIPALESAuto {name:{attribut}}) CREATE (u)-[:Variable]->(r)'
results = db.query(q, params={"attribut":attribut} ,returns=(client.Node, str, client.Node))
# Environment variables
query = "CREATE (ENVIRONNEMENT:DEMOGRAPHIE {name:{name}})"
results = db.query(query, params={"name":"DEMOGRAPHIE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPRINCIPALESAuto {name:"Environnement"}), (r:DEMOGRAPHIE {name:"DEMOGRAPHIE"}) CREATE (u)-[:Donnee_De]->(r)'
results = db.query(q, returns=(client.Node, str, client.Node))
#Population
variablesPopulation = []
famillePopulation = []
nbreElements = 4
for i in range(1,nbreElements+1):
variablesPopulation.append(riqueAutoData.iloc[i+1, 2])
famillePopulation.append(riqueAutoData.iloc[i+1, 5])
for i in range(0,nbreElements):
if(famillePopulation[i] == "Environnement"):
query = "CREATE (POPULATION: donneesPopulation {name:{name}})"
results = db.query(query, params={"name":variablesPopulation[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:DEMOGRAPHIE {name:"DEMOGRAPHIE"}), (r:donneesPopulation {name:{nameDonnee}}) CREATE (u)-[:relPopulation]->(r)'
results = db.query(q, params={"nameDonnee":variablesPopulation[i]}, returns=(client.Node, str, client.Node))
# Crimes and offences
query = "CREATE (CRIMES_DELITS: CRIMES_DELITS {name:{name}})"
results = db.query(query, params={"name":"CRIMES ET DELITS"}, returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPRINCIPALESAuto {name:"Environnement"}), (r:CRIMES_DELITS {name:{name}}) CREATE (u)-[:Donnee_De]->(r)'
results = db.query(q, params={"name":"CRIMES ET DELITS"},returns=(client.Node, str, client.Node))
# Create the 2 crime-data sources
crimesVariables = ["Police Nationale", "Gendarmerie Nationale"]
for attribut in crimesVariables:
query = "CREATE (CRIMES_DIVISIONS: variablesCRIMES {name:{name}})"
results = db.query(query, params={"name":attribut},returns=(client.Node, str, client.Node))
q = 'MATCH (u:CRIMES_DELITS {name:"CRIMES ET DELITS"}), (r:variablesCRIMES {name:{attribut}}) CREATE (u)-[:Variable]->(r)'
results = db.query(q, params={"attribut":attribut} ,returns=(client.Node, str, client.Node))
# Police crime and offence data
variablesCrimesPolice = []
familleCrimesPolice = []
nbreElements = 107
for i in range(1,nbreElements+1):
variablesCrimesPolice.append(riqueAutoData.iloc[i+116, 2])
familleCrimesPolice.append(riqueAutoData.iloc[i+116, 5])
for i in range(0,nbreElements):
if(familleCrimesPolice[i] == "Environnement"):
query = "CREATE (CRIMESDELITS: donneesCrimesDelitsPolice {name:{name}})"
results = db.query(query, params={"name":variablesCrimesPolice[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesCRIMES {name:"Police Nationale"}), (r:donneesCrimesDelitsPolice {name:{nameDonnee}}) CREATE (u)-[:relPolice]->(r)'
results = db.query(q, params={"nameDonnee":variablesCrimesPolice[i]}, returns=(client.Node, str, client.Node))
# Gendarmerie crime and offence data
variablesCrimesGendarmerie = []
familleCrimesGendarmerie = []
nbreElements = 107
for i in range(1,nbreElements+1):
variablesCrimesGendarmerie.append(riqueAutoData.iloc[i+8, 2])
familleCrimesGendarmerie.append(riqueAutoData.iloc[i+8, 5])
for i in range(0,nbreElements):
if(familleCrimesGendarmerie[i] == "Environnement"):
query = "CREATE (CRIMESDELITS: donneesCrimesDelitsGendarmerie {name:{name}})"
results = db.query(query, params={"name":variablesCrimesGendarmerie[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesCRIMES {name:"Gendarmerie Nationale"}), (r:donneesCrimesDelitsGendarmerie {name:{nameDonnee}}) CREATE (u)-[:relGendarmerie]->(r)'
results = db.query(q, params={"nameDonnee":variablesCrimesGendarmerie[i]}, returns=(client.Node, str, client.Node))
#Accidents
query = "CREATE (ACCIDENTS: ACCIDENTS {name:{name}})"
results = db.query(query, params={"name":"ACCIDENTS"}, returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPRINCIPALESAuto {name:"Entourage"}), (r:ACCIDENTS {name:{name}}) CREATE (u)-[:relEntourage]->(r)'
results = db.query(q, params={"name":"ACCIDENTS"},returns=(client.Node, str, client.Node))
# Create the 4 accident variables
accidentVariables = ["Lieux", "Véhicules", "Caractérisques", "Usagers"]
for attribut in accidentVariables:
query = "CREATE (ACCIDENT_DIVISIONS: variablesACCIDENT {name:{name}})"
results = db.query(query, params={"name":attribut},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ACCIDENTS {name:"ACCIDENTS"}), (r:variablesACCIDENT {name:{attribut}}) CREATE (u)-[:varAccident]->(r)'
results = db.query(q, params={"attribut":attribut} ,returns=(client.Node, str, client.Node))
# Accident location data
variablesLieuxAccidents = []
familleLieuxAccidents = []
nbreElements = 11
for i in range(1,nbreElements+1):
variablesLieuxAccidents.append(riqueAutoData.iloc[i+227, 2])
familleLieuxAccidents.append(riqueAutoData.iloc[i+227, 5])
for i in range(0,nbreElements):
if(familleLieuxAccidents[i] == "Entourage"):
query = "CREATE (LieuxAccidents: donneesLieuxAccidents {name:{name}})"
results = db.query(query, params={"name":variablesLieuxAccidents[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesACCIDENT {name:"Lieux"}), (r:donneesLieuxAccidents {name:{nameDonnee}}) CREATE (u)-[:relLieuxAccidents]->(r)'
results = db.query(q, params={"nameDonnee":variablesLieuxAccidents[i]}, returns=(client.Node, str, client.Node))
# Accident vehicle data
variablesVehiculesAccidents = []
familleVehiculesAccidents = []
nbreElements = 9
for i in range(1,nbreElements+1):
variablesVehiculesAccidents.append(riqueAutoData.iloc[i+239, 2])
familleVehiculesAccidents.append(riqueAutoData.iloc[i+239, 5])
for i in range(0,nbreElements):
if(familleVehiculesAccidents[i] == "Entourage"):
query = "CREATE (VehiculesAccidents: donneesVehiculesAccidents {name:{name}})"
results = db.query(query, params={"name":variablesVehiculesAccidents[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesACCIDENT {name:"Véhicules"}), (r:donneesVehiculesAccidents {name:{nameDonnee}}) CREATE (u)-[:relVehiculesAccidents]->(r)'
results = db.query(q, params={"nameDonnee":variablesVehiculesAccidents[i]}, returns=(client.Node, str, client.Node))
# Accident road-user data
variablesUsagersAccidents= []
familleUsagersAccidents=[]
nbreElements = 12
for i in range(1,nbreElements+1):
variablesUsagersAccidents.append(riqueAutoData.iloc[i+249, 2])
familleUsagersAccidents.append(riqueAutoData.iloc[i+249, 5])
for i in range(0,nbreElements):
if(familleUsagersAccidents[i] == "Entourage"):
query = "CREATE (UsagersAccidents: donneesUsagersAccidents {name:{name}})"
results = db.query(query, params={"name":variablesUsagersAccidents[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesACCIDENT {name:"Usagers"}), (r:donneesUsagersAccidents {name:{nameDonnee}}) CREATE (u)-[:relUsagersAccidents]->(r)'
results = db.query(q, params={"nameDonnee":variablesUsagersAccidents[i]}, returns=(client.Node, str, client.Node))
# Accident characteristics data
variablesCaracterisquesAccidents= []
familleCaracterisquesAccidents=[]
nbreElements = 16
for i in range(1,nbreElements+1):
variablesCaracterisquesAccidents.append(riqueAutoData.iloc[i+262, 2])
familleCaracterisquesAccidents.append(riqueAutoData.iloc[i+262, 5])
for i in range(0,nbreElements):
if(familleCaracterisquesAccidents[i] == "Entourage"):
query = "CREATE (CaracterisquesAccidents: donneesCaracterisquesAccidents {name:{name}})"
results = db.query(query, params={"name":variablesCaracterisquesAccidents[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesACCIDENT {name:"Caractérisques"}), (r:donneesCaracterisquesAccidents {name:{nameDonnee}}) CREATE (u)-[:relCaracterisquesAccidents]->(r)'
results = db.query(q, params={"nameDonnee":variablesCaracterisquesAccidents[i]}, returns=(client.Node, str, client.Node))
###Output
_____no_output_____
###Markdown
HEALTH RISK
###Code
risqueSanteData = pd.read_csv("risque sante.csv",encoding="latin1",sep=";")
risqueSanteData
# Create the risk node
query = "CREATE (RISQUES_DE_SANTE:RISQUES_DE_SANTE {name:{name}})"
results = db.query(query, params={"name":"RISQUE DE SANTE"},returns=(client.Node, str, client.Node))
# Create the 4 main variables
principalesVariables = ["Assuré", "Environnement","Comportement","Entourage"]
for attribut in principalesVariables:
query = "CREATE (PRINCIPALES: variablesPrincipalesSante {name:{name}, nom:{nom}})"
results = db.query(query, params={"name":attribut, "nom":attribut},returns=(client.Node, str, client.Node))
for attribut in principalesVariables:
q = 'MATCH (u:RISQUES_DE_SANTE {name:"RISQUE DE SANTE"}), (r:variablesPrincipalesSante {name:{attribut}}) CREATE (u)-[:relRisqueSante]->(r)'
results = db.query(q, params={"attribut":attribut} ,returns=(client.Node, str, client.Node))
# Environment variables
# Population / demographics
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"DEMOGRAPHIE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"DEMOGRAPHIE"},returns=(client.Node, str, client.Node))
variablesPopulation = []
famillePopulation = []
nbreElements = 4
for i in range(1,nbreElements+1):
variablesPopulation.append(risqueSanteData.iloc[i+4, 2])
famillePopulation.append(risqueSanteData.iloc[i+4, 4])
for i in range(0,nbreElements):
if(famillePopulation[i] == "Environnement"):
query = "CREATE (POPULATION: donneesPopulationSANTE {name:{name}})"
results = db.query(query, params={"name":variablesPopulation[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"DEMOGRAPHIE"}), (r:donneesPopulationSANTE {name:{nameDonnee}}) CREATE (u)-[:relPopulationSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesPopulation[i]}, returns=(client.Node, str, client.Node))
# Sports participation
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"PRATIQUE SPORT"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"PRATIQUE SPORT"},returns=(client.Node, str, client.Node))
variablesPratiqueSport = []
famillePratiqueSport = []
nbreElements = 2
for i in range(1,nbreElements+1):
variablesPratiqueSport.append(risqueSanteData.iloc[i+9, 2])
famillePratiqueSport.append(risqueSanteData.iloc[i+9, 4])
for i in range(0,nbreElements):
if(famillePratiqueSport[i] == "Environnement"):
query = "CREATE (POPULATION: donneesPratiqueSportSANTE {name:{name}})"
results = db.query(query, params={"name":variablesPratiqueSport[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"PRATIQUE SPORT"}), (r:donneesPratiqueSportSANTE {name:{nameDonnee}}) CREATE (u)-[:relSportSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesPratiqueSport[i]}, returns=(client.Node, str, client.Node))
#DAMIR
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"DEPENSES ASSURANCE MALADIE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"DEPENSES ASSURANCE MALADIE"},returns=(client.Node, str, client.Node))
variablesDAMIR = []
familleDAMIR = []
nbreElements = 54
for i in range(1,nbreElements+1):
variablesDAMIR.append(risqueSanteData.iloc[i+15, 2])
familleDAMIR.append(risqueSanteData.iloc[i+15, 4])
for i in range(0,nbreElements):
if(familleDAMIR[i] == "Environnement"):
query = "CREATE (POPULATION: donneesDAMIRSANTE {name:{name}})"
results = db.query(query, params={"name":variablesDAMIR[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"DEPENSES ASSURANCE MALADIE"}), (r:donneesDAMIRSANTE {name:{nameDonnee}}) CREATE (u)-[:relDAMIRSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesDAMIR[i]}, returns=(client.Node, str, client.Node))
# Climate data
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"DONNEES CLIMATIQUES"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"DONNEES CLIMATIQUES"},returns=(client.Node, str, client.Node))
variablesCLIMAT = []
familleCLIMAT = []
nbreElements = 5
for i in range(1,nbreElements+1):
variablesCLIMAT.append(risqueSanteData.iloc[i+73, 2])
familleCLIMAT.append(risqueSanteData.iloc[i+73, 4])
for i in range(0,nbreElements):
if(familleCLIMAT[i] == "Environnement"):
query = "CREATE (POPULATION: donneesCLIMATSANTE {name:{name}})"
results = db.query(query, params={"name":variablesCLIMAT[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"DONNEES CLIMATIQUES"}), (r:donneesCLIMATSANTE {name:{nameDonnee}}) CREATE (u)-[:relCLIMATSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesCLIMAT[i]}, returns=(client.Node, str, client.Node))
# Mortality
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"MORTALITE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"MORTALITE"},returns=(client.Node, str, client.Node))
variablesMORTALITE = []
familleMORTALITE= []
nbreElements = 4
for i in range(1,nbreElements+1):
variablesMORTALITE.append(risqueSanteData.iloc[i+82, 2])
familleMORTALITE.append(risqueSanteData.iloc[i+82, 4])
for i in range(0,nbreElements):
if(familleMORTALITE[i] == "Environnement"):
query = "CREATE (POPULATION: donneesMORTALITE {name:{name}})"
results = db.query(query, params={"name":variablesMORTALITE[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"MORTALITE"}), (r:donneesMORTALITE {name:{nameDonnee}}) CREATE (u)-[:rel_MORTALITE_Sante]->(r)'
results = db.query(q, params={"nameDonnee":variablesMORTALITE[i]}, returns=(client.Node, str, client.Node))
# Morbidity
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"MORBIDITE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"MORBIDITE"},returns=(client.Node, str, client.Node))
variablesMORBIDITE = []
familleMORBIDITE= []
nbreElements = 9
for i in range(1,nbreElements+1):
variablesMORBIDITE.append(risqueSanteData.iloc[i+87, 2])
familleMORBIDITE.append(risqueSanteData.iloc[i+87, 4])
for i in range(0,nbreElements):
if(familleMORBIDITE[i] == "Environnement"):
query = "CREATE (POPULATION: donneesMORBIDITE {name:{name}})"
results = db.query(query, params={"name":variablesMORBIDITE[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"MORBIDITE"}), (r:donneesMORBIDITE {name:{nameDonnee}}) CREATE (u)-[:rel_MORBIDITE_Sante]->(r)'
results = db.query(q, params={"nameDonnee":variablesMORBIDITE[i]}, returns=(client.Node, str, client.Node))
# Health risk factors
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"FACTEURS RISQUES SANTE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"FACTEURS RISQUES SANTE"},returns=(client.Node, str, client.Node))
variablesFACTEURS_RISQUE = []
familleFACTEURS_RISQUE= []
nbreElements = 15
for i in range(1,nbreElements+1):
variablesFACTEURS_RISQUE.append(risqueSanteData.iloc[i+97, 2])
familleFACTEURS_RISQUE.append(risqueSanteData.iloc[i+97, 4])
for i in range(0,nbreElements):
if(familleFACTEURS_RISQUE[i] == "Environnement"):
query = "CREATE (POPULATION: donneesFACTEURS_RISQUESANTE {name:{name}})"
results = db.query(query, params={"name":variablesFACTEURS_RISQUE[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"FACTEURS RISQUES SANTE"}), (r:donneesFACTEURS_RISQUESANTE {name:{nameDonnee}}) CREATE (u)-[:relFACTEURS_RISQUE_Sante]->(r)'
results = db.query(q, params={"nameDonnee":variablesFACTEURS_RISQUE[i]}, returns=(client.Node, str, client.Node))
# Supply of medical goods and services
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"OFFRES BIENS SERVICES MEDICAUX"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"OFFRES BIENS SERVICES MEDICAUX"},returns=(client.Node, str, client.Node))
variablesOFFRES = []
familleOFFRES= []
nbreElements = 3
for i in range(1,nbreElements+1):
variablesOFFRES.append(risqueSanteData.iloc[i+113, 2])
familleOFFRES.append(risqueSanteData.iloc[i+113, 4])
for i in range(0,nbreElements):
if(familleOFFRES[i] == "Environnement"):
query = "CREATE (POPULATION: donneesOFFRESSANTE {name:{name}})"
results = db.query(query, params={"name":variablesOFFRES[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"OFFRES BIENS SERVICES MEDICAUX"}), (r:donneesOFFRESSANTE {name:{nameDonnee}}) CREATE (u)-[:relOFFRESSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesOFFRES[i]}, returns=(client.Node, str, client.Node))
# Social protection
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"PROTECTION SOCIALE"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"PROTECTION SOCIALE"},returns=(client.Node, str, client.Node))
variablesSOCIAL = []
familleSOCIAL= []
nbreElements = 23
for i in range(1,nbreElements+1):
variablesSOCIAL.append(risqueSanteData.iloc[i+124, 2])
familleSOCIAL.append(risqueSanteData.iloc[i+124, 4])
for i in range(0,nbreElements):
if(familleSOCIAL[i] == "Environnement"):
query = "CREATE (POPULATION: donneesSOCIALSANTE {name:{name}})"
results = db.query(query, params={"name":variablesSOCIAL[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"PROTECTION SOCIALE"}), (r:donneesSOCIALSANTE {name:{nameDonnee}}) CREATE (u)-[:relSOCIALSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesSOCIAL[i]}, returns=(client.Node, str, client.Node))
# Causes of death
query = "CREATE (ENVIRONNEMENT_SANTE:ENVIRONNEMENT_SANTE {name:{name}})"
results = db.query(query, params={"name":"CAUSES DECES"},returns=(client.Node, str, client.Node))
q = 'MATCH (u:variablesPrincipalesSante {name:"Environnement"}), (r:ENVIRONNEMENT_SANTE {name:{name}}) CREATE (u)-[:relEnvironnement]->(r)'
results = db.query(q, params={"name":"CAUSES DECES"},returns=(client.Node, str, client.Node))
variablesDECES = []
familleDECES = []
nbreElements = 14
for i in range(1,nbreElements+1):
variablesDECES.append(risqueSanteData.iloc[i+369, 2])
familleDECES.append(risqueSanteData.iloc[i+369, 4])
for i in range(0,nbreElements):
if(familleDECES[i] == "Environnement"):
query = "CREATE (POPULATION: donneesDECESSANTE {name:{name}})"
results = db.query(query, params={"name":variablesDECES[i]},returns=(client.Node, str, client.Node))
q = 'MATCH (u:ENVIRONNEMENT_SANTE {name:"CAUSES DECES"}), (r:donneesDECESSANTE {name:{nameDonnee}}) CREATE (u)-[:relDECESSante]->(r)'
results = db.query(q, params={"nameDonnee":variablesDECES[i]}, returns=(client.Node, str, client.Node))
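All of the cells above repeat the same create-node / link-node pattern. A small helper along these lines (a sketch reusing the `db` and `client` objects already in scope; the names in the example call are taken from the cells above) would remove most of the duplication:

```python
# Sketch: wrap the repeated "CREATE child node, then MATCH parent and link it" pattern.
def create_and_link(db, parent_label, parent_name, child_label, child_name, rel_type):
    create_q = "CREATE (n:%s {name:{name}})" % child_label
    db.query(create_q, params={"name": child_name},
             returns=(client.Node, str, client.Node))
    link_q = ('MATCH (u:%s {name:{parent}}), (r:%s {name:{child}}) '
              'CREATE (u)-[:%s]->(r)') % (parent_label, child_label, rel_type)
    db.query(link_q, params={"parent": parent_name, "child": child_name},
             returns=(client.Node, str, client.Node))

# Equivalent to one iteration of the mortality loop above:
# create_and_link(db, "ENVIRONNEMENT_SANTE", "MORTALITE",
#                 "donneesMORTALITE", variablesMORTALITE[0], "rel_MORTALITE_Sante")
```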
from neo4jrestclient.client import GraphDatabase
db = GraphDatabase("http://localhost:7474", username="neo4j", password="EY-First-2017")
# Create some nodes with labels
user = db.labels.create("User")
u1 = db.nodes.create(name="Marco")
user.add(u1)
u2 = db.nodes.create(name="Daniela")
user.add(u2)
beer = db.labels.create("Beer")
b1 = db.nodes.create(name="Punk IPA")
b2 = db.nodes.create(name="Hoegaarden Rosee")
# You can associate a label with many nodes in one go
beer.add(b1, b2)
# User-likes->Beer relationships
u1.relationships.create("likes", b1)
u1.relationships.create("likes", b2)
u2.relationships.create("likes", b1)
# Bi-directional relationship?
u1.relationships.create("friends", u2)
from neo4jrestclient import client
q = 'MATCH (u:User)-[r:likes]->(m:Beer) WHERE u.name="Marco" RETURN u, type(r), m'
# "db" as defined above
results = db.query(q, returns=(client.Node, str, client.Node))
for r in results:
print("(%s)-[%s]->(%s)" % (r[0]["name"], r[1], r[2]["name"]))
# The output:
# (Marco)-[likes]->(Punk IPA)
# (Marco)-[likes]->(Hoegaarden Rosee)
q = 'MATCH (u:User {name:"Marco"}), (r:Beer {name:"Punk IPA"}) CREATE (u)-[:HAS_ROLE]->(r)'
results = db.query(q, returns=(client.Node, str, client.Node))
riqueAutoData = pd.read_csv("risque sante.csv",encoding="latin1",sep=";")
# Locations
variablesPopulation = []
famillePopulation = []
nbreElements = 9
for i in range(1,nbreElements+1):
variablesPopulation.append(riqueAutoData.iloc[i+87, 2])
famillePopulation.append(riqueAutoData.iloc[i+20, 5])
variablesPopulation
riqueAutoData.columns
riqueAutoData.loc[228:][riqueAutoData.columns[2]]
risqueSanteData = pd.read_csv("risque sante.csv",encoding="latin1",sep=";")
risqueSanteData
###Output
_____no_output_____ |
notebooks/exploratory-questions/q1-pidgin-english-vs-english.ipynb | ###Markdown
Q1: What proportion of tweets are actually in Pidgin English?**`Goal:`** Determine how important it is to account for Pidgin English in the dataset 1. Import Packages
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
2. Import annotated dataset
###Code
lang_labelled = pd.read_csv('../../data/interim/lang_sample_labelled.csv')
lang_labelled.head()
print(f"There are {len(lang_labelled)} tweets in the dataset")
###Output
There are 78 tweets in the dataset
###Markdown
3. Compute proportion of tweets that are in Pidgin English
###Code
lang_labelled.language.value_counts()
lang_labelled.language.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Only 15% (12 tweets) of the 78 labelled tweets were in Pidgin English. Based on my labelling experience, most of these tweets were also in light Pidgin English (i.e. still featured a major portion of the sentence in grammatically correct plain English). This is explored below: 4. Exploring tweets containing Pidgin English
###Code
for idx, pdg_tweet in enumerate(lang_labelled.query(" language == 'pdg' ")['text']):
#Remove new line character
pdg_tweet = pdg_tweet.replace('\n',"")
#Print tweet
print(str(idx+1)+')', pdg_tweet, '\n')
###Output
1) Let me just transfer money for my next subscription to my Spectranet purse before story will enter...
2) @fimiletoks @mickey2ya @graffiti06 Tizeti is not scam o!They are the most gigantic scam. Dey show me fefe.
3) @Spectranet_NG what's up with your speeds na?
4) @eronmose1e @moyesparkle @whittyyumees @Spectranet_NG My brother all na scam but you see that spectranet ehn na sinzu them be, they Dey scam die! Internet speed self has been horrible 🤦🏽♂️
5) @bols_bols1 @Spectranet_NG You are special na
6) @Tukooldegreat Baba spectranet na scam, the 100gb finishes in 1 week, not as if I use the data to watch porn 😔
7) @aboyowa_e @Spectranet_NG Lmaoo! Na so, turn up!!
8) @Spectranet_NG , see no make me swear for you! Fix your wacky internet connection around Yaba!
9) MTNN @MTNNG and spectranet if you guys are not going to dash us data atleast come correct on your services.We can't be wasting money in these glorious times.
10) @rakspd You no see as I dey complain of @Spectranet_NG since
11) @amarachimex @Spectranet_NG You mind dem?? No network all evening. This is unacceptable!! @NgComCommission do something.
12) @lawaleto @Spectranet_NG I dey bro, you get fast internet ?
|
examples/train.ipynb | ###Markdown
Install QuickCNN in Google Colab
###Code
!pip install quickcnn
###Output
_____no_output_____
###Markdown
Upload dataset in Google DriveData is uploaded in split (train/validation) format, so we need to pass **train_dir_name** and **val_dir_name**. Now, let's train the model* Here, **preserve_imagenet_classes** is True to predict ImageNet classes alongside the new classes of our dataset.* We want to use TensorBoard, so **use_tensorboard** has to be True, but we do not want to write histograms, so **histogram_freq** is 0.* Batch size is **32**. We can adjust it as per GPU utilization.* You can check the other arguments in **README.md**.
###Code
from quickcnn import retrain
convnet = retrain.Retrain(train_dir_name = 'Food image data/train_data',
val_dir_name = 'Food image data/val_data',
preserve_imagenet_classes=True,
epoch=20, use_tensorboard=True, histogram_freq=0, batch_size=32)
###Output
_____no_output_____
###Markdown
Predict
###Code
# test_data folder having mixed class images OR test.jpg
convnet.predict('Food image data/val_data/burger')
print(convnet.results)
###Output
_____no_output_____
###Markdown
Setup
###Code
import os
from google.colab import drive as gdrive
# @markdown Setup output directory for the models
OUTPUT_DIR = 'Colab/varname/' # @param {type:'string'}
SAVE_ON_GDRIVE = False # @param {type:'boolean'}
if SAVE_ON_GDRIVE:
GDRIVE_ROOT = os.path.abspath('gdrive')
GDRIVE_OUT = os.path.join(GDRIVE_ROOT, 'My Drive', OUTPUT_DIR)
print('[INFO] Mounting Google Drive in {}'.format(GDRIVE_ROOT))
gdrive.mount(GDRIVE_ROOT, force_remount = True)
OUT_PATH = GDRIVE_OUT
else:
OUT_PATH = os.path.abspath(OUTPUT_DIR)
os.makedirs(OUT_PATH, exist_ok = True)
# @markdown Machine setup
# Install java 11
!sudo DEBIAN_FRONTEND=noninteractive apt-get install -qq git openjdk-11-jdk > /dev/null
# Install python 3.7 and pip
!sudo DEBIAN_FRONTEND=noninteractive apt-get install -qq python3.7 python3.7-dev python3.7-venv python3-pip > /dev/null
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1 > /dev/null
!python3 -m pip install -q --upgrade pip > /dev/null
# Install pipenv (i.e. a better python package manager).
!pip3 install pipenv -qq > /dev/null
%env PIPENV_QUIET 1
%env PIPENV_VENV_IN_PROJECT 1
%env PIPENV_SKIP_LOCK 1
from IPython.display import clear_output
clear_output()
# @markdown Download code
# Clone the project and cd into it
!git clone --branch master https://github.com/simonepri/varname-seq2seq code
%cd -q code
# Install dependencies
!pipenv install > /dev/null
# @markdown Download the dataset
DATASET = "java-corpora-dataset-obfuscated.tgz" # @param ["java-corpora-dataset-obfuscated.tgz", "java-corpora-dataset.tgz"]
!pipenv run bin src/bin/download_data.py \
--file-name "$DATASET" \
--data-path "data/dataset"
###Output
_____no_output_____
###Markdown
Model training
###Code
# @markdown Model configs
BATCH_SIZE = 256 # @param {type:'number'}
RNN_CELL = "lstm" # @param ['lstm', 'gru']
RNN_BIDIRECTIONAL = False # @param {type:'boolean'}
RNN_NUM_LAYERS = 1  # @param {type:'number'}
RNN_HIDDEN_SIZE = 256 # @param {type:'number'}
RNN_EMBEDDING_SIZE = 256 # @param {type:'number'}
RNN_TF_RATIO = "auto" # @param {type:'raw'}
INPUT_SEQ_MAX_LEN = 256 # @param {type:'number'}
OUTPUT_SEQ_MAX_LEN = 32 # @param {type:'number'}
# @markdown Run training
RUN_TRAIN = True # @param {type:'boolean'}
TRAIN_RUN_ID = "lstm-256-256-dtf-obf" # @param {type:'string'}
TRAIN_EPOCHS = 35 # @param {type:'number'}
if RUN_TRAIN:
!pipenv run bin src/bin/run_seq2seq.py \
--do-train \
--run-id "$TRAIN_RUN_ID" \
--epochs "$TRAIN_EPOCHS" \
--batch-size "$BATCH_SIZE" \
--rnn-cell "$RNN_CELL" \
--rnn-num-layers "$RNN_NUM_LAYERS" \
--rnn-hidden-size "$RNN_HIDDEN_SIZE" \
--rnn-embedding-size "$RNN_EMBEDDING_SIZE" \
--rnn-tf-ratio "$RNN_TF_RATIO" \
--rnn-bidirectional "$RNN_BIDIRECTIONAL" \
--input-seq-max-length "$INPUT_SEQ_MAX_LEN" \
--output-seq-max-length "$OUTPUT_SEQ_MAX_LEN" \
--output-path "$OUT_PATH"/models \
--cache-path "$OUT_PATH"/cache \
--train-file data/dataset/train.mk.tsv \
--valid-file data/dataset/dev.mk.tsv
###Output
_____no_output_____
###Markdown
Model testing
###Code
# @markdown Print available models
!ls -Ral "$OUT_PATH"/models
# @markdown Run tests
RUN_TEST = True # @param {type:'boolean'}
TEST_RUN_ID = "lstm-256-256-dtf-obf" # @param {type:'string'}
if RUN_TEST:
!pipenv run bin src/bin/run_seq2seq.py \
--do-test \
--run-id "$TEST_RUN_ID" \
--batch-size "$BATCH_SIZE" \
--output-path "$OUT_PATH"/models \
--cache-path "$OUT_PATH"/cache \
--test-file data/dataset/test.mk.tsv
!pipenv run bin src/bin/run_seq2seq.py \
--do-test \
--run-id "$TEST_RUN_ID" \
--batch-size "$BATCH_SIZE" \
--output-path "$OUT_PATH"/models \
--cache-path "$OUT_PATH"/cache \
--test-file data/dataset/unseen.all.mk.tsv
###Output
_____no_output_____
###Markdown
1. GHZ Load Data
###Code
GHZ_traindata = QCIRCDataSetNumpy('GHZ_test_train.npy')
GHZ_testdata = QCIRCDataSetNumpy('GHZ_test_test.npy')
print('Total # of samples in train set: {}, test set:{}'.format(len(GHZ_traindata), len(GHZ_testdata)))
GHZ_trainloader = DataLoader(GHZ_traindata, batch_size=32, shuffle=True, pin_memory=True)
GHZ_testloader = DataLoader(GHZ_testdata, batch_size=32, shuffle=True, pin_memory=True)
###Output
Total # of samples in train set: 32000, test set:8000
###Markdown
initiate model
###Code
inputs, targets = GHZ_testdata[0]['input'], GHZ_testdata[0]['target']
inputs_dim = inputs.shape[0]
targets_dim = targets.shape[0]
ghz_net = DenseModel(inputs_dim=inputs_dim, targets_dim=targets_dim)
print(ghz_net)
###Output
DenseModel(
(fc1): Linear(in_features=256, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=512, bias=True)
(fc3): Linear(in_features=512, out_features=512, bias=True)
(fc4): Linear(in_features=512, out_features=256, bias=True)
(softmax): Softmax(dim=1)
)
###Markdown
Train
###Code
mse = torch.nn.MSELoss(reduction='sum')
ghz_net = train(ghz_net, GHZ_trainloader, mse, lr=5e-4, num_epochs=10)
###Output
Epoch=1, Batch= 500, Loss= 3.005
Epoch=1, Batch= 1000, Loss= 0.022
Epoch=2, Batch= 500, Loss= 0.013
Epoch=2, Batch= 1000, Loss= 0.008
Epoch=3, Batch= 500, Loss= 0.007
Epoch=3, Batch= 1000, Loss= 0.009
Epoch=4, Batch= 500, Loss= 0.013
Epoch=4, Batch= 1000, Loss= 0.004
Epoch=5, Batch= 500, Loss= 0.007
Epoch=5, Batch= 1000, Loss= 0.007
Epoch=6, Batch= 500, Loss= 0.007
Epoch=6, Batch= 1000, Loss= 0.005
Epoch=7, Batch= 500, Loss= 0.008
Epoch=7, Batch= 1000, Loss= 0.006
Epoch=8, Batch= 500, Loss= 0.006
Epoch=8, Batch= 1000, Loss= 0.007
Epoch=9, Batch= 500, Loss= 0.006
Epoch=9, Batch= 1000, Loss= 0.004
Epoch=10, Batch= 500, Loss= 0.007
Epoch=10, Batch= 1000, Loss= 0.005
###Markdown
Test
###Code
_ = test(ghz_net, GHZ_testloader, mse)
idx = np.random.randint(0, len(GHZ_testdata)-1)
print('sample=%d'%idx)
inputs, targets = GHZ_testdata[idx]['input'], GHZ_testdata[idx]['target']
with torch.no_grad():
net = ghz_net.to('cpu')
inputs = torch.unsqueeze(inputs,0)
outputs = net(inputs)
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.plot(np.squeeze(outputs.numpy()), label='Output', marker='o')
ax.plot(np.squeeze(targets.numpy()), label='Target', marker='x')
ax.plot(np.squeeze(inputs.numpy()), label='Input', marker='o')
ax.set_title('DenseModel: trained on GHZ, testing on GHZ')
ax.legend()
###Output
sample=3824
###Markdown
1. UCCSD Load Data
###Code
UCCSD_traindata = QCIRCDataSetNumpy('UCCSD_test_train.npy')
UCCSD_testdata = QCIRCDataSetNumpy('UCCSD_test_test.npy')
print('Total # of samples in train set: {}, test set:{}'.format(len(UCCSD_traindata), len(UCCSD_testdata)))
UCCSD_trainloader = DataLoader(UCCSD_traindata, batch_size=32, shuffle=True, pin_memory=True)
UCCSD_testloader = DataLoader(UCCSD_testdata, batch_size=32, shuffle=True, pin_memory=True)
###Output
Total # of samples in train set: 32000, test set:8000
###Markdown
initiate model
###Code
inputs, targets = UCCSD_testdata[0]['input'], UCCSD_testdata[0]['target']
inputs_dim = inputs.shape[0]
targets_dim = targets.shape[0]
uccsd_net = DenseModel(inputs_dim=inputs_dim, targets_dim=targets_dim)
print(uccsd_net)
###Output
DenseModel(
(fc1): Linear(in_features=256, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=512, bias=True)
(fc3): Linear(in_features=512, out_features=512, bias=True)
(fc4): Linear(in_features=512, out_features=256, bias=True)
(softmax): Softmax(dim=1)
)
###Markdown
Train
###Code
mse = torch.nn.MSELoss(reduction='sum')
uccsd_net = train(uccsd_net, UCCSD_trainloader, mse, lr=5e-4, num_epochs=10)
###Output
Epoch=1, Batch= 500, Loss= 2.022
Epoch=1, Batch= 1000, Loss= 0.841
Epoch=2, Batch= 500, Loss= 0.371
Epoch=2, Batch= 1000, Loss= 0.294
Epoch=3, Batch= 500, Loss= 0.251
Epoch=3, Batch= 1000, Loss= 0.203
Epoch=4, Batch= 500, Loss= 0.194
Epoch=4, Batch= 1000, Loss= 0.168
Epoch=5, Batch= 500, Loss= 0.157
Epoch=5, Batch= 1000, Loss= 0.147
Epoch=6, Batch= 500, Loss= 0.137
Epoch=6, Batch= 1000, Loss= 0.124
Epoch=7, Batch= 500, Loss= 0.120
Epoch=7, Batch= 1000, Loss= 0.111
Epoch=8, Batch= 500, Loss= 0.110
Epoch=8, Batch= 1000, Loss= 0.099
Epoch=9, Batch= 500, Loss= 0.091
Epoch=9, Batch= 1000, Loss= 0.089
Epoch=10, Batch= 500, Loss= 0.083
Epoch=10, Batch= 1000, Loss= 0.081
###Markdown
Test
###Code
_ = test(uccsd_net, UCCSD_testloader, mse)
idx = np.random.randint(0, len(UCCSD_testdata)-1)
print('sample=%d'%idx)
inputs, targets = UCCSD_testdata[idx]['input'], UCCSD_testdata[idx]['target']
with torch.no_grad():
net = uccsd_net.to('cpu')
inputs = torch.unsqueeze(inputs,0)
outputs = net(inputs)
fig,ax = plt.subplots(1,1, figsize=(8,6))
ax.plot(np.squeeze(outputs.numpy()), label='Output', marker='o')
ax.plot(np.squeeze(targets.numpy()), label='Target', marker='x')
ax.plot(np.squeeze(inputs.numpy()), label='Input', marker='o')
ax.set_title('DenseModel: trained on UCCSD, testing on UCCSD')
ax.legend()
###Output
sample=3915
###Markdown
Swapping Models Using trained GHZ model on UCCSD data
###Code
_ = test(ghz_net, UCCSD_testloader, mse)
_ = test(uccsd_net, GHZ_testloader, mse)
idx = np.random.randint(0, len(UCCSD_testdata)-1)
print('sample=%d'%idx)
inputs, targets = UCCSD_testdata[idx]['input'], UCCSD_testdata[idx]['target']
with torch.no_grad():
net = ghz_net.to('cpu')
inputs = torch.unsqueeze(inputs,0)
outputs = net(inputs)
fig,ax = plt.subplots(1,1, figsize=(8,6))
ax.plot(np.squeeze(outputs.numpy()), label='Output', marker='o')
ax.plot(np.squeeze(targets.numpy()), label='Target', marker='x')
ax.plot(np.squeeze(inputs.numpy()), label='Input', marker='o')
ax.set_title('DenseModel: trained on GHZ, testing on UCCSD')
ax.legend()
idx = np.random.randint(0, len(UCCSD_testdata)-1)
print('sample=%d'%idx)
inputs, targets = GHZ_testdata[idx]['input'], GHZ_testdata[idx]['target']
with torch.no_grad():
net = uccsd_net.to('cpu')
inputs = torch.unsqueeze(inputs,0)
outputs = net(inputs)
fig,ax = plt.subplots(1,1, figsize=(8,6))
ax.plot(np.squeeze(outputs.numpy()), label='Output', marker='o')
ax.plot(np.squeeze(targets.numpy()), label='Target', marker='x')
ax.plot(np.squeeze(inputs.numpy()), label='Input', marker='o')
ax.set_title('DenseModel: trained on UCCSD, testing on GHZ')
ax.legend()
###Output
sample=2956
|
Collections2.ipynb | ###Markdown
Python Bootcamp for Machine Learning, Level I Revision Date: 02-13-2022 Collections 2 When we deal with data we have to deal with collections, not just individuals. Collections are either ordered or unordered. If they are ordered, an individual's position in the sequence is marked by its index. Collections are also mutable or immutable. - **Lists**: ordered collection, mutable- **Tuple**: ordered collection, immutable- **Sets**: unordered collection of unique elements, mutable- **Dictionary**: mapping from keys to values, mutable; accessed by key rather than by position (insertion order is preserved in Python 3.7+). Lists
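A quick illustration of the mutable/immutable distinction (expected output shown in comments):

```python
# Lists are mutable: elements can be replaced in place.
numbers = [1, 2, 3]
numbers[0] = 99
print(numbers)        # [99, 2, 3]

# Tuples are immutable: the same assignment raises a TypeError.
point = (1, 2, 3)
try:
    point[0] = 99
except TypeError as err:
    print(err)        # 'tuple' object does not support item assignment
```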
###Code
# create a list
authors = ['Stefan Zweig','William Shakespeare','Friedrich Schiller',\
'Leila Slimani','Kazuo Ishiguro','Marcel Proust',\
'Ernest Hemingway','Miguel Cervantes']
# create an empty list
lonely = []
type(authors)
# add an element to a list. list elements don't have to be of same type.
authors.append(5)
authors
len(authors)
# pop() removes and returns the last item of the list
authors.pop()
authors
who = authors.pop()
who
authors
authors.sort()
authors
authors[2]
authors.index('Leila Slimani')
###Output
_____no_output_____
###Markdown
lists can contain lists
###Code
library = [
['Stefan Zweig', 'The World of Yesterday'],
['William Shakespeare','Hamlet','Othello'],
['Friedrich Schiller','Wallenstein','On the Aesthetic Education of Man'],
['Leila Slimani','Chanson Douce'],
['Kazuo Ishiguro','Unconsoled'],
['Virginia Woolf', 'To the Lighthouse']]
# index 1 returns the second element of the list (indexing starts at 0), which is itself a list
library[1]
library[1][1]
library[0][0]
library[0:2]
###Output
_____no_output_____
###Markdown
Tuples
###Code
suits = ('Hearts', 'Diamonds', 'Spades', 'Clubs')
ranks = ('2','3','4','5','6','7','8','9','10','Jack','Queen','King','Ace')
###Output
_____no_output_____
###Markdown
Tuples are immutable. Tuples support only two methods: count and index. Count gives the number of occurrences of a given object. Index gives the index of the object's first appearance.
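For example, using the `ranks` tuple defined above:

```python
print(ranks.count('Jack'))   # 1  -- number of occurrences
print(ranks.index('Jack'))   # 9  -- position of the first occurrence
# Attempting to modify a tuple raises a TypeError because tuples are immutable.
try:
    ranks[0] = 'One'
except TypeError as err:
    print(err)
```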
###Code
new = list(suits)
new
ranks.index('Jack')
alpha = {3,5,7,3,3}
alpha
###Output
_____no_output_____
###Markdown
Sets
###Code
even = {2,4,6,8,10}
odd = {1,3,5,7,9}
prime = {2,3,5,7}
composite = {4,6,8,9,10}
# union with or operator
even | odd
even.union(odd)
even.intersection(odd)
# intersection with and operator
even & odd
even & composite
prime & even
###Output
_____no_output_____
###Markdown
DictionariesDictionaries are collections of key, value pairs
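A tiny self-contained example of the key-to-value idea before building a bigger one:

```python
capitals = {'France': 'Paris', 'Japan': 'Tokyo'}
print(capitals['Japan'])                      # Tokyo
print(capitals.get('Spain', 'not recorded'))  # .get() returns a default instead of raising KeyError
for country, city in capitals.items():        # iterate over key, value pairs
    print(country, '->', city)
```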
###Code
lib = {'Stefan Zweig': ['The World of Yesterday'],
'William Shakespeare': ['Hamlet','Othello'],
'Friedrich Schiller': ['Wallenstein','On the Aesthetic Education of Man'],
'Leila Slimani' : ['Chanson Douce'],
'Kazuo Ishiguro': ['Unconsoled'],
'Virginia Woolf': ['To the Lighthouse']}
lib
lib['Kazuo Ishiguro']
lib
lib['William Shakespeare'].append('Richard III')
lib['William Shakespeare']
lib
lib
lib['William Shakespeare'].remove("Othello")
lib
lib.keys()
lib.values()
my_authors =list(lib.keys())
my_authors
###Output
_____no_output_____ |
Aspect Detection Mounts Reviews.ipynb | ###Markdown
Aspect Detection: Mounts Reviews This is a Natural Language Processing based solution which can detect up to 8 aspects from online product reviews for mounts.This sample notebook shows you how to deploy Aspect Detection: Mounts Reviews using Amazon SageMaker.> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook. Pre-requisites:1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.1. Ensure that IAM role used has **AmazonSageMakerFullAccess**1. To deploy this ML model successfully, ensure that: 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: 1. **aws-marketplace:ViewSubscriptions** 1. **aws-marketplace:Unsubscribe** 1. **aws-marketplace:Subscribe** 2. or your AWS account has a subscription to Aspect Detection: Mounts Reviews. If so, skip step: [Subscribe to the model package](1.-Subscribe-to-the-model-package) Contents:1. [Subscribe to the model package](1.-Subscribe-to-the-model-package)2. [Create an endpoint and perform real-time inference](2.-Create-an-endpoint-and-perform-real-time-inference) 1. [Create an endpoint](A.-Create-an-endpoint) 2. [Create input payload](B.-Create-input-payload) 3. [Perform real-time inference](C.-Perform-real-time-inference) 4. [Visualize output](D.-Visualize-output) 5. [Delete the endpoint](E.-Delete-the-endpoint)3. [Perform batch inference](3.-Perform-batch-inference) 4. [Clean-up](4.-Clean-up) 1. [Delete the model](A.-Delete-the-model) 2. [Unsubscribe to the listing (optional)](B.-Unsubscribe-to-the-listing-(optional)) Usage instructionsYou can run this notebook one cell at a time (By using Shift+Enter for running a cell). 1. Subscribe to the model package To subscribe to the model package:1. Open the model package listing page Aspect Detection: Mounts Reviews1. On the AWS Marketplace listing, click on the **Continue to subscribe** button.1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agrees with EULA, pricing, and support terms. 1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell.
###Code
model_package_arn='arn:aws:sagemaker:us-east-2:786796469737:model-package/mounts-aspect-extraction'
import base64
import json
import uuid
from sagemaker import ModelPackage
import sagemaker as sage
from sagemaker import get_execution_role
from sagemaker import ModelPackage
from urllib.parse import urlparse
import boto3
from IPython.display import Image
from PIL import Image as ImageEdit
import urllib.request
import numpy as np
role = get_execution_role()
sagemaker_session = sage.Session()
bucket=sagemaker_session.default_bucket()
bucket
###Output
_____no_output_____
###Markdown
2. Create an endpoint and perform real-time inference If you want to understand how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).
###Code
model_name='mounts-aspect'
content_type='text/plain'
real_time_inference_instance_type='ml.m5.large'
batch_transform_inference_instance_type='ml.m5.large'
###Output
_____no_output_____
###Markdown
A. Create an endpoint
###Code
def predict_wrapper(endpoint, session):
return sage.predictor.Predictor(endpoint, session,content_type)
#create a deployable model from the model package.
model = ModelPackage(role=role,
model_package_arn=model_package_arn,
sagemaker_session=sagemaker_session,
predictor_cls=predict_wrapper)
#Deploy the model
predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name)
###Output
----!
###Markdown
Once endpoint has been created, you would be able to perform real-time inference. B. Create input payload
###Code
file_name = 'sample.txt'
###Output
_____no_output_____
###Markdown
C. Perform real-time inference
###Code
!aws sagemaker-runtime invoke-endpoint \
--endpoint-name $model_name \
--body fileb://$file_name \
--content-type $content_type \
--region $sagemaker_session.boto_region_name \
output.txt
###Output
{
"ContentType": "application/json",
"InvokedProductionVariant": "AllTraffic"
}
###Markdown
D. Visualize output
###Code
import json
with open('output.txt', 'r') as f:
output = json.load(f)
print(json.dumps(output, indent = 1))
###Output
{
"review": "This is cheap at this price. This is sturdy enough to hold my TV. It can carry load of upto 100kgs.",
"topics": [
{
"aspect": {
"Build Quality": 0.1636577891148708,
"Price": 0.7536088987684499
},
"sentence": "This is cheap at this price."
},
{
"aspect": {
"Build Quality": 0.27150485571407307,
"Load carrying": 0.3170931715378502,
"Size and Dimensions": 0.2888840991431125
},
"sentence": "This is sturdy enough to hold my TV."
},
{
"aspect": {
"Load carrying": 0.907263216039158
},
"sentence": "It can carry load of upto 100kgs."
}
]
}
###Markdown
E. Delete the endpoint Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged.
###Code
predictor=sage.predictor.Predictor(model_name, sagemaker_session,content_type)
predictor.delete_endpoint(delete_endpoint_config=True)
###Output
_____no_output_____
###Markdown
3. Perform batch inference In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see these links:1. [How it works](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-batch-transform.html)2. [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html)
###Code
#upload the batch-transform job input files to S3
transform_input_folder = "input"
transform_input = sagemaker_session.upload_data(transform_input_folder, key_prefix=model_name)
print("Transform input uploaded to " + transform_input)
#Run the batch-transform job
transformer = model.transformer(1, batch_transform_inference_instance_type)
transformer.transform(transform_input, content_type=content_type)
transformer.wait()
import os
s3_conn = boto3.client("s3")
with open('output2.txt', 'wb') as f:
s3_conn.download_fileobj(bucket, os.path.basename(transformer.output_path)+'/sample.txt.out', f)
print("Output file loaded from bucket")
with open('output2.txt', 'r') as f:
output = json.load(f)
print(json.dumps(output, indent = 1))
###Output
{
"review": "This is cheap at this price. This is sturdy enough to hold my TV. It can carry load of upto 100kgs.",
"topics": [
{
"aspect": {
"Build Quality": 0.1636577891148708,
"Price": 0.7536088987684499
},
"sentence": "This is cheap at this price."
},
{
"aspect": {
"Build Quality": 0.27150485571407307,
"Load carrying": 0.3170931715378502,
"Size and Dimensions": 0.2888840991431125
},
"sentence": "This is sturdy enough to hold my TV."
},
{
"aspect": {
"Load carrying": 0.907263216039158
},
"sentence": "It can carry load of upto 100kgs."
}
]
}
###Markdown
4. Clean-up A. Delete the model
###Code
model.delete_model()
###Output
_____no_output_____ |
Deep Learning/CS3 Improving Deep Neural Networks Hyperparameter Tuning, Regularization and Optimization/3 Hyperparameter Tuning, Batch Normalization and Programming Frameworks/Tensorflow_introduction.ipynb | ###Markdown
Introduction to TensorFlowWelcome to this week's programming assignment! Up until now, you've always used Numpy to build neural networks, but this week you'll explore a deep learning framework that allows you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. TensorFlow 2.3 has made significant improvements over its predecessor, some of which you'll encounter and implement here!By the end of this assignment, you'll be able to do the following in TensorFlow 2.3:* Use `tf.Variable` to modify the state of a variable* Explain the difference between a variable and a constant* Train a Neural Network on a TensorFlow datasetProgramming frameworks like TensorFlow not only cut down on time spent coding, but can also perform optimizations that speed up the code itself. Table of Contents- [1- Packages](1) - [1.1 - Checking TensorFlow Version](1-1)- [2 - Basic Optimization with GradientTape](2) - [2.1 - Linear Function](2-1) - [Exercise 1 - linear_function](ex-1) - [2.2 - Computing the Sigmoid](2-2) - [Exercise 2 - sigmoid](ex-2) - [2.3 - Using One Hot Encodings](2-3) - [Exercise 3 - one_hot_matrix](ex-3) - [2.4 - Initialize the Parameters](2-4) - [Exercise 4 - initialize_parameters](ex-4)- [3 - Building Your First Neural Network in TensorFlow](3) - [3.1 - Implement Forward Propagation](3-1) - [Exercise 5 - forward_propagation](ex-5) - [3.2 Compute the Cost](3-2) - [Exercise 6 - compute_cost](ex-6) - [3.3 - Train the Model](3-3)- [4 - Bibliography](4) 1 - Packages
###Code
import h5py
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework.ops import EagerTensor
from tensorflow.python.ops.resource_variable_ops import ResourceVariable
import time
###Output
_____no_output_____
###Markdown
1.1 - Checking TensorFlow Version You will be using v2.3 for this assignment, for maximum speed and efficiency.
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
2 - Basic Optimization with GradientTapeThe beauty of TensorFlow 2 is in its simplicity. Basically, all you need to do is implement forward propagation through a computational graph. TensorFlow will compute the derivatives for you, by moving backwards through the graph recorded with `GradientTape`. All that's left for you to do then is specify the cost function and optimizer you want to use! When writing a TensorFlow program, the main object to get used and transformed is the `tf.Tensor`. These tensors are the TensorFlow equivalent of Numpy arrays, i.e. multidimensional arrays of a given data type that also contain information about the computational graph.Below, you'll use `tf.Variable` to store the state of your variables. Variables can only be created once as its initial value defines the variable shape and type. Additionally, the `dtype` arg in `tf.Variable` can be set to allow data to be converted to that type. But if none is specified, either the datatype will be kept if the initial value is a Tensor, or `convert_to_tensor` will decide. It's generally best for you to specify directly, so nothing breaks! Here you'll call the TensorFlow dataset created on a HDF5 file, which you can use in place of a Numpy array to store your datasets. You can think of this as a TensorFlow data generator! You will use the Hand sign data set, that is composed of images with shape 64x64x3.
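As a tiny illustration of how `GradientTape` records a computation and returns gradients (just an aside, not part of the graded exercises):

```python
# Record y = x**2 + 2x on the tape, then ask for dy/dx at x = 3.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2 * x
dy_dx = tape.gradient(y, x)
print(dy_dx)   # tf.Tensor(8.0, shape=(), dtype=float32)
```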
###Code
train_dataset = h5py.File('datasets/train_signs.h5', "r")
test_dataset = h5py.File('datasets/test_signs.h5', "r")
x_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_x'])
y_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_y'])
x_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_x'])
y_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_y'])
type(x_train)
###Output
_____no_output_____
###Markdown
Since TensorFlow Datasets are generators, you can't directly access their contents unless you iterate over them in a for loop, or explicitly create a Python iterator using `iter` and consume its elements using `next`. You can also inspect the `shape` and `dtype` of each element using the `element_spec` attribute.
###Code
print(x_train.element_spec)
print(next(iter(x_train)))
###Output
tf.Tensor(
[[[227 220 214]
[227 221 215]
[227 222 215]
...
[232 230 224]
[231 229 222]
[230 229 221]]
[[227 221 214]
[227 221 215]
[228 221 215]
...
[232 230 224]
[231 229 222]
[231 229 221]]
[[227 221 214]
[227 221 214]
[227 221 215]
...
[232 230 224]
[231 229 223]
[230 229 221]]
...
[[119 81 51]
[124 85 55]
[127 87 58]
...
[210 211 211]
[211 212 210]
[210 211 210]]
[[119 79 51]
[124 84 55]
[126 85 56]
...
[210 211 210]
[210 211 210]
[209 210 209]]
[[119 81 51]
[123 83 55]
[122 82 54]
...
[209 210 210]
[209 210 209]
[208 209 209]]], shape=(64, 64, 3), dtype=uint8)
###Markdown
The dataset that you'll be using during this assignment is a subset of the sign language digits. It contains six different classes representing the digits from 0 to 5.
###Code
unique_labels = set()
for element in y_train:
unique_labels.add(element.numpy())
print(unique_labels)
###Output
{0, 1, 2, 3, 4, 5}
###Markdown
You can see some of the images in the dataset by running the following cell.
###Code
images_iter = iter(x_train)
labels_iter = iter(y_train)
plt.figure(figsize=(10, 10))
for i in range(25):
ax = plt.subplot(5, 5, i + 1)
plt.imshow(next(images_iter).numpy().astype("uint8"))
plt.title(next(labels_iter).numpy().astype("uint8"))
plt.axis("off")
###Output
_____no_output_____
###Markdown
There's one more difference between TensorFlow datasets and Numpy arrays: if you need to transform one, you invoke the `map` method to apply the function passed as an argument to each of the elements.
###Code
def normalize(image):
"""
Transform an image into a tensor of shape (64 * 64 * 3, )
and normalize its components.
Arguments
image - Tensor.
Returns:
result -- Transformed tensor
"""
image = tf.cast(image, tf.float32) / 255.0
image = tf.reshape(image, [-1,])
return image
new_train = x_train.map(normalize)
new_test = x_test.map(normalize)
new_train.element_spec
print(next(iter(new_train)))
###Output
tf.Tensor([0.8901961 0.8627451 0.8392157 ... 0.8156863 0.81960785 0.81960785], shape=(12288,), dtype=float32)
###Markdown
2.1 - Linear FunctionLet's begin this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. Exercise 1 - linear_functionCompute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, this is how to define a constant X with the shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```Note that the difference between `tf.constant` and `tf.Variable` is that you can modify the state of a `tf.Variable` but cannot change the state of a `tf.constant`.You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly
###Code
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes X to be a random tensor of shape (3,1)
Initializes W to be a random tensor of shape (4,3)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- Y = WX + b
"""
np.random.seed(1)
"""
Note, to ensure that the "random" numbers generated match the expected results,
please create the variables in the order given in the starting code below.
(Do not re-arrange the order).
"""
# (approx. 4 lines)
# X = ...
# W = ...
# b = ...
# Y = ...
# YOUR CODE STARTS HERE
X = tf.constant(np.random.randn(3,1), name="X")
W = tf.constant(np.random.randn(4,3), name="W")
b = tf.constant(np.random.randn(4,1), name="b")
Y = tf.matmul(W,X) + b
# YOUR CODE ENDS HERE
return Y
result = linear_function()
print(result)
assert type(result) == EagerTensor, "Use the TensorFlow API"
assert np.allclose(result, [[-2.15657382], [ 2.95891446], [-1.08926781], [-0.84538042]]), "Error"
print("\033[92mAll test passed")
###Output
tf.Tensor(
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]], shape=(4, 1), dtype=float64)
[92mAll test passed
###Markdown
**Expected Output**: ```result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]``` 2.2 - Computing the Sigmoid Amazing! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise, compute the sigmoid of z by casting your tensor to type `float32` with `tf.cast` and then applying `tf.keras.activations.sigmoid`. Exercise 2 - sigmoidImplement the sigmoid function below. You should use the following: - `tf.cast("...", tf.float32)`- `tf.keras.activations.sigmoid("...")`
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
a -- (tf.float32) the sigmoid of z
"""
# tf.keras.activations.sigmoid requires float16, float32, float64, complex64, or complex128.
# (approx. 2 lines)
# z = ...
# a = ...
# YOUR CODE STARTS HERE
z = tf.cast(z,tf.float32)
a = tf.keras.activations.sigmoid(z)
# YOUR CODE ENDS HERE
return a
result = sigmoid(-1)
print ("type: " + str(type(result)))
print ("dtype: " + str(result.dtype))
print ("sigmoid(-1) = " + str(result))
print ("sigmoid(0) = " + str(sigmoid(0.0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
def sigmoid_test(target):
result = target(0)
assert(type(result) == EagerTensor)
assert (result.dtype == tf.float32)
assert sigmoid(0) == 0.5, "Error"
assert sigmoid(-1) == 0.26894143, "Error"
assert sigmoid(12) == 0.9999939, "Error"
print("\033[92mAll test passed")
sigmoid_test(sigmoid)
###Output
type: <class 'tensorflow.python.framework.ops.EagerTensor'>
dtype: <dtype: 'float32'>
sigmoid(-1) = tf.Tensor(0.26894143, shape=(), dtype=float32)
sigmoid(0) = tf.Tensor(0.5, shape=(), dtype=float32)
sigmoid(12) = tf.Tensor(0.9999939, shape=(), dtype=float32)
[92mAll test passed
###Markdown
**Expected Output**: type: `<class 'tensorflow.python.framework.ops.EagerTensor'>`; dtype: `float32`; sigmoid(-1) = 0.2689414; sigmoid(0) = 0.5; sigmoid(12) = 0.999994 2.3 - Using One Hot EncodingsMany times in deep learning you will have a $Y$ vector with numbers ranging from $0$ to $C-1$, where $C$ is the number of classes. If $C$ is, for example, 4, then each label $y$ (an integer from 0 to 3) is converted into a column vector of length 4 with a 1 in position $y$ and 0s everywhere else. This is called "one hot" encoding, because in the converted representation, exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In TensorFlow, you can use one line of code: - [tf.one_hot(labels, depth, axis=0)](https://www.tensorflow.org/api_docs/python/tf/one_hot)`axis=0` indicates the new axis is created at dimension 0 Exercise 3 - one_hot_matrixImplement the function below to take one label and the total number of classes $C$, and return the one hot encoding in a column-wise matrix. Use `tf.one_hot()` to do this, and `tf.reshape()` to reshape your one hot tensor! - `tf.reshape(tensor, shape)`
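To make the contrast with NumPy concrete, here is a rough sketch of the "few lines of code" alternative; it is an illustration only (the helper name is made up), and the graded function below uses `tf.one_hot` instead:

```python
# NumPy sketch of column-wise one-hot encoding (illustration; assumes integer labels 0..C-1)
import numpy as np

def one_hot_numpy(labels, C):
    Y = np.zeros((C, len(labels)))           # one column per example
    Y[labels, np.arange(len(labels))] = 1.0  # set the single "hot" entry in each column
    return Y

print(one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), 4))
```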
###Code
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(label, depth=6):
"""
Computes the one hot encoding for a single label
Arguments:
label -- (int) Categorical labels
depth -- (int) Number of different classes that label can take
Returns:
one_hot -- tf.Tensor A single-column matrix with the one hot encoding.
"""
# (approx. 1 line)
# one_hot = ...
# YOUR CODE STARTS HERE
one_hot = tf.reshape(tf.one_hot(label, depth, axis=0),[-1])
# YOUR CODE ENDS HERE
return one_hot
def one_hot_matrix_test(target):
label = tf.constant(1)
depth = 4
result = target(label, depth)
print("Test 1:",result)
assert result.shape[0] == depth, "Use the parameter depth"
assert np.allclose(result, [0., 1. ,0., 0.] ), "Wrong output. Use tf.one_hot"
label_2 = [2]
result = target(label_2, depth)
print("Test 2:", result)
assert result.shape[0] == depth, "Use the parameter depth"
assert np.allclose(result, [0., 0. ,1., 0.] ), "Wrong output. Use tf.reshape as instructed"
print("\033[92mAll test passed")
one_hot_matrix_test(one_hot_matrix)
###Output
Test 1: tf.Tensor([0. 1. 0. 0.], shape=(4,), dtype=float32)
Test 2: tf.Tensor([0. 0. 1. 0.], shape=(4,), dtype=float32)
[92mAll test passed
###Markdown
**Expected output**```Test 1: tf.Tensor([0. 1. 0. 0.], shape=(4,), dtype=float32)Test 2: tf.Tensor([0. 0. 1. 0.], shape=(4,), dtype=float32)```
###Code
new_y_test = y_test.map(one_hot_matrix)
new_y_train = y_train.map(one_hot_matrix)
print(next(iter(new_y_test)))
###Output
tf.Tensor([1. 0. 0. 0. 0. 0.], shape=(6,), dtype=float32)
###Markdown
2.4 - Initialize the Parameters Now you'll initialize a vector of numbers with the Glorot initializer. The function you'll be calling is `tf.keras.initializers.GlorotNormal`, which draws samples from a truncated normal distribution centered on 0, with `stddev = sqrt(2 / (fan_in + fan_out))`, where `fan_in` is the number of input units and `fan_out` is the number of output units, both in the weight tensor. To initialize with zeros or ones you could use `tf.zeros()` or `tf.ones()` instead. Exercise 4 - initialize_parametersImplement the function below to take in a shape and to return an array of numbers using the GlorotNormal initializer. - `tf.keras.initializers.GlorotNormal(seed=1)` - `tf.Variable(initializer(shape=())`
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with TensorFlow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
initializer = tf.keras.initializers.GlorotNormal(seed=1)
#(approx. 6 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# W3 = ...
# b3 = ...
# YOUR CODE STARTS HERE
W1 = tf.Variable(initializer(shape=(25,12288)))
b1 = tf.Variable(initializer(shape=(25,1)))
W2 = tf.Variable(initializer(shape=(12,25)))
b2 = tf.Variable(initializer(shape=(12,1)))
W3 = tf.Variable(initializer(shape=(6,12)))
b3 = tf.Variable(initializer(shape=(6,1)))
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
def initialize_parameters_test(target):
parameters = target()
values = {"W1": (25, 12288),
"b1": (25, 1),
"W2": (12, 25),
"b2": (12, 1),
"W3": (6, 12),
"b3": (6, 1)}
for key in parameters:
print(f"{key} shape: {tuple(parameters[key].shape)}")
assert type(parameters[key]) == ResourceVariable, "All parameter must be created using tf.Variable"
assert tuple(parameters[key].shape) == values[key], f"{key}: wrong shape"
assert np.abs(np.mean(parameters[key].numpy())) < 0.5, f"{key}: Use the GlorotNormal initializer"
assert np.std(parameters[key].numpy()) > 0 and np.std(parameters[key].numpy()) < 1, f"{key}: Use the GlorotNormal initializer"
print("\033[92mAll test passed")
initialize_parameters_test(initialize_parameters)
###Output
W1 shape: (25, 12288)
b1 shape: (25, 1)
W2 shape: (12, 25)
b2 shape: (12, 1)
W3 shape: (6, 12)
b3 shape: (6, 1)
[92mAll test passed
###Markdown
**Expected output**```W1 shape: (25, 12288)b1 shape: (25, 1)W2 shape: (12, 25)b2 shape: (12, 1)W3 shape: (6, 12)b3 shape: (6, 1)```
###Code
parameters = initialize_parameters()
###Output
_____no_output_____
###Markdown
3 - Building Your First Neural Network in TensorFlowIn this part of the assignment you will build a neural network using TensorFlow. Remember that there are two parts to implementing a TensorFlow model:- Implement forward propagation- Retrieve the gradients and train the modelLet's get into it! 3.1 - Implement Forward Propagation One of TensorFlow's great strengths lies in the fact that you only need to implement the forward propagation function and it will keep track of the operations you did to calculate the back propagation automatically. Exercise 5 - forward_propagationImplement the `forward_propagation` function.**Note** Use only the TF API. - tf.math.add- tf.linalg.matmul- tf.keras.activations.relu
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
#(approx. 5 lines) # Numpy Equivalents:
# Z1 = ... # Z1 = np.dot(W1, X) + b1
# A1 = ... # A1 = relu(Z1)
# Z2 = ... # Z2 = np.dot(W2, A1) + b2
# A2 = ... # A2 = relu(Z2)
# Z3 = ... # Z3 = np.dot(W3, A2) + b3
# YOUR CODE STARTS HERE
Z1 = tf.add(tf.linalg.matmul(W1,X), b1)
A1 = tf.keras.activations.relu(Z1)
Z2 = tf.add(tf.linalg.matmul(W2,A1), b2)
A2 = tf.keras.activations.relu(Z2)
Z3 = tf.add(tf.matmul(W3,A2), b3)
# YOUR CODE ENDS HERE
return Z3
def forward_propagation_test(target, examples):
minibatches = examples.batch(2)
for minibatch in minibatches:
forward_pass = target(tf.transpose(minibatch), parameters)
print(forward_pass)
assert type(forward_pass) == EagerTensor, "Your output is not a tensor"
assert forward_pass.shape == (6, 2), "Last layer must use W3 and b3"
assert np.allclose(forward_pass,
[[-0.13430887, 0.14086473],
[ 0.21588647, -0.02582335],
[ 0.7059658, 0.6484556 ],
[-1.1260961, -0.9329492 ],
[-0.20181894, -0.3382722 ],
[ 0.9558965, 0.94167566]]), "Output does not match"
break
print("\033[92mAll test passed")
forward_propagation_test(forward_propagation, new_train)
###Output
tf.Tensor(
[[-0.13430887 0.14086473]
[ 0.21588647 -0.02582335]
[ 0.7059658 0.6484556 ]
[-1.1260961 -0.9329492 ]
[-0.20181894 -0.3382722 ]
[ 0.9558965 0.94167566]], shape=(6, 2), dtype=float32)
[92mAll test passed
###Markdown
**Expected output**```tf.Tensor([[-0.13430887 0.14086473] [ 0.21588647 -0.02582335] [ 0.7059658 0.6484556 ] [-1.1260961 -0.9329492 ] [-0.20181894 -0.3382722 ] [ 0.9558965 0.94167566]], shape=(6, 2), dtype=float32)``` 3.2 - Compute the CostAll you have to do now is define the loss function that you're going to use. For this case, since we have a classification problem with 6 labels, a categorical cross entropy will work! Exercise 6 - compute_costImplement the cost function below. - It's important to note that the "`y_pred`" and "`y_true`" inputs of [tf.keras.losses.categorical_crossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/categorical_crossentropy) are expected to be of shape (number of examples, num_classes). - `tf.reduce_mean` averages the per-example losses over the batch.
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(logits, labels):
"""
Computes the cost
Arguments:
logits -- output of forward propagation (output of the last LINEAR unit), of shape (6, num_examples)
labels -- "true" labels vector, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
#(1 line of code)
# cost = ...
# YOUR CODE STARTS HERE
cost = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(tf.transpose(labels), tf.transpose(logits), from_logits=True))
# YOUR CODE ENDS HERE
return cost
def compute_cost_test(target, Y):
pred = tf.constant([[ 2.4048107, 5.0334096 ],
[-0.7921977, -4.1523376 ],
[ 0.9447198, -0.46802214],
[ 1.158121, 3.9810789 ],
[ 4.768706, 2.3220146 ],
[ 6.1481323, 3.909829 ]])
minibatches = Y.batch(2)
for minibatch in minibatches:
result = target(pred, tf.transpose(minibatch))
break
print(result)
assert(type(result) == EagerTensor), "Use the TensorFlow API"
assert (np.abs(result - (0.25361037 + 0.5566767) / 2.0) < 1e-7), "Test does not match. Did you get the mean of your cost functions?"
print("\033[92mAll test passed")
compute_cost_test(compute_cost, new_y_train )
###Output
tf.Tensor(0.4051435, shape=(), dtype=float32)
[92mAll test passed
###Markdown
**Expected output**```tf.Tensor(0.4051435, shape=(), dtype=float32)``` 3.3 - Train the ModelLet's talk optimizers. You'll specify the type of optimizer in one line, in this case `tf.keras.optimizers.Adam` (though you can use others such as SGD), and then call it within the training loop. Notice the `tape.gradient` function: it retrieves the operations recorded for automatic differentiation inside the `GradientTape` block. Then calling the optimizer method `apply_gradients` applies the optimizer's update rules to each trainable parameter. At the end of this assignment, you'll find some documentation that explains this in more detail, but for now, a simple explanation will do. ;) Here you should take note of an important extra step that's been added to the batch training process: - `dataset = dataset.prefetch(8)` What this does is prevent a memory bottleneck that can occur when reading from disk. `prefetch()` sets aside some data and keeps it ready for when it's needed. It does this by creating a source dataset from your input data, applying a transformation to preprocess the data, then iterating over the dataset the specified number of elements at a time. This works because the iteration is streaming, so the data doesn't need to fit into memory.
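Before reading the full `model` function below, it may help to see the tape → gradient → update pattern in isolation. This is a minimal sketch on a made-up toy scalar, not the assignment code:

```python
# Minimal sketch of the training-step pattern used below: record, differentiate, apply
import tensorflow as tf

w = tf.Variable(3.0)
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

for _ in range(5):
    with tf.GradientTape() as tape:
        loss = (w - 1.0) ** 2             # any differentiable function of w
    grads = tape.gradient(loss, [w])      # gradients retrieved from the tape
    opt.apply_gradients(zip(grads, [w]))  # Adam update applied to w
    print(w.numpy())
```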
###Code
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 10 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
costs = [] # To keep track of the cost
train_acc = []
test_acc = []
# Initialize your parameters
#(1 line)
parameters = initialize_parameters()
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
optimizer = tf.keras.optimizers.Adam(learning_rate)
# The CategoricalAccuracy will track the accuracy for this multiclass problem
test_accuracy = tf.keras.metrics.CategoricalAccuracy()
train_accuracy = tf.keras.metrics.CategoricalAccuracy()
dataset = tf.data.Dataset.zip((X_train, Y_train))
test_dataset = tf.data.Dataset.zip((X_test, Y_test))
# We can get the number of elements of a dataset using the cardinality method
m = dataset.cardinality().numpy()
minibatches = dataset.batch(minibatch_size).prefetch(8)
test_minibatches = test_dataset.batch(minibatch_size).prefetch(8)
#X_train = X_train.batch(minibatch_size, drop_remainder=True).prefetch(8)# <<< extra step
#Y_train = Y_train.batch(minibatch_size, drop_remainder=True).prefetch(8) # loads memory faster
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0.
        # We need to reset the metric object so the accuracy is measured from 0 at each epoch
train_accuracy.reset_states()
for (minibatch_X, minibatch_Y) in minibatches:
with tf.GradientTape() as tape:
# 1. predict
Z3 = forward_propagation(tf.transpose(minibatch_X), parameters)
# 2. loss
minibatch_cost = compute_cost(Z3, tf.transpose(minibatch_Y))
            # We accumulate the accuracy over all the batches
train_accuracy.update_state(tf.transpose(Z3), minibatch_Y)
trainable_variables = [W1, b1, W2, b2, W3, b3]
grads = tape.gradient(minibatch_cost, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
epoch_cost += minibatch_cost
# We divide the epoch cost over the number of samples
epoch_cost /= m
# Print the cost every 10 epochs
if print_cost == True and epoch % 10 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
print("Train accuracy:", train_accuracy.result())
# We evaluate the test set every 10 epochs to avoid computational overhead
for (minibatch_X, minibatch_Y) in test_minibatches:
Z3 = forward_propagation(tf.transpose(minibatch_X), parameters)
test_accuracy.update_state(tf.transpose(Z3), minibatch_Y)
print("Test_accuracy:", test_accuracy.result())
costs.append(epoch_cost)
train_acc.append(train_accuracy.result())
test_acc.append(test_accuracy.result())
test_accuracy.reset_states()
return parameters, costs, train_acc, test_acc
parameters, costs, train_acc, test_acc = model(new_train, new_y_train, new_test, new_y_test, num_epochs=100)
###Output
Cost after epoch 0: 0.057612
Train accuracy: tf.Tensor(0.17314816, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.24166666, shape=(), dtype=float32)
Cost after epoch 10: 0.049332
Train accuracy: tf.Tensor(0.35833332, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.3, shape=(), dtype=float32)
Cost after epoch 20: 0.043173
Train accuracy: tf.Tensor(0.49907407, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.43333334, shape=(), dtype=float32)
Cost after epoch 30: 0.037322
Train accuracy: tf.Tensor(0.60462964, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.525, shape=(), dtype=float32)
Cost after epoch 40: 0.033147
Train accuracy: tf.Tensor(0.6490741, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.5416667, shape=(), dtype=float32)
Cost after epoch 50: 0.030203
Train accuracy: tf.Tensor(0.68333334, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.625, shape=(), dtype=float32)
Cost after epoch 60: 0.028050
Train accuracy: tf.Tensor(0.6935185, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.625, shape=(), dtype=float32)
Cost after epoch 70: 0.026298
Train accuracy: tf.Tensor(0.72407407, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.64166665, shape=(), dtype=float32)
Cost after epoch 80: 0.024799
Train accuracy: tf.Tensor(0.7425926, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.68333334, shape=(), dtype=float32)
Cost after epoch 90: 0.023551
Train accuracy: tf.Tensor(0.75277776, shape=(), dtype=float32)
Test_accuracy: tf.Tensor(0.68333334, shape=(), dtype=float32)
###Markdown
**Expected output**```Cost after epoch 0: 0.057612Train accuracy: tf.Tensor(0.17314816, shape=(), dtype=float32)Test_accuracy: tf.Tensor(0.24166666, shape=(), dtype=float32)Cost after epoch 10: 0.049332Train accuracy: tf.Tensor(0.35833332, shape=(), dtype=float32)Test_accuracy: tf.Tensor(0.3, shape=(), dtype=float32)...```Numbers you get can be different, just check that your loss is going down and your accuracy going up!
###Code
# Plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(0.0001))
plt.show()
# Plot the train accuracy
plt.plot(np.squeeze(train_acc))
plt.ylabel('Train Accuracy')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(0.0001))
# Plot the test accuracy
plt.plot(np.squeeze(test_acc))
plt.ylabel('Test Accuracy')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(0.0001))
plt.show()
###Output
_____no_output_____ |
uguryi-custom-ner-tagger-6c383ac66981/df-classifier.ipynb | ###Markdown
Last updated: 2019-03-02 Upload DF corpus Way 1 (not preferred)Manually upload a zip file of 2017 documents and then unzip it. It takes a couple of minutes to upload the whole thing.Notes:- A residue folder `__MACOSX` is created when unzipping; not sure why...- The following error is encountered when unzipping (maybe related to above?):```IOPub data rate exceeded.The notebook server will temporarily stop sending outputto the client in order to avoid crashing it.To change this limit, set the config variable`--NotebookApp.iopub_data_rate_limit`.Current values:NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)NotebookApp.rate_limit_window=3.0 (secs)```
###Code
#!unzip DocumentsParsed-2017.zip
###Output
_____no_output_____
###Markdown
Way 2 (better): create a new directory `df-corpus` and copy the whole corpus from `S3` into this directory.
###Code
#!mkdir df-corpus
#!aws s3 cp s3://tagworks.thusly.co/decidingforce/corpus/ ./df-corpus --recursive
#!find df-corpus/* -maxdepth 0 -type d | wc -l # See how many folders are under df-corpus
###Output
_____no_output_____
###Markdown
Install Stanford CoreNLP
###Code
#!wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip
#!unzip stanford-corenlp-full-2018-10-05.zip
###Output
_____no_output_____
###Markdown
Install Java
###Code
#!java -version
###Output
_____no_output_____
###Markdown
Upload prop file The `df-classifier.prop` file tells the CRF classifier "how" to go about classifying. NER Feature Factory lists all the possible parameters that can be tuned: https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/ie/NERFeatureFactory.html Right now I'm manually uploading it here (a hypothetical sketch of such a file is included below). Download stopwords and wordnet It takes a few seconds to load...
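As referenced above, here is a hypothetical sketch of what a minimal `df-classifier.prop` could contain, written out from a notebook cell instead of being uploaded manually. The property names come from the Stanford NER FeatureFactory documentation, but the specific values (and the three-column `map` matching the word/POS/label TSVs generated below) are assumptions, not necessarily the settings used for the results reported later.

```python
# Hypothetical sketch of a minimal df-classifier.prop (assumed values, not the actual file)
prop_text = """\
trainFile = ./train.tsv
serializeTo = ./custom-tagger.ser.gz
map = word=0,tag=1,answer=2

useClassFeature = true
useWord = true
useNGrams = true
noMidNGrams = true
maxNGramLeng = 6
usePrev = true
useNext = true
useSequences = true
usePrevSequences = true
maxLeft = 1
useTypeSeqs = true
useTypeSeqs2 = true
useTypeySequences = true
wordShape = chris2useLC
"""
with open("df-classifier.prop", "w") as f:
    f.write(prop_text)
```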
###Code
import nltk
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] /Users/kseniyausovich/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data] /Users/kseniyausovich/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package punkt to
[nltk_data] /Users/kseniyausovich/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /Users/kseniyausovich/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
###Markdown
Create lemmatizer
###Code
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import gzip, json, nltk, os, re, string
from nltk.corpus import stopwords
import pandas as pd
import time
###Output
_____no_output_____
###Markdown
Define auxiliary functions
###Code
def store_annotations(path_to_data):
with gzip.open(os.path.join(path_to_data, "annotations.json.gz"),
mode='rt',
encoding='utf8') as unzipped:
annotations = json.load(unzipped)
return(annotations)
def store_text(path_to_data):
with gzip.open(os.path.join(path_to_data, "text.txt.gz"),
mode='rt',
encoding='utf8') as unzipped:
text = unzipped.read()
return(text)
def gen_lst_tags(annotations):
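    # Flatten the nested annotations["tuas"] dict into [tag, start, end, text] entries, sorted by start offset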
lst_tagged_text = []
for e1 in annotations["tuas"]:
for e2 in annotations["tuas"][e1]:
for e3 in annotations["tuas"][e1][e2]:
lst_tagged_text += [[e1, e3[0], e3[1], e3[2]]]
lst_tagged_text = sorted(lst_tagged_text, key = lambda x: x[1])
return(lst_tagged_text)
def reorganize_tag_positions(tag_positions):
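    # Merge overlapping or adjacent [start, end] tag spans (input sorted by start) into a list of disjoint spans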
keep_going = 1
while keep_going:
keep_going = 0
p = 0
tag_positions_better = []
while p < len(tag_positions) - 1:
if tag_positions[p][1] < tag_positions[p+1][0] - 1:
tag_positions_better += [tag_positions[p]]
p += 1
if p == len(tag_positions) - 1:
tag_positions_better += [tag_positions[p]]
elif tag_positions[p][1] >= tag_positions[p+1][1]:
tag_positions_better += [tag_positions[p]]
p += 2
keep_going = 1
if p == len(tag_positions) - 1:
tag_positions_better += [tag_positions[p]]
else:
tag_positions_better += [[tag_positions[p][0], tag_positions[p+1][1]]]
p += 2
keep_going = 1
if p == len(tag_positions) - 1:
tag_positions_better += [tag_positions[p]]
tag_positions = tag_positions_better.copy()
return(tag_positions_better)
def gen_lst_untagged(tag_positions_better, text):
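    # Build ['O', start, end, text] entries for the stretches of text that fall between tagged spans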
lst_untagged_text = []
p0 = 0
for p in tag_positions_better:
#lst_untagged_text += [['Untagged', p0, p[0]-1, text[p0:p[0]]]]
lst_untagged_text += [['O', p0, p[0]-1, text[p0:p[0]]]]
p0 = p[1] + 1
lst_untagged_text = [e for e in lst_untagged_text]
return(lst_untagged_text)
###Output
_____no_output_____
###Markdown
Define main functions `gen_word_tag_lst`: This function allows users to specify whether to:- remove stopwords or not- use POS tags or not- focus on one label and treat everything else as other (e.g., Protester vs. O) or not - in other words, binary classification vs. multiclass classification. It is possible to add more flexibility to this function to also allow users to specify whether to:- remove punctuation or not (removing punctuation is default right now)- transform words to lowercase or not (transforming to lowercase is default right now)- lemmatize words or not (lemmatizing is default right now) `write_to_tsv`: This function allows users to specify which set of documents to use for the train and test datasets. The `tsv` file generated at the end includes words from documents between `start_index` and `end_index`. `end_index` can be as high as the number of documents in the corpus (here, 8094). The function needs to be run twice, once for generating the train dataset and once for generating the test dataset.
###Code
def gen_word_tag_lst(path_to_data, remove_stop_words, use_pos, focus, focus_word):
# Store annotations
annotations = store_annotations(path_to_data)
# Store full text
text = store_text(path_to_data)
# Generate list of tagged text
lst_tagged_text = gen_lst_tags(annotations)
# Generate list of tag positions
tag_positions = sorted([e[1:3] for e in lst_tagged_text])
# Reorganize tag positions
tag_positions_better = reorganize_tag_positions(tag_positions)
# Generate list of untagged text
lst_untagged_text = gen_lst_untagged(tag_positions_better, text)
# Generate list of tagged and untagged text
lst_full_text = sorted(lst_tagged_text + lst_untagged_text,
key = lambda x: x[1])
# Add part-of-speech (POS) tags
for i, e in enumerate(lst_full_text):
tokens = nltk.word_tokenize(e[3])
pos_document = nltk.pos_tag(tokens)
lst_full_text[i][3] = pos_document
# Generate table that stores info on what is going to be excluded from strings
table = str.maketrans({key: " " for key in set(string.punctuation +
"\n" + "\xa0" +
"“" + "’" + "–" +
"\u201d" + "\u2018" + "\u2013" + "\u2014")})
# Store English stop words
stopwords_en = stopwords.words('english')
# Generate final list to be converted to tsv format (lemmatize on the way)
lst = []
for e in lst_full_text:
for token in e[3]:
# Remove punctuation, transform to lower case, and strip any white space at start/end
token = (token[0].translate(table).lower().strip(), token[1])
if token[0]:
if remove_stop_words:
if token[0] not in stopwords_en:
if focus:
if e[0] == focus_word:
if use_pos:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + token[1] + "\t" + e[0]]
else:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + e[0]]
else:
if use_pos:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + token[1] + "\t" + 'O']
else:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + 'O']
else:
if use_pos:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + token[1] + "\t" + e[0]]
else:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + e[0]]
else:
if focus:
if e[0] == focus_word:
if use_pos:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + token[1] + "\t" + e[0]]
else:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + e[0]]
else:
if use_pos:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + token[1] + "\t" + 'O']
else:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + 'O']
else:
if use_pos:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + token[1] + "\t" + e[0]]
else:
lst += [lemmatizer.lemmatize(token[0]) + "\t" + e[0]]
return(lst)
def write_to_tsv(path_to_tsv,
path_to_data,
train_or_test,
start_index,
end_index,
remove_stop_words = True,
use_pos = True,
focus = True,
focus_word = "Protester"):
p = 0
with open(os.path.join(path_to_tsv, train_or_test), 'w') as file:
for root, dirs, files in os.walk(path_to_data):
if not dirs and "text.txt.gz" in files and "annotations.json.gz" in files:
if start_index <= p and end_index > p:
word_tag_lst = gen_word_tag_lst(root, remove_stop_words, use_pos, focus, focus_word)
# Filter out Useless and ToBe tags
word_tag_lst = list(filter(lambda x: 'Useless' not in x and 'ToBe' not in x, word_tag_lst))
for e in word_tag_lst:
file.write(e + '\n')
if word_tag_lst:
file.write('\n')
p += 1
###Output
_____no_output_____
###Markdown
Generate train and test data
###Code
#path_to_data_2017 = "./DocumentsParsed-2017"
path_to_data = "./df-corpus"
path_to_tsv = "."
# Generate train data
write_to_tsv(path_to_tsv, path_to_data, train_or_test = 'train.tsv',
start_index = 0, end_index = 500,
remove_stop_words = True, use_pos = True, focus = False, focus_word = "Protester")
# Generate test data
write_to_tsv(path_to_tsv, path_to_data, train_or_test = 'test.tsv',
start_index = 500, end_index = 600,
remove_stop_words = True, use_pos = True, focus = False, focus_word = "Protester")
###Output
_____no_output_____
###Markdown
Train and test model
###Code
start_time = time.time()
# Train model
!java -Xmx16g -cp "./stanford-corenlp-full-2018-10-05/*" edu.stanford.nlp.ie.crf.CRFClassifier \
-prop ./df-classifier.prop
print((time.time()-start_time)/60)
# Test model
!java -Xmx16g -cp "./stanford-corenlp-full-2018-10-05/*" edu.stanford.nlp.ie.crf.CRFClassifier \
-loadClassifier ./custom-tagger.ser.gz -testFile ./test.tsv \
-outputFormat tsv 1> "./test-results/0-500-500-600-RP-ULC-L-RSW-UPOS-DNF-NA.tsv"
# num1: train start
# num2: train end
# num3: test start
# num4: test end
# RP: remove punctuation
# DNRP: do not remove punctuation
# ULC: use lower case
# DNULC: do not use lower case
# L: lemmatize
# DNL: do not lemmatize
# RSW: remove stop words
# DNRSW: do not remove stop words
# UPOS: use part-of-speech
# DNUPOS: do not use part-of-speech
# F: focus
# DNF: do not focus
# Pr: Protester
# O: Opinioner
# C: Camp
# S: Strategy
# I: Info
# G: Government
# P: Police
# L: Legal_Action
# NA: not applicable
###Output
[main] INFO edu.stanford.nlp.ie.crf.CRFClassifier - Invoked on Sun Mar 03 02:02:49 UTC 2019 with arguments: -loadClassifier ./custom-tagger.ser.gz -testFile ./test.tsv -outputFormat tsv
[main] INFO edu.stanford.nlp.sequences.SeqClassifierFlags - testFile=./test.tsv
[main] INFO edu.stanford.nlp.sequences.SeqClassifierFlags - loadClassifier=./custom-tagger.ser.gz
[main] INFO edu.stanford.nlp.sequences.SeqClassifierFlags - outputFormat=tsv
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from ./custom-tagger.ser.gz ... done [0.3 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - CRFClassifier tagged 23289 words in 100 documents at 1696.21 words per second.
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Entity P R F1 TP FP FN
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Camp 0.0010 0.0077 0.0017 2 2025 257
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Government 0.0017 0.0102 0.0030 1 574 97
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Info 0.0000 0.0000 0.0000 0 239 29
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Legal_Action 0.0000 0.0000 0.0000 0 556 86
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Opinionor 0.0006 0.0085 0.0011 1 1742 116
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Police 0.0000 0.0000 0.0000 0 862 98
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Protester 0.0024 0.0309 0.0045 5 2059 157
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Strategy 0.0000 0.0000 0.0000 0 2120 194
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Totals 0.0009 0.0086 0.0016 9 10177 1034
###Markdown
Check model performance
###Code
df = pd.read_csv("./test-results/0-500-500-600-RP-ULC-L-RSW-UPOS-DNF-NA.tsv",
sep = '\t',
names = ["word", "obs", "pred"])
# TP, FP, TN, FN
d = {"O" : [0, 0, 0, 0],
"Protester" : [0, 0, 0, 0],
"Opinionor" : [0, 0, 0, 0],
"Camp" : [0, 0, 0, 0],
"Strategy" : [0, 0, 0, 0],
"Info" : [0, 0, 0, 0],
"Government" : [0, 0, 0, 0],
"Police" : [0, 0, 0, 0],
"Legal_Action" : [0, 0, 0, 0]}
for index, row in df.iterrows():
if row['obs'] == row['pred']:
d[row['pred']][0] += 1
for key in d.keys():
if key != row['pred']:
d[key][2] += 1
if row['obs'] != row['pred']:
d[row['pred']][1] += 1
d[row['obs']][3] += 1
for key in d.keys():
if d[key][0] == 0 and d[key][1] == 0 and d[key][3] == 0:
continue
else:
try:
accuracy = (d[key][0] + d[key][2])/sum(d[key])
except:
accuracy = 0
try:
precision = d[key][0]/(d[key][0] + d[key][1])
except:
precision = 0
try:
recall = d[key][0]/(d[key][0] + d[key][3])
except:
            recall = 0
try:
specificity = d[key][2]/(d[key][1] + d[key][2])
except:
specificity = 0
try:
f1_score = 2*precision*recall/(precision+recall)
except:
f1_score = 0
print("TP, FP, TN, FN for " + key + " are: " + str(d[key]))
print("Accuracy for " + key + " is: " + str(accuracy))
print("Precision for " + key + " is: " + str(precision))
print("Recall for " + key + " is: " + str(recall))
print("Specificity for " + key + " is: " + str(specificity))
print("F1 score for " + key + " is: " + str(f1_score) + "\n")
###Output
TP, FP, TN, FN for O are: [4409, 6233, 3214, 3114]
Accuracy for O is: 0.4492044784914555
Precision for O is: 0.41430182296560797
Recall for O is: 0.5860693872125482
Specificity for O is: 0.3402138244945485
F1 score for O is: 0.485439031103771
TP, FP, TN, FN for Protester are: [666, 2047, 6957, 1601]
Accuracy for Protester is: 0.6763375033271227
Precision for Protester is: 0.2454847032805013
Recall for Protester is: 0.2937803264225849
Specificity for Protester is: 0.7726565970679697
F1 score for Protester is: 0.2674698795180723
TP, FP, TN, FN for Opinionor are: [741, 1601, 6882, 1730]
Accuracy for Opinionor is: 0.695910169800986
Precision for Opinionor is: 0.31639624252775406
Recall for Opinionor is: 0.2998785916632942
Specificity for Opinionor is: 0.8112695980195685
F1 score for Opinionor is: 0.30791606066902144
TP, FP, TN, FN for Camp are: [688, 1835, 6935, 2727]
Accuracy for Camp is: 0.6256052523594583
Precision for Camp is: 0.27269124058660327
Recall for Camp is: 0.20146412884333822
Specificity for Camp is: 0.7907639680729761
F1 score for Camp is: 0.23172785449646346
TP, FP, TN, FN for Strategy are: [624, 1952, 6999, 2564]
Accuracy for Strategy is: 0.6279759453002719
Precision for Strategy is: 0.2422360248447205
Recall for Strategy is: 0.19573400250941028
Specificity for Strategy is: 0.7819238073958217
F1 score for Strategy is: 0.21651630811936157
TP, FP, TN, FN for Info are: [27, 222, 7596, 446]
Accuracy for Info is: 0.9194307079966229
Precision for Info is: 0.10843373493975904
Recall for Info is: 0.05708245243128964
Specificity for Info is: 0.9716039907904835
F1 score for Info is: 0.07479224376731303
TP, FP, TN, FN for Government are: [126, 508, 7497, 954]
Accuracy for Government is: 0.8390753990093561
Precision for Government is: 0.19873817034700317
Recall for Government is: 0.11666666666666667
Specificity for Government is: 0.9365396627108058
F1 score for Government is: 0.14702450408401402
TP, FP, TN, FN for Police are: [258, 756, 7365, 1234]
Accuracy for Police is: 0.7929886611879746
Precision for Police is: 0.25443786982248523
Recall for Police is: 0.17292225201072386
Specificity for Police is: 0.9069080162541558
F1 score for Police is: 0.20590582601755789
TP, FP, TN, FN for Legal_Action are: [84, 512, 7539, 1296]
Accuracy for Legal_Action is: 0.8082918036263387
Precision for Legal_Action is: 0.14093959731543623
Recall for Legal_Action is: 0.06086956521739131
Specificity for Legal_Action is: 0.9364054154763384
F1 score for Legal_Action is: 0.08502024291497975
###Markdown
Sandbox
###Code
if False:
# Count number of articles in the corpus
count = 0
for root, dirs, files in os.walk("./df-corpus"):
if not dirs and "text.txt.gz" in files and "annotations.json.gz" in files:
count += 1
print(count)
if False:
# The slicing in the first for loop can be used
# to select only those directories from a specific city (e.g., 0:11 is Albany)
# The slicing in the second for loop can be used
# to select the number of articles from that specific city.
# This is relevant when splitting articles from a specific city
# into train and test batches.
def write_to_tsv_alt(path_to_tsv,
path_to_data,
train_or_test,
start1 = 0,
end1 = 1342,
start2 = 0,
end2 = None,
remove_stop_words = True,
focus = True,
focus_word = "Protester",
use_pos = True):
with open(os.path.join(path_to_tsv, train_or_test), 'w') as file:
for f in sorted(os.listdir(path_to_data))[start1:end1]:
if f != ".DS_Store":
for sf in sorted(os.listdir(os.path.join(path_to_data, f)))[start2:end2]:
if sf != ".DS_Store":
path = os.path.join(path_to_data, f, sf)
word_tag_lst = gen_word_tag_lst(path, remove_stop_words, focus, focus_word, use_pos)
# Filter out Useless and ToBe tags
word_tag_lst = list(
filter(lambda x: 'Useless' not in x and 'ToBe' not in x, word_tag_lst))
for e in word_tag_lst:
file.write(e + '\n')
if word_tag_lst:
file.write('\n')
if False:
# The slicing in the first for loop can be used
# to select only those directories from a specific city (e.g., 0:11 is Albany)
# The slicing in the second for loop can be used
# to select the number of articles from that specific city.
# This is relevant when splitting articles from a specific city
# into train and test batches.
count = 0
for f in sorted(os.listdir(path_to_data))[0:11]:
if f != ".DS_Store":
for sf in sorted(os.listdir(os.path.join(path_to_data, f)))[0:]:
if sf != ".DS_Store":
path = os.path.join(path_to_data, f, sf)
print(path)
count += 1
print(count)
###Output
_____no_output_____ |
Lesson02/Exercise05.ipynb | ###Markdown
Exercise 5: Creating a Histogram of Horsepower Distribution
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
df = pd.read_csv(url)
column_names = ['mpg', 'Cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'year', 'origin', 'name']
df = pd.read_csv(url, names= column_names, delim_whitespace=True)
df.head()
df.loc[df.horsepower == '?', 'horsepower'] = np.nan
df['horsepower'] = pd.to_numeric(df['horsepower'])
df['full_date'] = pd.to_datetime(df.year, format='%y')
df['year'] = df['full_date'].dt.year
df.horsepower.plot(kind='hist')
sns.distplot(df['weight'])
###Output
_____no_output_____ |
notebooks/ch05_Neural_Networks.ipynb | ###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_mldata, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for _ in range(n_iter):
model.clear()
loss = nn.square(y_train - model(x_train)).sum()
optimizer.minimize(loss)
y = model(x).value
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
history = []
for i in range(10000):
model.clear()
logit = model(x_train)
log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum()
optimizer.maximize(log_likelihood)
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = nn.sigmoid(model(x)).value.reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for j in range(10000):
model.clear()
y = model(x_train)
optimizer.minimize(nn.square(y - y_train).sum())
if j % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x)
plt.scatter(x_train.ravel(), y_train.ravel(), marker="x", color="k")
plt.plot(x.ravel(), y.value.ravel(), color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def log_prior(self):
logp = 0
for param in self.parameter.values():
logp += self.prior.log_pdf(param)
return logp
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(10000):
model.clear()
pred = model(x_train)
log_posterior = -nn.square(pred - y_train).sum() + model.log_prior()
optimizer.maximize(log_posterior)
if i % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x).value
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
mnist = fetch_mldata("MNIST original")
x = mnist.data
label = mnist.target
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
super().__init__()
with self.set_parameter():
self.conv1 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)),
stride=(1, 1), pad=(0, 0))
self.b1 = nn.array([0.1] * 20)
self.conv2 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)),
stride=(1, 1), pad=(0, 0))
self.b2 = nn.array([0.1] * 20)
self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100))
self.b3 = nn.array([0.1] * 100)
self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10))
self.b4 = nn.array([0.1] * 10)
def __call__(self, x):
h = nn.relu(self.conv1(x) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(self.conv2(h) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
return h @ self.w4 + self.b4
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
logit = model(x_batch)
log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum()
if optimizer.iter_count % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1)
)
print("step {:04d}".format(optimizer.iter_count), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value[0]))
optimizer.maximize(log_likelihood)
if optimizer.iter_count == 1000:
break
else:
continue
break
print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test))
###Output
accuracy (test): 0.862
###Markdown
5.6 Mixture Density Networks
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_components):
self.n_components = n_components
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2c = nn.zeros(n_components)
self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2m = nn.zeros(n_components)
self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2s = nn.zeros(n_components)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2c + self.b2c)
mean = h @ self.w2m + self.b2m
std = nn.exp(h @ self.w2s + self.b2s)
return coef, mean, std
def gaussian_mixture_pdf(x, coef, mu, std):
gauss = (
nn.exp(-0.5 * nn.square((x - mu) / std))
/ std / np.sqrt(2 * np.pi)
)
return (coef * gauss).sum(axis=-1)
model = MixtureDensityNetwork(1, 5, 3)
optimizer = nn.optimizer.Adam(model.parameter, 1e-4)
for i in range(30000):
model.clear()
coef, mean, std = model(x_train)
log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum()
optimizer.maximize(log_likelihood)
x = np.linspace(x_train.min(), x_train.max(), 100)[:, None]
y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None]
coef, mean, std = model(x)
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[:, 0], coef.value[:, 0], color="blue")
plt.plot(x[:, 0], coef.value[:, 1], color="red")
plt.plot(x[:, 0], coef.value[:, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[:, 0], mean.value[:, 0], color="blue")
plt.plot(x[:, 0], mean.value[:, 1], color="red")
plt.plot(x[:, 0], mean.value[:, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
proba = gaussian_mixture_pdf(y, coef, mean, std).value
levels_log = np.linspace(0, np.log(proba.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
xx, yy = np.meshgrid(x.ravel(), y.ravel())
plt.contour(xx, yy, proba.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(coef.value, axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Bayesian Neural Networks
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class Gaussian(nn.Network):
def __init__(self, shape):
super().__init__()
with self.set_parameter():
self.m = nn.zeros(shape)
self.s = nn.zeros(shape)
def __call__(self):
self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8)
return self.q.draw()
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__()
with self.set_parameter():
self.qw1 = Gaussian((n_input, n_hidden))
self.qb1 = Gaussian(n_hidden)
self.qw2 = Gaussian((n_hidden, n_hidden))
self.qb2 = Gaussian(n_hidden)
self.qw3 = Gaussian((n_hidden, n_output))
self.qb3 = Gaussian(n_output)
self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3]
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.qw1() + self.qb1())
h = nn.tanh(h @ self.qw2() + self.qb2())
return nn.Bernoulli(logit=h @ self.qw3() + self.qb3())
def kl(self):
kl = 0
for pos in self.posterior:
kl += nn.loss.kl_divergence(pos.q, self.prior).mean()
return kl
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(1, 2001, 1):
model.clear()
py = model(x_train)
elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train)
optimizer.maximize(elbo)
if i % 100 == 0:
optimizer.learning_rate *= 0.9
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for _ in range(n_iter):
model.clear()
loss = nn.square(y_train - model(x_train)).sum()
optimizer.minimize(loss)
y = model(x).value
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
history = []
for i in range(10000):
model.clear()
logit = model(x_train)
log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum()
optimizer.maximize(log_likelihood)
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = nn.sigmoid(model(x)).value.reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for j in range(10000):
model.clear()
y = model(x_train)
optimizer.minimize(nn.square(y - y_train).sum())
if j % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x)
plt.scatter(x_train.ravel(), y_train.ravel(), marker="x", color="k")
plt.plot(x.ravel(), y.value.ravel(), color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def log_prior(self):
logp = 0
for param in self.parameter.values():
logp += self.prior.log_pdf(param)
return logp
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(10000):
model.clear()
pred = model(x_train)
log_posterior = -nn.square(pred - y_train).sum() + model.log_prior()
optimizer.maximize(log_posterior)
if i % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x).value
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
    x, label = fetch_openml("mnist_784", return_X_y=True, as_frame=False)
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
    label = label.astype(int)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
super().__init__()
with self.set_parameter():
self.conv1 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)),
stride=(1, 1), pad=(0, 0))
self.b1 = nn.array([0.1] * 20)
self.conv2 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)),
stride=(1, 1), pad=(0, 0))
self.b2 = nn.array([0.1] * 20)
self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100))
self.b3 = nn.array([0.1] * 100)
self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10))
self.b4 = nn.array([0.1] * 10)
def __call__(self, x):
h = nn.relu(self.conv1(x) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(self.conv2(h) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
return h @ self.w4 + self.b4
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
logit = model(x_batch)
log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum()
if optimizer.iter_count % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1)
)
print("step {:04d}".format(optimizer.iter_count), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value[0]))
optimizer.maximize(log_likelihood)
if optimizer.iter_count == 1000:
break
else:
continue
break
print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test))
###Output
accuracy (test): 0.8595714285714285
###Markdown
5.6 Mixture Density Networks
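The network below outputs, for each input $x$, mixing coefficients $\pi_k(x)$ (via a softmax), component means $\mu_k(x)$ and standard deviations $\sigma_k(x)$ (via an exponential), so the conditional density being fitted is
$$p(t\mid x)=\sum_{k=1}^{K}\pi_k(x)\,\mathcal{N}\bigl(t\mid\mu_k(x),\sigma_k^{2}(x)\bigr),$$
with $K=3$ components here; `gaussian_mixture_pdf` evaluates exactly this sum, and the training loop maximises its log over the data.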
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_components):
self.n_components = n_components
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2c = nn.zeros(n_components)
self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2m = nn.zeros(n_components)
self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2s = nn.zeros(n_components)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2c + self.b2c)
mean = h @ self.w2m + self.b2m
std = nn.exp(h @ self.w2s + self.b2s)
return coef, mean, std
def gaussian_mixture_pdf(x, coef, mu, std):
gauss = (
nn.exp(-0.5 * nn.square((x - mu) / std))
/ std / np.sqrt(2 * np.pi)
)
return (coef * gauss).sum(axis=-1)
model = MixtureDensityNetwork(1, 5, 3)
optimizer = nn.optimizer.Adam(model.parameter, 1e-4)
for i in range(30000):
model.clear()
coef, mean, std = model(x_train)
log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum()
optimizer.maximize(log_likelihood)
x = np.linspace(x_train.min(), x_train.max(), 100)[:, None]
y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None]
coef, mean, std = model(x)
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[:, 0], coef.value[:, 0], color="blue")
plt.plot(x[:, 0], coef.value[:, 1], color="red")
plt.plot(x[:, 0], coef.value[:, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[:, 0], mean.value[:, 0], color="blue")
plt.plot(x[:, 0], mean.value[:, 1], color="red")
plt.plot(x[:, 0], mean.value[:, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
proba = gaussian_mixture_pdf(y, coef, mean, std).value
levels_log = np.linspace(0, np.log(proba.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
xx, yy = np.meshgrid(x.ravel(), y.ravel())
plt.contour(xx, yy, proba.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(coef.value, axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Bayesian Neural Networks
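Each weight and bias gets a factorised Gaussian variational posterior $q(\mathbf{w})$ (the small `Gaussian` helper network below) and a standard normal prior, and training maximises a one-sample estimate of the evidence lower bound
$$\mathcal{L}=\mathbb{E}_{q(\mathbf{w})}\bigl[\ln p(\mathbf{t}\mid\mathbf{x},\mathbf{w})\bigr]-\mathrm{KL}\bigl(q(\mathbf{w})\,\Vert\,p(\mathbf{w})\bigr);$$
in the code the likelihood term is averaged over the data, so the KL term is correspondingly divided by the number of training points.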
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class Gaussian(nn.Network):
def __init__(self, shape):
super().__init__()
with self.set_parameter():
self.m = nn.zeros(shape)
self.s = nn.zeros(shape)
def __call__(self):
self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8)
return self.q.draw()
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__()
with self.set_parameter():
self.qw1 = Gaussian((n_input, n_hidden))
self.qb1 = Gaussian(n_hidden)
self.qw2 = Gaussian((n_hidden, n_hidden))
self.qb2 = Gaussian(n_hidden)
self.qw3 = Gaussian((n_hidden, n_output))
self.qb3 = Gaussian(n_output)
self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3]
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.qw1() + self.qb1())
h = nn.tanh(h @ self.qw2() + self.qb2())
return nn.Bernoulli(logit=h @ self.qw3() + self.qb3())
def kl(self):
kl = 0
for pos in self.posterior:
kl += nn.loss.kl_divergence(pos.q, self.prior).mean()
return kl
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(1, 2001, 1):
model.clear()
py = model(x_train)
elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train)
optimizer.maximize(elbo)
if i % 100 == 0:
optimizer.learning_rate *= 0.9
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
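`RegressionNetwork` implements the standard two-layer model
$$y(\mathbf{x},\mathbf{w})=\mathbf{w}_2^{\mathsf T}\tanh\bigl(\mathbf{W}_1^{\mathsf T}\mathbf{x}+\mathbf{b}_1\bigr)+b_2,$$
and the four panels below fit it to $x^2$, $\sin(\pi x)$, $|x|$ and the Heaviside step function by minimising the sum-of-squares error with Adam.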
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for _ in range(n_iter):
model.clear()
loss = nn.square(y_train - model(x_train)).sum()
optimizer.minimize(loss)
y = model(x).value
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
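The gradients used below follow the usual backpropagation recursion: with unit pre-activations $a_j$ and outputs $z_j=h(a_j)$,
$$\delta_j=h'(a_j)\sum_k w_{kj}\,\delta_k,\qquad\frac{\partial E}{\partial w_{ji}}=\delta_j\,z_i,$$
which the automatic differentiation in `prml.nn` evaluates for us when the loss is handed to the optimizer.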
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
history = []
for i in range(10000):
model.clear()
logit = model(x_train)
log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum()
optimizer.maximize(log_likelihood)
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = nn.sigmoid(model(x)).value.reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
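Later in this section, `RegularizedRegressionNetwork` places a standard normal prior on every parameter, so the quantity being maximised is
$$-\sum_{n}\bigl(y(x_n,\mathbf{w})-t_n\bigr)^{2}-\tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}+\text{const},$$
i.e. sum-of-squares training plus a quadratic weight-decay penalty, which is why the regularised $M=30$ network no longer overfits the ten training points.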
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for j in range(10000):
model.clear()
y = model(x_train)
optimizer.minimize(nn.square(y - y_train).sum())
if j % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x)
plt.scatter(x_train.ravel(), y_train.ravel(), marker="x", color="k")
plt.plot(x.ravel(), y.value.ravel(), color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def log_prior(self):
logp = 0
for param in self.parameter.values():
logp += self.prior.log_pdf(param)
return logp
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(10000):
model.clear()
pred = model(x_train)
log_posterior = -nn.square(pred - y_train).sum() + model.log_prior()
optimizer.maximize(log_posterior)
if i % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x).value
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
    # fetch_mldata was removed from scikit-learn; fetch_openml serves the same data
    x, label = fetch_openml("mnist_784", return_X_y=True, as_frame=False)
    label = label.astype(int)
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
super().__init__()
with self.set_parameter():
self.conv1 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)),
stride=(1, 1), pad=(0, 0))
self.b1 = nn.array([0.1] * 20)
self.conv2 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)),
stride=(1, 1), pad=(0, 0))
self.b2 = nn.array([0.1] * 20)
self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100))
self.b3 = nn.array([0.1] * 100)
self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10))
self.b4 = nn.array([0.1] * 10)
def __call__(self, x):
h = nn.relu(self.conv1(x) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(self.conv2(h) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
return h @ self.w4 + self.b4
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
logit = model(x_batch)
log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum()
if optimizer.iter_count % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1)
)
print("step {:04d}".format(optimizer.iter_count), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value[0]))
optimizer.maximize(log_likelihood)
if optimizer.iter_count == 1000:
break
else:
continue
break
print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test))
###Output
accuracy (test): 0.862
###Markdown
5.6 Mixture Density Networks
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_components):
self.n_components = n_components
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2c = nn.zeros(n_components)
self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2m = nn.zeros(n_components)
self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2s = nn.zeros(n_components)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2c + self.b2c)
mean = h @ self.w2m + self.b2m
std = nn.exp(h @ self.w2s + self.b2s)
return coef, mean, std
def gaussian_mixture_pdf(x, coef, mu, std):
gauss = (
nn.exp(-0.5 * nn.square((x - mu) / std))
/ std / np.sqrt(2 * np.pi)
)
return (coef * gauss).sum(axis=-1)
model = MixtureDensityNetwork(1, 5, 3)
optimizer = nn.optimizer.Adam(model.parameter, 1e-4)
for i in range(30000):
model.clear()
coef, mean, std = model(x_train)
log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum()
optimizer.maximize(log_likelihood)
x = np.linspace(x_train.min(), x_train.max(), 100)[:, None]
y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None]
coef, mean, std = model(x)
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[:, 0], coef.value[:, 0], color="blue")
plt.plot(x[:, 0], coef.value[:, 1], color="red")
plt.plot(x[:, 0], coef.value[:, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[:, 0], mean.value[:, 0], color="blue")
plt.plot(x[:, 0], mean.value[:, 1], color="red")
plt.plot(x[:, 0], mean.value[:, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
proba = gaussian_mixture_pdf(y, coef, mean, std).value
levels_log = np.linspace(0, np.log(proba.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
xx, yy = np.meshgrid(x.ravel(), y.ravel())
plt.contour(xx, yy, proba.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(coef.value, axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Bayesian Neural Networks
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class Gaussian(nn.Network):
def __init__(self, shape):
super().__init__()
with self.set_parameter():
self.m = nn.zeros(shape)
self.s = nn.zeros(shape)
def __call__(self):
self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8)
return self.q.draw()
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__()
with self.set_parameter():
self.qw1 = Gaussian((n_input, n_hidden))
self.qb1 = Gaussian(n_hidden)
self.qw2 = Gaussian((n_hidden, n_hidden))
self.qb2 = Gaussian(n_hidden)
self.qw3 = Gaussian((n_hidden, n_output))
self.qb3 = Gaussian(n_output)
self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3]
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.qw1() + self.qb1())
h = nn.tanh(h @ self.qw2() + self.qb2())
return nn.Bernoulli(logit=h @ self.qw3() + self.qb3())
def kl(self):
kl = 0
for pos in self.posterior:
kl += nn.loss.kl_divergence(pos.q, self.prior).mean()
return kl
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(1, 2001, 1):
model.clear()
py = model(x_train)
elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train)
optimizer.maximize(elbo)
if i % 100 == 0:
optimizer.learning_rate *= 0.9
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st
from sklearn.datasets import fetch_openml, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
truncnorm = st.truncnorm(a=-2, b=2, scale=1)
super().__init__(
w1=truncnorm.rvs((n_input, n_hidden)),
b1=np.zeros(n_hidden),
w2=truncnorm.rvs((n_hidden, n_output)),
b2=np.zeros(n_output)
)
def __call__(self, x, y=None):
h = nn.tanh(x @ self.w1 + self.b1)
self.py = nn.random.Gaussian(h @ self.w2 + self.b2, std=1., data=y)
return self.py.mu.value
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter, decay_step in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000], [100, 100, 1000, 1000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model, 0.1)
optimizer.set_decay(0.9, decay_step)
for _ in range(n_iter):
model.clear()
model(x_train, y_train)
log_likelihood = model.log_pdf()
log_likelihood.backward()
optimizer.update()
y = model(x)
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
truncnorm = st.truncnorm(a=-2, b=2, scale=1)
super().__init__(
w1=truncnorm.rvs((n_input, n_hidden)),
b1=np.zeros(n_hidden),
w2=truncnorm.rvs((n_hidden, n_output)),
b2=np.zeros(n_output)
)
def __call__(self, x, y=None):
h = nn.tanh(x @ self.w1 + self.b1)
self.py = nn.random.Bernoulli(logit=h @ self.w2 + self.b2, data=y)
return self.py.mu.value
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model, 1e-3)
history = []
for i in range(10000):
model.clear()
model(x_train, y_train)
log_likelihood = model.log_pdf()
log_likelihood.backward()
optimizer.update()
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = model(x).reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model, 0.1)
optimizer.set_decay(0.9, 1000)
for j in range(10000):
model.clear()
model(x_train, y_train)
log_posterior = model.log_pdf()
log_posterior.backward()
optimizer.update()
y = model(x)
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
truncnorm = st.truncnorm(a=-2, b=2, scale=1)
super().__init__(
w1=truncnorm.rvs((n_input, n_hidden)),
b1=np.zeros(n_hidden),
w2=truncnorm.rvs((n_hidden, n_output)),
b2=np.zeros(n_output)
)
def __call__(self, x, y=None):
self.pw1 = nn.random.Gaussian(0., 1., data=self.w1)
self.pb1 = nn.random.Gaussian(0., 1., data=self.b1)
self.pw2 = nn.random.Gaussian(0., 1., data=self.w2)
self.pb2 = nn.random.Gaussian(0., 1., data=self.b2)
h = nn.tanh(x @ self.w1 + self.b1)
self.py = nn.random.Gaussian(h @ self.w2 + self.b2, std=0.1, data=y)
return self.py.mu.value
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model, 0.1)
optimizer.set_decay(0.9, 1000)
for i in range(10000):
model.clear()
model(x_train, y_train)
log_posterior = model.log_pdf()
log_posterior.backward()
optimizer.update()
y = model(x)
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
    # fetch_mldata was removed from scikit-learn; fetch_openml serves the same data
    x, label = fetch_openml("mnist_784", return_X_y=True, as_frame=False)
    label = label.astype(int)
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
truncnorm = st.truncnorm(a=-2, b=2, scale=0.1)
super().__init__(
w1=truncnorm.rvs((5, 5, 1, 20)),
b1=np.zeros(20) + 0.1,
w2=truncnorm.rvs((5, 5, 20, 20)),
b2=np.zeros(20) + 0.1,
w3=truncnorm.rvs((4 * 4 * 20, 500)),
b3=np.zeros(500) + 0.1,
w4=truncnorm.rvs((500, 10)),
b4=np.zeros(10) + 0.1
)
def __call__(self, x, y=None):
h = nn.relu(nn.convolve2d(x, self.w1) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(nn.convolve2d(h, self.w2) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
self.py = nn.random.Categorical(logit=h @ self.w4 + self.b4, data=y)
return self.py.mu.value
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
prob = model(x_batch, y_batch)
log_likelihood = model.log_pdf()
if optimizer.n_iter % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(prob, axis=-1)
)
print("step {:04d}".format(optimizer.n_iter), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value))
log_likelihood.backward()
optimizer.update()
if optimizer.n_iter == 1000:
break
else:
continue
break
label_pred = []
for i in range(0, len(x_test), 50):
label_pred.append(np.argmax(model(x_test[i: i + 50]), axis=-1))
label_pred = np.asarray(label_pred).ravel()
print("accuracy (test):", accuracy_score(label_test, label_pred))
###Output
accuracy (test): 0.969571428571
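###Markdown
The flattened size `4 * 4 * 20` feeding the first fully connected layer comes from simple feature-map arithmetic: the $28\times28$ input shrinks to $24\times24$ after the first unpadded $5\times5$ convolution, to $12\times12$ after $2\times2$ max-pooling, to $8\times8$ after the second convolution and to $4\times4$ after the second pooling, with 20 channels throughout.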
###Markdown
5.6 Mixture Density Networks
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_components):
truncnorm = st.truncnorm(a=-0.2, b=0.2, scale=0.1)
self.n_components = n_components
super().__init__(
w1=truncnorm.rvs((1, 5)),
b1=np.zeros(5),
w2_c=truncnorm.rvs((5, n_components)),
b2_c=np.zeros(n_components),
w2_m=truncnorm.rvs((5, n_components)),
b2_m=np.zeros(n_components),
w2_s=truncnorm.rvs((5, n_components)),
b2_s=np.zeros(n_components)
)
def __call__(self, x, y=None):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2_c + self.b2_c)
mean = h @ self.w2_m + self.b2_m
std = nn.exp(h @ self.w2_s + self.b2_s)
self.py = nn.random.GaussianMixture(coef, mean, std, data=y)
return self.py
model = MixtureDensityNetwork(3)
optimizer = nn.optimizer.Adam(model, 1e-4)
for i in range(30000):
model.clear()
    # draw a random mini-batch each iteration and fit the mixture density on it
    x_batch, y_batch = sample(x_train, y_train, n=100)
    model(x_batch, y_batch)
log_likelihood = model.log_pdf()
log_likelihood.backward()
optimizer.update()
x, y = np.meshgrid(
np.linspace(x_train.min(), x_train.max(), 100),
np.linspace(y_train.min(), y_train.max(), 100))
xy = np.array([x, y]).reshape(2, -1).T
p = model(xy[:, 0].reshape(-1, 1), xy[:, 1].reshape(-1, 1))
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[0], p.coef.value[:100, 0], color="blue")
plt.plot(x[0], p.coef.value[:100, 1], color="red")
plt.plot(x[0], p.coef.value[:100, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[0], p.mu.value[:100, 0], color="blue")
plt.plot(x[0], p.mu.value[:100, 1], color="red")
plt.plot(x[0], p.mu.value[:100, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
prob = p.pdf().value
levels_log = np.linspace(0, np.log(prob.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
plt.contour(x, y, prob.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(p.coef.value[:100], axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[0, indices], p.mu.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Bayesian Neural Networks
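The variational standard deviations below are parameterised through the softplus function, $\operatorname{softplus}(s)=\ln(1+e^{s})$, which keeps them positive for any unconstrained value of $s$.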
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__(
w1_m=np.zeros((n_input, n_hidden)),
w1_s=np.zeros((n_input, n_hidden)),
b1_m=np.zeros(n_hidden),
b1_s=np.zeros(n_hidden),
w2_m=np.zeros((n_hidden, n_hidden)),
w2_s=np.zeros((n_hidden, n_hidden)),
b2_m=np.zeros(n_hidden),
b2_s=np.zeros(n_hidden),
w3_m=np.zeros((n_hidden, n_output)),
w3_s=np.zeros((n_hidden, n_output)),
b3_m=np.zeros(n_output),
b3_s=np.zeros(n_output)
)
def __call__(self, x, y=None):
self.qw1 = nn.random.Gaussian(
self.w1_m, nn.softplus(self.w1_s),
p=nn.random.Gaussian(0, 1)
)
self.qb1 = nn.random.Gaussian(
self.b1_m, nn.softplus(self.b1_s),
p=nn.random.Gaussian(0, 1)
)
self.qw2 = nn.random.Gaussian(
self.w2_m, nn.softplus(self.w2_s),
p=nn.random.Gaussian(0, 1)
)
self.qb2 = nn.random.Gaussian(
self.b2_m, nn.softplus(self.b2_s),
p=nn.random.Gaussian(0, 1)
)
self.qw3 = nn.random.Gaussian(
self.w3_m, nn.softplus(self.w3_s),
p=nn.random.Gaussian(0, 1)
)
self.qb3 = nn.random.Gaussian(
self.b3_m, nn.softplus(self.b3_s),
p=nn.random.Gaussian(0, 1)
)
h = nn.tanh(x @ self.qw1.draw() + self.qb1.draw())
h = nn.tanh(h @ self.qw2.draw() + self.qb2.draw())
self.py = nn.random.Bernoulli(logit=h @ self.qw3.draw() + self.qb3.draw(), data=y)
return self.py.mu.value
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model, 0.1)
optimizer.set_decay(0.9, 100)
for i in range(2000):
model.clear()
model(x_train, y_train)
elbo = model.elbo()
elbo.backward()
optimizer.update()
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for _ in range(n_iter):
model.clear()
loss = nn.square(y_train - model(x_train)).sum()
optimizer.minimize(loss)
y = model(x).value
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
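Backpropagated gradients can be sanity-checked against finite differences. The cell below is a small self-contained NumPy sketch added for illustration (it does not use `prml.nn`, and every name in it is made up for the example): it backpropagates the sum-of-squares error through a tiny tanh network and compares one weight's analytic gradient with a numerical estimate.
###Code
# Illustrative sketch: compare a backpropagated gradient with a finite difference.
rng = np.random.RandomState(0)
x_check = rng.uniform(-1, 1, (20, 2))
t_check = rng.uniform(-1, 1, (20, 1))
w1_check, b1_check = rng.normal(size=(2, 3)), np.zeros(3)
w2_check, b2_check = rng.normal(size=(3, 1)), np.zeros(1)
def sum_of_squares(w1):
    z = np.tanh(x_check @ w1 + b1_check)   # hidden-unit outputs
    y = z @ w2_check + b2_check            # linear output unit
    return 0.5 * np.sum((y - t_check) ** 2), z, y
error, z, y = sum_of_squares(w1_check)
delta_out = y - t_check                                  # output errors
delta_hidden = (1 - z ** 2) * (delta_out @ w2_check.T)   # backpropagated hidden errors
grad_w1 = x_check.T @ delta_hidden                       # analytic dE/dw1
eps = 1e-6
w1_shift = w1_check.copy()
w1_shift[0, 0] += eps
grad_numeric = (sum_of_squares(w1_shift)[0] - error) / eps
print(grad_w1[0, 0], grad_numeric)   # the two values should agree closely
###Output
_____no_output_____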
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
history = []
for i in range(10000):
model.clear()
logit = model(x_train)
log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum()
optimizer.maximize(log_likelihood)
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = nn.sigmoid(model(x)).value.reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for j in range(10000):
model.clear()
y = model(x_train)
optimizer.minimize(nn.square(y - y_train).sum())
if j % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x)
plt.scatter(x_train.ravel(), y_train.ravel(), marker="x", color="k")
plt.plot(x.ravel(), y.value.ravel(), color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def log_prior(self):
logp = 0
for param in self.parameter.values():
logp += self.prior.log_pdf(param)
return logp
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(10000):
model.clear()
pred = model(x_train)
log_posterior = -nn.square(pred - y_train).sum() + model.log_prior()
optimizer.maximize(log_posterior)
if i % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x).value
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
    x, label = fetch_openml("mnist_784", return_X_y=True, as_frame=False)
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
    label = label.astype(int)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
super().__init__()
with self.set_parameter():
self.conv1 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)),
stride=(1, 1), pad=(0, 0))
self.b1 = nn.array([0.1] * 20)
self.conv2 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)),
stride=(1, 1), pad=(0, 0))
self.b2 = nn.array([0.1] * 20)
self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100))
self.b3 = nn.array([0.1] * 100)
self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10))
self.b4 = nn.array([0.1] * 10)
def __call__(self, x):
h = nn.relu(self.conv1(x) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(self.conv2(h) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
return h @ self.w4 + self.b4
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
logit = model(x_batch)
log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum()
if optimizer.iter_count % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1)
)
print("step {:04d}".format(optimizer.iter_count), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value[0]))
optimizer.maximize(log_likelihood)
if optimizer.iter_count == 1000:
break
else:
continue
break
print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test))
###Output
accuracy (test): 0.8595714285714285
###Markdown
5.6 Mixture Density Networks
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_components):
self.n_components = n_components
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2c = nn.zeros(n_components)
self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2m = nn.zeros(n_components)
self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2s = nn.zeros(n_components)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2c + self.b2c)
mean = h @ self.w2m + self.b2m
std = nn.exp(h @ self.w2s + self.b2s)
return coef, mean, std
def gaussian_mixture_pdf(x, coef, mu, std):
gauss = (
nn.exp(-0.5 * nn.square((x - mu) / std))
/ std / np.sqrt(2 * np.pi)
)
return (coef * gauss).sum(axis=-1)
model = MixtureDensityNetwork(1, 5, 3)
optimizer = nn.optimizer.Adam(model.parameter, 1e-4)
for i in range(30000):
model.clear()
coef, mean, std = model(x_train)
log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum()
optimizer.maximize(log_likelihood)
x = np.linspace(x_train.min(), x_train.max(), 100)[:, None]
y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None]
coef, mean, std = model(x)
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[:, 0], coef.value[:, 0], color="blue")
plt.plot(x[:, 0], coef.value[:, 1], color="red")
plt.plot(x[:, 0], coef.value[:, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[:, 0], mean.value[:, 0], color="blue")
plt.plot(x[:, 0], mean.value[:, 1], color="red")
plt.plot(x[:, 0], mean.value[:, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
proba = gaussian_mixture_pdf(y, coef, mean, std).value
levels_log = np.linspace(0, np.log(proba.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
xx, yy = np.meshgrid(x.ravel(), y.ravel())
plt.contour(xx, yy, proba.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(coef.value, axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
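###Markdown
As a small follow-up (not part of the original notebook), the conditional mean $\mathbb{E}[t\mid x]=\sum_k\pi_k(x)\,\mu_k(x)$ can be read off directly from the `coef` and `mean` values computed for the grid in the cell above; for this inverse problem it averages over the branches of the data and is therefore a much poorer summary than the per-component means plotted in the last panel.
###Code
# Conditional mean of the fitted mixture, E[t|x] = sum_k pi_k(x) mu_k(x),
# reusing coef, mean and the grid x from the previous cell.
conditional_mean = (coef.value * mean.value).sum(axis=1)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.plot(x[:, 0], conditional_mean, color="r")
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.show()
###Output
_____no_output_____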
###Markdown
5.7 Bayesian Neural Networks
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class Gaussian(nn.Network):
def __init__(self, shape):
super().__init__()
with self.set_parameter():
self.m = nn.zeros(shape)
self.s = nn.zeros(shape)
def __call__(self):
self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8)
return self.q.draw()
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__()
with self.set_parameter():
self.qw1 = Gaussian((n_input, n_hidden))
self.qb1 = Gaussian(n_hidden)
self.qw2 = Gaussian((n_hidden, n_hidden))
self.qb2 = Gaussian(n_hidden)
self.qw3 = Gaussian((n_hidden, n_output))
self.qb3 = Gaussian(n_output)
self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3]
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.qw1() + self.qb1())
h = nn.tanh(h @ self.qw2() + self.qb2())
return nn.Bernoulli(logit=h @ self.qw3() + self.qb3())
def kl(self):
kl = 0
for pos in self.posterior:
kl += nn.loss.kl_divergence(pos.q, self.prior).mean()
return kl
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(1, 2001, 1):
model.clear()
py = model(x_train)
elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train)
optimizer.maximize(elbo)
if i % 100 == 0:
optimizer.learning_rate *= 0.9
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for _ in range(n_iter):
model.clear()
loss = nn.square(y_train - model(x_train)).sum()
optimizer.minimize(loss)
y = model(x).value
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
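The classification network is trained by maximising the Bernoulli log likelihood, equivalently minimising the cross-entropy error
$$E(\mathbf{w})=-\sum_{n}\bigl\{t_n\ln y_n+(1-t_n)\ln(1-y_n)\bigr\},\qquad y_n=\sigma\bigl(a(\mathbf{x}_n,\mathbf{w})\bigr),$$
which is what summing `nn.loss.sigmoid_cross_entropy` over the data computes from the raw logits.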
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
history = []
for i in range(10000):
model.clear()
logit = model(x_train)
log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum()
optimizer.maximize(log_likelihood)
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = nn.sigmoid(model(x)).value.reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for j in range(10000):
model.clear()
y = model(x_train)
optimizer.minimize(nn.square(y - y_train).sum())
if j % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x)
plt.scatter(x_train.ravel(), y_train.ravel(), marker="x", color="k")
plt.plot(x.ravel(), y.value.ravel(), color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def log_prior(self):
logp = 0
for param in self.parameter.values():
logp += self.prior.log_pdf(param)
return logp
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(10000):
model.clear()
pred = model(x_train)
log_posterior = -nn.square(pred - y_train).sum() + model.log_prior()
optimizer.maximize(log_posterior)
if i % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x).value
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
x, label = fetch_openml("mnist_784", return_X_y=True, as_frame=False)
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
    label = label.astype(int)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
super().__init__()
with self.set_parameter():
self.conv1 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)),
stride=(1, 1), pad=(0, 0))
self.b1 = nn.array([0.1] * 20)
self.conv2 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)),
stride=(1, 1), pad=(0, 0))
self.b2 = nn.array([0.1] * 20)
self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100))
self.b3 = nn.array([0.1] * 100)
self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10))
self.b4 = nn.array([0.1] * 10)
def __call__(self, x):
h = nn.relu(self.conv1(x) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(self.conv2(h) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
return h @ self.w4 + self.b4
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
logit = model(x_batch)
log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum()
if optimizer.iter_count % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1)
)
print("step {:04d}".format(optimizer.iter_count), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value[0]))
optimizer.maximize(log_likelihood)
if optimizer.iter_count == 1000:
break
else:
continue
break
print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test))
###Output
accuracy (test): 0.8594285714285714
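###Markdown
A quick spot check of the trained network, added here for illustration: the predicted class of a test image is just the argmax over the ten logits, exactly as in the accuracy computation above.
###Code
# Predicted and true labels for a handful of held-out digits.
sample_logits = model(x_test[:8]).value
print("predicted:", np.argmax(sample_logits, axis=-1))
print("true:     ", label_test[:8])
###Output
_____no_output_____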
###Markdown
5.6 Mixture Density Networks
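In the fourth panel produced below, the conditional mode is approximated by plotting, at each $x$, the mean of the component with the largest mixing coefficient, $\mu_{k^{\star}}(x)$ with $k^{\star}(x)=\arg\max_k\pi_k(x)$.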
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_components):
self.n_components = n_components
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2c = nn.zeros(n_components)
self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2m = nn.zeros(n_components)
self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2s = nn.zeros(n_components)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2c + self.b2c)
mean = h @ self.w2m + self.b2m
std = nn.exp(h @ self.w2s + self.b2s)
return coef, mean, std
def gaussian_mixture_pdf(x, coef, mu, std):
gauss = (
nn.exp(-0.5 * nn.square((x - mu) / std))
/ std / np.sqrt(2 * np.pi)
)
return (coef * gauss).sum(axis=-1)
model = MixtureDensityNetwork(1, 5, 3)
optimizer = nn.optimizer.Adam(model.parameter, 1e-4)
for i in range(30000):
model.clear()
coef, mean, std = model(x_train)
log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum()
optimizer.maximize(log_likelihood)
x = np.linspace(x_train.min(), x_train.max(), 100)[:, None]
y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None]
coef, mean, std = model(x)
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[:, 0], coef.value[:, 0], color="blue")
plt.plot(x[:, 0], coef.value[:, 1], color="red")
plt.plot(x[:, 0], coef.value[:, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[:, 0], mean.value[:, 0], color="blue")
plt.plot(x[:, 0], mean.value[:, 1], color="red")
plt.plot(x[:, 0], mean.value[:, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
proba = gaussian_mixture_pdf(y, coef, mean, std).value
levels_log = np.linspace(0, np.log(proba.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
xx, yy = np.meshgrid(x.ravel(), y.ravel())
plt.contour(xx, yy, proba.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(coef.value, axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Bayesian Neural Networks
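At prediction time the class probability is approximated by Monte Carlo, averaging the network output over draws from the variational posterior,
$$p(t=1\mid\mathbf{x},\mathcal{D})\approx\frac{1}{S}\sum_{s=1}^{S}\sigma\bigl(a(\mathbf{x},\mathbf{w}^{(s)})\bigr),\qquad\mathbf{w}^{(s)}\sim q(\mathbf{w}),$$
which is what the `np.mean([... for _ in range(10)])` line at the end of this section does with $S=10$.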
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class Gaussian(nn.Network):
def __init__(self, shape):
super().__init__()
with self.set_parameter():
self.m = nn.zeros(shape)
self.s = nn.zeros(shape)
def __call__(self):
self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8)
return self.q.draw()
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__()
with self.set_parameter():
self.qw1 = Gaussian((n_input, n_hidden))
self.qb1 = Gaussian(n_hidden)
self.qw2 = Gaussian((n_hidden, n_hidden))
self.qb2 = Gaussian(n_hidden)
self.qw3 = Gaussian((n_hidden, n_output))
self.qb3 = Gaussian(n_output)
self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3]
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.qw1() + self.qb1())
h = nn.tanh(h @ self.qw2() + self.qb2())
return nn.Bernoulli(logit=h @ self.qw3() + self.qb3())
def kl(self):
kl = 0
for pos in self.posterior:
kl += nn.loss.kl_divergence(pos.q, self.prior).mean()
return kl
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(1, 2001, 1):
model.clear()
py = model(x_train)
elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train)
optimizer.maximize(elbo)
if i % 100 == 0:
optimizer.learning_rate *= 0.9
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
5. Neural Networks
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml, make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import accuracy_score
from prml import nn
np.random.seed(1234)
###Output
_____no_output_____
###Markdown
5.1 Feed-forward Network Functions
###Code
class RegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data(func, n=50):
x = np.linspace(-1, 1, n)[:, None]
return x, func(x)
def sinusoidal(x):
return np.sin(np.pi * x)
def heaviside(x):
return 0.5 * (np.sign(x) + 1)
func_list = [np.square, sinusoidal, np.abs, heaviside]
plt.figure(figsize=(20, 10))
x = np.linspace(-1, 1, 1000)[:, None]
for i, func, n_iter in zip(range(1, 5), func_list, [1000, 10000, 10000, 10000]):
plt.subplot(2, 2, i)
x_train, y_train = create_toy_data(func)
model = RegressionNetwork(1, 3, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for _ in range(n_iter):
model.clear()
loss = nn.square(y_train - model(x_train)).sum()
optimizer.minimize(loss)
y = model(x).value
plt.scatter(x_train, y_train, s=10)
plt.plot(x, y, color="r")
plt.show()
###Output
_____no_output_____
###Markdown
5.3 Error Backpropagation
###Code
class ClassificationNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def create_toy_data():
x = np.random.uniform(-1., 1., size=(100, 2))
labels = np.prod(x, axis=1) > 0
return x, labels.reshape(-1, 1)
x_train, y_train = create_toy_data()
model = ClassificationNetwork(2, 4, 1)
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
history = []
for i in range(10000):
model.clear()
logit = model(x_train)
log_likelihood = -nn.loss.sigmoid_cross_entropy(logit, y_train).sum()
optimizer.maximize(log_likelihood)
history.append(log_likelihood.value)
plt.plot(history)
plt.xlabel("iteration")
plt.ylabel("Log Likelihood")
plt.show()
x0, x1 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
x = np.array([x0, x1]).reshape(2, -1).T
y = nn.sigmoid(model(x)).value.reshape(100, 100)
levels = np.linspace(0, 1, 11)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel())
plt.contourf(x0, x1, y, levels, alpha=0.2)
plt.colorbar()
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.gca().set_aspect('equal')
plt.show()
###Output
_____no_output_____
###Markdown
5.5 Regularization in Neural Networks
###Code
def create_toy_data(n=10):
x = np.linspace(0, 1, n)[:, None]
return x, np.sin(2 * np.pi * x) + np.random.normal(scale=0.25, size=(10, 1))
x_train, y_train = create_toy_data()
x = np.linspace(0, 1, 100)[:, None]
plt.figure(figsize=(20, 5))
for i, m in enumerate([1, 3, 30]):
plt.subplot(1, 3, i + 1)
model = RegressionNetwork(1, m, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for j in range(10000):
model.clear()
y = model(x_train)
optimizer.minimize(nn.square(y - y_train).sum())
if j % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x)
plt.scatter(x_train.ravel(), y_train.ravel(), marker="x", color="k")
plt.plot(x.ravel(), y.value.ravel(), color="k")
plt.annotate("M={}".format(m), (0.7, 0.5))
plt.show()
class RegularizedRegressionNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output):
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2 = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_output))
self.b2 = nn.zeros(n_output)
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
return h @ self.w2 + self.b2
def log_prior(self):
logp = 0
for param in self.parameter.values():
logp += self.prior.log_pdf(param)
return logp
model = RegularizedRegressionNetwork(1, 30, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(10000):
model.clear()
pred = model(x_train)
log_posterior = -nn.square(pred - y_train).sum() + model.log_prior()
optimizer.maximize(log_posterior)
if i % 1000 == 0:
optimizer.learning_rate *= 0.9
y = model(x).value
plt.scatter(x_train, y_train, marker="x", color="k")
plt.plot(x, y, color="k")
plt.annotate("M=30", (0.7, 0.5))
plt.show()
def load_mnist():
mnist = fetch_mldata("MNIST original")
x = mnist.data
label = mnist.target
x = x / np.max(x, axis=1, keepdims=True)
x = x.reshape(-1, 28, 28, 1)
x_train, x_test, label_train, label_test = train_test_split(x, label, test_size=0.1)
y_train = LabelBinarizer().fit_transform(label_train)
return x_train, x_test, y_train, label_test
x_train, x_test, y_train, label_test = load_mnist()
class ConvolutionalNeuralNetwork(nn.Network):
def __init__(self):
super().__init__()
with self.set_parameter():
self.conv1 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 1, 20)),
stride=(1, 1), pad=(0, 0))
self.b1 = nn.array([0.1] * 20)
self.conv2 = nn.image.Convolve2d(
nn.random.truncnormal(-2, 2, 1, (5, 5, 20, 20)),
stride=(1, 1), pad=(0, 0))
self.b2 = nn.array([0.1] * 20)
self.w3 = nn.random.truncnormal(-2, 2, 1, (4 * 4 * 20, 100))
self.b3 = nn.array([0.1] * 100)
self.w4 = nn.random.truncnormal(-2, 2, 1, (100, 10))
self.b4 = nn.array([0.1] * 10)
def __call__(self, x):
h = nn.relu(self.conv1(x) + self.b1)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = nn.relu(self.conv2(h) + self.b2)
h = nn.max_pooling2d(h, (2, 2), (2, 2))
h = h.reshape(-1, 4 * 4 * 20)
h = nn.relu(h @ self.w3 + self.b3)
return h @ self.w4 + self.b4
model = ConvolutionalNeuralNetwork()
optimizer = nn.optimizer.Adam(model.parameter, 1e-3)
while True:
indices = np.random.permutation(len(x_train))
for index in range(0, len(x_train), 50):
model.clear()
x_batch = x_train[indices[index: index + 50]]
y_batch = y_train[indices[index: index + 50]]
logit = model(x_batch)
log_likelihood = -nn.loss.softmax_cross_entropy(logit, y_batch).mean(0).sum()
if optimizer.iter_count % 100 == 0:
accuracy = accuracy_score(
np.argmax(y_batch, axis=-1), np.argmax(logit.value, axis=-1)
)
print("step {:04d}".format(optimizer.iter_count), end=", ")
print("accuracy {:.2f}".format(accuracy), end=", ")
print("Log Likelihood {:g}".format(log_likelihood.value[0]))
optimizer.maximize(log_likelihood)
if optimizer.iter_count == 1000:
break
else:
continue
break
print("accuracy (test):", accuracy_score(np.argmax(model(x_test).value, axis=-1), label_test))
###Output
accuracy (test): 0.862
###Markdown
5.6 Mixture Density Networks
###Code
def create_toy_data(func, n=300):
t = np.random.uniform(size=(n, 1))
x = func(t) + np.random.uniform(-0.05, 0.05, size=(n, 1))
return x, t
def func(x):
return x + 0.3 * np.sin(2 * np.pi * x)
def sample(x, t, n=None):
assert len(x) == len(t)
N = len(x)
if n is None:
n = N
indices = np.random.choice(N, n, replace=False)
return x[indices], t[indices]
x_train, y_train = create_toy_data(func)
class MixtureDensityNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_components):
self.n_components = n_components
super().__init__()
with self.set_parameter():
self.w1 = nn.random.truncnormal(-2, 2, 1, (n_input, n_hidden))
self.b1 = nn.zeros(n_hidden)
self.w2c = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2c = nn.zeros(n_components)
self.w2m = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2m = nn.zeros(n_components)
self.w2s = nn.random.truncnormal(-2, 2, 1, (n_hidden, n_components))
self.b2s = nn.zeros(n_components)
def __call__(self, x):
h = nn.tanh(x @ self.w1 + self.b1)
coef = nn.softmax(h @ self.w2c + self.b2c)
mean = h @ self.w2m + self.b2m
std = nn.exp(h @ self.w2s + self.b2s)
return coef, mean, std
def gaussian_mixture_pdf(x, coef, mu, std):
gauss = (
nn.exp(-0.5 * nn.square((x - mu) / std))
/ std / np.sqrt(2 * np.pi)
)
return (coef * gauss).sum(axis=-1)
model = MixtureDensityNetwork(1, 5, 3)
optimizer = nn.optimizer.Adam(model.parameter, 1e-4)
for i in range(30000):
model.clear()
coef, mean, std = model(x_train)
log_likelihood = nn.log(gaussian_mixture_pdf(y_train, coef, mean, std)).sum()
optimizer.maximize(log_likelihood)
x = np.linspace(x_train.min(), x_train.max(), 100)[:, None]
y = np.linspace(y_train.min(), y_train.max(), 100)[:, None, None]
coef, mean, std = model(x)
plt.figure(figsize=(20, 15))
plt.subplot(2, 2, 1)
plt.plot(x[:, 0], coef.value[:, 0], color="blue")
plt.plot(x[:, 0], coef.value[:, 1], color="red")
plt.plot(x[:, 0], coef.value[:, 2], color="green")
plt.title("weights")
plt.subplot(2, 2, 2)
plt.plot(x[:, 0], mean.value[:, 0], color="blue")
plt.plot(x[:, 0], mean.value[:, 1], color="red")
plt.plot(x[:, 0], mean.value[:, 2], color="green")
plt.title("means")
plt.subplot(2, 2, 3)
proba = gaussian_mixture_pdf(y, coef, mean, std).value
levels_log = np.linspace(0, np.log(proba.max()), 21)
levels = np.exp(levels_log)
levels[0] = 0
xx, yy = np.meshgrid(x.ravel(), y.ravel())
plt.contour(xx, yy, proba.reshape(100, 100), levels)
plt.xlim(x_train.min(), x_train.max())
plt.ylim(y_train.min(), y_train.max())
plt.subplot(2, 2, 4)
argmax = np.argmax(coef.value, axis=1)
for i in range(3):
indices = np.where(argmax == i)[0]
plt.plot(x[indices, 0], mean.value[(indices, np.zeros_like(indices) + i)], color="r", linewidth=2)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b")
plt.show()
###Output
_____no_output_____
###Markdown
5.7 Bayesian Neural Networks
###Code
x_train, y_train = make_moons(n_samples=500, noise=0.2)
y_train = y_train[:, None]
class Gaussian(nn.Network):
def __init__(self, shape):
super().__init__()
with self.set_parameter():
self.m = nn.zeros(shape)
self.s = nn.zeros(shape)
def __call__(self):
self.q = nn.Gaussian(self.m, nn.softplus(self.s) + 1e-8)
return self.q.draw()
class BayesianNetwork(nn.Network):
def __init__(self, n_input, n_hidden, n_output=1):
super().__init__()
with self.set_parameter():
self.qw1 = Gaussian((n_input, n_hidden))
self.qb1 = Gaussian(n_hidden)
self.qw2 = Gaussian((n_hidden, n_hidden))
self.qb2 = Gaussian(n_hidden)
self.qw3 = Gaussian((n_hidden, n_output))
self.qb3 = Gaussian(n_output)
self.posterior = [self.qw1, self.qb1, self.qw2, self.qb2, self.qw3, self.qb3]
self.prior = nn.Gaussian(0, 1)
def __call__(self, x):
h = nn.tanh(x @ self.qw1() + self.qb1())
h = nn.tanh(h @ self.qw2() + self.qb2())
return nn.Bernoulli(logit=h @ self.qw3() + self.qb3())
def kl(self):
kl = 0
for pos in self.posterior:
kl += nn.loss.kl_divergence(pos.q, self.prior).mean()
return kl
model = BayesianNetwork(2, 5, 1)
optimizer = nn.optimizer.Adam(model.parameter, 0.1)
for i in range(1, 2001, 1):
model.clear()
py = model(x_train)
elbo = py.log_pdf(y_train).mean(0).sum() - model.kl() / len(x_train)
optimizer.maximize(elbo)
if i % 100 == 0:
optimizer.learning_rate *= 0.9
x_grid = np.mgrid[-2:3:100j, -2:3:100j]
x1, x2 = x_grid[0], x_grid[1]
x_grid = x_grid.reshape(2, -1).T
y = np.mean([model(x_grid).mean.value.reshape(100, 100) for _ in range(10)], axis=0)
plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train.ravel(), s=5)
plt.contourf(x1, x2, y, np.linspace(0, 1, 11), alpha=0.2)
plt.colorbar()
plt.xlim(-2, 3)
plt.ylim(-2, 3)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____ |
course_content/case_study/Case Study B/notebooks/answers/4_logreg_tune-answers.ipynb | ###Markdown
<img src="https://datasciencecampus.ons.gov.uk/wp-content/uploads/sites/10/2017/03/data-science-campus-logo-new.svg" alt="ONS Data Science Campus Logo" width = "240" style="margin: 0px 60px" /> 4.0 Tuning the Selected ModelPurpose of script: tune logreg on titanic_engineered
###Code
# import necessary libraries
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
# import cached data from titanic_EDA.py
titanic_engineered = pd.read_pickle('../../cache/titanic_engineered.pkl')
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# define processing functions
def preprocess_target(df) :
# Create arrays for the features and the target variable
target = df['Survived'].values
return(target)
def preprocess_features(df) :
#extract features series
features = df.drop('Survived', axis=1)
#remove features that cannot be converted to float: name, ticket & cabin
features = features.drop(['Name', 'Ticket', 'Cabin'], axis=1)
# dummy encoding of any remaining categorical data
features = pd.get_dummies(features, drop_first=True)
# ensure np.nan used to replace missing values
features.replace('nan', np.nan, inplace=True)
return features
# preprocess target from titanic_train
target = preprocess_target(titanic_engineered)
#preprocess features from titanic_train
features = preprocess_features(titanic_engineered)
###Output
_____no_output_____
###Markdown
Train test split
###Code
# unpack the necessary test and train sets using a test size of 25 % and a random state of 36
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.25, random_state=36)
###Output
_____no_output_____
###Markdown
Instantiate
###Code
#impute median for NaNs in age column
imp = SimpleImputer(missing_values=np.nan, strategy='median')
# instantiate classifier
logreg = LogisticRegression()
# create a list called steps, each step should be a tuple
# required steps are 'imputation', 'scaler', 'logistic_regression'
steps = [('imputation', imp),
('scaler', StandardScaler()),
('logistic_regression', logreg)]
# establish pipeline
pipeline = Pipeline(steps)
###Output
_____no_output_____
###Markdown
Train model
###Code
# How do you fit the model?
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict labels
###Code
# Can you predict the labels of the test set?
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Review
###Code
pipeline.score(X_train, y_train)
###Output
_____no_output_____
###Markdown
Down from 0.7934131736526946 in non-engineered df
###Code
pipeline.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Up from 0.8116591928251121 in the non-engineered df
###Code
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Precision is 10% lower in the survived category. High precision == low FP rate. This model performs 10% better in relation to false positives (assigning survived when in fact died) when the assigned class is 0 than 1. Recall (false negative rate - assigning died but in truth survived) is largely comparable across both classes. The harmonic mean of precision and recall - f1 - is 6 percent higher when the assigned class is 0. This has resulted in 133 rows (versus 90 rows in survived) of the true response sampled falling within the 0 (died) category. Overall, it appears that this model is considerably better at predicting when people died rather than survived. After comparison of the two datasets and logreg vs knn, this model-dataset combination yields the highest performance metrics across the board. Tuning
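###Markdown
As a rough sketch of what the grid search below automates (this cell is not part of the original script), the same idea can be written as an explicit loop over a few candidate values of `C`, scoring each with 5-fold cross-validation on the training split defined above.
###Code
# Sketch only: score a handful of candidate C values with 5-fold cross-validation.
# GridSearchCV below does the same over the full parameter grid and keeps the best combination.
from sklearn.base import clone
from sklearn.model_selection import cross_val_score
for c in np.logspace(-1, 1, 5):
    candidate = clone(pipeline).set_params(logistic_regression__C=c)
    scores = cross_val_score(candidate, X_train, y_train, cv=5)
    print("C={:.3f}, mean CV accuracy={:.3f}".format(c, scores.mean()))
###Output
_____no_output_____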
###Code
# specify the hyperparameter space
parameters = [
{'logistic_regression__C':np.logspace(-1,1,20),
'logistic_regression__penalty':['l2'],
'logistic_regression__solver': ['lbfgs'],
'logistic_regression__max_iter' : [50, 100, 150, 200]
}
]
# instantiate the gridsearch object with 5 fold cross validation
cv = GridSearchCV(pipeline, param_grid=parameters, cv=5)
###Output
_____no_output_____
###Markdown
Train model
###Code
# fit the cross validation model to the training data
cv.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict labels
###Code
# predict labels of test set
y_pred = cv.predict(X_test)
###Output
_____no_output_____
###Markdown
Review
###Code
print("Accuracy: {}".format(cv.score(X_test, y_test)))
print(classification_report(y_test, y_pred))
print("Tuned model parameters: {}".format(cv.best_params_))
###Output
_____no_output_____ |
notebooks/cross_validation_ex_01.ipynb | ###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
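###Markdown
Before plugging it into `cross_validate`, it may help to look at what a `ShuffleSplit` object actually produces. The cell below is only an illustrative sketch added here (the `n_splits` and `test_size` values are picked for display and are not the defaults asked for in the exercise): each split is an independent random partition of the row indices.
###Code
# Illustration only: show the size of each random train/test partition ShuffleSplit draws.
from sklearn.model_selection import ShuffleSplit
demo_splitter = ShuffleSplit(n_splits=3, test_size=0.25, random_state=0)
for fold, (train_idx, test_idx) in enumerate(demo_splitter.split(data)):
    print(f"split {fold}: {len(train_idx)} train rows, {len(test_idx)} test rows")
###Output
_____no_output_____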
###Code
# Write your code here.
from sklearn.model_selection import cross_validate, ShuffleSplit
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(model, data, target,cv=cv, n_jobs=2)
cv_results = pd.DataFrame(cv_results)
cv_results
print(
f"Accuracy score of our model:\n"
f"{cv_results['test_score'].mean():.3f} +/- "
f"{cv_results['test_score'].std():.3f}"
)
###Output
Accuracy score of our model:
0.765 +/- 0.043
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
model.get_params().keys()
# Write your code here.
# %%time
from sklearn.model_selection import validation_curve
import numpy as np
#max_depth = [1, 5, 10, 15, 20, 25]
gamma = np.logspace(-3, 2, num=30)
train_scores, test_scores = validation_curve(
model, data, target, param_name="svc__gamma", param_range=gamma, cv=cv, n_jobs=2)
train_errors, test_errors = -train_scores, -test_scores
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
import matplotlib.pyplot as plt
plt.errorbar(gamma, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label='Training score')
plt.errorbar(gamma, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label='Testing score')
plt.legend()
plt.xscale("log")
plt.xlabel(r"Value of hyperparameter $\gamma$")
plt.ylabel("Accuracy score")
_ = plt.title("Validation score of support vector machine")
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
from sklearn.model_selection import learning_curve
import numpy as np
train_sizes = np.linspace(0.1, 1.0, num=5, endpoint=True)
results = learning_curve(
model, data, target, train_sizes=train_sizes, cv=cv, n_jobs=2)
train_size, train_scores, test_scores = results[:3]
# Convert the scores into errors
train_errors, test_errors = -train_scores, -test_scores
import matplotlib.pyplot as plt
plt.errorbar(train_size, train_errors.mean(axis=1),
yerr=train_errors.std(axis=1), label="Training error")
plt.errorbar(train_size, test_errors.mean(axis=1),
yerr=test_errors.std(axis=1), label="Testing error")
plt.legend()
plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Mean absolute error (k$)")
_ = plt.title("Learning curve for decision tree")
plt.errorbar(train_size, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label='Training score')
plt.errorbar(train_size, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label='Testing score')
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Accuracy")
_ = plt.title("Learning curve for support vector machine")
###Output
_____no_output_____
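###Markdown
A quick numeric check of the same question (does adding samples keep improving the model?) is to print the mean test score for each training-set size and see whether it is still rising at the largest sizes. This is only a sketch over the arrays computed above:
###Code
# Sketch: mean cross-validated test accuracy for each training-set size.
for n, score in zip(train_size, test_scores.mean(axis=1)):
    print(f"{int(n):4d} training samples -> mean test accuracy {score:.3f}")
###Output
_____no_output_____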
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* study if it would be useful in term of classification if we could add new samples in the dataset using a learning curve.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
print(data)
print(target)
###Output
Recency Frequency Monetary Time
0 2 50 12500 98
1 0 13 3250 28
2 1 16 4000 35
3 2 20 5000 45
4 1 24 6000 77
.. ... ... ... ...
743 23 2 500 38
744 21 2 500 52
745 23 3 750 62
746 39 1 250 39
747 72 1 250 72
[748 rows x 4 columns]
0 donated
1 donated
2 donated
3 donated
4 not donated
...
743 not donated
744 not donated
745 not donated
746 not donated
747 not donated
Name: Class, Length: 748, dtype: object
###Markdown
We will use a support vector machine classifier (SVM). In its most simple form, a SVM classifier is a linear classifier behaving similarly to a logistic regression. Indeed, the optimization used to find the optimal weights of the linear model is different, but we don't need to know these details for the exercise. Also, this classifier can become more flexible/expressive by using a so-called kernel making the model become non-linear. Again, no requirement regarding the mathematics is required to accomplish this exercise. We will use an RBF kernel where a parameter `gamma` allows to tune the flexibility of the model. First let's create a predictive pipeline made of: * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter; * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
model = make_pipeline(StandardScaler(), SVC(kernel = 'rbf', gamma='auto'))
###Output
_____no_output_____
###Markdown
Evaluate the statistical performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_validate
import pandas as pd
cv = ShuffleSplit(n_splits=10, random_state=0)
cv_results = cross_validate(model, data, target, cv=cv, n_jobs=2, return_train_score = True)
cv_results = pd.DataFrame(cv_results)
print(cv_results)
print(f'The score is {cv_results["test_score"].mean():.3f} +/- {cv_results["test_score"].std():.3f}')
###Output
fit_time score_time test_score train_score
0 0.019992 0.007001 0.680000 0.787519
1 0.021995 0.006001 0.746667 0.793462
2 0.024997 0.006000 0.786667 0.787519
3 0.021995 0.007006 0.800000 0.787519
4 0.020018 0.005000 0.746667 0.777117
5 0.025975 0.005001 0.786667 0.794948
6 0.019023 0.005977 0.800000 0.783061
7 0.024000 0.004999 0.826667 0.791976
8 0.018997 0.006002 0.746667 0.803863
9 0.019002 0.005000 0.733333 0.794948
The score is 0.765 +/- 0.043
###Markdown
As previously mentioned, the parameter `gamma` is one of the parametercontrolling under/over-fitting in support vector machine with an RBF kernel.Compute the validation curve(using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html))to evaluate the effect of the parameter `gamma`. You can vary its valuebetween `10e-3` and `10e2` by generating samples on a logarithmic scale.Thus, you can use `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into details regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
from sklearn.model_selection import validation_curve
import numpy as np
gamma_interval = np.logspace(-3,2,num=30)
## Show the hyperparameters available for this model
print(model.get_params().keys())
train_scores, test_scores = validation_curve(model, data, target, param_name="svc__gamma", param_range=gamma_interval, cv=cv, n_jobs=2)
# print(train_scores.mean(axis=1))
# print(test_scores.mean(axis=1))
###Output
dict_keys(['memory', 'steps', 'verbose', 'standardscaler', 'svc', 'standardscaler__copy', 'standardscaler__with_mean', 'standardscaler__with_std', 'svc__C', 'svc__break_ties', 'svc__cache_size', 'svc__class_weight', 'svc__coef0', 'svc__decision_function_shape', 'svc__degree', 'svc__gamma', 'svc__kernel', 'svc__max_iter', 'svc__probability', 'svc__random_state', 'svc__shrinking', 'svc__tol', 'svc__verbose'])
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
import matplotlib.pyplot as plt
# plt.plot(gamma_interval, train_scores.mean(axis=1), label="Training error")
# plt.plot(gamma_interval, test_scores.mean(axis=1), label="Testing error")
plt.errorbar(gamma_interval, train_scores.mean(axis=1), yerr=train_scores.std(axis=1), label="Training error")
plt.errorbar(gamma_interval, test_scores.mean(axis=1), yerr=test_scores.std(axis=1), label="Testing error")
plt.legend()
plt.xscale("log")
plt.xlabel("Variation of gamma hyperparameter")
plt.ylabel("Score")
_ = plt.title("Validation curve for SVC (after StandardScaler)")
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
from sklearn.model_selection import learning_curve
train_size = np.linspace(0.1, 1, num=10, endpoint=True)
results = learning_curve(model, data, target, train_sizes=train_size, cv=cv, n_jobs=2)
train_size, train_scores, test_scores = results[:3]
plt.errorbar(train_size, train_scores.mean(axis=1), yerr=train_scores.std(axis=1), label="Training error")
plt.errorbar(train_size, test_scores.mean(axis=1), yerr=test_scores.std(axis=1), label="Testing error")
plt.legend()
# plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Score")
_ = plt.title("Learning curve for SVC model")
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* study if it would be useful in term of classification if we could add new samples in the dataset using a learning curve.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simple form, a SVM classifier is a linear classifier behaving similarly to a logistic regression. Indeed, the optimization used to find the optimal weights of the linear model is different, but we don't need to know these details for the exercise. Also, this classifier can become more flexible/expressive by using a so-called kernel making the model become non-linear. Again, no requirement regarding the mathematics is required to accomplish this exercise. We will use an RBF kernel where a parameter `gamma` allows to tune the flexibility of the model. First let's create a predictive pipeline made of: * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter; * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), SVC())
###Output
_____no_output_____
###Markdown
Evaluate the statistical performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
from sklearn.model_selection import cross_validate, ShuffleSplit
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(model, data, target, cv=cv, n_jobs=2)
cv_results = pd.DataFrame(cv_results)
cv_results
print(
f"Accuracy score of our model:\n"
f"{cv_results['test_score'].mean():.3f} +/- "
f"{cv_results['test_score'].std():.3f}"
)
###Output
Accuracy score of our model:
0.765 +/- 0.043
###Markdown
As previously mentioned, the parameter `gamma` is one of the parametercontrolling under/over-fitting in support vector machine with an RBF kernel.Compute the validation curve(using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html))to evaluate the effect of the parameter `gamma`. You can vary its valuebetween `10e-3` and `10e2` by generating samples on a logarithmic scale.Thus, you can use `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into details regardingaccessing and setting hyperparameter in the next section.
###Code
%%time
import numpy as np
from sklearn.model_selection import validation_curve
gamma = np.logspace(-3, 2, num=30)
train_scores, test_scores = validation_curve(
model, data, target, param_name='svc__gamma', param_range=gamma, cv=cv, n_jobs=2
)
###Output
Wall time: 10.6 s
###Markdown
Plot the validation curve for the train and test scores.
###Code
import matplotlib.pyplot as plt
plt.errorbar(gamma, train_scores.mean(axis=1), yerr=train_scores.std(axis=1), label="Training score")
plt.errorbar(gamma, test_scores.mean(axis=1), yerr=test_scores.std(axis=1), label="Testing score")
plt.legend()
plt.xscale("log")
plt.xlabel("Gamma param of SVC")
plt.ylabel("Accuracy score")
_ = plt.title("Validation curve for SVM")
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
train_sizes = np.linspace(0.1, 1.0, num=10, endpoint=True)
train_sizes
from sklearn.model_selection import learning_curve
results = learning_curve(
model, data, target, train_sizes=train_sizes, cv=cv, n_jobs=2
)
train_size, train_scores, test_scores = results[:3]
plt.errorbar(train_size, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label="Training score")
plt.errorbar(train_size, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label="Testing score")
plt.legend()
plt.xlabel("Number of samples in the training set")
plt.ylabel("Accuracy score")
_ = plt.title("Learning curve for SVM")
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`. Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise 01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* study if it would be useful in term of classification if we could add new samples in the dataset using a learning curve.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel. The model becomes non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the statistical performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parametercontrolling under/over-fitting in support vector machine with an RBF kernel.Compute the validation curve(using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html))to evaluate the effect of the parameter `gamma`. You can vary its valuebetween `10e-3` and `10e2` by generating samples on a logarithmic scale.Thus, you can use `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into details regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* study if it would be useful in term of classification if we could add new samples in the dataset using a learning curve.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel. The model becomes non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the statistical performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parametercontrolling under/over-fitting in support vector machine with an RBF kernel.Compute the validation curve(using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html))to evaluate the effect of the parameter `gamma`. You can vary its valuebetween `10e-3` and `10e2` by generating samples on a logarithmic scale.Thus, you can use `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into details regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01 The aim of this exercise is to make the following experiments: * train and test a support vector machine classifier through cross-validation; * study the effect of the parameter gamma of this classifier using a validation curve; * use a learning curve to determine the usefulness of adding new samples to the dataset when building a classifier. To make these experiments we will first load the blood transfusion dataset. Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its simplest form, an SVM classifier is a linear classifier behaving similarly to a logistic regression. Indeed, the optimization used to find the optimal weights of the linear model is different, but we don't need to know these details for the exercise. Also, this classifier can become more flexible/expressive by using a so-called kernel that makes the model non-linear. Again, no knowledge of the mathematics is required to accomplish this exercise. We will use an RBF kernel, where a parameter `gamma` allows us to tune the flexibility of the model. First let's create a predictive pipeline made of: * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameters; * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` should be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a `ShuffleSplit` scheme. Thus, you can use [`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html) and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html) to the `cv` parameter. Only fix `random_state=0` in the `ShuffleSplit` and leave the other parameters at their defaults.
###Code
# Write your code here.
from sklearn.model_selection import ShuffleSplit, cross_validate
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(model, data, target, cv=cv)
###Output
_____no_output_____
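###Markdown
As a quick check (a small addition, not required by the exercise), the mean and spread of the cross-validated test accuracy can be printed from the `cv_results` dictionary returned in the previous cell:
###Code
# Optional summary; `test_score` is the default key returned by cross_validate.
print(f"Accuracy: {cv_results['test_score'].mean():.3f} "
      f"+/- {cv_results['test_score'].std():.3f}")
###Output
_____no_output_____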
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameters controlling under-/over-fitting in a support vector machine with an RBF kernel. Evaluate the effect of the parameter `gamma` by using the [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function. You can leave the default `scoring=None`, which is equivalent to `scoring="accuracy"` for classification problems. You can vary `gamma` between `10e-3` and `10e2` by generating samples on a logarithmic scale with the help of `np.logspace(-3, 2, num=30)`. Since we are manipulating a `Pipeline`, the parameter name will be `svc__gamma` instead of only `gamma`. You can retrieve the parameter name using `model.get_params().keys()`. We will go into more detail regarding accessing and setting hyperparameters in the next section.
###Code
# Write your code here.
from sklearn.model_selection import validation_curve
import numpy as np
gamma = np.logspace(-3, 2, num=30)
train_scores, test_scores = validation_curve(
model, data, target, param_name="svc__gamma", param_range=gamma)
train_error, test_error = 1 - train_scores, 1 - test_scores  # error rate = 1 - accuracy
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
import matplotlib.pyplot as plt
plt.errorbar(gamma, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label="Train scores")
plt.errorbar(gamma, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label="Test scores")
plt.legend()
plt.xscale("log")
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to the dataset could help our model generalize better. Compute the learning curve (using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html)) by computing the train and test scores for different training dataset sizes. Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
from sklearn.model_selection import learning_curve
train_sizes = np.linspace(0.1, 1.0, 5, endpoint=True)
# learning_curve returns the absolute training-set sizes actually used
train_sizes_abs, train_scores, test_scores = learning_curve(
    model, data, target, train_sizes=train_sizes, cv=cv)
plt.errorbar(train_sizes_abs, train_scores.mean(axis=1),
             yerr=train_scores.std(axis=1), label="Train score")
plt.errorbar(train_sizes_abs, test_scores.mean(axis=1),
             yerr=test_scores.std(axis=1), label="Test score")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Accuracy")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`. Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* study if it would be useful in term of classification if we could add new samples in the dataset using a learning curve.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel making the model becomes non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the statistical performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parametercontrolling under/over-fitting in support vector machine with an RBF kernel.Compute the validation curve(using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html))to evaluate the effect of the parameter `gamma`. You can vary its valuebetween `10e-3` and `10e2` by generating samples on a logarithmic scale.Thus, you can use `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into details regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01The aim of this exercise is to make the following experiments:* train and test a support vector machine classifier through cross-validation;* study the effect of the parameter gamma of this classifier using a validation curve;* use a learning curve to determine the usefulness of adding new samples in the dataset when building a classifier.To make these experiments we will first load the blood transfusion dataset. NoteIf you want a deeper overview regarding this dataset, you can refer to theAppendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its most simpleform, a SVM classifier is a linear classifier behaving similarly to alogistic regression. Indeed, the optimization used to find the optimalweights of the linear model are different but we don't need to know thesedetails for the exercise.Also, this classifier can become more flexible/expressive by using aso-called kernel that makes the model become non-linear. Again, no requirementregarding the mathematics is required to accomplish this exercise.We will use an RBF kernel where a parameter `gamma` allows to tune theflexibility of the model.First let's create a predictive pipeline made of:* a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameter;* a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` could be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a`ShuffleSplit` scheme. Thus, you can use[`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html)and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html)to the `cv` parameter. Only fix the `random_state=0` in the `ShuffleSplit`and let the other parameters to the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameterscontrolling under/over-fitting in support vector machine with an RBF kernel.Evaluate the effect of the parameter `gamma` by using the[`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function.You can leave the default `scoring=None` which is equivalent to`scoring="accuracy"` for classification problems. You can vary `gamma`between `10e-3` and `10e2` by generating samples on a logarithmic scalewith the help of `np.logspace(-3, 2, num=30)`.Since we are manipulating a `Pipeline` the parameter name will be set to`svc__gamma` instead of only `gamma`. You can retrieve the parameter nameusing `model.get_params().keys()`. We will go more into detail regardingaccessing and setting hyperparameter in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to thedataset could help our model to better generalize. Compute the learning curve(using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html))by computing the train and test scores for different training dataset size.Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01 The aim of this exercise is to make the following experiments: * train and test a support vector machine classifier through cross-validation; * study the effect of the parameter gamma of this classifier using a validation curve; * use a learning curve to determine the usefulness of adding new samples to the dataset when building a classifier. To make these experiments we will first load the blood transfusion dataset. Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its simplest form, an SVM classifier is a linear classifier behaving similarly to a logistic regression. Indeed, the optimization used to find the optimal weights of the linear model is different, but we don't need to know these details for the exercise. Also, this classifier can become more flexible/expressive by using a so-called kernel that makes the model non-linear. Again, no knowledge of the mathematics is required to accomplish this exercise. We will use an RBF kernel, where a parameter `gamma` allows us to tune the flexibility of the model. First let's create a predictive pipeline made of: * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameters; * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` should be set to `"rbf"`. Note that this is the default.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
clf.fit(X=data, y=target)
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a `ShuffleSplit` scheme. Thus, you can use [`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html) and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html) to the `cv` parameter. Only fix `random_state=0` in the `ShuffleSplit` and leave the other parameters at their defaults.
###Code
%time
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_validate
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(clf, data, target, cv=cv, n_jobs=2)
cv_results
###Output
CPU times: user 2 µs, sys: 1e+03 ns, total: 3 µs
Wall time: 5.25 µs
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameters controlling under-/over-fitting in a support vector machine with an RBF kernel. Evaluate the effect of the parameter `gamma` by using the [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function. You can leave the default `scoring=None`, which is equivalent to `scoring="accuracy"` for classification problems. You can vary `gamma` between `10e-3` and `10e2` by generating samples on a logarithmic scale with the help of `np.logspace(-3, 2, num=30)`. Since we are manipulating a `Pipeline`, the parameter name will be `svc__gamma` instead of only `gamma`. You can retrieve the parameter name using `model.get_params().keys()`. We will go into more detail regarding accessing and setting hyperparameters in the next section.
###Code
print(
f"Accuracy score of our model:\n"
f"{cv_results['test_score'].mean():.3f} +/- "
f"{cv_results['test_score'].std():.3f}"
)
import numpy as np
from sklearn.model_selection import validation_curve
gammas = np.logspace(-3, 2, num=30)
param_name = "svc__gamma"
train_scores, test_scores = validation_curve(
clf, data, target, param_name=param_name, param_range=gammas, cv=cv,
n_jobs=2)
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
import matplotlib.pyplot as plt
plt.errorbar(gammas, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label='Training score')
plt.errorbar(gammas, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label='Testing score')
plt.legend()
plt.xscale("log")
plt.xlabel(r"Value of hyperparameter $\gamma$")
plt.ylabel("Accuracy score")
_ = plt.title("Validation score of support vector machine")
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to the dataset could help our model generalize better. Compute the learning curve (using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html)) by computing the train and test scores for different training dataset sizes. Plot the train and test scores with respect to the number of samples.
###Code
from sklearn.model_selection import learning_curve
train_sizes = np.linspace(0.1, 1, num=10)
results = learning_curve(
clf, data, target, train_sizes=train_sizes, cv=cv, n_jobs=2)
train_size, train_scores, test_scores = results[:3]
plt.errorbar(train_size, train_scores.mean(axis=1),
yerr=train_scores.std(axis=1), label='Training score')
plt.errorbar(train_size, test_scores.mean(axis=1),
yerr=test_scores.std(axis=1), label='Testing score')
plt.legend()
plt.xlabel("Number of samples in the training set")
plt.ylabel("Accuracy")
_ = plt.title("Learning curve for support vector machine")
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01 The aim of this exercise is to make the following experiments: * train and test a support vector machine classifier through cross-validation; * study the effect of the parameter gamma of this classifier using a validation curve; * use a learning curve to determine the usefulness of adding new samples to the dataset when building a classifier. To make these experiments we will first load the blood transfusion dataset. Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its simplest form, an SVM classifier is a linear classifier behaving similarly to a logistic regression. Indeed, the optimization used to find the optimal weights of the linear model is different, but we don't need to know these details for the exercise. Also, this classifier can become more flexible/expressive by using a so-called kernel that makes the model non-linear. Again, no knowledge of the mathematics is required to accomplish this exercise. We will use an RBF kernel, where a parameter `gamma` allows us to tune the flexibility of the model. First let's create a predictive pipeline made of: * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameters; * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` should be set to `"rbf"`. Note that this is the default.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Evaluate the generalization performance of your model by cross-validation with a `ShuffleSplit` scheme. Thus, you can use [`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html) and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html) to the `cv` parameter. Only fix `random_state=0` in the `ShuffleSplit` and leave the other parameters at their defaults.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameters controlling under-/over-fitting in a support vector machine with an RBF kernel. Evaluate the effect of the parameter `gamma` by using the [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html) function. You can leave the default `scoring=None`, which is equivalent to `scoring="accuracy"` for classification problems. You can vary `gamma` between `10e-3` and `10e2` by generating samples on a logarithmic scale with the help of `np.logspace(-3, 2, num=30)`. Since we are manipulating a `Pipeline`, the parameter name will be `svc__gamma` instead of only `gamma`. You can retrieve the parameter name using `model.get_params().keys()`. We will go into more detail regarding accessing and setting hyperparameters in the next section.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
Now, you can perform an analysis to check whether adding new samples to the dataset could help our model generalize better. Compute the learning curve (using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html)) by computing the train and test scores for different training dataset sizes. Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M2.01 The aim of this exercise is to make the following experiments: * train and test a support vector machine classifier through cross-validation; * study the effect of the parameter gamma of this classifier using a validation curve; * study whether it would be useful, in terms of classification, to add new samples to the dataset, using a learning curve. To make these experiments we will first load the blood transfusion dataset. Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.
###Code
import pandas as pd
blood_transfusion = pd.read_csv("../datasets/blood_transfusion.csv")
data = blood_transfusion.drop(columns="Class")
target = blood_transfusion["Class"]
###Output
_____no_output_____
###Markdown
We will use a support vector machine classifier (SVM). In its simplest form, an SVM classifier is a linear classifier behaving similarly to a logistic regression. Indeed, the optimization used to find the optimal weights of the linear model is different, but we don't need to know these details for the exercise. Also, this classifier can become more flexible/expressive by using a so-called kernel, making the model non-linear. Again, no knowledge of the mathematics is required to accomplish this exercise. We will use an RBF kernel, where a parameter `gamma` allows us to tune the flexibility of the model. First let's create a predictive pipeline made of: * a [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) with default parameters; * a [`sklearn.svm.SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) where the parameter `kernel` should be set to `"rbf"`. Note that this is the default.
###Code
# to display nice model diagram
from sklearn import set_config
set_config(display='diagram')
# Write your code here.
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
model
###Output
_____no_output_____
###Markdown
Evaluate the statistical performance of your model by cross-validation with a `ShuffleSplit` scheme. Thus, you can use [`sklearn.model_selection.cross_validate`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html) and pass a [`sklearn.model_selection.ShuffleSplit`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.ShuffleSplit.html) to the `cv` parameter. Only fix `random_state=0` in the `ShuffleSplit` and leave the other parameters at their defaults.
###Code
# Write your code here.
import pandas as pd
from sklearn.model_selection import cross_validate, ShuffleSplit
cv = ShuffleSplit(random_state=0)
cv_results = cross_validate(model, data, target,
cv=cv)
cv_results = pd.DataFrame(cv_results)
cv_results
###Output
_____no_output_____
###Markdown
As previously mentioned, the parameter `gamma` is one of the parameters controlling under-/over-fitting in a support vector machine with an RBF kernel. Compute the validation curve (using [`sklearn.model_selection.validation_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.validation_curve.html)) to evaluate the effect of the parameter `gamma`. You can vary its value between `10e-3` and `10e2` by generating samples on a logarithmic scale; thus, you can use `np.logspace(-3, 2, num=30)`. Since we are manipulating a `Pipeline`, the parameter name will be `svc__gamma` instead of only `gamma`. You can retrieve the parameter name using `model.get_params().keys()`. We will go into more detail regarding accessing and setting hyperparameters in the next section.
###Code
# Write your code here.
from sklearn.model_selection import validation_curve
import numpy as np
gamma = np.logspace(-3, 2, num=30)
train_scores, test_scores = validation_curve(
model, data, target, param_name="svc__gamma", param_range=gamma,
cv=cv)
train_errors, test_errors = 1 - train_scores, 1 - test_scores  # error rate = 1 - accuracy
###Output
_____no_output_____
###Markdown
Plot the validation curve for the train and test scores.
###Code
# Write your code here.
import matplotlib.pyplot as plt
plt.errorbar(gamma, train_scores.mean(axis=1),
             yerr=train_scores.std(axis=1), label="Training score")
plt.errorbar(gamma, test_scores.mean(axis=1),
             yerr=test_scores.std(axis=1), label="Testing score")
plt.legend()
plt.xscale("log")
plt.xlabel("Gamma value for SVC")
plt.ylabel("Accuracy score")
_ = plt.title("Validation curve for SVC")
###Output
_____no_output_____
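###Markdown
Optionally, the gamma value giving the best mean test accuracy can be read off directly from the arrays computed above (a small addition, not part of the original answer; `gamma` and `test_scores` come from the validation-curve cell).
###Code
# Index of the highest mean cross-validated test score over the gamma grid
best_gamma = gamma[test_scores.mean(axis=1).argmax()]
print(f"Best mean test accuracy obtained at gamma = {best_gamma:.3g}")
###Output
_____no_output_____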
###Markdown
Now, you can perform an analysis to check whether adding new samples to the dataset could help our model generalize better. Compute the learning curve (using [`sklearn.model_selection.learning_curve`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.learning_curve.html)) by computing the train and test scores for different training dataset sizes. Plot the train and test scores with respect to the number of samples.
###Code
# Write your code here.
from sklearn.model_selection import learning_curve
import numpy as np
train_sizes = np.linspace(0.1, 1.0, num=5, endpoint=True)
train_sizes
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=30, test_size=0.2)
results = learning_curve(
model, data, target, train_sizes=train_sizes, cv=cv)
train_size, train_scores, test_scores = results[:3]
# Convert the accuracy scores into error rates (1 - accuracy)
train_errors, test_errors = 1 - train_scores, 1 - test_scores
import matplotlib.pyplot as plt
plt.errorbar(train_size, train_errors.mean(axis=1),
yerr=train_errors.std(axis=1), label="Training error")
plt.errorbar(train_size, test_errors.mean(axis=1),
yerr=test_errors.std(axis=1), label="Testing error")
plt.legend()
plt.xscale("log")
plt.xlabel("Number of samples in the training set")
plt.ylabel("Error rate (1 - accuracy)")
_ = plt.title("Learning curve for SVC")
###Output
_____no_output_____ |
old ideas/Protocol 4/Scripts/Protocol_4.0/protocol_4.0.ipynb | ###Markdown
Protocol 4.0 VLS and SAG NWs with the standard lock-in technique. The code can be used for 1, 2 or 3 devices simultaneously. This version supports both 4-probe and 2-probe measurements. Imports
###Code
# Copy this to all notebooks!
from qcodes.logger import start_all_logging
start_all_logging()
# Import qcodes and other necessary packages
import qcodes as qc
import numpy as np
import time
from time import sleep
import matplotlib
import matplotlib.pyplot as plt
import os
import os.path
# Import device drivers
from qcodes.instrument_drivers.QuantumDesign.DynaCoolPPMS import DynaCool
from qcodes.instrument_drivers.Keysight.Infiniium import Infiniium
# Import qcodes packages
from qcodes import Station
from qcodes import config
from qcodes.dataset.measurements import Measurement
from qcodes.dataset.plotting import plot_by_id
from qcodes.dataset.database import initialise_database,get_DB_location
from qcodes.dataset.experiment_container import (Experiment,
load_last_experiment,
new_experiment,
load_experiment_by_name)
from qcodes.instrument.base import Instrument
from qcodes.utils.dataset.doNd import do1d,do2d
%matplotlib notebook
go = 7.7480917310e-5
###Output
_____no_output_____
###Markdown
Station (need to load 3 Keithleys and 6 Lock-In Amps)
###Code
# Create station, instantiate instruments
Instrument.close_all()
path_to_station_file = 'C:/Users/lyn-ppmsmsr-01usr/Desktop/station.yaml'
# 'file//station.yaml'
# Here we load the station file.
station = Station()
station.load_config_file(path_to_station_file)
# Connect to ppms
#Instrument.find_instrument('ppms_cryostat')
ppms = DynaCool.DynaCool(name = "ppms_cryostat", address="TCPIP0::10.10.117.37::5000::SOCKET")
station.add_component(ppms)
# SRS
lockin_1 = station.load_instrument('lockin_1')
lockin_2 = station.load_instrument('lockin_2')
lockin_3 = station.load_instrument('lockin_3')
lockin_4 = station.load_instrument('lockin_4')
lockin_5 = station.load_instrument('lockin_5')
lockin_6 = station.load_instrument('lockin_6')
# DMMs
dmm_a = station.load_instrument('Keithley_A')
dmm_b = station.load_instrument('Keithley_B')
dmm_c = station.load_instrument('Keithley_C')
dmm_a.smua.volt(0) # Set voltages to 0
dmm_a.smub.volt(0) # Set voltages to 0
dmm_b.smua.volt(0) # Set voltages to 0
dmm_b.smub.volt(0) # Set voltages to 0
dmm_c.smua.volt(0) # Set voltages to 0
dmm_c.smub.volt(0) # Set voltages to 0
for inst in station.components.values():
inst.print_readable_snapshot()
###Output
_____no_output_____
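###Markdown
Optional quick check that the cryostat responds before continuing: the two state parameters used throughout this notebook can be read back directly (a small sanity check, not part of the original protocol).
###Code
# Uses only parameters already referenced elsewhere in this notebook
print("Magnet state:", ppms.magnet_state())
print("Temperature state:", ppms.temperature_state())
###Output
_____no_output_____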
###Markdown
DB File, Location
###Code
### Initialize database, make new measurement
mainpath = 'C:/Users/MicrosoftQ/Desktop/Results/Operator_name' #remember to change << /Operator_name >> to save the db file in your own user folder
config.current_config.core.db_location = os.path.join(mainpath,'GROWTHXXXX_BATCHXX_YYYYMMDD.db')
config.current_config
newpath = os.path.join(mainpath,'GROWTHXXXX_BATCHXX_YYYYMMDD')
if not os.path.exists(newpath):
os.makedirs(newpath)
figurepath = newpath
initialise_database()
###Output
_____no_output_____
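###Markdown
Optional: confirm which database file is active after initialisation, using the `get_DB_location` helper already imported above.
###Code
# Prints the path of the database that initialise_database() just set up
print(get_DB_location())
###Output
_____no_output_____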
###Markdown
Functions
###Code
def wait_for_field():
time.sleep(1)
Magnet_state = ppms.magnet_state()
    while Magnet_state != 'holding':
#print('waiting for field')
time.sleep(0.1)
Magnet_state = ppms.magnet_state()
#print('field ready')
return
def wait_for_field_ramp():
Magnet_state = ppms.magnet_state()
    while Magnet_state != 'ramping':
time.sleep(1)
Magnet_state = ppms.magnet_state()
return
def field_ready():
return ppms.magnet_state() == 'holding'
def wait_for_temp():
Temp_state = ppms.temperature_state()
    while Temp_state != 'stable':
time.sleep(1)
Temp_state = ppms.temperature_state()
return
def wait_for_near_temp():
Temp_state = ppms.temperature_state()
    while Temp_state != 'near':
time.sleep(2)
Temp_state = ppms.temperature_state()
time.sleep(10)
return
###Output
_____no_output_____
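###Markdown
A minimal usage sketch of how these helpers are typically combined: start a field ramp through the driver, then block until the PPMS reports a stable state before reading out. The `start_ramp` and `measure` callables below are placeholders for whatever driver/measurement calls you use; they are not defined in this notebook.
###Code
# Hypothetical wrapper: `start_ramp` and `measure` are stand-ins supplied by the
# user; only the waiting logic comes from the helpers defined above.
def settle_and_measure(setpoints, start_ramp, measure):
    results = []
    for value in setpoints:
        start_ramp(value)      # begin ramping towards the setpoint (driver-specific)
        wait_for_field()       # block until magnet_state() reports 'holding'
        results.append(measure())
    return results
###Output
_____no_output_____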
###Markdown
Lock-in add-on functions: gains and conductance
###Code
# AMPLIFICATIONS AND VOLTAGE DIVISIONS
ACdiv = 1e-4
DCdiv = 1e-2
GIamp1 = 1e7
GVamp2 = 100
GIamp3 = 1e6
GVamp4 = 100
GIamp5 = 1e6
GVamp6 = 100
# DEFINITIONS OF FUNCTIONS FOR DIFFERENTIAL CONDUCTANCE AND RESISTANCE FOR 2- AND 4-PROBE MEASUREMENTS
# Lock-ins 1(current), 2(voltage)
def desoverh_fpm12():
volt_ampl_1 = lockin_1.X
volt_ampl_2 = lockin_2.X
I_fpm = volt_ampl_1()/GIamp1
V_fpm = volt_ampl_2()/GVamp2
if V_fpm== 0:
dcond_fpm = 0
else:
dcond_fpm = I_fpm/V_fpm/go
return dcond_fpm
def desoverh_tpm1():
volt_ampl = lockin_1.X
sig_ampl = lockin_1.amplitude()
I_tpm = volt_ampl()/GIamp1
V_tpm = sig_ampl*ACdiv
dcond_tpm = I_tpm/V_tpm/go
return dcond_tpm
def ohms_law12():
volt_ampl_1 = lockin_1.X
volt_ampl_2 = lockin_2.X
I_fpm = volt_ampl_1()/GIamp1
V_fpm = volt_ampl_2()/GVamp2
if I_fpm== 0:
res_fpm = 0
else:
res_fpm = V_fpm/I_fpm
return res_fpm
# Lock-ins 3(current), 4(voltage)
def desoverh_fpm34():
volt_ampl_3 = lockin_3.X
volt_ampl_4 = lockin_4.X
I_fpm = volt_ampl_3()/GIamp3
V_fpm = volt_ampl_4()/GVamp4
if V_fpm== 0:
dcond_fpm = 0
else:
dcond_fpm = I_fpm/V_fpm/go
return dcond_fpm
def desoverh_tpm3():
volt_ampl = lockin_3.X
sig_ampl = lockin_3.amplitude()
    I_tpm = volt_ampl()/GIamp3  # current amplifier gain for the lock-in 3 branch
V_tpm = sig_ampl*ACdiv
dcond_tpm = I_tpm/V_tpm/go
return dcond_tpm
def ohms_law34():
volt_ampl_3 = lockin_3.X
volt_ampl_4 = lockin_4.X
I_fpm = volt_ampl_3()/GIamp3
V_fpm = volt_ampl_4()/GVamp4
if I_fpm== 0:
res_fpm = 0
else:
res_fpm = V_fpm/I_fpm
return res_fpm
# Lock-ins 5(current), 6(voltage)
def desoverh_fpm56():
volt_ampl_5 = lockin_5.X
volt_ampl_6 = lockin_6.X
I_fpm = volt_ampl_5()/GIamp5
V_fpm = volt_ampl_6()/GVamp6
if V_fpm== 0:
dcond_fpm = 0
else:
dcond_fpm = I_fpm/V_fpm/go
return dcond_fpm
def desoverh_tpm5():
volt_ampl = lockin_5.X
sig_ampl = lockin_5.amplitude()
    I_tpm = volt_ampl()/GIamp5  # current amplifier gain for the lock-in 5 branch
V_tpm = sig_ampl*ACdiv
dcond_tpm = I_tpm/V_tpm/go
return dcond_tpm
def ohms_law56():
volt_ampl_5 = lockin_5.X
volt_ampl_6 = lockin_6.X
I_fpm = volt_ampl_5()/GIamp5
V_fpm = volt_ampl_6()/GVamp6
if I_fpm== 0:
res_fpm = 0
else:
res_fpm = V_fpm/I_fpm
return res_fpm
try:
lockin_1.add_parameter("diff_conductance_fpm", label="dI/dV", unit="2e^2/h", get_cmd = desoverh_fpm12)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_1.parameters['diff_conductance_fpm']
try:
lockin_1.add_parameter("conductance_tpm", label="I/V", unit="2e^2/h", get_cmd = desoverh_tpm1)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_1.parameters['conductance_tpm']
try:
lockin_1.add_parameter("resistance_fpm", label="R", unit="Ohm", get_cmd = ohms_law12)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_1.parameters['resistance_fpm']
try:
lockin_3.add_parameter("diff_conductance_fpm", label="dI/dV", unit="2e^2/h", get_cmd = desoverh_fpm34)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_3.parameters['diff_conductance_fpm']
try:
lockin_3.add_parameter("conductance_tpm", label="I/V", unit="2e^2/h", get_cmd = desoverh_tpm3)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_3.parameters['conductance_tpm']
try:
lockin_3.add_parameter("resistance_fpm", label="R", unit="Ohm", get_cmd = ohms_law34)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_3.parameters['resistance_fpm']
try:
lockin_5.add_parameter("diff_conductance_fpm", label="dI/dV", unit="2e^2/h", get_cmd = desoverh_fpm56)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_5.parameters['diff_conductance_fpm']
try:
lockin_5.add_parameter("conductance_tpm", label="I/V", unit="2e^2/h", get_cmd = desoverh_tpm5)
except KeyError:
print("parameter already exists. Deleting. Try again")
del lockin_5.parameters['conductance_tpm']
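# Hedged alternative (my addition, not part of the original protocol): an idempotent helper that
# removes a stale parameter before registering it, so this cell can be re-run without the
# KeyError/delete/retry pattern above. 'ensure_parameter' is a hypothetical name.
def ensure_parameter(instrument, name, **kwargs):
    if name in instrument.parameters:
        del instrument.parameters[name]
    instrument.add_parameter(name, **kwargs)
# Example (mirrors one of the registrations above):
# ensure_parameter(lockin_1, "diff_conductance_fpm", label="dI/dV", unit="2e^2/h", get_cmd=desoverh_fpm12)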
###Output
_____no_output_____
###Markdown
Measurement parameters
###Code
Vgmin = -2 #V [consult the ppt protocol]
Vgmax = +5 #V [consult the ppt protocol]
Npoints = 801 # [consult the ppt protocol]
VSD = 0 #V DC [consult the ppt protocol]
timedelay = 0.1 # sec [consult the ppt protocol]
VAC = 1 #V AC [consult the ppt protocol]
f = 136.5 #Hz [consult the ppt protocol]
tcI = 0.03 #sec [consult the ppt protocol]
tcV = 0.03 #sec [consult the ppt protocol] Preferably the same with tcI
dB_slope = 12 # dB [consult the ppt protocol]
N = 1 #Repetitions [consult the ppt protocol]
temperature = 1.7 #K
temperature_rate = 0.1
magnetic_field = 0 #T
magnetic_field_rate = 0.22
# Small calculation for measurement parameters
if 1/f*5 <= tcI and 1/f*5 <= tcV:
valid_meas = True
elif 1/f < tcI and 1/f < tcV:
valid_meas = True
print("Warning: Time constant must be much smaller than signal oscillation period", 1/f*1000, "msec")
else:
valid_meas = False
print("Error: Time constant must be smaller than signal oscillation period", 1/f*1000, "msec")
if tcI*2.5<=timedelay and tcV*2.5<=timedelay:
pass  # keep any failure from the time-constant check above instead of overwriting valid_meas
elif tcI<=timedelay and tcV<=timedelay:
pass  # keep any failure from the time-constant check above instead of overwriting valid_meas
print("Warning: Time delay is comparable with time constant")
print("Time constant:",tcI*1e3 ,"msec, (current); ", tcV*1e3, "msec, (voltage)")
print("Time delay:", timedelay*1e3,"msec")
else:
valid_meas = False
print("Error: Time delay is smaller than the time constant")
valid_meas
###Output
_____no_output_____
###Markdown
Frequency test: a small measurement for choosing the frequency. Use whichever lock-in you want to test (e.g. lockin_X).
###Code
new_experiment(name='lockin start-up', sample_name='DEVXX S21D18G38')
# Time constant choice:
# Example: f_min = 60 Hz => t_c = 1/60*2.5 sec = 42 msec => we should choose the closest value: 100 ms
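# Hedged helper (my addition): turn the rule of thumb above (time constant of roughly 2.5 periods
# of the lowest frequency of interest) into a number to compare against the lock-in's discrete settings.
def recommended_time_constant(f_min_hz, periods=2.5):
    return periods / f_min_hz  # seconds
# recommended_time_constant(60) -> ~0.042 s, hence the nearest available setting (0.1 s) used below.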
lockin_1.time_constant(0.1)
tdelay = 0.3
dmm_a.smub.output('on') # Turn on the gate channel
dmm_a.smub.volt(-2) # Set the gate on a very high resistance area (below the pinch-off)
# 1-D sweep for amplitude dependence
#do1d(lockin_1.frequency,45,75,100,tdelay,lockin_1.X,lockin_1.Y,lockin_1.conductance_tpm)
# 2-D sweep repetition on a smaller frequency range for noise inspection
do2d(dmm_a.smua.volt,1,50,50,1,lockin_1.frequency,45,75,100,tdelay,lockin_1.X,lockin_1.Y,lockin_1.conductance_tpm)
dmm_a.smub.volt(0)
dmm_a.smub.output('off')
# Set things up to the station
lockin_1.time_constant(tcI) # set time constant on the lock-in
lockin_1.frequency(f) # set frequency on the lock-in
lockin_1.amplitude(VAC) # set amplitude on the lock-in
lockin_1.filter_slope(dB_slope) # set filter slope on the lock-in
lockin_2.time_constant(tcV) # set time constant on the lock-in
lockin_2.filter_slope(dB_slope) # set filter slope on the lock-in
lockin_3.time_constant(tcI) # set time constant on the lock-in
lockin_3.frequency(f) # set frequency on the lock-in
lockin_3.amplitude(VAC) # set amplitude on the lock-in
lockin_3.filter_slope(dB_slope) # set filter slope on the lock-in
lockin_4.time_constant(tcV) # set time constant on the lock-in
lockin_4.filter_slope(dB_slope) # set filter slope on the lock-in
lockin_5.time_constant(tcI) # set time constant on the lock-in
lockin_5.frequency(f) # set frequency on the lock-in
lockin_5.amplitude(VAC) # set amplitude on the lock-in
lockin_5.filter_slope(dB_slope) # set filter slope on the lock-in
lockin_6.time_constant(tcV) # set time constant on the lock-in
lockin_6.filter_slope(dB_slope) # set filter slope on the lock-in
dcond1 = lockin_1.diff_conductance_fpm
cond1 = lockin_1.conductance_tpm
res1 = lockin_1.resistance_fpm
X1 = lockin_1.X
X2 = lockin_2.X
Y1 = lockin_1.Y
Y2 = lockin_2.Y
dcond3 = lockin_3.diff_conductance_fpm
cond3 = lockin_3.conductance_tpm
res3 = lockin_3.resistance_fpm
X3 = lockin_3.X
X4 = lockin_4.X
Y3 = lockin_3.Y
Y4 = lockin_4.Y
dcond5 = lockin_5.diff_conductance_fpm
cond5 = lockin_5.conductance_tpm
res5 = lockin_5.resistance_fpm
X5 = lockin_5.X
X6 = lockin_6.X
Y5 = lockin_5.Y
Y6 = lockin_6.Y
gate = dmm_a.smub.volt
bias1 = dmm_a.smua.volt
bias3 = dmm_b.smua.volt
bias5 = dmm_b.smub.volt
temp = ppms.temperature # read the temperature
temp_set = ppms.temperature_setpoint # set the temperature
temp_rate = ppms.temperature_rate # set the temperature rate
temp_rate(temperature_rate)
temp_set(temperature)
field = ppms.field_measured # read the magnetic field
field_set = ppms.field_target # set the field; a new qcodes function! field_rate is not in use anymore
field_rate = ppms.field_rate # set the magnetic field rate
field_rate(magnetic_field_rate)
field_set(magnetic_field)
###Output
_____no_output_____
###Markdown
The measurement. Temperature is used as the control parameter for a sequence of measurements in this cell. Source-drain DC bias voltage and gate voltage may be applied. This measurement can be used for both WAL and critical-field studies.
###Code
# If you want to add bias then uncomment
#dmm_a.smua.output('on') # the bias for 1
#dmm_b.smua.output('on') # the bias for 3
#dmm_b.smub.output('on') # the bias for 5
#bias1(1e-3/DCdiv)
#bias3(1e-3/DCdiv)
#bias5(1e-3/DCdiv)
# If you want to add gate voltage then uncomment
gate(0) # set the gate to zero if you will not apply any
# dmm_a.smub.output('on') # Turn on the gate
#gate(2)
# The control parameter (temperature)
paramrange = [1.7] #arbitrary values
#paramrange = np.arange(0,9,0.5) #steps
#paramrange = np.linspace(0,9,10) #Npoints
# Sweeping parameters
b_start = 0
b_end = 5
for var_param in paramrange:
ppms.temperature_setpoint(var_param)
wait_for_temp()
#vv1 = "Vsd1="+"{:.3f}".format(bias1()*DCdiv*1e3)+"mV "
#vv2 = "Vsd2="+"{:.3f}".format(bias2()*DCdiv*1e3)+"mV "
#vv2 = "Vsd3="+"{:.3f}".format(bias3()*DCdiv*1e3)+"mV "
tt = "T="+"{:.3f}".format(temperature())+"K "
gg = "Vg="+"{:.1f}".format(gate())+"V "
ff = "f="+"{:.1f}".format(lockin_1.frequency())+"Hz "
aa = "Ampl="+"{:.4f}".format(lockin_1.amplitude()*ACdiv*1e3)+"mV"
Conditions = tt + gg + ff + aa  # 'tt' (temperature string) is built above; 'bb' was undefined in this cell
d1 = "/1/ DEV00 S99 VH99 VL99 D99"
d2 = "/3/ DEV00 S99 VH99 VL99 D99"
d3 = "/5/ DEV00 S99 VH99 VL99 D99"
Sample_name = d1# + d2 + d3
Experiment_name = "Protocol 4.0: "
new_experiment(name=Experiment_name + Conditions, sample_name = Sample_name)
meas = Measurement()
meas.register_parameter(field)
meas.register_parameter(dcond1, setpoints=(field,))
meas.register_parameter(res1, setpoints=(field,))
meas.register_parameter(X1, setpoints=(field,))
meas.register_parameter(Y1, setpoints=(field,))
meas.register_parameter(X2, setpoints=(field,))
meas.register_parameter(Y2, setpoints=(field,))
# meas.register_parameter(dcond3, setpoints=(field,))
# meas.register_parameter(res3, setpoints=(field,))
# meas.register_parameter(X3, setpoints=(field,))
# meas.register_parameter(Y3, setpoints=(field,))
# meas.register_parameter(X4, setpoints=(field,))
# meas.register_parameter(Y4, setpoints=(field,))
# meas.register_parameter(dcond5, setpoints=(field,))
# meas.register_parameter(res5, setpoints=(field,))
# meas.register_parameter(X5, setpoints=(field,))
# meas.register_parameter(Y5, setpoints=(field,))
# meas.register_parameter(X6, setpoints=(field,))
# meas.register_parameter(Y6, setpoints=(field,))
field_rate(0.2)
field_set(b_start)
ppms.ramp('blocking')
wait_for_field()
with meas.run() as datasaver:
run_id = datasaver.run_id
field_set(b_end)
field_rate(0.003)
ppms.ramp('non-blocking')
while (round(field()*100) != round(b_end*100)):
datasaver.add_result((field,field()),
(dcond1,dcond1()),(res1,res1()),(X1,X1()),(Y1,Y1()),(X2,X2()),(Y2,Y2()))#,
#(dcond3,dcond3()),(res3,res3()),(X3,X3()),(Y3,Y3()),(X4,X4()),(Y4,Y4()),
#(dcond5,dcond5()),(res5,res5()),(X5,X5()),(Y5,Y5()),(X6,X6()),(Y6,Y6()))
sleep(timedelay)
dmm_a.smub.output('off')
###Output
_____no_output_____ |
.ipynb_checkpoints/home_loan-checkpoint.ipynb | ###Markdown
Data Format. A finance company offering home loans wants to automate the loan eligibility process based on customer details provided while filling in the online application form. To automate this process, they have provided a dataset to identify the customer segments that are eligible for a loan amount so that they can specifically target these customers. These details are:- Loan_ID = Unique Loan ID- Gender = Male/ Female- Married = Applicant married (Y/N)- Dependents = Number of dependents- Education = Applicant Education (Graduate/ Under Graduate)- Self_Employed = Self-employed (Y/N)- ApplicantIncome = Applicant income- CoapplicantIncome = Coapplicant income- LoanAmount = Loan amount in thousands- Loan_Amount_Term = Term of loan in months- Credit_History = Credit history meets guidelines (0: Bad, 1: Good)- Property_Area = Urban/ Semi Urban/ Rural- Loan_Status = Loan approved (Y/N)
###Code
import warnings
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
warnings.filterwarnings('ignore')
sklearn.__version__
###Output
_____no_output_____
###Markdown
Load the dataset
###Code
# Load the data
train_data = pd.read_csv(r'C:\Users\asus\Desktop\DATA201\DATASETS\train.csv')
test_data = pd.read_csv(r'C:\Users\asus\Desktop\DATA201\DATASETS\test.csv')
# determine the target column
target_column = 'Loan_Status'
# remove irrelevant variables
train_data = train_data.drop("Loan_ID", axis=1)
test_data = test_data.drop("Loan_ID", axis=1)
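# Optional sanity check (my addition): confirm the remaining columns match the field list
# described in the Data Format section above.
expected_columns = {'Gender', 'Married', 'Dependents', 'Education', 'Self_Employed',
                    'ApplicantIncome', 'CoapplicantIncome', 'LoanAmount',
                    'Loan_Amount_Term', 'Credit_History', 'Property_Area', 'Loan_Status'}
assert set(train_data.columns) == expected_columns, "unexpected columns in train.csv"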
train_data.head()
# convert the target column from categorical to numerical
train_data[target_column].replace({"N":0, "Y":1}, inplace=True)
test_data[target_column].replace({"N":0, "Y":1}, inplace=True)
# # convert yes/no to 1/0
# train_data['Loan_Status'] = train_data.Loan_Status.eq('Y').mul(1)
# test_data['Loan_Status'] = test_data.Loan_Status.eq('Y').mul(1)
train_data.describe()
train_data.head()
train_data.info()
test_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 123 entries, 0 to 122
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Gender 120 non-null object
1 Married 123 non-null object
2 Dependents 121 non-null object
3 Education 123 non-null object
4 Self_Employed 117 non-null object
5 ApplicantIncome 123 non-null int64
6 CoapplicantIncome 123 non-null float64
7 LoanAmount 119 non-null float64
8 Loan_Amount_Term 120 non-null float64
9 Credit_History 112 non-null float64
10 Property_Area 123 non-null object
11 Loan_Status 123 non-null int64
dtypes: float64(4), int64(2), object(6)
memory usage: 11.7+ KB
###Markdown
Explore the training set to gain insights.
###Code
train_data["Dependents"].value_counts()
train_data["Education"].value_counts()
train_data["Property_Area"].value_counts()
loan = train_data.copy()
loan.hist(figsize=(20,12));
fig = plt.gcf()
fig.savefig('hist.pdf', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
* `LoanAmount`: there are not many points with `LoanAmount > 400`;* `ApplicantIncome` peaks around 0-10000, which was very likely the typical applicant income range at the time of data collection. The correlations
###Code
import seaborn as sns
plt.figure(figsize = (10,5))
sns.heatmap(loan.corr(), annot = True)
plt.show()
###Output
_____no_output_____
###Markdown
Comment:* There is a positive correlation between `ApplicantIncome` and `LoanAmount` (0.56), and between `CoapplicantIncome` and `LoanAmount` (0.23).* All the other correlations are weak, as the coefficients are close to 0.
###Code
import seaborn as sns
n_samples_to_plot = 5000
columns = ['ApplicantIncome', 'LoanAmount']
sns.pairplot(data=loan[:n_samples_to_plot], vars=columns,
hue="Loan_Status", plot_kws={'alpha': 0.2},
height=3, diag_kind='hist', diag_kws={'bins': 30});
###Output
_____no_output_____
###Markdown
Select one machine learning model, train, optimise.
###Code
# separate the predictors and the labels
X_train = train_data.drop("Loan_Status", axis=1)
y_train = train_data["Loan_Status"].copy() # save the labels
X_train.head()
y_train.head()
X_train.dtypes
X_train.shape
from sklearn.compose import make_column_selector as selector
from sklearn.compose import ColumnTransformer
# a function for getting all categorical_columns, apart from Dependents
def get_categorical_columns(df):
categorical_columns_selector = selector(dtype_include=object)
categorical_columns = categorical_columns_selector(df.drop("Dependents", axis=1))
return categorical_columns
get_categorical_columns(X_train)
# a function for getting all numerical_columns
def get_numerical_columns(df):
numerical_columns_selector = selector(dtype_exclude=object)
numerical_columns = numerical_columns_selector(df)
return numerical_columns
get_numerical_columns(X_train)
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder, StandardScaler
from sklearn.preprocessing import PolynomialFeatures
# a function for transforming the data
def my_transformation(df):
df = df.copy()
numerical_columns = get_numerical_columns(df)
nominal_columns = get_categorical_columns(df)
ordinal_columns = ['Dependents']
order = [['0', '1', '2', '3+']]
numerical_pipeline = Pipeline([('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
nominal_pipeline = Pipeline([('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OneHotEncoder(handle_unknown='ignore'))])
ordinal_pipeline = Pipeline([('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OrdinalEncoder(categories=order,
handle_unknown='use_encoded_value',
unknown_value=-1,)),
('scaler', StandardScaler())])
preprocessor = ColumnTransformer([
('numerical_transformer', numerical_pipeline, numerical_columns),
('nominal_transformer', nominal_pipeline, nominal_columns),
('ordinal_transformer', ordinal_pipeline, ordinal_columns),
])
# adding new features
preprocessor2 = Pipeline([('pre', preprocessor),
('poly', PolynomialFeatures(degree=2, include_bias=False))])
preprocessor2.fit(df)
return preprocessor2
###Output
_____no_output_____
###Markdown
Prepare the data
###Code
preprocessor = my_transformation(X_train)
X_train_prepared = preprocessor.transform(X_train)
X_train_prepared.shape
from sklearn.model_selection import GridSearchCV
# a function for tuning the model's hyper-parameters using grid search
def tune_model(model, param_grid, X_train_prepared):
grid_search = GridSearchCV(model, param_grid, cv=5, scoring='roc_auc', return_train_score=True)
grid_search.fit(X_train_prepared, y_train);
print('grid_search.best_estimator_: ', grid_search.best_estimator_)
final_model = grid_search.best_estimator_
return final_model
from sklearn.model_selection import cross_val_score, StratifiedKFold, cross_val_predict
# a function for estimating the performance of the model with cross-validation
def estimat_model(model, X_train_prepared, y_train, score):
cv = StratifiedKFold(n_splits=5)
scores = cross_val_score(model, X_train_prepared, y_train, cv=cv, scoring = score)
return scores.mean()
###Output
_____no_output_____
###Markdown
Train a LogisticRegression model
###Code
from sklearn.linear_model import LogisticRegression
lr_model = LogisticRegression(random_state=42,max_iter=1000).fit(X_train_prepared, y_train);
%%time
param_grid = [
{'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]},
]
final_model_lr = tune_model(lr_model, param_grid, X_train_prepared)
###Output
grid_search.best_estimator_: LogisticRegression(C=0.001, max_iter=1000, random_state=42)
Wall time: 5.14 s
###Markdown
Train a SVM model
###Code
from sklearn.svm import SVC
svm = SVC(random_state=42,probability=True).fit(X_train_prepared, y_train)
%%time
param_grid = [
{'C': [0.1, 1, 10, 100, 1000, 10000],
'gamma': [0.001, 0.01, 0.1, 1, 10, 'scale','auto']},
]
final_model_SVM = tune_model(svm, param_grid, X_train_prepared)
###Output
grid_search.best_estimator_: SVC(C=0.1, gamma=0.1, probability=True, random_state=42)
Wall time: 30.9 s
###Markdown
Train a RandomForestClassifier model
###Code
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(random_state=42).fit(X_train_prepared, y_train)
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
param_distributions = {
'n_estimators': randint(50, 200),
'max_features': randint(3, 11),
'max_depth': randint(5, 100),
'max_leaf_nodes':randint(2, 20),
'min_samples_leaf': randint(2, 4),
}
final_model_rf = RandomizedSearchCV(rf, param_distributions, n_iter=10, cv=5,
scoring='roc_auc', return_train_score=True, random_state=0)
final_model_rf.fit(X_train_prepared, y_train);
final_model_rf = final_model_rf.best_estimator_
final_model_rf
###Output
_____no_output_____
###Markdown
Train a DecisionTreeClassifier model
###Code
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(max_depth=1, random_state=42).fit(X_train_prepared, y_train)
%%time
param_grid = [
{'max_depth': [1, 2, 3, 5, 10, 20],
'min_samples_leaf': [2, 3, 4, 5, 10, 20, 50, 100],
'criterion': ["gini", "entropy"]},
]
final_tree = tune_model(tree, param_grid, X_train_prepared)
###Output
grid_search.best_estimator_: DecisionTreeClassifier(max_depth=5, min_samples_leaf=10, random_state=42)
Wall time: 6.33 s
###Markdown
Train a KNeighborsClassifier model
###Code
from sklearn.metrics import euclidean_distances
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train_prepared, y_train)
%%time
param_grid = [
{'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]},
]
final_clf = tune_model(clf, param_grid, X_train_prepared)
###Output
grid_search.best_estimator_: KNeighborsClassifier(n_neighbors=7)
Wall time: 613 ms
###Markdown
Train a GradientBoostingClassifier
###Code
from sklearn.ensemble import GradientBoostingClassifier
gbrt = GradientBoostingClassifier(random_state=42).fit(X_train_prepared, y_train)
%%time
param_grid = [
{'n_estimators': [10, 50, 100, 150, 200],
'max_depth': [1, 2, 3, 5],
'learning_rate': [0.01, 0.1]},
]
final_gbrt = tune_model(gbrt, param_grid, X_train_prepared)
###Output
grid_search.best_estimator_: GradientBoostingClassifier(max_depth=5, n_estimators=200, random_state=42)
Wall time: 1min 45s
###Markdown
Train a VotingClassifier
###Code
%%time
from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(estimators=[('lr', final_model_lr), ('rf', final_model_rf), ('svc', final_model_SVM)],voting='soft')
voting_clf = voting_clf.fit(X_train_prepared, y_train)
###Output
Wall time: 445 ms
###Markdown
The performance
###Code
y_train.value_counts(normalize=True).plot.barh()
plt.xlabel("Loan_Status frequency")
plt.title("Loan_Status frequency in the training set");
from sklearn.metrics import accuracy_score, precision_score, recall_score, balanced_accuracy_score, f1_score, average_precision_score, roc_auc_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
# a function for getting the performance of the model on the validation set
def get_performance(model, X, y):
res = []
acc_mean = estimat_model(model, X, y, score = "accuracy")
bc_mean = estimat_model(model, X, y, score = "balanced_accuracy")
y_train_pred = cross_val_predict(model, X, y, cv=3)
M = confusion_matrix(y, y_train_pred)
tn, fp, fn, tp = M.ravel()
spe = tn / (tn + fp)
precision = precision_score(y, y_train_pred)
recall = recall_score(y, y_train_pred)
f1 = f1_score(y, y_train_pred)
ROC = estimat_model(model, X, y, score = "roc_auc")
res.append([acc_mean, bc_mean, precision, recall, spe, f1, ROC])
return res
from sklearn.metrics import accuracy_score, precision_score, recall_score, balanced_accuracy_score, f1_score, average_precision_score, roc_auc_score
from sklearn.metrics import confusion_matrix
# a function for getting all evaluation metrics
def get_metric(model, X, y):
results = []
predicted = model.predict(X)
M = confusion_matrix(y, predicted)
tn, fp, fn, tp = M.ravel()
spe = tn / (tn + fp) # specificity, selectivity or true negative rate (TNR)
ACC = accuracy_score(y, predicted)
BAC = balanced_accuracy_score(y, predicted)
precision = precision_score(y, predicted)
recall = recall_score(y, predicted)
F1 = f1_score(y, predicted)
y_score = model.predict_proba(X)[:, 1]
ROC = roc_auc_score(y, y_score)
PR = average_precision_score(y, y_score)
results.append([ACC, BAC, precision, recall, spe, F1, ROC, PR])
return results
# a function to display all scores
def show_results(x, y, func, models):
if (models == classifiers):
names = ['SVM', 'LogisticRegression','RandomForestClassifier', 'DecisionTreeClassifier', 'KNeighborsClassifier', 'VotingClassifier','GradientBoostingClassifier']
else:
names = ['RandomForestClassifier']
metrics1 = ['Accuracy', 'Balance-Acc','Precision', 'Recall(Sensitivity)','Specificity','F1-score', 'AUC-ROC']
metrics2 = ['Accuracy', 'Balance-Acc','Precision', 'Recall(Sensitivity)','Specificity','F1-score', 'AUC-ROC', 'AUC-PR']
data_res = [func(c, x, y)[0] for c in models]
if(func == get_performance):
metrics = metrics1
else:
metrics = metrics2
results = pd.DataFrame(data=data_res, index=names, columns=metrics)
results = results.sort_values(by=['AUC-ROC'], ascending=False)
return results
###Output
_____no_output_____
###Markdown
Estimate the performance before tuning
###Code
classifiers = [svm, lr_model, rf, tree, clf, voting_clf, gbrt]
print('Training set model performance before tuning: ')
a = show_results(X_train_prepared, y_train, get_metric, classifiers)
a
classifiers = [svm, lr_model, rf, tree, clf, voting_clf, gbrt]
print('Validation set model performance before tuning: ')
b = show_results(X_train_prepared, y_train, get_performance, classifiers)
b
variance_error = a['AUC-ROC']-b['AUC-ROC']
variance_error.sort_values()
###Output
_____no_output_____
###Markdown
Comment: why RandomForestClassifier?- From the above model performance metrics, we can see that `RandomForestClassifier` has one of the highest AUC-ROC scores in cross-validation, at 0.77.- Also, on the training set its AUC-ROC score is around 1, which means it is doing well. - Therefore, we should choose RandomForestClassifier. Estimate the performance after tuning.
###Code
classifiers = [final_model_SVM, final_model_lr, final_model_rf, final_tree, final_clf, voting_clf, final_gbrt]
print('Training set model performance after tuning: ')
c = show_results(X_train_prepared, y_train, get_metric, classifiers)
c
classifiers = [final_model_SVM, final_model_lr, final_model_rf, final_tree, final_clf, voting_clf, final_gbrt]
print('Validation set model performance after tuning: ')
d = show_results(X_train_prepared, y_train, get_performance, classifiers)
d
variance_error = c['AUC-ROC']-d['AUC-ROC']
variance_error.sort_values()
###Output
_____no_output_____
###Markdown
Comment: why RandomForestClassifier?- From the above model performance metrics, we can see that `RandomForestClassifier` has the highest AUC-ROC score in cross-validation after tuning, at 0.78.- Also, on the training set the AUC-ROC score of RandomForestClassifier is around 0.96, which is good even though it does not have the lowest variance error. - Overall, RandomForestClassifier is a better choice than all the others. Test the final model on the test set.
###Code
# separate the test set and the labels
X_test = test_data.drop("Loan_Status", axis=1)
y_test = test_data["Loan_Status"].copy() # save the labels
X_test_prepared = preprocessor.transform(X_test)
X_test_prepared.shape
###Output
_____no_output_____
###Markdown
The ROC Curve
###Code
from sklearn.dummy import DummyClassifier
dummy_classifier = DummyClassifier(strategy="most_frequent")
dummy_classifier.fit(X_train_prepared, y_train);
from sklearn.metrics import plot_roc_curve
def plot_roc(model, x, y):
f = plot_roc_curve(model, x, y, ax=plt.figure(figsize=(5,5)).gca())
f = plot_roc_curve(dummy_classifier, x, y, color="tab:orange", linestyle="--", ax=f.ax_)
f.ax_.set_title("ROC AUC curve");
f.figure_.savefig('roc_curve.pdf', bbox_inches='tight')
plot_roc(final_model_rf, X_test_prepared, y_test)
from sklearn.metrics import plot_precision_recall_curve
f = plot_precision_recall_curve(final_model_rf, X_test_prepared, y_test,
ax=plt.figure(figsize=(5,5)).gca())
f.ax_.set_title("Precision-recall curve");
f.figure_.savefig('pr_curve.pdf', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Evaluation metrics
###Code
X_test_prepared.shape
print('Test set model performance: ')
classifier = [final_model_rf]
show_results(X_test_prepared, y_test, get_metric, classifier)
classifiers = [final_model_SVM, final_model_lr, final_model_rf, final_tree, final_clf, voting_clf, final_gbrt]
show_results(X_test_prepared, y_test, get_metric, classifiers)
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(final_model_rf, X_test_prepared, y_test);
###Output
_____no_output_____ |
magnolia/sandbox/BLSTM-DC/DeepClustering.ipynb | ###Markdown
Hyperparameters used
###Code
# Size of BLSTM layers
layer_size = 600
# Size of embedding vectors
K = 40
# Sizes used in training batches: T = number of STFT windows, F = number of frequency bins
T = 40
F = 257
# Training parameters
batch_size = 512
# STFT parameters used
sample_rate = 10e3
window_size = 0.0512
overlap = 0.0256
fft_size = 512
###Output
_____no_output_____
###Markdown
Create feature mixes for both the training and validation data
###Code
train_data = 'Data/librispeech/processed_train-clean-100.h5'
validation_data = 'Data/librispeech/processed_dev_clean.h5'
train_mixer = FeatureMixer([train_data,train_data], shape=(T,None))
validation_mixer = FeatureMixer([validation_data,validation_data], shape=(T,None))
###Output
_____no_output_____
###Markdown
Functions for creating training batches and dealing with spectrograms
###Code
def scale_spectrogram(spectrogram):
mag_spec = np.abs(spectrogram)
phases = np.unwrap(np.angle(spectrogram))
mag_spec = np.sqrt(mag_spec)
M = mag_spec.max()
m = mag_spec.min()
return (mag_spec - m)/(M - m), phases
def gen_batch(mixer,batch_size):
X = np.zeros((batch_size,T,F))
phases = np.zeros((batch_size,T,F))
y = np.zeros((batch_size,T,F,2))
for i in range(batch_size):
data = next(mixer)
X[i], _ = scale_spectrogram(data[0])
phases[i] = np.unwrap(np.angle(data[0]))
y[i,:,:,0] = 1/2*(np.sign(np.abs(data[1]) - np.abs(data[2])) + 1)
y[i,:,:,1] = 1 - y[i,:,:,0]
return X, y, phases
def invert_spectrogram(magnitude,phase):
return istft(np.square(magnitude)*np.exp(phase*1.0j),sample_rate,None,overlap,two_sided=False,fft_size=fft_size)
###Output
_____no_output_____
###Markdown
Generate a sample from the validation data
###Code
X_vala, y_vala, phases = gen_batch(validation_mixer,10*batch_size)
###Output
_____no_output_____
###Markdown
Load an instance of the deep clustering model
###Code
model = DeepClusteringModel()
model.initialize()
#model.load('models/magnolia/deep_clustering.ckpt')
iterations = []
costs = []
t_costs = []
v_costs = []
###Output
_____no_output_____
###Markdown
Train the model on batches from the training dataset. Plot the error on the training set and the validation sample every so often.
###Code
try:
start = iterations[-1]
except:
start = 0
for i in range(200000):
X_train, y_train, phases = gen_batch(train_mixer,batch_size)
c = model.train_on_batch(X_train, y_train)
costs.append(c)
if (i+1) % 10 == 0:
IPython.display.clear_output(wait=True)
c_v = model.get_cost(X_vala, y_vala)
if len(iterations):
if c_v < min(v_costs) and iterations[-1] > 0:
print("Saving the model because c_v is", c_v)
model.save('models/magnolia/deep_clustering.ckpt')
t_costs.append(np.mean(costs))
v_costs.append(c_v)
iterations.append(i + start)
length = len(iterations)
cutoff = int(0.5*length)
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(iterations,t_costs)
ax1.plot(iterations,v_costs)
y_u = max(max(t_costs[cutoff:]),max(v_costs[cutoff:]))
y_l = min(min(t_costs[cutoff:]),min(v_costs[cutoff:]))
ax2.set_ylim(y_l,y_u)
ax2.plot(iterations[cutoff:], t_costs[cutoff:])
ax2.plot(iterations[cutoff:], v_costs[cutoff:])
plt.show()
print("Cost is", c_v)
costs = []
length = len(iterations)
cutoff = int(0.5*length)
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(iterations,t_costs)
ax1.plot(iterations,v_costs)
y_u = max(max(t_costs[cutoff:]),max(v_costs[cutoff:]))
y_l = min(min(t_costs[cutoff:]),min(v_costs[cutoff:]))
ax2.set_ylim(y_l,y_u)
ax2.plot(iterations[cutoff:], t_costs[cutoff:])
ax2.plot(iterations[cutoff:], v_costs[cutoff:])
plt.show()
###Output
_____no_output_____
###Markdown
Listen to an example separation from the validation data
###Code
long_mixer = FeatureMixer([validation_data,validation_data], shape=(5*T,None))
data = next(long_mixer)
spec = data[0]
signal = istft(spec,sample_rate,None,overlap,two_sided=False,fft_size=512)
signal = undo_preemphasis(signal)
Audio(signal,rate=sample_rate)
sources = clustering_separate(signal,sample_rate,model,2)
Audio(sources[0], rate=sample_rate)
Audio(sources[1], rate=sample_rate)
###Output
_____no_output_____
###Markdown
Visualize the learned affinity matrix
###Code
X_ex, y_ex, phases = gen_batch(validation_mixer,1)
vectors = model.get_vectors(X_ex)
res = vectors[0].reshape((T*F,K))
resa = y_ex[0].reshape((T*F,2))
A = resa @ resa.T
B = (res @ res.T)
plt.matshow(A[0:6000,0:6000])
plt.show()
plt.matshow(B[0:6000,0:6000])
plt.show()
plt.matshow(np.square(B[0:6000,0:6000] - 1/2))
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate BSS metrics on the test data
###Code
test_data = 'Data/librispeech/processed_test_clean.h5'
test_mixer = FeatureMixer([test_data,test_data], shape=(T,None))
X_test, y_test, _ = gen_batch(test_mixer, batch_size)
def bss_eval_batch(mixer, num_sources):
data = next(mixer)
mixes = [invert_spectrogram(np.abs(data[0]),np.unwrap(np.angle(data[0]))) for i in range(1,num_sources + 1)]
sources = [invert_spectrogram(np.abs(data[i]),np.unwrap(np.angle(data[i]))) for i in range(1,num_sources + 1)]
mixes = [undo_preemphasis(mix) for mix in mixes]
sources = [undo_preemphasis(source) for source in sources]
input_mix = np.stack(mixes)
reference_sources = np.stack(sources)
estimated_sources = clustering_separate(mixes[0],1e4,model,num_sources)
do_nothing = bss_eval_sources(reference_sources, input_mix)
do_something = bss_eval_sources(reference_sources, estimated_sources)
sdr = do_something[0] - do_nothing[0]
sir = do_something[1] - do_nothing[1]
sar = do_something[2] - do_nothing[2]
return {'SDR': sdr, 'SIR': sir, 'SAR': sar}
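# Minimal usage sketch (my addition): average the BSS improvements over a few fresh test mixtures.
# Assumes the model above has been trained; n_eval_batches is an arbitrary choice.
n_eval_batches = 5
metrics = [bss_eval_batch(test_mixer, 2) for _ in range(n_eval_batches)]
print("Mean SDR improvement:", np.mean([m['SDR'] for m in metrics]))
print("Mean SIR improvement:", np.mean([m['SIR'] for m in metrics]))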
###Output
_____no_output_____ |
project/i_extract_and_clean.ipynb | ###Markdown
Part i - Extract Data and Clean It 1. Import libraries and set options
###Code
import os
import pandas as pd
from IPython.display import display
from fuzzywuzzy import process
pd.set_option('max_colwidth', 400)
import pickle
import missingno as msno
###Output
_____no_output_____
###Markdown
2. Create Dataframes and clean data 2.1 Match data Dataframe. Collate csv files, convert into lists and create first dataframe containing result data.
###Code
project_dir = os.path.dirname(os.path.abspath(''))
data_dir = os.path.join(project_dir, 'raw_data', 'dataset_1')
field_names = []
df_list = []
for root, _, files in os.walk(data_dir):
for filenames in files:
file_path = os.path.join(root, filenames)
if field_names == []:
field_names = pd.read_csv(file_path, nrows=0).columns.tolist()
else:
new_field_names = pd.read_csv(file_path, nrows=0).columns.tolist()
for index, element in enumerate(field_names):
if element != new_field_names[index]:
print(f"Field names don't match in {filenames}")
break
df_list.extend(pd.read_csv(file_path).values.tolist())
results_df = pd.DataFrame(df_list, columns=field_names)
display(results_df.head())
results_df.info()
###Output
_____no_output_____
###Markdown
Visualise missing data.
###Code
%matplotlib inline
msno.matrix(results_df)
###Output
_____no_output_____
###Markdown
Remove inconsistent information from link string.
###Code
results_df['Link'] = results_df['Link'].apply(lambda x: x[:(x.rfind('/') + 5)])
###Output
_____no_output_____
###Markdown
Remove all duplicate entries.
###Code
results_df.info()
results_df = results_df.drop_duplicates(subset='Link')
results_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 146498 entries, 0 to 146497
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Home_Team 146498 non-null object
1 Away_Team 146498 non-null object
2 Result 146498 non-null object
3 Link 146498 non-null object
4 Season 146498 non-null int64
5 Round 146498 non-null int64
6 League 146498 non-null object
dtypes: int64(2), object(5)
memory usage: 7.8+ MB
<class 'pandas.core.frame.DataFrame'>
Int64Index: 132109 entries, 0 to 146497
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Home_Team 132109 non-null object
1 Away_Team 132109 non-null object
2 Result 132109 non-null object
3 Link 132109 non-null object
4 Season 132109 non-null int64
5 Round 132109 non-null int64
6 League 132109 non-null object
dtypes: int64(2), object(5)
memory usage: 8.1+ MB
###Markdown
**Findings:** Based on the above, the results, season, round and league need to be validated and, if applicable, cleaned. Team names and links will have to be assumed to be correct for now. **Results** - Check score validity and remove data that is not in a consistent format.
###Code
possible_results = []
for i in range(20):
for j in range(20):
possible_results.append(f'{i}-{j}')
display(results_df.loc[~results_df['Result'].isin(possible_results)])
results_df = results_df.drop(results_df.loc[~results_df['Result'].isin(possible_results)].index)
display(results_df.loc[~results_df['Result'].isin(possible_results)])
###Output
_____no_output_____
###Markdown
**Team Names** - Confirm that there are no spurious/misspelt team names (i.e. appearing fewer than 10 times).
###Code
display(results_df[results_df.groupby('Home_Team')['Home_Team'].transform('size') < 10])
###Output
_____no_output_____
###Markdown
**Season, Round, League** - Confirm that the set of values is consistent and valid.
###Code
print(set(results_df['Season']))
print(set(results_df['League']))
print(set(results_df['Round']))
###Output
{1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021}
{'premier_league', 'serie_b', 'primera_division', 'bundesliga', 'primeira_liga', 'championship', 'eredivisie', 'ligue_1', 'eerste_divisie', 'serie_a', '2_liga', 'segunda_division', 'ligue_2', 'segunda_liga'}
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46}
###Markdown
2.2 Match Info Dataframe. Convert csv files into dataframe containing match data.
###Code
data_dir = os.path.join(project_dir, 'raw_data', 'dataset_2')
match_csv = os.path.join(data_dir, 'Match_Info.csv')
match_df = pd.read_csv(match_csv)
display(match_df.head())
match_df.info()
###Output
_____no_output_____
###Markdown
Visualise missing data.
###Code
%matplotlib inline
msno.matrix(match_df)
###Output
_____no_output_____
###Markdown
Remove all duplicates.
###Code
match_df.info()
match_df = match_df.drop_duplicates(subset='Link')
match_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 143348 entries, 0 to 143347
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Link 143348 non-null object
1 Date_New 143348 non-null object
2 Referee 143348 non-null object
3 Home_Yellow 122798 non-null float64
4 Home_Red 122798 non-null float64
5 Away_Yellow 122798 non-null float64
6 Away_Red 122798 non-null float64
dtypes: float64(4), object(3)
memory usage: 7.7+ MB
<class 'pandas.core.frame.DataFrame'>
Int64Index: 143348 entries, 0 to 143347
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Link 143348 non-null object
1 Date_New 143348 non-null object
2 Referee 143348 non-null object
3 Home_Yellow 122798 non-null float64
4 Home_Red 122798 non-null float64
5 Away_Yellow 122798 non-null float64
6 Away_Red 122798 non-null float64
dtypes: float64(4), object(3)
memory usage: 8.7+ MB
###Markdown
**Findings:**- Based on the above, the links are incomplete compared to the results df and will need manipulating so that the dfs can be joined.- Card numbers need to be validated. There are several matches in which this dataset is incomplete. These will have to be left (approx 20k have null values)- Referee strings need to be cleaned.- Links need to be cleaned to match those in results_df.**Cards** - Validate numbers of cards.
###Code
print(set(match_df.loc[~match_df['Home_Yellow'].isna(), 'Home_Yellow']))
print(set(match_df.loc[~match_df['Home_Red'].isna(), 'Home_Red']))
print(set(match_df.loc[~match_df['Away_Yellow'].isna(), 'Away_Yellow']))
print(set(match_df.loc[~match_df['Away_Red'].isna(), 'Away_Red']))
###Output
{0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0}
{0.0, 1.0, 2.0, 3.0}
{0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0}
{0.0, 1.0, 2.0, 3.0, 4.0}
###Markdown
**Referee** - Clean up referee strings
###Code
match_df['Referee'] = match_df['Referee'].replace('\r\n', '', regex=True)
display(match_df[match_df['Referee'].str.contains('\r\n')])
match_df.head()
###Output
_____no_output_____
###Markdown
Check that the links in the results df are in the match_df by standardising the link strings.
###Code
match_df['Link'] = 'https://www.besoccer.com' + match_df['Link']
match_df['Link'] = match_df['Link'].replace('match_\w+/', 'match/', regex=True)
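# Quick check (my addition): what share of the result links now has a counterpart in match_df?
print('Share of results_df links found in match_df:',
      results_df['Link'].isin(match_df['Link']).mean())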
###Output
_____no_output_____
###Markdown
2.3 Team Info Dataframe. Convert csv files into dataframe containing team info data.
###Code
data_dir = os.path.join(project_dir, 'raw_data', 'dataset_2')
team_csv = os.path.join(data_dir, 'Team_Info.csv')
team_df = pd.read_csv(team_csv)
display(team_df.head())
print(team_df.info())
###Output
_____no_output_____
###Markdown
Visualise missing data.
###Code
%matplotlib inline
msno.matrix(team_df)
###Output
_____no_output_____
###Markdown
Remove all duplicates.
###Code
team_df.info()
team_df = team_df.drop_duplicates(subset='Team')
team_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 544 entries, 0 to 543
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Team 544 non-null object
1 City 544 non-null object
2 Country 544 non-null object
3 Stadium 447 non-null object
4 Capacity 544 non-null object
5 Pitch 447 non-null object
dtypes: object(6)
memory usage: 25.6+ KB
<class 'pandas.core.frame.DataFrame'>
Int64Index: 544 entries, 0 to 543
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Team 544 non-null object
1 City 544 non-null object
2 Country 544 non-null object
3 Stadium 447 non-null object
4 Capacity 544 non-null object
5 Pitch 447 non-null object
dtypes: object(6)
memory usage: 29.8+ KB
###Markdown
**Findings:**- Based on the above, the country and pitch need to be validated and if applicable cleaned.- City, team names, capacity and stadium will have to be assumed to be correct for now.**Country** - Check countries are applicable and valid.
###Code
print(set(team_df['Country']))
###Output
{'Italy', 'France', 'England', 'Spain', 'Netherlands', 'Portugal', 'Germany'}
###Markdown
**Pitch** - Standardise entries for pitch type.
###Code
print(set(team_df['Pitch']))
list_to_update = ['cesped real', 'Grass', 'Césped Natural', 'Cesped natural', 'NATURAL', 'Natural grass', 'Césped', 'Césped natural', 'natural', 'natural grass', 'cesped natural', 'grass']
team_df.loc[team_df['Pitch'].isin(list_to_update), 'Pitch'] = 'Natural'
team_df.loc[team_df['Pitch'] == 'Césped Artificial', 'Pitch'] = 'Artificial'
print(set(team_df['Pitch']))
###Output
{nan, 'grass', 'Grass', 'cesped natural', 'Cesped natural', 'AirFibr ', 'natural grass', 'Artificial', 'Natural grass', 'natural', 'Césped natural', 'NATURAL', 'Césped Artificial', 'Natural', 'Césped Natural', 'cesped real', 'Césped'}
{'Natural', nan, 'AirFibr ', 'Artificial'}
###Markdown
3 Combine Datasets 3.1 Compare Datasets and Clean. Find results with teams not in team_df. Create dictionary of team names to be replaced.
###Code
not_found_home = set(results_df[~results_df['Home_Team'].isin(team_df['Team'])]['Home_Team'])
not_found_away = set(results_df[~results_df['Away_Team'].isin(team_df['Team'])]['Away_Team'])
print(not_found_home == not_found_away)
print(not_found_home)
team_list = list(set(team_df['Team'].to_list()))
teams_to_change = {}
for team in not_found_home:
teams_to_change[team] = process.extractOne(team, team_list)[0]
teams_to_change
###Output
_____no_output_____
###Markdown
Pop the team names that are incorrectly matched, then update the dictionary.
###Code
keys_to_drop = {
'Licata': 'Alicante',
'Casertana': 'Catania',
'Barletta': 'Arles',
'Taranto': 'Atalanta',
'Calcio Portogruaro-Summaga': 'Calcio',
'FC Libourne Saint Seurin': 'Paris FC'}
for k in keys_to_drop.keys():
teams_to_change.pop(k)
values_to_update = {"Home_Team": teams_to_change}
results_df.replace(values_to_update, inplace=True)
values_to_update = {"Away_Team": teams_to_change}
results_df.replace(values_to_update, inplace=True)
not_found_home = set(results_df[~results_df['Home_Team'].isin(team_df['Team'])]['Home_Team'])
print(not_found_home)
###Output
{'Calcio Portogruaro-Summaga', 'Licata', 'Casertana', 'FC Libourne Saint Seurin', 'Taranto', 'Barletta'}
###Markdown
As there are 3503 unmatched links out of roughly 146,000 data entries, these unmatched links can be dropped; matching them would otherwise be too computationally expensive and time-consuming. 3.2 Merge Datasets. Merge as follows:- Pull team_df into results_df- Pull match_df into results_df
###Code
team_df = team_df.rename(columns={'Team' : 'Home_Team'})
df = pd.merge(results_df, match_df, on='Link', how='left')
df = pd.merge(df, team_df, on='Home_Team', how='left')
display(df.head())
print(df.info())
###Output
_____no_output_____
###Markdown
3.3 Update with ELO data. Create a new dataframe with the ELO data and common links
###Code
elo_dict = pickle.load(open(os.path.join(project_dir, 'raw_data', 'elo_dict.pkl'), 'rb'))
elo_df = pd.DataFrame.from_dict(elo_dict, orient='index')
elo_df = elo_df.reset_index(level=0)
elo_df = elo_df.rename(columns={'index': 'Link'})
elo_df['Link'] = elo_df['Link'].apply(lambda x: x[:(x.rfind('/') + 5)])
elo_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 132111 entries, 0 to 132110
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Link 132111 non-null object
1 Elo_home 122314 non-null float64
2 Elo_away 122314 non-null float64
dtypes: float64(2), object(1)
memory usage: 3.0+ MB
###Markdown
Drop duplicate values.
###Code
elo_df.info()
elo_df = elo_df.drop_duplicates()
elo_df.info()
df = pd.merge(df, elo_df, on='Link', how='left')
display(df.head())
print(df.info())
###Output
_____no_output_____
###Markdown
3.4 Final Clean of Data. Now that the dataset has been merged and is complete, remove all remaining unreliable data for the features that matter.
###Code
df = df.dropna(axis=0, subset=['Date_New', 'Capacity', 'Elo_home', 'Elo_away', 'Home_Yellow', 'Home_Red', 'Away_Yellow', 'Away_Red'])
print(df.info())
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 105540 entries, 0 to 131799
Data columns (total 20 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Home_Team 105540 non-null object
1 Away_Team 105540 non-null object
2 Result 105540 non-null object
3 Link 105540 non-null object
4 Season 105540 non-null int64
5 Round 105540 non-null int64
6 League 105540 non-null object
7 Date_New 105540 non-null object
8 Referee 105540 non-null object
9 Home_Yellow 105540 non-null float64
10 Home_Red 105540 non-null float64
11 Away_Yellow 105540 non-null float64
12 Away_Red 105540 non-null float64
13 City 105540 non-null object
14 Country 105540 non-null object
15 Stadium 101639 non-null object
16 Capacity 105540 non-null object
17 Pitch 101391 non-null object
18 Elo_home 105540 non-null float64
19 Elo_away 105540 non-null float64
dtypes: float64(6), int64(2), object(12)
memory usage: 16.9+ MB
None
###Markdown
Remove teams that aren't consistent across home team and away team lists.
###Code
away_not_in_home = set(df[~df['Away_Team'].isin(df['Home_Team'])]['Away_Team'])
print(away_not_in_home)
df.drop(df[df['Away_Team'].isin(away_not_in_home)].index, inplace=True)
home_not_in_away = set(df[~df['Home_Team'].isin(df['Away_Team'])]['Home_Team'])
print(home_not_in_away)
df.drop(df[df['Home_Team'].isin(home_not_in_away)].index, inplace=True)
df
###Output
{'Oriental Lisboa', 'Carregado', 'Vilafranquense', 'Real Unión Irún', 'Alcoyano', 'Casa Pia', 'Villarreal B', 'FC Libourne Saint Seurin', 'Pontevedra', 'Calcio Portogruaro-Summaga', 'Achilles 29', 'Fafe'}
{'Real Sport Clube', 'Jong Twente', 'Poli Ejido', 'Racing Paris'}
###Markdown
4 Export Dataset. Save to a JSON file.
###Code
df.to_json(os.path.join(project_dir, 'cleaned_dataset.json'))
###Output
_____no_output_____ |
notebooks/RandomPhaseGadgets.ipynb | ###Markdown
Random Phase Gadgets. Code for producing the family of circuits seen in **Figure 4** from [_A quantum-classical cloud platform optimized for variational hybrid algorithms_](https://arxiv.org/abs/2001.04449).
###Code
import random
from typing import Optional
from pyquil import Program, get_qc
from pyquil.gates import CNOT, H, MEASURE, RZ
from pyquil.latex import display
from pyquil.quilbase import Gate
def random_phase_gadget(qubits: int, depth: int, seed: Optional[int] = None) -> Program:
if seed:
random.seed(seed)
pairs = qubits // 2
alphas = pairs * depth
permutation = list(range(qubits))
random.shuffle(permutation)
i = 0
p = Program()
alpha = p.declare("alpha", "REAL", alphas)
for layer in range(depth):
for pair in range(pairs):
control = permutation[2 * pair]
target = permutation[2 * pair + 1]
p += H(control)
p += H(target)
p += CNOT(control, target)
p += RZ(alpha[i], target)
p += CNOT(control, target)
i += 1
random.shuffle(permutation)
for qubit in permutation:
p += H(qubit)
ro = p.declare("ro", "BIT", qubits)
for idx, qubit in enumerate(permutation):
p += MEASURE(qubit, ro[idx])
return p
m = d = 2
rpg = random_phase_gadget(m, d)
print(rpg)
display(rpg)
qvm = get_qc("3q-qvm")
compiled_rpg = Program(qvm.compile(rpg).program)
nCZ = nRX = 0
for gate in compiled_rpg:
if isinstance(gate, Gate) and gate.name == "CZ":
nCZ += 1
if isinstance(gate, Gate) and gate.name == "RX":
nRX += 1
print(f"Number of CZ gates: {nCZ}")
print(f"Number of RX gates: {nRX}")
###Output
Number of CZ gates: 4
Number of RX gates: 16
|
mvsd.ipynb | ###Markdown
Reading data
###Code
tweet_df = pd.read_csv('data/tweets_sample_preprocessed.zip',compression = 'zip', sep = '|')
tweet_df = tweet_df[tweet_df.UserID != 84165878]
###Output
_____no_output_____
###Markdown
Feature extraction
###Code
"""Secondary functions"""
def count_phrase_freq(phrase, text):
phrase = phrase.lower()
text = text.lower()
regex_obj = re.findall('\\b'+phrase+'\\b', text)
if regex_obj:
return len(regex_obj)
else:
return 0
spam_list = [line.rstrip('\n') for line in open('spam_phrases.txt', 'r')]
def count_spam_phrases_per_tweet(spam_list, tweet):
count = 0
for phrase in spam_list:
count += count_phrase_freq(phrase, tweet)
return count
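# Optional speed-up sketch (my addition): the loop above rebuilds a regex for every phrase in every
# tweet. A single precompiled alternation (with re.escape for safety) is usually much faster;
# counts can differ slightly when spam phrases overlap or nest.
spam_pattern = re.compile(r'\b(?:' + '|'.join(re.escape(p.lower()) for p in spam_list) + r')\b')
def count_spam_phrases_fast(tweet):
    return len(spam_pattern.findall(tweet.lower()))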
###Output
_____no_output_____
###Markdown
Content-based feature extraction
###Code
#add feature: num of mentions in tweet
tweet_df['NumOfMentions'] = tweet_df['Mention'].map(lambda x: len(ast.literal_eval(x)))
def retweet_rate(tweet_df):
tweet_df['hasRetweet'] = tweet_df.Tweet.str.contains("^RE ")
num_tweets_with_RT = tweet_df.groupby('UserID')['hasRetweet'].sum()
total_num_tweets = tweet_df.groupby('UserID')['Tweet'].count()
feature = num_tweets_with_RT/total_num_tweets
tweet_df.drop(columns='hasRetweet')
return feature
def avg_length_of_tweet(tweet_df):
tweet_df['Tweet_Length'] = tweet_df['Tweet'].str.len()
tweet_length = tweet_df.groupby('UserID')['Tweet_Length'].sum()
num_of_tweets = tweet_df.groupby('UserID')['Tweet_Length'].count()
feature = tweet_length/num_of_tweets
tweet_df.drop(columns='Tweet_Length', inplace=True)
return feature
def avg_num_mentions_per_tweet(tweet_df):
num_mentions_per_user = tweet_df.groupby('UserID')['NumOfMentions'].sum()
num_tweets_per_user = tweet_df.groupby('UserID')['Tweet'].count()
feature = num_mentions_per_user/num_tweets_per_user
return feature
#count spam phrases in tweets, source: (https://blog.hubspot.com/blog/tabid/6307/bid/30684/the-ultimate-list-of-email-spam-trigger-words.aspx)
def avg_num_spam_phrases_per_tweet(tweet_df):
tweet_df['NumSpamWords'] = list(map(lambda x: count_spam_phrases_per_tweet(spam_list, x), tweet_df.Tweet))
sum_spam_phrases_per_user = tweet_df.groupby('UserID')['NumSpamWords'].sum()
num_tweets_per_user = tweet_df.groupby('UserID')['Tweet'].count()
feature = sum_spam_phrases_per_user/num_tweets_per_user
return feature
#tweet_df.drop(columns='NumOfMentions', inplace=True)
###Output
_____no_output_____
###Markdown
Hashtag feature extraction
###Code
#add feature: num of hashtags in tweet
tweet_df['NumOfHashtags'] = tweet_df.Hashtag.map(lambda x: len(ast.literal_eval(x)))
#average number of Hashtags per tweet
def avg_num_hashtags(tweet_df):
count_URL_per_user = tweet_df.groupby('UserID')['NumOfHashtags'].sum()
count_Tweets_per_user = tweet_df.groupby('UserID')['Tweet'].count()
return count_URL_per_user/count_Tweets_per_user
#
def avg_same_hashtag_count(tweet_df):
tweet_df['isHashtagUnique'] = np.where(tweet_df['NumOfHashtags'] == 1, 1, 0)
tweet_df['isHashtagDuplicate'] = np.where(tweet_df['NumOfHashtags'] > 1, 1, 0)
num_unique_hashtags = tweet_df.groupby('UserID')['isHashtagUnique'].sum()
num_duplicate_hashtags = tweet_df.groupby('UserID')['isHashtagDuplicate'].sum()
total_tweet_count = tweet_df.groupby('UserID')['Tweet'].count()  # fixed: the chained assignment here previously overwrote num_duplicate_hashtags
feature = num_duplicate_hashtags/(num_unique_hashtags*total_tweet_count)
feature = feature.replace(np.inf, 0)
return feature
def num_hashtags_per_tweet(tweet_df):
tweet_df['hasHashtag'] = np.where(tweet_df['NumOfHashtags'] > 0, 1, 0)  # flag tweets containing at least one hashtag
total_tweet_count = tweet_df.groupby('UserID')['Tweet'].count()
num_tweets_with_hashtag = tweet_df.groupby('UserID')['hasHashtag'].sum()
feature = num_tweets_with_hashtag/total_tweet_count
return feature
#tweet_df.drop(columns='NumOf#', inplace=True)
###Output
_____no_output_____
###Markdown
URL feature extraction
###Code
#add feature: num of mentions in tweet
tweet_df['NumOfURLs'] = tweet_df['URL'].map(lambda x: len(ast.literal_eval(x)))
#average number of URLs per tweet
def avg_num_URLs(tweet_df):
count_URL_per_user = tweet_df.groupby('UserID')['NumOfURLs'].sum()
count_Tweets_per_user = tweet_df.groupby('UserID')['Tweet'].count()
return count_URL_per_user/count_Tweets_per_user
def avg_same_URL_count(tweet_df):
tweet_df['isURLUnique'] = np.where(tweet_df['NumOfURLs'] == 1, 1, 0)
tweet_df['isURLDuplicate'] = np.where(tweet_df['NumOfURLs'] > 1, 1, 0)
num_unique_URLs = tweet_df.groupby('UserID')['isURLUnique'].sum()
num_duplicate_URLs = tweet_df.groupby('UserID')['isURLDuplicate'].sum()
total_tweet_count = tweet_df.groupby('UserID').Tweet.count()  # fixed: the chained assignment here previously overwrote num_duplicate_URLs
feature = num_duplicate_URLs/(num_unique_URLs*total_tweet_count)
feature = feature.replace(np.inf, 0)
return feature
#tweet_df.drop(columns='NumOfURLs#', inplace=True)
###Output
_____no_output_____
###Markdown
Combining features into single-view matrices
###Code
try:
content_view_df = pd.read_csv(r'data/views_df_preprocessed/content_view_df.csv', sep = '|', index_col=0)
URL_view_df = pd.read_csv(r'data/views_df_preprocessed/URL_view_df.csv', sep = '|', index_col=0)
hashtag_view_df = pd.read_csv(r'data/views_df_preprocessed/hashtag_view_df.csv', sep = '|', index_col=0)
except:
#Content-based view
content_view_df = pd.DataFrame(dict(AvgLengthOfTweets = avg_length_of_tweet(tweet_df),
RetweetRate = retweet_rate(tweet_df),
AvgNumMentions = avg_num_mentions_per_tweet(tweet_df),
AvgNumSpamPhrases = avg_num_spam_phrases_per_tweet(tweet_df)
))
#URL-based view
URL_view_df = pd.DataFrame(dict(AvgNumURLs = avg_num_URLs(tweet_df),
AvgSameURLCount = avg_same_URL_count(tweet_df)))
#Hashtag-based view
hashtag_view_df = pd.DataFrame(dict(AvgNumHashtags = avg_num_hashtags(tweet_df),
AvgSamHashtagCount = avg_same_hashtag_count(tweet_df)
))
content_view_df.to_csv(r"data\views_df_preprocessed\content_view_df.csv", index= True, sep = '|')
URL_view_df.to_csv(r"data\views_df_preprocessed\URL_view_df.csv", index= True, sep = '|')
hashtag_view_df.to_csv(r"data\views_df_preprocessed\hashtag_view_df.csv", index= True, sep = '|')
###Output
_____no_output_____
###Markdown
Creating label matrix
###Code
users_legitimate_df = pd.read_csv('data\social_honeypot\legitimate_users.txt',
sep = '\t',
names = ['UserID',
'CreatedAt',
'CollectedAt',
'NumberOfFollowings',
'NumberOfFollowers',
'NumberOfTweets',
'LengthOfScreenName',
'LengthOfDescriptionInUserPro'])
users_polluters_df = pd.read_csv('data/social_honeypot/content_polluters.txt',
sep = '\t',
names = ['UserID',
'CreatedAt',
'CollectedAt',
'NumberOfFollowings',
'NumberOfFollowers',
'NumberOfTweets',
'LengthOfScreenName',
'LengthOfDescriptionInUserPro'])
tweet_df['isSpammer'] = np.where(tweet_df['UserID'].isin(list(users_polluters_df['UserID'])), -1, 0)
tweet_df['isLegitimate'] = np.where(tweet_df['UserID'].isin(list(users_legitimate_df['UserID'])), 1, 0)
class_label_df = tweet_df[['UserID','isLegitimate', 'isSpammer']].drop_duplicates(['UserID']).sort_values('UserID').set_index('UserID')
class_label_df = class_label_df[['isSpammer','isLegitimate']]
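# Sanity check (my addition): every user should carry exactly one non-zero label
# (isSpammer == -1 or isLegitimate == 1, never both, never neither).
print('Users without exactly one label:', ((class_label_df != 0).sum(axis=1) != 1).sum())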
###Output
_____no_output_____
###Markdown
Multiview Spam Detection Algorithm (MVSD)
###Code
importlib.reload(mv)
#content_view_df.AvgLengthOfTweets = content_view_df.AvgLengthOfTweets/content_view_df.AvgLengthOfTweets.max()
X_nv = [content_view_df, URL_view_df, hashtag_view_df]
#shuffle data points
X_nv = [df.sample(frac = 1, random_state = 2) for df in X_nv]
# normalize X
X_nv = [normalize(X, axis = 0, norm = 'l1') for X in X_nv]
#transpose to correspond to the notations of dimensions used in the paper
X_nv = [np.transpose(X_nv[v]) for v in range(len(X_nv))]
Y = np.array(class_label_df.sample(frac = 1, random_state = 2))
mvsd = mv.multiview(X = X_nv, Y = Y, num_components = 10 )
mvsd.solve(training_size=0.70, learning_rate= 0.001, alpha=0.01)
confusion_matrix, precision, recall, F1_score = mvsd.evaluate_train()
confusion_matrix_ = pd.DataFrame(data = {'Actual_Spammer': confusion_matrix[:,0], 'Actual_Legitimate': confusion_matrix[:,1]}, index = ['Predicted_Spammer ','Predicted_Legitimate'])
print(confusion_matrix_)
print("\n")
print("Precision: {}\n".format(precision))
print("Recall: {}\n".format(recall))
print("F1-score: {}\n".format(F1_score))
confusion_matrix, precision, recall, F1_score = mvsd.evaluate_test()
confusion_matrix_ = pd.DataFrame(data = {'Actual_Spammer': confusion_matrix[:,0], 'Actual_Legitimate': confusion_matrix[:,1]}, index = ['Predicted_Spammer ','Predicted_Legitimate'])
print(confusion_matrix_)
print("\n")
print("Precision: {}\n".format(precision))
print("Recall: {}\n".format(recall))
print("F1-score: {}\n".format(F1_score))
###Output
_____no_output_____
###Markdown
Comparison with single-view approaches: Content view features
###Code
importlib.reload(sv)
X_nv = [content_view_df, URL_view_df, hashtag_view_df]
X_nv = [df.sample(frac = 1, random_state = 2) for df in X_nv]
X_nv = [np.transpose(X_nv[v]) for v in range(len(X_nv))]
Y = np.array(class_label_df.sample(frac = 1, random_state = 2))
content_view_svm = sv.singleview(data = X_nv[0], class_ = Y)
model_svm = SVC(gamma = "auto")
training_sizes = [0.30, 0.50, 0.80]
for s in training_sizes:
print("---------------------------------------------------------------------")
print("Training size: {}\n".format(s))
precision, recall, F1_score, confusion_matrix_CV = content_view_svm.evaluate(model = model_svm, training_size=s)
###Output
_____no_output_____
###Markdown
URL view
###Code
importlib.reload(sv)
X_nv = [content_view_df, URL_view_df, hashtag_view_df]
X_nv = [df.sample(frac = 1, random_state = 2) for df in X_nv]
X_nv = [np.transpose(X_nv[v]) for v in range(len(X_nv))]
Y = np.array(class_label_df.sample(frac = 1, random_state = 2))
content_view_svm = sv.singleview(data = X_nv[1], class_ = Y)
model_svm = SVC(gamma = "auto")
training_sizes = [0.30, 0.50, 0.80]
for s in training_sizes:
print("---------------------------------------------------------------------")
print("Training size: {}\n".format(s))
precision, recall, F1_score, confusion_matrix_CV = content_view_svm.evaluate(model = model_svm, training_size=s)
###Output
_____no_output_____
###Markdown
Hashtag View
###Code
importlib.reload(sv)
X_nv = [content_view_df, URL_view_df, hashtag_view_df]
X_nv = [df.sample(frac = 1, random_state = 2) for df in X_nv]
X_nv = [np.transpose(X_nv[v]) for v in range(len(X_nv))]
Y = np.array(class_label_df.sample(frac = 1, random_state = 2))
content_view_svm = sv.singleview(data = X_nv[2], class_ = Y)
model_svm = SVC(gamma = "auto")
training_sizes = [0.30, 0.50, 0.80]
for s in training_sizes:
print("---------------------------------------------------------------------")
print("Training size: {}\n".format(s))
precision, recall, F1_score, confusion_matrix_CV = content_view_svm.evaluate(model = model_svm, training_size=s)
###Output
_____no_output_____
###Markdown
Concatenated features
###Code
importlib.reload(sv)
Y = np.array(class_label_df.sample(frac = 1, random_state = 2))
X = np.array(pd.concat(X_nv, axis=0))
content_view_svm = sv.singleview(data = X, class_ = Y)
model_svm = SVC(gamma = "auto")
training_sizes = [0.30, 0.50, 0.80]
for s in training_sizes:
print("---------------------------------------------------------------------")
print("Training size: {}\n".format(s))
precision, recall, F1_score, confusion_matrix_CV = content_view_svm.evaluate(model = model_svm, training_size=s)
###Output
_____no_output_____ |
notebooks/4_Tokenization_Lemmatization_Striplog_V3.ipynb | ###Markdown
Manual Classification
###Code
#Dir = '/mnt/d/Dropbox/Ranee_Joshi_PhD_Local/04_PythonCodes/dh2loop_old/shp_NSW'
#DF=litho_Dataframe(Dir)
#DF.to_csv('export.csv')
DF = pd.read_csv('/mnt/d/Dropbox/Ranee_Joshi_PhD_Local/04_PythonCodes/dh2loop/notebooks/Upscaled_Litho_Test2.csv')
DF['FromDepth'] = pd.to_numeric(DF.FromDepth)
DF['ToDepth'] = pd.to_numeric(DF.ToDepth)
DF['TopElev'] = pd.to_numeric(DF.TopElev)
DF['BottomElev'] = pd.to_numeric(DF.BottomElev)
DF['x'] = pd.to_numeric(DF.x)
DF['y'] = pd.to_numeric(DF.y)
print('number of original litho classes:', len(DF.MajorLithCode.unique()))
print('number of litho classes :',
len(DF['reclass'].unique()))
print('unclassified descriptions:',
len(DF[DF['reclass'].isnull()]))
def save_file(DF, name):
'''Function to save manually reclassified dataframe
Inputs:
-DF: reclassified pandas dataframe
-name: name (string) to save dataframe file
'''
DF.to_pickle('{}.pkl'.format(name))
save_file(DF, 'manualTest_ygsb')
###Output
_____no_output_____
###Markdown
MLP Classification
###Code
def load_geovec(path):
instance = Glove()
with h5py.File(path, 'r') as f:
v = np.zeros(f['vectors'].shape, f['vectors'].dtype)
f['vectors'].read_direct(v)
dct = f['dct'][()].tostring().decode('utf-8')
dct = json.loads(dct)
instance.word_vectors = v
instance.no_components = v.shape[1]
instance.word_biases = np.zeros(v.shape[0])
instance.add_dictionary(dct)
return instance
# Stopwords
extra_stopwords = [
'also',
]
stop = stopwords.words('english') + extra_stopwords
def tokenize(text, min_len=1):
'''Function that tokenize a set of strings
Input:
-text: set of strings
-min_len: tokens length
Output:
-list containing set of tokens'''
tokens = [word.lower() for sent in nltk.sent_tokenize(text)
for word in nltk.word_tokenize(sent)]
filtered_tokens = []
for token in tokens:
if token.isalpha() and len(token) >= min_len:
filtered_tokens.append(token)
return [x.lower() for x in filtered_tokens if x not in stop]
def tokenize_and_lemma(text, min_len=0):
'''Function that retrieves lemmatised tokens
Inputs:
-text: set of strings
-min_len: length of text
Outputs:
-list containing lemmatised tokens'''
filtered_tokens = tokenize(text, min_len=min_len)
lemmas = [lemma.lemmatize(t) for t in filtered_tokens]
return lemmas
def get_vector(word, model, return_zero=False):
'''Function that retrieves word embeddings (vector)
Inputs:
-word: token (string)
-model: trained MLP model
-return_zero: boolean variable
Outputs:
-wv: numpy array (vector)'''
epsilon = 1.e-10
unk_idx = model.dictionary['unk']
idx = model.dictionary.get(word, unk_idx)
wv = model.word_vectors[idx].copy()
if return_zero and word not in model.dictionary:
n_comp = model.word_vectors.shape[1]
wv = np.zeros(n_comp) + epsilon
return wv
def mean_embeddings(dataframe_file, model):
'''Function to retrieve sentence embeddings from dataframe with
lithological descriptions.
Inputs:
-dataframe_file: pandas dataframe containing lithological descriptions
and reclassified lithologies
-model: word embeddings model generated using GloVe
Outputs:
-DF: pandas dataframe including sentence embeddings'''
DF = pd.read_pickle(dataframe_file)
DF = DF.drop_duplicates(subset=['x', 'y', 'z'])
DF['tokens'] = DF['Description'].apply(lambda x: tokenize_and_lemma(x))
DF['length'] = DF['tokens'].apply(lambda x: len(x))
DF = DF.loc[DF['length']> 0]
DF['vectors'] = DF['tokens'].apply(lambda x: np.asarray([get_vector(n, model) for n in x]))
DF['mean'] = DF['vectors'].apply(lambda x: np.mean(x[~np.all(x == 1.e-10, axis=1)], axis=0))
DF['reclass'] = pd.Categorical(DF.reclass)
DF['code'] = DF.reclass.cat.codes
DF['drop'] = DF['mean'].apply(lambda x: (~np.isnan(x).any()))
DF = DF[DF['drop']]
return DF
# loading word embeddings model
# (This can be obtained from https://github.com/spadarian/GeoVec )
#modelEmb = Glove.load('/home/ignacio/Documents/chapter2/best_glove_300_317413_w10_lemma.pkl')
modelEmb = load_geovec('geovec_300d_v1.h5')
# getting the mean embeddings of descriptions
DF = mean_embeddings('manualTest_ygsb.pkl', modelEmb)
DF2 = DF[DF['code'].isin(DF['code'].value_counts()[DF['code'].value_counts()>2].index)]
print(DF2)
def split_stratified_dataset(Dataframe, test_size, validation_size):
'''Function that split dataset into test, training and validation subsets
Inputs:
-Dataframe: pandas dataframe with sentence mean_embeddings
-test_size: decimal number to generate the test subset
-validation_size: decimal number to generate the validation subset
Outputs:
-X: numpy array with embeddings
-Y: numpy array with lithological classes
-X_test: numpy array with embeddings for test subset
-Y_test: numpy array with lithological classes for test subset
-Xt: numpy array with embeddings for training subset
-yt: numpy array with lithological classes for training subset
-Xv: numpy array with embeddings for validation subset
-yv: numpy array with lithological classes for validation subset
'''
#df2 = Dataframe[Dataframe['code'].isin(Dataframe['code'].value_counts()[Dataframe['code'].value_counts()>2].index)]
#X = np.vstack(df2['mean'].values)
#Y = df2.code.values.reshape(len(df2.code), 1)
X = np.vstack(Dataframe['mean'].values)
Y = Dataframe.code.values.reshape(len(Dataframe.code), 1)
#print(X.shape)
#print (Dataframe.code.values.shape)
#print (len(Dataframe.code))
#print (Y.shape)
X_train, X_test, y_train, y_test = train_test_split(X,
Y, stratify=Y,
test_size=test_size,
random_state=42)
#print(X_train.shape)
#print(Y_train.shape)
Xt, Xv, yt, yv = train_test_split(X_train,
y_train,
test_size=validation_size,
stratify=None,
random_state=1)
return X, Y, X_test, y_test, Xt, yt, Xv, yv
# subseting dataset for training classifier
X, Y, X_test, Y_test, X_train, Y_train, X_validation, Y_validation = split_stratified_dataset(DF2, 0.1, 0.1)
# encoding lithological classes
encodes = one_enc.fit_transform(Y_train).toarray()
# MLP model generation
model = Sequential()
model.add(Dense(100, input_dim=300, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(units=len(DF2.code.unique()), activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# training MLP model
model.fit(X_train, encodes, epochs=30, batch_size=100, verbose=2)
# saving MLP model
model.save('mlp_prob_model.h5')
def retrieve_predictions(classifier, x):
'''Function that retrieves lithological classes using the trained classifier
Inputs:
-classifier: trained MLP classifier
-x: numpy array containing embbedings
Outputs:
-codes_pred: numpy array containing lithological classes predicted'''
preds = classifier.predict(x, verbose=0)
new_onehot = np.zeros((x.shape[0], 72))
new_onehot[np.arange(len(preds)), preds.argmax(axis=1)] = 1
codes_pred = one_enc.inverse_transform(new_onehot)
return codes_pred
def classifier_assess(classifier, x, y):
'''Function that prints the performance of the classifier
Inputs:
-classifier: trained MLP classifier
-x: numpy array with embeddings
-y: numpy array with lithological classes predicted'''
Y2 = retrieve_predictions(classifier, x)
print('f1 score: ', metrics.f1_score(y, Y2, average='macro'),
'accuracy: ', metrics.accuracy_score(y, Y2),
'balanced_accuracy:', metrics.balanced_accuracy_score(y, Y2))
def save_predictions(Dataframe, classifier, x, name):
'''Function that saves dataframe predictions as a pickle file
Inputs:
-Dataframe: pandas dataframe with mean_embeddings
-classifier: trained MLP model,
-x: numpy array with embeddings,
-name: string name to save dataframe
Outputs:
-save dataframe'''
preds = classifier.predict(x, verbose=0)
Dataframe['predicted_probabilities'] = preds.tolist()
Dataframe['pred'] = retrieve_predictions(classifier, x).astype(np.int32)
Dataframe[['x', 'y', 'FromDepth', 'ToDepth', 'TopElev', 'BottomElev',
'mean', 'predicted_probabilities', 'pred', 'reclass', 'code']].to_pickle('{}.pkl'.format(name))
# assessment of model performance
classifier_assess(model, X_validation, Y_validation)
# save lithological prediction likelihoods dataframe
save_predictions(DF2, model, X, 'YGSBpredictions')
import pickle
with open('YGSBpredictions.pkl', 'rb') as f:
data = pickle.load(f)
print(data)
len(data)
data.head()
tmp = data['predicted_probabilities'][0]
len(tmp)
#data.to_csv('YGSBpredictions.csv')
import striplog
striplog.__version__
from striplog import Lexicon, Component, Position, Interval, Decor, Legend, Striplog
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
legend = Legend.builtin('NSDOE')
lexicon = Lexicon.default()
s = Striplog.from_csv(text=data, stop=650)
###Output
_____no_output_____ |
ipython.ipynb | ###Markdown
Help Commands - `print(pd.DataFrame.__doc__)` is equivalent to `pd.DataFrame?` - `%quickref` provides a brief reference summary of the main IPython commands and magics
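A quick sketch of these help options (the commented lines use IPython-only syntax and only run inside an IPython session):

```python
import pandas as pd

print(pd.DataFrame.__doc__)   # plain Python: print the class docstring
# pd.DataFrame?               # IPython shortcut: open the help pane for the object
# %quickref                   # brief reference of the main IPython commands and magics
```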
###Code
# get the history
# -o flag get the ouput as well input
%history -n
# Aliases
# if %automagic is on, this alias will be link
%alias lstdir ls -d */
%lstdir # list the directory with lstdir instead of ls -d */
#bookmark
# %bookmark <name> directory
import random
numbers = list(range(1,101))
random.shuffle(numbers)
numbers
# can set the timer for the code to run
%timeit -n 1000 -r 5 sorted(numbers)
# rerun the selected execution number
# %rerun 50
# recall the execution number but don't run
%recall 50
# load code from external file
%load basic.ipynb
# run 10 times of that python file
%run -t -N10 main.py
# save the line of code in python file
%save -a <filename> <line numbers>
# capturing the output of a shell command
# %sx ----> equivalent with !!command
files = %sx ls -l
# return the last fields of fiels
files.fields(-1)
# sort the files by size
files.sort(4, nums = True)
files
###Output
_____no_output_____ |
crime_analysis/UCRanalysis.ipynb | ###Markdown
**Crime Trends in Large and Medium Jurisdictions** By Aaron Margolis. This notebook is an update of the Mean Shift Analysis I first performed in 2017 to examine the recent increases in crime following the historic decrease during the late 1990s and early 2000s. For instance, was crime increasing everywhere or just in certain cities, such as Baltimore? By grouping cities into clusters based on their crime patterns over time, we can see where crime is continuing to fall and where it is rising. This notebook will look at crime rates in jurisdictions with over 250,000 people. These 131 jurisdictions account for approximately 30% of the US population. They are a mix of urban areas, such as cities and suburbs, and suburban counties. Urban areas are over-represented, but there are enough lower-density jurisdictions to conduct analysis. This analysis can be expanded to look at smaller jurisdictions, especially using a more powerful backend. I incorporated TensorFlow 2.0 and its eager execution capability. Because there are only 600 columns for 131 jurisdictions, or 78,600 data points, this notebook uses CPUs rather than GPUs or TPUs. If more jurisdictions were incorporated, a more powerful backend could be added. **Results:** Where crime was highest in the late 1990s, such as New York and other large cities, crime continues to be down considerably. But in rural jurisdictions, such as Anchorage and Wichita, the amount of crime today is much higher than it was 25 years ago, despite the large overall decrease in crime. There has been an increase in most of these jurisdictions in the last few years. The large variation in crime trends across jurisdictions explains the differing perceptions of crime overall. **Methodology** We start by loading CSV files that I created using an API on cloud.gov's Crime Data Explorer, which hosts FBI Uniform Crime Data in a computer-friendly format. ORI stands for Originating Reporter Identifier, the police department providing the data.
###Code
import pandas as pd
ori_guide=pd.read_csv('https://raw.githubusercontent.com/ARMargolis/UCRanalysis/main/ORI.csv').set_index('ori')
raw_ori_data=pd.read_csv('https://raw.githubusercontent.com/ARMargolis/UCRanalysis/main/ori_over_250k_full.csv')
raw_ori_data.head()
###Output
_____no_output_____
###Markdown
Each row includes a police department identifier (ORI), a year, a crime, and the number that were reported (actual) and resulted in arrest (cleared). However, this method already introduces uncertainty in terms of actual crime, because many crimes go unreported. We will use a pivot table to put the data across 4 axes (ORI, year, crime, actual vs. cleared).
###Code
ori_data_pivot=raw_ori_data.pivot_table(index='ori', columns=['data_year','offense'], values=['cleared', 'actual'])
ori_data_pivot.head()
###Output
_____no_output_____
###Markdown
Next we look at the null values. Police departments either report no data values or all 24, so the nulls should be multiples of 24. Let's see which departments have the most null results.
###Code
most_nulls=ori_data_pivot.isnull().sum(axis=1).sort_values(ascending=False).head(10)
print(most_nulls)
ori_guide.loc[most_nulls.index, 'agency_name']
###Output
ori
AKAST0100 456
KY0568000 192
NC0920100 72
NY0510100 72
NY0290000 72
MDBPD0000 24
FL0500000 24
FL0510000 24
FL0520000 24
OHCIP0000 24
dtype: int64
###Markdown
We will remove the 5 police departments that have multiple missing years (Alaska State Troopers, Louisville, Raleigh and two Long Island counties). These null values show the importance of using jurisdiction data rather than state data: Kentucky and North Carolina will have much lower crime rates in years where their major cities did not provide data. Even the one year where Cincinnati did not provide data may affect analysis of Ohio crime data.
###Code
ori_data_final=ori_data_pivot.drop(most_nulls.index[:5])
###Output
_____no_output_____
###Markdown
For the cases with only one missing value, we will interpolate.
###Code
ori_data_final=ori_data_final.interpolate(method='linear', axis=0)
###Output
_____no_output_____
###Markdown
To ease comparison, we will look at crime rates per 100,000 people.
###Code
for row_num in range(ori_data_final.shape[0]):
ori_data_final.iloc[row_num]*=100000/ori_guide.loc[ori_data_final.index[row_num], 'population']
###Output
_____no_output_____
###Markdown
Before going to TensorFlow from a Pandas dataframe, we need to reshape the data via NumPy to show all 4 axes. We will print the last 24 values of the first row in both Pandas and NumPy to confirm the reshaping is correct.
###Code
print(ori_data_final.iloc[0,-24:])
ori_np=ori_data_final.values.reshape(131,2,25,12)
ori_np[0,-1,-2:,:]
###Output
data_year offense
cleared 2018 aggravated-assault 507.238439
arson 4.979435
burglary 103.240284
homicide 4.647473
human-trafficing 0.000000
larceny 433.210839
motor-vehicle-theft 153.034634
property-crime 689.485757
rape 40.831366
rape-legacy 0.000000
robbery 91.621603
violent-crime 644.338880
2019 aggravated-assault 485.660887
arson 7.967096
burglary 95.605151
homicide 6.971209
human-trafficing 0.000000
larceny 373.125658
motor-vehicle-theft 88.965904
property-crime 557.696713
rape 20.581664
rape-legacy 0.000000
robbery 65.064616
violent-crime 578.278377
Name: AK0010100, dtype: float64
###Markdown
We import TensorFlow, convert the Numpy array and then normalize it.
###Code
import tensorflow as tf
ori_tf=tf.Variable(ori_np, dtype=tf.float32)
ori_norm_tf=tf.keras.utils.normalize(ori_tf)
###Output
_____no_output_____
###Markdown
Now we are going to perform Mean Shift Analysis in TensorFlow. The concept is to gradually shift each data point closer to its neighbors, until all the points converge with their neighbors. This implementation uses a Gaussian function with a given "bandwidth" to weight the nearer neighbors more heavily. We are implementing it in TensorFlow because the process is O(r^2*c), where r is the number of rows and c is the number of columns. The cluster_step function returns both the new data and the sum of the squared changes.
###Code
def cluster_step(data, bandwidth):
change=np.zeros(data.shape)
for x in range(data.shape[0]):
difference=tf.math.subtract(data,tf.broadcast_to(tf.gather(data,x) , data.shape))
distance=tf.scalar_mul(-0.5/bandwidth**2, tf.math.square(difference))
change[x]=tf.reduce_sum(tf.multiply(tf.exp(distance), difference), axis=0).numpy()
return tf.math.subtract(data,tf.constant(change, dtype=data.dtype)), np.square(change).sum()
###Output
_____no_output_____
###Markdown
We will keep clustering until the change is less than 0.01, which we also set as the bandwidth. We will also note the time.
###Code
from time import ctime
import numpy as np
dist_sq=1
count=0
new_ori_tf=ori_norm_tf
print('Start', ctime())
while dist_sq>0.01*0.01:
new_ori_tf, dist_sq=cluster_step(new_ori_tf, 0.01)
count+=1
if count%500==0:
print(count, dist_sq, ctime())
print('Done', dist_sq, ctime())
###Output
Start Wed Jan 13 20:39:02 2021
500 0.02276879082391721 Wed Jan 13 20:40:14 2021
1000 0.006877025073071565 Wed Jan 13 20:41:23 2021
1500 0.0019519331326280913 Wed Jan 13 20:42:33 2021
2000 0.0006848493218250168 Wed Jan 13 20:43:42 2021
2500 0.0004287393047034695 Wed Jan 13 20:44:51 2021
3000 0.00031875683125860536 Wed Jan 13 20:45:59 2021
3500 0.00024394157005310326 Wed Jan 13 20:47:07 2021
4000 0.00019083634949514338 Wed Jan 13 20:48:15 2021
4500 0.00015201034493367795 Wed Jan 13 20:49:22 2021
5000 0.00012297801430013615 Wed Jan 13 20:50:33 2021
5500 0.00010086351100298652 Wed Jan 13 20:51:41 2021
Done 9.997841607564483e-05 Wed Jan 13 20:51:44 2021
###Markdown
Now that TensorFlow has done the math-intensive part, we use sklearn to label the points based on where their means have shifted.
###Code
from sklearn.cluster import AffinityPropagation
X=new_ori_tf.numpy().reshape([131,600])
clustering = AffinityPropagation(damping=0.95, max_iter=1000).fit(X)
clustering.labels_
###Output
_____no_output_____
###Markdown
We group the jurisdictions by creating a list of lists.
###Code
lbl_lists=[]
drop_last2words=lambda s:' '.join(s.split(' ')[:-2])
for lbl in range(clustering.labels_.max()+1):
lbl_lists.append([x for x in range(ori_data_final.shape[0]) if clustering.labels_[x]==lbl])
print(lbl, [drop_last2words(ori_guide.loc[ori_data_final.index[x], 'agency_name']) for x in lbl_lists[-1]])
###Output
0 ['Oakland', 'Kern County', 'Los Angeles County', 'Long Beach', 'Los Angeles', 'Santa Ana', 'Riverside County', 'San Bernardino County', 'San Diego County', 'San Diego', 'Denver', 'Indianapolis', 'Detroit', 'St. Louis', 'Newark', 'Buffalo', 'Cleveland', 'Fort Bend County']
1 ['Anchorage', 'Aurora', 'Colorado Springs', 'Wichita', 'King County']
2 ['Tucson', 'New Castle County', 'Miami-Dade County', 'Jacksonville', 'Hillsborough County', 'Manatee County', 'Orange County', 'Orlando', 'Anne Arundel County', 'Albuquerque', 'Nashville Metropolitan']
3 ['Anaheim', 'Connecticut', 'Collier County', 'Escambia County', 'Tampa', 'Lee County', 'Marion County', 'Palm Beach County', 'Pasco County', 'Pinellas County', 'St. Petersburg', 'Polk County', 'Minneapolis', 'Henderson', 'Cincinnati', 'Greenville County', 'Bexar County', 'Hidalgo County', 'Pierce County', 'Snohomish County']
4 ['Washington', 'Miami', 'Atlanta', 'New Orleans', 'Boston', 'Baltimore', 'Philadelphia', 'Richland County']
5 ['Chicago', 'New York City']
6 ['Mobile', 'Phoenix', 'Bakersfield', 'Chula Vista', 'San Francisco', 'Cobb County', 'DeKalb County', 'Gwinnett County', 'Lexington', 'Jefferson County', 'Baltimore County', 'St. Paul', 'Durham', 'Greensboro', 'Portland', 'Fort Worth', 'Seattle']
7 ['Chandler', 'Mesa', 'Irvine', 'Sarasota County', 'Lincoln', 'Plano', 'Laredo', 'Salt Lake County Unified']
8 ['Maricopa County', 'Fresno', 'Riverside', 'Sacramento County', 'Sacramento', 'Stockton', 'San Jose', "Prince George's County", 'Kansas City', 'Charlotte-Mecklenburg', 'Jersey City', 'Las Vegas Metropolitan Police Department', 'Toledo', 'Columbus', 'Oklahoma City', 'Tulsa', 'Pittsburgh Bureau', 'Memphis', 'Harris County', 'Dallas', 'Houston', 'Milwaukee']
9 ['Pima County', 'St. Louis County', 'Omaha', 'Knox County', 'El Paso', 'Montgomery County', 'Corpus Christi', 'Arlington', 'Austin', 'San Antonio']
10 ['Honolulu', 'Fort Wayne', 'Howard County', 'Montgomery County', 'Chesterfield County', 'Fairfax County', 'Henrico County', 'Loudoun County', 'Prince William County', 'Virginia Beach']
###Markdown
We will create a table to see how reported aggravated assaults have changed over time in each of the groups. Aggravated assaults are a relatively common crime, so they are a good indicator of overall trends. We will take the average within each group in order to chart assaults over time.
###Code
lbl_agg_means=pd.concat([ori_data_final.iloc[lbl_lst,range(0,300,12)].mean(axis=0) for lbl_lst in lbl_lists], axis=1)
lbl_agg_means=lbl_agg_means.reset_index(level=0, drop=True).reset_index(level=1, drop=True)
names=[', '.join([drop_last2words(ori_guide.loc[ori_data_final.index[x],
'agency_name']) for x in lbl_lst]) for lbl_lst in lbl_lists]
lbl_agg_means.columns=pd.Series(names, name='Names')
lbl_agg_means=lbl_agg_means.sort_values(by=2019,axis=1, ascending=False)
lbl_agg_means.tail()
###Output
_____no_output_____
###Markdown
Now we will use bokeh to create a chart. We'll immediately see one group (brown) where assaults were highest in the late 1990s but have fallen by about half over the past 25 years. We also see another group (bright red) where this crime started low but increased.
###Code
from bokeh.models import ColumnDataSource, Legend
from bokeh.plotting import figure, output_file, show
from bokeh.io import output_notebook
output_notebook()
color_list=['brown','red', 'darkviolet', 'orange','yellow','olive','darkgreen','magenta','cyan','blue','black','gray']
source = ColumnDataSource(lbl_agg_means)
p = figure(plot_width=1200, plot_height=400, title='Assaults per 100,000 residents', tools=[])
for c,lbl in enumerate(lbl_agg_means.columns):
p.line(x='data_year', y=lbl, source=source, line_color=color_list[c])
show(p)
###Output
_____no_output_____
###Markdown
Now we'll create an interactive map to show these jurisdictions, using the population and geographic data from the ORI guide, which comes from the Department of Justice's National Justice Information System. Some locations give their coordinates in terms of latitude and longitude as whole numbers, without minutes or seconds, so they may appear slightly off on the map.
###Code
from math import sqrt
map_viz=ori_guide.loc[ori_data_final.index, ['agency_name', 'agency_type_name', 'icpsr_lat', 'icpsr_lng', 'population']]
map_viz=pd.concat([map_viz, pd.Series(clustering.labels_, index=ori_data_final.index, name='group')], axis=1)
map_viz['color']=map_viz['group'].apply(lambda c:color_list[c])
map_viz['radius']=map_viz['population'].apply(lambda x:sqrt(x)/1000)
map_viz['desc']=map_viz['agency_name'].apply(drop_last2words)
map_viz.head()
###Output
_____no_output_____
###Markdown
Using Bokeh, we create an interactive map where each jurisdiction is represented by a circle. The area of each circle is proportional to the population, and the color of the outline shows which group it belongs to. You can hover over the circles to see the jurisdiction. The background map is taken from Google Maps.
###Code
output_notebook()
color_list=['brown','red', 'darkviolet', 'orange','yellow','olive','darkgreen','magenta','cyan','blue','black','gray']
source = ColumnDataSource(map_viz)
TOOLTIPS=[('Agency:','@desc'),('Population', '@population')]
q = figure(plot_width=1200, plot_height=800, title='Crime patterns', y_range=(20,70), tooltips=TOOLTIPS)
q.image_url(url=['https://raw.githubusercontent.com/ARMargolis/UCRanalysis/main/Map_United_States.png'], x=-170, y=88,
w=108, h=70)
q.circle(x='icpsr_lng', y='icpsr_lat', source=source, fill_color=None, line_color='color', line_width=2, radius='radius')
q.axis.visible=False
show(q)
###Output
_____no_output_____ |
R_lab1_ML_Bay_Regresion/Pract_regression_student.ipynb | ###Markdown
Parametric ML and Bayesian regression Notebook version: 1.2 (Sep 28, 2018) Authors: Miguel Lázaro Gredilla Jerónimo Arenas García ([email protected]) Jesús Cid Sueiro ([email protected]) Changes: v.1.0 - First version. Python version v.1.1 - Python 3 compatibility. ML section. v.1.2 - Revised content. 2D visualization removed. Pending changes:
###Code
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import scipy.io # To read matlab files
from scipy import spatial
import pylab
pylab.rcParams['figure.figsize'] = 8, 5
###Output
_____no_output_____
###Markdown
1. IntroductionIn this exercise the student will review several key concepts of Maximum Likelihood and Bayesian regression. To do so, we will assume the regression model$$s = f({\bf x}) + \varepsilon$$where $s$ is the output corresponding to input ${\bf x}$, $f({\bf x})$ is an unobservable latent function, and $\varepsilon$ is white zero-mean Gaussian noise, i.e., $$\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2).$$In addition, we will assume that the latent function is *linear in the parameters*$$f({\bf x}) = {\bf w}^\top {\bf z}$$where ${\bf z} = T({\bf x})$ is a possibly non-linear transformation of the input. Along this notebook, we will explore different types of transformations.Also, we will assume an a priori distribution for ${\bf w}$ given by$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ Practical considerations - Though sometimes unavoidable, it is recommended not to use explicit matrix inversion whenever possible. For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using python $\mbox{numpy.linalg.lstsq}$ function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$. - Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result is $2\sum_i \log([{\mathbf L}]_{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L}^\top {\mathbf L}$), and $[{\mathbf L}]_{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$. - Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite, turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to palliate this problem is by adding a small number (such as $10^{-6}$) to the diagonal of such matrix. Reproducibility of computationsTo guarantee the exact reproducibility of the experiments, it may be useful to start your code initializing the seed of the random numbers generator, so that you can compare your results with the ones given in this notebook.
###Code
np.random.seed(3)
###Output
_____no_output_____
###Markdown
2. Data generation with a linear modelDuring this section, we will assume affine transformation$${\bf z} = T({\bf x}) = (1, {\bf x}^\top)^\top$$.The a priori distribution of ${\bf w}$ is assumed to be$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ 2.1. Synthetic data generationFirst, we are going to generate synthetic data (so that we have the ground-truth model) and use them to make sure everything works correctly and our estimations are sensible.* [1] Set parameters $\sigma_p^2 = 2$ and $\sigma_{\varepsilon}^2 = 0.2$. To do so, define variables `sigma_p` and `sigma_eps` containing the respective standard deviations.
###Code
# Parameter settings
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
###Output
_____no_output_____
###Markdown
* [2] Generate a weight vector `true_w` with two elements from the *a priori* distribution of the weights. This vector determines the regression line that we want to find (i.e., the optimum unknown solution).
###Code
# Data dimension:
dim_x = 2
# Generate a parameter vector taking a random sample from the prior distributions
# (the np.random module may be usefull for this purpose)
# true_w = <FILL IN>
print('The true parameter vector is:')
print(true_w)
###Output
_____no_output_____
###Markdown
* [3] Generate an input matrix ${\bf X}$ (in this case, a single column) containing 20 samples with equally spaced values between 0 and 2 (method `linspace` from numpy can be useful for this)
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [4] Finally, generate the output vector ${\bf s}$ as the product ${\bf Z} \ast \text{true_w}$ plus Gaussian noise of pdf ${\cal N}(0,\sigma_\varepsilon^2)$ at each element.
###Code
# Expand input matrix with an all-ones column
col_1 = np.ones((n_points, 1))
# Z = <FILL IN>
# Generate values of the target variable
# s = <FILL IN>
print(s)
###Output
_____no_output_____
###Markdown
2.2. Data visualization * Plot the generated data. You will notice a linear behavior, but the presence of noise makes it hard to estimate precisely the original straight line that generated them (which is stored in `true_w`).
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
3. Maximum Likelihood (ML) regression 3.1. Likelihood function * [1] Define a function `predict(w, Z)` that computes the linear predictions for all inputs in data matrix `Z` (a 2-D numpy arry), for a given parameter vector `w` (a 1-D numpy array). The output should be a 1-D array. Test your function with the given dataset and `w = [0.4, 0.7]`
###Code
# <SOL>
# </SOL>
# Print predictions
print(p)
###Output
_____no_output_____
###Markdown
* [2] Define a function `sse(w, Z, s)` that computes the sum of squared errors (SSE) for the linear prediction with parameters `w ` (1D numpy array), inputs `Z ` (2D numpy array) and targets `s ` (1D numpy array). Using this function, compute the SSE of the true parameter vector in `true_w`.
###Code
# <SOL>
# </SOL>
print(" The SSE is: {0}".format(SSE))
###Output
_____no_output_____
###Markdown
* [3] Define a function `likelihood(w, Z, s, sigma_eps)` that computes the likelihood of parameter vector `w` for a given dataset in matrix `Z` and vector `s`, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the `sse` function defined above. Using this function, compute the likelihood of the true parameter vector in `true_w`.
###Code
# <SOL>
# </SOL>
print("The likelihood of the true parameter vector is {0}".format(L_w_true))
###Output
_____no_output_____
###Markdown
* [4] Define a function `LL(w, Z, s, sigma_eps)` that computes the log-likelihood of parameter vector `w` for a given dataset in matrix `Z` and vector `s`, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the `likelihood` function defined above. However, for higher numerical precision, implementing a direct expression for the log-likelihood is recommended. Using this function, compute the log-likelihood of the true parameter vector in `true_w`.
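A possible sketch of such a direct expression (assuming the `sse` function from step [2] and `numpy` imported as `np`):

```python
def LL(w, Z, s, sigma_eps):
    # Gaussian log-likelihood: -K/2 * log(2*pi*sigma_eps^2) - SSE / (2*sigma_eps^2)
    K = Z.shape[0]
    return (-K / 2 * np.log(2 * np.pi * sigma_eps**2)
            - sse(w, Z, s) / (2 * sigma_eps**2))
```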
###Code
# <SOL>
# </SOL>
print("The log-likelihood of the true parameter vector is {0}".format(LL_w_true))
###Output
_____no_output_____
###Markdown
3.2. ML estimate* [1] Compute the ML estimate of ${\bf w}$ given the data. Remember that using `np.linalg.lstsq` is a better option than a direct implementation of the formula of the ML estimate, which would involve a matrix inversion.
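One way to obtain this without an explicit matrix inversion (a sketch, using the data matrix `Z` and targets `s` built above):

```python
# np.linalg.lstsq solves the overdetermined system Z w = s in the least-squares sense,
# which coincides with the ML estimate under Gaussian noise
w_ML, _, _, _ = np.linalg.lstsq(Z, s, rcond=None)
```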
###Code
# <SOL>
# </SOL>
print(w_ML)
###Output
_____no_output_____
###Markdown
* [2] Compute the maximum likelihood, and the maximum log-likelihood.
###Code
# <SOL>
# </SOL>
print('Maximum likelihood: {0}'.format(L_w_ML))
print('Maximum log-likelihood: {0}'.format(LL_w_ML))
###Output
_____no_output_____
###Markdown
Just as an illustration, the code below generates a set of points in a two-dimensional grid going from $(-2.5\sigma_p, -2.5\sigma_p)$ to $(2.5\sigma_p, 2.5\sigma_p)$, computes the log-likelihood at all these points and visualizes them using a 2-dimensional plot. You can see the difference between the true value of the parameter ${\bf w}$ (black) and the ML estimate (red). If they are not quite close to each other, maybe you have made some mistake in the above exercises:
###Code
# First construct a grid of (theta0, theta1) parameter pairs and their
# corresponding cost function values.
N = 200 # Number of points along each dimension.
w0_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
w1_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
Lw = np.zeros((N,N))
# Fill Lw with the likelihood values
for i, w0i in enumerate(w0_grid):
for j, w1j in enumerate(w1_grid):
we = np.array((w0i, w1j))
Lw[i, j] = LL(we, Z, s, sigma_eps)
WW0, WW1 = np.meshgrid(w0_grid, w1_grid, indexing='ij')
contours = plt.contour(WW0, WW1, Lw, 20)
plt.figure
plt.clabel(contours)
plt.scatter([true_w[0]]*2, [true_w[1]]*2, s=[50,10], color=['k','w'])
plt.scatter([w_ML[0]]*2, [w_ML[1]]*2, s=[50,10], color=['r','w'])
plt.xlabel('$w_0$')
plt.ylabel('$w_1$')
plt.show()
###Output
_____no_output_____
###Markdown
3.3. [OPTIONAL]: Convergence of the ML estimate for the true model. Note that the likelihood of the true parameter vector is, in general, smaller than that of the ML estimate. However, as the sample size increases, both should converge to the same value.* [1] Generate a longer dataset, with $K_\text{max}=2^{16}$ samples, uniformly spaced between 0 and 2. Store it in the 2D-array `X2` and the 1D-array `s2`
###Code
# Parameter settings
x_min = 0
x_max = 2
n_points = 2**16
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [2] Compute the ML estimate based on the first $2^k$ samples, for $k=2,3,\ldots, 15$. For each value of $k$ compute the squared euclidean distance between the true parameter vector and the ML estimate. Represent it graphically (using a logarithmic scale in the y-axis).
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
4. ML estimation with real data. The stocks dataset.Once our code has been tested on synthetic data, we will use it with real data. 4.1. Dataset * [1] Load the dataset file provided with this notebook, corresponding to the evolution of the stocks of 10 airline companies. (The dataset is an adaptation of the Stock dataset, which in turn was taken from the StatLib Repository)
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
Parametric ML and Bayesian regression Notebook version: 1.2 (Sep 28, 2018) Authors: Miguel Lázaro Gredilla Jerónimo Arenas García ([email protected]) Jesús Cid Sueiro ([email protected]) Changes: v.1.0 - First version. Python version v.1.1 - Python 3 compatibility. ML section. v.1.2 - Revised content. 2D visualization removed. Pending changes:
###Code
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import scipy.io # To read matlab files
from scipy import spatial
import pylab
pylab.rcParams['figure.figsize'] = 8, 5
###Output
_____no_output_____
###Markdown
1. IntroductionIn this exercise the student will review several key concepts of Maximum Likelihood and Bayesian regression. To do so, we will assume the regression model$$s = f({\bf x}) + \varepsilon$$where $s$ is the output corresponding to input ${\bf x}$, $f({\bf x})$ is an unobservable latent function, and $\varepsilon$ is white zero-mean Gaussian noise, i.e., $$\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2).$$In addition, we will assume that the latent function is *linear in the parameters*$$f({\bf x}) = {\bf w}^\top {\bf z}$$where ${\bf z} = T({\bf x})$ is a possibly non-linear transformation of the input. Along this notebook, we will explore different types of transformations.Also, we will assume an a priori distribution for ${\bf w}$ given by$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ Practical considerations - Though sometimes unavoidable, it is recommended not to use explicit matrix inversion whenever possible. For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using python $\mbox{numpy.linalg.lstsq}$ function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$. - Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result is $2\sum_i \log([{\mathbf L}]_{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L}^\top {\mathbf L}$), and $[{\mathbf L}]_{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$. - Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite, turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to palliate this problem is by adding a small number (such as $10^{-6}$) to the diagonal of such matrix. Reproducibility of computationsTo guarantee the exact reproducibility of the experiments, it may be useful to start your code initializing the seed of the random numbers generator, so that you can compare your results with the ones given in this notebook.
###Code
np.random.seed(3)
###Output
_____no_output_____
###Markdown
2. Data generation with a linear modelDuring this section, we will assume affine transformation$${\bf z} = T({\bf x}) = (1, {\bf x}^\top)^\top$$.The a priori distribution of ${\bf w}$ is assumed to be$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ 2.1. Synthetic data generationFirst, we are going to generate synthetic data (so that we have the ground-truth model) and use them to make sure everything works correctly and our estimations are sensible.* [1] Set parameters $\sigma_p^2 = 2$ and $\sigma_{\varepsilon}^2 = 0.2$. To do so, define variables `sigma_p` and `sigma_eps` containing the respective standard deviations.
###Code
# Parameter settings
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
###Output
_____no_output_____
###Markdown
* [2] Generate a weight vector $\mbox{true_w}$ with two elements from the a priori distribution of the weights. This vector determines the regression line that we want to find (i.e., the optimum unknown solution).
###Code
# Data dimension:
dim_x = 2
# Generate a parameter vector taking a random sample from the prior distributions
# (the np.random module may be usefull for this purpose)
# true_w = <FILL IN>
print('The true parameter vector is:')
print(true_w)
###Output
The true parameter vector is:
[[2.52950265]
[0.61731815]]
###Markdown
* [3] Generate an input matrix ${\bf X}$ (in this case, a single column) containing 20 samples with equally spaced values between 0 and 2 (method `linspace` from numpy can be useful for this)
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [4] Finally, generate the output vector ${\bf s}$ as the product ${\bf Z} \ast \text{true_w}$ plus Gaussian noise of pdf ${\cal N}(0,\sigma_\varepsilon^2)$ at each element.
###Code
# Expand input matrix with an all-ones column
col_1 = np.ones((n_points, 1))
# Z = <FILL IN>
# Generate values of the target variable
# s = <FILL IN>
###Output
_____no_output_____
###Markdown
2.2. Data visualization * Plot the generated data. You will notice a linear behavior, but the presence of noise makes it hard to estimate precisely the original straight line that generated them (which is stored in $\mbox{true_w}$).
###Code
# <SOL>
# </SOL>
###Output
[[2.57265762]
[1.76110423]
[2.53541259]
[2.56579218]
[2.75242296]
[2.57400371]
[2.89979171]
[2.77095026]
[2.46177133]
[3.50994552]
[3.57344864]
[4.0088364 ]
[3.33164867]
[3.19327656]
[3.19534227]
[2.81260983]
[4.00852445]
[3.14176482]
[3.16918917]
[3.67216952]]
###Markdown
3. Maximum Likelihood (ML) regression 3.1. Likelihood function * [1] Define a function `predict(w, Z)` that computes the linear predictions for all inputs in data matrix `Z` (a 2-D numpy arry), for a given parameter vector `w` (a 1-D numpy array). The output should be a 1-D array. Test your function with the given dataset and `w = [0.4, 0.7]`
###Code
# <SOL>
# </SOL>
# Print predictions
print(p)
###Output
[0.4 0.47368421 0.54736842 0.62105263 0.69473684 0.76842105
0.84210526 0.91578947 0.98947368 1.06315789 1.13684211 1.21052632
1.28421053 1.35789474 1.43157895 1.50526316 1.57894737 1.65263158
1.72631579 1.8 ]
###Markdown
* [2] Define a function `sse(w, Z, s)` that computes the sum of squared errors (SSE) for the linear prediction with parameters `w ` (1D numpy array), inputs `Z ` (2D numpy array) and targets `s ` (1D numpy array). Using this function, compute the SSE of the true parameter vector in `true_w`.
###Code
# <SOL>
# </SOL>
print(" The SSE is: {0}".format(SSE))
###Output
The SSE is: 3.4003613068704324
###Markdown
* [3] Define a function `likelihood(w, Z, s, sigma_eps)` that computes the likelihood of parameter vector `w` for a given dataset in matrix `Z` and vector `s`, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the `sse` function defined above. Using this function, compute the likelihood of the true parameter vector in `true_w`.
###Code
# <SOL>
# </SOL>
print("The likelihood of the true parameter vector is {0}".format(L_w_true))
###Output
The likelihood of the true parameter vector is 2.0701698520505036e-05
###Markdown
* [4] Define a function `LL(w, Z, s, sigma_eps)` that computes the log-likelihood of parameter vector `w` for a given dataset in matrix `Z` and vector `s`, assuming Gaussian noise with variance $\sigma_\epsilon^2$. Note that this function can use the `likelihood` function defined above. However, for higher numerical precision, implementing a direct expression for the log-likelihood is recommended. Using this function, compute the log-likelihood of the true parameter vector in `true_w`.
###Code
# <SOL>
# </SOL>
print("The log-likelihood of the true parameter vector is {0}".format(LL_w_true))
###Output
The log-likelihood of the true parameter vector is -10.785294806928531
###Markdown
3.2. ML estimate* [1] Compute the ML estimate of ${\bf w}$ given the data. Remember that using `np.linalg.lstsq` is a better option than a direct implementation of the formula of the ML estimate, which would involve a matrix inversion.
###Code
# <SOL>
# </SOL>
print(w_ML)
###Output
[[2.39342127]
[0.63211186]]
###Markdown
* [2] Compute the maximum likelihood, and the maximum log-likelihood.
###Code
# <SOL>
# </SOL>
print('Maximum likelihood: {0}'.format(L_w_ML))
print('Maximum log-likelihood: {0}'.format(LL_w_ML))
###Output
Maximum likelihood: 4.3370620534450416e-05
Maximum log-likelihood: -10.045728292300282
###Markdown
Just as an illustration, the code below generates a set of points in a two-dimensional grid going from $(-2.5\sigma_p, -2.5\sigma_p)$ to $(2.5\sigma_p, 2.5\sigma_p)$, computes the log-likelihood at all these points and visualizes them using a 2-dimensional plot. You can see the difference between the true value of the parameter ${\bf w}$ (black) and the ML estimate (red). If they are not quite close to each other, maybe you have made some mistake in the above exercises:
###Code
# First construct a grid of (theta0, theta1) parameter pairs and their
# corresponding cost function values.
N = 200 # Number of points along each dimension.
w0_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
w1_grid = np.linspace(-2.5*sigma_p, 2.5*sigma_p, N)
Lw = np.zeros((N,N))
# Fill Lw with the likelihood values
for i, w0i in enumerate(w0_grid):
for j, w1j in enumerate(w1_grid):
we = np.array((w0i, w1j))
Lw[i, j] = LL(we, Z, s, sigma_eps)
WW0, WW1 = np.meshgrid(w0_grid, w1_grid, indexing='ij')
contours = plt.contour(WW0, WW1, Lw, 20)
plt.figure
plt.clabel(contours)
plt.scatter([true_w[0]]*2, [true_w[1]]*2, s=[50,10], color=['k','w'])
plt.scatter([w_ML[0]]*2, [w_ML[1]]*2, s=[50,10], color=['r','w'])
plt.xlabel('$w_0$')
plt.ylabel('$w_1$')
plt.show()
###Output
_____no_output_____
###Markdown
3.3. [OPTIONAL]: Convergence of the ML estimate for the true model. Note that the likelihood of the true parameter vector is, in general, smaller than that of the ML estimate. However, as the sample size increases, both should converge to the same value.* [1] Generate a longer dataset, with $K_\text{max}=2^{16}$ samples, uniformly spaced between 0 and 2. Store it in the 2D-array `X2` and the 1D-array `s2`
###Code
# Parameter settings
x_min = 0
x_max = 2
n_points = 2**16
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [2] Compute the ML estimate based on the first $2^k$ samples, for $k=2,3,\ldots, 15$. For each value of $k$ compute the squared euclidean distance between the true parameter vector and the ML estimate. Represent it graphically (using a logarithmic scale in the y-axis).
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
4. ML estimation with real data. The stocks dataset.Once our code has been tested on synthetic data, we will use it with real data. 4.1. Dataset * [1] Load data corresponding to the evolution of the stocks of 10 airline companies. This data set is an adaptation of the Stock dataset from http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html, which in turn was taken from the StatLib Repository, http://lib.stat.cmu.edu/
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [2] Normalize the data so all training sample components have zero mean and unit standard deviation. Store the normalized training and test samples in 2D numpy arrays `Xtrain` and `Xtest`, respectively.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
4.2. Polynomial ML regression with a single variableIn this first part, we will work with the first component of the input only. * [1] Take the first column of `Xtrain` and `Xtest` into arrays `X0train` and `X0test`, respectively.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [2] Visualize, in a single scatter plot, the target variable (in the vertical axes) versus the input variable, using the training data
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [3] Since the data have been taken from a real scenario, we do not have any *true* mathematical model of the process that generated the data. Thus, we will explore different models, trying to take the one that best fits the training data. Assume a polynomial model given by $$ {\bf z} = T({\bf x}) = (1, x_0, x_0^2, \ldots, x_0^{g-1})^\top. $$ Compute matrices `Ztrain` and `Ztest` that result from applying the polynomial transformation to the inputs in `X0train` and `X0test` for a model with degree `g_max = 50`. The `np.vander()` method may be useful for this. Note that, even though `X0train` and `X0test` were normalized, you will need to re-normalize the transformed variables. Note, also, that the first component of the transformed variables, which must be equal to 1, should not be normalized. To simplify the job, the code below defines a normalizer class that normalizes all components except the first one.
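A possible sketch, assuming `g_max` and the `Normalizer` class defined in the cell below have already been run:

```python
# Polynomial expansion: columns 1, x, x^2, ..., x^g_max
Ztrain = np.vander(X0train.flatten(), g_max + 1, increasing=True)
Ztest = np.vander(X0test.flatten(), g_max + 1, increasing=True)

# Normalize every column except the all-ones one, reusing the training statistics for the test set
nm = Normalizer()
Ztrain = nm.fit_transform(Ztrain)
Ztest = nm.transform(Ztest)
```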
###Code
# The following normalizer will be helpful: it normalizes all components of the
# input matrix, unless for the first one (the "all-one's" column) that
# should not be normalized
class Normalizer():
"""
A data normalizer. Usage:
nm = Normalizer()
Z = nm.fit_transform(X) # to estimate the normalization mean and variance and normalize
# all columns of X unless the first one
Z2 = nm.transform(X) # to normalize X without recomputing mean and variance parameters
"""
def fit_transform(self, Z):
self.mean_z = np.mean(Z, axis=0)
self.mean_z[0] = 0
self.std_z = np.std(Z, axis=0)
self.std_z[0] = 1
Zout = (Z - self.mean_z) / self.std_z
# sc = StandardScaler()
# Ztrain = sc.fit_transform(Ztrain)
return Zout
def transform(self, Z):
return (Z - self.mean_z) / self.std_z
# Ztest = sc.transform(Ztest)
# Set the maximum degree of the polynomial model
g_max = 50
# Compute polynomial transformation for train and test data
# <SOL>
# </SOL>
# Normalize training and test data
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [4] Fit a polynomial model with degree $g$ for $g$ ranging from 0 to `g_max`. Store the weights of all models in a list of weight vectors, named `models`, such that `models[g]` returns the parameters estimated for the polynomial model with degree $g$. We will use these models in the following sections.
###Code
# IMPORTANT NOTE: Use np.linalg.lstsq() with option rcond=-1 for better precision.
# HINT: Take into account that the data matrix required to fit a polynomial model
# with degree g consists of the first g+1 columns of Ztrain.
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [5] Plot the polynomial models with degrees 1, 3 and `g_max`, superimposed over a scatter plot of the training data.
###Code
# Create a grid of samples along the x-axis.
n_points = 10000
xmin = min(X0train)
xmax = max(X0train)
X = np.linspace(xmin, xmax, n_points)[:, np.newaxis]
# Apply the polynomial transformation to the inputs with degree g_max.
# <SOL>
# </SOL>
# Plot training points
plt.plot(X0train, strain, 'b.', markersize=4);
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
plt.xlim(xmin, xmax)
plt.ylim(30, 65)
# Plot the regresion function for the required degrees
# <SOL>
# </SOL>
plt.show()
###Output
_____no_output_____
###Markdown
* [6] Taking `sigma_eps = 1`, show, in the same plot: - The log-likelihood function corresponding to each model, as a function of $g$, computed over the training set. - The log-likelihood function corresponding to each model, as a function of $g$, computed over the test set.
###Code
LLtrain = []
LLtest = []
sigma_eps = 1
# Fill LLtrain and LLtest with the log-likelihood values for all values of
# g ranging from 0 to g_max (included).
# <SOL>
# </SOL>
plt.figure()
plt.plot(range(g_max + 1), LLtrain, label='Training')
plt.plot(range(g_max + 1), LLtest, label='Test')
plt.xlabel('g')
plt.ylabel('Log-likelihood')
plt.xlim(0, g_max)
plt.ylim(-5e4,100)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
* [7] You may have seen that the likelihood function over the training data grows with the degree of the polynomial. However, large values of $g$ produce strong overfitting. For this reason, $g$ cannot be selected with the same data used to fit the model. Parameters like $g$ are usually called *hyperparameters* and need to be selected by cross-validation. Select the optimal value of $g$ by 10-fold cross-validation. To do so, the cross-validation methods provided by sklearn will simplify this task.
###Code
from sklearn.model_selection import KFold
# Select the number of splits
n_sp = 10
# Create a cross-validator object
kf = KFold(n_splits=n_sp)
# Split data from Ztrain
kf.get_n_splits(Ztrain)
LLmean = []
for g in range(g_max + 1):
# Compute the cross-validation Likelihood
LLg = 0
for tr_index, val_index in kf.split(Ztrain):
# Take the data matrices for the current split
Z_tr, Z_val = Ztrain[tr_index, 0:g+1], Ztrain[val_index, 0:g+1]
s_tr, s_val = strain[tr_index], strain[val_index]
# Train with the current training splits.
# w_MLk, _, _, _ = np.linalg.lstsq(<FILL IN>)
# Compute the validation likelihood for this split
# LLg += LL(<FILL IN>)
LLmean.append(LLg / n_sp)
# Take the optimal value of g and its correpsponding likelihood
# g_opt = <FILL IN>
# LLmax = <FILL IN>
print("The optimal degree is: {}".format(g_opt))
print("The maximum cross-validation likelihood is {}".format(LLmax))
plt.figure()
plt.plot(range(g_max + 1), LLmean, label='Training')
plt.plot([g_opt], [LLmax], 'g.', markersize = 20)
plt.xlabel('g')
plt.ylabel('Log-likelihood')
plt.xlim(0, g_max)
plt.ylim(-1e3, LLmax + 100)
plt.legend()
plt.show()
###Output
The optimal degree is: 14
The maximum cross-validation likelihood is -365.90322425522174
###Markdown
* [8] You may have observed the overfitting effect for large values of $g$. The best degree of the polynomial may depend on the size of the training set. Take a smaller dataset by running, after the code in section 4.2[1]: + `X0train = Xtrain[0:55, [0]]` + `X0test = Xtest[0:100, [0]]` Then, re-run the whole code after that. What is the optimal value of $g$ in that case?
###Code
# You do not need to code here. Just copy the value of g_opt obtained after re-running the code
# g_opt_new = <FILL IN>
print("The optimal value of g for the 100-sample training set is {}".format(g_opt_new))
###Output
The optimal value of g for the 100-sample training set is 7
###Markdown
* [9] [OPTIONAL] Note that the model coefficients do not depend on $\sigma_\epsilon^2$. Therefore, we do not need to care about its value for polynomial ML regression. However, the log-likelihood function does depend on $\sigma_\epsilon^2$. Actually, we can estimate its value by cross-validation. By simple differentiation, it is not difficult to see that the ML estimate of $\sigma_\epsilon^2$ is $$ \widehat{\sigma}_\epsilon^2 = \frac{1}{K} \|{\bf s}-{\bf Z}{\bf w}\|^2 $$ Plot the log-likelihood function corresponding to the polynomial model with degree 3 for different values of $\sigma_\epsilon^2$, for the training set, and verify that the value computed with the above formula is actually optimal.
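A small sketch of the corresponding computation (assuming the `models` list from section 4.2[4] and the `sse` function from section 3.1):

```python
g = 3
K = len(strain)
# ML estimate of the noise variance for the degree-3 model: mean squared residual
sigma2_eps_ML = sse(models[g], Ztrain[:, :g+1], strain) / K
```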
###Code
# Explore the values of sigma logarithmically spaced according to the following array
sigma_eps = np.logspace(-0.1, 5, num=50)
g = 3
K = len(strain)
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [10] [OPTIONAL] For the selected model: - Plot the regression function over the scatter plot of the data. - Compute the log-likelihood and the SSE over the test set.
###Code
# Note that you can easily adapt your code in 4.2[5]
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
5. Bayesian regression. The stock dataset. In this section we will keep using the first component of the data from the stock dataset, assuming the same kind of polynomial model. We will explore the potential advantages of using a Bayesian model. To do so, we will assume that the a priori distribution of ${\bf w}$ is$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ 5.1. Hyperparameter selection. Since the values $\sigma_p$ and $\sigma_\varepsilon$ are no longer known, a first rough estimation is needed (we will soon see how to estimate these values in a principled way). To this end, we will adjust them using the ML solution to the regression problem with g=10: - $\sigma_p^2$ will be taken as the average of the squared values of ${\hat {\bf w}}_{ML}$ - $\sigma_\varepsilon^2$ will be taken as two times the average of the squared residuals when using ${\hat {\bf w}}_{ML}$
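One way to compute these rough estimates (a sketch, assuming `Ztrain` and `strain` from the previous sections):

```python
gb = 10
# ML (least-squares) fit of the degree-gb polynomial model
w_LS, _, _, _ = np.linalg.lstsq(Ztrain[:, :gb+1], strain, rcond=-1)
# sigma_p^2: average of the squared ML weights
sigma_p = np.sqrt(np.mean(w_LS**2))
# sigma_eps^2: two times the mean squared residual of the ML fit
res = strain - Ztrain[:, :gb+1].dot(w_LS)
sigma_eps = np.sqrt(2 * np.mean(res**2))
```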
###Code
# Degree for bayesian regression
gb = 10
# w_LS, residuals, rank, s = <FILL IN>
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
print(sigma_p)
print(sigma_eps)
###Output
57.218887890322954
3.938634177822986
###Markdown
5.2. Posterior pdf of the weight vector. In this section we will visualize the prior and the posterior distribution functions. First, we will restore the dataset at the beginning of this notebook: * [1] Define a function `posterior_stats(Z, s, sigma_eps, sigma_p)` that computes the parameters of the posterior coefficient distribution given the dataset in matrix `Z` and vector `s`, for given values of the hyperparameters. This function should return the posterior mean, the covariance matrix and the precision matrix (the inverse of the covariance matrix). Test the function on the given dataset, using the polynomial model with degree `gb`.
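A sketch of the standard Gaussian posterior for this model (prior ${\cal N}({\bf 0}, \sigma_p^2\,{\bf I})$ and Gaussian noise):

```python
def posterior_stats(Z, s, sigma_eps, sigma_p):
    # Posterior precision, covariance and mean of the weight vector
    dim_w = Z.shape[1]
    iCov_w = Z.T.dot(Z) / sigma_eps**2 + np.eye(dim_w) / sigma_p**2
    Cov_w = np.linalg.inv(iCov_w)
    mean_w = Cov_w.dot(Z.T).dot(s) / sigma_eps**2
    return mean_w, Cov_w, iCov_w
```

Here the explicit inverse is hard to avoid because the covariance matrix itself is returned; if only the mean were needed, `np.linalg.lstsq(iCov_w, Z.T.dot(s) / sigma_eps**2)` would be a numerically preferable alternative.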
###Code
# <SOL>
# </SOL>
mean_w, Cov_w, iCov_w = posterior_stats(Ztrain[:, :gb+1], strain, sigma_eps, sigma_p)
print('mean_w = {0}'.format(mean_w))
# print('Cov_w = {0}'.format(Cov_w))
# print('iCov_w = {0}'.format(iCov_w))
###Output
mean_w = [[ 47.06072634]
[ -5.43972905]
[-19.6947545 ]
[ 40.06018631]
[ 49.95913747]
[-75.35809116]
[-44.86743888]
[ 67.27934244]
[ 10.79541196]
[-21.80928632]
[ 1.5718318 ]]
###Markdown
* [2] Define a function `gauss_pdf(w, mean_w, iCov_w)` that computes the Gaussian pdf with mean `mean_w` and precision matrix `iCov_w`. Use this function to compute and compare the posterior pdf values at the ML estimate and at the MSE estimate, given the dataset.
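A possible sketch, written directly in terms of the precision matrix, is:

```python
# Sketch: multivariate Gaussian pdf parameterized by its precision matrix
def gauss_pdf(w, mean_w, iCov_w):
    d = mean_w.shape[0]
    diff = (w - mean_w).reshape(-1, 1)
    quad = float(diff.T @ iCov_w @ diff)
    return np.sqrt(np.linalg.det(iCov_w) / (2 * np.pi) ** d) * np.exp(-0.5 * quad)
```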
###Code
# <SOL>
# </SOL>
print('p(w_ML | s) = {0}'.format(gauss_pdf(w_ML, mean_w, iCov_w)))
print('p(w_MSE | s) = {0}'.format(gauss_pdf(mean_w, mean_w, iCov_w)))
###Output
p(w_ML | s) = 4.805542251827298e-06
p(w_MSE | s) = 2.1783427470055817e-05
###Markdown
* [3] [OPTIONAL] Define a function `log_gauss_pdf(w, mean_w, iCov_w)` that computes the log of the Gaussian pdf with mean `mean_w` and precision matrix `iCov_w`. Use this function to compute and compare the log of the posterior pdf at the ML estimate and at the MSE estimate, given the dataset.
###Code
# <SOL>
# </SOL>
print('log(p(w_ML | s)) = {0}'.format(log_gauss_pdf(w_ML, mean_w, iCov_w)))
print('log(p(w_MSE | s)) = {0}'.format(log_gauss_pdf(mean_w, mean_w, iCov_w)))
###Output
log(p(w_ML | s)) = -12.245740670332317
log(p(w_MSE | s)) = -10.73436108506931
###Markdown
5.3 Sampling regression curves from the posteriorIn this section we will plot the functions corresponding to different samples drawn from the posterior distribution of the weight vector. To this end, we will first generate an input dataset of equally spaced samples. We will compute the functions at these points
###Code
# Definition of the interval for representation purposes
xmin = min(X0train)
xmax = max(X0train)
n_points = 100 # number of grid points used to represent the regression curves
# Build the input data matrix:
# Input values for representation of the regression curves
X = np.linspace(xmin, xmax, n_points)
Z = np.vander(X.flatten(), g_max+1, increasing=True)
Z = nm.transform(Z)[:, :gb+1]
###Output
_____no_output_____
###Markdown
Generate random vectors ${\bf w}_l$ with $l = 1,\dots, 50$, from the posterior density of the weights, $p({\bf w}\mid{\bf s})$, and use them to generate 50 polynomial regression functions, $f({\bf x}^\ast) = {{\bf z}^\ast}^\top {\bf w}_l$, evaluated over the grid of input values built in the previous cell.Plot the curve corresponding to the model with the posterior mean parameters, along with the $50$ generated curves and the original samples, all in the same plot. As you can check, the Bayesian model is not providing a single answer, but instead a density over models, from which we have extracted 50 options.
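The two `<FILL IN>` lines in the next cell can be completed, for instance, as in this sketch (it assumes `mean_w`, `Cov_w`, `Z` and `X` from the previous cells):

```python
# Sketch for the body of the loop in the next cell
w_l = np.random.multivariate_normal(mean_w.flatten(), Cov_w)   # sample from the posterior
p_l = Z @ w_l                                                  # regression curve for this sample
plt.plot(X.flatten(), p_l, 'c:')
```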
###Code
# Drawing weights from the posterior
for l in range(50):
# Generate a random sample from the posterior distribution (you can use np.random.multivariate_normal())
# w_l = <FILL IN>
# Compute predictions for the inputs in the data matrix
# p_l = <FILL IN>
# Plot prediction function
# plt.plot(<FILL IN>, 'c:');
# Plot the training points
plt.plot(X0train, strain,'b.',markersize=2);
plt.xlim((xmin, xmax));
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
###Output
_____no_output_____
###Markdown
5.4. Plotting the confidence intervalsOn top of the previous figure (copy here your code from the previous section), plot functions$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$$and$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}}$$(i.e., the posterior mean of $f({\bf x}^\ast)$, as well as two standard deviations above and below).It is possible to show analytically that this region comprises $95.45\%$ probability of the posterior probability $p(f({\bf x}^\ast)\mid {\bf s})$ at each ${\bf x}^\ast$.
###Code
# Note that you can re-use code from sect. 4.2 to solve this exercise
# Plot the training points
# plt.plot(X, Z.dot(true_w), 'b', label='True model', linewidth=2);
plt.plot(X0train, strain,'b.',markersize=2);
plt.xlim(xmin, xmax);
# </SOL>
# Plot the posterior mean.
# mean_s = <FILL IN>
plt.plot(X, mean_s, 'g', label='Predictive mean', linewidth=2);
# Plot the posterior mean +- two standard deviations
# std_f = <FILL IN>
# Plot the confidence intervals.
# To do so, you can use the fill_between method
plt.fill_between(X.flatten(), (mean_s - 2*std_f).flatten(), (mean_s + 2*std_f).flatten(),
alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=2)
# plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
plt.show()
###Output
_____no_output_____
###Markdown
Plot now ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ (note that the posterior means of $f({\bf x}^\ast)$ and $s({\bf x}^\ast)$ are the same, so there is no need to plot it again). Notice that $95.45\%$ of observed data lie now within the newly designated region. These new limits establish a confidence range for our predictions. See how the uncertainty grows as we move away from the interpolation region to the extrapolation areas.
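Since the predictive variance of $s$ only adds the noise variance to that of $f$, a possible sketch (reusing `mean_s`, `std_f`, `X`, `X0train`, `strain` and `sigma_eps` from the previous cells) is:

```python
# Sketch: widen the interval with the observation-noise variance
std_s = np.sqrt(std_f ** 2 + sigma_eps ** 2)
plt.plot(X0train, strain, 'b.', markersize=2)
plt.plot(X.flatten(), mean_s.flatten(), 'g', label='Predictive mean', linewidth=2)
plt.fill_between(X.flatten(), (mean_s - 2 * std_s).flatten(), (mean_s + 2 * std_s).flatten(),
                 alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=2)
plt.xlim(xmin, xmax)
plt.xlabel('$x$', fontsize=14)
plt.ylabel('$s$', fontsize=14)
```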
###Code
# Plot sample functions confidence intervals and sampling points
# Note that you can simply copy and paste most of the code used in the cell above.
# <SOL>
# </SOL>
plt.show()
###Output
_____no_output_____
###Markdown
5.5. Test square error* [1] Test the regularization effect of the Bayesian prior. To do so, compute and plot the sum of square errors of both the ML and Bayesian estimates as a function of the polynomial degree.
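One way to fill the loop in the next cell is sketched here; it assumes the test targets are stored in a vector `stest` (adapt the name if your notebook uses a different one):

```python
# Sketch for the body of the loop (assumption: test targets are in stest)
Ztr_g, Zte_g = Ztrain[:, :g + 1], Ztest[:, :g + 1]
w_ML_g, _, _, _ = np.linalg.lstsq(Ztr_g, strain, rcond=-1)
mean_g, _, _ = posterior_stats(Ztr_g, strain, sigma_eps, sigma_p)
SSE_ML.append(float(np.sum((stest - Zte_g @ w_ML_g) ** 2)))
SSE_Bayes.append(float(np.sum((stest - Zte_g @ mean_g) ** 2)))
```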
###Code
SSE_ML = []
SSE_Bayes = []
# Compute the SSE for the ML and the bayes estimates
for g in range(g_max + 1):
# <SOL>
# </SOL>
plt.figure()
plt.semilogy(range(g_max + 1), SSE_ML, label='ML')
plt.semilogy(range(g_max + 1), SSE_Bayes, 'g.', label='Bayes')
plt.xlabel('g')
plt.ylabel('Sum of square errors')
plt.xlim(0, g_max)
plt.ylim(min(min(SSE_Bayes), min(SSE_ML)),10000)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
5.6. [Optional] Model assessmentIn order to verify the performance of the resulting model, compute the posterior mean and variance of each of the test outputs from the posterior over ${\bf w}$. I.e, compute ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ for each test sample ${\bf x}^\ast$ contained in each row of `Xtest`. Store the predictive mean and variance of all test samples in two column vectors called `m_s` and `v_s`, respectively.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
Compute now the mean square error (MSE) and the negative log-predictive density (NLPD) with the following code:
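As a reference, with the predictive means `m_s` and variances `v_s` from the previous step, and assuming the test targets are in `stest`, the two metrics could be computed as:

```python
# Sketch (assumption: test targets are in stest)
err = stest.flatten() - m_s.flatten()
MSE = np.mean(err ** 2)
NLPD = np.mean(0.5 * np.log(2 * np.pi * v_s.flatten()) + err ** 2 / (2 * v_s.flatten()))
```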
###Code
# <SOL>
# </SOL>
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
###Output
_____no_output_____
###Markdown
* [2] Normalize the data so all training sample components have zero mean and unit standard deviation. Store the normalized training and test samples in 2D numpy arrays `Xtrain` and `Xtest`, respectively.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
4.2. Polynomial ML regression with a single variableIn this first part, we will work with the first component of the input only. * [1] Take the first column of `Xtrain` and `Xtest` into arrays `X0train` and `X0test`, respectively.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [2] Visualize, in a single scatter plot, the target variable (on the vertical axis) versus the input variable, using the training data.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [3] Since the data have been taken from a real scenario, we do not have any *true* mathematical model of the process that generated the data. Thus, we will explore different models, trying to select the one that best fits the training data. Assume a polynomial model given by $$ {\bf z} = T({\bf x}) = (1, x_0, x_0^2, \ldots, x_0^{g})^\top. $$ Compute matrices `Ztrain` and `Ztest` that result from applying the polynomial transformation to the inputs in `X0train` and `X0test` for a model with degree `g_max = 50`. The `np.vander()` method may be useful for this. Note that, although `X0train` and `X0test` were already normalized, you will need to re-normalize the transformed variables. Note, also, that the first component of the transformed variables, which must be equal to 1, should not be normalized. To simplify the job, the code below defines a normalizer class that normalizes all components except for the first one.
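With the `Normalizer` class defined in the cell below, one possible sketch of this transformation is:

```python
# Sketch: polynomial transformation followed by normalization (Normalizer is defined below)
g_max = 50
Ztrain = np.vander(X0train.flatten(), g_max + 1, increasing=True)
Ztest = np.vander(X0test.flatten(), g_max + 1, increasing=True)
nm = Normalizer()
Ztrain = nm.fit_transform(Ztrain)   # fit the normalization on the training data only
Ztest = nm.transform(Ztest)         # reuse the training mean and std for the test data
```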
###Code
# The following normalizer will be helpful: it normalizes all components of the
# input matrix, except for the first one (the "all-ones" column), which
# should not be normalized
class Normalizer():
"""
A data normalizer. Usage:
nm = Normalizer()
Z = nm.fit_transform(X) # to estimate the normalization mean and variance and normalize
# all columns of X except the first one
Z2 = nm.transform(X) # to normalize X without recomputing mean and variance parameters
"""
def fit_transform(self, Z):
self.mean_z = np.mean(Z, axis=0)
self.mean_z[0] = 0
self.std_z = np.std(Z, axis=0)
self.std_z[0] = 1
Zout = (Z - self.mean_z) / self.std_z
# sc = StandardScaler()
# Ztrain = sc.fit_transform(Ztrain)
return Zout
def transform(self, Z):
return (Z - self.mean_z) / self.std_z
# Ztest = sc.transform(Ztest)
# Set the maximum degree of the polynomial model
g_max = 50
# Compute polynomial transformation for train and test data
# <SOL>
# </SOL>
# Normalize training and test data
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [4] Fit a polynomial model with degree $g$ for $g$ ranging from 0 to `g_max`. Store the weights of all models in a list of weight vectors, named `models`, such that `models[g]` returns the parameters estimated for the polynomial model with degree $g$. We will use these models in the following sections.
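A minimal sketch of this loop could be:

```python
# Sketch: one ML weight vector per polynomial degree
models = []
for g in range(g_max + 1):
    w_g, _, _, _ = np.linalg.lstsq(Ztrain[:, :g + 1], strain, rcond=-1)
    models.append(w_g)
```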
###Code
# IMPORTANT NOTE: Use np.linalg.lstsq() with option rcond=-1 for better precision.
# HINT: Take into account that the data matrix required to fit a polynomial model
# with degree g consists of the first g+1 columns of Ztrain.
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [5] Plot the polynomial models with degrees 1, 3 and `g_max`, superimposed over a scatter plot of the training data.
###Code
# Create a grid of samples along the x-axis.
n_points = 10000
xmin = min(X0train)
xmax = max(X0train)
X = np.linspace(xmin, xmax, n_points)
# Apply the polynomial transformation to the inputs with degree g_max.
# <SOL>
# </SOL>
# Plot training points
plt.plot(X0train, strain, 'b.', markersize=4);
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
plt.xlim(xmin, xmax)
plt.ylim(30, 65)
# Plot the regression function for the required degrees
# <SOL>
# </SOL>
plt.show()
###Output
_____no_output_____
###Markdown
* [6] Taking `sigma_eps = 1`, show, in the same plot: - The log-likelihood function corresponding to each model, as a function of $g$, computed over the training set. - The log-likelihood function corresponding to each model, as a function of $g$, computed over the test set.
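A possible helper for the Gaussian log-likelihood, and its use to fill the two lists, is sketched below (the name `stest` for the test targets is an assumption):

```python
# Sketch: Gaussian log-likelihood of a fitted polynomial model
def LL(s, Z, w, sigma_eps):
    K = len(s)
    sse = float(np.sum((s - Z @ w) ** 2))
    return -K / 2 * np.log(2 * np.pi * sigma_eps ** 2) - sse / (2 * sigma_eps ** 2)

sigma_eps = 1
LLtrain = [LL(strain, Ztrain[:, :g + 1], models[g], sigma_eps) for g in range(g_max + 1)]
LLtest = [LL(stest, Ztest[:, :g + 1], models[g], sigma_eps) for g in range(g_max + 1)]
```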
###Code
LLtrain = []
LLtest = []
sigma_eps = 1
# Fill LLtrain and LLtest with the log-likelihood values for all values of
# g ranging from 0 to g_max (included).
# <SOL>
# </SOL>
plt.figure()
plt.plot(range(g_max + 1), LLtrain, label='Training')
plt.plot(range(g_max + 1), LLtest, label='Test')
plt.xlabel('g')
plt.ylabel('Log-likelihood')
plt.xlim(0, g_max)
plt.ylim(-5e4,100)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
* [7] You may have seen that the likelihood function over the training data grows with the degree of the polynomial. However, large values of $g$ produce strong overfitting. For this reason, $g$ cannot be selected with the same data used to fit the model. Parameters of this kind, like $g$, are usually called *hyperparameters* and need to be selected by cross-validation. Select the optimal value of $g$ by 10-fold cross-validation. To do so, the cross-validation utilities provided by sklearn will simplify this task.
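Inside the k-fold loop of the next cell, the two `<FILL IN>` lines could be completed as in this sketch (it reuses the `LL` helper from the previous exercise and the variables defined in that cell):

```python
# Sketch for the two <FILL IN> lines of the cross-validation loop
w_MLk, _, _, _ = np.linalg.lstsq(Z_tr, s_tr, rcond=-1)
LLg += LL(s_val, Z_val, w_MLk, sigma_eps)

# and, once LLmean is filled, the optimum can be read off the averaged curve
g_opt = int(np.argmax(LLmean))
LLmax = LLmean[g_opt]
```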
###Code
from sklearn.model_selection import KFold
# Select the number of splits
n_sp = 10
# Create a cross-validator object
kf = KFold(n_splits=n_sp)
# Split data from Ztrain
kf.get_n_splits(Ztrain)
LLmean = []
for g in range(g_max + 1):
# Compute the cross-validation Likelihood
LLg = 0
for tr_index, val_index in kf.split(Ztrain):
# Take the data matrices for the current split
Z_tr, Z_val = Ztrain[tr_index, 0:g+1], Ztrain[val_index, 0:g+1]
s_tr, s_val = strain[tr_index], strain[val_index]
# Train with the current training splits.
# w_MLk, _, _, _ = np.linalg.lstsq(<FILL IN>)
# Compute the validation likelihood for this split
# LLg += LL(<FILL IN>)
LLmean.append(LLg / n_sp)
# Take the optimal value of g and its corresponding likelihood
# g_opt = <FILL IN>
# LLmax = <FILL IN>
print("The optimal degree is: {}".format(g_opt))
print("The maximum cross-validation likehood is {}".format(LLmax))
plt.figure()
plt.plot(range(g_max + 1), LLmean, label='Training')
plt.plot([g_opt], [LLmax], 'g.', markersize = 20)
plt.xlabel('g')
plt.ylabel('Log-likelihood')
plt.xlim(0, g_max)
plt.ylim(-1e3, LLmax + 100)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
* [8] You may have observed the overfitting effect for large values of $g$. The best degree of the polynomial may depend on the size of the training set. Take a smaller dataset by running, after the code in section 4.2[1]: + `X0train = Xtrain[0:55, [0]]` + `X0test = Xtest[0:100, [0]]` Then, re-run the whole code after that. What is the optimal value of $g$ in that case?
###Code
# You do not need to code here. Just copy the value of g_opt obtained after re-running the code
# g_opt_new = <FILL IN>
print("The optimal value of g for the 100-sample training set is {}".format(g_opt_new))
###Output
_____no_output_____
###Markdown
* [9] [OPTIONAL] Note that the model coefficients do not depend on $\sigma_\epsilon^2$. Therefore, we do not need to care about its value for polynomial ML regression. However, the log-likelihood function does depend on $\sigma_\epsilon^2$, so we can actually estimate its value from the data. By simple differentiation, it is not difficult to see that the optimal ML estimate of $\sigma_\epsilon$ is $$ \widehat{\sigma}_\epsilon = \sqrt{\frac{1}{K} \|{\bf s}-{\bf Z}{\bf w}\|^2} $$ Plot the log-likelihood function corresponding to the polynomial model with degree 3 for different values of $\sigma_\epsilon^2$, for the training set, and verify that the value computed with the above formula is actually optimal.
###Code
# Explore the values of sigma logarithmically spaced according to the following array
sigma_eps = np.logspace(-0.1, 5, num=50)
g = 3
K = len(strain)
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
* [10] [OPTIONAL] For the selected model: - Plot the regression function over the scatter plot of the data. - Compute the log-likelihood and the SSE over the test set.
###Code
# Note that you can easily adapt your code in 4.2[5]
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
5. Bayesian regression. The stock dataset.In this section we will keep using the first component of the data from the stock dataset, assuming the same kind of polynomial model. We will explore the potential advantages of using a Bayesian model. To do so, we will assume that the a priori distribution of ${\bf w}$ is$${\bf w} \sim {\cal N}({\bf 0}, \sigma_p^2~{\bf I})$$ 5.1. Hyperparameter selectionSince the values $\sigma_p$ and $\sigma_\varepsilon$ are no longer known, a first rough estimation is needed (we will soon see how to estimate these values in a principled way).To this end, we will adjust them using the ML solution to the regression problem with g=10: - $\sigma_p^2$ will be taken as the average of the square values of ${\hat {\bf w}}_{ML}$ - $\sigma_\varepsilon^2$ will be taken as two times the average of the square of the residuals when using ${\hat {\bf w}}_{ML}$
###Code
# Degree for bayesian regression
gb = 10
# w_LS, residuals, rank, s = <FILL IN>
# sigma_p = <FILL IN>
# sigma_eps = <FILL IN>
print(sigma_p)
print(sigma_eps)
###Output
_____no_output_____
###Markdown
5.2. Posterior pdf of the weight vectorIn this section we will visualize the prior and the posterior distribution functions. First, we will restore the dataset at the beginning of this notebook: * [1] Define a function `posterior_stats(Z, s, sigma_eps, sigma_p)` that computes the parameters of the posterior coefficient distribution given the dataset in matrix `Z` and vector `s`, for given values of the hyperparameters.This function should return the posterior mean, the covariance matrix and the precision matrix (the inverse of the covariance matrix). Test the function on the given dataset, for $g=3$.
###Code
# <SOL>
# </SOL>
mean_w, Cov_w, iCov_w = posterior_stats(Ztrain[:, :gb+1], strain, sigma_eps, sigma_p)
print('mean_w = {0}'.format(mean_w))
# print('Cov_w = {0}'.format(Cov_w))
# print('iCov_w = {0}'.format(iCov_w))
###Output
_____no_output_____
###Markdown
* [2] Define a function `gauss_pdf(w, mean_w, iCov_w)` that computes the Gaussian pdf with mean `mean_w` and precision matrix `iCov_w`. Use this function to compute and compare the posterior pdf values at the ML estimate and at the MSE estimate, given the dataset.
###Code
# <SOL>
# </SOL>
print('p(w_ML | s) = {0}'.format(gauss_pdf(w_ML, mean_w, iCov_w)))
print('p(w_MSE | s) = {0}'.format(gauss_pdf(mean_w, mean_w, iCov_w)))
###Output
_____no_output_____
###Markdown
* [3] [OPTIONAL] Define a function `log_gauss_pdf(w, mean_w, iCov_w)` that computes the log of the Gaussian pdf with mean `mean_w` and precision matrix `iCov_w`. Use this function to compute and compare the log of the posterior pdf at the ML estimate and at the MSE estimate, given the dataset.
###Code
# <SOL>
# </SOL>
print('log(p(w_ML | s)) = {0}'.format(log_gauss_pdf(w_ML, mean_w, iCov_w)))
print('log(p(w_MSE | s)) = {0}'.format(log_gauss_pdf(mean_w, mean_w, iCov_w)))
###Output
_____no_output_____
###Markdown
5.3 Sampling regression curves from the posteriorIn this section we will plot the functions corresponding to different samples drawn from the posterior distribution of the weight vector. To this end, we will first generate an input dataset of equally spaced samples. We will compute the functions at these points
###Code
# Definition of the interval for representation purposes
xmin = min(X0train)
xmax = max(X0train)
n_points = 100 # number of grid points used to represent the regression curves
# Build the input data matrix:
# Input values for representation of the regression curves
X = np.linspace(xmin, xmax, n_points)
Z = np.vander(X.flatten(), g_max+1, increasing=True)
Z = nm.transform(Z)[:, :gb+1]
###Output
_____no_output_____
###Markdown
Generate random vectors ${\bf w}_l$ with $l = 1,\dots, 50$, from the posterior density of the weights, $p({\bf w}\mid{\bf s})$, and use them to generate 50 polynomial regression functions, $f({\bf x}^\ast) = {{\bf z}^\ast}^\top {\bf w}_l$, evaluated over the grid of input values built in the previous cell.Plot the curve corresponding to the model with the posterior mean parameters, along with the $50$ generated curves and the original samples, all in the same plot. As you can check, the Bayesian model is not providing a single answer, but instead a density over models, from which we have extracted 50 options.
###Code
# Drawing weights from the posterior
for l in range(50):
# Generate a random sample from the posterior distribution (you can use np.random.multivariate_normal())
# w_l = <FILL IN>
# Compute predictions for the inputs in the data matrix
# p_l = <FILL IN>
# Plot prediction function
# plt.plot(<FILL IN>, 'c:');
# Plot the training points
plt.plot(X0train, strain,'b.',markersize=2);
plt.xlim((xmin, xmax));
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
###Output
_____no_output_____
###Markdown
5.4. Plotting the confidence intervalsOn top of the previous figure (copy here your code from the previous section), plot functions$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$$and$${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}}$$(i.e., the posterior mean of $f({\bf x}^\ast)$, as well as two standard deviations above and below).It is possible to show analytically that this region comprises $95.45\%$ probability of the posterior probability $p(f({\bf x}^\ast)\mid {\bf s})$ at each ${\bf x}^\ast$.
###Code
# Note that you can re-use code from sect. 4.2 to solve this exercise
# Plot the training points
# plt.plot(X, Z.dot(true_w), 'b', label='True model', linewidth=2);
plt.plot(X0train, strain,'b.',markersize=2);
plt.xlim(xmin, xmax);
# </SOL>
# Plot the posterior mean.
# mean_s = <FILL IN>
plt.plot(X, mean_s, 'g', label='Predictive mean', linewidth=2);
# Plot the posterior mean +- two standard deviations
# std_f = <FILL IN>
# Plot the confidence intervals.
# To do so, you can use the fill_between method
plt.fill_between(X.flatten(), (mean_s - 2*std_f).flatten(), (mean_s + 2*std_f).flatten(),
alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF', linewidth=2)
# plt.legend(loc='best')
plt.xlabel('$x$',fontsize=14);
plt.ylabel('$s$',fontsize=14);
plt.show()
###Output
_____no_output_____
###Markdown
Plot now ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\} \pm 2 \sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ (note that the posterior means of $f({\bf x}^\ast)$ and $s({\bf x}^\ast)$ are the same, so there is no need to plot it again). Notice that $95.45\%$ of observed data lie now within the newly designated region. These new limits establish a confidence range for our predictions. See how the uncertainty grows as we move away from the interpolation region to the extrapolation areas.
###Code
# Plot sample functions confidence intervals and sampling points
# Note that you can simply copy and paste most of the code used in the cell above.
# <SOL>
# </SOL>
plt.show()
###Output
_____no_output_____
###Markdown
5.5. Test square error* [1] Test the regularization effect of the Bayesian prior. To do so, compute and plot the sum of square errors of both the ML and Bayesian estimates as a function of the polynomial degree.
###Code
SSE_ML = []
SSE_Bayes = []
# Compute the SSE for the ML and the bayes estimates
for g in range(g_max + 1):
# <SOL>
# </SOL>
plt.figure()
plt.semilogy(range(g_max + 1), SSE_ML, label='ML')
plt.semilogy(range(g_max + 1), SSE_Bayes, 'g.', label='Bayes')
plt.xlabel('g')
plt.ylabel('Sum of square errors')
plt.xlim(0, g_max)
plt.ylim(min(min(SSE_Bayes), min(SSE_ML)),10000)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
5.6. [Optional] Model assessmentIn order to verify the performance of the resulting model, compute the posterior mean and variance of each of the test outputs from the posterior over ${\bf w}$. I.e, compute ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ for each test sample ${\bf x}^\ast$ contained in each row of `Xtest`. Store the predictive mean and variance of all test samples in two column vectors called `m_s` and `v_s`, respectively.
###Code
# <SOL>
# </SOL>
###Output
_____no_output_____
###Markdown
Compute now the mean square error (MSE) and the negative log-predictive density (NLPD) with the following code:
###Code
# <SOL>
# </SOL>
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
###Output
_____no_output_____ |
_notebooks/2022-02-13-r.ipynb | ###Markdown
Data visualization and the dplyr package
###Code
data(iris) # load the iris data
attributes(iris) # check the five column names of the iris data frame
names(iris)
# The five column names can be checked with either names() or attributes().
# The pairs() function takes the numeric columns of a matrix or data frame and compares the variables pairwise in a matrix of scatter plots.
# The chart below compares the four variables for the virginica flowers in this matrix layout.
pairs(iris[iris$Species == 'virginica',1:4])
pairs(iris[iris$Species == 'setosa',1:4])
###Output
_____no_output_____
###Markdown
- Using the three species in the iris Species column (setosa, versicolor and virginica), visualize the data in a 3D scatter plot.
###Code
# load the package
library(scatterplot3d)
# split the data by flower species
iris_setosa = iris[iris$Species == 'setosa',]
iris_versicolor = iris[iris$Species == 'versicolor',]
iris_virginica = iris[iris$Species == 'virginica',]
# Use the scatterplot3d() function to create the 3D frame.
d3 <- scatterplot3d(iris$Petal.Length,
iris$Sepal.Length,
iris$Sepal.Width,
type='n')
# columns for the bottom axis, the right axis and the left axis, respectively
# type='n' => do not draw the default scatter plot
# What we have built so far is only the 3D frame.
# If you split this cell here and run the parts separately, it will not work;
# the code above and below must be run together in the same cell.
# For example, plot() and lines(), which we learned earlier, must be used in the same cell,
# and the same logic applies here: lines() or abline() cannot be used on their own.
# Now draw the 3D scatter plot.
d3$points3d(iris_setosa$Petal.Length,
iris_setosa$Sepal.Length,
iris_setosa$Sepal.Width,
bg='orange',pch=21)
d3$points3d(iris_versicolor$Petal.Length,
iris_versicolor$Sepal.Length,
iris_versicolor$Sepal.Width,
bg='blue',pch=23)
d3$points3d(iris_virginica$Petal.Length,
iris_virginica$Sepal.Length,
iris_virginica$Sepal.Width,
bg='green',pch=25)
###Output
_____no_output_____
###Markdown
--- - The dplyr package is well suited to handling structured data stored in data frames. - Applying functions with the pipe operator %>% - The pipe operator lets you apply the functions needed to manipulate a data frame one after another.
###Code
library(dplyr)
iris %>% head() %>% subset(Sepal.Length>=5)
###Output
_____no_output_____
###Markdown
- For data sets collected from large relational databases or data frames, it is much more effective to extract only as much data as fits in the console window and to show the rest in an abbreviated form.
###Code
library(hflights)
# view the structure of the data set
str(hflights)
###Output
'data.frame': 227496 obs. of 21 variables:
$ Year : int 2011 2011 2011 2011 2011 2011 2011 2011 2011 2011 ...
$ Month : int 1 1 1 1 1 1 1 1 1 1 ...
$ DayofMonth : int 1 2 3 4 5 6 7 8 9 10 ...
$ DayOfWeek : int 6 7 1 2 3 4 5 6 7 1 ...
$ DepTime : int 1400 1401 1352 1403 1405 1359 1359 1355 1443 1443 ...
$ ArrTime : int 1500 1501 1502 1513 1507 1503 1509 1454 1554 1553 ...
$ UniqueCarrier : chr "AA" "AA" "AA" "AA" ...
$ FlightNum : int 428 428 428 428 428 428 428 428 428 428 ...
$ TailNum : chr "N576AA" "N557AA" "N541AA" "N403AA" ...
$ ActualElapsedTime: int 60 60 70 70 62 64 70 59 71 70 ...
$ AirTime : int 40 45 48 39 44 45 43 40 41 45 ...
$ ArrDelay : int -10 -9 -8 3 -3 -7 -1 -16 44 43 ...
$ DepDelay : int 0 1 -8 3 5 -1 -1 -5 43 43 ...
$ Origin : chr "IAH" "IAH" "IAH" "IAH" ...
$ Dest : chr "DFW" "DFW" "DFW" "DFW" ...
$ Distance : int 224 224 224 224 224 224 224 224 224 224 ...
$ TaxiIn : int 7 6 5 9 9 6 12 7 8 6 ...
$ TaxiOut : int 13 9 17 22 9 13 15 12 22 19 ...
$ Cancelled : int 0 0 0 0 0 0 0 0 0 0 ...
$ CancellationCode : chr "" "" "" "" ...
$ Diverted : int 0 0 0 0 0 0 0 0 0 0 ...
###Markdown
- The data set is a data.frame with 227,496 observations in total and 21 variables.
###Code
tbl_df(hflights)
###Output
_____no_output_____
###Markdown
- In the R console this would normally be displayed as 10 rows and 8 columns, with the number of omitted rows and the remaining column names listed below; it seems to be rendered this way here because we are in a Jupyter notebook. --- - Filtering data that matches a condition - Let's look at the filtering functions that extract only the required data from a large data set. - Is it similar to subset()?
###Code
# extract the data for January 2
filter(hflights,Month == 1 & DayofMonth==2)
# or, you can also do it like this
hflights %>% filter(Month==1 & DayofMonth==1)
###Output
_____no_output_____ |
notebooks/Explore.ipynb | ###Markdown
RepresentationSpace - Discovering Interpretable GAN Controls for Architectural Image SynthesisUsing [Ganspace]( https://github.com/armaank/archlectures/ganspace) to find latent directions in a StyleGAN2 model trained on the [ArchML dataset](http://165.227.182.79/) Instructions and Setup1) Click the play button of the blocks titled "Initialization" and wait for it to finish the initialization.2) Click the play button on the block titled "Load Model". This block will take a little bit (~1-2 minutes) to run. 3) In the section named "Explore RepresentationSpace", generate samples, and play with the sliders. In the next block generate videos.
###Code
%%capture
#@title Initialization - Setup
# Clone git
%reset -f c
%tensorflow_version 1.x
%rm -rf archlectures
!git clone https://github.com/armaank/archlectures
%cd archlectures/generative/
%ls
#@title Initialization - Download Models
%%capture
%%sh
chmod 755 get_directions.sh
./get_directions.sh
chmod 755 get_models.sh
./get_models.sh
ls
#@title Initilization - Install Requirements
%%capture
from IPython.display import Javascript
display(Javascript('''google.colab.output.setIframeHeight(0, true, {maxHeight: 200})'''))
!pip install fbpca boto3
!git submodule update --init --recursive
!python -c "import nltk; nltk.download('wordnet')"
%cd ./ganspace/
from IPython.utils import io
import torch
import PIL
import numpy as np
import ipywidgets as widgets
from PIL import Image
import imageio
from models import get_instrumented_model
from decomposition import get_or_compute
from config import Config
from skimage import img_as_ubyte
# Speed up computation
torch.autograd.set_grad_enabled(False)
torch.backends.cudnn.benchmark = True
# Custom OPs no longer required
#!pip install Ninja
#%cd models/stylegan2/stylegan2-pytorch/op
#!python setup.py install
#!python -c "import torch; import upfirdn2d_op; import fused; print('OK')"
#%cd "/content/ganspace"
#@title Load Model
# model = "Adaily_B" #@param ["Adaily_A", "Adaily_B"]
# num_components = 80#@param {type:"number"}
# layer = 'style'#@param ["style","input","convs","upsamples","noises"]
model = 'Adaily_B'
num_components = 80
layer = 'style'
model_class = model # this is the name of model
model_name = 'StyleGAN2'
# !python visualize.py --model $model_name --class $model_class --use_w --layer=style -c $num_components
from IPython.display import display, clear_output
from ipywidgets import fixed
#@title Load Model and Component
config = Config(
model='StyleGAN2',
layer=layer,
output_class=model_class,
components=num_components,
use_w=True,
batch_size=5_000, # style layer quite small
)
inst = get_instrumented_model(config.model, config.output_class,
config.layer, torch.device('cuda'), use_w=config.use_w)
path_to_components = get_or_compute(config, inst)
model = inst.model
# named_directions = {} #init named_directions dict to save directions
named_directions = {'Site - Drawing': [0, 0, 3], 'Image - Drawing': [0, 0, 18], 'Shaded - Hatched': [0, 6, 10], 'Light - Dark': [0, 14, 18], 'Outline - Poche': [0, 7, 14], 'Subdivided - Open': [1, 8, 10], 'Interior Color': [2, 11, 18], 'Small - Large': [2, 4, 8], 'Elevation - Plan': [3, 0, 18], 'Paper Color': [4, 12, 18], 'Shadows': [4, 12, 14], 'Tall - Long': [4, 0, 18], 'Section - Plan': [5, 0, 18], 'Shaded - Outline': [6, 10, 18], 'Closed - Open': [7, 6, 7], 'Multiple - Single': [7, 0, 4], 'Detail': [13, 6, 9]}
comps = np.load(path_to_components)
lst = comps.files
latent_dirs = []
latent_stdevs = []
load_activations = False
for item in lst:
if load_activations:
if item == 'act_comp':
for i in range(comps[item].shape[0]):
latent_dirs.append(comps[item][i])
if item == 'act_stdev':
for i in range(comps[item].shape[0]):
latent_stdevs.append(comps[item][i])
else:
if item == 'lat_comp':
for i in range(comps[item].shape[0]):
latent_dirs.append(comps[item][i])
if item == 'lat_stdev':
for i in range(comps[item].shape[0]):
latent_stdevs.append(comps[item][i])
#load one at random
num = np.random.randint(20)
if num in named_directions.values():
print(f'Direction already named: {list(named_directions.keys())[list(named_directions.values()).index(num)]}')
random_dir = latent_dirs[num]
random_dir_stdev = latent_stdevs[num]
# print(f'Loaded Component No. {num}')
print(f'Model Loaded')
###Output
../models/Adaily_B/torch_official/stylegan2_Adaily_1024.pt
Not cached
[12.05 00:48] Computing stylegan2-Adaily_B_style_ipca_c80_n300000_w.npz
Reusing InstrumentedModel instance
Using W latent space
Feature shape: torch.Size([1, 512])
B=5000, N=300000, dims=512, N/dims=585.9
###Markdown
Explore RepresentationSpaceUsing the UI, you can explore the latent directions by selecting their name.The variable `Seed` controls the starting image.The `Truncation` slider controls the quality of the image sample; 0.7 is a good starting point.`Distance` is the main slider; it controls the strength/emphasis of the component.
###Code
#@title Visualize Named Directions
# Taken from https://github.com/alexanderkuk/log-progress
def log_progress(sequence, every=1, size=None, name='Items'):
from ipywidgets import IntProgress, HTML, VBox
from IPython.display import display
is_iterator = False
if size is None:
try:
size = len(sequence)
except TypeError:
is_iterator = True
if size is not None:
if every is None:
if size <= 200:
every = 1
else:
every = int(size / 200) # every 0.5%
else:
assert every is not None, 'sequence is iterator, set every'
if is_iterator:
progress = IntProgress(min=0, max=1, value=1)
progress.bar_style = 'info'
else:
progress = IntProgress(min=0, max=size, value=0)
label = HTML()
box = VBox(children=[label, progress])
display(box)
index = 0
try:
for index, record in enumerate(sequence, 1):
if index == 1 or index % every == 0:
if is_iterator:
label.value = '{name}: {index} / ?'.format(
name=name,
index=index
)
else:
progress.value = index
label.value = u'{name}: {index} / {size}'.format(
name=name,
index=index,
size=size
)
yield record
except:
progress.bar_style = 'danger'
raise
else:
progress.bar_style = 'success'
progress.value = index
label.value = "{name}: {index}".format(
name=name,
index=str(index or '?')
)
def name_direction(sender):
if not text.value:
print('Please name the direction before saving')
return
if num in named_directions.values():
target_key = list(named_directions.keys())[list(named_directions.values()).index(num)]
print(f'Direction already named: {target_key}')
print(f'Overwriting... ')
del(named_directions[target_key])
named_directions[text.value] = [num, start_layer.value, end_layer.value]
save_direction(random_dir, text.value)
for item in named_directions:
print(item, named_directions[item])
def save_direction(direction, filename):
filename += ".npy"
np.save(filename, direction, allow_pickle=True, fix_imports=True)
print(f'Latent direction saved as {filename}')
def display_sample_pytorch(seed, truncation, direction, distance, start, end, disp=True, save=None, noise_spec=None, scale=2,):
# blockPrint()
with io.capture_output() as captured:
w = model.sample_latent(1, seed=seed).cpu().numpy()
model.truncation = truncation
w = [w]*model.get_max_latents() # one per layer
for l in range(start, end):
w[l] = w[l] + direction * distance * scale
#save image and display
out = model.sample_np(w)
final_im = Image.fromarray((out * 255).astype(np.uint8)).resize((500,500),Image.LANCZOS)
if disp:
display(final_im)
if save is not None:
if disp == False:
print(save)
final_im.save(f'out/{seed}_{save:05}.png')
def generate_mov(seed, truncation, direction_vec, layers, n_frames, out_name = 'out', scale = 2, noise_spec = None, loop=True):
"""Generates a mov moving back and forth along the chosen direction vector"""
# Example of reading a generated set of images, and storing as MP4.
%mkdir out
movieName = f'out/{out_name}.mp4'
offset = -10
step = 20 / n_frames
imgs = []
for i in log_progress(range(n_frames), name = "Generating frames"):
print(f'\r{i} / {n_frames}', end='')
w = model.sample_latent(1, seed=seed).cpu().numpy()
model.truncation = truncation
w = [w]*model.get_max_latents() # one per layer
for l in layers:
if l <= model.get_max_latents():
w[l] = w[l] + direction_vec * offset * scale
#save image and display
out = model.sample_np(w)
final_im = Image.fromarray((out * 255).astype(np.uint8))
imgs.append(out)
#increase offset
offset += step
if loop:
imgs += imgs[::-1]
with imageio.get_writer(movieName, mode='I') as writer:
for image in log_progress(list(imgs), name = "Creating animation"):
writer.append_data(img_as_ubyte(image))
vardict = list(named_directions.keys())
select_variable = widgets.Dropdown(
options=vardict,
value=vardict[0],
description='Select variable:',
disabled=False,
button_style=''
)
def set_direction(b):
clear_output()
random_dir = latent_dirs[named_directions[select_variable.value][0]]
start_layer = named_directions[select_variable.value][1]
end_layer = named_directions[select_variable.value][2]
print(start_layer, end_layer)
out = widgets.interactive_output(display_sample_pytorch, {'seed': seed, 'truncation': truncation, 'direction': fixed(random_dir), 'distance': distance, 'scale': scale, 'start': fixed(start_layer), 'end': fixed(end_layer)})
display(select_variable)
display(ui, out)
random_dir = latent_dirs[named_directions[select_variable.value][0]]
start_layer = named_directions[select_variable.value][1]
end_layer = named_directions[select_variable.value][2]
seed = np.random.randint(0,100000)
style = {'description_width': 'initial'}
seed = widgets.IntSlider(min=0, max=100000, step=1, value=seed, description='Seed: ', continuous_update=False)
truncation = widgets.FloatSlider(min=0, max=2, step=0.1, value=0.7, description='Truncation: ', continuous_update=False)
distance = widgets.FloatSlider(min=-10, max=10, step=0.1, value=0, description='Distance: ', continuous_update=False, style=style)
scale = widgets.FloatSlider(min=0, max=10, step=0.05, value=1, description='Scale: ', continuous_update=False)
bot_box = widgets.HBox([seed, truncation, distance])
ui = widgets.VBox([bot_box])
out = widgets.interactive_output(display_sample_pytorch, {'seed': seed, 'truncation': truncation, 'direction': fixed(random_dir), 'distance': distance, 'scale': scale, 'start': fixed(start_layer), 'end': fixed(end_layer)})
display(select_variable)
display(ui, out)
select_variable.observe(set_direction, names='value')
#@title Generate Video from Representation (Optional)
direction_name = "a" #@param {type:"string"}
num_frames = 5 #@param {type:"number"}
truncation = 0.8 #@param {type:"number"}
num_samples = num_frames
assert direction_name in named_directions, \
f'"{direction_name}" not found, please save it first using the cell above.'
loc = named_directions[direction_name][0]
for i in range(num_samples):
s = np.random.randint(0, 10000)
generate_mov(seed = s, truncation = 0.8, direction_vec = latent_dirs[loc], scale = 2, layers=range(named_directions[direction_name][1], named_directions[direction_name][2]), n_frames = 20, out_name = f'{model_class}_{direction_name}_{i}', loop=True)
print('Video saved to ./ganspace/out/')
###Output
_____no_output_____
###Markdown
https://nationalregisterofhistoricplaces.com/oh/adams/state.html
###Code
datadir = '/Users/klarnemann/Documents/Insight/Project/data'
landmark_df = pd.read_excel('%s/federal_historic_places.xlsx' % (datadir))
landmark_df.shape
print(landmark_df.columns)
landmark_df.head(5)
plt.figure(figsize=(8,6))
plt.bar(np.arange(len(agencies)), landmark_df[agencies].sum())
plt.xlim(-1, len(agencies)+0.1)
plt.xticks(np.arange(len(agencies)), agencies, rotation=90);
plt.ylabel('# Landmarks')
plt.tight_layout()
#plt.savefig('/Users/klarnemann/Documents/Insight/Project/docs/figures/landmark_agencies.png', dpi=150)
###Output
_____no_output_____
###Markdown
Clean
###Code
dirty_landmark_df = pd.read_excel('%s/historic_places_federal_listed_20190404.xlsx' % (datadir))
agencies = set()
for item in dirty_landmark_df['Federal Agencies'].unique():
split_agencies = re.split('; | , | & | U.S.', item)
if len(split_agencies) > 1:
for sub_item in split_agencies:
agencies = set.union(agencies, set([sub_item.lstrip().title()]))
else:
agencies = set.union(agencies, set([item.lstrip().title()]))
agencies = set.difference(agencies, set(['']))
agencies = list(agencies)
agencies.sort()
columns = list(dirty_landmark_df.columns) + agencies
dirty_landmark_df = dirty_landmark_df.reindex(columns=columns)
n_rows, n_cols = dirty_landmark_df.shape
for row in np.arange(n_rows):
for agency in agencies:
if agency.lower() in dirty_landmark_df.loc[row, 'Federal Agencies'].lower():
dirty_landmark_df.loc[row, agency] = 1
agencies[3].lower() in dirty_landmark_df.loc[row, 'Federal Agencies'].lower()
agencies = [x for _, x in sorted(zip(dirty_landmark_df[agencies].sum(),agencies), reverse=True)]
###Output
_____no_output_____
###Markdown
Explore Import necessary libraries
###Code
from utils import pickle_to, pickle_from, ignore_warnings
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
from collections import Counter
from collections import defaultdict
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
import re
# def identity(text):
# return text
# Loading scrubed data
interim_token = pickle_from('../data/interim/interim_token.pkl')
interim_data = pickle_from('../data/interim/interim_data.pkl')
interim_token.head()
interim_data.head()
###Output
_____no_output_____
###Markdown
Distribution of fake and real news
###Code
def create_distribution(data):
return sns.countplot(x='numeric_label', data=data, palette='hls')
create_distribution(interim_data);
###Output
_____no_output_____
###Markdown
The dataset is balanced. Since the aim is to make the model as correct as possible and the classes are balanced, accuracy is chosen as the performance metric. Calculate the average number of tokenized words in real vs fake news
###Code
real = interim_data[interim_data['numeric_label']==0]
fake = interim_data[interim_data['numeric_label']==1]
real.shape, fake.shape
real.head()
# Total number of words in each artcile in real and fake news data
real['total_words'] = [len(x.split()) for x in real['text'].tolist()]
fake['total_words'] = [len(x.split()) for x in fake['text'].tolist()]
real.head()
# Total number of words in real and fake news data
real_word_count = real.total_words.sum()
fake_word_count = fake.total_words.sum()
# Total number of articles in real and fake data
real_article_count = real.text.count()
fake_article_count = fake.text.count()
# Getting average number of words in a sentence in real and fake data
real_avg = real_word_count / real_article_count
fake_avg = fake_word_count / fake_article_count
print('Average number of tokens in real data is : ', real_avg)
print('Average number of tokens in fake data is : ', fake_avg)
###Output
Average number of tokens in real data is : 668.6871597822607
Average number of tokens in fake data is : 671.9144144144144
###Markdown
Though the number of articles in real and fake news is balanced, the average number of tokens per article differs between the two. This can mean that fake articles tend to use more words than real articles. Split train, val and test sets
###Code
X = interim_token['tokenized']
y = np.array(interim_token['numeric_label'])
X_tr_val, X_test, y_tr_val, y_test= train_test_split(X,y,random_state=42,test_size = 0.2)
X_train, X_val, y_train, y_val = train_test_split(X_tr_val,y_tr_val,random_state=50,test_size = 0.25)
print(y_train.shape)
print(y_val.shape)
print(y_test.shape)
print(y.shape)
###Output
(3738,)
(1246,)
(1247,)
(6231,)
###Markdown
Pickling train, val and test sets
###Code
pickle_to(X_train,'../data/processed/X_train.pkl')
pickle_to(X_val,'../data/processed/X_val.pkl')
pickle_to(X_test,'../data/processed/X_test.pkl')
pickle_to(X_tr_val,'../data/processed/X_tr_val.pkl')
pickle_to(y_train,'../data/processed/y_train.pkl')
pickle_to(y_val,'../data/processed/y_val.pkl')
pickle_to(y_test,'../data/processed/y_test.pkl')
pickle_to(y_tr_val,'../data/processed/y_tr_val.pkl')
###Output
Sucessfully saved to ../data/processed/y_train.pkl
Sucessfully saved to ../data/processed/y_val.pkl
Sucessfully saved to ../data/processed/y_test.pkl
Sucessfully saved to ../data/processed/y_tr_val.pkl
###Markdown
SVM prediction
###Code
import collections

a_list = [1, 3, 2, 1, 1, 2]
collections.Counter(a_list)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
import pandas as pd
import numpy as np
data = load_iris()
# bear with me for the next few steps... I'm trying to walk you through
# how my data object landscape looks... i.e. how I get from raw data
# to matrices with the actual data I have, not the iris dataset
# put feature matrix into columnar format in dataframe
df = pd.DataFrame(data = data.data)
# add outcome variable
df['class'] = data.target
X = np.matrix(df.loc[:, [0, 1, 2, 3]])
y = np.array(df['class'])
# finally, split into train-test
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
# I've got my predictions now
y_hats = model.predict(X_test)
X_test
###Output
_____no_output_____ |
testing/jupyter_unit_tests.ipynb | ###Markdown
Unit testing Python code in Jupyter notebooksMost of us agree that we should write unit tests, and many of us actually do. This should be especially true for production code, library code, or if you ascribe to test driven development, during the entire development process.Often Jupyter notebooks running Python are used for data exploration, and so users may not choose (or need) to write unit tests for their notebook code since they typically may be looking at results for each cell as they progress through the notebook, then coming to a conclusion, and moving on. However, in my experience what typically happens with notebooks is soon the code in the notebook moves beyond data exploration and is useful for further work. Or, perhaps the notebook itself produces results that are useful and need to be run on a regular basis. Perhaps the code needs to be maintained and integrated with external data sources. Then it becomes important to ensure that the code in the notebook can be tested and verified. In this case, what are our options for unit testing notebook code? In this article I'll cover several options for unit testing Python code in a Jupyter notebook. Maybe just don't do it?The first option of Jupyter notebook unit testing is to just not do it at all. By this, I don't mean don't unit test your code, but rather *extract* it from the notebook into separate Python modules that you import back into your notebook. That code should be tested the way you usually unit test your code, whether that be with ```unittest```, ```pytest```, ```doctest```, or another unit testing framework. This article won't cover all those frameworks in detail, but a great choice for python developers is to not test inside their Jupyter notebooks, but to use the rich assortment of testing frameworks already available for Python code, and to move code to external modules as soon as possible in the development process. OK, so you can test in a notebookIf you end up deciding you want to leave your code inside a Jupyter notebook, there actually are some unit testing options. Before reviewing a few of them, let's just setup a code example that we might encounter in a Jupyter notebook. Let's say your notebook pulls some data from an API, calculates some results from it, then produces some graphs and other data summaries that it persists elsewhere. Maybe there's a function that produces the proper API URL, and we want to unit test that function. This function has some logic that changes the URL format based on the date for the report. Here's a debugged version.
###Code
import datetime
import dateutil
def make_url(date):
"""Return the url for our API call based on date."""
if isinstance(date, str):
date = dateutil.parser.parse(date).date()
elif not isinstance(date, datetime.date):
raise ValueError("must be a date")
if date >= datetime.date(2020, 1, 1):
return f"https://api.example.com/v2/{date.year}/{date.month}/{date.day}"
else:
return f"https://api.example.com/v1/{date:%Y-%m-%d}"
###Output
_____no_output_____
###Markdown
Unit testing with unittestNormally, when we test with [```unittest```](https://docs.python.org/3/library/unittest.html) we would either put our test methods in a separate test module, or possibly we'd mix those methods inside the main module. Then we'd need to execute the ```unittest.main``` method, possibly as the default ```__main__``` method. We can basically do the same thing in our Jupyter notebook. We can make a ```unitest.TestCase``` class, perform the tests we want, and then just execute the unit tests in any cell. The results of the tests can even be inspected or asserted to include no failures if you want the notebook execution to fail on errors. You just need to save the output of the ```unittest.main``` method and inspect it for errors.
###Code
import unittest
class TestUrl(unittest.TestCase):
def test_make_url_v2(self):
date = datetime.date(2020, 1, 1)
self.assertEqual(make_url(date), "https://api.example.com/v2/2020/1/1")
def test_make_url_v1(self):
date = datetime.date(2019, 12, 31)
self.assertEqual(make_url(date), "https://api.example.com/v1/2019-12-31")
res = unittest.main(argv=[''], verbosity=3, exit=False)
# if we want our notebook to stop processing due to failures, we need a cell itself to fail
assert len(res.result.failures) == 0
###Output
test_make_url_v1 (__main__.TestUrl) ... ok
test_make_url_v2 (__main__.TestUrl) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
###Markdown
This turns out to be fairly straightforward, and if you don't mind commingling code and tests in your notebook, it works fine. Unit testing with doctestAnother way to include tests in your code is to use [doctest](https://docs.python.org/3/library/doctest.html#module-doctest). Doctest uses specially formatted code documentation that includes our tests and the expected results. Below is an updated method with this special code documentation included, both for positive and negative test cases. This is a simple way to test and document code in one place, and often will be used in python modules where the main guard will just run the doctest, like this:```if __name__ == "__main__": doctest.testmod()```Since we're in a notebook, we will just add this to a cell below where our code is defined, and it will also work. First, here's our updated ```make_url``` method with the doctest comments.
###Code
def make_url(date):
"""Return the url for our API call based on date.
>>> make_url("1/1/2020")
'https://api.example.com/v2/2020/1/1'
>>> make_url("1-1-x1")
Traceback (most recent call last):
...
dateutil.parser._parser.ParserError: Unknown string format: 1-1-x1
>>> make_url("1/1/20001")
Traceback (most recent call last):
...
dateutil.parser._parser.ParserError: year 20001 is out of range: 1/1/20001
>>> make_url(datetime.date(2020,1,1))
'https://api.example.com/v2/2020/1/1'
>>> make_url(datetime.date(2019,12,31))
'https://api.example.com/v1/2019-12-31'
"""
if isinstance(date, str):
date = dateutil.parser.parse(date).date()
elif not isinstance(date, datetime.date):
raise ValueError("must be a date")
if date >= datetime.date(2020, 1, 1):
return f"https://api.example.com/v2/{date.year}/{date.month}/{date.day}"
else:
return f"https://api.example.com/v1/{date:%Y-%m-%d}"
import doctest
doctest.testmod()
###Output
_____no_output_____
###Markdown
Unit testing with testbookThe [testbook](https://github.com/nteract/testbook) project is a different take on notebook unit testing. It allows you to refer to your notebooks in pure Python code from outside a notebook. This allows you to use any testing framework you like (for example, ```pytest```, or ```unittest```) in separate Python modules. You may have a situation where allowing users to modify and update notebook code is the best way to keep code updated and to allow for flexibility for end users. But you may prefer that the code still be tested and verified separately. Testbook makes this an option.First, you have to install it in your environment:```pip install testbook``` or in your notebook ```%pip install testbook```. Now, in a separate Python file, you can import your notebook code and test it there. In that file, you'll create code that looks like the following, and then you'll use whichever unit testing framework you prefer to actually execute the unit test. You might create the following code in a Python file (say ```jupyter_unit_tests.py```).
###Code
import datetime
import testbook
@testbook.testbook('./jupyter_unit_tests.ipynb', execute=True)
def test_make_url(tb):
func = tb.ref("make_url")
date = datetime.date(2020, 1, 2)
assert func(date) == "https://api.example.com/v2/2020/1/2"
###Output
_____no_output_____
###Markdown
In this case, you can now run the tests with any unit testing framework. For example, with pytest, you would just run the following:```pytest jupyter_unit_tests.py```This works as a normal unit test, and the tests should pass. However, in developing this article, I realized that the ```testbook``` code has limited support for passing arguments in the unit test back into the notebook kernel for testing. These arguments are JSON serialized, and the current code knows how to handle a wide array of Python types. But it doesn't pass a datetime as an object, for example, but as a string. Since our code makes an attempt to parse strings into dates (after I modified it), it works. In other words, the unit test above is not passing in a ```datetime.date``` to the ```make_url``` method, but rather a string (```2020-01-02```) that is then parsed into a date. How could you pass in a date from the unit test into the notebook code? You have several options. First, you can make a date object in your notebook just for testing purposes and then refer to that in your unit tests.
###Code
testdate1 = datetime.date(2020,1,1) # for unit test
###Output
_____no_output_____
###Markdown
Then, you could write your unit test to use that variable in the test.A second option is to inject Python code into the notebook, then refer to it back in your unit test. Both options are shown in the final version of the external unit test, which would need to be saved to ```jupyter_unit_tests.py```.
###Code
import datetime
import testbook
@testbook.testbook('./jupyter_unit_tests.ipynb', execute=True)
def test_make_url(tb):
f = tb.ref("make_url")
d = "2020-01-02"
assert f(d) == "https://api.example.com/v2/2020/1/2"
# note that this is actually converted to a string
d = datetime.date(2020, 1, 2)
assert f(d) == "https://api.example.com/v2/2020/1/2"
# this one will be testing the date functionality
d2 = tb.ref("testdate1")
assert f(d2) == "https://api.example.com/v2/2020/1/1"
# this one will inject similar code as above, then use it
tb.inject("d3 = datetime.date(2020, 2, 3)")
d3 = tb.ref("d3")
assert f(d3) == "https://api.example.com/v2/2020/2/3"
###Output
_____no_output_____ |
demos/linear_systems/Vanilla Gaussian Elimination.ipynb | ###Markdown
Gaussian EliminationCopyright (C) 2020 Andreas KloecknerMIT LicensePermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included inall copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS INTHE SOFTWARE.
###Code
import numpy as np
np.random.seed(5)
n = 4
A = np.round(np.random.randn(n, n) * 5)
A
###Output
_____no_output_____
###Markdown
Now compute `A1` to eliminate `A[1,0]`:
###Code
#clear
A1 = A.copy()
A1[1] -= 1/2*A1[0]
A1
###Output
_____no_output_____
###Markdown
And `A2` with `A[2,0] == 0`:
###Code
#clear
A2 = A1.copy()
A2[2] -= 1/2*A[0]
A2
###Output
_____no_output_____ |
20_extra_autodiff/autodiff.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Introduction to gradients and automatic differentiation View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Automatic Differentiation and Gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks.In this guide, you will explore ways to compute gradients with TensorFlow, especially in [eager execution](eager.ipynb). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Computing gradientsTo differentiate automatically, TensorFlow needs to remember what operations happen in what order during the *forward* pass. Then, during the *backward pass*, TensorFlow traverses this list of operations in reverse order to compute gradients. Gradient tapesTensorFlow provides the `tf.GradientTape` API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually `tf.Variable`s.TensorFlow "records" relevant operations executed inside the context of a `tf.GradientTape` onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using [reverse mode differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation).Here is a simple example:
###Code
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
y = x**2
###Output
_____no_output_____
###Markdown
Once you've recorded some operations, use `GradientTape.gradient(target, sources)` to calculate the gradient of some target (often a loss) relative to some source (often the model's variables):
###Code
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
dy_dx.numpy()
###Output
_____no_output_____
###Markdown
The above example uses scalars, but `tf.GradientTape` works as easily on any tensor:
###Code
w = tf.Variable(tf.random.normal((3, 2)), name='w')
b = tf.Variable(tf.zeros(2, dtype=tf.float32), name='b')
x = [[1., 2., 3.]]
with tf.GradientTape(persistent=True) as tape:
y = x @ w + b
loss = tf.reduce_mean(y**2)
###Output
_____no_output_____
###Markdown
To get the gradient of `y` with respect to both variables, you can pass both as sources to the `gradient` method. The tape is flexible about how sources are passed and will accept any nested combination of lists or dictionaries and return the gradient structured the same way (see `tf.nest`).
###Code
[dl_dw, dl_db] = tape.gradient(loss, [w, b])
###Output
_____no_output_____
###Markdown
The gradient with respect to each source has the shape of the source:
###Code
print(w.shape)
print(dl_dw.shape)
###Output
(3, 2)
(3, 2)
###Markdown
Here is the gradient calculation again, this time passing a dictionary of variables:
###Code
my_vars = {
'w': w,
'b': b
}
grad = tape.gradient(loss, my_vars)
grad['b']
###Output
_____no_output_____
###Markdown
Gradients with respect to a modelIt's common to collect `tf.Variables` into a `tf.Module` or one of its subclasses (`layers.Layer`, `keras.Model`) for [checkpointing](checkpoint.ipynb) and [exporting](saved_model.ipynb).In most cases, you will want to calculate gradients with respect to a model's trainable variables. Since all subclasses of `tf.Module` aggregate their variables in the `Module.trainable_variables` property, you can calculate these gradients in a few lines of code:
###Code
layer = tf.keras.layers.Dense(2, activation='relu')
x = tf.constant([[1., 2., 3.]])
with tf.GradientTape() as tape:
# Forward pass
y = layer(x)
loss = tf.reduce_mean(y**2)
# Calculate gradients with respect to every trainable variable
grad = tape.gradient(loss, layer.trainable_variables)
for var, g in zip(layer.trainable_variables, grad):
print(f'{var.name}, shape: {g.shape}')
###Output
dense/kernel:0, shape: (3, 2)
dense/bias:0, shape: (2,)
###Markdown
Controlling what the tape watches The default behavior is to record all operations after accessing a trainable `tf.Variable`. The reasons for this are:* The tape needs to know which operations to record in the forward pass to calculate the gradients in the backwards pass.* The tape holds references to intermediate outputs, so you don't want to record unnecessary operations.* The most common use case involves calculating the gradient of a loss with respect to all a model's trainable variables.For example, the following fails to calculate a gradient because the `tf.Tensor` is not "watched" by default, and the `tf.Variable` is not trainable:
###Code
# A trainable variable
x0 = tf.Variable(3.0, name='x0')
# Not trainable
x1 = tf.Variable(3.0, name='x1', trainable=False)
# Not a Variable: A variable + tensor returns a tensor.
x2 = tf.Variable(2.0, name='x2') + 1.0
# Not a variable
x3 = tf.constant(3.0, name='x3')
with tf.GradientTape() as tape:
y = (x0**2) + (x1**2) + (x2**2)
grad = tape.gradient(y, [x0, x1, x2, x3])
for g in grad:
print(g)
###Output
tf.Tensor(6.0, shape=(), dtype=float32)
None
None
None
###Markdown
You can list the variables being watched by the tape using the `GradientTape.watched_variables` method:
###Code
[var.name for var in tape.watched_variables()]
###Output
_____no_output_____
###Markdown
`tf.GradientTape` provides hooks that give the user control over what is or is not watched.To record gradients with respect to a `tf.Tensor`, you need to call `GradientTape.watch(x)`:
###Code
x = tf.constant(3.0)
with tf.GradientTape() as tape:
tape.watch(x)
y = x**2
# dy = 2x * dx
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())
###Output
6.0
###Markdown
Conversely, to disable the default behavior of watching all `tf.Variables`, set `watch_accessed_variables=False` when creating the gradient tape. This calculation uses two variables, but only connects the gradient for one of the variables:
###Code
x0 = tf.Variable(0.0)
x1 = tf.Variable(10.0)
with tf.GradientTape(watch_accessed_variables=False) as tape:
tape.watch(x1)
y0 = tf.math.sin(x0)
y1 = tf.nn.softplus(x1)
y = y0 + y1
ys = tf.reduce_sum(y)
###Output
_____no_output_____
###Markdown
Since `GradientTape.watch` was not called on `x0`, no gradient is computed with respect to it:
###Code
# dys/dx1 = exp(x1) / (1 + exp(x1)) = sigmoid(x1)
grad = tape.gradient(ys, {'x0': x0, 'x1': x1})
print('dy/dx0:', grad['x0'])
print('dy/dx1:', grad['x1'].numpy())
###Output
dy/dx0: None
dy/dx1: 0.9999546
###Markdown
Intermediate resultsYou can also request gradients of the output with respect to intermediate values computed inside the `tf.GradientTape` context.
###Code
x = tf.constant(3.0)
with tf.GradientTape() as tape:
tape.watch(x)
y = x * x
z = y * y
# Use the tape to compute the gradient of z with respect to the
# intermediate value y.
# dz_dx = 2 * y, where y = x ** 2
print(tape.gradient(z, y).numpy())
###Output
18.0
###Markdown
By default, the resources held by a `GradientTape` are released as soon as the `GradientTape.gradient` method is called. To compute multiple gradients over the same computation, create a gradient tape with `persistent=True`. This allows multiple calls to the `gradient` method as resources are released when the tape object is garbage collected. For example:
###Code
x = tf.constant([1, 3.0])
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
y = x * x
z = y * y
print(tape.gradient(z, x).numpy()) # 108.0 (4 * x**3 at x = 3)
print(tape.gradient(y, x).numpy()) # 6.0 (2 * x)
del tape # Drop the reference to the tape
###Output
_____no_output_____
###Markdown
Notes on performance* There is a tiny overhead associated with doing operations inside a gradient tape context. For most eager execution this will not be a noticeable cost, but you should still use tape context around the areas only where it is required.* Gradient tapes use memory to store intermediate results, including inputs and outputs, for use during the backwards pass. For efficiency, some ops (like `ReLU`) don't need to keep their intermediate results and they are pruned during the forward pass. However, if you use `persistent=True` on your tape, *nothing is discarded* and your peak memory usage will be higher. Gradients of non-scalar targets A gradient is fundamentally an operation on a scalar.
###Code
x = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
y0 = x**2
y1 = 1 / x
print(tape.gradient(y0, x).numpy())
print(tape.gradient(y1, x).numpy())
###Output
4.0
-0.25
###Markdown
Thus, if you ask for the gradient of multiple targets, the result for each source is:* The gradient of the sum of the targets, or equivalently* The sum of the gradients of each target.
###Code
x = tf.Variable(2.0)
with tf.GradientTape() as tape:
y0 = x**2
y1 = 1 / x
print(tape.gradient({'y0': y0, 'y1': y1}, x).numpy())
###Output
3.75
###Markdown
Similarly, if the target(s) are not scalar the gradient of the sum is calculated:
###Code
x = tf.Variable(2.)
with tf.GradientTape() as tape:
y = x * [3., 4.]
print(tape.gradient(y, x).numpy())
###Output
7.0
###Markdown
This makes it simple to take the gradient of the sum of a collection of losses, or the gradient of the sum of an element-wise loss calculation.If you need a separate gradient for each item, refer to [Jacobians](advanced_autodiff.ipynbjacobians). In some cases you can skip the Jacobian. For an element-wise calculation, the gradient of the sum gives the derivative of each element with respect to its input-element, since each element is independent:
###Code
x = tf.linspace(-10.0, 10.0, 200+1)
with tf.GradientTape() as tape:
tape.watch(x)
y = tf.nn.sigmoid(x)
dy_dx = tape.gradient(y, x)
plt.plot(x, y, label='y')
plt.plot(x, dy_dx, label='dy/dx')
plt.legend()
_ = plt.xlabel('x')
###Output
_____no_output_____
###Markdown
Control flowBecause a gradient tape records operations as they are executed, Python control flow is naturally handled (for example, `if` and `while` statements).Here a different variable is used on each branch of an `if`. The gradient only connects to the variable that was used:
###Code
x = tf.constant(1.0)
v0 = tf.Variable(2.0)
v1 = tf.Variable(2.0)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
if x > 0.0:
result = v0
else:
result = v1**2
dv0, dv1 = tape.gradient(result, [v0, v1])
print(dv0)
print(dv1)
###Output
tf.Tensor(1.0, shape=(), dtype=float32)
None
###Markdown
Just remember that the control statements themselves are not differentiable, so they are invisible to gradient-based optimizers.Depending on the value of `x` in the above example, the tape either records `result = v0` or `result = v1**2`. The gradient with respect to `x` is always `None`.
###Code
dx = tape.gradient(result, x)
print(dx)
###Output
None
###Markdown
Getting a gradient of `None`When a target is not connected to a source you will get a gradient of `None`.
###Code
x = tf.Variable(2.)
y = tf.Variable(3.)
with tf.GradientTape() as tape:
z = y * y
print(tape.gradient(z, x))
###Output
None
###Markdown
Here `z` is obviously not connected to `x`, but there are several less-obvious ways that a gradient can be disconnected. 1. Replaced a variable with a tensorIn the section on ["controlling what the tape watches"](watches) you saw that the tape will automatically watch a `tf.Variable` but not a `tf.Tensor`.One common error is to inadvertently replace a `tf.Variable` with a `tf.Tensor`, instead of using `Variable.assign` to update the `tf.Variable`. Here is an example:
###Code
x = tf.Variable(2.0)
for epoch in range(2):
with tf.GradientTape() as tape:
y = x+1
print(type(x).__name__, ":", tape.gradient(y, x))
x = x + 1 # This should be `x.assign_add(1)`
###Output
ResourceVariable : tf.Tensor(1.0, shape=(), dtype=float32)
EagerTensor : None
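###Markdown
For comparison, here is a minimal sketch of the corrected loop indicated by the comment above: `Variable.assign_add` updates the value in place, so `x` remains a `tf.Variable` and the tape keeps watching it on every iteration.
###Code
x = tf.Variable(2.0)

for epoch in range(2):
  with tf.GradientTape() as tape:
    y = x + 1

  print(type(x).__name__, ":", tape.gradient(y, x))

  x.assign_add(1.0)  # x stays a tf.Variable, unlike `x = x + 1`
###Output
_____no_output_____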
###Markdown
2. Did calculations outside of TensorFlowThe tape can't record the gradient path if the calculation exits TensorFlow.For example:
###Code
x = tf.Variable([[1.0, 2.0],
[3.0, 4.0]], dtype=tf.float32)
with tf.GradientTape() as tape:
x2 = x**2
# This step is calculated with NumPy
y = np.mean(x2, axis=0)
# Like most ops, reduce_mean will cast the NumPy array to a constant tensor
# using `tf.convert_to_tensor`.
y = tf.reduce_mean(y, axis=0)
print(tape.gradient(y, x))
###Output
None
###Markdown
3. Took gradients through an integer or stringIntegers and strings are not differentiable. If a calculation path uses these data types there will be no gradient.Nobody expects strings to be differentiable, but it's easy to accidentally create an `int` constant or variable if you don't specify the `dtype`.
###Code
x = tf.constant(10)
with tf.GradientTape() as g:
g.watch(x)
y = x * x
print(g.gradient(y, x))
###Output
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32
###Markdown
TensorFlow doesn't automatically cast between types, so, in practice, you'll often get a type error instead of a missing gradient. 4. Took gradients through a stateful objectState stops gradients. When you read from a stateful object, the tape can only observe the current state, not the history that lead to it.A `tf.Tensor` is immutable. You can't change a tensor once it's created. It has a _value_, but no _state_. All the operations discussed so far are also stateless: the output of a `tf.matmul` only depends on its inputs.A `tf.Variable` has internal state—its value. When you use the variable, the state is read. It's normal to calculate a gradient with respect to a variable, but the variable's state blocks gradient calculations from going farther back. For example:
###Code
x0 = tf.Variable(3.0)
x1 = tf.Variable(0.0)
with tf.GradientTape() as tape:
# Update x1 = x1 + x0.
x1.assign_add(x0)
# The tape starts recording from x1.
y = x1**2 # y = (x1 + x0)**2
# This doesn't work.
print(tape.gradient(y, x0)) #dy/dx0 = 2*(x1 + x0)
###Output
None
###Markdown
Similarly, `tf.data.Dataset` iterators and `tf.queue`s are stateful, and will stop all gradients on tensors that pass through them. No gradient registered Some `tf.Operation`s are **registered as being non-differentiable** and will return `None`. Others have **no gradient registered**.The `tf.raw_ops` page shows which low-level ops have gradients registered.If you attempt to take a gradient through a float op that has no gradient registered the tape will throw an error instead of silently returning `None`. This way you know something has gone wrong.For example, the `tf.image.adjust_contrast` function wraps `raw_ops.AdjustContrastv2`, which could have a gradient but the gradient is not implemented:
###Code
image = tf.Variable([[[0.5, 0.0, 0.0]]])
delta = tf.Variable(0.1)
with tf.GradientTape() as tape:
new_image = tf.image.adjust_contrast(image, delta)
try:
print(tape.gradient(new_image, [image, delta]))
assert False # This should not happen.
except LookupError as e:
print(f'{type(e).__name__}: {e}')
###Output
LookupError: gradient registry has no entry for: AdjustContrastv2
###Markdown
If you need to differentiate through this op, you'll either need to implement the gradient and register it (using `tf.RegisterGradient`) or re-implement the function using other ops. Zeros instead of None In some cases it would be convenient to get 0 instead of `None` for unconnected gradients. You can decide what to return when you have unconnected gradients using the `unconnected_gradients` argument:
###Code
x = tf.Variable([2., 2.])
y = tf.Variable(3.)
with tf.GradientTape() as tape:
z = y**2
print(tape.gradient(z, x, unconnected_gradients=tf.UnconnectedGradients.ZERO))
###Output
tf.Tensor([0. 0.], shape=(2,), dtype=float32)
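###Markdown
To make the earlier point about `tf.image.adjust_contrast` concrete, here is a minimal sketch of the "re-implement the function using other ops" route. This is a simplified stand-in, not the library's implementation: a contrast-style adjustment written with `tf.reduce_mean` and arithmetic ops, all of which have registered gradients, so the tape can differentiate through it.
###Code
image = tf.Variable([[[0.5, 0.0, 0.0]]])
delta = tf.Variable(0.1)

def simple_contrast(img, contrast_factor):
  # Only ops with registered gradients: reduce_mean, subtraction,
  # multiplication and addition.
  mean = tf.reduce_mean(img, axis=[-3, -2], keepdims=True)
  return (img - mean) * contrast_factor + mean

with tf.GradientTape() as tape:
  new_image = simple_contrast(image, delta)

print(tape.gradient(new_image, [image, delta]))
###Output
_____no_output_____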
|
code/.ipynb_checkpoints/NN_based_models_v4-3-Copy1-checkpoint.ipynb | ###Markdown
Table of Contents1 TextCNN1.1 notes:2 LSTM
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir("/content/drive/MyDrive/Text-Classification/code")
!pip install pyLDAvis
!pip install gensim
!pip install pandas==1.3.0
import nltk
nltk.download('punkt')
nltk.download('stopwords')
import numpy as np
from sklearn import metrics
from clustering_utils import *
from eda_utils import *
from nn_utils_keras import *
from sklearn.model_selection import train_test_split
from tensorflow.keras.utils import to_categorical
####################################
### string normalized
####################################
from gensim.utils import tokenize
from nltk.tokenize import word_tokenize
from gensim.parsing.preprocessing import remove_stopwords
def normal_string(x):
x = remove_stopwords(x)
# x = " ".join(preprocess_string(x))
x = " ".join(word_tokenize(x, preserve_line=False)).strip()
return x
train, test = load_data()
train, upsampling_info = upsampling_train(train)
train_text, train_label = train_augmentation(train, select_comb=[['text'], ['reply', 'reference_one'], ['Subject', 'reference_one', 'reference_two']])
# train_text, train_label = train_augmentation(train, select_comb=None)
test_text, test_label = test['text'], test['label']
# test_text = test_text.apply(lambda x: normal_string(x))
# train_text = train_text.apply(lambda x: normal_string(x))
####################################
### label mapper
####################################
labels = sorted(train_label.unique())
label_mapper = dict(zip(labels, range(len(labels))))
train_label = train_label.map(label_mapper)
test_label = test_label.map(label_mapper)
y_train = train_label
y_test = test_label
print(train_text.shape)
print(test_text.shape)
print(train_label.shape)
print(test_label.shape)
print(labels)
####################################
### hyper params
####################################
filters = '"#$%&()*+,-/:;<=>@[\\]^_`{|}~\t\n0123465789!.?\''
MAX_NB_WORDS_ratio = 0.98
MAX_DOC_LEN_ratio = 0.999
MAX_NB_WORDS = eda_MAX_NB_WORDS(train_text, ratio=MAX_NB_WORDS_ratio, char_level=False, filters=filters)
MAX_DOC_LEN = eda_MAX_DOC_LEN(train_text, ratio=MAX_DOC_LEN_ratio, char_level=False, filters=filters)
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import Embedding, Dense, Conv1D, MaxPooling1D, Dropout, Activation, Input, Flatten, Concatenate, Lambda
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.utils import to_categorical
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from tensorflow import keras
import numpy as np
import pandas as pd
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import os
###Output
_____no_output_____
###Markdown
TextCNN notes:
###Code
####################################
### train val test split
####################################
X_train_val, y_train_val, X_test, y_test = train_text, train_label, test_text, test_label
X_train, x_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=0.2, stratify=y_train_val)
####################################
### preprocessor for NN input
####################################
processor = text_preprocessor(MAX_DOC_LEN, MAX_NB_WORDS, train_text, filters='"#$%&()*+,-/:;<=>@[\\]^_`{|}~\t\n0123465789')
X_train = processor.generate_seq(X_train)
x_val = processor.generate_seq(x_val)
X_test = processor.generate_seq(X_test)
# y_train = to_categorical(y_train)
# y_val = to_categorical(y_val)
# y_test = to_categorical(y_test)
print('Shape of x_tr: ' + str(X_train.shape))
print('Shape of y_tr: ' + str(y_train.shape))
print('Shape of x_val: ' + str(x_val.shape))
print('Shape of y_val: ' + str(y_val.shape))
print('Shape of X_test: ' + str(X_test.shape))
print('Shape of y_test: ' + str(y_test.shape))
info = pd.concat([y_train.value_counts(), y_val.value_counts(), y_val.value_counts()/y_train.value_counts(), y_train.value_counts()/y_train.size\
, y_test.value_counts(), y_test.value_counts()/y_test.size], axis=1)
info.index = labels
info.columns = ['tr_size', 'val_size', 'val_ratio', 'tr_prop', 'test_size', 'test_prop']
info
# define Model for classification
def model_Create(FS, NF, EMB, MDL, MNW, PWV=None, optimizer='RMSprop', trainable_switch=True):
cnn_box = cnn_model_l2(FILTER_SIZES=FS, MAX_NB_WORDS=MNW, MAX_DOC_LEN=MDL, EMBEDDING_DIM=EMB,
NUM_FILTERS=NF, PRETRAINED_WORD_VECTOR=PWV, trainable_switch=trainable_switch)
# Hyperparameters: MAX_DOC_LEN
return cnn_box
q1_input = Input(shape=(MDL,), name='q1_input')
encode_input1 = cnn_box(q1_input)
# half_features = int(len(FS)*NF/2)*10
x = Dense(384, activation='relu', name='half_features')(encode_input1)
x = Dropout(rate=0.3, name='dropout1')(x)
# x = Dense(256, activation='relu', name='dense1')(x)
# x = Dropout(rate=0.3, name='dropou2')(x)
x = Dense(128, activation='relu', name='dense2')(x)
x = Dropout(rate=0.3, name='dropout3')(x)
x = Dense(64, activation='relu', name='dense3')(x)
x = Dropout(rate=0.3, name='dropout4')(x)
pred = Dense(len(labels), activation='softmax', name='Prediction')(x)
model = Model(inputs=q1_input, outputs=pred)
model.compile(optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
EMBEDDING_DIM = 200
# W2V = processor.w2v_pretrain(EMBEDDING_DIM, min_count=2, seed=1, cbow_mean=1,negative=5, window=20, workers=7) # pretrain w2v by gensim
# W2V = processor.load_glove_w2v(EMBEDDING_DIM) # download glove
W2V = None
trainable_switch = True
MAX_DOC_LEN = 8110
MAX_NB_WORDS =31994
# Set hyper parameters
FILTER_SIZES = [2, 4,6,8]
# FILTER_SIZES = [2,3,4]
NUM_FILTERS = 64
# OPT = optimizers.Adam(learning_rate=0.005)
OPT = optimizers.RMSprop(learning_rate=0.0005) # 'RMSprop'
PWV = W2V
model = model_Create(FS=FILTER_SIZES, NF=NUM_FILTERS, EMB=EMBEDDING_DIM,
MDL=MAX_DOC_LEN, MNW=MAX_NB_WORDS+1, PWV=PWV,
optimizer=OPT, trainable_switch=trainable_switch)
def visual_textCNN(model, filename='multichannel-CNN.png'):
print(model.summary())
return SVG(model_to_dot(model, dpi=70, show_shapes=True, show_layer_names=True).create(prog='dot', format='svg'),filename=filename )
visual_textCNN(model)
BATCH_SIZE = 32 # train on a small batch first, which makes it easier to find a good (near-global) optimum, then move to a larger batch to converge quickly to a local optimum
NUM_EPOCHES = 50 # use at least 20 epochs
patience = 30
file_name = 'test'
BestModel_Name = file_name + 'Best_GS_3'
BEST_MODEL_FILEPATH = BestModel_Name
# model.load_weights(BestModel_Name) # uncomment to resume training from the previous run
earlyStopping = EarlyStopping(monitor='val_sparse_categorical_accuracy', patience=patience, verbose=1, mode='max') # patience: number of epochs with no improvement on monitor : val_loss
checkpoint = ModelCheckpoint(BEST_MODEL_FILEPATH, monitor='val_sparse_categorical_accuracy', verbose=1, save_best_only=True, mode='max')
# history = model.fit(X_train, y_train, validation_data=(X_test,y_test), batch_size=BATCH_SIZE, epochs=NUM_EPOCHES, callbacks=[earlyStopping, checkpoint], verbose=1)
history = model.fit(X_train, y_train, validation_data=(x_val, y_val), batch_size=BATCH_SIZE, epochs=NUM_EPOCHES, callbacks=[earlyStopping, checkpoint], verbose=1)
model.load_weights(BestModel_Name)
#### classification Report
history_plot(history)
y_pred = model.predict(X_test)
# print(classification_report(y_test, np.argmax(y_pred, axis=1)))
print(classification_report(test_label, np.argmax(y_pred, axis=1), target_names=labels))
scores = model.evaluate(X_test, y_test, verbose=2)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
print( "\n\n\n")
###Output
========================================================================
loss val_loss
###Markdown
LSTM
###Code
# from tensorflow.keras.layers import SpatialDropout1D, GlobalMaxPooling1D, GlobalMaxPooling2D
# def model_Create(FS, NF, EMB, MDL, MNW, PWV = None, optimizer='RMSprop', trainable_switch=True):
# model = Sequential()
# model.add(Embedding(input_dim=MNW, output_dim=EMB, embeddings_initializer='uniform', mask_zero=True, input_length=MDL))
# model.add(Flatten())
# # model.add(GlobalMaxPooling2D()) # downsampling
# # model.add(SpatialDropout1D(0.2))
# model.add(Dense(1024, activation='relu'))
# model.add(Dense(512, activation='relu'))
# model.add(Dense(128, activation='relu'))
# model.add(Dense(64, activation='relu'))
# # model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
# model.add(Dense(20, activation='softmax'))
# model.compile(optimizer=optimizer,
# loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False),
# metrics=[keras.metrics.SparseCategoricalAccuracy()])
# return model
# model = model_Create(FS=FILTER_SIZES, NF=NUM_FILTERS, EMB=EMBEDDING_DIM,
# MDL=MAX_DOC_LEN, MNW=MAX_NB_WORDS+1, PWV=PWV, trainable_switch=trainable_switch)
# visual_textCNN(model)
# EMBEDDING_DIM = 200
# # W2V = processor.w2v_pretrain(EMBEDDING_DIM, min_count=2, seed=1, cbow_mean=1,negative=5, window=20, workers=7) # pretrain w2v by gensim
# # W2V = processor.load_glove_w2v(EMBEDDING_DIM) # download glove
# trainable_switch = True
# W2V = None
# BATCH_SIZE = 64
# NUM_EPOCHES = 10 # patience=20
# patience = 30
# BestModel_Name = 'text_CNN.h5'
# BEST_MODEL_FILEPATH = BestModel_Name
# earlyStopping = EarlyStopping(monitor='val_sparse_categorical_accuracy', patience=patience, verbose=1, mode='max') # patience: number of epochs with no improvement on monitor : val_loss
# checkpoint = ModelCheckpoint(BEST_MODEL_FILEPATH, monitor='val_sparse_categorical_accuracy', verbose=1, save_best_only=True, mode='max')
# history = model.fit(X_train, y_train, validation_split=0.2, batch_size=BATCH_SIZE, epochs=NUM_EPOCHES, callbacks=[earlyStopping, checkpoint], verbose=1)
# model.load_weights(BestModel_Name)
# #### classification Report
# history_plot(history)
# y_pred = model.predict(X_test)
# # print(classification_report(y_test, np.argmax(y_pred, axis=1)))
# print(classification_report(test_label, np.argmax(y_pred, axis=1), target_names=labels))
# scores = model.evaluate(X_test, y_test, verbose=2)
# print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# print( "\n\n\n")
###Output
_____no_output_____ |
Hierarchical Agglomerative.ipynb | ###Markdown
Hierarchical model with spatial constraint
###Code
from sklearn.cluster import AgglomerativeClustering
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import ipdb
nodes = pd.read_csv('Proj_Data/node.csv', index_col=0)
nodes['index abs'] = range(len(nodes))
edges = pd.read_csv('Proj_Data/edges_with_qkv.csv', index_col=0)
nodes
connectivity_mat = np.zeros([nodes.shape[0], nodes.shape[0]])
for i in range(len(edges)):
from_ = edges.loc[i, 'node1']
from_ = nodes.loc[from_, 'index abs']
to_ = edges.loc[i, 'node2']
to_ = nodes.loc[to_, 'index abs']
connectivity_mat[from_, to_] = 1
connectivity_mat[to_, from_] = 1
# Set seed for reproducibility
np.random.seed(0)
# Initiate the algorithm; 'ward' linkage minimises the variance within each cluster
model = AgglomerativeClustering(linkage='ward', connectivity=connectivity_mat, n_clusters=4)
# Run clustering
model.fit(nodes[['Long', 'Lat', 'q']])
# Assign labels to main data table
nodes['cls'] = model.labels_
data_new = pd.read_csv('./Proj_Data/2019-10-21_with_cord.csv', index_col=0)
data_new = data_new.loc[data_new['Lane type']=='ML']
data_new['cls'] = ''
for i in nodes.index:
# ipdb.set_trace()
ID = nodes.loc[i, 'ID']
cls = nodes.loc[i, 'cls']
data_new.loc[data_new['ID']==int(ID), 'cls'] = cls
data_new['q0'] = data_new['q'] * 12
data_new['k0'] = data_new['q0'] / data_new['Avg v']
color_set = ['#2ca02c', '#ff7f0e', '#d62728', '#1f77b4', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
for i in range(3,10):
plt.plot([1,2], [2,i], color=color_set[i-3], label=str(i-3))
plt.legend()
c = 0
c_set = []
plt.rcParams['font.family'] = 'Times New Roman'
fig_mfd = plt.figure(figsize=[8,5])
ax_mfd = fig_mfd.add_subplot(111)
fig_net = plt.figure(figsize=[8,5])
ax_net = fig_net.add_subplot(111)
for i in edges.index:
node1 = edges.loc[i, 'node1']
node2 = edges.loc[i, 'node2']
ax_net.plot([nodes.loc[node1, 'Long'], nodes.loc[node2, 'Long']], [nodes.loc[node1, 'Lat'], nodes.loc[node2, 'Lat']], 'black', lw=0.5)
ft = 20
font = {'family': 'Times New Roman',
'weight': 'normal',
'size': ft,
}
for i in [0,1,2,3]:
data_cls = data_new.loc[data_new['cls']==i].sort_values(by=['ID', 'Time'])
q_cls = data_cls['q'].values
if q_cls.reshape(-1, 288).shape[0] <= 1:
continue
q_cls_avg = q_cls.reshape(-1, 288).mean(axis=0)
k_cls = data_cls['Avg k'].values
k_cls_avg = k_cls.reshape(-1, 288).mean(axis=0)
ax_mfd.scatter(k_cls_avg, q_cls_avg, s=.5, c=color_set[c])
ax_mfd.set_xlabel('Occupancy', fontdict=font)
ax_mfd.set_ylabel('Flow/[veh/5 min]', fontdict=font)
ax_mfd.tick_params(axis='both', which='major', labelsize=ft*0.9)
lng = nodes.loc[nodes['cls']==i, 'Long']
lat = nodes.loc[nodes['cls']==i, 'Lat']
ax_net.scatter(lng, lat, s=5, c=color_set[c])
ax_net.set_xlabel('Longitude', fontdict=font)
ax_net.set_ylabel('Latitude', fontdict=font)
ax_net.tick_params(axis='both', which='major', labelsize=ft*0.9)
c+=1
c_set.append(i)
print('There are %i classes'%c)
# fig_mfd.savefig('./img/HS_fig_mfd.png', dpi=500)
# fig_net.savefig('./img/HS_fig_net.png', dpi=500)
color_name_set = ['g', 'y', 'r', 'b']
for a in c_set:
NSk = 0
for c in c_set:
if a==c:
continue
NSk_temp = 2*nodes.loc[nodes['cls']==a, 'q'].std()**2/(nodes.loc[nodes['cls']==a, 'q'].std()**2+nodes.loc[nodes['cls']==c, 'q'].std()**2+(nodes.loc[nodes['cls']==a, 'q'].mean()-nodes.loc[nodes['cls']==c, 'q'].mean())**2)
if NSk_temp > NSk:
NSk = NSk_temp
print(color_name_set[a], NSk)
TV = 0
for c in c_set:
TV += nodes.loc[nodes['cls']==c, 'q'].__len__()*nodes.loc[nodes['cls']==c, 'q'].std()**2
print('Abs TV:', TV)
print('Norm TV:', TV/(nodes.__len__()*nodes['q'].std()**2))
###Output
Abs TV: 9577273.458745336
Norm TV: 0.6531563585151293
|
Week_1/week_1.ipynb | ###Markdown
Advanced Chemistry Practical: Computational ChemistryWelcome to the advanced pratical focusing on [computational chemistry](./README.md). Over the next four weeks you will: - gain a understanding of, and familiarity, with molecular dynamics (MD) simulations.- learn how MD simulations are performed in practice.- use MD simulations to study the solid state materials, such as batteries and solar cells. - rationalise your results in terms of physical chemistry phenomena you are familiar with. For more details about the learning objectives of this practical, please see the [lesson plan](https://github.com/symmy596/Bath_University_Advanced_Practical_Chemistry_Year_2/blob/master/LESSONPLAN.md) online. This pratical will also make use of some of the **Python** and **Jupyter** skills that you were introduced to in the first and second year computational laboratory, if you feel that these are not fresh in your mind it might be worth looking back at the exercises from previous years, or investigate the links provided in this document.This first week we will focus on an introduction to **classical molecular dynamics simulation**, if you took the "Introduction to Computational Chemistry" (CH20238) module last year this **will** involve some revision. However, it is **important** that you work through the whole introduction as it should make the basis for the methodology section of your report. That said, as with all work, this notebook should **not** be your exclusive source of background information about molecular dynamics. Below is a non-exhaustive list of books in the library that can be used for more information. - Harvey, J. (2017). *Computational Chemistry*. Oxford, UK. Oxford University Press - Bath Library Shelf Reference: 542.85 HAR- Grant, G. H. & Richards, W. G. (1995). *Computational Chemistry*. Oxford, UK. Oxford University Press - Bath Library Shelf Reference: 542.85 GRA- Leach, A. R. (1996). *Molecular modelling: principles and applications*. Harlow, UK. Longman - Bath Library Shelf Reference: 541.6 LEA- Frenkel, D. & Smit, B. (2002). *Understanding molecular simulation: from algorithms to applications*. San Diego, USA. Academic Press - Bath Library Shelf Reference: 541.572.6 FRE - Note: This book is a personal favourite, great if you love maths and algorithms but is particularly **hardcore**.- Allen, M. P. & Tildesley, D. J. (1987). *Computer simulation of liquids*. Oxford, UK. Clarendon Press - Bath Library Shelf Reference: 532.9 ALL - Note : This is also pretty **hardcore**. Introduction to classical molecular dynamics**Classical molecular dynamics** is one of the most commonly applied techniques in computational chemistry, in particular for the study of large systems such as proteins, polymers, batteries materials, and solar cells. In classical molecular dynamics, as you would expect, we use **classical methods** to study the **dynamics** of **molecules**. Classical methodsThe term **classical methods** is used to distinguish from quantum mechanical methods, such as the Hartree-Fock method or Møller–Plesset perturbation theory. In these classical methods, the quantum mechanical **weirdness** is not present, which has a significant impact on the efficiency of the calculation. The need for quantum mechanics is removed by integrating over all of the electronic orbitals and motions and describing the atom as a fixed electron distribution. 
This **simplification** has some drawbacks, classical methods are only suitable for the study of molecular ground states, limiting the ability to study reactions. Furthermore, it is necessary to determine some way to **describe** this electron distribution. In practice, the model used to describe the electron distribution is usually **isotropic**, e.g. a sphere, with the electron sharing bonds between the atoms described as springs. Figure 1. A pictorial example of the models used in a classical method. The aim of a lot of chemistry is to understand the **energy** of the given system, therefore we must parameterise the **models** of our system in terms of the energy. For a molecular system, the energy is defined in terms of bonded and non-bonded interactions, $$ E_{\text{tot}} = E_{\text{bond}} + E_{\text{angle}} + E_{\text{dihedral}} + E_{\text{non-bond}} $$where, $E_{\text{bond}}$, $E_{\text{angle}}$, and $E_{\text{dihedral}}$ are the energies associated with all of the bonded interactions, and $E_{\text{non-bond}}$ is the energy associated with all the of the non-bonded interactions. In this project, we will be focusing on **atomic ionic solids**, where there are no covalent bonds between the atoms, therefore in this introduction will focus on the **non-bonded interactions**. The parameterisation of the model involves the use of **mathematical functions** to describe some **physical relationship**. For example, one of the two common non-bonded interactions is the electrostatic interaction between two charged particles, to model this interaction we use **Coulomb's law**, which was first defined in 1785, $$ E_{\text{Coulomb}}(r_{ij}) = \frac{1}{4\pi\epsilon_0}\frac{q_iq_je^2}{r_{ij}}, $$ where, $q_i$ and $q_j$ are the charges on the particles, $e$ is the charge of the electron, $\epsilon$ is the dielectric permitivity of vacuum, and $r_{ij}$ is the distance between the two particles. In the cell below the example code is shown. Here is function which models the electrostatic interaction using Coulomb's law, before plotting it (if you need a quick reminder of function definition, check out [this blog](http://pythoninchemistry.org/functions)). A note on Python The lessons that were taught in first and second year have given you enough programming experience to complete this exercise. In reality, you will not need to convey any knowledge of Python in your reports or in your viva. Python is a useful tool that allows you to see the underlying algorithms that underpin computational chemistry and allows you to setup and analyse simulations quickly and efficiently. In this tutorial we are relying on the numpy library. For more information on importing libraries please [see](https://pythoninchemistry.org/import-anything). The main objective for you in this exercise is to define functions that describe the interactions between atoms. For more information on functions please [see](https://pythoninchemistry.org/functions).
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import e, epsilon_0
from math import pi
def Coulomb(qi, qj, dr):
return (qi * qj * e ** 2.) / (4. * pi * epsilon_0 * dr)
r = np.linspace(3e-10, 8e-10, 100)
plt.plot(r, Coulomb(1, -1, r))
plt.xlabel(r'$r_{ij}$/m')
plt.ylabel(r'$E$/J')
plt.show()
###Output
_____no_output_____
###Markdown
Note that if $q_i$ and $q_j$ have different signs (i.e. the particles are oppositely charged) then the value of $E_{\text{Coulomb}}$ will **always** be less than zero (i.e. attractive). It is clear that this mathematical function has clear roots in the physics of the system. However, the other component of the non-bonded interaction is less well defined. This is the **van der Waals** interaction, which encompasses both the attractive London dispersion effects and the repulsive Pauli exclusion principle. There are a variety of ways that the van der Waals interaction can be modelled; this week we will investigate a few of these. One commonly applied model is the **Lennard-Jones** potential model, which considers the attractive London dispersion effects as follows, $$ E_{\text{attractive}}(r_{ij}) = \frac{-B}{r_{ij}^6}, $$where $B$ is some constant for the interaction, and $r_{ij}$ is the distance between the two atoms. The Pauli exclusion principle is repulsive and only present over very short distances, and is therefore modelled with the relation, $$ E_{\text{repulsive}}(r_{ij}) = \frac{A}{r_{ij}^{12}}, $$where again $A$ is some interaction-specific constant. The total Lennard-Jones interaction is then the linear combination of these two terms, $$ E_{LJ}(r_{ij}) = E_{\text{repulsive}}(r_{ij}) + E_{\text{attractive}}(r_{ij}) = \frac{A}{r_{ij}^{12}} - \frac{B}{r_{ij}^6}. $$As was performed for the electrostatic interaction, in the cell below **define** each of the attractive, repulsive and total van der Waals interaction energies as defined by the Lennard-Jones potential and plot **all three** on a single graph, where $A = 1.363\times10^{-134}\text{ Jm}^{12}$ and $B = 9.273\times10^{-78}\text{ Jm}^{6}$.
###Code
%matplotlib inline
def attractive(dr, b):
return □ □ □
def repulsive(dr, a):
return □ □ □
def lj(dr, constants):
return □ □ □
r = np.linspace(3e-10, 8e-10, 100)
plt.plot(r, attractive(r, 9.273e-78), label='Attractive')
plt.plot(r, repulsive(r, 1.363e-134), label='Repulsive')
plt.plot(r, lj(r, [1.363e-134, 9.273e-78]), label='Lennard-Jones')
plt.xlabel(r'$r_{ij}$/m')
plt.ylabel(r'$E$/J')
plt.legend()
plt.savefig("LJ.png", dpi=600)
plt.show()
###Output
_____no_output_____
###Markdown
The following cell is a testing cell. If your functions are correct, it will run without issue; if it fails, there is an error in your function. Testing cells like this are used throughout this exercise.
###Code
np.testing.assert_almost_equal(attractive(5e-10, 9.273e-78) * 1e18, -5.93472e-4)
np.testing.assert_almost_equal(repulsive(5e-10, 1.363e-134) * 1e18, 5.5828e-5)
np.testing.assert_almost_equal(lj(5e-10, [1.363e-134, 9.273e-78]) * 1e18, -5.3764e-4)
###Output
_____no_output_____
###Markdown
The Lennard-Jones potential is by no means the only way to model the van der Waals interaction. Another common potential model is the **Buckingham** potential. Like the Lennard-Jones potential, the Buckingham models the attractive term with a power-6 dependence; however, instead of the power-12 repulsion, the repulsion is modelled with an exponential function. The total Buckingham potential is as follows, $$ E_{\text{Buckingham}}(r_{ij}) = A\exp{(-Br_{ij})} - \frac{C}{r_{ij}^6}, $$where $A$, $B$, and $C$ are interaction specific. N.B. these are not the same $A$ and $B$ as in the Lennard-Jones potential. **In the cell below**, define a Buckingham potential and plot it, where $A = 1.69\times10^{-15}\text{ J}$, $B = 3.66\times10^{10}\text{ m}^{-1}$, and $C = 1.02\times10^{-77}\text{ Jm}^{6}$.
###Code
%matplotlib inline
def buckingham(dr, constants):
return □ □ □
r = np.linspace(0.6e-10, 8e-10, 100)
plt.plot(r, buckingham(r, [1.69e-15, 3.66e10, 1.02e-77]), label='Buckingham')
plt.xlabel(r'$r_{ij}$/m')
plt.ylabel(r'$E$/J')
plt.legend()
plt.show()
np.testing.assert_almost_equal(buckingham(5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e18, -6.3373e-4)
np.testing.assert_almost_equal(buckingham(0.5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e15, -.381701)
###Output
_____no_output_____
###Markdown
When the Buckingham potential is plotted from $3~Å$ to $10~Å$, the potential looks similar to the Lennard-Jones: there is a well at the ideal interatomic distance, with a shallow path out as the particles move apart and a very steep incline as the particles move closer. Now **investigate** the Buckingham potential over the range of $0.6~Å$ to $8~Å$ and comment on the interaction when $r_{ij} < 0.75~Å$. Does this appear physically realistic? **Comment** on problems that may occur when the Buckingham potential is being used at very high temperature. Comment on the problems that may occur when the Buckingham potential is being used at very high temperature. More simplificationsThe classical methods that involve modelling atoms as a series of particles, with analytical mathematical functions to describe their energy, are routinely used to model the properties of very large systems, like biological macromolecules. While these calculations are a lot faster using classical methods than quantum mechanics, for a system with $10 000$ atoms there are still nearly $50 000 000$ interactions to consider. Therefore, so that our calculations run on a feasible timescale, we make use of some additional simplifications. Cut-offsIf we plot the Lennard-Jones potential all the way out to $15 Å$, we get something that looks like *Figure 2*. Figure 2. The Lennard-Jones potential (blue) and a line of y=0 (orange). It is clear from *Figure 2*, and from our understanding of the particle interaction, that as the particles move away from each other their interaction energy tends towards $0$. The concept of a cut-off suggests that if two particles are found to be very far apart ($\sim15~Å$), there is no need to calculate the energy between them and it can just be taken as $0$, $$ E(r_{ij})=\left\{ \begin{array}{@{}ll@{}} \dfrac{A}{r_{ij}^{12}} - \dfrac{B}{r_{ij}^6}, & \text{if}\ r_{ij}<15\text{ Å} \\ 0, & \text{otherwise.} \end{array}\right.$$This saves significant computation time, as powers (e.g. the power-12 and power-6 terms in the Lennard-Jones potential) are very computationally expensive to calculate. In the cell below, **modify** your Lennard-Jones and Buckingham potential functions to have a cut-off of $15 Å$ (for this you will need to recall if and else statements from the previous Python labs).
###Code
def lj(dr, constants):
if dr < 15e-10:
return □ □ □
else:
return □ □ □
def buckingham(dr, constants):
if dr < 15e-10:
return □ □ □
else:
return □ □ □
np.testing.assert_almost_equal(lj(5e-10, [1.363e-134, 9.273e-78]) * 1e18, -5.3764e-4)
np.testing.assert_almost_equal(buckingham(5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e18, -6.3373e-4)
np.testing.assert_almost_equal(buckingham(0.5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e15, -.381701)
np.testing.assert_equal(lj(15e-10, [1.363e-134, 9.273e-78]) * 1e18, 0)
np.testing.assert_equal(buckingham(15e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e18, 0)
###Output
_____no_output_____
###Markdown
Periodic boundary conditionsEven with cut-offs, it is not straightforward to design a large enough simulation cell to represent the bulk behaviour of liquids or solids in a physically relevant way; for example, what happens when the atoms interact with the walls of the cell? This is dealt with using **periodic boundary conditions**, which state that the cell being simulated is part of an infinite number of identical cells arranged in a lattice (*Figure 3*). Figure 3. A two-dimensional example of a periodic cell. When a particle reaches the cell wall, it moves into the adjacent cell, and since all the cells are identical, it appears on the other side. **Run** the cell below to see a periodic boundary condition in action for a single cell.
###Code
%matplotlib notebook
examples.pbc()
###Output
_____no_output_____
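###Markdown
As a minimal illustration of how periodic boundaries are applied in practice, positions that leave the cell can be wrapped back in with the modulo operation. The `box_length` and `positions` values below are assumptions for demonstration only, not quantities used elsewhere in this notebook.
###Code
import numpy as np

box_length = 20e-10  # an assumed cubic cell length in metres
positions = np.array([5e-10, 21e-10, -3e-10])

# Wrap any position that has left the cell back into the range [0, box_length).
wrapped_positions = positions % box_length
print(wrapped_positions)
###Output
_____no_output_____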
###Markdown
Molecular dynamicsHaving introduced the classical methods, it is now necessary to discuss how the **dynamics of molecules** are obtained. The particles that we are studying are classical in nature, so it is possible to apply classical mechanics to rationalise their dynamical behaviour. For this the starting point is Newton's second law of motion, $$ \mathbf{f} = m\mathbf{a}, $$ where $\mathbf{f}$ is the force on an atom of mass, $m$, and acceleration, $\mathbf{a}$. The force between two particles, $i$ and $j$, can be found from the interaction energy, $$ f_{ij} = \frac{-\text{d}E(r_{ij})}{\text{d}r_{ij}}, $$ which is to say that the force is the negative of the first derivative of the energy with respect to the distance between them. In the cell below, a new function has been defined for the Lennard-Jones energy **or** force (selected with the `force` argument).
###Code
def lennard_jones(dr, constants, force):
if force:
return 12 * constants[0] * np.power(dr, -13) - (6 * constants[1] * np.power(dr, -7))
else:
return constants[0] * np.power(dr, -12) - (constants[1] * np.power(dr, -6))
###Output
_____no_output_____
###Markdown
Use the above function as a template to **define** a similar function to determine the energy **or** force from the Buckingham potential.
###Code
def buckingham(dr, constants, force):
if force:
return □ □ □
else:
return □ □ □
np.testing.assert_almost_equal(lennard_jones(5e-10, [1.363e-134, 9.273e-78], False) * 1e18, -5.3764e-4)
np.testing.assert_almost_equal(lennard_jones(5e-10, [1.363e-134, 9.273e-78], True) * 1e10, -5.78178e-2)
np.testing.assert_almost_equal(lennard_jones([5e-10, 5e-10], [1.363e-134, 9.273e-78], True) * 1e10,
[-5.78178e-2, -5.78178e-2])
###Output
_____no_output_____
###Markdown
You may have noted that the force in eqn. 8 is a vector quantity, whereas that in eqn. 9 is not. Therefore it is necessary to obtain the force vector in each dimension, by multiplying by the unit vector in that dimension, $$ \mathbf{f}_x = f \mathbf{\hat{r}}_x \text{, where } \mathbf{\hat{r}}_x = \frac{r_x}{|\mathbf{r}|}. $$This must be carried out to determine the force on the particle in each dimension that is being considered. However, in this example we will only consider the $x$-dimension for now.This means for a system with two argon particles, at positions of $x_0 = 5~Å$ and $x_1 = 10~Å$, we are able to determine the energy of the interaction, the force, and the acceleration on each particle, as **shown** in the cell below.
###Code
mass_of_argon = 39.948 # amu
mass_of_argon_kg = mass_of_argon * 1.6605e-27
def get_acceleration(positions):
rx = np.zeros_like(positions)
k = 0
for i in range(0, len(positions)):
for j in range(0, len(positions)):
if i != j:
rx[k] = positions[i] - positions[j]
k += 1
r_mag = np.sqrt(rx * rx)
force = lennard_jones(r_mag, [1.363e-134, 9.273e-78], True)
force_x = force * rx / r_mag
acceleration_x = force_x / mass_of_argon_kg
return acceleration_x
positions = np.array([5e-10, 10e-10])
acc = get_acceleration(positions)
print('acceleration on particle 0 = {:.2e} m/s2'.format(acc[0]))
print('acceleration on particle 1 = {:.2e} m/s2'.format(acc[1]))
###Output
_____no_output_____
###Markdown
IntegrationThis means that we now know the position of the particle and the acceleration that it has, so it is only necessary to find the velocity of the particle and we can apply the basic equations of motion to our particles,$$ \mathbf{x}_i(t + \Delta t) = \mathbf{x}_i(t) + \mathbf{v}_i(t)\Delta t + \dfrac{1}{2} \mathbf{a}_i(t)\Delta t^2, $$$$ \mathbf{v}_i(t + \Delta t) = \mathbf{v}_i(t) + \dfrac{1}{2}\big[\mathbf{a}_i(t) + \mathbf{a}_i(t+\Delta t)\big]\Delta t, $$ where $\Delta t$ is the timestep (how far time is incremented), $\mathbf{x}_i$ is the particle position, $\mathbf{v}_i$ is the velocity, and $\mathbf{a}_i$ the acceleration. This pair of equations is known as the Velocity-Verlet algorithm, which can be written as:1. find the position of the particle after some timestep using eqn. 11, 2. calculate the force (and acceleration) on the particle,3. determine a new velocity for the particle, based on the average acceleration at the current and new positions, using eqn. 12, 4. overwrite the old acceleration values with the new ones, $\mathbf{a}_{i}(t) = \mathbf{a}_{i}(t + \Delta t)$,5. go to 1.This process can be continued for as long as is required to get good statistics for the quantity you are interested in (or for as long as you can wait for/afford to run the computer for). This process is called the integration step, and the Velocity-Verlet is the **integrator**. The Velocity-Verlet integration is numerical in nature, meaning that the accuracy of this method is dependent on the size of the timestep, $\Delta t$. Small values of $\Delta t$ are capable of keeping the resultant uncertainty in the position and velocity small; these values are usually on the scale of $10^{-15}\text{ s}$ (femtoseconds). This means that to measure even a nanosecond of "real-time" molecular dynamics, 1 000 000 (one million) iterations of the above algorithm must be performed. In the cell below, these two update steps have been defined.
###Code
def update_pos(x, v, a, dt):
return x + v * dt + 0.5 * a * dt * dt
def update_velo(v, a, a1, dt):
return v + 0.5 * (a + a1) * dt
###Output
_____no_output_____
###Markdown
InitialisationThere are only two tools left that you need to run a molecular dynamics simulation, and both are associated with the original configuration of the system: the original particle positions and the original particle velocities. The particle positions are usually taken from some library of structures (e.g. the protein data bank if you are simulating proteins) or based on some knowledge of the system (e.g. CaF2 is known to have a face-centred cubic structure). The particle velocities are a bit more nuanced, as the total kinetic energy, $E_K$, of the system (and therefore the particle velocities) depends on the temperature of the simulation, $T$. $$ E_K = \sum_{i=1}^N \frac{m_i|v_i|^2}{2} = \frac{3}{2}Nk_BT, $$where $m_i$ is the mass of particle $i$, $N$ is the number of particles and $k_B$ is the Boltzmann constant. Based on this knowledge, the most common way to obtain initial velocities is to assign random values and then scale them based on the temperature of the system. For example, in the software you will use later today the initial velocities are determined as follows, $$ v_i = R_i \sqrt{\dfrac{k_BT}{m_i}}, $$where $R_i$ is some random number between $-0.5$ and $0.5$, $k_B$ is the Boltzmann constant, $T$ is the temperature, and $m_i$ is the mass of the particle.In the cell below the example code is shown.
###Code
def init_velocity(temperature, part_numb):
v = np.random.rand(part_numb) - 0.5
return v * np.sqrt(temperature * 1.3806e-23 / mass_of_argon_kg)
###Output
_____no_output_____
###Markdown
Build an MD simulationWe will now try and use what we have done so far to build a 1-dimensional molecular dynamics simulation.A molecular dynamics simulation is essentially an algorithm that can be broken down into a series of steps. Each step has already been defined in a function above. Now you need to stitch them together to build your own 1D MD simulation. In the cell below the steps have been laid out for you.1. Define the timestep, number of steps and initial positions of the particles (Done for you),2. Initialise the velocities - Use the init_velocity function - Temperature of 30 with 2 particles,3. Calculate the accelerations - Use the get_acceleration function,4. Begin a loop of the number of timesteps,5. Update the positions - Use the update_pos functions,6. Calculate the new accelerations - Use the get_acceleration function,7. Update the velocity - Use the update_velo function,8. Save the accelerations,
###Code
dt = 1e-14 # (seconds)
number_of_steps = 1000
distances = []
# initialisation
x = np.array([5e-10, 10e-10]) # (meters) these are the starting positions of the particles
#v =
#a =
for i in range(0, number_of_steps):
# x =
# a1 =
# v =
# a =
distances.append(np.abs(x[1] - x[0]))
###Output
_____no_output_____
###Markdown
**Ensure** that a demonstrator has checked the MD simulation before you continue!
###Code
%matplotlib inline
plt.plot(distances)
plt.xlabel('Steps')
plt.ylabel('Distances/m')
plt.show()
###Output
_____no_output_____
###Markdown
Run your 1-D molecular dynamics simulation a few times each at a range of different initial temperatures. In the cell below, **comment** on the effect of the different temperature on the distances that are sampled in the simulation. Comment on the effect of the different temperature on the interatomic distances sampled in the simulation Phase diagramHaving been introduced to the main aspects of the molecular dynamics simulation methodlogy, we will make use of existing software packages to probe material structure. This is common pratice, as writing a full software package is very complicated, so it is best to use a *well-troden*, and optimised, code.This week you will make use of the pylj [1] code, which simulates argon atoms in a 2-dimensional environment. Next week, you will be introduced to DLPOLY [2], a more general purpose molecular dynamics package. Before we introduce how to use the pylj software, it is necessary to consider the problem to which it will be applied,> The aim of the rest of this session is to determine and plot the phase diagram for two-dimension argonThe determination of a material's phase on the atomistic scale is a non-trivial task. In this exercise, we will use two main tools for phase identification:- Mean squared displacement (MSD)- Radial distribution function (RDF) Mean squared displacementYou will find out more about the MSD next week. However, for now we only need to be aware the MSD is a measure of how far the particles have moved during the simulation. The result is that it is possible to identify different phase of matter from the MSD plot, see *Figure 4* below. Figure 4. The anticipated MSD form for each state of matter. It should be expected that in a simulation of a given time, gaseous particles will be able to travel further than liquids, which can travel further then solids. Radial distribution functionA radial distribution function is the probability that another atom would be found at a given distance from each atom, and is a very useful measure of order in the system, of-course more disorder means more gas-like. Shown in *Figure 5*, are the RDFs for three materials; consider the shape of each one and the amount of **order** represented, in the cell below **comment on** and **explain** the expected state (solid, liquid or gas) for each. Figure 5. The radial distribution functions for 3 states of matter .
###Code
Comment on and explain the expected state from each of a, b, and c.
###Output
_____no_output_____
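###Markdown
Although pylj computes the MSD for you, it can help to see how such a quantity is obtained. The following is a minimal sketch, assuming `positions` is a hypothetical array of shape `(n_steps, n_particles)` of unwrapped one-dimensional coordinates (i.e. not folded back into the cell by the periodic boundaries):
###Code
import numpy as np

def mean_squared_displacement(positions):
    # Displacement of every particle from its starting position at each step.
    displacements = positions - positions[0]
    # Average the squared displacement over all particles at each step.
    return np.mean(displacements ** 2, axis=1)
###Output
_____no_output_____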
###Markdown
Software[pylj](http://pythoninchemistry.org/pylj) (python Lennard-Jones) [1] is an open-source Python package for producing molecular dynamics simulations of argon particles (interacting through the Lennard-Jones potential) in 2-dimensions. In the cell below, a molecular dynamics algorithm is **defined** using the pylj library. **Run this cell as is.**
###Code
from pylj import md, sample
def md_simulation(temperature, number_of_particles, number_of_steps, ff, parameters):
# Creates the visualisation environment
%matplotlib notebook
# Initialise the system
system = md.initialise(number_of_particles, temperature, 20, 'square', forcefield=ff, constants=parameters)
# This sets the sampling class
sample_system = sample.Phase(system)
# Start at time 0
system.time = 0
# Begin the molecular dynamics loop
for i in range(0, number_of_steps):
# Run the equations of motion integrator algorithm, this
# includes the force calculation
system.integrate(md.velocity_verlet)
# Sample the thermodynamic and structural parameters of the system
system.md_sample()
# Allow the system to interact with a heat bath
system.heat_bath(temperature)
# Iterate the time
system.time += system.timestep_length
system.step += 1
# At a given frequency sample the positions and plot the RDF
if system.step % 25 == 0:
sample_system.update(system)
sample_system.average()
return system, sample_system
###Output
_____no_output_____
###Markdown
Having defined the molecular dynamics function, we can run it below. The variables that this function takes are as follows:- temperature (K)- number of particles- number of simulation steps- forcefieldRunning this function will result in four panels being presented. The top left shows the particles in the simulation, the top right gives the total energy for the system, the bottom left is the mean squared displacement and bottom right is the radial distribution function.
###Code
temp = □ □ □
n_particles = □ □ □
n_steps = □ □ □
sim, samp_sim = md_simulation(temperature=temp,
number_of_particles=n_particles,
number_of_steps=n_steps,
ff=buckingham,
parameters=[1.69e-15, 3.66e10, 1.02e-77])
###Output
_____no_output_____
###Markdown
What happens if we change the forcefield?
###Code
#temp = □ □ □
#n_particles = □ □ □
#n_steps = □ □ □
sim, samp_sim = md_simulation(temperature=temp,
number_of_particles=n_particles,
number_of_steps=n_steps,
ff=lennard_jones,
parameters=[1.363e-134, 9.273e-78])
###Output
_____no_output_____
###Markdown
Plotting a phase diagram A phase diagram should be familiar from first year; it is a graphical representation of the physical state of a substance under different conditions of state such as temperature, pressure and density. In this exercise the two variables will be temperature and density (by controlling the number of particles). Using the information that pylj returns about the MSD and the RDF, determine the phase for a range of values of temperature (T) and number of particles (N). If the system is a solid, place the pair of T and N in the `solid` array, and similarly if the system is a liquid or a gas. Be aware that if the system is not yet at **equilibrium** (e.g. the energy has not minimised) then the data may not be reliable, so make sure you run your simulations for long enough!Record the data in the arrays below and once you have enough datapoints, plot the data. A datapoint for each phase has been provided.
###Code
solid_N = np.array([10])
solid_T = np.array([0])
liquid_N = np.array([15])
liquid_T = np.array([50])
gas_N = np.array([10])
gas_T = np.array([50])
###Output
_____no_output_____
###Markdown
**Make sure you close the pylj app before running the next cell. Off button in the top corner of the app**
###Code
plt.plot(solid_T, solid_N, 'o', c='#0173B2')
plt.plot(liquid_T, liquid_N, 'o', c='#DE8F05')
plt.plot(gas_T, gas_N, 'o', c='#029E73')
plt.xlabel('temperature/K')
plt.ylabel('number')
#plt.text(x, y, 'solid', size=30,
# verticalalignment='center',
# horizontalalignment='center')
#plt.text(x, y, 'liquid', size=30,
# verticalalignment='center',
# horizontalalignment='center')
#plt.text(x, y, 'gas', size=30,
# verticalalignment='center',
# horizontalalignment='center')
plt.show()
###Output
_____no_output_____
###Markdown
Advanced Chemistry Practical: Computational ChemistryWelcome to the advanced practical focusing on [computational chemistry](./README.md). Over the next four weeks you will: - gain an understanding of, and familiarity with, molecular dynamics (MD) simulations.- learn how MD simulations are performed in practice.- use MD simulations to study solid-state materials, such as batteries and solar cells. - rationalise your results in terms of physical chemistry phenomena you are familiar with. For more details about the learning objectives of this practical, please see the [lesson plan](https://github.com/symmy596/Advanced_Practical_Chemistry_Teaching/blob/master/LESSONPLAN.md) online. This practical will also make use of some of the **Python** and **Jupyter** skills that you were introduced to in the first and second year computational laboratory; if you feel that these are not fresh in your mind it might be worth looking back at the exercises from previous years, or investigating the links provided in this document.This first week we will focus on an introduction to **classical molecular dynamics simulation**; if you took the "Introduction to Computational Chemistry" (CH20238) module last year this **will** involve some revision. However, it is **important** that you work through the whole introduction as it should form the basis for the methodology section of your report. That said, as with all work, this notebook should **not** be your exclusive source of background information about molecular dynamics. Below is a non-exhaustive list of books in the library that can be used for more information. - Harvey, J. (2017). *Computational Chemistry*. Oxford, UK. Oxford University Press - Bath Library Shelf Reference: 542.85 HAR- Grant, G. H. & Richards, W. G. (1995). *Computational Chemistry*. Oxford, UK. Oxford University Press - Bath Library Shelf Reference: 542.85 GRA- Leach, A. R. (1996). *Molecular modelling: principles and applications*. Harlow, UK. Longman - Bath Library Shelf Reference: 541.6 LEA- Frenkel, D. & Smit, B. (2002). *Understanding molecular simulation: from algorithms to applications*. San Diego, USA. Academic Press - Bath Library Shelf Reference: 541.572.6 FRE - Note: This book is a personal favourite, great if you love maths and algorithms but is particularly **hardcore**.- Allen, M. P. & Tildesley, D. J. (1987). *Computer simulation of liquids*. Oxford, UK. Clarendon Press - Bath Library Shelf Reference: 532.9 ALL - Note : This is also pretty **hardcore**. Introduction to classical molecular dynamics**Classical molecular dynamics** is one of the most commonly applied techniques in computational chemistry, in particular for the study of large systems such as proteins, polymers, battery materials, and solar cells. In classical molecular dynamics, as you would expect, we use **classical methods** to study the **dynamics** of **molecules**. Classical methodsThe term **classical methods** is used to distinguish from quantum mechanical methods, such as the Hartree-Fock method or Møller–Plesset perturbation theory. In these classical methods, the quantum mechanical **weirdness** is not present, which has a significant impact on the efficiency of the calculation. The need for quantum mechanics is removed by integrating over all of the electronic orbitals and motions and describing the atom as a fixed electron distribution. This **simplification** has some drawbacks: classical methods are only suitable for the study of molecular ground states, limiting the ability to study reactions.
Furthermore, it is necessary to determine some way to **describe** this electron distribution. In practice, the model used to describe the electron distribution is usually **isotropic**, e.g. a sphere, with the electron-sharing bonds between the atoms described as springs. Figure 1. A pictorial example of the models used in a classical method. The aim of a lot of chemistry is to understand the **energy** of the given system, therefore we must parameterise the **models** of our system in terms of the energy. For a molecular system, the energy is defined in terms of bonded and non-bonded interactions, $$ E_{\text{tot}} = E_{\text{bond}} + E_{\text{angle}} + E_{\text{dihedral}} + E_{\text{non-bond}} $$where $E_{\text{bond}}$, $E_{\text{angle}}$, and $E_{\text{dihedral}}$ are the energies associated with all of the bonded interactions, and $E_{\text{non-bond}}$ is the energy associated with all of the non-bonded interactions. In this project, we will be focusing on **atomic ionic solids**, where there are no covalent bonds between the atoms, therefore this introduction will focus on the **non-bonded interactions**. The parameterisation of the model involves the use of **mathematical functions** to describe some **physical relationship**. For example, one of the two common non-bonded interactions is the electrostatic interaction between two charged particles; to model this interaction we use **Coulomb's law**, which was first defined in 1785, $$ E_{\text{Coulomb}}(r_{ij}) = \frac{1}{4\pi\epsilon_0}\frac{q_iq_je^2}{r_{ij}}, $$ where $q_i$ and $q_j$ are the charges on the particles, $e$ is the charge of the electron, $\epsilon_0$ is the dielectric permittivity of vacuum, and $r_{ij}$ is the distance between the two particles. In the cell below, **define** a function which models the electrostatic interaction using Coulomb's law, before plotting it (if you need a quick reminder of function definition, check out [this blog](http://pythoninchemistry.org/functions)).
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import e, epsilon_0
from math import pi
def Coulomb(qi, qj, dr):
return □ □ □
r = np.linspace(3e-10, 8e-10, 100)
plt.plot(r, Coulomb(1, -1, r))
plt.xlabel(r'$r_{ij}$/m')
plt.ylabel(r'$E$/J')
plt.show()
# this cell is present to test if your code
# DO NOT edit this cell
np.testing.assert_almost_equal(Coulomb(1, -1, 5e-10), -4.614e-19)
np.testing.assert_almost_equal(Coulomb(1, -2, 2e-10), -2.307e-18)
np.testing.assert_almost_equal(Coulomb(1, 1, 10e-10), 2.307e-19)
###Output
_____no_output_____
###Markdown
Note that if $q_i$ and $q_j$ have different signs (i.e. are oppositely charged) then the value of $E_{\text{Coulomb}}$ will **always** be less than zero (i.e. attractive). This mathematical function has clear roots in the physics of the system. However, the other component of the non-bonded interaction is less well defined. This is the **van der Waals** interaction, which encompasses both the attractive London dispersion effects and the repulsive Pauli exclusion principle. There are a variety of ways that the van der Waals interaction can be modelled; this week we will investigate a few of these. One commonly applied model is the **Lennard-Jones** potential model, which considers the attractive London dispersion effects as follows, $$ E_{\text{attractive}}(r_{ij}) = \frac{-B}{r_{ij}^6}, $$where $B$ is some constant for the interaction, and $r_{ij}$ is the distance between the two atoms. The Pauli exclusion repulsion acts only at very short distances, and is therefore modelled with the relation, $$ E_{\text{repulsive}}(r_{ij}) = \frac{A}{r_{ij}^{12}}, $$again, $A$ is some interaction-specific constant. The total Lennard-Jones interaction is then the linear combination of these two terms, $$ E_{LJ}(r_{ij}) = E_{\text{repulsive}}(r_{ij}) + E_{\text{attractive}}(r_{ij}) = \frac{A}{r_{ij}^{12}} - \frac{B}{r_{ij}^6}. $$As was performed for the electrostatic interaction, in the cell below **define** each of the attractive, repulsive and total van der Waals interaction energies as defined by the Lennard-Jones potential and plot **all three** on a single graph, where $A = 1.363\times10^{-134}\text{ Jm}^{-12}$ and $B = 9.273\times10^{-78}\text{ Jm}^{-6}$.
###Code
%matplotlib inline
def attractive(dr, b):
return □ □ □
def repulsive(dr, a):
return □ □ □
def lj(dr, constants):
return □ □ □
r = np.linspace(3e-10, 8e-10, 100)
plt.plot(r, attractive(r, 9.273e-78), label='Attractive')
plt.plot(r, repulsive(r, 1.363e-134), label='Repulsive')
plt.plot(r, lj(r, [1.363e-134, 9.273e-78]), label='Lennard-Jones')
plt.xlabel(r'$r_{ij}$/m')
plt.ylabel(r'$E$/J')
plt.legend()
plt.show()
np.testing.assert_almost_equal(attractive(5e-10, 9.273e-78) * 1e18, -5.93472e-4)
np.testing.assert_almost_equal(repulsive(5e-10, 1.363e-134) * 1e18, 5.5828e-5)
np.testing.assert_almost_equal(lj(5e-10, [1.363e-134, 9.273e-78]) * 1e18, -5.3764e-4)
###Output
_____no_output_____
###Markdown
The Lennard-Jones potential is by no means the only way to model the van der Waals interaction. Another common potential model is the **Buckingham** potential; like the Lennard-Jones potential, the Buckingham potential models the attractive term with a power-6 dependence. However, instead of the power-12 repulsion, this is modelled with an exponential function. The total Buckingham potential is as follows, $$ E_{\text{Buckingham}}(r_{ij}) = A\exp(-Br_{ij}) - \frac{C}{r_{ij}^6}, $$where $A$, $B$, and $C$ are interaction-specific. N.B. these are not the same $A$ and $B$ as in the Lennard-Jones potential. **In the cell below**, define a Buckingham potential and plot it, where $A = 1.69\times10^{-15}\text{ J}$, $B = 3.66\times10^{10}\text{ m}^{-1}$, and $C = 1.02\times10^{-77}\text{ Jm}^{-6}$.
###Code
%matplotlib inline
def buckingham(dr, constants):
return □ □ □
r = np.linspace(3e-10, 10e-10, 100)
plt.plot(r, buckingham(r, [1.69e-15, 3.66e10, 1.02e-77]), label='Buckingham')
plt.xlabel(r'$r_{ij}$/m')
plt.ylabel(r'$E$/J')
plt.legend()
plt.show()
np.testing.assert_almost_equal(buckingham(5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e18, -6.3373e-4)
np.testing.assert_almost_equal(buckingham(0.5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e15, -.381701)
###Output
_____no_output_____
###Markdown
When the Buckingham potential is plotted from $3~Å$ to $10~Å$, the potential looks similar to the Lennard-Jones potential. There is a potential well at the ideal interatomic distance, with a shallow rise as the particles move apart and a very steep wall as they move closer together. Now **investigate** the Buckingham potential over the range $0.6~Å$ to $8~Å$ and comment on the interaction when $r_{ij} < 0.75~Å$. Does this appear physically realistic? **Comment** on problems that may occur when the Buckingham potential is being used at very high temperature.
###Code
Comment on the problems that may occur when the Buckingham potential is being used at very high temperature.
###Output
_____no_output_____
###Markdown
More simplificationsThe classical methods that involve modelling atoms as a series of particles, with analytical mathematical functions to describe their energy, are currently regularly used to model the properties of very large systems, like biological macromolecules. While these calculations are a lot faster using classical methods than quantum mechanics, for a system with $10 000$ atoms there are still nearly $50 000 000$ interactions to consider. Therefore, so that our calculations run on a feasible timescale, we make use of some additional simplifications. Cut-offsIf we plot the Lennard-Jones potential all the way out to $15 Å$, we get something that looks like *Figure 2*. Figure 2. The Lennard-Jones potential (blue) and a line of y=0 (orange). It is clear from *Figure 2*, and from our understanding of the particle interaction, that as the particles move away from each other their interaction energy tends towards $0$. The concept of a cut-off suggests that if two particles are found to be very far apart ($\sim15~Å$), there is no need to calculate the energy between them and it can just be taken as $0$, $$ E(r_{ij})=\left\{ \begin{array}{@{}ll@{}} \dfrac{A}{r_{ij}^{12}} - \dfrac{B}{r_{ij}^6}, & \text{if}\ r_{ij}<15\text{ Å} \\ 0, & \text{otherwise.} \end{array}\right.$$This saves significant computation time, as powers (e.g. power-12 and power-6 in the Lennard-Jones potential) are very computationally expensive to calculate. In the cell below, **modify** your Lennard-Jones and Buckingham potential functions to have a cut-off of $15 Å$ (for this you will need to recall if and else statements from the previous Python labs).
###Code
def lj(dr, constants):
if □ □ □:
return □ □ □
else:
return □ □ □
def buckingham(dr, constants):
if □ □ □:
return □ □ □
else:
return □ □ □
np.testing.assert_almost_equal(lj(5e-10, [1.363e-134, 9.273e-78]) * 1e18, -5.3764e-4)
np.testing.assert_almost_equal(buckingham(5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e18, -6.3373e-4)
np.testing.assert_almost_equal(buckingham(0.5e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e15, -.381701)
np.testing.assert_equal(lj(15e-10, [1.363e-134, 9.273e-78]) * 1e18, 0)
np.testing.assert_equal(buckingham(15e-10, [1.69e-15, 3.66e10, 1.02e-77]) * 1e18, 0)
###Output
_____no_output_____
###Markdown
Periodic boundary conditionsEven with cut-offs, it is not straightforward to design a large enough simulation cell to represent the bulk behaviour of liquids or solids in a physically relevant way; for example, what happens when the atoms interact with the walls of the cell? This is dealt with using **periodic boundary conditions**, which state that the cell being simulated is part of an infinite number of identical cells arranged in a lattice (*Figure 3*). Figure 3. A two-dimensional example of a periodic cell. When a particle reaches the cell wall, it moves into the adjacent cell, and since all the cells are identical, it appears on the other side. **Run** the cell below to see a periodic boundary condition in action for a single cell; a short illustrative sketch of the wrapping step is given after that cell.
###Code
%matplotlib notebook
# NOTE: assumes the `examples` helper module (which provides pbc()) can be
# imported from pylj; adjust this import if it lives elsewhere in the course code
from pylj import examples
examples.pbc()
###Output
_____no_output_____
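###Markdown
A minimal sketch (not the pylj implementation) of the wrapping step behind periodic boundary conditions: any coordinate that leaves the box is mapped back into it. The box length of 20 is an arbitrary illustrative value.
###Code
import numpy as np

def wrap_position(x, box_length):
    # np.mod maps any coordinate back into the range [0, box_length)
    return np.mod(x, box_length)

# one coordinate just outside each edge of the box, and one inside it
positions = np.array([-1.5, 3.0, 21.2])
print(wrap_position(positions, 20.0))
###Output
_____no_output_____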
###Markdown
Molecular dynamicsHaving introduced the classical methods, it is now necessary to discuss how the **dynamics of molecules** are obtained. The particles that we are studying are classical in nature, therefore it is possible to apply classical mechanics to rationalise their dynamical behaviour. For this the starting point is Newton's second law of motion, $$ \mathbf{f} = m\mathbf{a}, $$ where, $\mathbf{f}$ is the force on an atom of mass, $m$, and acceleration, $\mathbf{a}$. The force between two particles, $i$ and $j$, can be found from the interaction energy, $$ f_{ij} = \frac{-\text{d}E(r_{ij})}{\text{d}r_{ij}}. $$ Which is to say that the force is the negative of the first derivative of the energy with respect to the distance between them. In the cell below, a new function has been defined for the Buckingham energy **or** force.
###Code
def buckingham(dr, constants, force):
if force:
return constants[0] * constants[1] * np.exp(-constants[1] * dr) - 6 * constants[2] / np.power(dr, 7)
else:
return constants[0] * np.exp(-constants[1] * dr) - constants[2] / np.power(dr, 6)
###Output
_____no_output_____
###Markdown
Use the above function as a template to **define** a similar function to determine the energy **or** force from the Lennard-Jones potential.
###Code
def lennard_jones(dr, constants, force):
□ □ □
np.testing.assert_almost_equal(lennard_jones(5e-10, [1.363e-134, 9.273e-78], False) * 1e18, -5.3764e-4)
np.testing.assert_almost_equal(lennard_jones(5e-10, [1.363e-134, 9.273e-78], True) * 1e10, -5.78178e-2)
np.testing.assert_almost_equal(lennard_jones([5e-10, 5e-10], [1.363e-134, 9.273e-78], True) * 1e10,
[-5.78178e-2, -5.78178e-2])
###Output
_____no_output_____
###Markdown
You may have noted that the force in eqn. 8 is a vector quantity, whereas that in eqn. 9 is not. Therefore it is necessary to obtain the force vector in each dimension, by multiplication by the unit vector in that dimension, $$ \mathbf{f}_x = f \mathbf{\hat{r}}_x \text{, where } \mathbf{\hat{r}}_x = \frac{r_x}{|\mathbf{r}|}. $$This must be carried out to determine the force on the particle in each dimension that is being considered. However, in this example we will only consider the $x$-dimension for now.This means for a system with two argon particles, at positions of $x_0 = 5~Å$ and $x_1 = 10~Å$, we are able to determine the interaction energy, and the force and acceleration on each particle, as **shown** in the cell below.
###Code
mass_of_argon = 39.948 # amu
mass_of_argon_kg = mass_of_argon * 1.6605e-27
def get_acceleration(positions):
rx = np.zeros_like(positions)
k = 0
for i in range(0, len(positions)):
for j in range(0, len(positions)):
if i != j:
rx[k] = positions[i] - positions[j]
k += 1
r_mag = np.sqrt(rx * rx)
force = lennard_jones(r_mag, [1.363e-134, 9.273e-78], True)
force_x = force * rx / r_mag
acceleration_x = force_x / mass_of_argon_kg
return acceleration_x
positions = np.array([5e-10, 10e-10])
acc = get_acceleration(positions)
print('acceleration on particle 0 = {:.2e} m/s2'.format(acc[0]))
print('acceleration on particle 1 = {:.2e} m/s2'.format(acc[1]))
###Output
_____no_output_____
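###Markdown
As a quick illustrative check of the cell above: Newton's third law means the forces on the two particles are equal and opposite, so for two particles of equal mass the accelerations should cancel.
###Code
# the two accelerations should sum to (essentially) zero
print(acc[0] + acc[1])
###Output
_____no_output_____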
###Markdown
IntegrationThis means that we now know the position of the particle and the acceleration that it has, so it is only necessary to then find the velocity of the particle and we can apply the basic equations of motion to our particles,$$ \mathbf{x}_i(t + \Delta t) = \mathbf{x}_i(t) + \mathbf{v}_i(t)\Delta t + \dfrac{1}{2} \mathbf{a}_i(t)\Delta t^2, $$$$ \mathbf{v}_i(t + \Delta t) = \mathbf{v}_i(t) + \dfrac{1}{2}\big[\mathbf{a}_i(t) + \mathbf{a}_i(t+\Delta t)\big]\Delta t, $$ where $\Delta t$ is the timestep (how far time is incremented), $\mathbf{x}_i$ is the particle position, $\mathbf{v}_i$ is the velocity, and $\mathbf{a}_i$ the acceleration. This pair of equations is known as the Velocity-Verlet algorithm, which can be written as:1. find the position of the particle after some timestep using eqn. 11, 2. calculate the force (and acceleration) on the particle,3. determine a new velocity for the particle, based on the average acceleration at the current and new positions, using eqn. 12, 4. overwrite the old acceleration values with the new ones, $\mathbf{a}_{i}(t) = \mathbf{a}_{i}(t + \Delta t)$,5. go to 1.This process can be continued for as long as is required to get good statistics for the quantity you are interested in (or for as long as you can wait for/afford to run the computer for). This process is called the integration step, and the Velocity-Verlet algorithm is the **integrator**. The Velocity-Verlet integration is numerical in nature, meaning that the accuracy of this method is dependent on the size of the timestep, $\Delta t$. Small values of $\Delta t$ are capable of keeping the resultant uncertainty of the position and velocity small; these values are usually on the scale of $10^{-15}\text{ s}$ (femtoseconds). This means that to even measure a nanosecond of "real-time" molecular dynamics, 1 000 000 (one million) iterations of the above algorithm must be performed. In the cell below, **define** a set of functions for eqns 11 and 12.
###Code
def update_pos(x, v, a, dt):
return □ □ □
def update_velo(v, a, a1, dt):
return □ □ □
a = np.array([1, 2])
np.testing.assert_equal(update_pos(1, 1, 1, 1), 2.5)
np.testing.assert_equal(update_pos(a, a, a, 1), [2.5, 5])
np.testing.assert_equal(update_velo(1, 1, 1, 1), 2.)
np.testing.assert_equal(update_velo(a, a, a, 1), [2., 4])
###Output
_____no_output_____
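###Markdown
Before building the argon simulation below, here is a minimal, self-contained sketch of how the Velocity-Verlet loop fits together, shown for a different and simpler system: a single particle on a spring (force $-kx$). The spring constant, mass and timestep are arbitrary illustrative values, and this is not a replacement for the functions you are asked to define in this notebook.
###Code
# illustrative constants for a single particle on a spring (F = -k x)
k_spring = 1.0   # arbitrary spring constant
mass = 1.0       # arbitrary mass
dt = 0.01        # timestep

x, v = 1.0, 0.0                           # initial position and velocity
a = -k_spring * x / mass                  # initial acceleration
for step in range(1000):
    x = x + v * dt + 0.5 * a * dt ** 2    # eqn. 11: update the position
    a_new = -k_spring * x / mass          # acceleration at the new position
    v = v + 0.5 * (a + a_new) * dt        # eqn. 12: update the velocity
    a = a_new                             # overwrite the old acceleration
print(x, v)
###Output
_____no_output_____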
###Markdown
InitialisationThere are only two tools left that you need to run a molecular dynamics simulation, and both are associated with the original configuration of the system: the original particle positions and the original particle velocities. The particle positions are usually taken from some library of structures (e.g. the protein data bank if you are simulating proteins) or based on some knowledge of the system (e.g. CaF$_2$ is known to have a face-centred cubic structure). The particle velocities are a bit more nuanced, as the total kinetic energy, $E_K$, of the system (and therefore the particle velocities) is dependent on the temperature of the simulation, $T$. $$ E_K = \sum_{i=1}^N \frac{m_i|v_i|^2}{2} = \frac{3}{2}Nk_BT, $$where $m_i$ is the mass of particle $i$, $N$ is the number of particles and $k_B$ is the Boltzmann constant. Based on this knowledge, the most common way to obtain initial velocities is to assign random values and then scale them based on the temperature of the system. For example, in the software you will use later today the initial velocities are determined as follows, $$ v_i = R_i \sqrt{\dfrac{k_BT}{m_i}}, $$where $R_i$ is some random number between $-0.5$ and $0.5$, $k_B$ is the Boltzmann constant, $T$ is the temperature, and $m_i$ is the mass of the particle.In the cell below, **define** a function to generate an initial velocity for an arbitrary number of particles.
###Code
def init_velocity(temperature, part_numb):
v = □ □ □
return v * □ □ □
###Output
_____no_output_____
###Markdown
Build an MD simulationWe will now try to use what we have done so far to build a 1-dimensional molecular dynamics simulation.
###Code
dt = 1e-14 # (seconds)
number_of_steps = # define a number of steps
distances = []
# initialisation
x = np.array([5e-10, 10e-10]) # (meters) these are the starting positions of the particles
v = # initialise the velocities
a = # calculate the initial accelerations
for i in range(0, number_of_steps):
    # implement the Velocity-Verlet algorithm here
# the line below will add the distance between the
# two particles to the distance array for plotting
distances.append(np.abs(x[1] - x[0]))
###Output
_____no_output_____
###Markdown
**Ensure** that a demonstrator has checked the MD simulation before you continue!
###Code
%matplotlib inline
plt.plot(distances)
plt.xlabel('Steps')
plt.ylabel('Distances/m')
plt.show()
###Output
_____no_output_____
###Markdown
Run your 1-D molecular dynamics simulation a few times at each of a range of different initial temperatures. In the cell below, **comment** on the effect of temperature on the distances that are sampled in the simulation.
###Code
Comment on the effect of the different temperature on the interatomic distances sampled in the simulation
###Output
_____no_output_____
###Markdown
Phase diagramHaving been introduced to the main aspects of the molecular dynamics simulation methodology, we will make use of existing software packages to probe material structure. This is common practice, as writing a full software package is very complicated, so it is best to use a *well-trodden*, and optimised, code.This week you will make use of the pylj [1] code, which simulates argon atoms in a 2-dimensional environment. Next week, you will be introduced to DLPOLY [2], a more general-purpose molecular dynamics package. Before we introduce how to use the pylj software, it is necessary to consider the problem to which it will be applied,> The aim of the rest of this session is to determine and plot the phase diagram for two-dimensional argonThe determination of a material's phase on the atomistic scale is a non-trivial task. In this exercise, we will use two main tools for phase identification:- Mean squared displacement (MSD)- Radial distribution function (RDF) Mean squared displacementYou will find out more about the MSD next week. However, for now we only need to be aware that the MSD is a measure of how far the particles have moved during the simulation. The result is that it is possible to identify the different phases of matter from an MSD plot, see *Figure 4* below. Figure 4. The anticipated MSD form for each state of matter. It should be expected that in a simulation of a given length, gaseous particles will be able to travel further than liquids, which can travel further than solids. Radial distribution functionA radial distribution function is the probability that another atom would be found at a given distance from each atom, and is a very useful measure of order in the system; of course, more disorder means more gas-like behaviour. Shown in *Figure 5* are the RDFs for three materials; consider the shape of each one and the amount of **order** represented, and in the cell below **comment on** and **explain** the expected state (solid, liquid or gas) for each. Figure 5. The radial distribution functions for 3 states of matter.
###Code
Comment on and explain the expected state from each of a, b, and c.
###Output
_____no_output_____
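###Markdown
As a purely illustrative aside, the sketch below shows the idea behind a mean squared displacement: average the squared displacement from the starting positions over all particles at each step. The positions array is made-up random-walk data, not output from a simulation.
###Code
import numpy as np

# made-up random-walk "trajectory" with shape (n_steps, n_particles)
rng = np.random.default_rng(1)
positions = np.cumsum(rng.normal(size=(500, 20)), axis=0)

displacement = positions - positions[0]      # displacement from the first step
msd = np.mean(displacement ** 2, axis=1)     # average over particles at each step
print(msd[:5])
###Output
_____no_output_____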
###Markdown
Software[pylj](http://pythoninchemistry.org/pylj) (python Lennard-Jones) [1] is an open-source Python package for producing molecular dynamics simulations of argon particles (interacting through the Lennard-Jones potential) in 2-dimensions. In the cell below, a molecular dynamics algorithm is **defined** using the pylj library. **Run this cell as is.**
###Code
from pylj import md, sample
def md_simulation(temperature, number_of_particles, number_of_steps, ff):
# Creates the visualisation environment
%matplotlib notebook
# Initialise the system
system = md.initialise(number_of_particles, temperature, 20, 'square', forcefield=ff)
# This sets the sampling class
sample_system = sample.Phase(system)
# Start at time 0
system.time = 0
# Begin the molecular dynamics loop
for i in range(0, number_of_steps):
# Run the equations of motion integrator algorithm, this
# includes the force calculation
system.integrate(md.velocity_verlet)
# Sample the thermodynamic and structural parameters of the system
system.md_sample()
# Allow the system to interact with a heat bath
system.heat_bath(temperature)
# Iterate the time
system.time += system.timestep_length
system.step += 1
# At a given frequency sample the positions and plot the RDF
if system.step % 25 == 0:
sample_system.update(system)
sample_system.average()
return system, sample_system
###Output
_____no_output_____
###Markdown
Having defined the molecular dynamics function, we can run it below. The variables that this function takes are as follows:- temperature (K)- number of particles- number of simulation steps- forcefieldRunning this function will result in four panels being presented. The top left shows the particles in the simulation, the top right gives the total energy for the system, the bottom left is the mean squared displacement and bottom right is the radial distribution function.
###Code
sim, samp_sim = md_simulation(100, 35, 5000, lennard_jones)
###Output
_____no_output_____
###Markdown
Plotting a phase diagram A phase diagram should be familiar from first year; it is a graphical representation of the physical state of a substance under different conditions of state such as temperature, pressure and density. In this exercise the two variables will be temperature and density (by controlling the number of particles). Using the information that pylj returns about the MSD and the RDF, determine the phase for a range of values of temperature (T) and number of particles (N). If the system is a solid, place the pair of T and N in the `solid` array, and similarly if the system is a liquid or a gas. Be aware that if the system is not yet at **equilibrium** (e.g. the energy has not minimised) then the data may not be reliable, so make sure you run your simulations for long enough!
###Code
solid_N = np.array([□ □ □])
solid_T = np.array([□ □ □])
liquid_N = np.array([□ □ □])
liquid_T = np.array([□ □ □])
gas_N = np.array([□ □ □])
gas_T = np.array([□ □ □])
fig, ax = plt.subplots(figsize=(5, 5))
plt.plot(solid_T, solid_N, 'o', c='#0173B2')
plt.plot(liquid_T, liquid_N, 'o', c='#DE8F05')
plt.plot(gas_T, gas_N, 'o', c='#029E73')
plt.xlabel('temperature/K')
plt.ylabel('number')
plt.show()
###Output
_____no_output_____
###Markdown
A Quick Refresher on Using Jupyter NotebooksJupyter Notebooks allow you to run Python in an interactive way.Each of the boxes below is called a "Cell".To run the code in each cell:1. **Click** anywhere in the cell2. The left-hand border should turn green3. **Hit** "Shift" and "Enter" at the same time4. In [ ]: in the left-hand margin should display In [*]: as the code runs5. In [ ]: in the left-hand margin should display In [n]: where n is the order of execution when the code has completed Alternatively:1. **Click** anywhere in the cell2. The left-hand border should turn green3. **Select** "Cell" then "Run Cells" from the top menu4. In [ ]: in the left-hand margin should display In [*]: as the code runs5. In [ ]: in the left-hand margin should display In [n]: where n is the order of execution when the code has completed ** NOTE: The order of execution is important - so pay attention to In [n]: **To clear the output of a given cell:1. **Click** anywhere in the cell2. The left-hand border should turn green3. **Select** "Cell" then "Current Outputs" then "Clear" from the top menuTo clear the output of all cells:1. **Click** anywhere in the cell2. The left-hand border should turn green3. **Select** "Cell" then "All Output" then "Clear" from the top menuTo save your progress:1. **Click** "File" then "Save and Checkpoint" from the top menuTo completely reset the Kernel:1. **Click** "Kernel" then "Restart & Clear Output" from the top menu
###Code
import subprocess
import os, sys
# Test polypy install
import polypy
# Test scipy install
import scipy
# Test pylj install
import pylj
# sets the current working directory (cwd) to the Week_1 directory
cwd = os.getcwd()
print(cwd)
###Output
_____no_output_____
###Markdown
Aim and Objectives The **Aim** of this week's exercise is to introduce molecular dynamics for atomistic simulation.The **first objective** is to make sure that the programmes we need are correctly installed.The **second objective** is to carry out molecular dynamics (MD) simulations of generated structures of simple materials using a package called DL_POLY.By the end of this task you will be able to:1. **Perform** molecular dynamics simulations at different temperatures2. **Manipulate** the input files3. **Adjust** the ensemble for the simulation4. **Examine** the volume and energy of different simulations5. **Apply** VMD to visualize the simulation cell and evaluate radial distribution functions**PLEASE NOTE** 1. **It is essential that the codes that were downloaded from [here](https://people.bath.ac.uk/chsscp/teach/adv.bho/progs.zip) are in the Codes/ folder in the parent directory, or the following cells will crash**2. Most of the instructions should be performed within this Notebook. However, some have to be executed on your own machine. 1. Testing Before we can run some MD simulations, we first need to check whether the programs we are using (**METADISE** and **DL_POLY**) are set up correctly:1. **Run** the cells below2. **Check** the output of your Anaconda Prompt is free of errors3. **Check** that files have been produced in the Metadise_Test/ and DLPOLY_Test/ directories to make sure that everything is set up correctly. METADISE The METADISE code uses simple interatomic potentials to calculate the forces between the atoms and energy minimization to find the most stable structures.METADISE has three core components that we will be using throughout the course:1. **The structural information**, which can be in a variety of formats. We will use it to generate a simulation cell of a crystal structure from its cell dimensions, space group and atomic coordinates2. **The potential interaction between ions**, which includes parameters defining the charge, size and hardness of the ions3. **Control parameters**, which in this exercise will include information on growing the cell and generating DL_POLY input files so that MD calculations can be run on the crystalline system (with DL_POLY).Further information about more METADISE functionality can be found [here](https://people.bath.ac.uk/chsscp/teach/metadise.bho/)
###Code
# Test METADISE
os.chdir(cwd)
os.chdir("Metadise_Test/")
subprocess.call('../../Codes/metadise.exe')
os.chdir(cwd)
###Output
_____no_output_____
###Markdown
The METADISE/ directory should contain the following input files:**input.txt**Specifies the structural information including the dimensions of the simulation cell and then positions of all the atoms (in Å ) as well as the instructions to METADISE.as well as the following output files: **summ_o000n.OUT** A summary of the output file.**job_o000n.cml** Structure file in XML format.**fin_o000n.res** A restart file.**field_o000n.DLP** DL_POLY FIELD file.**config_o000n.DLP** Structure file in DL_POLY CONFIG file format.**control_o000n.DLP** DL_POLY CONTROL file.**code_o000n.OUT** The main output file. This contains a summary of the input information and details of the METADISE operation.**af_co000n.MSI** Structure file in MSI format.**af_co000n.XYZ** Structure file in XYZ format.**af_co000n.CIF** Structure file in CIF format.**af_co000n.CAR** Structure file in CAR format. DL_POLY DL_POLY is a general purpose parallel molecular dynamics package that was written by Daresbury Laboratory, primarily to support CCP5.The code is available free of charge and was written to be sufficiently flexible that it can be applied to many different condensed matter materials.
###Code
# Test DL_POLY
# This may take several minutes
os.chdir(cwd)
os.chdir("DLPOLY_Test/")
subprocess.call("../../Codes/dlpoly_classic")
os.chdir(cwd)
###Output
_____no_output_____
###Markdown
The DLPOLY_Test/ directory should contain the following input files:**CONTROL **Specifies the conditions for a run of the program e.g. steps, timestep, temperature, pressure, required ensemble etc. **FIELD** Specifies the force field for the simulation. It is also important to appreciate that it defines the order in which atoms will appear in the configuration. For example, if there were 25 W and 75 O atoms, this file will give the order of atoms in the simulation cell. **CONFIG** Specifies the dimensions of the simulation cell and then positions of all the atoms (in Å ). If it is generated from a previous run, it may also contain the atomic velocities and forces for each atom. as well as the following output files: **OUTPUT** Contains a summary of the simulation, including the input data, simulation progress report and summary of final system averages. **REVCON** This contains the positions, velocities and forces of all the atoms in the system at the end of the simulation. When renamed CONFIG is used as the restart configuration for a continuation run. It is written at the same time as the REVIVE file. As with the CONFIG file, it is always worth checking that the atoms are at sensible positions. **STATIS** Contains a number of system variables at regular (user-specified) intervals throughout a simulation. It can be used for later statistical analysis. Note the file grows every time DL_POLY is run and is not overwritten. It should be removed from the execute subdirectory if a new simulation is to be started. **HISTORY** This details the atomic positions, (although can be made to contain velocities and forces) at selected intervals in the simulation. It forms the basis for much of the later analysis of the system. This file can become extremely large (beware) and is appended to, not overwritten, by later runs. It should always be removed from the execute subdirectory if a new simulation is to be started. We also need to check whether the visualisation programs we are using (**VESTA** and **VMD**) are set up correctly:1. **Follow ** instructions in the cells belowto make sure that everything is set-up correctly. If you have not already, please **download** [VESTA](https://jp-minerals.org/vesta/en/download.html) and [VMD](https://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=VMD) VESTA **VESTA** is a 3D visualization program for structural models, volumetric data such as electron/nuclear densities, and crystal morphologies. VESTA TEST 1. **Open** VESTA (Start Menu -> VESTA)2. **Open** the DL_POLY CONFIG file from the DLPOLY_Test/ directory (File -> Open -> CONFIG)3. **Inspect** the structure by experimenting with using the viewer to manipulate the cell. For example you might try to rotate the cell or change the display type or grow the crystal. VMD **VMD** is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3D graphics and built-in scripting.We can use VMD to look in more detail at structure and to visualize the trajectories directly. As well as visualization, VMD can also calculate various properties including radial distribution functions g(r) to enable a more quantitative structural analysis, which can easily distinguish between a solid and liquid, based on the structure VMD TEST 1. **Open** VMD (Start Menu -> VMD)2. **Open** the DL_POLY HISTORY file from the DLPOLY_Test/ directory (File -> New Molecule -> Browse -> HISTORY)3. **Change** file type to DL_POLY V2 History from the ‘Determine file type’ drop-down menu4. 
**Inspect** the structure by experimenting with using the viewer to manipulate the cell. For example you might try to rotate the cell or zoom in and out. 2. Extension: Quick Molecular Dynamics Exercise We will mainly be adjusting the DL_POLY CONTROL file to adjust the simulation conditions and analysing the output obtained from MD simulations using a package called VMD. Once this task is complete we will explore the structural changes in different materials. Checking The Structure A useful first check if the atom positions are not chemically sensible is to open the CONFIG file with VESTA as we did above.The DL_POLY jobs will take just under 10 minutes to run – if you find that yours is terminating immediately, or lasting for significantly longer than 15 minutes, please inform a demonstrator.
###Code
# Running DL_POLY
os.chdir(cwd)
os.chdir("DLPOLY_Exercise/")
subprocess.call("../../Codes/dlpoly_classic")
os.chdir(cwd)
###Output
_____no_output_____
###Markdown
Changing The Parameters Open the file CONTROL in **Notepad++**. This file, as its name suggests, contains all the control variables for the simulation, i.e. it tells the program what to do. We have generated a template file with some standard values for a typical simulation; however for the simulation we are going to perform we will need to change a few of these values.1. **Check** that the time step is set at 0.001 ps (1 fs)2. **Check** the number of ‘steps’ is set to 200003. **Change** the values traj 1 250 0 to traj 0 100 0. This changes how often the program writes out to the HISTORY file (more on this later)4. **Select** a temperature to run: first try 85. This is the temperature in Kelvin.Once you have made these changes save the file as CONTROL. (again, all capitals with no suffix – ignore any warnings about changing suffix type). **NOTE**: The reliability of the result will depend on the number of steps as this improves the statistics. Thus, if the computer is fast enough, or you are leaving it running etc, try increasing the number of steps, but be careful or you may spend too much time waiting. All DL_POLY simulations should be run in separate folders. Investigate The System Properties **Open** the OUTPUT file in WordPad or NotePad++ and search for the word “final averages”. Under this line, you should find a table of properties and their fluctuations.Properties we particularly consider are temp_tot, eng_cfg, volume and press (Temperature, Potential Energy, Volume and Pressure). As this is run in the NVE ensemble, the volume will stay fixed.**Check** that the temperature is close to your chosen value, if not, increase the number of equilibration steps (e.g. from 1000 to 10000) and increase the total number of steps by 10000.**Increase** the total number of steps and see if the properties remain reasonably constant, i.e. checking that the results are not dependent on the number of timesteps.**Repeat** the simulation in a separate folder but at 110 K by changing the CONTROL file and the information in the cell below.Is there a phase change from solid to liquid based on the properties?
###Code
# Running your own DL_POLY calculation at 110 K
os.chdir(cwd)
os.chdir("<your directory>)
subprocess.call("<path_to_dl_poly>")
os.chdir(cwd)
###Output
_____no_output_____ |
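###Markdown
If you prefer not to search the OUTPUT file by hand, the cell below is a small illustrative sketch that scans it for the "final averages" line and prints the table that follows. The file path and the number of lines printed are assumptions; adjust them to match your own run.
###Code
# Illustrative sketch: locate the "final averages" summary in a DL_POLY OUTPUT
# file and print the lines that follow it. Adjust the path for your own run.
output_path = "DLPOLY_Exercise/OUTPUT"  # assumed location of the OUTPUT file
with open(output_path) as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if "final averages" in line.lower():
        print("".join(lines[i:i + 15]))  # print the summary table that follows
###Output
_____no_output_____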
notebooks/source/bayesian_hierarchical_stacking.ipynb | ###Markdown
Bayesian Hierarchical Stacking: Well Switching Case Study Photo by Belinda Fewings, https://unsplash.com/photos/6p-KtXCBGNw. Table of Contents* [Intro](intro)* [1. Exploratory Data Analysis](1)* [2. Prepare 6 Different Models](2) * [2.1 Feature Engineering](2.1) * [2.2 Training](2.2)* [3. Bayesian Hierarchical Stacking](3) * [3.1 Prepare stacking datasets](3.1) * [3.2 Define stacking model](3.2)* [4. Evaluate on test set](4) * [4.1 Stack predictions](4.1) * [4.2 Compare methods](4.2)* [Conclusion](conclusion)* [References](references) Intro Suppose you have just fit 6 models to a dataset, and need to choose which one to use to make predictions on your test set. How do you choose which one to use? A couple of common tactics are:- choose the best model based on cross-validation;- average the models, using weights based on cross-validation scores.In the paper [Bayesian hierarchical stacking: Some models are (somewhere) useful](https://arxiv.org/abs/2101.08954), a new technique is introduced: average models based on weights which are allowed to vary according to the input data, based on a hierarchical structure.Here, we'll implement the first case study from that paper - readers are nonetheless encouraged to look at the original paper to find other case studies, as well as theoretical results. Code from the article (in R / Stan) can be found [here](https://github.com/yao-yl/hierarchical-stacking-code).
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.interpolate import BSpline
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.9.1")
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Exploratory Data Analysis The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well?We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set!But first, let's load it in and visualise it! Each row represents a household, and the features we have available to us are:- switch: whether a household switched to another well;- arsenic: level of arsenic in drinking water;- educ: level of education of "head of household";- dist100: distance to nearest safe-drinking well;- assoc: whether the household participates in any community activities.
###Code
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
###Output
_____no_output_____
###Markdown
Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
###Code
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
###Output
_____no_output_____
###Markdown
2. Prepare 6 different candidate models 2.1 Feature Engineering First, let's add a few new columns:- `edu0`: whether `educ` is `0`,- `edu1`: whether `educ` is between `1` and `5`,- `edu2`: whether `educ` is between `6` and `11`,- `edu3`: whether `educ` is between `12` and `17`,- `logarsenic`: natural logarithm of `arsenic`,- `assoc_half`: half of `assoc`,- `as_square`: natural logarithm of `arsenic`, squared,- `as_third`: natural logarithm of `arsenic`, cubed,- `dist100`: `dist` divided by `100`, - `intercept`: just a column of `1`s.We're going to start by fitting 6 different models to our train set:- logistic regression using `intercept`, `dist100`, `arsenic`, `assoc`, `edu1`, `edu2`, and `edu3`;- same as above, but with `logarsenic` instead of `arsenic`;- same as the first one, but with square and cubic features as well;- same as the first one, but with spline features derived from `logarsenic` as well;- same as the first one, but with spline features derived from `dist100` as well;- same as the first one, but with `educ` instead of the binary `edu` variables.
###Code
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
"""
Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
This mirrors ``bs`` from splines package in R.
"""
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
###Output
_____no_output_____
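###Markdown
As a quick, purely illustrative sanity check of the matrices built above: each spline basis should have `len(knots) + degree` columns (here 10 internal knots + degree 3 = 13), with one row per household in the full dataset, and each candidate design matrix should have one row per training household.
###Code
# Illustrative shape check of the spline bases and candidate design matrices.
print(spline_arsenic.shape, spline_dist.shape)
print([x.shape for x in train_x_list])
###Output
_____no_output_____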
###Markdown
2.2 Training Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
###Code
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
###Output
_____no_output_____
###Markdown
2.3 Estimate leave-one-out cross-validated score for each training point Rather than refitting each model 100 times, we will estimate the leave-one-out cross-validated score using [LOO](https://arxiv.org/abs/2001.00980).
###Code
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
###Output
_____no_output_____
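###Markdown
As a small illustrative check before stacking, the total LOO score of each candidate model (higher is better) shows which single model the "model selection" baseline used later in this notebook would pick.
###Code
# Illustrative check: total LOO score per candidate model, and the index of
# the model that plain model selection would choose.
total_loo = lpd_point.sum(axis=0)
print(total_loo)
print("model with the best LOO score:", total_loo.argmax())
###Output
_____no_output_____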
###Markdown
3. Bayesian Hierarchical Stacking 3.1 Prepare stacking datasets To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy, we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16), the authors recommend the following: if you have a continuous feature `f`, then replace it with the following two features:- `f_l`: `f` minus the median of `f`, clipped above at 0;- `f_r`: `f` minus the median of `f`, clipped below at 0;
###Code
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
###Output
_____no_output_____
###Markdown
3.2 Define stacking model What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. The final prediction for point $i$ will then be:$$ \sum_k W_{i, k}Pred_{i,k} $$Such a matrix $W$ would be required to have each row sum to $1$. Hence, we calculate each row $W_i$ of $W$ as:$$ W_i = \text{softmax}(X\_\text{stacking}_i \cdot \beta), $$where $\beta$ is a matrix whose values we seek to determine. For the discrete features, $\beta$ is given a hierarchical structure over the possible inputs. Continuous features, on the other hand, get no hierarchical structure in this case study and just vary according to the input values.Notice how, for the discrete features, a [non-centered parametrisation is used](https://twiecki.io/blog/2017/02/08/bayesian-hierchical-non-centered/). Also note that we only need to estimate `K-1` columns of $\beta$, because the weights `W_{i, k}` will have to sum to `1` for each `i`.
###Code
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
"""
Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
LOO score evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
Naming of variables mirrors what's used in the original paper.
"""
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log probability of LOO training scores weighted by stacking weights.
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1)
numpyro.deterministic("w", jnp.exp(log_w))
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
###Output
_____no_output_____
###Markdown
We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.Let's compare them with what the weights would've been if we'd just used fixed stacking weights (computed using ArviZ - see [their docs](https://arviz-devs.github.io/arviz/api/generated/arviz.compare.html) for details).
###Code
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = (
az.compare({idx: fit for idx, fit in enumerate(fit_list)}, method="stacking")
.sort_index()["weight"]
.to_numpy()
)
fixed_weights_df = pd.DataFrame(
np.repeat(
fixed_weights[jnp.newaxis, :],
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(data=fixed_weights_df, ax=ax[1])
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights stacking")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
)
fig.tight_layout();
###Output
_____no_output_____
###Markdown
4. Evaluate on test set 4.1 Stack predictions Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction. We decided we'd do this in three ways:- Bayesian Hierarchical Stacking (`bhs_predictions`);- choosing the model with the best training set LOO score (`model_selection_preds`);- fixed-weights stacking (`fixed_weights_preds`).
###Code
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (fixed_weights * test_preds).sum(axis=1)
###Output
_____no_output_____
###Markdown
4.2 Compare methods Let's compare the negative log predictive density scores on the test set (note - lower is better):
###Code
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
###Output
_____no_output_____
###Markdown
Bayesian Hierarchical Stacking: Well Switching Case Study Photo by Belinda Fewings, https://unsplash.com/photos/6p-KtXCBGNw. Table of Contents* [Intro](intro)* [1. Exploratory Data Analysis](1)* [2. Prepare 6 Different Models](2) * [2.1 Feature Engineering](2.1) * [2.2 Training](2.2)* [3. Bayesian Hierarchical Stacking](3) * [3.1 Prepare stacking datasets](3.1) * [3.2 Define stacking model](3.2)* [4. Evaluate on test set](4) * [4.1 Stack predictions](4.1) * [4.2 Compare methods](4.2)* [Conclusion](conclusion)* [References](references) Intro Suppose you have just fit 6 models to a dataset and need to make predictions on your test set. How do you choose which model to use? A couple of common tactics are:- choose the best model based on cross-validation;- average the models, using weights based on cross-validation scores. In the paper [Bayesian hierarchical stacking: Some models are (somewhere) useful](https://arxiv.org/abs/2101.08954), a new technique is introduced: average the models using weights which are allowed to vary according to the input data, via a hierarchical structure. Here, we'll implement the first case study from that paper - readers are nonetheless encouraged to look at the original paper for the other case studies, as well as the theoretical results. Code from the article (in R / Stan) can be found [here](https://github.com/yao-yl/hierarchical-stacking-code).
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.interpolate import BSpline
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.9.0")
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Exploratory Data Analysis The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well? We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set! But first, let's load the data in and visualise it. Each row represents a household, and the features we have available to us are:- switch: whether a household switched to another well;- arsenic: level of arsenic in drinking water;- educ: level of education of the "head of household";- dist: distance to the nearest known safe well;- assoc: whether the household participates in any community activities.
###Code
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
###Output
_____no_output_____
###Markdown
Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
###Code
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
###Output
_____no_output_____
###Markdown
2. Prepare 6 different candidate models 2.1 Feature Engineering First, let's add a few new columns:- `edu0`: whether `educ` is `0`,- `edu1`: whether `educ` is between `1` and `5`,- `edu2`: whether `educ` is between `6` and `11`,- `edu3`: whether `educ` is between `12` and `17`,- `logarsenic`: natural logarithm of `arsenic`,- `assoc_half`: half of `assoc`,- `as_square`: natural logarithm of `arsenic`, squared,- `as_third`: natural logarithm of `arsenic`, cubed,- `dist100`: `dist` divided by `100`, - `intercept`: just a column of `1`s. We're going to start by fitting 6 different models to our train set:- logistic regression using `intercept`, `dist100`, `arsenic`, `assoc`, `edu1`, `edu2`, and `edu3`;- same as above, but with `logarsenic` instead of `arsenic`;- same as the first one, but with the square and cube of `logarsenic` as well;- same as the first one, but with `arsenic` replaced by spline features derived from `logarsenic`;- same as the second one, but with `dist100` replaced by spline features derived from `dist100`;- same as the second one, but with `educ` instead of the binary `edu` variables.
###Code
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
"""
Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
This mirrors ``bs`` from splines package in R.
"""
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
###Output
_____no_output_____
###Markdown
2.2 Training Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
###Code
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
###Output
_____no_output_____
###Markdown
2.3 Estimate leave-one-out cross-validated score for each training point Rather than refitting each model once per training point (200 times here), we will estimate the leave-one-out cross-validated score using [LOO](https://arxiv.org/abs/2001.00980).
###Code
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
###Output
_____no_output_____
###Markdown
3. Bayesian Hierarchical Stacking 3.1 Prepare stacking datasets To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy: we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16) of the paper, the authors recommend the following: if you have a continuous feature `f`, then replace it with the following two features:- `f_l`: `f` minus the median of `f`, clipped above at 0;- `f_r`: `f` minus the median of `f`, clipped below at 0.
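Purely for intuition, here is a tiny worked example of that split on a hypothetical feature (the values and `_demo` names below are made up and are not taken from the wells data):
###Code
# Illustrative only: the left/right median-split trick for a continuous feature.
import pandas as pd
f_demo = pd.Series([0.5, 1.0, 2.0, 3.5, 6.0])  # hypothetical feature values; median is 2.0
f_l_demo = (f_demo - f_demo.median()).clip(upper=0)  # -1.5, -1.0, 0.0, 0.0, 0.0
f_r_demo = (f_demo - f_demo.median()).clip(lower=0)  # 0.0, 0.0, 0.0, 1.5, 4.0
# the two halves add back up to the centred feature
assert ((f_l_demo + f_r_demo) == (f_demo - f_demo.median())).all()
###Output
_____no_output_____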
###Code
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
###Output
_____no_output_____
###Markdown
3.2 Define stacking model What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. Then the final prediction for point $i$ is:$$ \sum_k W_{i, k}Pred_{i,k} $$ Such a matrix $W$ is required to have each row sum to $1$: for every point, the weights across the $K$ models form a convex combination. Hence, we calculate each row $W_i$ of $W$ as:$$ W_i = \text{softmax}(X\_\text{stacking}_i \cdot \beta), $$ where $\beta$ is a matrix whose values we seek to determine. For the discrete features, $\beta$ is given a hierarchical structure over the possible inputs. Continuous features, on the other hand, get no hierarchical structure in this case study and just vary according to the input values. Notice how, for the discrete features, a [non-centered parametrisation is used](https://twiecki.io/blog/2017/02/08/bayesian-hierchical-non-centered/). Also note that we only need to estimate `K-1` columns of $\beta$, because the weights `W_{i, k}` have to sum to `1` for each `i`.
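To make the weighting scheme concrete, here is a minimal standalone sketch (the sizes, values, and `_demo` names below are made up purely for illustration and are not part of the model defined next): each row of weights is a softmax of that point's stacking features multiplied by $\beta$, and the stacked prediction is a row-wise weighted sum.
###Code
# Illustrative only: how per-point softmax weights combine candidate predictions.
import numpy as np
rng = np.random.default_rng(0)
n_points, n_features, n_models = 5, 3, 4  # hypothetical sizes
X_stack_demo = rng.normal(size=(n_points, n_features))  # stand-in stacking features
beta_demo = rng.normal(size=(n_features, n_models - 1))  # only K-1 columns are free
# append a zero column so the last model acts as the reference category
logits_demo = np.hstack([X_stack_demo @ beta_demo, np.zeros((n_points, 1))])
W_demo = np.exp(logits_demo) / np.exp(logits_demo).sum(axis=1, keepdims=True)  # row-wise softmax
preds_demo = rng.normal(size=(n_points, n_models))  # stand-in model predictions
stacked_demo = (W_demo * preds_demo).sum(axis=1)  # one stacked prediction per point
assert np.allclose(W_demo.sum(axis=1), 1.0)  # each row of W sums to 1
###Output
_____no_output_____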
###Code
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
"""
Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
Exponentiated pointwise LOO score, evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
Naming of variables mirrors what's used in the original paper.
"""
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log of the stacking weights (log-softmax across the K candidate models)
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1 across models for each point)
numpyro.deterministic("w", jnp.exp(log_w))
# pointwise log score of the stacked model: logsumexp of (LOO log score + log weight).
# Note: this uses the global, log-scale `lpd_point` computed earlier, not the
# `exp_lpd_point` argument.
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
###Output
_____no_output_____
###Markdown
We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.Let's compare them with what the weights would've been if we'd just used fixed stacking weights (computed using ArviZ - see [their docs](https://arviz-devs.github.io/arviz/api/generated/arviz.compare.html) for details).
###Code
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = (
az.compare({idx: fit for idx, fit in enumerate(fit_list)}, method="stacking")
.sort_index()["weight"]
.to_numpy()
)
fixed_weights_df = pd.DataFrame(
np.repeat(
fixed_weights[jnp.newaxis, :],
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(data=fixed_weights_df, ax=ax[1])
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights stacking")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
)
fig.tight_layout();
###Output
_____no_output_____
###Markdown
4. Evaluate on test set 4.1 Stack predictions Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction. We decided we'd do this in three ways:- Bayesian Hierarchical Stacking (`bhs_predictions`);- choosing the model with the best training set LOO score (`model_selection_preds`);- fixed-weights stacking (`fixed_weights_preds`).
###Code
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (fixed_weights * test_preds).sum(axis=1)
###Output
_____no_output_____
###Markdown
4.2 Compare methods Let's compare the negative log predictive density scores on the test set (note - lower is better):
###Code
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
###Output
_____no_output_____
###Markdown
Bayesian Hierarchical Stacking: Well Switching Case Study Photo by Belinda Fewings, https://unsplash.com/photos/6p-KtXCBGNw. Table of Contents* [Intro](intro)* [1. Exploratory Data Analysis](1)* [2. Prepare 6 Different Models](2) * [2.1 Feature Engineering](2.1) * [2.2 Training](2.2)* [3. Bayesian Hierarchical Stacking](3) * [3.1 Prepare stacking datasets](3.1) * [3.2 Define stacking model](3.2)* [4. Evaluate on test set](4) * [4.1 Stack predictions](4.1) * [4.2 Compare methods](4.2)* [Conclusion](conclusion)* [References](references) Intro Suppose you have just fit 6 models to a dataset and need to make predictions on your test set. How do you choose which model to use? A couple of common tactics are:- choose the best model based on cross-validation;- average the models, using weights based on cross-validation scores. In the paper [Bayesian hierarchical stacking: Some models are (somewhere) useful](https://arxiv.org/abs/2101.08954), a new technique is introduced: average the models using weights which are allowed to vary according to the input data, via a hierarchical structure. Here, we'll implement the first case study from that paper - readers are nonetheless encouraged to look at the original paper for the other case studies, as well as the theoretical results. Code from the article (in R / Stan) can be found [here](https://github.com/yao-yl/hierarchical-stacking-code).
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.interpolate import BSpline
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.9.2")
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Exploratory Data Analysis The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well? We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set! But first, let's load the data in and visualise it. Each row represents a household, and the features we have available to us are:- switch: whether a household switched to another well;- arsenic: level of arsenic in drinking water;- educ: level of education of the "head of household";- dist: distance to the nearest known safe well;- assoc: whether the household participates in any community activities.
###Code
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
###Output
_____no_output_____
###Markdown
Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
###Code
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
###Output
_____no_output_____
###Markdown
2. Prepare 6 different candidate models 2.1 Feature Engineering First, let's add a few new columns:- `edu0`: whether `educ` is `0`,- `edu1`: whether `educ` is between `1` and `5`,- `edu2`: whether `educ` is between `6` and `11`,- `edu3`: whether `educ` is between `12` and `17`,- `logarsenic`: natural logarithm of `arsenic`,- `assoc_half`: half of `assoc`,- `as_square`: natural logarithm of `arsenic`, squared,- `as_third`: natural logarithm of `arsenic`, cubed,- `dist100`: `dist` divided by `100`, - `intercept`: just a column of `1`s. We're going to start by fitting 6 different models to our train set:- logistic regression using `intercept`, `dist100`, `arsenic`, `assoc`, `edu1`, `edu2`, and `edu3`;- same as above, but with `logarsenic` instead of `arsenic`;- same as the first one, but with the square and cube of `logarsenic` as well;- same as the first one, but with `arsenic` replaced by spline features derived from `logarsenic`;- same as the second one, but with `dist100` replaced by spline features derived from `dist100`;- same as the second one, but with `educ` instead of the binary `edu` variables.
###Code
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
"""
Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
This mirrors ``bs`` from splines package in R.
"""
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
###Output
_____no_output_____
###Markdown
2.2 Training Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
###Code
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
###Output
_____no_output_____
###Markdown
2.3 Estimate leave-one-out cross-validated score for each training point Rather than refitting each model once per training point (200 times here), we will estimate the leave-one-out cross-validated score using [LOO](https://arxiv.org/abs/2001.00980).
###Code
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
###Output
_____no_output_____
###Markdown
3. Bayesian Hierarchical Stacking 3.1 Prepare stacking datasets To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy: we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16) of the paper, the authors recommend the following: if you have a continuous feature `f`, then replace it with the following two features:- `f_l`: `f` minus the median of `f`, clipped above at 0;- `f_r`: `f` minus the median of `f`, clipped below at 0.
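Purely for intuition, here is a tiny worked example of that split on a hypothetical feature (the values and `_demo` names below are made up and are not taken from the wells data):
###Code
# Illustrative only: the left/right median-split trick for a continuous feature.
import pandas as pd
f_demo = pd.Series([0.5, 1.0, 2.0, 3.5, 6.0])  # hypothetical feature values; median is 2.0
f_l_demo = (f_demo - f_demo.median()).clip(upper=0)  # -1.5, -1.0, 0.0, 0.0, 0.0
f_r_demo = (f_demo - f_demo.median()).clip(lower=0)  # 0.0, 0.0, 0.0, 1.5, 4.0
# the two halves add back up to the centred feature
assert ((f_l_demo + f_r_demo) == (f_demo - f_demo.median())).all()
###Output
_____no_output_____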
###Code
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
###Output
_____no_output_____
###Markdown
3.2 Define stacking model What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. Then the final prediction for point $i$ is:$$ \sum_k W_{i, k}Pred_{i,k} $$ Such a matrix $W$ is required to have each row sum to $1$: for every point, the weights across the $K$ models form a convex combination. Hence, we calculate each row $W_i$ of $W$ as:$$ W_i = \text{softmax}(X\_\text{stacking}_i \cdot \beta), $$ where $\beta$ is a matrix whose values we seek to determine. For the discrete features, $\beta$ is given a hierarchical structure over the possible inputs. Continuous features, on the other hand, get no hierarchical structure in this case study and just vary according to the input values. Notice how, for the discrete features, a [non-centered parametrisation is used](https://twiecki.io/blog/2017/02/08/bayesian-hierchical-non-centered/). Also note that we only need to estimate `K-1` columns of $\beta$, because the weights `W_{i, k}` have to sum to `1` for each `i`.
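To make the weighting scheme concrete, here is a minimal standalone sketch (the sizes, values, and `_demo` names below are made up purely for illustration and are not part of the model defined next): each row of weights is a softmax of that point's stacking features multiplied by $\beta$, and the stacked prediction is a row-wise weighted sum.
###Code
# Illustrative only: how per-point softmax weights combine candidate predictions.
import numpy as np
rng = np.random.default_rng(0)
n_points, n_features, n_models = 5, 3, 4  # hypothetical sizes
X_stack_demo = rng.normal(size=(n_points, n_features))  # stand-in stacking features
beta_demo = rng.normal(size=(n_features, n_models - 1))  # only K-1 columns are free
# append a zero column so the last model acts as the reference category
logits_demo = np.hstack([X_stack_demo @ beta_demo, np.zeros((n_points, 1))])
W_demo = np.exp(logits_demo) / np.exp(logits_demo).sum(axis=1, keepdims=True)  # row-wise softmax
preds_demo = rng.normal(size=(n_points, n_models))  # stand-in model predictions
stacked_demo = (W_demo * preds_demo).sum(axis=1)  # one stacked prediction per point
assert np.allclose(W_demo.sum(axis=1), 1.0)  # each row of W sums to 1
###Output
_____no_output_____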
###Code
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
"""
Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
Exponentiated pointwise LOO score, evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
Naming of variables mirrors what's used in the original paper.
"""
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log of the stacking weights (log-softmax across the K candidate models)
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1 across models for each point)
numpyro.deterministic("w", jnp.exp(log_w))
# pointwise log score of the stacked model: logsumexp of (LOO log score + log weight).
# Note: this uses the global, log-scale `lpd_point` computed earlier, not the
# `exp_lpd_point` argument.
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
###Output
_____no_output_____
###Markdown
We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.Let's compare them with what the weights would've been if we'd just used fixed stacking weights (computed using ArviZ - see [their docs](https://arviz-devs.github.io/arviz/api/generated/arviz.compare.html) for details).
###Code
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = (
az.compare({idx: fit for idx, fit in enumerate(fit_list)}, method="stacking")
.sort_index()["weight"]
.to_numpy()
)
fixed_weights_df = pd.DataFrame(
np.repeat(
fixed_weights[jnp.newaxis, :],
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(data=fixed_weights_df, ax=ax[1])
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights stacking")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
)
fig.tight_layout();
###Output
_____no_output_____
###Markdown
4. Evaluate on test set 4.1 Stack predictions Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction. We decided we'd do this in three ways:- Bayesian Hierarchical Stacking (`bhs_predictions`);- choosing the model with the best training set LOO score (`model_selection_preds`);- fixed-weights stacking (`fixed_weights_preds`).
###Code
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (fixed_weights * test_preds).sum(axis=1)
###Output
_____no_output_____
###Markdown
4.2 Compare methods Let's compare the negative log predictive density scores on the test set (note - lower is better):
###Code
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
###Output
_____no_output_____
###Markdown
Bayesian Hierarchical Stacking: Well Switching Case Study Photo by Belinda Fewings, https://unsplash.com/photos/6p-KtXCBGNw. Table of Contents* [Intro](intro)* [1. Exploratory Data Analysis](1)* [2. Prepare 6 Different Models](2) * [2.1 Feature Engineering](2.1) * [2.2 Training](2.2)* [3. Bayesian Hierarchical Stacking](3) * [3.1 Prepare stacking datasets](3.1) * [3.2 Define stacking model](3.2)* [4. Evaluate on test set](4) * [4.1 Stack predictions](4.1) * [4.2 Compare methods](4.2)* [Conclusion](conclusion)* [References](references) Intro Suppose you have just fit 6 models to a dataset and need to make predictions on your test set. How do you choose which model to use? A couple of common tactics are:- choose the best model based on cross-validation;- average the models, using weights based on cross-validation scores. In the paper [Bayesian hierarchical stacking: Some models are (somewhere) useful](https://arxiv.org/abs/2101.08954), a new technique is introduced: average the models using weights which are allowed to vary according to the input data, via a hierarchical structure. Here, we'll implement the first case study from that paper - readers are nonetheless encouraged to look at the original paper for the other case studies, as well as the theoretical results. Code from the article (in R / Stan) can be found [here](https://github.com/yao-yl/hierarchical-stacking-code).
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.interpolate import BSpline
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.8.0")
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Exploratory Data Analysis The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well? We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set! But first, let's load the data in and visualise it. Each row represents a household, and the features we have available to us are:- switch: whether a household switched to another well;- arsenic: level of arsenic in drinking water;- educ: level of education of the "head of household";- dist: distance to the nearest known safe well;- assoc: whether the household participates in any community activities.
###Code
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
###Output
_____no_output_____
###Markdown
Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
###Code
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
###Output
_____no_output_____
###Markdown
2. Prepare 6 different candidate models 2.1 Feature Engineering First, let's add a few new columns:- `edu0`: whether `educ` is `0`,- `edu1`: whether `educ` is between `1` and `5`,- `edu2`: whether `educ` is between `6` and `11`,- `edu3`: whether `educ` is between `12` and `17`,- `logarsenic`: natural logarithm of `arsenic`,- `assoc_half`: half of `assoc`,- `as_square`: natural logarithm of `arsenic`, squared,- `as_third`: natural logarithm of `arsenic`, cubed,- `dist100`: `dist` divided by `100`, - `intercept`: just a column of `1`s. We're going to start by fitting 6 different models to our train set:- logistic regression using `intercept`, `dist100`, `arsenic`, `assoc`, `edu1`, `edu2`, and `edu3`;- same as above, but with `logarsenic` instead of `arsenic`;- same as the first one, but with the square and cube of `logarsenic` as well;- same as the first one, but with `arsenic` replaced by spline features derived from `logarsenic`;- same as the second one, but with `dist100` replaced by spline features derived from `dist100`;- same as the second one, but with `educ` instead of the binary `edu` variables.
###Code
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
"""
Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
This mirrors ``bs`` from splines package in R.
"""
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
###Output
_____no_output_____
###Markdown
2.2 Training Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
###Code
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
###Output
_____no_output_____
###Markdown
2.3 Estimate leave-one-out cross-validated score for each training point Rather than refitting each model once per training point (200 times here), we will estimate the leave-one-out cross-validated score using [LOO](https://arxiv.org/abs/2001.00980).
###Code
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
###Output
_____no_output_____
###Markdown
3. Bayesian Hierarchical Stacking 3.1 Prepare stacking datasets To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy: we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16) of the paper, the authors recommend the following: if you have a continuous feature `f`, then replace it with the following two features:- `f_l`: `f` minus the median of `f`, clipped above at 0;- `f_r`: `f` minus the median of `f`, clipped below at 0.
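Purely for intuition, here is a tiny worked example of that split on a hypothetical feature (the values and `_demo` names below are made up and are not taken from the wells data):
###Code
# Illustrative only: the left/right median-split trick for a continuous feature.
import pandas as pd
f_demo = pd.Series([0.5, 1.0, 2.0, 3.5, 6.0])  # hypothetical feature values; median is 2.0
f_l_demo = (f_demo - f_demo.median()).clip(upper=0)  # -1.5, -1.0, 0.0, 0.0, 0.0
f_r_demo = (f_demo - f_demo.median()).clip(lower=0)  # 0.0, 0.0, 0.0, 1.5, 4.0
# the two halves add back up to the centred feature
assert ((f_l_demo + f_r_demo) == (f_demo - f_demo.median())).all()
###Output
_____no_output_____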
###Code
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
###Output
_____no_output_____
###Markdown
3.2 Define stacking model What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. Then the final prediction for point $i$ is:$$ \sum_k W_{i, k}Pred_{i,k} $$ Such a matrix $W$ is required to have each row sum to $1$: for every point, the weights across the $K$ models form a convex combination. Hence, we calculate each row $W_i$ of $W$ as:$$ W_i = \text{softmax}(X\_\text{stacking}_i \cdot \beta), $$ where $\beta$ is a matrix whose values we seek to determine. For the discrete features, $\beta$ is given a hierarchical structure over the possible inputs. Continuous features, on the other hand, get no hierarchical structure in this case study and just vary according to the input values. Notice how, for the discrete features, a [non-centered parametrisation is used](https://twiecki.io/blog/2017/02/08/bayesian-hierchical-non-centered/). Also note that we only need to estimate `K-1` columns of $\beta$, because the weights `W_{i, k}` have to sum to `1` for each `i`.
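To make the weighting scheme concrete, here is a minimal standalone sketch (the sizes, values, and `_demo` names below are made up purely for illustration and are not part of the model defined next): each row of weights is a softmax of that point's stacking features multiplied by $\beta$, and the stacked prediction is a row-wise weighted sum.
###Code
# Illustrative only: how per-point softmax weights combine candidate predictions.
import numpy as np
rng = np.random.default_rng(0)
n_points, n_features, n_models = 5, 3, 4  # hypothetical sizes
X_stack_demo = rng.normal(size=(n_points, n_features))  # stand-in stacking features
beta_demo = rng.normal(size=(n_features, n_models - 1))  # only K-1 columns are free
# append a zero column so the last model acts as the reference category
logits_demo = np.hstack([X_stack_demo @ beta_demo, np.zeros((n_points, 1))])
W_demo = np.exp(logits_demo) / np.exp(logits_demo).sum(axis=1, keepdims=True)  # row-wise softmax
preds_demo = rng.normal(size=(n_points, n_models))  # stand-in model predictions
stacked_demo = (W_demo * preds_demo).sum(axis=1)  # one stacked prediction per point
assert np.allclose(W_demo.sum(axis=1), 1.0)  # each row of W sums to 1
###Output
_____no_output_____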
###Code
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
"""
Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
Exponentiated pointwise LOO score, evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
Naming of variables mirrors what's used in the original paper.
"""
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log of the stacking weights (log-softmax across the K candidate models)
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1 across models for each point)
numpyro.deterministic("w", jnp.exp(log_w))
# pointwise log score of the stacked model: logsumexp of (LOO log score + log weight).
# Note: this uses the global, log-scale `lpd_point` computed earlier, not the
# `exp_lpd_point` argument.
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
###Output
_____no_output_____
###Markdown
We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.Let's compare them with what the weights would've been if we'd just used fixed stacking weights (computed using ArviZ - see [their docs](https://arviz-devs.github.io/arviz/api/generated/arviz.compare.html) for details).
###Code
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = (
az.compare({idx: fit for idx, fit in enumerate(fit_list)}, method="stacking")
.sort_index()["weight"]
.to_numpy()
)
fixed_weights_df = pd.DataFrame(
np.repeat(
fixed_weights[jnp.newaxis, :],
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(data=fixed_weights_df, ax=ax[1])
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights stacking")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
)
fig.tight_layout();
###Output
_____no_output_____
###Markdown
4. Evaluate on test set 4.1 Stack predictions Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction. We decided we'd do this in three ways:- Bayesian Hierarchical Stacking (`bhs_predictions`);- choosing the model with the best training set LOO score (`model_selection_preds`);- fixed-weights stacking (`fixed_weights_preds`).
###Code
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (fixed_weights * test_preds).sum(axis=1)
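# (Added sketch) each test point's averaged stacking weights should still form
# (approximately) a probability vector over the K candidate models.
assert np.allclose(test_stacking_weights.sum(axis=1), 1.0, atol=1e-3)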
###Output
_____no_output_____
###Markdown
4.2 Compare methods Let's compare the negative log predictive density scores on the test set (note - lower is better):
###Code
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
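# (Added) numeric companion to the bar chart: mean negative log predictive
# density per method, sorted so the best (lowest) method comes first.
print(neg_log_pred_density.mean(axis=0).sort_values())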
###Output
_____no_output_____
###Markdown
Bayesian Hierarchical Stacking: Well Switching Case Study Photo by Belinda Fewings, https://unsplash.com/photos/6p-KtXCBGNw. Table of Contents* [Intro](intro)* [1. Exploratory Data Analysis](1)* [2. Prepare 6 Different Models](2) * [2.1 Feature Engineering](2.1) * [2.2 Training](2.2)* [3. Bayesian Hierarchical Stacking](3) * [3.1 Prepare stacking datasets](3.1) * [3.2 Define stacking model](3.2)* [4. Evaluate on test set](4) * [4.1 Stack predictions](4.1) * [4.2 Compare methods](4.2)* [Conclusion](conclusion)* [References](references) Intro Suppose you have just fit 6 models to a dataset, and need to choose which one to use to make predictions on your test set. How do you choose which one to use? A couple of common tactics are:- choose the best model based on cross-validation;- average the models, using weights based on cross-validation scores.In the paper [Bayesian hierarchical stacking: Some models are (somewhere) useful](https://arxiv.org/abs/2101.08954), a new technique is introduced: average models based on weights which are allowed to vary across according to the input data, based on a hierarchical structure.Here, we'll implement the first case study from that paper - readers are nonetheless encouraged to look at the original paper to find other cases studies, as well as theoretical results. Code from the article (in R / Stan) can be found [here](https://github.com/yao-yl/hierarchical-stacking-code).
###Code
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
from IPython.display import set_matplotlib_formats
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy
from scipy.interpolate import BSpline
import scipy.stats as stats
import seaborn as sns
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
plt.style.use("seaborn")
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.7.2")
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Exploratory Data Analysis The data we have to work with looks at households in Bangladesh, some of which were affected by high levels of arsenic in their water. Would affected households want to switch to a neighbour's well?We'll split the data into a train and test set, and then we'll train six different models to try to predict whether households would switch wells. Then, we'll see how we can stack them when predicting on the test set!But first, let's load it in and visualise it! Each row represents a household, and the features we have available to us are:- switch: whether a household switched to another well;- arsenic: level of arsenic in drinking water;- educ: level of education of "head of household";- dist100: distance to nearest safe-drinking well;- assoc: whether the household participates in any community activities.
###Code
wells = pd.read_csv(
"http://stat.columbia.edu/~gelman/arm/examples/arsenic/wells.dat", sep=" "
)
wells.head()
fig, ax = plt.subplots(2, 2, figsize=(12, 6))
fig.suptitle("Target variable plotted against various predictors")
sns.scatterplot(data=wells, x="arsenic", y="switch", ax=ax[0][0])
sns.scatterplot(data=wells, x="dist", y="switch", ax=ax[0][1])
sns.barplot(
data=wells.groupby("assoc")["switch"].mean().reset_index(),
x="assoc",
y="switch",
ax=ax[1][0],
)
ax[1][0].set_ylabel("Proportion switch")
sns.barplot(
data=wells.groupby("educ")["switch"].mean().reset_index(),
x="educ",
y="switch",
ax=ax[1][1],
)
ax[1][1].set_ylabel("Proportion switch");
###Output
_____no_output_____
###Markdown
Next, we'll choose 200 observations to be part of our train set, and 1500 to be part of our test set.
###Code
np.random.seed(1)
train_id = wells.sample(n=200).index
test_id = wells.loc[~wells.index.isin(train_id)].sample(n=1500).index
y_train = wells.loc[train_id, "switch"].to_numpy()
y_test = wells.loc[test_id, "switch"].to_numpy()
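# (Added) quick look at the base rate of switching in each split, assuming the
# arrays defined above; both splits should be roughly comparable.
print(f"train switch rate: {y_train.mean():.3f}, test switch rate: {y_test.mean():.3f}")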
###Output
_____no_output_____
###Markdown
2. Prepare 6 different candidate models 2.1 Feature Engineering First, let's add a few new columns:- `edu0`: whether `educ` is `0`,- `edu1`: whether `educ` is between `1` and `5`,- `edu2`: whether `educ` is between `6` and `11`,- `edu3`: whether `educ` is between `12` and `17`,- `logarsenic`: natural logarithm of `arsenic`,- `assoc_half`: half of `assoc`,- `as_square`: natural logarithm of `arsenic`, squared,- `as_third`: natural logarithm of `arsenic`, cubed,- `dist100`: `dist` divided by `100`, - `intercept`: just a columns of `1`s.We're going to start by fitting 6 different models to our train set:- logistic regression using `intercept`, `arsenic`, `assoc`, `edu1`, `edu2`, and `edu3`;- same as above, but with `logarsenic` instead of `arsenic`;- same as the first one, but with square and cubic features as well;- same as the first one, but with spline features derived from `logarsenic` as well;- same as the first one, but with spline features derived from `dist100` as well;- same as the first one, but with `educ` instead of the binary `edu` variables.
###Code
wells["edu0"] = wells["educ"].isin(np.arange(0, 1)).astype(int)
wells["edu1"] = wells["educ"].isin(np.arange(1, 6)).astype(int)
wells["edu2"] = wells["educ"].isin(np.arange(6, 12)).astype(int)
wells["edu3"] = wells["educ"].isin(np.arange(12, 18)).astype(int)
wells["logarsenic"] = np.log(wells["arsenic"])
wells["assoc_half"] = wells["assoc"] / 2.0
wells["as_square"] = wells["logarsenic"] ** 2
wells["as_third"] = wells["logarsenic"] ** 3
wells["dist100"] = wells["dist"] / 100.0
wells["intercept"] = 1
def bs(x, knots, degree):
"""
Generate the B-spline basis matrix for a polynomial spline.
Parameters
----------
x
predictor variable.
knots
locations of internal breakpoints (not padded).
degree
degree of the piecewise polynomial.
Returns
-------
pd.DataFrame
Spline basis matrix.
Notes
-----
This mirrors ``bs`` from splines package in R.
"""
padded_knots = np.hstack(
[[x.min()] * (degree + 1), knots, [x.max()] * (degree + 1)]
)
return pd.DataFrame(
BSpline(padded_knots, np.eye(len(padded_knots) - degree - 1), degree)(x)[:, 1:],
index=x.index,
)
knots = np.quantile(wells.loc[train_id, "logarsenic"], np.linspace(0.1, 0.9, num=10))
spline_arsenic = bs(wells["logarsenic"], knots=knots, degree=3)
knots = np.quantile(wells.loc[train_id, "dist100"], np.linspace(0.1, 0.9, num=10))
spline_dist = bs(wells["dist100"], knots=knots, degree=3)
features_0 = ["intercept", "dist100", "arsenic", "assoc", "edu1", "edu2", "edu3"]
features_1 = ["intercept", "dist100", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_2 = [
"intercept",
"dist100",
"arsenic",
"as_third",
"as_square",
"assoc",
"edu1",
"edu2",
"edu3",
]
features_3 = ["intercept", "dist100", "assoc", "edu1", "edu2", "edu3"]
features_4 = ["intercept", "logarsenic", "assoc", "edu1", "edu2", "edu3"]
features_5 = ["intercept", "dist100", "logarsenic", "assoc", "educ"]
X0 = wells.loc[train_id, features_0].to_numpy()
X1 = wells.loc[train_id, features_1].to_numpy()
X2 = wells.loc[train_id, features_2].to_numpy()
X3 = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[train_id]
.to_numpy()
)
X4 = pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[train_id].to_numpy()
X5 = wells.loc[train_id, features_5].to_numpy()
X0_test = wells.loc[test_id, features_0].to_numpy()
X1_test = wells.loc[test_id, features_1].to_numpy()
X2_test = wells.loc[test_id, features_2].to_numpy()
X3_test = (
pd.concat([wells.loc[:, features_3], spline_arsenic], axis=1)
.loc[test_id]
.to_numpy()
)
X4_test = (
pd.concat([wells.loc[:, features_4], spline_dist], axis=1).loc[test_id].to_numpy()
)
X5_test = wells.loc[test_id, features_5].to_numpy()
train_x_list = [X0, X1, X2, X3, X4, X5]
test_x_list = [X0_test, X1_test, X2_test, X3_test, X4_test, X5_test]
K = len(train_x_list)
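# (Added sketch) cheap consistency check before any fitting: every design
# matrix should have one row per observation in its split.
for X_k, X_k_test in zip(train_x_list, test_x_list):
    assert X_k.shape[0] == len(train_id)
    assert X_k_test.shape[0] == len(test_id)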
###Output
_____no_output_____
###Markdown
2.2 Training Each model will be trained in the same way - with a Bernoulli likelihood and a logit link function.
###Code
def logistic(x, y=None):
beta = numpyro.sample("beta", dist.Normal(0, 3).expand([x.shape[1]]))
logits = numpyro.deterministic("logits", jnp.matmul(x, beta))
numpyro.sample(
"obs",
dist.Bernoulli(logits=logits),
obs=y,
)
fit_list = []
for k in range(K):
sampler = numpyro.infer.NUTS(logistic)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
rng_key = jax.random.fold_in(jax.random.PRNGKey(13), k)
mcmc.run(rng_key, x=train_x_list[k], y=y_train)
fit_list.append(mcmc)
###Output
_____no_output_____
###Markdown
2.3 Estimate leave-one-out cross-validated score for each training point Rather than refitting each model 100 times, we will estimate the leave-one-out cross-validated score using [LOO](https://arxiv.org/abs/2001.00980).
###Code
def find_point_wise_loo_score(fit):
return az.loo(az.from_numpyro(fit), pointwise=True, scale="log").loo_i.values
lpd_point = np.vstack([find_point_wise_loo_score(fit) for fit in fit_list]).T
exp_lpd_point = np.exp(lpd_point)
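# (Added) total LOO score per candidate model (higher is better); the
# "model selection" baseline later simply picks the argmax of this vector.
print(pd.Series(lpd_point.sum(axis=0), name="total elpd_loo"))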
###Output
_____no_output_____
###Markdown
3. Bayesian Hierarchical Stacking 3.1 Prepare stacking datasets To determine how the stacking weights should vary across training and test sets, we will need to create "stacking datasets" which include all the features which we want the stacking weights to depend on. How should such features be included? For discrete features, this is easy, we just one-hot-encode them. But for continuous features, we need a trick. In Equation (16), the authors recommend the following: if you have a continuous feature `f`, then replace it with the following two features:- `f_l`: `f` minus the median of `f`, clipped above at 0;- `f_r`: `f` minus the median of `f`, clipped below at 0;
###Code
dist100_median = wells.loc[wells.index[train_id], "dist100"].median()
logarsenic_median = wells.loc[wells.index[train_id], "logarsenic"].median()
wells["dist100_l"] = (wells["dist100"] - dist100_median).clip(upper=0)
wells["dist100_r"] = (wells["dist100"] - dist100_median).clip(lower=0)
wells["logarsenic_l"] = (wells["logarsenic"] - logarsenic_median).clip(upper=0)
wells["logarsenic_r"] = (wells["logarsenic"] - logarsenic_median).clip(lower=0)
stacking_features = [
"edu0",
"edu1",
"edu2",
"edu3",
"assoc_half",
"dist100_l",
"dist100_r",
"logarsenic_l",
"logarsenic_r",
]
X_stacking_train = wells.loc[train_id, stacking_features].to_numpy()
X_stacking_test = wells.loc[test_id, stacking_features].to_numpy()
###Output
_____no_output_____
###Markdown
3.2 Define stacking model What we seek to find is a matrix of weights $W$ with which to multiply the models' predictions. Let's define a matrix $Pred$ such that $Pred_{i,k}$ represents the prediction made for point $i$ by model $k$. Then the final prediction for point $i$ will be:$$ \sum_k W_{i, k}Pred_{i,k} $$Such a matrix $W$ would be required to have each row sum to $1$. Hence, we calculate each row $W_i$ of $W$ as:$$ W_i = \text{softmax}(X\_\text{stacking}_i \cdot \beta), $$where $\beta$ is a matrix whose values we seek to determine. For the discrete features, $\beta$ is given a hierarchical structure over the possible inputs. Continuous features, on the other hand, get no hierarchical structure in this case study and just vary according to the input values.Notice how, for the discrete features, a [non-centered parametrisation is used](https://twiecki.io/blog/2017/02/08/bayesian-hierchical-non-centered/). Also note that we only need to estimate `K-1` columns of $\beta$, because the weights `W_{i, k}` will have to sum to `1` for each `i`.
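To connect this to the code below: writing $w_{i,k}$ for the stacking weight of model $k$ at point $i$ and $\text{lpd}_{i,k}$ for its pointwise LOO log score, the model maximises (a sketch of the objective in the notation above)$$ \sum_{i=1}^{N} \log \sum_{k=1}^{K} w_{i,k} \exp(\text{lpd}_{i,k}), $$which is what the `jax.nn.logsumexp(lpd_point + log_w, axis=1)` line followed by `numpyro.factor("logp", ...)` implements.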
###Code
def stacking(
X,
d_discrete,
X_test,
exp_lpd_point,
tau_mu,
tau_sigma,
*,
test,
):
"""
Get weights with which to stack candidate models' predictions.
Parameters
----------
X
Training stacking matrix: features on which stacking weights should depend, for the
training set.
d_discrete
Number of discrete features in `X` and `X_test`. The first `d_discrete` features
from these matrices should be the discrete ones, with the continuous ones coming
after them.
X_test
Test stacking matrix: features on which stacking weights should depend, for the
testing set.
exp_lpd_point
LOO score evaluated at each point in the training set, for each candidate model.
tau_mu
Hyperprior for mean of `beta`, for discrete features.
tau_sigma
Hyperprior for standard deviation of `beta`, for continuous features.
test
Whether to calculate stacking weights for test set.
Notes
-----
Naming of variables mirrors what's used in the original paper.
"""
N = X.shape[0]
d = X.shape[1]
N_test = X_test.shape[0]
K = lpd_point.shape[1] # number of candidate models
with numpyro.plate("Candidate models", K - 1, dim=-2):
# mean effect of discrete features on stacking weights
mu = numpyro.sample("mu", dist.Normal(0, tau_mu))
# standard deviation effect of discrete features on stacking weights
sigma = numpyro.sample("sigma", dist.HalfNormal(scale=tau_sigma))
with numpyro.plate("Discrete features", d_discrete, dim=-1):
# effect of discrete features on stacking weights
tau = numpyro.sample("tau", dist.Normal(0, 1))
with numpyro.plate("Continuous features", d - d_discrete, dim=-1):
# effect of continuous features on stacking weights
beta_con = numpyro.sample("beta_con", dist.Normal(0, 1))
# effects of features on stacking weights
beta = numpyro.deterministic(
"beta", jnp.hstack([(sigma.squeeze() * tau.T + mu.squeeze()).T, beta_con])
)
assert beta.shape == (K - 1, d)
# stacking weights (in unconstrained space)
f = jnp.hstack([X @ beta.T, jnp.zeros((N, 1))])
assert f.shape == (N, K)
# log probability of LOO training scores weighted by stacking weights.
log_w = jax.nn.log_softmax(f, axis=1)
# stacking weights (constrained to sum to 1)
w = numpyro.deterministic("w", jnp.exp(log_w))
logp = jax.nn.logsumexp(lpd_point + log_w, axis=1)
numpyro.factor("logp", jnp.sum(logp))
if test:
# test set stacking weights (in unconstrained space)
f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
# test set stacking weights (constrained to sum to 1)
        numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
sampler = numpyro.infer.NUTS(stacking)
mcmc = numpyro.infer.MCMC(
sampler, num_chains=4, num_samples=1000, num_warmup=1000, progress_bar=False
)
mcmc.run(
jax.random.PRNGKey(17),
X=X_stacking_train,
d_discrete=4,
X_test=X_stacking_test,
exp_lpd_point=exp_lpd_point,
tau_mu=1.0,
tau_sigma=0.5,
test=True,
)
trace = mcmc.get_samples()
###Output
_____no_output_____
###Markdown
We can now extract the weights with which to weight the different models from the posterior, and then visualise how they vary across the training set.Let's compare them with what the weights would've been if we'd just used fixed stacking weights derived from the LOO scores.
###Code
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 6), sharey=True)
training_stacking_weights = trace["w"].mean(axis=0)
sns.scatterplot(data=pd.DataFrame(training_stacking_weights), ax=ax[0])
fixed_weights = pd.DataFrame(
np.repeat(
scipy.special.softmax(lpd_point.sum(axis=0))[:, np.newaxis].T,
len(X_stacking_train),
axis=0,
)
)
sns.scatterplot(
data=fixed_weights,
ax=ax[1],
)
ax[0].set_title("Training weights from Bayesian Hierarchical stacking")
ax[1].set_title("Fixed weights derived from lpd_point")
ax[0].set_xlabel("Index")
ax[1].set_xlabel("Index")
fig.suptitle(
"Bayesian Hierarchical Stacking weights can vary according to the input",
fontsize=18,
);
###Output
_____no_output_____
###Markdown
4. Evaluate on test set 4.1 Stack predictions Now, for each model, let's evaluate the log predictive density for each point in the test set. Once we have predictions for each model, we need to think about how to combine them, such that for each test point, we get a single prediction.We decided we'd do this in three ways:- Bayesian Hierarchical Stacking (`bhs_pred`);- choosing the model with the best training set LOO score (`model_selection_preds`);- fixed-weights stacking based on LOO scores (`fixed_weights_preds`).
###Code
# for each candidate model, extract the posterior predictive logits
train_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(19), k)
train_pred = predictive(rng_key, x=train_x_list[k])["logits"]
train_preds.append(train_pred.mean(axis=0))
# reshape, so we have (N, K)
train_preds = np.vstack(train_preds).T
# same as previous cell, but for test set
test_preds = []
for k in range(K):
predictive = numpyro.infer.Predictive(logistic, fit_list[k].get_samples())
rng_key = jax.random.fold_in(jax.random.PRNGKey(20), k)
test_pred = predictive(rng_key, x=test_x_list[k])["logits"]
test_preds.append(test_pred.mean(axis=0))
test_preds = np.vstack(test_preds).T
# get the stacking weights for the test set
test_stacking_weights = trace["w_test"].mean(axis=0)
# get predictions using the stacking weights
bhs_predictions = (test_stacking_weights * test_preds).sum(axis=1)
# get predictions using only the model with the best LOO score
model_selection_preds = test_preds[:, lpd_point.sum(axis=0).argmax()]
# get predictions using fixed stacking weights, dependent on the LOO score
fixed_weights_preds = (scipy.special.softmax(lpd_point.sum(axis=0)) * test_preds).sum(
axis=1
)
###Output
_____no_output_____
###Markdown
4.2 Compare methods Let's compare the negative log predictive density scores on the test set (note - lower is better):
###Code
fig, ax = plt.subplots(figsize=(12, 6))
neg_log_pred_densities = np.vstack(
[
-dist.Bernoulli(logits=bhs_predictions).log_prob(y_test),
-dist.Bernoulli(logits=model_selection_preds).log_prob(y_test),
-dist.Bernoulli(logits=fixed_weights_preds).log_prob(y_test),
]
).T
neg_log_pred_density = pd.DataFrame(
neg_log_pred_densities,
columns=[
"Bayesian Hierarchical Stacking",
"Model selection",
"Fixed stacking weights",
],
)
sns.barplot(
data=neg_log_pred_density.reindex(
columns=neg_log_pred_density.mean(axis=0).sort_values(ascending=False).index
),
orient="h",
ax=ax,
)
ax.set_title(
"Bayesian Hierarchical Stacking performs best here", fontdict={"fontsize": 18}
)
ax.set_xlabel("Negative mean log predictive density (lower is better)");
###Output
_____no_output_____ |
Notebooks/ArcGIS/R walkthrough ArcGIS Data analysis and ML.ipynb | ###Markdown
This is an end to end example of using ArcGIS with R(A shorter version of the similar one as in python but adapted to R) , the pyhton one is here https://github.com/Azure/DataScienceVM/blob/master/Notebooks/ArcGIS/Python%20walkthrough%20ArcGIS%20Data%20analysis%20and%20ML.ipynbFor better visualization, maps have been published into the GIS server with a public read access to display it in the notebook cell
###Code
#It might show warning e.g. "*** Please call arc.check_product() to define a desktop license." but can be disregarded
library(arcgisbinding)
###Output
*** Please call arc.check_product() to define a desktop license.
###Markdown
This should show sonme version info like:license 'Advanced'version '12.1.0.10257'path 'C:\\Program Files\\ArcGIS\\Pro\\'dll 'rarcproxy_pro'app 'ArcGIS Pro'pkg_ver '1.0.0.128'Do not proceed if this fails, means the arcgisbindings is not working
###Code
arc.check_product()
inputDir <- arc.open(path = 'C:/GISDemo/SeaGrass/SeaGrass.gdb/FloridaSeaGrass')
analysis_map_URL <- 'https://services3.arcgis.com/oZfKvdlWHN1MwS48/ArcGIS/rest/services/MachineLearningSeagrass/FeatureServer/1&source=sd'
url <- paste('http://www.arcgis.com/home/webmap/viewer.html?url=',analysis_map_URL)
display_url<- paste('<iframe src=','"',url, '"','width=100%, height=500></iframe>')
IRdisplay::display_html(display_url)
#Names of Prediction Variables
predictVars <- c('salinity', 'temp', 'phosphate','nitrate',
'silicate', 'dissO2', 'NameEMU')
#Name of Classification Variable
classVar <- 'Present'
#List of all Variables
allVars <- c(predictVars , classVar)
allVars
data <- arc.select(object = inputDir, fields = allVars)
head(data)
###Output
_____no_output_____
###Markdown
Now you’ll convert your R data frame into a spatial data frame object using the arc.data2sp() function. A spatial data frame object is one of the spatial data classes contained in the sp package. The sp package offers classes and methods for working with spatial data such as points, lines, polygons, pixels, rings, and grids. With this function, you can transfer all of the spatial attributes from your data, including projections, from ArcGIS into R without worrying about a loss of information. If you've never used the sp package, you need to install the sp package into your RStudio package library, and load the functions from the sp package into your workspace environment.
###Code
library(sp)
###Output
_____no_output_____
###Markdown
Use the arc.data2sp() function. For the first argument, use the enrich_select_df data frame as the object you are converting to an sp object.
###Code
#This will be used to write back to the Arc GIS DB
data_sp <- arc.data2sp(data)
#ignore any warning related to dummies package
install.packages("dummies")
library(dummies)
data<- dummy.data.frame(data_sp@data)
#head(data)
#Abbreviate Long Categorical Variable Names
newNames = c('c1','c2','c3')
names(data)[7:9]<-newNames
head(data)
# Get lower triangle of the correlation matrix
get_lower_tri<-function(cormat) {
cormat[upper.tri(cormat)] <- NA
return(cormat)
}
#
# Get upper triangle of the correlation matrix
get_upper_tri <- function(cormat) {
cormat[lower.tri(cormat)] <- NA
return(cormat)
}
#
reorder_cormat <- function(cormat) {
# Use correlation between variables as distance
dd <- as.dist((1-cormat) / 2)
hc <- hclust(dd)
cormat <- cormat [hc$order, hc$order]
}
#install.packages("reshape2")
library (reshape2)
#install.packages("ggplot2")
library (ggplot2)
#install.packages("ggmap")
library (ggmap)
corr_sub <- data[ c('salinity', 'temp', 'phosphate', 'nitrate', 'silicate', 'dissO2' , 'Present')]
cormax <- round (cor(corr_sub), 2)
upper_tri <- get_upper_tri (cormax)
melted_cormax <- melt (upper_tri, na.rm = TRUE)
cormax <- reorder_cormat (cormax)
upper_tri <- get_upper_tri (cormax)
melted_cormax <- melt (upper_tri, na.rm = TRUE)
ggheatmap <- ggplot (melted_cormax, aes (Var2, Var1, fill = value)) +
geom_tile(color = "white") +
scale_fill_gradient2 (low = "blue", high = "red", mid = "white", midpoint = 0, limit = c(-1,1), space = "Lab", name = "Pearson\nCorrelation") +
theme_minimal() + # minimal theme
theme (axis.text.x = element_text(angle = 45, vjust = 1, size = 12, hjust = 1)) +
coord_fixed()
#print (ggheatmap)
ggheatmap +
geom_text (aes (Var2, Var1, label = value), color = "black", size = 4) +
theme (
axis.title.x = element_blank(),
axis.title.y = element_blank(),
panel.grid.major = element_blank(),
panel.border = element_blank(),
axis.ticks = element_blank(),
legend.justification = c (1, 0),
legend.position = c (0.6, 0.7),
legend.direction = "horizontal") +
guides (fill = guide_colorbar (barwidth = 7, barheight = 1, title.position = "top", title.hjust = 0.5))
install.packages("caret")
install.packages("randomForest")
library(caret)
#Also convert Present to factor as it is integer type
data$Present <-as.factor(data$Present)
##PERFORM RANDOM FOREST CLASSIFICATION
trainIndex = createDataPartition(data$Present,
p=0.7, list=FALSE,times=1)
train = data[trainIndex,]
test = data[-trainIndex,]
nrow(train)
nrow(test)
library(randomForest)
model <- randomForest(Present ~ ., train,ntree=500)
summary(model)
pred <- predict(model, newdata = test)
table(pred, test$Present)
confusionMatrix(table(pred, test$Present))
###Output
_____no_output_____ |
Lessons-OOP/lesson_12a.ipynb | ###Markdown
1-minute introduction to Jupyter A Jupyter notebook consists of cells. Each cell contains either text or code.A text cell will not have any text to the left of the cell. A code cell has `In [ ]:` to the left of the cell.If the cell contains code, you can edit it. Press Enter to edit the selected cell. While editing the code, press Enter to create a new line, or Shift+Enter to run the code. If you are not editing the code, select a cell and press Ctrl+Enter to run the code. Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
--- Lesson 12a: Class inheritance, local imports
###Code
# This code cell loads a PEP8 linter
# Linting is the process of flagging programming errors,
# bugs, stylistic errors, and other code problems
# pycodestyle is a linter that highlights any syntax that
# is not PEP8-compliant.
%load_ext pycodestyle_magic
%pycodestyle_on
###Output
_____no_output_____
###Markdown
Class InheritanceWhile `class`es are really useful, we will find ourselves repeating a lot of code if we don’t think through and plan the code in advance. Let’s try writing a simple chess game. Before we start writing any code, it helps immensely to think about what we will need first.At the least, we will need:- a chess board- chess pieces - king - queen - rook - bishop - knight - pawn The pieces have one common characteristics:- a colour (white or black)6 different types of pieces. That means repeating our `__init__()`, `__repr__()`, and other methods _6 times_! And if we change our minds and decide to change one of those features, we will have to change it in 6 places … Is there any way we can reduce that repetition?Of course. Python lets us define classes _based on other classes_.Since all chess pieces have some common characteristics, let’s define a `BasePiece` class that represents a _generic_ chess piece. It will implement code that is common to all chess pieces:
###Code
class BasePiece:
def __init__(self, colour):
if type(colour) != str:
raise TypeError('colour argument must be str')
elif colour.lower() not in {'white', 'black'}:
raise ValueError('colour must be {white, black}')
else:
self.colour = colour
def __repr__(self):
return f'BasePiece({repr(self.colour)})'
###Output
_____no_output_____
###Markdown
Child classesOur child classes should have the attributes of the `BasePiece`, which is the parent class. We call them child classes because they are derived from the parent class. We say that the child class **inherits** the attributes and methods of the parent class.Let’s start with our first child class, the `King`:
###Code
class King(BasePiece):
pass
###Output
_____no_output_____
###Markdown
This child class inherits the `colour` attribute, as well as the `__init__()` and `__repr__()` methods from `BasePiece`.
###Code
k = King('white')
k
###Output
_____no_output_____
###Markdown
Class attributesHmm, that’s not so helpful now. Our `BasePiece` only needed a `colour` since it was a generic piece, but now that we are creating pieces of a specific type, `__repr__()` should also return the piece name.We will need to **override** the `__repr__()` method of `BasePiece` by defining a new one for `King`. But how do we create a `name` attribute for `King` without going through the `__init__()` method?We can set it as a **class attribute** instead.Notice that the `colour` attribute of `BasePiece` was set only in `__init__()`. The `BasePiece` class **does not** actually have this attribute, only its instances have it:
###Code
b = BasePiece('white')
print(f'b.colour: {b.colour}')
print(f'BasePiece.colour: {BasePiece.colour}')
###Output
_____no_output_____
###Markdown
The `King` class, on the other hand, represents a `king` piece, whether it has been instantiated or not. I should be able to do this: >>> King.name 'king' So let’s go ahead and give the `King` class a `name` class attribute, and a new `__repr__()` method.
###Code
class King(BasePiece):
# define a `name` class attribute for the King class
name = 'king'
def __repr__(self):
# define the __repr__() method for the King class
k = King('white')
print(k)
# AUTOGRADING: test class attribute and __repr__()
assert King.name == 'king', \
'`name` attribute wrongly defined for King class'
assert repr(King('white')) == "King('white')", \
"__repr__() method wrongly defined for King class"
###Output
_____no_output_____
###Markdown
Task 1Later, we are going to need a `__str__()` method to produce a simple description (e.g. `'white king'`) too. That's a simple combination of the `colour` and `name` attributes, and it would be tedious to repeat the `__str__()` definition for all the piece classes.So let's define it in `BasePiece` instead, using `try-except` to catch the `AttributeError` if the `name` attribute is not found:
###Code
class BasePiece:
def __init__(self, colour):
if type(colour) != str:
raise TypeError('colour argument must be str')
elif colour.lower() not in {'white', 'black'}:
raise ValueError('colour must be {white, black}')
else:
self.colour = colour
def __repr__(self):
return f'BasePiece({repr(self.colour)})'
def __str__(self):
try:
# define __str__() to return a simple description
# e.g. 'white king', 'black queen'
        except AttributeError:
return f'{self.colour} piece'
class King(BasePiece):
name = 'king'
def __repr__(self):
return f'King({repr(self.colour)})'
k = King('white')
print(k)
# Test cell to check your code
assert str(King('white')).strip() == 'white king', \
"__str__() method wrongly defined for King class"
###Output
_____no_output_____
###Markdown
There, `King` is working much better now. Instance attributes and attribute overridingWait, why is `self.name` able to be used when we didn’t set it in `__init__()`? That’s because class attributes are available to all its instances. If you set the `name` attribute of an instance, it will override its class attribute (note that the class attribute is not deleted). And if you delete the instance attribute, the class attribute will be used again:
###Code
k.name = 'da king'
# Uses instance attribute
print(f'After overriding class attribute: {k.name}')
del k.name
# Back to class attribute
print(f'After removing instance attribute: {k.name}')
###Output
_____no_output_____
###Markdown
Lets go ahead and make `Board` first, before we come back to look at the other pieces. Making `Board`Our chess board needs to have an 8×8 grid. It also needs to have a way to keep track of which piece is on which square of the grid. How should we go about doing this?A newcomer might think of creating an 8-by-8 nested list, like this:
###Code
board = []
for x in range(8):
none_row = [None]*8
board.append(none_row)
board[0][4] = King('black')
board[7][4] = King('white')
board
###Output
_____no_output_____
###Markdown
But then you are going to have a hard time tracking their positions; how are you going to find `King('black')` after it has moved? for x in range(8): for y in range(8): piece = board[x][y] if piece.name == 'king' and piece.colour = 'black': That could work, but it’s so inefficient. You have 64 board positions, and 32 board pieces. How complex is your code going to have to be?It would be easier instead to just store the positions of the pieces. Lets think about our ideal code. We would like to be able to get the positions of each chess piece this way: >>> b = Board() instantiates a game board >>> b.position_of_piece('white king') indexes start from 0 (7, 4) It would also be helpful if we could examine which piece was at a particular position: >>> b.piece_at_position(7, 4) 'white king' Then it looks like each piece is **mapped** to a position. Sounds familiar? Perhaps we could use a `dict` to map each chess piece to a position!Oh man, there’s no easy way around this. We are going to have to write code to set the initial position of each piece. Task 2Before you run the cell below, time to define the other classes first so this will work:
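As a tiny illustration of that idea (a hypothetical snippet, not part of the lesson code), a `dict` keyed by coordinate tuples behaves exactly the way we want: >>> positions = {(4, 0): 'white king'} >>> positions[(4, 0)] 'white king' >>> (4, 0) in positions True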
###Code
# Define the other chess piece classes and their reprs here:
###Output
_____no_output_____
###Markdown
Now that we have our pieces, we can start to set them up on the playing board. We will map each piece to a specific position on the board. `dict`s accept any immutable object as a key, so we will use tuples as the key for each piece; the piece object itself is the value.We also need a method, `display()`, to print the entire board, so that we can see what is going on.
###Code
# Run this cell to update `Board` and generate
# a playing position with starting positions
class Board:
def __init__(self):
pass
def start(self):
        self.position = {}
colour = 'black'
self.position[(0, 7)] = Rook(colour)
self.position[(1, 7)] = Knight(colour)
self.position[(2, 7)] = Bishop(colour)
self.position[(3, 7)] = Queen(colour)
self.position[(4, 7)] = King(colour)
self.position[(5, 7)] = Bishop(colour)
self.position[(6, 7)] = Knight(colour)
self.position[(7, 7)] = Rook(colour)
for x in range(0, 8):
self.position[(x, 6)] = Pawn(colour)
colour = 'white'
self.position[(0, 0)] = Rook(colour)
self.position[(1, 0)] = Knight(colour)
self.position[(2, 0)] = Bishop(colour)
self.position[(3, 0)] = Queen(colour)
self.position[(4, 0)] = King(colour)
self.position[(5, 0)] = Bishop(colour)
self.position[(6, 0)] = Knight(colour)
self.position[(7, 0)] = Rook(colour)
for x in range(0, 8):
self.position[(x, 1)] = Pawn(colour)
def display(self):
'''
Displays the contents of the board.
Each piece is represented by two letters.
First letter is the colour (W for white, B for black).
Second letter is the name (Starting letter for each piece).
'''
# Write your code here
b = Board()
b.start()
b.position
###Output
_____no_output_____
###Markdown
That’s all the _information_ we need about the pieces. But we will need more _methods_ in the process of programming the chess game. We will continue that in **Lesson 12b**.For now, we have quite a lot of code scattered across many cells. Let’s put them all into a single file so that it is easier to manage. In Python, this is known as making a **module**. Task 1: Making a `chess` (single-file) moduleCopy the latest code for `Board`, `BasePiece`, and each chess piece class (`King`,`Queen`,`Bishop`,`Knight`,`Rook`,`Pawn`) into `chess.py`, overriding the old definitions. Local importsBesides the standard Python libraries, you can also install other libraries. The most common way of doing this is through the Package Installer for Python, also known as `pip`. It acts like an “app store” for Python, except it has python libraries instead of apps.You can’t run `pip` on the school laptops as you don’t have administrator permissions, but on your own laptop you can do so. We will look at `pip` use once we begin on group-based projects. For now, let’s look at a related concern: how do you import a library you wrote yourself, but which is not available on `pip`? Importing from another Python file (`.py`) in the same directoryA library can be very simple; nothing more than another `.py` file containing functions and classes. It can also be very complex, consisting of multiple layers of files and directories, possibly even requiring installation.We have just created a`chess` module, inside `chess.py`, in the same directory as this Jupyter Notebook.Let’s import those objects into this notebook.
###Code
from chess import Board, BasePiece, King, Queen, Bishop, Knight, Rook, Pawn
b = Board()
b.field
###Output
_____no_output_____
###Markdown
For modules with many more objects, it can get tedious to list every single class and function. In those cases, to keep the names clear (remember how hard it is to name things?), we simply import the module. The classes (and any functions) are available with the `module.class` (or `module.function`) syntax:
###Code
import chess
board = chess.Board()
board.field
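# (Added tip) if chess.py is edited after it has been imported, Python keeps
# serving the cached module object; reloading it picks up the changes without
# restarting the kernel.
import importlib
importlib.reload(chess)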
###Output
_____no_output_____
###Markdown
Very similar to your normal `import`s, right?Python will first search in the directory to see if there is a file or library named `chess`. If it doesn’t exist, then Python will search in a list of directories to see if there are any libraries or modules named `chess`. This list of directories is known as the **system path**.
###Code
import sys
print('Directories in system path:')
for path in sys.path:
print(path)
###Output
_____no_output_____
###Markdown
If nothing named `chess` is found in any of these places, Python raises a `ModuleNotFoundError`. For example:
###Code
import chess2
###Output
_____no_output_____ |
JNotebooks/tutorial01-python.ipynb | ###Markdown
Introduction to PythonPython is a powerful programming language commonly used in image processing and machine learning applications. Most of the deep learning (hot-topic on machine learning) libraries have a Python interface. This tutorial is meant to give you an introduction to the Python programming language.The goal of this tutorial is to introduce the basics of Python (we use Python 3), such as:- Python data types,- Flow control commands (if, while, for,...),- Declaring functions in Python. Python variable typesPython is a high level, object oriented, and interpreted programming language. It has the following characteristics:- No need to pre-declare variables and their types;- Blocks, such as "if", "for" are delimited by code indentation and not delimiters like "{}" "BEGIN...END";- It has high level data types: strings, lists, tuples, dictionaries, classes...Python is a modern language suitable both for scientific and non-scientific applications. For scientific applications that involve numerical computations, it has a very powerful package for processing multidimensional arrays called *NumPy*, which we will learn more about in our next tutorial. In its native form, Python supports the following variable types:| Variable type | Description | Syntax example ||---------------|---------------------------------------------|----------------------|| *int* | Integer variable | a = 103458 || *float* | Floating point variable | pi = 3.14159265 || *bool* | *boolean* variable - *True* or *False* | a = False || *complex* | Complex number variable | c = 2+3j || *str* | UNICODE characters variable | a = "Example" || *list* | Heterogeneous list (any type of elements) | my_list = [4,'me',1] || *tuple* | Heterogeneous tuple (values can't change) | my_tuple = (1,'I',2) || *dict* | Associative set of values | dic = {'me':1,'you':2} | Numerical Types- Declaring integer, boolean, floating point and complex variables and doing some simple operations, like:
###Code
a = 3
print(type(a))
b = 3.14
print(type(b))
c = 3 + 4j
print(type(c))
d = False
print(type(d))
print(a + b)
print(b * c)
print(c / a)
###Output
<class 'int'>
<class 'float'>
<class 'complex'>
<class 'bool'>
6.140000000000001
(9.42+12.56j)
(1+1.3333333333333333j)
###Markdown
Notice that when performing operations with variables of different types, Python converts the variables to a suitable type according to the following hierarchy: complex > floating point > integer. Dividing two integers with `/` always results in a floating point number; use `//` for floor division if you want an integer result. Sequential typesPython has three main sequential types: lists, tuples and strings. StringsStrings can be declared both using single quotation (') and double quotation ("). Strings are immutable vectors of characters. The size of a string can be computed using the command *len*.
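As a quick check of the division rule just mentioned (shown before we turn to strings):
###Code
print(7 / 2)   # true division between two ints -> 3.5, a float
print(7 // 2)  # floor division -> 3, an int
print(type(7 / 2), type(7 // 2))
###Output
_____no_output_____
###Markdown
Now, the string examples announced above: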
###Code
name1 = 'Faraday' # Single quotation
name2 = "Maxwell" #Double quotation
print('Type:', type(name1), '\nName1:', name1, '\nLength:', len(name1))
###Output
Type: <class 'str'>
Name1: Faraday
Length: 7
###Markdown
It is possible to access a character in a specific position of a string by indexing its position. The first element of the string has the index 0. It is also possible to use negative indexes. For instance, -1 corresponds to the last element of the string.
###Code
print('First character of ', name1, ' is: ', name1[0])
print('The last character of ', name1, ' is: ', name1[-1])
print('String multiplication replicates the string:', 3 * name1)
###Output
First character of Faraday is: F
The last character of Faraday is: y
String multiplication replicates the string: FaradayFaradayFaraday
###Markdown
ListsA list is a sequence of elements that may be of different types. The elements can be indexed, altered, and operations can be performed on them. Lists are defined by [ ] and the elements are separated by commas.
###Code
list1 = [1, 1.1, 'one']
list2 = [3+4j, list1] # Other lists can be elements of a list!
print('list1 type=', type(list1))
print('list2 type=', type(list2))
list2[1] = 'Faraday' # list elements can be altered
print('list2=', list2)
list3 = list1 + list2 # Concatenates 2 lists
print('list3=',list3)
print('List multiplication replicates the list:',2*list3)
###Output
list1 type= <class 'list'>
list2 type= <class 'list'>
list2= [(3+4j), 'Faraday']
list3= [1, 1.1, 'one', (3+4j), 'Faraday']
List multiplication replicates the list: [1, 1.1, 'one', (3+4j), 'Faraday', 1, 1.1, 'one', (3+4j), 'Faraday']
###Markdown
TuplesA tuple is similar to a list, but its values can not be altered. A tuple is defined by () and its elements are delimited by commas.**Note:** Tuples are very important, because many functions of the NumPy library receive tuples as input arguments.
###Code
#Declaring tuples
tuple1 = () # empty tuple
tuple2 = ('Gauss',) # One element tuple. Pay attention to the trailing comma!
tuple3 = (1.1, 'Ohm', 3+4j)
tuple4 = 3, 'aqui', True
print('tuple1=',tuple1)
print('tuple2=', tuple2)
print('tuple3=',tuple3)
print('tuple4=', tuple4)
print('tuple3 type=', type(tuple3))
# tuple3[0] = "reset"  # uncommenting this line raises a TypeError, because tuples are immutable
###Output
tuple1= ()
tuple2= ('Gauss',)
tuple3= (1.1, 'Ohm', (3+4j))
tuple4= (3, 'aqui', True)
tuple3 type= <class 'tuple'>
###Markdown
Slicing Sequential TypesSlicing is an operation that selects a subset of the elements of a sequential type variable. See the examples below:
###Code
s = 'abcdefg'
print('s=',s)
print('s[0:2] =', s[0:2]) # Characters between [0,1]
print('s[2:5] =', s[2:5]) # Characters between [2,4]
###Output
s= abcdefg
s[0:2] = ab
s[2:5] = cde
###Markdown
When a slice starts at the first element of the sequence, or ends at its last element, those indices can be omitted from the slicing syntax:
###Code
s = 'abcdefg'
print('s=',s)
print('s[:2] =', s[:2]) # Characters between [0,1]
print('s[2:] =', s[2:]) # Characters between [2,last element]
print('s[-2:] =', s[-2:]) # Last 2 elements
###Output
s= abcdefg
s[:2] = ab
s[2:] = cdefg
s[-2:] = fg
###Markdown
Observe that the initial index is included in the slice, while the last element is not included. Therefore, s[:i] + s[i:] is equal to s.Slicing allows a third parameter, which is the step. If you are familiar with the C programming language, the 3 slicing parameters are similar to the *for* command in C:|Command *for* | *slicing* ||-----------------------------------------|-----------------------||`for (i=begin; i < end; i += step) a[i]` | `a[begin:end:step]` |See some slicing examples below:
###Code
s = 'abcdefg'
print('s=',s)
print('s[2:5]=', s[2:5])
print('s[0:5:2]=',s[0:5:2])
print('s[::2]=', s[::2])
print('s[:5]=', s[:5])
print('s[3:]=', s[3:])
print('s[::-1]=', s[::-1])
###Output
s= abcdefg
s[2:5]= cde
s[0:5:2]= ace
s[::2]= aceg
s[:5]= abcde
s[3:]= defg
s[::-1]= gfedcba
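###Markdown
Slicing is not limited to strings; exactly the same syntax works on lists (and, later, on NumPy arrays). A quick illustration:
###Code
nums = [10, 20, 30, 40, 50]
print('nums[1:4] =', nums[1:4])
print('nums[::-1] =', nums[::-1])
###Output
_____no_output_____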
###Markdown
The slicing concept is essential to become a good Python/NumPy programmer. It is applicable to strings, lists, tuples and NumPy arrays. Unpacking Sequential TypesSequential types can be unpacked using assignment operation. See the example below:
###Code
s = "abc"
s1,s2,s3 = s
print('s1:',s1)
print('s2:',s2)
print('s3:',s3)
list1 = [1,2,3]
t = 8,9,True
print('list1=',list1)
list1 = t
print('list1=',list1)
(_,a,_) = t
print('a=',a)
###Output
s1: a
s2: b
s3: c
list1= [1, 2, 3]
list1= (8, 9, True)
a= 9
###Markdown
Formatting a string for printingA string can be formatted using a similar syntax as the one used by the sprintf function from C/C++. %d stands for integers, %f for floating point variables, and %s for strings. See if you can understand the example below:
###Code
s = 'Formatting strings. Integer:%d, float:%f, string:%s' % (5, 3.2, 'hello')
print(s)
###Output
Formatting strings. Integer:5, float:3.200000, string:hello
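###Markdown
The same output can be produced with the newer *str.format* method or with f-strings (Python 3.6+), which are often easier to read; shown here as a side note:
###Code
print('Formatting strings. Integer:{}, float:{:f}, string:{}'.format(5, 3.2, 'hello'))
print(f'Formatting strings. Integer:{5}, float:{3.2:f}, string:{"hello"}')
###Output
_____no_output_____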
###Markdown
Other data typesOther data types not so commonly used in our applications are the sets and dictionaries. DictionaryDictionaries can be seen as associative lists. Instead of being associated with numerical indexes, each of its elements is associated with a unique key-word.See below how to declare a dictionary, access its elements and list its keys.
###Code
dict1 = {'blue':135,'green':12.34,'red':'ocean'} # dictionary declaration
print(type(dict1))
print(dict1)
print(dict1['blue'])
print(dict1.keys()) # Dictionary keys
del dict1['blue'] # Deleting a dictionary element
print(dict1.keys()) # Dictionary keys after deleting 'blue'
###Output
<class 'dict'>
{'blue': 135, 'green': 12.34, 'red': 'ocean'}
135
dict_keys(['blue', 'green', 'red'])
dict_keys(['green', 'red'])
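###Markdown
Dictionaries can also be iterated over directly; *items()* yields key/value pairs. A quick illustration with the dictionary above (remember that 'blue' was deleted in the previous cell):
###Code
for key, value in dict1.items():
    print(key, '->', value)
###Output
_____no_output_____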
###Markdown
SetsSets are collections of elements with no clear ordering. Sets' elements are always unique.See below how to declare a set variable and perform some simple operations.
###Code
list1 = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
list2 = ['red', 'blue', 'green','red','red']
set1 = set(list1) # Defining a set
set2 = set(list2)
print(set1) # Repeated elements are counted only once
print(type(set1))
print(set1 | set2) # Union of the 2 sets
###Output
{'banana', 'pear', 'apple', 'orange'}
<class 'set'>
{'banana', 'pear', 'apple', 'orange', 'green', 'red', 'blue'}
###Markdown
The importance of Indentation on the Python LanguageUnlike other programming languages, Python does not use *begin* and *end*, or {, } to delimit its code blocks (if, for, while, etc.). Python uses code indentation to determine which commands are encompassed within a block. Therefore, indentation is fundamental in Python.
###Code
#Example1: The last print command writes to the output
#independently of the value of x
x = -1
if x<0:
print('x is smaller than zero!')
elif x==0:
print('x is equal to zero')
else:
print ('x is greater than zero!')
print ('This sentence is written regardless of the value of x')
#Example2: The last 2 print commands are within the last "else" block,
# therefore they are only executed if x is greater than zero
if x<0:
print('x is smaller than zero!')
elif x==0:
    print('x is equal to zero')
else:
    print('x is greater than zero!')
    print('This sentence is written only if x > 0')
###Output
x is smaller than zero!
This sentence is written regardless of the value of x
x is smaller than zero!
###Markdown
Loops forLooping through a list of strings:
###Code
browsers = ["Safari", "Firefox", "Google Chrome", "Opera", "IE"]
for browser in browsers:
print(browser)
###Output
Safari
Firefox
Google Chrome
Opera
IE
###Markdown
Looping through a list of integers:
###Code
numbers = [1,10,20,30,40,50]
total = 0  # avoid shadowing the built-in sum()
for number in numbers:
    total = total + number
print(total)
###Output
151
###Markdown
Looping through the characters of a string:
###Code
word = "computer"
for letter in word:
print(letter)
###Output
c
o
m
p
u
t
e
r
###Markdown
Looping through a sequence of numbers with the help of the range function:
###Code
for a in range(21,-1,-2):
print(a)
###Output
21
19
17
15
13
11
9
7
5
3
1
###Markdown
whileThe loop executes until a stop condition is reached.
###Code
browsers = ["Safari", "Firefox", "Google Chrome", "Opera", "IE"]
i = 0
while i < len(browsers) and browsers[i] != "Opera": # 2 conditions checked in order to continue
print(browsers[i])
i = i + 1
###Output
Safari
Firefox
Google Chrome
###Markdown
Nested LoopsIt is possible to nest loops in Python. Notice that in order to do that, indentation is really important!
###Code
for x in range(1, 4):
for y in range(1, 3):
print('%d * %d = %d' % (x, y, x*y))
print('Inside the first for loop, but out of the second')
###Output
1 * 1 = 1
1 * 2 = 2
Inside the first for loop, but out of the second
2 * 1 = 2
2 * 2 = 4
Inside the first for loop, but out of the second
3 * 1 = 3
3 * 2 = 6
Inside the first for loop, but out of the second
###Markdown
Functions Function Declaration SyntaxPython functions are declared using the keyword *def* followed by the function name, the list of input parameters between parentheses, and an ending colon (:).Look at the example below where a function is defined to perform the + operation between two elements and return the result.
###Code
def sum1( x, y):
s = x + y
return s
###Output
_____no_output_____
###Markdown
Here is how to call the function:
###Code
r = sum1(50, 20)
print(r)
###Output
70
###Markdown
Function ParametersThere are 2 kinds of function parameters: positional and key-word. The positional parameters are identified by their position in the sequence of parameters. The key-word parameters are identified by their name. Key-word parameters come with a default value, so you do not have to always pass them as an input when calling the function. See the example below with 2 positional and 1 key-word parameters.
###Code
def sum2( x, y, squared=False):
if squared:
s = (x + y)**2
else:
s = (x + y)
return s
###Output
_____no_output_____
###Markdown
See examples of the function calling:
###Code
print('sum2(2, 3):', sum2(2, 3))
print('sum2(2, 3, False):', sum2(2, 3, False))
print('sum2(2, 3, True):', sum2(2, 3, True))
print('sum2(2, 3, squared= True):', sum2(2, 3, squared= True))
###Output
sum2(2, 3): 5
sum2(2, 3, False): 5
sum2(2, 3, True): 25
sum2(2, 3, squared= True): 25
|
notebooks/Python_misc_Pandas.ipynb | ###Markdown
Pandasの使い方 (基礎) ```Pandas```は、データ分析のためのライブラリで 統計量を計算・表示したり、それらをグラフとして可視化出来たり データサイエンスや機械学習などで必要な作業を簡単に行うことができます。Numpyや機械学習ライブラリなどに入れるデータの前処理などにもよく用いられます。まずはインポートしましょう。```pd```という名前で使うのが慣例です。
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
DataFrame型 DataFrameは二次元のデータを表現するのに利用され 各種データ分析などで非常に役にたちます。
###Code
from pandas import DataFrame
###Output
_____no_output_____
###Markdown
以下の辞書型をDataFrame型のオブジェクトに変換してみましょう。
###Code
data = { '名前': ["Aさん", "Bさん", "Cさん", "Dさん", "Eさん"],
'出身都道府県':['Tokyo', 'Tochigi', 'Hokkaido','Kyoto','Tochigi'],
'生年': [ 1998, 1993,2000,1989,2002],
'身長': [172, 156, 162, 180,158]}
df = DataFrame(data)
print("dataの型", type(data))
print("dfの型",type(df))
###Output
dataの型 <class 'dict'>
dfの型 <class 'pandas.core.frame.DataFrame'>
###Markdown
jupyter環境でDataFrameを読むと、"いい感じ"に表示してくれる
###Code
df
###Output
_____no_output_____
###Markdown
printだとちょっと無機質な感じに。
###Code
print(df)
###Output
名前 出身都道府県 生年 身長
0 Aさん Tokyo 1998 172
1 Bさん Tochigi 1993 156
2 Cさん Hokkaido 2000 162
3 Dさん Kyoto 1989 180
4 Eさん Tochigi 2002 158
###Markdown
```info()```関数を作用させると、詳細な情報が得られる。 列ごとにどんな種類のデータが格納されているのかや、メモリ使用量など表示することができる。
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 名前 5 non-null object
1 出身都道府県 5 non-null object
2 生年 5 non-null int64
3 身長 5 non-null int64
dtypes: int64(2), object(2)
memory usage: 288.0+ bytes
###Markdown
DataFrameの要素を確認・指定する方法 index: 行方向のデータ項目(おもに整数値(行番号),ID,名前など) columns: 列方向のデータの項目(おもにデータの種類) をそれぞれ表示してみよう。
###Code
df.index
df.columns
###Output
_____no_output_____
###Markdown
行方向を、整数値(行数)ではなく名前にしたければ
###Code
data1 = {'出身都道府県':['Tokyo', 'Tochigi', 'Hokkaido','Kyoto','Tochigi'],
'生年': [ 1998, 1993,2000,1989,2002],
'身長': [172, 156, 162, 180,158]}
df1 = DataFrame(data1)
df1.index =["Aさん", "Bさん", "Cさん", "Dさん", "Eさん"]
df1
###Output
_____no_output_____
###Markdown
などとしてもよい。 特定の列を取得したい場合
###Code
df["身長"]
###Output
_____no_output_____
###Markdown
とする。 以下の方法は非推奨とする。
###Code
df.身長
###Output
_____no_output_____
###Markdown
値のリスト(正確にはnumpy.ndarray型)として取得したければ
###Code
df["身長"].values
df["出身都道府県"].values
###Output
_____no_output_____
###Markdown
などとすればよい。慣れ親しんだ形に変換したければ、リストに変換すればよい
###Code
list(df["出身都道府県"].values)
###Output
_____no_output_____
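###Markdown
なお、数値列に対しては平均などの統計量も簡単に計算できる(簡単な例):
###Code
print(df["身長"].mean())  # 身長の平均
df.describe()  # 数値列(生年・身長)の基本統計量
###Output
_____no_output_____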
###Markdown
ある列が特定のものに一致するもののみを抽出するのも簡単にできる
###Code
df[df["出身都道府県"]=="Tochigi"]
###Output
_____no_output_____
###Markdown
これは
###Code
df["出身都道府県"]=="Tochigi"
###Output
_____no_output_____
###Markdown
が条件に合致するかどうかTrue/Falseの配列になっていて、 df[ [True/Falseの配列] ]とすると、Trueに対応する要素のみを返す フィルターのような役割になっている。 列の追加
###Code
#スカラー値の場合"初期化"のような振る舞いをする
df["血液型"] = "A"
df
#リストで追加
df["血液型"] = [ "A", "O","AB","B","A"]
df
###Output
_____no_output_____
###Markdown
特定の行を取得したい場合 たとえば、行番号がわかっているなら、```iloc```関数を使えば良い
###Code
df.iloc[3]
###Output
_____no_output_____
###Markdown
値のみ取得したければ先程と同様
###Code
df.iloc[3].values
###Output
_____no_output_____
###Markdown
また、以下のような使い方もできるが
###Code
df[1:4] #1から3行目まで
###Output
_____no_output_____
###Markdown
```df[1]```といった使い方は出来ない。 より複雑な行・列の抽出 上にならって、2000年より前に生まれた人だけを抽出し
###Code
df[ df["生年"] < 2000 ]
###Output
_____no_output_____
###Markdown
さらにこのうち身長が170cm以上の人だけがほしければ
###Code
df[(df["生年"] < 2000) & (df["身長"]>170)]
###Output
_____no_output_____
###Markdown
などとすればよい。 他にも、```iloc```,```loc```などを用いれば、特定の行・列を抽出することができるちなみに、```iloc```は番号の指定のみに対応,```loc```は名前のみ。**欲しい要素の数値もしくは項目名のリスト**を、行、列2ついれてやればよい。
###Code
df.iloc[[0], [0]] #0行目,0列目
#スライスで指定することもできる
df.iloc[1:4, :3] #1-3行目かつ0-2列目 (スライスの終点は含まれないことに注意)
#スライスの場合は、 1:4が[1,2,3]と同じ働きをするので、括弧[]はいらない
###Output
_____no_output_____
###Markdown
```loc```を使う場合は、indexの代わりに項目名で指定する。
###Code
df.loc[1:4,["名前","身長"]]
df.loc[[1,2,3,4],"名前":"生年"]
###Output
_____no_output_____
###Markdown
といった具合。```loc```を使う場合、1:4や[1,2,3,4]は indexのスライスではなく、項目名を意味し Eさんのデータも含まれている。 Webページにある表をDataFrameとして取得する ```pandas```内の```read_html```関数を用いれば、 Webページの中にある表をDataFrame形式で取得することもできます。以下では例としてWikipediaの[ノーベル物理学賞](https://ja.wikipedia.org/wiki/%e3%83%8e%e3%83%bc%e3%83%99%e3%83%ab%e7%89%a9%e7%90%86%e5%ad%a6%e8%b3%9e)のページにある、受賞者一覧を取得してみましょう
###Code
url = "https://ja.wikipedia.org/wiki/%e3%83%8e%e3%83%bc%e3%83%99%e3%83%ab%e7%89%a9%e7%90%86%e5%ad%a6%e8%b3%9e"
tables = pd.read_html(url)
print(len(tables))
###Output
21
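###Markdown
目的の表を探すときは、各DataFrameの形(shape)を一覧するのが手軽である(簡単な例):
###Code
for i, t in enumerate(tables):
    print(i, t.shape)
###Output
_____no_output_____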
###Markdown
ページ内に、21個もの表があることがわかります。 (ほとんどはwikipediaのテンプレート等)たとえば、2010年代の受賞者のみに興味がある場合は
###Code
df = tables[12]
df
###Output
_____no_output_____
###Markdown
DataFrameのcsv/Excelファイルへの書き出し DataFrameオブジェクトは、```pandas```内の関数を用いれば、 簡単にcsvやExcelファイルとして書き出すことができます。先程の、2010年代のノーベル物理学賞受賞者のデータを、 Google Driveにファイルとして書き出してみましょう。
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
**csvとして書き出す場合**適当にパスを指定して、DataFrameオブジェクトに ```to_csv```関数を作用させます。
###Code
df.to_csv("/content/drive/My Drive/AdDS2021/pd_write_test.csv")
###Output
_____no_output_____
###Markdown
**Excelファイルとして書き出す場合**この場合も同様で、```to_excel```関数を用います。
###Code
df.to_excel("/content/drive/My Drive/AdDS2021/pd_write_test.xlsx")
###Output
_____no_output_____
###Markdown
上記の関数内で文字コードを指定することもできます。 例: ```encoding="utf-8_sig"```, ```encoding="shift_jis"``` Pandasで複雑なエクセルファイルを操作する Pandasにはread_excel()という関数が用意されていて、 多数のシートを含むようなエクセルファイルを開くことも出来る。まずは必要なモジュールをインポートしよう。
###Code
!pip install xlrd #xlrdモジュールのインストール
import xlrd
import pandas as pd
from pandas import DataFrame
import urllib.request
###Output
_____no_output_____
###Markdown
今まではGoogle Driveにいれたファイルを読み出していたが、 Webから直接xlsxファイルを読み込んでみよう。
###Code
url = "https://www.mext.go.jp/content/20201225-mxt_kagsei-mext_01110_012.xlsx"
f = urllib.request.urlopen(url)
#ワークブック(作業するエクセルファイル)をwbという変数名で開く. 文字コードはutf-8と仮定した(shift-jisのものがたまにあるので注意)
wb = xlrd.open_workbook(file_contents=f.read(),encoding_override="utf-8")
f.close()
###Output
_____no_output_____
###Markdown
ブック内のシートの一覧は以下のように取得できる。
###Code
print("シート名の一覧", wb.sheet_names())
###Output
_____no_output_____
###Markdown
シートを指定するのは、インデックスかシート名の文字列で行う。"1 穀類"を使うことにして、 pandasにあるread_excel関数を使ってみよう。(他にもxlrdの関数を使って読む方法などもある)
###Code
df = pd.read_excel(wb,sheet_name=0) #excelの指定したシートを読んで、DataFrameとして変数dfに格納
print(df)
ndf = pd.read_excel(wb,sheet_name="1 穀類")
print(ndf)
###Output
_____no_output_____
###Markdown
同じものが得られている。 データの整形次に、今取得したデータフレームのままでは少々扱い辛いので"整形"を考える。 というのも前から4行ほど表示してみると...
###Code
df[0:4]
###Output
_____no_output_____
###Markdown
最初の4行ほどに栄養素等の情報が入っているのだが、 セルが結合されたりしているため、所々にNaNが入っていたりして見辛い。(碁盤目の構造を破壊してしまうため「セルの結合」は機械的な処理と やや相性が悪く、プログラミングを用いたデータ分析では嫌われる)各省庁の公開データのフォーマットの統一化は今後に期待することにして... まず以下の項目に該当する列だけを抽出する事を考える。
###Code
targets = ["食品名", "エネルギー","たんぱく質", "脂質", "炭水化物"]
###Output
_____no_output_____
###Markdown
該当するデータがどの行・列に格納されているかをコードで指定するのは、 前述のファイル構造の事情からやや面倒くさい。 以下では、その場しのぎ的ではあるが、 興味のある量が何番目かを指定してまとめてみることにしよう。そのために、1行目の要素を表示してみよう。
###Code
#1行目(エクセルだと2行目)の要素を表示してみる
print(list(df.iloc[0].values))
#半角空白, 全角空白(\u3000)や改行コード\nを取り除いたリストを作って表示してみる
tlist = list(map( lambda s: str(s).replace("\u3000","").replace("\n","").replace(" ",""),df.iloc[0].values))
print(tlist)
###Output
_____no_output_____
###Markdown
セルの結合により、興味のあるデータがどの列に記述されているかは注意が必要。 実際、[エネルギー]という文字列は1行目の6列目(それぞれインデックスでいうと0,5)で取得できるが、 kJ単位になっていて、kcal単位でほしければ、7列目に格納された値が必要になる。 また、エクセルファイルを見るとわかるように、たんぱく質・脂質・炭水化物はさらに細分化されており、 O列R列など、細かい列の分割が挿入されている. ~~これは大変困る~~単純にたんぱく質・脂質・炭水化物と表記されている列のインデックスはそれぞれ9,12,20となる。 食品名が格納されている列(3)、エネルギー[kJ単位] (6)と合わせて確認してみよう。
###Code
df.iloc[:,[3,6,9,12,20]]
###Output
_____no_output_____
###Markdown
もう少し整形したいので、新しいデータフレームのコラムを書き換える。食品名等が記載されているのは10行目以降なので、それを使い columnを指定する。さらに、食品名に含まれる余分な文字コードも削除しておこう。
###Code
ndf = df.iloc[:,[3,6,9,12,20]]
ndf = ndf.iloc[10:,:]
ndf.columns=["食品名","エネルギー(kcal)","たんぱく質(g)","脂質(g)","炭水化物(g)"]
ndf["食品名"] = ndf["食品名"].str.replace("\u3000"," ") # 食品名の中にある余分な全角空白(\u3000)を半角スペースに置き換える
ndf
###Output
_____no_output_____
###Markdown
次に、食品名の一覧を取得した後、興味のあるもの(日常的に馴染みのあるもの)だけを ピックアップしてみよう。
###Code
print(list(ndf["食品名"]))
###Output
_____no_output_____
###Markdown
この中から...* こむぎ[パン類]食パンリッチタイプ* こむぎ[パン類]フランスパン* こめ[水稲軟めし]精白米* そばそばゆで* こむぎ[うどん・そうめん類]うどんゆでのみに興味があれば
###Code
tshokuhin = ["こむぎ [パン類] 食パン リッチタイプ","こむぎ [パン類] フランスパン","こめ [水稲軟めし] 精白米", "そば そば ゆで", "こむぎ [うどん・そうめん類] うどん ゆで"]
ndf[ ndf["食品名"].isin(tshokuhin)]
###Output
_____no_output_____
###Markdown
などとする。 '6 野菜類'でも同様に...
###Code
df6 = pd.read_excel(wb,sheet_name="6 野菜類")
df6.iloc[:,[3,6,9,12,20]]
ndf6 = df6.iloc[:,[3,6,9,12,20]]
ndf6 = ndf6.iloc[10:,:]
ndf6.columns=["食品名","エネルギー(kcal)","たんぱく質(g)","脂質(g)","炭水化物(g)"]
ndf6["食品名"] = ndf6["食品名"].str.replace("\u3000"," ")
ndf6
###Output
_____no_output_____
###Markdown
特定のキーワードを含むものを全て取得して、 食品名を細かく指定したり、対応する行番号のインデックスを取得できたりする
###Code
kyabetu = ndf6[ndf6["食品名"].str.contains('キャベツ')]
kyabetu
tomato = ndf6[ndf6["食品名"].str.contains('トマト')]
tomato
###Output
_____no_output_____
###Markdown
DataFrame同士を結合してまとめるなどして 扱いやすいデータに整形していく.縦方向の結合はpandasのconcat(concatenateの略)を使う。
###Code
tdf = pd.concat([kyabetu, tomato])
tdf
###Output
_____no_output_____
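###Markdown
参考までに、横方向(列方向)に結合したい場合は axis=1 を指定する(簡単な例。行数が異なる部分には NaN が入る):
###Code
pd.concat([kyabetu.reset_index(drop=True), tomato.reset_index(drop=True)], axis=1)
###Output
_____no_output_____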
###Markdown
Pandasの使い方 (基礎) ```Pandas```は、データ分析のためのライブラリで 統計量を計算・表示したり、それらをグラフとして可視化出来たり 前処理などの地道だが重要な作業を比較的簡単に行うことができます。まずはインポートしましょう。```pd```という名前で使うのが慣例です。
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
pandasでは主に```Series```と```DataFrame```の2つのオブジェクトを扱います。 SeriesはDataFrameの特殊な場合とみなせるので、以下ではDataFrameのみ説明することにします。 DataFrame型 DataFrameはExcelシートのような二次元のデータを表現するのに利用され 各種データ分析などで非常に役にたちます。
###Code
from pandas import DataFrame
###Output
_____no_output_____
###Markdown
以下の辞書型をDataFrame型のオブジェクトに変換してみましょう。
###Code
data = { '名前': ["Aさん", "Bさん", "Cさん", "Dさん", "Eさん"],
'出身都道府県':['Tokyo', 'Tochigi', 'Hokkaido','Kyoto','Tochigi'],
'生年': [ 1998, 1993,2000,1989,2002],
'身長': [172, 156, 162, 180,158]}
df = DataFrame(data)
print("dataの型", type(data))
print("dfの型",type(df))
###Output
_____no_output_____
###Markdown
jupyter環境でDataFrameを読むと、"いい感じ"に表示してくれる
###Code
df
###Output
_____no_output_____
###Markdown
printだとちょっと無機質な感じに。
###Code
print(df)
###Output
_____no_output_____
###Markdown
```info()```関数を作用させると、詳細な情報が得られる。 列ごとにどんな種類のデータが格納されているのかや、メモリ使用量など表示することができる。
###Code
df.info()
###Output
_____no_output_____
###Markdown
DataFrameの要素を確認・指定する方法 index: 行方向のデータ項目(おもに整数値(行番号),ID,名前など) columns: 列方向のデータの項目(おもにデータの種類) をそれぞれ表示してみよう。
###Code
df.index
df.columns
###Output
_____no_output_____
###Markdown
行方向を、整数値(行数)ではなく名前にしたければ
###Code
data1 = {'出身都道府県':['Tokyo', 'Tochigi', 'Hokkaido','Kyoto','Tochigi'],
'生年': [ 1998, 1993,2000,1989,2002],
'身長': [172, 156, 162, 180,158]}
df1 = DataFrame(data1)
df1.index =["Aさん", "Bさん", "Cさん", "Dさん", "Eさん"]
df1
###Output
_____no_output_____
###Markdown
などとしてもよい。 特定の列を取得したい場合
###Code
df["身長"]
###Output
_____no_output_____
###Markdown
とする。 以下の方法は非推奨とする。
###Code
df.身長
###Output
_____no_output_____
###Markdown
値のリスト(正確にはnumpy.ndarray型)として取得したければ
###Code
df["身長"].values
df["出身都道府県"].values
###Output
_____no_output_____
###Markdown
などとすればよい。慣れ親しんだ形に変換したければ、リストに変換すればよい
###Code
list(df["出身都道府県"].values)
###Output
_____no_output_____
###Markdown
ある列が特定のものに一致するもののみを抽出するのも簡単にできる
###Code
df[df["出身都道府県"]=="Tochigi"]
###Output
_____no_output_____
###Markdown
これは
###Code
df["出身都道府県"]=="Tochigi"
###Output
_____no_output_____
###Markdown
が条件に合致するかどうかTrue/Falseの配列になっていて、 df[ [True/Falseの配列] ]とすると、Trueに対応する要素のみを返す フィルターのような役割になっている。 列の追加
###Code
#スカラー値の場合"初期化"のような振る舞いをする
df["血液型"] = "A"
df
#リストで追加
df["血液型"] = [ "A", "O","AB","B","A"]
df
###Output
_____no_output_____
###Markdown
特定の行を取得したい場合 たとえば、行番号がわかっているなら、```iloc```関数を使えば良い
###Code
df.iloc[3]
###Output
_____no_output_____
###Markdown
値のみ取得したければ先程と同様
###Code
df.iloc[3].values
###Output
_____no_output_____
###Markdown
また、以下のような使い方もできるが
###Code
df[1:4] #1から3行目まで
###Output
_____no_output_____
###Markdown
```df[1]```といった使い方は出来ない。 より複雑な行・列の抽出 上にならって、2000年より前に生まれた人だけを抽出し
###Code
df[ df["生年"] < 2000 ]
###Output
_____no_output_____
###Markdown
さらにこのうち身長が170cm以上の人だけがほしければ
###Code
df[(df["生年"] < 2000) & (df["身長"]>170)]
###Output
_____no_output_____
###Markdown
などとすればよい。 他にも```iloc```,```loc```などを用いれば 特定の行・列を抽出することができる* ```iloc```は番号の指定のみに対応* ```loc```は名前のみ**欲しい要素の数値もしくは項目名のリスト**を 行・列の2つついて指定してやればよい。
###Code
df.iloc[[0], [0]] #0行目,0列目
#スライスで指定することもできる
df.iloc[1:4, :3] #1-3行目かつ0-2列目 (スライスの終点は含まれないことに注意)
#スライスの場合は、 1:4が[1,2,3]と同じ働きをするので、括弧[]はいらない
###Output
_____no_output_____
###Markdown
```loc```を使う場合は、indexの代わりに項目名で指定する。※今の場合、行を指定する項目名が既に整数値なので インデックスと見分けが付きづらいことに注意
###Code
df.loc[1:4,["名前","身長"]]
df.loc[[1,2,3,4],"名前":"生年"]
###Output
_____no_output_____
###Markdown
といった具合。```loc```を使う場合、1:4や[1,2,3,4]は indexのスライスではなく、項目名を意味し Eさんのデータも含まれている事がわかる。 Webページにある表をDataFrameとして取得する ```pandas```内の```read_html```関数を用いれば、 Webページの中にある表をDataFrame形式で取得することもできます。以下では例としてWikipediaの[ノーベル物理学賞](https://ja.wikipedia.org/wiki/%e3%83%8e%e3%83%bc%e3%83%99%e3%83%ab%e7%89%a9%e7%90%86%e5%ad%a6%e8%b3%9e)のページにある、受賞者一覧を取得してみましょう
###Code
url = "https://ja.wikipedia.org/wiki/%e3%83%8e%e3%83%bc%e3%83%99%e3%83%ab%e7%89%a9%e7%90%86%e5%ad%a6%e8%b3%9e"
tables = pd.read_html(url)
print(len(tables))
###Output
_____no_output_____
###Markdown
ページ内に、21個もの表があることがわかります。 (ほとんどはwikipediaのテンプレート等)たとえば、2010年代の受賞者のみに興味がある場合は
###Code
df = tables[12]
df
###Output
_____no_output_____
###Markdown
Pandasで複雑なエクセルファイルを操作する Pandasにはread_excel()という関数が用意されていて、 多数のシートを含むようなエクセルファイルを開くことも出来る。まずは必要なモジュールをインポートしよう。
###Code
import pandas as pd
from pandas import DataFrame
###Output
_____no_output_____
###Markdown
今まではGoogle Driveにいれたファイルを読み出していたが、 Webから直接xlsxファイルを読み込んでみよう。
###Code
url = "https://www.mext.go.jp/content/20201225-mxt_kagsei-mext_01110_012.xlsx"
input_file = pd.ExcelFile(url)
###Output
_____no_output_____
###Markdown
ブック内のシートの一覧は以下のように取得できる。
###Code
sheet_names = input_file.sheet_names
print("pandas: シート名",sheet_names)
###Output
_____no_output_____
###Markdown
シートを指定するのは、インデックスかシート名の文字列で行う。"1 穀類"を使うことにして、 pandasにあるread_excel関数を使ってみよう。 read_excel関数の最初の引数にはパスの他に、urlも取れる。
###Code
df = pd.read_excel(url,sheet_name="1穀類")
df
###Output
_____no_output_____
###Markdown
同じものが得られている。 データの整形次に、今取得したデータフレームのままでは少々扱い辛いので"整形"を考える。 というのも前から4行ほど表示してみると...
###Code
df[0:4]
###Output
_____no_output_____
###Markdown
最初の4行ほどに栄養素等の情報が入っているのだが、 セルが結合されたりしているため、所々にNaNが入っていたりして見辛い。(碁盤目の構造を破壊してしまうため「セルの結合」は機械的な処理と やや相性が悪く、プログラミングを用いたデータ分析では嫌われる)各省庁の公開データのフォーマットの統一化は今後に期待することにして... まず以下の項目に該当する列だけを抽出する事を考える。
###Code
targets = ["食品名", "エネルギー","たんぱく質", "脂質", "炭水化物"]
###Output
_____no_output_____
###Markdown
該当するデータがどの行・列に格納されているかをコードで指定するのは、 前述のファイル構造の事情からやや面倒くさい。 以下では、その場しのぎ的ではあるが、 興味のある量が何番目かを指定してまとめてみることにしよう。そのために、1-2行目の要素を表示してみよう。
###Code
#1-2行目(エクセルだと2行目)の要素から
#半角空白, 全角空白(\u3000)や改行コード\nを取り除いたリストを作って表示してみる
for idx in range(1,3):
tmp = df.iloc[idx].values
tlist = list(map( lambda s: str(s).replace("\u3000","").replace("\n","").replace(" ",""),tmp))
print(tlist)
# for target in targets:
# tlist.index(target)
###Output
_____no_output_____
###Markdown
セルの結合により、興味のあるデータがどの列に記述されているかは注意が必要。 実際、[エネルギー]という文字列は1行目の6列目(それぞれインデックスでいうと0,5)で取得できるが、 kJ単位になっていて、kcal単位でほしければ、7列目に格納された値が必要になる。 また、エクセルファイルを見るとわかるように、たんぱく質・脂質・炭水化物はさらに細分化されており、 O列R列など、細かい列の分割が挿入されている. ~~これは大変困る~~単純にたんぱく質・脂質・炭水化物と表記されている列のインデックスはそれぞれ9,12,20となる。 食品名が格納されている列(3)、エネルギー[kJ単位] (6)と合わせて確認してみよう。
###Code
targets = [3,6,9,12,20]
df.iloc[:,targets]
###Output
_____no_output_____
###Markdown
もう少し整形したいので、新しいデータフレームのコラムを書き換える。食品名等が記載されているのは10行目以降なので、それを使い columnを指定する。さらに、食品名に含まれる余分な文字コードも削除しておこう。
###Code
ndf = df.iloc[:,targets]
ndf = ndf.iloc[10:,:]
ndf.columns=["食品名","エネルギー(kcal)","たんぱく質(g)","脂質(g)","炭水化物(g)"]
ndf["食品名"] = ndf["食品名"].str.replace("\u3000"," ") # 食品名の中にある余分な全角空白(\u3000)を半角スペースに置き換える
ndf
###Output
_____no_output_____
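###Markdown
列名を整えた後は通常のDataFrame操作がそのまま使える。例としてエネルギー順に並べ替えてみる(簡単な例。値が文字列のことがあるため、先に数値へ変換している):
###Code
ndf["エネルギー(kcal)"] = pd.to_numeric(ndf["エネルギー(kcal)"], errors="coerce")  # 数値に変換(変換できない値は NaN)
ndf.sort_values("エネルギー(kcal)", ascending=False).head()
###Output
_____no_output_____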
###Markdown
次に、食品名の一覧を取得した後、興味のあるもの(日常的に馴染みのあるもの)だけを ピックアップしてみよう。
###Code
print(list(ndf["食品名"]))
###Output
_____no_output_____
###Markdown
この中から...* こむぎ[パン類]食パンリッチタイプ* こむぎ[パン類]フランスパン* こめ[水稲軟めし]精白米* そばそばゆで* こむぎ[うどん・そうめん類]うどんゆでのみに興味があれば
###Code
tshokuhin = ["こむぎ [パン類] 食パン リッチタイプ","こむぎ [パン類] フランスパン","こめ [水稲軟めし] 精白米", "そば そば ゆで", "こむぎ [うどん・そうめん類] うどん ゆで"]
ndf[ ndf["食品名"].isin(tshokuhin)]
###Output
_____no_output_____
###Markdown
などとする。 '6野菜類'でも同様に...
###Code
df6 = pd.read_excel(url,sheet_name="6野菜類")
df6.iloc[:,[3,6,9,12,20]]
ndf6 = df6.iloc[:,[3,6,9,12,20]]
ndf6 = ndf6.iloc[10:,:]
ndf6.columns=["食品名","エネルギー(kcal)","たんぱく質(g)","脂質(g)","炭水化物(g)"]
ndf6["食品名"] = ndf6["食品名"].str.replace("\u3000"," ")
ndf6
###Output
_____no_output_____
###Markdown
特定のキーワードを含むものを全て取得して、 食品名を細かく指定したり、対応する行番号のインデックスを取得できたりする
###Code
kyabetu = ndf6[ndf6["食品名"].str.contains('キャベツ')]
kyabetu
tomato = ndf6[ndf6["食品名"].str.contains('トマト')]
tomato
###Output
_____no_output_____
###Markdown
DataFrame同士を結合してまとめるなどして 扱いやすいデータに整形していく.縦方向の結合はpandasのconcat(concatenateの略)を使う。
###Code
tdf = pd.concat([kyabetu, tomato])
tdf
###Output
_____no_output_____
###Markdown
DataFrameのcsv/Excelファイルへの書き出し DataFrameオブジェクトは、```pandas```内の関数を用いれば、 簡単にcsvやExcelファイルとして書き出すことができます。先程の、2010年代のノーベル物理学賞受賞者のデータを、 Google Driveにファイルとして書き出してみましょう。
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
**csvとして書き出す場合**適当にパスを指定して、DataFrameオブジェクトに ```to_csv```関数を作用させます。
###Code
df.to_csv("/content/drive/My Drive/AdDS2021/pd_write_test.csv")
###Output
_____no_output_____
###Markdown
**Excelファイルとして書き出す場合**この場合も同様で、```to_excel```関数を用います。
###Code
df.to_excel("/content/drive/My Drive/AdDS2021/pd_write_test.xlsx")
###Output
_____no_output_____ |
optimized CNN - SampleDataset.ipynb | ###Markdown
Pretrained model only Load data
###Code
import pickle
train_filename = "C:/Users/behl/Desktop/minor/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "C:/Users/behl/Desktop/minor/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "C:/Users/behl/Desktop/minor/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
###Output
_____no_output_____
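###Markdown
A quick sanity check of what was just loaded (shapes only, so nothing here depends on the exact contents of the pickles):
###Code
print('train:', train_tensors.shape, train_labels.shape)
print('valid:', valid_tensors.shape, valid_labels.shape)
print('test: ', test_tensors.shape, test_labels.shape)
print('extra feature columns:', train_data.shape)
###Output
_____no_output_____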
###Markdown
CNN model
###Code
import time
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Flatten, Dense
from keras.models import Sequential
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras import regularizers, applications, optimizers, initializers
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16
# VGG16
# resnet50.ResNet50
# inception_v3.InceptionV3 299x299
# inception_resnet_v2.InceptionResNetV2 299x299
base_model = VGG16(weights='imagenet', include_top=False, input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
add_model.add(Dropout(0.2))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(50, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.summary()
add_model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
import keras.backend as K
model.compile(optimizer=optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True),
loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
import numpy as np
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=1, mode='auto')
log = CSVLogger('C:/Users/behl/Desktop/minor/log_pretrained_CNN.csv')
checkpointer = ModelCheckpoint(filepath='C:/Users/behl/Desktop/minor/pretrainedVGG.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
# model.fit(train_tensors, train_labels,
# validation_data=(valid_tensors, valid_labels),
# epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
def train_generator(x, y, batch_size):
train_datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
generator = train_datagen.flow(x, y, batch_size=batch_size)
while 1:
x_batch, y_batch = generator.next()
yield [x_batch, y_batch]
# Training with data augmentation. If shift_fraction=0., also no augmentation.
model.fit_generator(generator=train_generator(train_tensors, train_labels, batch_size),
steps_per_epoch=int(train_labels.shape[0] / batch_size),
validation_data=(valid_tensors, valid_labels),
epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
###Output
Epoch 1/20
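###Markdown
Once training has finished, the CSV written by the CSVLogger above can be inspected to see how the loss evolved (a small sketch; `loss` and `val_loss` are the standard column names Keras writes):
###Code
import pandas as pd
# read back the training log written by CSVLogger above
log_df = pd.read_csv('C:/Users/behl/Desktop/minor/log_pretrained_CNN.csv')
print(log_df[['loss', 'val_loss']].tail())
###Output
_____no_output_____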
###Markdown
Metric
###Code
model.load_weights('C:/Users/behl/Desktop/minor/pretrainedVGG.best.from_scratch.hdf5')  # same path the ModelCheckpoint above saved to
prediction = model.predict(test_tensors)
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
prediction[:30]
###Output
_____no_output_____
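###Markdown
Because the helper metrics above are parameterised by threshold, the same predictions can be scored at several cut-offs to see the precision/recall trade-off (this reuses only functions already defined in this notebook):
###Code
for th in (0.4, 0.5, 0.6):
    p = K.eval(precision_threshold(threshold=th)(K.variable(value=test_labels), K.variable(value=prediction)))
    r = K.eval(recall_threshold(threshold=th)(K.variable(value=test_labels), K.variable(value=prediction)))
    f = K.eval(fbeta_score_threshold(beta=0.5, threshold=th)(K.variable(value=test_labels), K.variable(value=prediction)))
    print("threshold %.1f -> precision %.3f, recall %.3f, F0.5 %.3f" % (th, p, r, f))
###Output
_____no_output_____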
###Markdown
Extra data
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
import time
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import applications
from keras.models import Model
from keras import optimizers
from keras.layers import Input, merge, concatenate
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
added_model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
inp = Input(batch_shape=(None, train_data.shape[1]))
# out = Dense(8)(inp)
extra_model = Model(input=inp, output=inp)
x = concatenate([added_model.output,
extra_model.output])
# x = Dropout(0.5)(x)
# x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(input=[added_model.input,
extra_model.input],
output=x)
model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
model.compile(optimizer='sgd', loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
import numpy as np
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_extradata_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/pretrained_extradata_CNN.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
model.fit([train_tensors, train_data], train_labels,
validation_data=([valid_tensors, valid_data], valid_labels),
epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
# def train_generator(x1, x2, y, batch_size):
# train_datagen = ImageDataGenerator(
# featurewise_center=False, # set input mean to 0 over the dataset
# samplewise_center=False, # set each sample mean to 0
# featurewise_std_normalization=False, # divide inputs by std of the dataset
# samplewise_std_normalization=False, # divide each input by its std
# zca_whitening=False, # apply ZCA whitening
# rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip=True, # randomly flip images
# vertical_flip=False) # randomly flip images
# generator = train_datagen.flow((x1, x2), y, batch_size=batch_size)
# while 1:
# (x1_batch, x2_batch), y_batch = generator.next()
# yield [[x1_batch, x2_batch], y_batch]
# # Training with data augmentation. If shift_fraction=0., also no augmentation.
# model.fit_generator(generator=train_generator(train_tensors, train_data, train_labels, batch_size),
# steps_per_epoch=int(train_labels.shape[0] / batch_size),
# validation_data=([valid_tensors, valid_data], valid_labels),
# epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
model.load_weights('saved_models/pretrained_extradata_CNN.best.from_scratch.hdf5')
prediction = model.predict([test_tensors, test_data])
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
prediction[:30]
###Output
_____no_output_____
###Markdown
Train with extra data and spatial transformer
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
import time
import numpy as np
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Lambda
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import applications
from keras.models import Model
from keras import optimizers
from keras.layers import Input, merge, concatenate
from spatial_transformer import SpatialTransformer
def locnet():
b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1
b[1, 1] = 1
W = np.zeros((64, 6), dtype='float32')
weights = [W, b.flatten()]
locnet = Sequential()
locnet.add(Conv2D(16, (7, 7), padding='valid', input_shape=train_tensors.shape[1:]))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Conv2D(32, (5, 5), padding='valid'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Conv2D(64, (3, 3), padding='valid'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Flatten())
locnet.add(Dense(128, activation='elu'))
locnet.add(Dense(64, activation='elu'))
locnet.add(Dense(6, weights=weights))
return locnet
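# Note on the initialisation above: the final Dense(6) layer starts with zero weights W and
# bias b = [1, 0, 0, 0, 1, 0], i.e. the flattened 2x3 affine matrix [[1, 0, 0], [0, 1, 0]].
# The spatial transformer therefore begins as an identity warp and only learns to deviate
# from it during training.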
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
added0_model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
stn_model = Sequential()
stn_model.add(Lambda(
lambda x: 2*x - 1.,
input_shape=train_tensors.shape[1:],
output_shape=train_tensors.shape[1:]))
stn_model.add(BatchNormalization())
stn_model.add(SpatialTransformer(localization_net=locnet(),
output_size=train_tensors.shape[1:3]))
added_model = Model(inputs=stn_model.input, outputs=added0_model(stn_model.output))
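# Pipeline of added_model (assuming the image tensors are scaled to [0, 1]): the Lambda layer
# rescales pixels to [-1, 1], BatchNormalization standardises them, the spatial transformer
# warps the image, and the warped result is fed into the VGG16 feature extractor + Flatten
# head defined above.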
inp = Input(batch_shape=(None, train_data.shape[1]))
# out = Dense(8)(inp)
extra_model = Model(input=inp, output=inp)
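# extra_model is just an identity wrapper around the extra per-sample features in train_data
# (the Dense(8) projection is left commented out), so those features pass through unchanged
# and are concatenated with the flattened image features below.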
x = concatenate([added_model.output,
extra_model.output])
# x = Dropout(0.5)(x)
# x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(input=[added_model.input,
extra_model.input],
output=x)
model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
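# Quick illustrative check (hypothetical toy values, not from the dataset): with beta = 0.5
# the F-score leans toward precision rather than recall.
# y_t = K.variable(np.array([[1.], [0.], [1.], [1.]]))
# y_p = K.variable(np.array([[0.9], [0.8], [0.3], [0.7]]))
# print(K.eval(precision_threshold(0.5)(y_t, y_p)))         # 2 TP / 3 predicted positives
# print(K.eval(recall_threshold(0.5)(y_t, y_p)))            # 2 TP / 3 actual positives
# print(K.eval(fbeta_score_threshold(0.5, 0.5)(y_t, y_p)))  # F0.5 combining the two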
model.compile(optimizer=optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True),
loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
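# The three thresholds show up in the training logs as precision_1/recall_1/fbeta_score_1
# (threshold 0.4), *_2 (threshold 0.5) and *_3 (threshold 0.6), matching the metric order
# given here (see the training output further down for the same metric list).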
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_extradata_stn_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/log_pretrained_extradata_stn_CNN.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
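# EarlyStopping halts training once val_loss has not improved for 3 consecutive epochs
# (patience=3); ModelCheckpoint(save_best_only=True) writes weights only when val_loss
# improves, and those best weights are reloaded below before evaluating on the test set.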
start = time.time()
model.fit([train_tensors, train_data], train_labels,
validation_data=([valid_tensors, valid_data], valid_labels),
epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
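# The two-element input list passed to fit() must follow the order of the model's inputs,
# i.e. [added_model.input, extra_model.input]: image tensors first, extra features second.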
# def train_generator(x, y, batch_size):
# train_datagen = ImageDataGenerator(
# featurewise_center=False, # set input mean to 0 over the dataset
# samplewise_center=False, # set each sample mean to 0
# featurewise_std_normalization=False, # divide inputs by std of the dataset
# samplewise_std_normalization=False, # divide each input by its std
# zca_whitening=False, # apply ZCA whitening
# rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip=True, # randomly flip images
# vertical_flip=False) # randomly flip images
# generator = train_datagen.flow(x, y, batch_size=batch_size)
# while 1:
# x_batch, y_batch = generator.next()
# yield [x_batch, y_batch]
# # Training with data augmentation. If shift_fraction=0., also no augmentation.
# model.fit_generator(generator=train_generator(train_tensors, train_labels, batch_size),
# steps_per_epoch=int(train_labels.shape[0] / batch_size),
# validation_data=(valid_tensors, valid_labels),
# epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
model.load_weights('saved_models/log_pretrained_extradata_stn_CNN.best.from_scratch.hdf5')
prediction = model.predict([test_tensors, test_data])
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
###Output
_____no_output_____
###Markdown
Pretrained model only: Load data
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
###Output
_____no_output_____
###Markdown
CNN model
###Code
import time
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Flatten, Dense
from keras.models import Sequential, Model
from keras.layers.normalization import BatchNormalization
from keras import regularizers, applications, optimizers, initializers
from keras.preprocessing.image import ImageDataGenerator
# VGG16
# resnet50.ResNet50
# inception_v3.InceptionV3 299x299
# inception_resnet_v2.InceptionResNetV2 299x299
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
# add_model.add(Conv2D(filters=512,
# kernel_size=4,
# strides=2,
# # kernel_regularizer=regularizers.l2(0.01),
# # activity_regularizer=regularizers.l1(0.01),
# kernel_initializer=initializers.random_normal(stddev=0.01),
# padding='same',
# activation='relu',
# input_shape=base_model.output_shape[1:]))
# # add_model.add(MaxPooling2D(pool_size=2))
# add_model.add(BatchNormalization())
# add_model.add(Flatten())
# add_model.add(Dropout(0.2))
# add_model.add(Dense(1024, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(50, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.summary()
add_model.summary()
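# Note: all VGG16 convolutional layers stay trainable here; only the small SGD learning rate
# keeps them close to the ImageNet weights. A possible variant (not used in this run) would
# freeze the convolutional base and train only the new head:
# for layer in base_model.layers:
#     layer.trainable = False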
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
import keras.backend as K
model.compile(optimizer=optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True),
loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
import numpy as np
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/pretrainedVGG.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
# model.fit(train_tensors, train_labels,
# validation_data=(valid_tensors, valid_labels),
# epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
def train_generator(x, y, batch_size):
train_datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
generator = train_datagen.flow(x, y, batch_size=batch_size)
while 1:
x_batch, y_batch = generator.next()
yield [x_batch, y_batch]
# Training with data augmentation. If shift_fraction=0., also no augmentation.
model.fit_generator(generator=train_generator(train_tensors, train_labels, batch_size),
steps_per_epoch=int(train_labels.shape[0] / batch_size),
validation_data=(valid_tensors, valid_labels),
epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
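# train_datagen.flow() loops over the data indefinitely, so steps_per_epoch is set to roughly
# one pass over the training set per epoch (about train_labels.shape[0] // batch_size batches).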
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
###Output
Epoch 1/20
105/106 [============================>.] - ETA: 0s - loss: 0.6447 - binary_accuracy: 0.6342 - precision_1: 0.5578 - recall_1: 0.7575 - fbeta_score_1: 0.5836 - precision_2: 0.6173 - recall_2: 0.5807 - fbeta_score_2: 0.5958 - precision_3: 0.6798 - recall_3: 0.3690 - fbeta_score_3: 0.5517Epoch 00001: val_loss improved from inf to 0.62077, saving model to saved_models/pretrainedVGG.best.from_scratch.hdf5
106/106 [==============================] - 16s 151ms/step - loss: 0.6449 - binary_accuracy: 0.6333 - precision_1: 0.5553 - recall_1: 0.7575 - fbeta_score_1: 0.5813 - precision_2: 0.6140 - recall_2: 0.5799 - fbeta_score_2: 0.5929 - precision_3: 0.6775 - recall_3: 0.3703 - fbeta_score_3: 0.5508 - val_loss: 0.6208 - val_binary_accuracy: 0.6691 - val_precision_1: 0.6438 - val_recall_1: 0.6767 - val_fbeta_score_1: 0.6444 - val_precision_2: 0.7247 - val_recall_2: 0.4713 - val_fbeta_score_2: 0.6413 - val_precision_3: 0.7505 - val_recall_3: 0.2084 - val_fbeta_score_3: 0.4703
Epoch 2/20
105/106 [============================>.] - ETA: 0s - loss: 0.6388 - binary_accuracy: 0.6497 - precision_1: 0.5654 - recall_1: 0.7681 - fbeta_score_1: 0.5930 - precision_2: 0.6397 - recall_2: 0.5754 - fbeta_score_2: 0.6183 - precision_3: 0.6734 - recall_3: 0.3287 - fbeta_score_3: 0.5388Epoch 00002: val_loss improved from 0.62077 to 0.60743, saving model to saved_models/pretrainedVGG.best.from_scratch.hdf5
106/106 [==============================] - 16s 150ms/step - loss: 0.6389 - binary_accuracy: 0.6498 - precision_1: 0.5637 - recall_1: 0.7693 - fbeta_score_1: 0.5916 - precision_2: 0.6381 - recall_2: 0.5766 - fbeta_score_2: 0.6172 - precision_3: 0.6718 - recall_3: 0.3303 - fbeta_score_3: 0.5385 - val_loss: 0.6074 - val_binary_accuracy: 0.6900 - val_precision_1: 0.6501 - val_recall_1: 0.6871 - val_fbeta_score_1: 0.6507 - val_precision_2: 0.6961 - val_recall_2: 0.5752 - val_fbeta_score_2: 0.6631 - val_precision_3: 0.7252 - val_recall_3: 0.3738 - val_fbeta_score_3: 0.5960
Epoch 3/20
105/106 [============================>.] - ETA: 0s - loss: 0.6402 - binary_accuracy: 0.6411 - precision_1: 0.5665 - recall_1: 0.7516 - fbeta_score_1: 0.5892 - precision_2: 0.6189 - recall_2: 0.5729 - fbeta_score_2: 0.5971 - precision_3: 0.6701 - recall_3: 0.3696 - fbeta_score_3: 0.5557Epoch 00003: val_loss did not improve
106/106 [==============================] - 15s 146ms/step - loss: 0.6397 - binary_accuracy: 0.6418 - precision_1: 0.5671 - recall_1: 0.7529 - fbeta_score_1: 0.5899 - precision_2: 0.6201 - recall_2: 0.5741 - fbeta_score_2: 0.5985 - precision_3: 0.6708 - recall_3: 0.3694 - fbeta_score_3: 0.5562 - val_loss: 0.6100 - val_binary_accuracy: 0.6900 - val_precision_1: 0.6402 - val_recall_1: 0.7095 - val_fbeta_score_1: 0.6472 - val_precision_2: 0.7095 - val_recall_2: 0.5528 - val_fbeta_score_2: 0.6644 - val_precision_3: 0.7375 - val_recall_3: 0.3093 - val_fbeta_score_3: 0.5624
Epoch 4/20
105/106 [============================>.] - ETA: 0s - loss: 0.6365 - binary_accuracy: 0.6488 - precision_1: 0.5764 - recall_1: 0.7591 - fbeta_score_1: 0.6002 - precision_2: 0.6334 - recall_2: 0.5839 - fbeta_score_2: 0.6139 - precision_3: 0.6809 - recall_3: 0.3726 - fbeta_score_3: 0.5618Epoch 00004: val_loss did not improve
106/106 [==============================] - 15s 146ms/step - loss: 0.6365 - binary_accuracy: 0.6483 - precision_1: 0.5758 - recall_1: 0.7600 - fbeta_score_1: 0.5999 - precision_2: 0.6324 - recall_2: 0.5844 - fbeta_score_2: 0.6133 - precision_3: 0.6807 - recall_3: 0.3732 - fbeta_score_3: 0.5622 - val_loss: 0.6086 - val_binary_accuracy: 0.6900 - val_precision_1: 0.6368 - val_recall_1: 0.7110 - val_fbeta_score_1: 0.6449 - val_precision_2: 0.6963 - val_recall_2: 0.5750 - val_fbeta_score_2: 0.6627 - val_precision_3: 0.7376 - val_recall_3: 0.3423 - val_fbeta_score_3: 0.5847
Epoch 5/20
105/106 [============================>.] - ETA: 0s - loss: 0.6323 - binary_accuracy: 0.6586 - precision_1: 0.5755 - recall_1: 0.7596 - fbeta_score_1: 0.6006 - precision_2: 0.6443 - recall_2: 0.5963 - fbeta_score_2: 0.6254 - precision_3: 0.7007 - recall_3: 0.3983 - fbeta_score_3: 0.5891Epoch 00005: val_loss did not improve
106/106 [==============================] - 16s 147ms/step - loss: 0.6319 - binary_accuracy: 0.6595 - precision_1: 0.5742 - recall_1: 0.7592 - fbeta_score_1: 0.5995 - precision_2: 0.6442 - recall_2: 0.5966 - fbeta_score_2: 0.6255 - precision_3: 0.6995 - recall_3: 0.3980 - fbeta_score_3: 0.5884 - val_loss: 0.6141 - val_binary_accuracy: 0.6873 - val_precision_1: 0.6662 - val_recall_1: 0.6584 - val_fbeta_score_1: 0.6585 - val_precision_2: 0.7181 - val_recall_2: 0.5290 - val_fbeta_score_2: 0.6610 - val_precision_3: 0.7746 - val_recall_3: 0.2935 - val_fbeta_score_3: 0.5549
Epoch 6/20
105/106 [============================>.] - ETA: 0s - loss: 0.6272 - binary_accuracy: 0.6580 - precision_1: 0.5809 - recall_1: 0.7516 - fbeta_score_1: 0.6023 - precision_2: 0.6481 - recall_2: 0.6005 - fbeta_score_2: 0.6271 - precision_3: 0.6957 - recall_3: 0.3897 - fbeta_score_3: 0.5818Epoch 00006: val_loss improved from 0.60743 to 0.60369, saving model to saved_models/pretrainedVGG.best.from_scratch.hdf5
106/106 [==============================] - 16s 151ms/step - loss: 0.6270 - binary_accuracy: 0.6580 - precision_1: 0.5804 - recall_1: 0.7517 - fbeta_score_1: 0.6019 - precision_2: 0.6476 - recall_2: 0.5992 - fbeta_score_2: 0.6265 - precision_3: 0.6967 - recall_3: 0.3890 - fbeta_score_3: 0.5820 - val_loss: 0.6037 - val_binary_accuracy: 0.6955 - val_precision_1: 0.6374 - val_recall_1: 0.7146 - val_fbeta_score_1: 0.6451 - val_precision_2: 0.7027 - val_recall_2: 0.5904 - val_fbeta_score_2: 0.6715 - val_precision_3: 0.7477 - val_recall_3: 0.3676 - val_fbeta_score_3: 0.6047
Epoch 7/20
105/106 [============================>.] - ETA: 0s - loss: 0.6287 - binary_accuracy: 0.6491 - precision_1: 0.5690 - recall_1: 0.7633 - fbeta_score_1: 0.5947 - precision_2: 0.6276 - recall_2: 0.5909 - fbeta_score_2: 0.6115 - precision_3: 0.7034 - recall_3: 0.3800 - fbeta_score_3: 0.5773Epoch 00007: val_loss did not improve
106/106 [==============================] - 16s 147ms/step - loss: 0.6287 - binary_accuracy: 0.6486 - precision_1: 0.5692 - recall_1: 0.7636 - fbeta_score_1: 0.5950 - precision_2: 0.6270 - recall_2: 0.5910 - fbeta_score_2: 0.6111 - precision_3: 0.7024 - recall_3: 0.3802 - fbeta_score_3: 0.5770 - val_loss: 0.6064 - val_binary_accuracy: 0.6945 - val_precision_1: 0.6398 - val_recall_1: 0.6906 - val_fbeta_score_1: 0.6430 - val_precision_2: 0.7121 - val_recall_2: 0.5702 - val_fbeta_score_2: 0.6710 - val_precision_3: 0.7597 - val_recall_3: 0.3315 - val_fbeta_score_3: 0.5841
Epoch 8/20
105/106 [============================>.] - ETA: 0s - loss: 0.6353 - binary_accuracy: 0.6443 - precision_1: 0.5693 - recall_1: 0.7463 - fbeta_score_1: 0.5927 - precision_2: 0.6201 - recall_2: 0.5853 - fbeta_score_2: 0.6030 - precision_3: 0.6542 - recall_3: 0.3590 - fbeta_score_3: 0.5438Epoch 00008: val_loss did not improve
106/106 [==============================] - 16s 147ms/step - loss: 0.6354 - binary_accuracy: 0.6436 - precision_1: 0.5686 - recall_1: 0.7474 - fbeta_score_1: 0.5922 - precision_2: 0.6192 - recall_2: 0.5867 - fbeta_score_2: 0.6026 - precision_3: 0.6548 - recall_3: 0.3619 - fbeta_score_3: 0.5453 - val_loss: 0.6136 - val_binary_accuracy: 0.6718 - val_precision_1: 0.5468 - val_recall_1: 0.8886 - val_fbeta_score_1: 0.5892 - val_precision_2: 0.6273 - val_recall_2: 0.7364 - val_fbeta_score_2: 0.6410 - val_precision_3: 0.7157 - val_recall_3: 0.5036 - val_fbeta_score_3: 0.6514
Epoch 9/20
105/106 [============================>.] - ETA: 0s - loss: 0.6188 - binary_accuracy: 0.6634 - precision_1: 0.5960 - recall_1: 0.7756 - fbeta_score_1: 0.6189 - precision_2: 0.6477 - recall_2: 0.6233 - fbeta_score_2: 0.6324 - precision_3: 0.6975 - recall_3: 0.4358 - fbeta_score_3: 0.6031Epoch 00009: val_loss did not improve
106/106 [==============================] - 16s 148ms/step - loss: 0.6189 - binary_accuracy: 0.6624 - precision_1: 0.5939 - recall_1: 0.7767 - fbeta_score_1: 0.6171 - precision_2: 0.6445 - recall_2: 0.6217 - fbeta_score_2: 0.6295 - precision_3: 0.6945 - recall_3: 0.4349 - fbeta_score_3: 0.6008 - val_loss: 0.6056 - val_binary_accuracy: 0.6882 - val_precision_1: 0.6470 - val_recall_1: 0.6939 - val_fbeta_score_1: 0.6494 - val_precision_2: 0.6987 - val_recall_2: 0.5629 - val_fbeta_score_2: 0.6608 - val_precision_3: 0.7435 - val_recall_3: 0.3439 - val_fbeta_score_3: 0.5853
###Markdown
Metric
###Code
model.load_weights('saved_models/pretrainedVGG.best.from_scratch.hdf5')
prediction = model.predict(test_tensors)
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
prediction[:30]
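# Optional sketch (assumes scikit-learn is available; not part of the original run):
# a confusion matrix for the test predictions at the same 0.5 threshold.
# from sklearn.metrics import confusion_matrix
# print(confusion_matrix(test_labels.ravel(), (prediction > threshold).astype(int).ravel()))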
###Output
_____no_output_____
###Markdown
Extra data
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
import time
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import applications
from keras.models import Model
from keras import optimizers
from keras.layers import Input, merge, concatenate
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
added_model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
inp = Input(batch_shape=(None, train_data.shape[1]))
# out = Dense(8)(inp)
extra_model = Model(input=inp, output=inp)
x = concatenate([added_model.output,
extra_model.output])
# x = Dropout(0.5)(x)
# x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(input=[added_model.input,
extra_model.input],
output=x)
model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
model.compile(optimizer='sgd', loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
import numpy as np
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_extradata_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/pretrained_extradata_CNN.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
model.fit([train_tensors, train_data], train_labels,
validation_data=([valid_tensors, valid_data], valid_labels),
epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
# def train_generator(x1, x2, y, batch_size):
# train_datagen = ImageDataGenerator(
# featurewise_center=False, # set input mean to 0 over the dataset
# samplewise_center=False, # set each sample mean to 0
# featurewise_std_normalization=False, # divide inputs by std of the dataset
# samplewise_std_normalization=False, # divide each input by its std
# zca_whitening=False, # apply ZCA whitening
# rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip=True, # randomly flip images
# vertical_flip=False) # randomly flip images
# generator = train_datagen.flow((x1, x2), y, batch_size=batch_size)
# while 1:
# (x1_batch, x2_batch), y_batch = generator.next()
# yield [[x1_batch, x2_batch], y_batch]
# # Training with data augmentation. If shift_fraction=0., also no augmentation.
# model.fit_generator(generator=train_generator(train_tensors, train_data, train_labels, batch_size),
# steps_per_epoch=int(train_labels.shape[0] / batch_size),
# validation_data=([valid_tensors, valid_data], valid_labels),
# epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
model.load_weights('saved_models/pretrained_extradata_CNN.best.from_scratch.hdf5')
prediction = model.predict([test_tensors, test_data])
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
prediction[:30]
###Output
_____no_output_____
###Markdown
Train with extra data and spatial transformer
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
import time
import numpy as np
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Lambda
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import applications
from keras.models import Model
from keras import optimizers
from keras.layers import Input, merge, concatenate
from spatial_transformer import SpatialTransformer
def locnet():
b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1
b[1, 1] = 1
W = np.zeros((64, 6), dtype='float32')
weights = [W, b.flatten()]
locnet = Sequential()
locnet.add(Conv2D(16, (7, 7), padding='valid', input_shape=train_tensors.shape[1:]))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Conv2D(32, (5, 5), padding='valid'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Conv2D(64, (3, 3), padding='valid'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Flatten())
locnet.add(Dense(128, activation='elu'))
locnet.add(Dense(64, activation='elu'))
locnet.add(Dense(6, weights=weights))
return locnet
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
added0_model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
stn_model = Sequential()
stn_model.add(Lambda(
lambda x: 2*x - 1.,
input_shape=train_tensors.shape[1:],
output_shape=train_tensors.shape[1:]))
stn_model.add(BatchNormalization())
stn_model.add(SpatialTransformer(localization_net=locnet(),
output_size=train_tensors.shape[1:3]))
added_model = Model(inputs=stn_model.input, outputs=added0_model(stn_model.output))
inp = Input(batch_shape=(None, train_data.shape[1]))
# out = Dense(8)(inp)
extra_model = Model(input=inp, output=inp)
x = concatenate([added_model.output,
extra_model.output])
# x = Dropout(0.5)(x)
# x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(input=[added_model.input,
extra_model.input],
output=x)
model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
model.compile(optimizer=optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True),
loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_extradata_stn_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/log_pretrained_extradata_stn_CNN.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
model.fit([train_tensors, train_data], train_labels,
validation_data=([valid_tensors, valid_data], valid_labels),
epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
# def train_generator(x, y, batch_size):
# train_datagen = ImageDataGenerator(
# featurewise_center=False, # set input mean to 0 over the dataset
# samplewise_center=False, # set each sample mean to 0
# featurewise_std_normalization=False, # divide inputs by std of the dataset
# samplewise_std_normalization=False, # divide each input by its std
# zca_whitening=False, # apply ZCA whitening
# rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip=True, # randomly flip images
# vertical_flip=False) # randomly flip images
# generator = train_datagen.flow(x, y, batch_size=batch_size)
# while 1:
# x_batch, y_batch = generator.next()
# yield [x_batch, y_batch]
# # Training with data augmentation. If shift_fraction=0., also no augmentation.
# model.fit_generator(generator=train_generator(train_tensors, train_labels, batch_size),
# steps_per_epoch=int(train_labels.shape[0] / batch_size),
# validation_data=(valid_tensors, valid_labels),
# epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
model.load_weights('saved_models/log_pretrained_extradata_stn_CNN.best.from_scratch.hdf5')
prediction = model.predict([test_tensors, test_data])
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
###Output
_____no_output_____
###Markdown
Pretrained model only: Load data
###Code
import pickle
train_filename = "C:/Users/behl/Desktop/lung disease/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "C:/Users/behl/Desktop/lung disease/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "C:/Users/behl/Desktop/lung disease/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
###Output
_____no_output_____
###Markdown
CNN model
###Code
import time
# Keep layers/models from the same Keras package as applications, optimizers and callbacks below.
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dropout, Flatten, Dense
from keras.models import Sequential, Model
from keras.layers.normalization import BatchNormalization
from keras import regularizers, applications, optimizers, initializers
from keras.preprocessing.image import ImageDataGenerator
# VGG16
# resnet50.ResNet50
# inception_v3.InceptionV3 299x299
# inception_resnet_v2.InceptionResNetV2 299x299
base_model = applications.VGG16(weights='imagenet',include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
# add_model.add(Conv2D(filters=512,
# kernel_size=4,
# strides=2,
# # kernel_regularizer=regularizers.l2(0.01),
# # activity_regularizer=regularizers.l1(0.01),
# kernel_initializer=initializers.random_normal(stddev=0.01),
# padding='same',
# activation='relu',
# input_shape=base_model.output_shape[1:]))
# # add_model.add(MaxPooling2D(pool_size=2))
# add_model.add(BatchNormalization())
# add_model.add(Flatten())
# add_model.add(Dropout(0.2))
# add_model.add(Dense(1024, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(256, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(50, activation='relu'))
add_model.add(Dropout(0.2))
add_model.add(Dense(1, activation='sigmoid'))
model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
model.summary()
add_model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
import keras.backend as K
model.compile(optimizer=optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True),
loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
import numpy as np
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/pretrainedVGG.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
# model.fit(train_tensors, train_labels,
# validation_data=(valid_tensors, valid_labels),
# epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
def train_generator(x, y, batch_size):
train_datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=True, # randomly flip images
vertical_flip=False) # randomly flip images
generator = train_datagen.flow(x, y, batch_size=batch_size)
while 1:
x_batch, y_batch = generator.next()
yield [x_batch, y_batch]
# Training with data augmentation. If shift_fraction=0., also no augmentation.
model.fit_generator(generator=train_generator(train_tensors, train_labels, batch_size),
steps_per_epoch=int(train_labels.shape[0] / batch_size),
validation_data=(valid_tensors, valid_labels),
epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
###Output
Epoch 1/20
105/106 [============================>.] - ETA: 0s - loss: 0.6447 - binary_accuracy: 0.6342 - precision_1: 0.5578 - recall_1: 0.7575 - fbeta_score_1: 0.5836 - precision_2: 0.6173 - recall_2: 0.5807 - fbeta_score_2: 0.5958 - precision_3: 0.6798 - recall_3: 0.3690 - fbeta_score_3: 0.5517Epoch 00001: val_loss improved from inf to 0.62077, saving model to saved_models/pretrainedVGG.best.from_scratch.hdf5
106/106 [==============================] - 16s 151ms/step - loss: 0.6449 - binary_accuracy: 0.6333 - precision_1: 0.5553 - recall_1: 0.7575 - fbeta_score_1: 0.5813 - precision_2: 0.6140 - recall_2: 0.5799 - fbeta_score_2: 0.5929 - precision_3: 0.6775 - recall_3: 0.3703 - fbeta_score_3: 0.5508 - val_loss: 0.6208 - val_binary_accuracy: 0.6691 - val_precision_1: 0.6438 - val_recall_1: 0.6767 - val_fbeta_score_1: 0.6444 - val_precision_2: 0.7247 - val_recall_2: 0.4713 - val_fbeta_score_2: 0.6413 - val_precision_3: 0.7505 - val_recall_3: 0.2084 - val_fbeta_score_3: 0.4703
Epoch 2/20
105/106 [============================>.] - ETA: 0s - loss: 0.6388 - binary_accuracy: 0.6497 - precision_1: 0.5654 - recall_1: 0.7681 - fbeta_score_1: 0.5930 - precision_2: 0.6397 - recall_2: 0.5754 - fbeta_score_2: 0.6183 - precision_3: 0.6734 - recall_3: 0.3287 - fbeta_score_3: 0.5388Epoch 00002: val_loss improved from 0.62077 to 0.60743, saving model to saved_models/pretrainedVGG.best.from_scratch.hdf5
106/106 [==============================] - 16s 150ms/step - loss: 0.6389 - binary_accuracy: 0.6498 - precision_1: 0.5637 - recall_1: 0.7693 - fbeta_score_1: 0.5916 - precision_2: 0.6381 - recall_2: 0.5766 - fbeta_score_2: 0.6172 - precision_3: 0.6718 - recall_3: 0.3303 - fbeta_score_3: 0.5385 - val_loss: 0.6074 - val_binary_accuracy: 0.6900 - val_precision_1: 0.6501 - val_recall_1: 0.6871 - val_fbeta_score_1: 0.6507 - val_precision_2: 0.6961 - val_recall_2: 0.5752 - val_fbeta_score_2: 0.6631 - val_precision_3: 0.7252 - val_recall_3: 0.3738 - val_fbeta_score_3: 0.5960
Epoch 3/20
105/106 [============================>.] - ETA: 0s - loss: 0.6402 - binary_accuracy: 0.6411 - precision_1: 0.5665 - recall_1: 0.7516 - fbeta_score_1: 0.5892 - precision_2: 0.6189 - recall_2: 0.5729 - fbeta_score_2: 0.5971 - precision_3: 0.6701 - recall_3: 0.3696 - fbeta_score_3: 0.5557Epoch 00003: val_loss did not improve
106/106 [==============================] - 15s 146ms/step - loss: 0.6397 - binary_accuracy: 0.6418 - precision_1: 0.5671 - recall_1: 0.7529 - fbeta_score_1: 0.5899 - precision_2: 0.6201 - recall_2: 0.5741 - fbeta_score_2: 0.5985 - precision_3: 0.6708 - recall_3: 0.3694 - fbeta_score_3: 0.5562 - val_loss: 0.6100 - val_binary_accuracy: 0.6900 - val_precision_1: 0.6402 - val_recall_1: 0.7095 - val_fbeta_score_1: 0.6472 - val_precision_2: 0.7095 - val_recall_2: 0.5528 - val_fbeta_score_2: 0.6644 - val_precision_3: 0.7375 - val_recall_3: 0.3093 - val_fbeta_score_3: 0.5624
Epoch 4/20
105/106 [============================>.] - ETA: 0s - loss: 0.6365 - binary_accuracy: 0.6488 - precision_1: 0.5764 - recall_1: 0.7591 - fbeta_score_1: 0.6002 - precision_2: 0.6334 - recall_2: 0.5839 - fbeta_score_2: 0.6139 - precision_3: 0.6809 - recall_3: 0.3726 - fbeta_score_3: 0.5618Epoch 00004: val_loss did not improve
106/106 [==============================] - 15s 146ms/step - loss: 0.6365 - binary_accuracy: 0.6483 - precision_1: 0.5758 - recall_1: 0.7600 - fbeta_score_1: 0.5999 - precision_2: 0.6324 - recall_2: 0.5844 - fbeta_score_2: 0.6133 - precision_3: 0.6807 - recall_3: 0.3732 - fbeta_score_3: 0.5622 - val_loss: 0.6086 - val_binary_accuracy: 0.6900 - val_precision_1: 0.6368 - val_recall_1: 0.7110 - val_fbeta_score_1: 0.6449 - val_precision_2: 0.6963 - val_recall_2: 0.5750 - val_fbeta_score_2: 0.6627 - val_precision_3: 0.7376 - val_recall_3: 0.3423 - val_fbeta_score_3: 0.5847
Epoch 5/20
105/106 [============================>.] - ETA: 0s - loss: 0.6323 - binary_accuracy: 0.6586 - precision_1: 0.5755 - recall_1: 0.7596 - fbeta_score_1: 0.6006 - precision_2: 0.6443 - recall_2: 0.5963 - fbeta_score_2: 0.6254 - precision_3: 0.7007 - recall_3: 0.3983 - fbeta_score_3: 0.5891Epoch 00005: val_loss did not improve
106/106 [==============================] - 16s 147ms/step - loss: 0.6319 - binary_accuracy: 0.6595 - precision_1: 0.5742 - recall_1: 0.7592 - fbeta_score_1: 0.5995 - precision_2: 0.6442 - recall_2: 0.5966 - fbeta_score_2: 0.6255 - precision_3: 0.6995 - recall_3: 0.3980 - fbeta_score_3: 0.5884 - val_loss: 0.6141 - val_binary_accuracy: 0.6873 - val_precision_1: 0.6662 - val_recall_1: 0.6584 - val_fbeta_score_1: 0.6585 - val_precision_2: 0.7181 - val_recall_2: 0.5290 - val_fbeta_score_2: 0.6610 - val_precision_3: 0.7746 - val_recall_3: 0.2935 - val_fbeta_score_3: 0.5549
Epoch 6/20
105/106 [============================>.] - ETA: 0s - loss: 0.6272 - binary_accuracy: 0.6580 - precision_1: 0.5809 - recall_1: 0.7516 - fbeta_score_1: 0.6023 - precision_2: 0.6481 - recall_2: 0.6005 - fbeta_score_2: 0.6271 - precision_3: 0.6957 - recall_3: 0.3897 - fbeta_score_3: 0.5818Epoch 00006: val_loss improved from 0.60743 to 0.60369, saving model to saved_models/pretrainedVGG.best.from_scratch.hdf5
106/106 [==============================] - 16s 151ms/step - loss: 0.6270 - binary_accuracy: 0.6580 - precision_1: 0.5804 - recall_1: 0.7517 - fbeta_score_1: 0.6019 - precision_2: 0.6476 - recall_2: 0.5992 - fbeta_score_2: 0.6265 - precision_3: 0.6967 - recall_3: 0.3890 - fbeta_score_3: 0.5820 - val_loss: 0.6037 - val_binary_accuracy: 0.6955 - val_precision_1: 0.6374 - val_recall_1: 0.7146 - val_fbeta_score_1: 0.6451 - val_precision_2: 0.7027 - val_recall_2: 0.5904 - val_fbeta_score_2: 0.6715 - val_precision_3: 0.7477 - val_recall_3: 0.3676 - val_fbeta_score_3: 0.6047
Epoch 7/20
105/106 [============================>.] - ETA: 0s - loss: 0.6287 - binary_accuracy: 0.6491 - precision_1: 0.5690 - recall_1: 0.7633 - fbeta_score_1: 0.5947 - precision_2: 0.6276 - recall_2: 0.5909 - fbeta_score_2: 0.6115 - precision_3: 0.7034 - recall_3: 0.3800 - fbeta_score_3: 0.5773Epoch 00007: val_loss did not improve
106/106 [==============================] - 16s 147ms/step - loss: 0.6287 - binary_accuracy: 0.6486 - precision_1: 0.5692 - recall_1: 0.7636 - fbeta_score_1: 0.5950 - precision_2: 0.6270 - recall_2: 0.5910 - fbeta_score_2: 0.6111 - precision_3: 0.7024 - recall_3: 0.3802 - fbeta_score_3: 0.5770 - val_loss: 0.6064 - val_binary_accuracy: 0.6945 - val_precision_1: 0.6398 - val_recall_1: 0.6906 - val_fbeta_score_1: 0.6430 - val_precision_2: 0.7121 - val_recall_2: 0.5702 - val_fbeta_score_2: 0.6710 - val_precision_3: 0.7597 - val_recall_3: 0.3315 - val_fbeta_score_3: 0.5841
Epoch 8/20
105/106 [============================>.] - ETA: 0s - loss: 0.6353 - binary_accuracy: 0.6443 - precision_1: 0.5693 - recall_1: 0.7463 - fbeta_score_1: 0.5927 - precision_2: 0.6201 - recall_2: 0.5853 - fbeta_score_2: 0.6030 - precision_3: 0.6542 - recall_3: 0.3590 - fbeta_score_3: 0.5438Epoch 00008: val_loss did not improve
106/106 [==============================] - 16s 147ms/step - loss: 0.6354 - binary_accuracy: 0.6436 - precision_1: 0.5686 - recall_1: 0.7474 - fbeta_score_1: 0.5922 - precision_2: 0.6192 - recall_2: 0.5867 - fbeta_score_2: 0.6026 - precision_3: 0.6548 - recall_3: 0.3619 - fbeta_score_3: 0.5453 - val_loss: 0.6136 - val_binary_accuracy: 0.6718 - val_precision_1: 0.5468 - val_recall_1: 0.8886 - val_fbeta_score_1: 0.5892 - val_precision_2: 0.6273 - val_recall_2: 0.7364 - val_fbeta_score_2: 0.6410 - val_precision_3: 0.7157 - val_recall_3: 0.5036 - val_fbeta_score_3: 0.6514
Epoch 9/20
105/106 [============================>.] - ETA: 0s - loss: 0.6188 - binary_accuracy: 0.6634 - precision_1: 0.5960 - recall_1: 0.7756 - fbeta_score_1: 0.6189 - precision_2: 0.6477 - recall_2: 0.6233 - fbeta_score_2: 0.6324 - precision_3: 0.6975 - recall_3: 0.4358 - fbeta_score_3: 0.6031Epoch 00009: val_loss did not improve
106/106 [==============================] - 16s 148ms/step - loss: 0.6189 - binary_accuracy: 0.6624 - precision_1: 0.5939 - recall_1: 0.7767 - fbeta_score_1: 0.6171 - precision_2: 0.6445 - recall_2: 0.6217 - fbeta_score_2: 0.6295 - precision_3: 0.6945 - recall_3: 0.4349 - fbeta_score_3: 0.6008 - val_loss: 0.6056 - val_binary_accuracy: 0.6882 - val_precision_1: 0.6470 - val_recall_1: 0.6939 - val_fbeta_score_1: 0.6494 - val_precision_2: 0.6987 - val_recall_2: 0.5629 - val_fbeta_score_2: 0.6608 - val_precision_3: 0.7435 - val_recall_3: 0.3439 - val_fbeta_score_3: 0.5853
###Markdown
Metric
###Code
model.load_weights('saved_models/pretrainedVGG.best.from_scratch.hdf5')
prediction = model.predict(test_tensors)
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
prediction[:30]
###Output
_____no_output_____
###Markdown
Extra data
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
import time
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import applications
from keras.models import Model
from keras import optimizers
from keras.layers import Input, merge, concatenate
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
added_model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
inp = Input(batch_shape=(None, train_data.shape[1]))
# out = Dense(8)(inp)
extra_model = Model(input=inp, output=inp)
x = concatenate([added_model.output,
extra_model.output])
# x = Dropout(0.5)(x)
# x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(input=[added_model.input,
extra_model.input],
output=x)
model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
model.compile(optimizer='sgd', loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
import numpy as np
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_extradata_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/pretrained_extradata_CNN.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
model.fit([train_tensors, train_data], train_labels,
validation_data=([valid_tensors, valid_data], valid_labels),
epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
# def train_generator(x1, x2, y, batch_size):
# train_datagen = ImageDataGenerator(
# featurewise_center=False, # set input mean to 0 over the dataset
# samplewise_center=False, # set each sample mean to 0
# featurewise_std_normalization=False, # divide inputs by std of the dataset
# samplewise_std_normalization=False, # divide each input by its std
# zca_whitening=False, # apply ZCA whitening
# rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip=True, # randomly flip images
# vertical_flip=False) # randomly flip images
# generator = train_datagen.flow((x1, x2), y, batch_size=batch_size)
# while 1:
# (x1_batch, x2_batch), y_batch = generator.next()
# yield [[x1_batch, x2_batch], y_batch]
# # Training with data augmentation. If shift_fraction=0., also no augmentation.
# model.fit_generator(generator=train_generator(train_tensors, train_data, train_labels, batch_size),
# steps_per_epoch=int(train_labels.shape[0] / batch_size),
# validation_data=([valid_tensors, valid_data], valid_labels),
# epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
model.load_weights('saved_models/pretrained_extradata_CNN.best.from_scratch.hdf5')
prediction = model.predict([test_tensors, test_data])
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
prediction[:30]
###Output
_____no_output_____
###Markdown
Train with extra data and spatial transformer
###Code
import pickle
train_filename = "data_preprocessed/train_data_sample_rgb.p"
(train_labels, train_data, train_tensors) = pickle.load(open(train_filename, mode='rb'))
valid_filename = "data_preprocessed/valid_data_sample_rgb.p"
(valid_labels, valid_data, valid_tensors) = pickle.load(open(valid_filename, mode='rb'))
test_filename = "data_preprocessed/test_data_sample_rgb.p"
(test_labels, test_data, test_tensors) = pickle.load(open(test_filename, mode='rb'))
import time
import numpy as np
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Lambda
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
from keras import applications
from keras.models import Model
from keras import optimizers
from keras.layers import Input, merge, concatenate
from spatial_transformer import SpatialTransformer
def locnet():
b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1
b[1, 1] = 1
W = np.zeros((64, 6), dtype='float32')
weights = [W, b.flatten()]
locnet = Sequential()
locnet.add(Conv2D(16, (7, 7), padding='valid', input_shape=train_tensors.shape[1:]))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Conv2D(32, (5, 5), padding='valid'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Conv2D(64, (3, 3), padding='valid'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Flatten())
locnet.add(Dense(128, activation='elu'))
locnet.add(Dense(64, activation='elu'))
locnet.add(Dense(6, weights=weights))
return locnet
base_model = applications.VGG16(weights='imagenet',
include_top=False,
input_shape=train_tensors.shape[1:])
add_model = Sequential()
add_model.add(Flatten(input_shape=base_model.output_shape[1:]))
added0_model = Model(inputs=base_model.input, outputs=add_model(base_model.output))
stn_model = Sequential()
stn_model.add(Lambda(
lambda x: 2*x - 1.,
input_shape=train_tensors.shape[1:],
output_shape=train_tensors.shape[1:]))
stn_model.add(BatchNormalization())
stn_model.add(SpatialTransformer(localization_net=locnet(),
output_size=train_tensors.shape[1:3]))
added_model = Model(inputs=stn_model.input, outputs=added0_model(stn_model.output))
inp = Input(batch_shape=(None, train_data.shape[1]))
# out = Dense(8)(inp)
extra_model = Model(inputs=inp, outputs=inp)
x = concatenate([added_model.output,
extra_model.output])
# x = Dropout(0.5)(x)
# x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[added_model.input,
                      extra_model.input],
              outputs=x)
model.summary()
from keras import backend as K
def binary_accuracy(y_true, y_pred):
return K.mean(K.equal(y_true, K.round(y_pred)))
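# Note (added): the three factory functions below binarise predictions at a configurable
# threshold before computing precision, recall and F-beta, so the reported metrics match
# the decision threshold actually used, rather than Keras' fixed default of 0.5.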
def precision_threshold(threshold = 0.5):
def precision(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(y_pred)
precision_ratio = true_positives / (predicted_positives + K.epsilon())
return precision_ratio
return precision
def recall_threshold(threshold = 0.5):
def recall(y_true, y_pred):
threshold_value = threshold
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold_value), K.floatx())
true_positives = K.round(K.sum(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.clip(y_true, 0, 1))
recall_ratio = true_positives / (possible_positives + K.epsilon())
return recall_ratio
return recall
def fbeta_score_threshold(beta = 1, threshold = 0.5):
def fbeta_score(y_true, y_pred):
threshold_value = threshold
beta_value = beta
p = precision_threshold(threshold_value)(y_true, y_pred)
r = recall_threshold(threshold_value)(y_true, y_pred)
bb = beta_value ** 2
fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
return fbeta_score
return fbeta_score
model.compile(optimizer=optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True),
loss='binary_crossentropy',
metrics=[binary_accuracy,
precision_threshold(threshold = 0.4),
recall_threshold(threshold = 0.4),
fbeta_score_threshold(beta=0.5, threshold = 0.4),
precision_threshold(threshold = 0.5),
recall_threshold(threshold = 0.5),
fbeta_score_threshold(beta=0.5, threshold = 0.5),
precision_threshold(threshold = 0.6),
recall_threshold(threshold = 0.6),
fbeta_score_threshold(beta=0.5, threshold = 0.6)])
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
epochs = 20
batch_size = 32
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
log = CSVLogger('saved_models/log_pretrained_extradata_stn_CNN.csv')
checkpointer = ModelCheckpoint(filepath='saved_models/log_pretrained_extradata_stn_CNN.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
start = time.time()
model.fit([train_tensors, train_data], train_labels,
validation_data=([valid_tensors, valid_data], valid_labels),
epochs=epochs, batch_size=batch_size, callbacks=[checkpointer, log, earlystop], verbose=1)
# def train_generator(x, y, batch_size):
# train_datagen = ImageDataGenerator(
# featurewise_center=False, # set input mean to 0 over the dataset
# samplewise_center=False, # set each sample mean to 0
# featurewise_std_normalization=False, # divide inputs by std of the dataset
# samplewise_std_normalization=False, # divide each input by its std
# zca_whitening=False, # apply ZCA whitening
# rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
# height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
# horizontal_flip=True, # randomly flip images
# vertical_flip=False) # randomly flip images
# generator = train_datagen.flow(x, y, batch_size=batch_size)
# while 1:
# x_batch, y_batch = generator.next()
# yield [x_batch, y_batch]
# # Training with data augmentation. If shift_fraction=0., also no augmentation.
# model.fit_generator(generator=train_generator(train_tensors, train_labels, batch_size),
# steps_per_epoch=int(train_labels.shape[0] / batch_size),
# validation_data=(valid_tensors, valid_labels),
# epochs=epochs, callbacks=[checkpointer, log, earlystop], verbose=1)
# Show total training time
print("training time: %.2f minutes"%((time.time()-start)/60))
model.load_weights('saved_models/log_pretrained_extradata_stn_CNN.best.from_scratch.hdf5')
prediction = model.predict([test_tensors, test_data])
threshold = 0.5
beta = 0.5
pre = K.eval(precision_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
rec = K.eval(recall_threshold(threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
fsc = K.eval(fbeta_score_threshold(beta = beta, threshold = threshold)(K.variable(value=test_labels),
K.variable(value=prediction)))
print ("Precision: %f %%\nRecall: %f %%\nFscore: %f %%"% (pre, rec, fsc))
K.eval(binary_accuracy(K.variable(value=test_labels),
K.variable(value=prediction)))
###Output
_____no_output_____ |
week01.ipynb | ###Markdown
Machine Learning Foundations: A Case Study Approach Week1-------------------Lectures A simple intro to ML discussing its origins in robotics. Old ML pipelineData -> ML Method -> My curve is better -> write a paper New Machine learning pipelineData -> ML Method -> IntelligenceFor this Coursera course we will use the SFrame and graphlab libraries for Python. The first is free, the second is a commercial package, which I got free for a year. Its main advantage over plain Python tools such as pandas is that it can handle massive datasets by caching data to the HDD. Let's see how it stacks up. Data ProcessingLet's quickly process the example from the course Importing data
###Code
# graphlab (GraphLab Create) provides the SFrame used throughout this notebook
import graphlab as gl

data = gl.SFrame('people-example.csv')
data.tail()
###Output
_____no_output_____
###Markdown
Inspecting data
###Code
data.show()
data['age'].show(view='Categorical')
# everything else looks pretty much like pandas
print data['age'].mean()
print data['age'].max()
print data['Country']
###Output
_____no_output_____
###Markdown
Feature engineering
###Code
data['Full Name'] = data['First Name'] + ' ' + data['Last Name']
data
###Output
_____no_output_____
###Markdown
Some function funLet's create a function and then run it on our SFrame
###Code
def transform_country(country):
if country == 'USA':
return 'United States'
else:
return country
print transform_country('Brazil')
print transform_country('USA')
data['Country'] = data['Country'].apply(transform_country)
data
###Output
_____no_output_____
###Markdown
This follows the same logic as a lambda function; see the example below.
###Code
a = 5
square = lambda x: x*x
square(a)
###Output
_____no_output_____
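###Markdown
The same idea can be used directly inside `apply`: instead of defining `transform_country` up front, the cleanup can be written inline with a lambda. A small sketch, reusing the `data` SFrame from above:
###Code
# sketch (not from the course): inline lambda version of the country cleanup shown earlier
data['Country'] = data['Country'].apply(lambda c: 'United States' if c == 'USA' else c)
data
###Output
_____no_output_____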
###Markdown
Doing it all with pandasLet's see how pandas stacks up to this. Import
###Code
import pandas as pd
import pylab
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
#import seaborn as sns
import numpy as np
%matplotlib inline
df = pd.read_csv('people-example.csv')
df.tail()
###Output
_____no_output_____
###Markdown
pandas is quicker, but don't forget that it has no out-of-core functionality: the whole dataset has to fit in RAM (a chunked-reading workaround is sketched a couple of cells below). Inspecting
###Code
df.plot(kind="hist", orientation='horizontal', cumulative=True,legend=False)
df.describe()
#make it look like R
def Rstr(df): return df.shape, df.apply(lambda x: [x.unique()])
Rstr(df)
###Output
_____no_output_____
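###Markdown
As flagged above, pandas keeps everything in RAM. A hedged workaround sketch (not from the course material): very large CSVs can still be processed in fixed-size chunks with `read_csv(chunksize=...)`, at the cost of writing the aggregation yourself.
###Code
import pandas as pd

# hypothetical helper: stream a large CSV in chunks and compute a mean
# without loading the whole file at once
def mean_age_in_chunks(path, chunksize=100000):
    total, count = 0.0, 0
    for chunk in pd.read_csv(path, chunksize=chunksize):
        total += chunk['age'].sum()
        count += chunk['age'].count()
    return total / count if count else float('nan')

# mean_age_in_chunks('people-example.csv')  # same answer as df['age'].mean()
###Output
_____no_output_____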
###Markdown
The built-in pandas inspection above is a bit more crude than graphlab's. Feature engineeringNo surprises here, practically the same.
###Code
df.Country.apply(transform_country)
df['Full Name'] = df['First Name'] + ' ' + df['Last Name']
df
###Output
_____no_output_____ |
FDA Project.ipynb | ###Markdown
Fundamentals of Data Analysis*** Project 2020: Linear Regression Analysis of the powerproduction dataset *** This jupyter notebook contains the linear regression analysis performed by Dervla Candon on the powerproduction dataset as part of the assessment of the Fundamentals of Data Analysis module 2020.
###Code
# ensuring all plots will show in the notebook
%matplotlib inline
# for creating plots
import matplotlib.pyplot as plt
# for creating numerical arrays
import numpy as np
# for creating a dataframe with the csv data
import pandas as pd
plt.rcParams['figure.figsize'] = (9, 7)
# for pearsons correlation coefficient
import scipy.stats
# creating a dataframe with the powerproduction dataset
df = pd.read_csv("https://raw.githubusercontent.com/ianmcloughlin/2020A-machstat-project/master/dataset/powerproduction.csv")
# printing a summary of the dataset
df.describe()
###Output
_____no_output_____
###Markdown
1: Initial AssumptionsThis powerproduction dataset contains two variables; speed and power.A given row within the dataset describes the quantity of power produced by a wind turbine for the corresponding speed measured for the wind.Speed values range from 0 to 25 (no indication of units, however given the range I will assume the units are m/s) and the power produced ranges from 0 to 113.556. It is not possible to make a reasonable assumption of the energy units without knowing the time frame used to measure the power produced.I will assume that the timeframe over which each power production has been measured remained constant throughout the experiment. 2: Simple Linear Regression on Unmodified DatasetTo begin my analysis, I will perform simple linear regression using the np.polyfit function [1], and plot this against the points from the dataset to allow for a visual comparison.
###Code
# np.polyfit produces two outputs, the first is the slope of the line and the second is the constant
m,c = np.polyfit(df['speed'],df['power'], 1)
# plot the individual data points to compare to the line of best fit
plt.plot(df['speed'],df['power'],'k.',label="Original Data Points")
# draw the line of best fit for the range of speed values included in the dataset
plt.plot(df['speed'],df['speed']*m + c, 'r-',label = "Best Fit Line")
# add appropriate x and y axis labels
plt.xlabel('Speed')
plt.ylabel('Power')
plt.legend()
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
From inspection of the above plot, it is clear that the above equation does not provide a very accurate representation of the dataset.Initial observations are as follows: - the plotted datapoints do not appear to have a strong linear relationship, so it is unclear if a linear regression would best describe this relationship; - for a speed of 0m/s the plot predicts a negative value for power, which is not a possible or useful prediction in a real-world scenario - while there appear to be some isolated data points which return a zero power value for speeds between 10m/s and 24m/s, there also appears to be a cluster of values grouped around the 25m/s speed with zero power production.
###Code
# this lambda cost function has been obtained from the linear regression lecture [2]
cost = lambda m,c: np.sum([(df['power'][i] - m * df['speed'][i] - c)**2 for i in range(df['speed'].size)])
print(f"The cost of the best fit line with no adjustments is {cost(m,c)}")
###Output
The cost of the best fit line with no adjustments is 234241.1641532122
###Markdown
The above calculation of the cost of the best fit line - based on the content covered in topic 9 of the lectures [2] - reaffirms my initial impression that this best fit line is not an accurate representation of the dataset.In the remainder of this jupyter notebook, I will investigate if there exists a better linear equation to represent this dataset (better implying a lower cost), or if in fact there exists a non-linear relationship which better describes the relationship between wind speeds and wind turbine power production. 3: Outlier, Yes or No?As briefly touched on previously, I have identified two potential groups of outliers in the dataset.For the first group, there are 4 isolated points for which a wind speed value between 10m/s and 24m/s returns a zero power value. For each of these speed values, the graph identifies numerous data points with equal or near-equal speeds that have a non-zero power production. The combination of these two factors leads me to conclude that these points are indeed outliers, which do not provide accurate representations of the speed/power relationship I am investigating.The second group contains numerous datapoints which, while they are not closely grouped with the majority of the data points from the data set, are closely clustered to one another. If there are a number of experiments which produced the same output value, can they all be considered outliers? An additional cause for concern is that for these speed values, grouped closely around 25m/s, there are no other data points recorded which are grouped closely to the datapoints for lower speed values.Given that the values in question fall at the upper bound of the dataset, I have investigated the limitations of wind turbines. As it happens, wind turbines are designed to cease operating once wind speeds reach a certain threshold, applying brakes on the propellers to ensure that they are not damaged by excessive wind speeds. For most large wind turbines, the speed at which the turbines stop power production is 55mph [3], which corresponds to approximately 24.6 m/s.As such, this second group of data points are not outliers, but representations of the real-world operation of a wind turbine.If a full spectrum of non-zero x values is being considered as the domain of the function, then the function should be split into 2 variations; - 0 for all x >= 24.6m/s - TBD for x < 24.6m/s Once I have confirmed this value of 24.6m/s as an accurate cut off point based on the dataset, I will remove all values for higher speeds for my remaining analysis.
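The split just described can be written down as a small piecewise helper. This is only a sketch of the intended shape of the final model; the helper name is mine, and the slope, intercept and exact cut-off are placeholders to be settled by the analysis below:
###Code
import numpy as np

# hypothetical helper (not part of the original notebook)
def predict_power(speed, slope, intercept, cut_off):
    """Piecewise prediction: fitted line below the cut-off, zero output at or above it."""
    speed = np.asarray(speed, dtype=float)
    fitted = slope * speed + intercept
    # turbines brake above the cut-off speed, so no power is produced there;
    # clip also removes any negative predictions near zero wind speed
    return np.where(speed >= cut_off, 0.0, np.clip(fitted, 0.0, None))
###Output
_____no_output_____
###Markdown
Checking the proposed cut-off against the data: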
###Code
# show all values with a zero power output
df.loc[df['power'] == 0,'speed']
###Output
_____no_output_____
###Markdown
As seen in the above results, rows 0 to 456 inclusive with a zero power production correspond to either the first group of outliers I have identified (rows 208, 340, 404, and 456) or points which have a lower wind speed value and do not appear as outliers on the plot.Going by the speed values for the remaining zero power production values, 24.499 appears to be the most appropriate cut off point for the domain of the regression equation. As seen by the consecutive row numbers from 490-500 appearing above, there are no non-zero values falling above a speed of 24.499m/sAs a result, I will remove the rows which contain zero power values for all rows after and including row 208, which will remove both the outlier data points and those which do not correspond to a moving wind turbine.
###Code
# create a new dataframe in variable data for the remaining analysis
data = df.drop([208,340,404,456,490,491,492,493,494,495,496,497,498,499], axis=0)
data.reset_index(inplace=True, drop=True)
# print data to screen
data
# repeat the linear polyfit for the new dataset
m2,c2 = np.polyfit(data['speed'],data['power'], 1)
plt.plot(data['speed'],data['power'],'c.',label="Original Data Points")
plt.plot(data['speed'],data['speed']*m2 + c2, 'r-',label = "Best Fit Line")
plt.xlabel('Speed')
plt.ylabel('Power')
plt.legend()
plt.show()
cost2 = lambda m,c: np.sum([(data['power'][i] - m * data['speed'][i] - c)**2 for i in range(data['speed'].size)])
print(f"The cost of the best fit line with outliers removed and adjusted domain is {cost2(m2,c2)}")
#data
print(f"This adjusted linear regression has a cost of {round((cost2(m2,c2)/cost(m,c))*100,2)}% of the unadjusted linear regression")
###Output
This adjusted linear regression has a cost of 34.57% of the unadjusted linear regression
###Markdown
By removing the outliers of the data set, the cost of the linear best fit line is reduced to almost 1/3 of the cost of the initial best fit line.*** 4: Linear or Non-Linear?The np.polyfit function takes in three parameters: x-values, y-values, and the degree of the function. For both best fit lines thus far, a degree of 1 has been used, which corresponds to a linear relationship, or a straight line. However, when you look at the data points plotted from the dataset, they appear to have a curved trend rather than a linear one. Thus I will investigate, using the adjusted dataset, the appearance and cost of a quadratic and a cubic equation to describe the relationship between the variables.*** In addition to the appearance of the data points, the Pearson Correlation Coefficient (PCC) is also a good indicator of the linear correlation between the variables [4]. This value ranges between -1 and 1, -1 corresponding to a perfectly negative linear relationship, 1 corresponding to a perfectly positive linear relationship. If the value lies closer to 0, this implies that either there is no correlation between the datapoints, or this relationship is not linear.
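As a quick cross-check on the library routine used below, the coefficient can also be computed straight from its definition, r = cov(x, y) / (sd(x) * sd(y)); a minimal numpy sketch (the helper name is mine):
###Code
import numpy as np

# hypothetical helper (not part of the original notebook)
def pearson_r(x, y):
    """Pearson correlation coefficient computed directly from its definition."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# pearson_r(data['speed'], data['power'])  # should agree with scipy.stats.pearsonr below
###Output
_____no_output_____
###Markdown
The scipy routine additionally returns a p-value for testing the hypothesis of zero correlation: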
###Code
# from the scipy documentation [6]
r,p = scipy.stats.pearsonr(data['speed'],data['power'])
print(f"The Pearson Correlation Coefficient for this dataset is {r}, and the p-value is {p}")
###Output
The Pearson Correlation Coefficient for this dataset is 0.9500256632037263, and the p-value is 7.379108925462722e-247
###Markdown
This PCC is very close to 1, with a p-value that is extremely close to 0. Both of these results provide a strong basis to conclude that the relationship between the speed and power is a linear one.However, given the appearance of the placement of the datapoints, I will investigate the non-linear regression results to see if the cost of the regression equation can be improved.***
###Code
# a quadratic polyfit will return 3 variables
a3, b3, c3 = np.polyfit(data['speed'],data['power'], 2)
plt.plot(data['speed'],data['power'],'c.',label="Original Data Points")
plt.plot(data['speed'],a3 * (data['speed']**2) + b3 * data['speed'] + c3, 'r-',label = "Best Fit Quadratic Line")
plt.xlabel('Speed')
plt.ylabel('Power')
plt.legend()
plt.show()
# new cost function defined for quadratic lines
quadratic_cost = lambda a,b,c: np.sum([(data['power'][i] - a * (data['speed'][i]**2) - b * data['speed'][i] - c)**2 for i in range(data['speed'].size)])
print(f"The cost of the best fit quadratic line is {quadratic_cost(a3,b3,c3)}")
# a cubic equation outputs 4 variables
a4, b4, c4, d4 = np.polyfit(data['speed'],data['power'], 3)
plt.plot(data['speed'],data['power'],'b.',label="Original Data Points")
plt.plot(data['speed'],a4 * (data['speed']**3) + b4 * (data['speed']**2) + c4 * data['speed'] + d4, 'g-',label = "Best Fit Cubic Line")
plt.xlabel('Speed')
plt.ylabel('Power')
plt.legend()
plt.show()
# new cost function defined for cubic lines
cubic_cost = lambda a,b,c,d: np.sum([(data['power'][i] - a * (data['speed'][i]**3) - b * (data['speed'][i]**2) - c * data['speed'][i] - d)**2 for i in range(data['speed'].size)])
print(f"The cost of the best fit cubic line is {cubic_cost(a4,b4,c4,d4)}")
[round(a4,4),round(b4,4),round(c4,4),round(d4,4)]
###Output
_____no_output_____ |
idaes/examples/workshops/Module_2_Flowsheet/Module_2_Flowsheet_Solution.ipynb | ###Markdown
Learning outcomes------------------------------- Construct a steady-state flowsheet using the IDAES unit model library- Connecting unit models in a flowsheet using Arcs- Using the SequentialDecomposition tool to initialize a flowsheet with recycle- Formulate and solve an optimization problem - Defining an objective function - Setting variable bounds - Adding additional constraints Problem Statement------Hydrodealkylation is a chemical reaction that often involves reacting an aromatic hydrocarbon in the presence of hydrogen gas to form a simpler aromatic hydrocarbon devoid of functional groups. In this example, toluene will be reacted with hydrogen gas at high temperatures to form benzene via the following reaction:**C6H5CH3 + H2 → C6H6 + CH4**This reaction is often accompanied by an equilibrium side reaction which forms diphenyl, which we will neglect for this example.This example is based on the 1967 AIChE Student Contest problem as presented by Douglas, J.M., Chemical Design of Chemical Processes, 1988, McGraw-Hill.The flowsheet that we will be using for this module is shown below with the stream conditions. We will be processing toluene and hydrogen to produce at least 370 TPY of benzene. As shown in the flowsheet, there are two flash tanks, F101 to separate out the non-condensibles and F102 to further separate the benzene-toluene mixture to improve the benzene purity. Note that typically a distillation column is required to obtain high purity benzene but that is beyond the scope of this workshop. The non-condensibles separated out in F101 will be partially recycled back to M101 and the rest will be either purged or combusted for power generation.We will assume ideal gas for this flowsheet. The properties required for this module are available in the same directory:- hda_ideal_VLE.py- hda_reaction.pyThe state variables chosen for the property package are **flows of component by phase, temperature and pressure**. The components considered are: **toluene, hydrogen, benzene and methane**. Therefore, every stream has 8 flow variables, 1 temperature and 1 pressure variable.
###Code
from pyomo.environ import (Constraint,
Var,
ConcreteModel,
Expression,
Objective,
SolverFactory,
TransformationFactory,
value)
from pyomo.network import Arc, SequentialDecomposition
###Output
_____no_output_____
###Markdown
From idaes, we will be needing the FlowsheetBlock and the following unit models:- Mixer- Heater- StoichiometricReactor- **Flash**- Separator (splitter) - PressureChanger
###Code
from idaes.core import FlowsheetBlock
from idaes.unit_models import (PressureChanger,
Mixer,
Separator as Splitter,
Heater,
StoichiometricReactor)
###Output
_____no_output_____
###Markdown
Inline Exercise:Now, import the remaining unit models highlighted in blue above and run the cell using `Shift+Enter` after typing in the code.
###Code
from idaes.unit_models import Flash
###Output
_____no_output_____
###Markdown
We will also be needing some utility tools to put together the flowsheet and calculate the degrees of freedom.
###Code
from idaes.unit_models.pressure_changer import ThermodynamicAssumption
from idaes.core.util.model_statistics import degrees_of_freedom
###Output
_____no_output_____
###Markdown
Importing required thermo and reaction package----------- The final set of imports is for the thermo and reaction packages for the HDA process. We have created a custom thermo package that assumes Ideal Gas with support for VLE. The reaction package here is very simple as we will be using only a StoichiometricReactor and the reaction package consists of the stoichiometric coefficients for the reaction and the parameter for the heat of reaction. Let us import the following modules and they are in the same directory as this jupyter notebook: hda_ideal_VLE as thermo_props hda_reaction as reaction_props
###Code
import hda_ideal_VLE as thermo_props
import hda_reaction as reaction_props
###Output
_____no_output_____
###Markdown
Constructing the Flowsheet----------------------------------We have now imported all the components, unit models, and property modules we need to construct a flowsheet. Let us create a ConcreteModel and add the flowsheet block as we did in module 1.
###Code
m = ConcreteModel()
m.fs = FlowsheetBlock(default={"dynamic": False})
###Output
_____no_output_____
###Markdown
We now need to add the property packages to the flowsheet. Unlike Module 1, where we only had a thermo property package, for this flowsheet we will also need to add a reaction property package.
###Code
m.fs.thermo_params = thermo_props.HDAParameterBlock()
m.fs.reaction_params = reaction_props.HDAReactionParameterBlock(
default={"property_package": m.fs.thermo_params})
###Output
_____no_output_____
###Markdown
Adding Unit Models-----Let us start adding the unit models we have imported to the flowsheet. Here, we are adding the Mixer (assigned a name M101) and a Heater (assigned a name H101). Note that, all unit models need to be given a property package argument. In addition to that, there are several arguments depending on the unit model, please refer to the documentation for more details (https://idaes-pse.readthedocs.io/en/latest/models/index.html). For example, the Mixer unit model here is given a `list` consisting of names to the three inlets.
###Code
m.fs.M101 = Mixer(default={"property_package": m.fs.thermo_params,
"inlet_list": ["toluene_feed", "hydrogen_feed", "vapor_recycle"]})
m.fs.H101 = Heater(default={"property_package": m.fs.thermo_params,
"has_pressure_change": False,
"has_phase_equilibrium": True})
###Output
_____no_output_____
###Markdown
Inline Exercise:Let us now add the StoichiometricReactor(assign the name R101) and pass the following arguments: "property_package": m.fs.thermo_params "reaction_package": m.fs.reaction_params "has_heat_of_reaction": True "has_heat_transfer": True "has_pressure_change": False
###Code
m.fs.R101 = StoichiometricReactor(
default={"property_package": m.fs.thermo_params,
"reaction_package": m.fs.reaction_params,
"has_heat_of_reaction": True,
"has_heat_transfer": True,
"has_pressure_change": False})
###Output
_____no_output_____
###Markdown
Let us now add the Flash(assign the name F101) and pass the following arguments: "property_package": m.fs.thermo_params "has_heat_transfer": True "has_pressure_change": False
###Code
m.fs.F101 = Flash(default={"property_package": m.fs.thermo_params,
"has_heat_transfer": True,
"has_pressure_change": True})
###Output
_____no_output_____
###Markdown
Let us now add the Splitter(S101), PressureChanger(C101) and the second Flash(F102).
###Code
m.fs.S101 = Splitter(default={"property_package": m.fs.thermo_params,
"ideal_separation": False,
"outlet_list": ["purge", "recycle"]})
m.fs.C101 = PressureChanger(default={
"property_package": m.fs.thermo_params,
"compressor": True,
"thermodynamic_assumption": ThermodynamicAssumption.isothermal})
m.fs.F102 = Flash(default={"property_package": m.fs.thermo_params,
"has_heat_transfer": True,
"has_pressure_change": True})
###Output
_____no_output_____
###Markdown
Connecting Unit Models using Arcs-----We have now added all the unit models we need to the flowsheet. However, we have not yet specifed how the units are to be connected. To do this, we will be using the `Arc` which is a pyomo component that takes in two arguments: `source` and `destination`. Let us connect the outlet of the mixer(M101) to the inlet of the heater(H101).
###Code
m.fs.s03 = Arc(source=m.fs.M101.outlet, destination=m.fs.H101.inlet)
###Output
_____no_output_____
###Markdown
 Inline Exercise:Now, connect the H101 outlet to the R101 inlet using the cell above as a guide.
###Code
m.fs.s04 = Arc(source=m.fs.H101.outlet, destination=m.fs.R101.inlet)
###Output
_____no_output_____
###Markdown
We will now be connecting the rest of the flowsheet as shown below. Notice how the outlet names are different for the flash tanks F101 and F102 as they have a vapor and a liquid outlet.
###Code
m.fs.s05 = Arc(source=m.fs.R101.outlet, destination=m.fs.F101.inlet)
m.fs.s06 = Arc(source=m.fs.F101.vap_outlet, destination=m.fs.S101.inlet)
m.fs.s08 = Arc(source=m.fs.S101.recycle, destination=m.fs.C101.inlet)
m.fs.s09 = Arc(source=m.fs.C101.outlet,
destination=m.fs.M101.vapor_recycle)
m.fs.s10 = Arc(source=m.fs.F101.liq_outlet, destination=m.fs.F102.inlet)
###Output
_____no_output_____
###Markdown
We have now connected the unit model blocks using the arcs. However, each of these arcs links to ports on the two unit models that are connected. In this case, the ports consist of the state variables that need to be linked between the unit models. Pyomo provides a convenient method to write these equality constraints for us between two ports and this is done as follows:
###Code
TransformationFactory("network.expand_arcs").apply_to(m)
###Output
_____no_output_____
###Markdown
Adding expressions to compute purity and operating costs---In this section, we will add a few Expressions that allow us to evaluate the performance. Expressions provide a convenient way of calculating certain values that are a function of the variables defined in the model. For more details on Expressions, please refer to: https://pyomo.readthedocs.io/en/latest/pyomo_modeling_components/Expressions.htmlFor this flowsheet, we are interested in computing the purity of the product Benzene stream (i.e. the mole fraction) and the operating cost which is a sum of the cooling and heating costs. Let us first add an Expression to compute the mole fraction of benzene in the `vap_outlet` of F102 which is our product stream. Please note that the var flow_mol_phase_comp has the index - [time, phase, component]. As this is a steady-state flowsheet, the time index by default is 0. The valid phases are ["Liq", "Vap"]. Similarly the valid component list is ["benzene", "toluene", "hydrogen", "methane"].
###Code
m.fs.purity = Expression(
expr=m.fs.F102.vap_outlet.flow_mol_phase_comp[0, "Vap", "benzene"] /
(m.fs.F102.vap_outlet.flow_mol_phase_comp[0, "Vap", "benzene"]
+ m.fs.F102.vap_outlet.flow_mol_phase_comp[0, "Vap", "toluene"]))
###Output
_____no_output_____
###Markdown
Now, let us add an expression to compute the cooling cost assuming a cost of 0.212E-4 $/kW. Note that cooling utility is required for the reactor (R101) and the first flash (F101). With the heat duties expressed in watts, this price corresponds to the 0.212E-7 per-watt factor used in the expression below.
###Code
m.fs.cooling_cost = Expression(expr=0.212e-7 * (-m.fs.F101.heat_duty[0]) +
0.212e-7 * (-m.fs.R101.heat_duty[0]))
###Output
_____no_output_____
###Markdown
Now, let us add an expression to compute the heating cost assuming the utility cost as follows: 2.2E-4 dollars/kW for H101 1.9E-4 dollars/kW for F102 Note that the heat duty is in units of watt (J/s), hence the 2.2E-7 and 1.9E-7 per-watt factors in the expression below.
###Code
m.fs.heating_cost = Expression(expr=2.2e-7 * m.fs.H101.heat_duty[0] +
1.9e-7 * m.fs.F102.heat_duty[0])
###Output
_____no_output_____
###Markdown
Let us now add an expression to compute the total operating cost per year which is basically the sum of the cooling and heating cost we defined above.
###Code
m.fs.operating_cost = Expression(expr=(3600 * 24 * 365 *
(m.fs.heating_cost +
m.fs.cooling_cost)))
###Output
_____no_output_____
###Markdown
Fixing feed conditions---Let us first check how many degrees of freedom exist for this flowsheet using the `degrees_of_freedom` tool we imported earlier.
###Code
print(degrees_of_freedom(m))
###Output
29
###Markdown
We will now be fixing the toluene feed stream to the conditions shown in the flowsheet above. Please note that though this is a pure toluene feed, the remaining components are still assigned a very small non-zero value to help with convergence and initialization.
###Code
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Vap", "benzene"].fix(1e-5)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Vap", "toluene"].fix(1e-5)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Vap", "hydrogen"].fix(1e-5)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Vap", "methane"].fix(1e-5)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Liq", "benzene"].fix(1e-5)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Liq", "toluene"].fix(0.30)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Liq", "hydrogen"].fix(1e-5)
m.fs.M101.toluene_feed.flow_mol_phase_comp[0, "Liq", "methane"].fix(1e-5)
m.fs.M101.toluene_feed.temperature.fix(303.2)
m.fs.M101.toluene_feed.pressure.fix(350000)
###Output
_____no_output_____
###Markdown
Similarly, let us fix the hydrogen feed to the following conditions in the next cell: FH2 = 0.30 mol/s FCH4 = 0.02 mol/s Remaining components = 1e-5 mol/s T = 303.2 K P = 350000 Pa
###Code
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Vap", "benzene"].fix(1e-5)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Vap", "toluene"].fix(1e-5)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Vap", "hydrogen"].fix(0.30)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Vap", "methane"].fix(0.02)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Liq", "benzene"].fix(1e-5)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Liq", "toluene"].fix(1e-5)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Liq", "hydrogen"].fix(1e-5)
m.fs.M101.hydrogen_feed.flow_mol_phase_comp[0, "Liq", "methane"].fix(1e-5)
m.fs.M101.hydrogen_feed.temperature.fix(303.2)
m.fs.M101.hydrogen_feed.pressure.fix(350000)
###Output
_____no_output_____
###Markdown
Fixing unit model specifications---Now that we have fixed our inlet feed conditions, we will fix the operating conditions for the unit models in the flowsheet. Let us set the H101 outlet temperature to 600 K.
###Code
m.fs.H101.outlet.temperature.fix(600)
###Output
_____no_output_____
###Markdown
For the StoichiometricReactor, we have to define the conversion in terms of toluene. This requires us to create a new variable for specifying the conversion and adding a Constraint that defines the conversion with respect to toluene, i.e. conversion * F_toluene,in = F_toluene,in - F_toluene,out for the vapor-phase toluene flows. The second degree of freedom for the reactor is to define the heat duty. In this case, let us assume the reactor to be adiabatic i.e. Q = 0.
###Code
m.fs.R101.conversion = Var(initialize=0.75, bounds=(0, 1))
m.fs.R101.conv_constraint = Constraint(
expr=m.fs.R101.conversion*m.fs.R101.inlet.
flow_mol_phase_comp[0, "Vap", "toluene"] ==
(m.fs.R101.inlet.flow_mol_phase_comp[0, "Vap", "toluene"] -
m.fs.R101.outlet.flow_mol_phase_comp[0, "Vap", "toluene"]))
m.fs.R101.conversion.fix(0.75)
m.fs.R101.heat_duty.fix(0)
###Output
_____no_output_____
###Markdown
The Flash conditions for F101 can be set as follows.
###Code
m.fs.F101.vap_outlet.temperature.fix(325.0)
m.fs.F101.deltaP.fix(0)
###Output
_____no_output_____
###Markdown
Inline Exercise:Set the conditions for Flash F102 to the following conditions: T = 375 K deltaP = -200000 Use Shift+Enter to run the cell once you have typed in your code.
###Code
m.fs.F102.vap_outlet.temperature.fix(375)
m.fs.F102.deltaP.fix(-200000)
###Output
_____no_output_____
###Markdown
Let us fix the purge split fraction to 20% and set the outlet pressure of the compressor to 350000 Pa.
###Code
m.fs.S101.split_fraction[0, "purge"].fix(0.2)
m.fs.C101.outlet.pressure.fix(350000)
###Output
_____no_output_____
###Markdown
Inline Exercise:We have now defined all the feed conditions and the inputs required for the unit models. The system should now have 0 degrees of freedom i.e. should be a square problem. Please check that the degrees of freedom is 0. Use Shift+Enter to run the cell once you have typed in your code.
###Code
print(degrees_of_freedom(m))
###Output
0
###Markdown
Initialization------------------This section will demonstrate how to use the built-in sequential decomposition tool to initialize our flowsheet. Let us first create an object for the SequentialDecomposition and specify our options for this.
###Code
seq = SequentialDecomposition()
seq.options.select_tear_method = "heuristic"
seq.options.tear_method = "Wegstein"
seq.options.iterLim = 5
# Using the SD tool
G = seq.create_graph(m)
heuristic_tear_set = seq.tear_set_arcs(G, method="heuristic")
order = seq.calculation_order(G)
###Output
_____no_output_____
###Markdown
Which is the tear stream? Display tear set and order
###Code
for o in heuristic_tear_set:
print(o.name)
###Output
fs.s03
###Markdown
What sequence did the SD tool determine to solve this flowsheet with the least number of tears?
###Code
for o in order:
print(o[0].name)
###Output
fs.H101
fs.R101
fs.F101
fs.S101
fs.C101
fs.M101
###Markdown
 The SequentialDecomposition tool has determined that the tear stream is the mixer outlet. We will need to provide a reasonable guess for this.
###Code
tear_guesses = {
"flow_mol_phase_comp": {
(0, "Vap", "benzene"): 1e-5,
(0, "Vap", "toluene"): 1e-5,
(0, "Vap", "hydrogen"): 0.30,
(0, "Vap", "methane"): 0.02,
(0, "Liq", "benzene"): 1e-5,
(0, "Liq", "toluene"): 0.30,
(0, "Liq", "hydrogen"): 1e-5,
(0, "Liq", "methane"): 1e-5},
"temperature": {0: 303},
"pressure": {0: 350000}}
# Pass the tear_guess to the SD tool
seq.set_guesses_for(m.fs.H101.inlet, tear_guesses)
###Output
_____no_output_____
###Markdown
Next, we need to tell the tool how to initialize a particular unit. We will be writing a python function which takes in a "unit" and calls the initialize method on that unit.
###Code
def function(unit):
unit.initialize(outlvl=1)
###Output
_____no_output_____
###Markdown
We are now ready to initialize our flowsheet in a sequential mode. Note that we specifically set the iteration limit to be 5 as we are trying to use this tool only to get a good set of initial values such that IPOPT can then take over and solve this flowsheet for us.
###Code
seq.run(m, function)
###Output
Ipopt 3.13.2: tol=1e-06
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 18
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 0
Total number of variables............................: 10
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 10
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 7.00e+08 0.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 0.0000000e+00 2.24e-08 0.00e+00 -1.0 7.00e+05 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 1
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 2.2351741790771488e-09 2.2351741790771488e-08
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 2.2351741790771488e-09 2.2351741790771488e-08
Number of objective function evaluations = 2
Number of objective gradient evaluations = 2
Number of equality constraint evaluations = 2
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 2
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 1
Total CPU secs in IPOPT (w/o function evaluations) = 0.001
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
Ipopt 3.13.2: tol=1e-06
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 31
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 11
Total number of variables............................: 17
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 17
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 7.00e+08 0.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 0.0000000e+00 9.70e+02 0.00e+00 -1.0 7.69e+05 - 1.00e+00 1.00e+00h 1
2 0.0000000e+00 6.38e+02 0.00e+00 -1.0 4.73e+06 - 1.00e+00 1.00e+00h 1
3 0.0000000e+00 3.67e+02 0.00e+00 -1.0 2.00e+07 - 1.00e+00 1.00e+00h 1
4 0.0000000e+00 1.67e+02 0.00e+00 -1.0 5.27e+07 - 1.00e+00 1.00e+00h 1
5 0.0000000e+00 4.93e+01 0.00e+00 -1.0 7.34e+07 - 1.00e+00 1.00e+00h 1
6 0.0000000e+00 5.77e+00 0.00e+00 -1.0 4.19e+07 - 1.00e+00 1.00e+00h 1
7 0.0000000e+00 9.14e-02 0.00e+00 -1.0 6.27e+06 - 1.00e+00 1.00e+00h 1
8 0.0000000e+00 2.34e-05 0.00e+00 -2.5 1.02e+05 - 1.00e+00 1.00e+00h 1
9 0.0000000e+00 1.62e-12 0.00e+00 -5.7 2.62e+01 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 9
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 1.6200374375330284e-12 1.6200374375330284e-12
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 1.6200374375330284e-12 1.6200374375330284e-12
Number of objective function evaluations = 10
Number of objective gradient evaluations = 10
Number of equality constraint evaluations = 10
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 10
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 9
Total CPU secs in IPOPT (w/o function evaluations) = 0.004
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
2020-04-17 14:56:19 - Level 5 - idaes.init.fs.H101.control_volume - Initialization Complete
2020-04-17 14:56:19 - Level 4 - idaes.init.fs.H101 - Initialization Step 1 Complete.
Ipopt 3.13.2: tol=1e-06
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.13.2, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 124
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 112
Total number of variables............................: 41
variables with only lower bounds: 0
variables with lower and upper bounds: 9
variables with only upper bounds: 0
Total number of equality constraints.................: 41
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 1.44e+05 0.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 0.0000000e+00 8.53e+04 1.03e+01 -1.0 3.65e+04 - 1.44e-01 5.96e-01h 1
2 0.0000000e+00 5.59e+04 4.56e+02 -1.0 1.46e+04 - 9.90e-01 3.84e-01h 1
3 0.0000000e+00 5.46e+04 2.28e+04 -1.0 9.01e+03 - 9.64e-01 2.49e-02h 1
4 0.0000000e+00 5.45e+04 8.50e+07 -1.0 8.79e+03 - 9.91e-01 2.77e-04h 1
5r 0.0000000e+00 5.45e+04 1.00e+03 0.7 0.00e+00 - 0.00e+00 3.46e-07R 4
6r 0.0000000e+00 4.36e+04 3.24e+03 0.7 2.01e+04 - 9.16e-02 2.64e-03f 1
7r 0.0000000e+00 3.79e+04 5.91e+03 0.7 2.19e+04 - 4.65e-02 8.63e-02f 1
8r 0.0000000e+00 3.17e+04 5.52e+03 0.7 2.00e+04 - 1.58e-01 1.21e-01f 1
9r 0.0000000e+00 2.24e+04 4.07e+03 0.7 1.69e+04 - 2.62e-01 2.16e-01f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10r 0.0000000e+00 2.81e+03 6.09e+03 0.7 1.32e+04 - 1.00e+00 7.31e-01f 1
11r 0.0000000e+00 1.14e+03 5.67e+02 0.7 1.61e+03 - 1.00e+00 1.00e+00h 1
12r 0.0000000e+00 1.94e+02 4.92e+02 -0.0 7.79e+02 - 1.00e+00 9.01e-01f 1
13r 0.0000000e+00 2.70e+01 8.63e+03 -0.0 4.68e+02 - 8.57e-01 3.24e-01f 1
14r 0.0000000e+00 2.21e+01 2.68e+04 -0.0 1.19e+03 - 1.00e+00 4.58e-01f 1
15r 0.0000000e+00 1.50e+01 2.79e+02 -0.0 2.96e+02 - 1.00e+00 1.00e+00f 1
16r 0.0000000e+00 9.88e+00 1.66e+01 -0.0 1.08e+02 - 1.00e+00 1.00e+00h 1
17r 0.0000000e+00 3.29e+00 1.77e+02 -1.4 5.95e+01 - 7.72e-01 9.68e-01f 1
18r 0.0000000e+00 5.49e+02 4.70e+03 -1.4 8.66e+03 - 7.57e-01 3.13e-01f 1
19r 0.0000000e+00 3.25e+03 1.47e+04 -1.4 2.57e+03 - 1.00e+00 9.97e-01f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
20r 0.0000000e+00 7.66e+01 9.65e+02 -1.4 1.06e+03 - 4.61e-01 1.00e+00h 1
21r 0.0000000e+00 2.47e+00 2.28e+01 -1.4 5.47e+01 - 1.00e+00 1.00e+00h 1
22r 0.0000000e+00 8.50e-02 3.54e-02 -1.4 1.24e+00 - 1.00e+00 1.00e+00h 1
23r 0.0000000e+00 4.43e-01 6.74e+01 -4.7 1.56e+03 - 8.51e-01 8.92e-01f 1
24r 0.0000000e+00 1.18e+01 6.96e+03 -4.7 1.12e+02 -4.0 7.45e-01 8.89e-01f 1
25r 0.0000000e+00 2.53e+03 6.99e+03 -4.7 9.70e+05 - 1.34e-02 5.52e-03f 1
26r 0.0000000e+00 2.53e+03 1.30e+04 -4.7 2.82e+05 - 1.56e-01 1.94e-05f 1
27r 0.0000000e+00 2.53e+03 1.41e+04 -4.7 3.91e+05 - 2.35e-01 5.39e-02f 1
28r 0.0000000e+00 2.48e+03 9.30e+04 -4.7 3.89e+05 - 9.70e-01 1.83e-02f 1
29r 0.0000000e+00 2.48e+03 9.80e+04 -4.7 1.06e+05 - 1.00e+00 3.51e-03f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
30r 0.0000000e+00 6.31e+02 1.61e+05 -4.7 4.03e+03 - 1.00e+00 7.63e-01f 1
31r 0.0000000e+00 6.03e+02 8.31e+05 -4.7 9.55e+02 - 1.00e+00 8.77e-02f 1
32r 0.0000000e+00 1.66e+02 7.40e+05 -4.7 8.71e+02 - 1.00e+00 7.25e-01f 1
33r 0.0000000e+00 1.13e+02 8.82e+05 -4.7 2.39e+02 - 1.00e+00 3.18e-01f 1
34r 0.0000000e+00 1.13e+01 7.38e+07 -4.7 1.63e+02 - 1.00e+00 9.70e-01f 1
35r 0.0000000e+00 9.91e+00 6.88e+07 -4.7 1.16e+00 -1.8 9.77e-01 6.84e-01h 1
36r 0.0000000e+00 9.90e+00 3.24e+08 -4.7 2.52e+02 - 1.00e+00 5.62e-04h 1
37r 0.0000000e+00 8.52e+00 2.84e+08 -4.7 4.92e+00 - 1.17e-01 1.39e-01f 1
38r 0.0000000e+00 1.25e+00 2.52e+08 -4.7 4.24e+00 - 2.12e-01 1.00e+00f 1
39r 0.0000000e+00 8.16e-02 2.46e+07 -4.7 7.06e-01 - 9.33e-01 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
40r 0.0000000e+00 3.37e-01 2.12e+06 -4.7 8.24e+00 - 9.11e-01 5.50e-01f 1
41r 0.0000000e+00 3.16e-01 1.68e+07 -4.7 1.05e+02 - 8.97e-01 1.40e-01f 1
42r 0.0000000e+00 1.59e-01 4.30e+06 -4.7 4.26e+01 - 1.00e+00 5.00e-01f 1
43r 0.0000000e+00 1.31e-01 1.03e+08 -4.7 3.05e+00 - 1.00e+00 2.23e-01f 1
44r 0.0000000e+00 7.23e-02 7.07e+08 -4.7 9.03e-01 - 9.95e-02 4.32e-01f 1
45r 0.0000000e+00 2.99e-02 3.02e+08 -4.7 7.49e-02 - 1.00e+00 5.76e-01f 1
46r 0.0000000e+00 3.96e-03 7.36e+05 -4.7 4.38e-02 - 1.00e+00 1.00e+00f 1
47r 0.0000000e+00 3.96e-03 1.08e+04 -4.7 3.69e-02 - 1.00e+00 1.00e+00h 1
48r 0.0000000e+00 3.96e-03 1.35e-01 -4.7 1.37e-04 - 1.00e+00 1.00e+00h 1
49r 0.0000000e+00 3.96e-03 7.39e+05 -7.0 8.44e-01 - 1.00e+00 9.45e-01f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
50r 0.0000000e+00 3.44e-05 9.17e+05 -7.0 6.03e+04 - 1.00e+00 6.57e-02f 1
Number of Iterations....: 50
(scaled) (unscaled)
Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 0.0000000000000000e+00 0.0000000000000000e+00
Constraint violation....: 2.7355712931488153e-09 3.4409735052420842e-05
Complementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00
Overall NLP error.......: 2.7355712931488153e-09 3.4409735052420842e-05
Number of objective function evaluations = 55
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 55
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 52
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 50
Total CPU secs in IPOPT (w/o function evaluations) = 0.033
Total CPU secs in NLP function evaluations = 0.001
EXIT: Optimal Solution Found.
###Markdown
Inline Exercise:We have now initialized the flowsheet. Let us run the flowsheet in a simulation mode to look at the results. To do this, complete the last line of code where we pass the model to the solver. You will need to type the following: results = solver.solve(m, tee=True)Use Shift+Enter to run the cell once you have typed in your code.
###Code
# Create the solver object
solver = SolverFactory('ipopt')
solver.options = {'tol': 1e-6, 'max_iter': 5000}
# Solve the model
results = solver.solve(m, tee=False)
# For testing purposes
from pyomo.environ import TerminationCondition
assert results.solver.termination_condition == TerminationCondition.optimal
###Output
_____no_output_____
###Markdown
Analyze the results of the square problem-------------------------What is the total operating cost?
###Code
print('operating cost = $', value(m.fs.operating_cost))
###Output
operating cost = $ 419122.3387677943
###Markdown
For this operating cost, what is the amount of benzene we are able to produce and what purity we are able to achieve?
###Code
m.fs.F102.report()
print()
print('benzene purity = ', value(m.fs.purity))
###Output
====================================================================================
Unit : fs.F102 Time: 0.0
------------------------------------------------------------------------------------
Unit Performance
Variables:
Key : Value : Fixed : Bounds
Heat Duty : 7352.5 : False : (None, None)
Pressure Change : -2.0000e+05 : True : (None, None)
------------------------------------------------------------------------------------
Stream Table
Inlet Vapor Outlet Liquid Outlet
flow_mol_phase_comp ('Liq', 'benzene') 0.20460 1.0000e-08 0.062620
flow_mol_phase_comp ('Liq', 'toluene') 0.062520 1.0000e-08 0.032257
flow_mol_phase_comp ('Liq', 'hydrogen') 2.6712e-07 1.0000e-08 9.4877e-08
flow_mol_phase_comp ('Liq', 'methane') 2.6712e-07 1.0000e-08 9.4877e-08
flow_mol_phase_comp ('Vap', 'benzene') 1.0000e-08 0.14198 1.0000e-08
flow_mol_phase_comp ('Vap', 'toluene') 1.0000e-08 0.030264 1.0000e-08
flow_mol_phase_comp ('Vap', 'hydrogen') 1.0000e-08 1.8224e-07 1.0000e-08
flow_mol_phase_comp ('Vap', 'methane') 1.0000e-08 1.8224e-07 1.0000e-08
temperature 325.00 375.00 375.00
pressure 3.5000e+05 1.5000e+05 1.5000e+05
====================================================================================
benzene purity = 0.8242962943918924
###Markdown
Next, let's look at how much benzene we are losing with the light gases out of F101. IDAES has tools for creating stream tables based on the `Arcs` and/or `Ports` in a flowsheet. Let us create and print a simple stream table showing the stream leaving the reactor and the vapor stream from F101.Inline Exercise:How much benzene are we losing in the F101 vapor outlet stream?
###Code
from idaes.core.util.tables import create_stream_table_dataframe, stream_table_dataframe_to_string
st = create_stream_table_dataframe({"Reactor": m.fs.s05, "Light Gases": m.fs.s06})
print(stream_table_dataframe_to_string(st))
###Output
Reactor Light Gases
flow_mol_phase_comp ('Liq', 'benzene') 1.2993e-07 1.0000e-08
flow_mol_phase_comp ('Liq', 'toluene') 8.4147e-07 1.0000e-08
flow_mol_phase_comp ('Liq', 'hydrogen') 1.0000e-08 1.0000e-08
flow_mol_phase_comp ('Liq', 'methane') 1.0000e-08 1.0000e-08
flow_mol_phase_comp ('Vap', 'benzene') 0.35374 0.14915
flow_mol_phase_comp ('Vap', 'toluene') 0.078129 0.015610
flow_mol_phase_comp ('Vap', 'hydrogen') 0.32821 0.32821
flow_mol_phase_comp ('Vap', 'methane') 1.2721 1.2721
temperature 771.85 325.00
pressure 3.5000e+05 3.5000e+05
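###Markdown
Individual quantities can also be pulled straight out of the solved model with `value()`. A small sketch (added here, not part of the original workshop) using two of the flows in the table above to compute the fraction of benzene lost to the F101 vapor outlet:
###Code
# sketch (added): query individual variables from the solved flowsheet
benzene_lost = value(m.fs.F101.vap_outlet.flow_mol_phase_comp[0, "Vap", "benzene"])
benzene_made = value(m.fs.R101.outlet.flow_mol_phase_comp[0, "Vap", "benzene"])
print('fraction of benzene lost to the F101 vapor outlet =', benzene_lost / benzene_made)
###Output
_____no_output_____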
###Markdown
Inline Exercise:You can query additional variables here if you like. Use Shift+Enter to run the cell once you have typed in your code. Optimization--------------------------We saw from the results above that the total operating cost for the base case was $419,122 per year. We are producing 0.142 mol/s of benzene at a purity of 82\%. However, we are losing around 42\% of benzene in F101 vapor outlet stream. Let us try to minimize this cost such that:- we are producing at least 0.15 mol/s of benzene in F102 vapor outlet i.e. our product stream- purity of benzene i.e. the mole fraction of benzene in F102 vapor outlet is at least 80%- restricting the benzene loss in F101 vapor outlet to less than 20%For this problem, our decision variables are as follows:- H101 outlet temperature- R101 cooling duty provided- F101 outlet temperature- F102 outlet temperature- F102 deltaP in the flash tank Let us declare our objective function for this problem.
###Code
m.fs.objective = Objective(expr=m.fs.operating_cost)
###Output
_____no_output_____
###Markdown
Now, we need to unfix the decision variables as we had solved a square problem (degrees of freedom = 0) until now.
###Code
m.fs.H101.outlet.temperature.unfix()
m.fs.R101.heat_duty.unfix()
m.fs.F101.vap_outlet.temperature.unfix()
m.fs.F102.vap_outlet.temperature.unfix()
###Output
_____no_output_____
###Markdown
Inline Exercise:Let us now unfix the remaining variable which is F102 pressure drop (F102.deltaP) Use Shift+Enter to run the cell once you have typed in your code.
###Code
m.fs.F102.deltaP.unfix()
###Output
_____no_output_____
###Markdown
Next, we need to set bounds on these decision variables to values shown below: - H101 outlet temperature [500, 600] K - R101 outlet temperature [600, 800] K - F101 outlet temperature [298, 450] K - F102 outlet temperature [298, 450] K - F102 outlet pressure [105000, 110000] PaLet us first set the variable bound for the H101 outlet temperature as shown below:
###Code
m.fs.H101.outlet.temperature[0].setlb(500)
m.fs.H101.outlet.temperature[0].setub(600)
###Output
_____no_output_____
###Markdown
Inline Exercise:Now, set the variable bound for the R101 outlet temperature.Use Shift+Enter to run the cell once you have typed in your code.
###Code
m.fs.R101.outlet.temperature[0].setlb(600)
m.fs.R101.outlet.temperature[0].setub(800)
###Output
_____no_output_____
###Markdown
Let us fix the bounds for the rest of the decision variables.
###Code
m.fs.F101.vap_outlet.temperature[0].setlb(298.0)
m.fs.F101.vap_outlet.temperature[0].setub(450.0)
m.fs.F102.vap_outlet.temperature[0].setlb(298.0)
m.fs.F102.vap_outlet.temperature[0].setub(450.0)
m.fs.F102.vap_outlet.pressure[0].setlb(105000)
m.fs.F102.vap_outlet.pressure[0].setub(110000)
###Output
_____no_output_____
###Markdown
Now, the only things left to define are our constraints on overhead loss in F101, product flow rate and purity in F102. Let us first look at defining a constraint for the overhead loss in F101 where we are restricting the benzene leaving the vapor stream to less than 20 \% of the benzene available in the reactor outlet.
###Code
m.fs.overhead_loss = Constraint(
expr=m.fs.F101.vap_outlet.flow_mol_phase_comp[0, "Vap", "benzene"] <=
0.20 * m.fs.R101.outlet.flow_mol_phase_comp[0, "Vap", "benzene"])
###Output
_____no_output_____
###Markdown
Inline Exercise:Now, add the constraint such that we are producing at least 0.15 mol/s of benzene in the product stream which is the vapor outlet of F102. Let us name this constraint as m.fs.product_flow. Use Shift+Enter to run the cell once you have typed in your code.
###Code
m.fs.product_flow = Constraint(
expr=m.fs.F102.vap_outlet.flow_mol_phase_comp[0, "Vap", "benzene"] >=
0.15)
###Output
_____no_output_____
###Markdown
Let us add the final constraint on product purity, i.e. the mole fraction of benzene in the product stream, such that it is at least 80%.
###Code
m.fs.product_purity = Constraint(expr=m.fs.purity >= 0.80)
###Output
_____no_output_____
###Markdown
We have now defined the optimization problem and we are now ready to solve this problem.
###Code
results = solver.solve(m, tee=True)
# For testing purposes
from pyomo.environ import TerminationCondition
assert results.solver.termination_condition == TerminationCondition.optimal
###Output
_____no_output_____
###Markdown
Optimization Results---Display the results and product specifications
###Code
print('operating cost = $', value(m.fs.operating_cost))
print()
print('Product flow rate and purity in F102')
m.fs.F102.report()
print()
print('benzene purity = ', value(m.fs.purity))
print()
print('Overhead loss in F101')
m.fs.F101.report()
###Output
operating cost = $ 312786.3383410268
Product flow rate and purity in F102
====================================================================================
Unit : fs.F102 Time: 0.0
------------------------------------------------------------------------------------
Unit Performance
Variables:
Key : Value : Fixed : Bounds
Heat Duty : 8377.0 : False : (None, None)
Pressure Change : -2.4500e+05 : False : (None, None)
------------------------------------------------------------------------------------
Stream Table
Inlet Vapor Outlet Liquid Outlet
flow_mol_phase_comp ('Liq', 'benzene') 0.21743 1.0000e-08 0.067425
flow_mol_phase_comp ('Liq', 'toluene') 0.070695 1.0000e-08 0.037507
flow_mol_phase_comp ('Liq', 'hydrogen') 2.8812e-07 1.0000e-08 1.0493e-07
flow_mol_phase_comp ('Liq', 'methane') 2.8812e-07 1.0000e-08 1.0493e-07
flow_mol_phase_comp ('Vap', 'benzene') 1.0000e-08 0.15000 1.0000e-08
flow_mol_phase_comp ('Vap', 'toluene') 1.0000e-08 0.033189 1.0000e-08
flow_mol_phase_comp ('Vap', 'hydrogen') 1.0000e-08 1.9319e-07 1.0000e-08
flow_mol_phase_comp ('Vap', 'methane') 1.0000e-08 1.9319e-07 1.0000e-08
temperature 301.88 362.93 362.93
pressure 3.5000e+05 1.0500e+05 1.0500e+05
====================================================================================
benzene purity = 0.8188276578112281
Overhead loss in F101
====================================================================================
Unit : fs.F101 Time: 0.0
------------------------------------------------------------------------------------
Unit Performance
Variables:
Key : Value : Fixed : Bounds
Heat Duty : -56354. : False : (None, None)
Pressure Change : 0.0000 : True : (None, None)
------------------------------------------------------------------------------------
Stream Table
Inlet Vapor Outlet Liquid Outlet
flow_mol_phase_comp ('Liq', 'benzene') 4.3534e-08 1.0000e-08 0.21743
flow_mol_phase_comp ('Liq', 'toluene') 7.5866e-07 1.0000e-08 0.070695
flow_mol_phase_comp ('Liq', 'hydrogen') 1.0000e-08 1.0000e-08 2.8812e-07
flow_mol_phase_comp ('Liq', 'methane') 1.0000e-08 1.0000e-08 2.8812e-07
flow_mol_phase_comp ('Vap', 'benzene') 0.27178 0.054356 1.0000e-08
flow_mol_phase_comp ('Vap', 'toluene') 0.076085 0.0053908 1.0000e-08
flow_mol_phase_comp ('Vap', 'hydrogen') 0.35887 0.35887 1.0000e-08
flow_mol_phase_comp ('Vap', 'methane') 1.2414 1.2414 1.0000e-08
temperature 696.12 301.88 301.88
pressure 3.5000e+05 3.5000e+05 3.5000e+05
====================================================================================
###Markdown
Display optimal values for the decision variables
###Code
print('Optimal Values')
print()
print('H101 outlet temperature = ', value(m.fs.H101.outlet.temperature[0]), 'K')
print()
print('R101 outlet temperature = ', value(m.fs.R101.outlet.temperature[0]), 'K')
print()
print('F101 outlet temperature = ', value(m.fs.F101.vap_outlet.temperature[0]), 'K')
print()
print('F102 outlet temperature = ', value(m.fs.F102.vap_outlet.temperature[0]), 'K')
print('F102 outlet pressure = ', value(m.fs.F102.vap_outlet.pressure[0]), 'Pa')
###Output
Optimal Values
H101 outlet temperature = 500.0 K
R101 outlet temperature = 696.1161004637528 K
F101 outlet temperature = 301.8784760569282 K
F102 outlet temperature = 362.9347683054898 K
F102 outlet pressure = 105000.0 Pa
|
Chatbot_StuyHacksX_Display_Copy.ipynb | ###Markdown
Preliminary Initialization
###Code
# imports
import pandas as pd
from google.colab import files
# for reuploads / edits to main datasheet
!rm all-merge.csv
# uploads
data_csv = files.upload()
filename = "all-merge.csv"
# grabbing from drive, because it's nicer this way
filename_end = input("Filename? ")
filename = "drive/MyDrive/Colab Notebooks/chatbot/" + filename_end
data = pd.read_csv(filename)
authors = data['Author']
content = data['Content']
time_diff = data['TimeDiff']
conv_id = data['ConvID']
is_custom_user = data['IsSpecUser']
corpus_id = data['CorpusID']
# WE NO LONGER FORCE PRE-FORMATTING DATA
# data feature processing
import time, datetime
authors = data['Author']
content = data['Content']
time_data = data['Date']
time_diff_list = []
conv_id_list = []
conv_id_list.append(0)
is_custom_user = []
def convert_datetime(datetime_str):
return datetime.datetime.strptime(datetime_str, "%d-%b-%y %I:%M %p").timestamp()
for i in range(1, len(time_data)):
time_diff_to_app = (convert_datetime(time_data[i])-convert_datetime(time_data[i-1]))/60
time_diff_list.append(time_diff_to_app)
if time_diff_to_app >= 30:
conv_id_list.append(conv_id_list[-1]+1)
else:
conv_id_list.append(conv_id_list[-1])
time_diff = pd.Series(time_diff_list)
conv_id = pd.Series(conv_id_list)
# added user system for future training off of different people
sel_user = input("User? ")
###Output
_____no_output_____
###Markdown
Short note about CorpusID: this functionality may be slightly incorrect, as it gets 25 distinct databases in all-merge despite there supposedly being 25, meaning that some bases have been incorrectly merged. This **will** require triage. "Count"-Type Analysis: e.g. making new features (deriving from current data). These analyses rely on preformatted csv files with additional data semi-manually added. 1. Performs a total message count and analyzes the n (e.g. 100) most used words. 2. Counts the most prominent authors sending messages prior to those of sel_user (e.g. the most common people conversed with/after). 3. Counts participation in all unique conversations (conversations defined as exchanges where the time between messages is <30 min); a small toy illustration of this splitting rule follows below. 4. Counts distinct frequent groups of people conversed with. 5. Counts the amount of words from each user (basic).
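To make the 30-minute conversation-splitting rule concrete, here is a small toy illustration; the timestamps are made up and only assume the same `%d-%b-%y %I:%M %p` format parsed by the preprocessing cell above.
```
import datetime

toy_dates = ["01-Jan-21 10:00 AM", "01-Jan-21 10:05 AM", "01-Jan-21 11:00 AM"]

def to_ts(s):
    return datetime.datetime.strptime(s, "%d-%b-%y %I:%M %p").timestamp()

conv_ids = [0]
for i in range(1, len(toy_dates)):
    gap_minutes = (to_ts(toy_dates[i]) - to_ts(toy_dates[i - 1])) / 60
    conv_ids.append(conv_ids[-1] + 1 if gap_minutes >= 30 else conv_ids[-1])

print(conv_ids)  # [0, 0, 1] -- the 55-minute gap starts a new conversation
```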
###Code
usr_message_count = 0
total_word_count = 0
words = []
word_count = []
for i in range (content.size):
if str(authors.get(i)) == sel_user:
usr_message_count += 1;
word_in_row = str(content.get(i)).split()
for j in word_in_row:
words.append(j)
total_word_count += 1
wordset = set(words)
print("Total messages from " + sel_user + ": " + str(usr_message_count))
for i in wordset:
word_count.append([i, words.count(i)])
excluded_words = set()
most_used_words = []
for x in range(100): # bad sorting algorithm
max_i = 0
max_i_word = ""
for i in word_count:
if len(i[0]) > 0 and i[0] not in excluded_words:
if i[1] > max_i:
max_i_word = i[0]
max_i = i[1]
excluded_words.add(max_i_word)
most_used_words.append([max_i_word, max_i])
print("Most used words: " + str(most_used_words))
print("Total words: " + str(total_word_count) + " at average of " + str(total_word_count/usr_message_count) + " wpm")
authorlist = []
author_count = []
for i in range (authors.size):
if str(authors.get(i)) == sel_user:
authorlist.append(authors.get(i-1))
authorset = set(authorlist)
for i in authorset:
author_count.append([i, authorlist.count(i)])
print("Authors prior to send count " + str(author_count))
convset = set()
for i in range(conv_id.size):
if str(authors.get(i)) == sel_user:
convset.add(conv_id.get(i))
print("Got " + str(int(conv_id.get(conv_id.size-1))) + " distinct conversations, participation in " + str(len(convset)) + " at rate " + str(len(convset)/int(conv_id.get(conv_id.size-1))))
authors_permutations = []
convID = -1
for i in range(conv_id.size):
if conv_id.get(i) in convset:
if conv_id.get(i) != convID:
authors_permutations.append(set())
else:
authors_permutations[len(authors_permutations)-1].add(authors.get(i))
convID = conv_id.get(i)
# creates set of permutations (list)
authors_permutations_included = []
for i in authors_permutations:
if i in authors_permutations_included:
continue
else:
authors_permutations_included.append(i)
authors_permutations_count = []
for i in authors_permutations_included:
authors_permutations_count.append([i, authors_permutations.count(i)])
# get unsorted list of distinct conversational groups
print(authors_permutations_count)
# bug noticed: prints [set(), 382], but.. whatever
sel_user_word_count = 0
user2_word_count = 0
user2 = input("Select user comparator? ")
for i in range(content.size):
try:
split_list = str(content[i]).split(' ')
    except:
        print("Error at i count", i)
        continue  # skip this row so a stale or undefined split_list is not reused
if authors[i] == sel_user:
sel_user_word_count += len(split_list)
elif authors[i] == user2:
user2_word_count += len(split_list)
print(sel_user, "at", sel_user_word_count, "words.")
print(user2, "at", user2_word_count, "words.")
print("Ratio is", sel_user_word_count/user2_word_count)
###Output
_____no_output_____
###Markdown
Short noted bug that within authors_permutation there is an entry being [set(), 382], which is problematic but not critical. This may be addressed later.
Important bug usr_message_count is nonfunctional, ignored temporarily. Requires triage. Sequence to SequenceVectorizing the dictionary of distinct words as a Vocabulary object, grabbing specialized conversation pairs, and training a model to respond. This is the core of the project, inspired by and heavily relying on content from [here](https://medium.com/swlh/end-to-end-chatbot-using-sequence-to-sequence-architecture-e24d137f9c78).
###Code
# imports for this section
import unicodedata
import re
import random
import torch
from torch import nn
import itertools
import os
# defining how a vocabulary object is set up
# relies on running above code to get count of distinct words as word_count
PAD = 0
SRT = 1
END = 2
class Vocabulary:
def __init__(self, name):
self.name = name
self.trimmed = False
self.word_to_index = {}
self.word_to_count = {}
self.index_to_word = {PAD: "PAD", SRT: "SOS", END: "EOS"}
self.num_words = 3
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWordNoContext(word)
def addWordNoContext(self, word):
if word not in self.word_to_index:
self.word_to_index[word] = self.num_words
self.word_to_count[word] = 1
self.index_to_word[self.num_words] = word
self.num_words += 1
else:
self.word_to_count[word] += 1
def addWord(self, word, index, count):
self.word_to_index[word] = index
self.word_to_count[word] = count
self.index_to_word[index] = word
self.num_words += 1
# functions to fix bad characters and clean up messages, optimizing convergence
def fixASCII(string):
return ''.join(
c for c in unicodedata.normalize('NFD', string) if unicodedata.category(c) != 'Mn'
)
def fixString(string):
string = fixASCII(string.lower().strip())
string = re.sub(r"([.!?])", r" \1", string)
string = re.sub(r"[^a-zA-Z.!?]+", r" ", string)
string = re.sub(r"\s+", r" ", string).strip()
return string
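# Illustrative check of the cleanup above (toy input, not part of the pipeline):
# accents are stripped, punctuation is spaced out, and other symbols become spaces.
print(fixString("Héllo!!  How're you??"))  # -> "hello ! ! how re you ? ?"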
# normalizing words and generating the Vocabulary object for the complete dataset
# not actually relevant to final model?
print("Got", len(word_count), "distinct words.")
valid_word_count = []
for i in word_count:
if i[0] == fixString(i[0]):
valid_word_count.append(i)
print("Got", len(valid_word_count), "distinct valid words.")
master_voc = Vocabulary("all-merge")
for i in range(len(valid_word_count)):
master_voc.addWord(valid_word_count[i][0], i, valid_word_count[i][1])
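# Toy sanity check of the Vocabulary bookkeeping defined above (illustrative only):
# indices 0-2 are reserved for PAD/SOS/EOS, so the first real word lands at index 3.
toy_voc = Vocabulary("toy")
toy_voc.addSentence("hello world hello")
print(toy_voc.num_words)      # 5 (3 reserved tokens + "hello" + "world")
print(toy_voc.word_to_index)  # {'hello': 3, 'world': 4}
print(toy_voc.word_to_count)  # {'hello': 2, 'world': 1}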
###Output
_____no_output_____
###Markdown
Generating Sentence Pair Objects: Various methods of generating sentence-pair objects for training the model. This section will also build specific Vocabulary objects for each distinct conversation filter. 1. "Dumb" grabber between two users. Considers only previous lines, offers little context, and scans the entire corpus: weak for serious training. 2. "Less dumb" grabber between the selected training user and any other user. Considers only previous lines, offers little context, and scans the entire corpus. Marginally better than the other, but it may also offer less clarity/personality because of the different interaction patterns between different users.
###Code
# get dumb user grabber user
user = input("User for dumb grabber: ")
# "dumb" grabber: only contextualizes single line conversation between two distinct users
pairs = []
vocabulary = Vocabulary("Dumb 2-user grabber")
for i in range(1, len(content)):
if authors[i] == sel_user and authors[i-1] == user:
try:
curr_cont = fixString(content[i])
prev_cont = fixString(content[i-1])
pairs.append([prev_cont, curr_cont])
vocabulary.addSentence(curr_cont)
vocabulary.addSentence(prev_cont)
except:
continue
print("Discriminant with 2-user basic filter grabbed", len(pairs), "distinct pairs across entire corpus.")
print("Corresponding Vocabulary object with", vocabulary.num_words, "distinct words.")
# "less dumb" grabber: builds pairs out of anyone talking to user
pairs = []
vocabulary = Vocabulary("Less dumb 2-user grabber")
for i in range(1, len(content)):
if authors[i] == sel_user and [authors[i-1]] != sel_user:
try:
curr_cont = fixString(content[i])
prev_cont = fixString(content[i-1])
pairs.append([prev_cont, curr_cont])
vocabulary.addSentence(curr_cont)
vocabulary.addSentence(prev_cont)
except:
continue
print("Discriminant with any-user basic filter grabbed", len(pairs), "distinct pairs across entire corpus.")
print("Corresponding Vocabulary object with", vocabulary.num_words, "distinct words.")
###Output
_____no_output_____
###Markdown
Data Preparation: Preparing batches for use in the model.
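To see what the padding helpers below produce, here is a tiny illustration with two made-up token sequences: `zip_longest` pads the shorter one with `PAD` (0) and returns the batch in time-major order (sequence length by batch size), which is the layout the GRU layers expect.
```
import itertools

PAD = 0
toy_batch = [[5, 9, 2], [7, 2]]  # two made-up index sequences, each ending in EOS (=2)
padded = list(itertools.zip_longest(*toy_batch, fillvalue=PAD))
print(padded)  # [(5, 7), (9, 2), (2, 0)] -- rows are time steps, columns are batch items
```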
###Code
# utility functions
# multi-grabs indexes from vocabulary
def getIndexesFromSent(voc, sent):
return [voc.word_to_index[word] for word in sent.split(' ')] + [END]
# generating padding
def genPadding(batch, fillvalue=PAD):
return list(itertools.zip_longest(*batch, fillvalue=fillvalue))
# returns binary matrix adjusting for padding
def binaryMatrix(batch, value=PAD):
matrix = []
for i, seq in enumerate(batch):
matrix.append([])
for token in seq:
if token == PAD:
matrix[i].append(0)
else:
matrix[i].append(1)
return matrix
# padding functions
# return input tensor and corresponding lengths
def inputVariable(batch, voc):
idxs_batch = [getIndexesFromSent(voc, sentence) for sentence in batch]
lengths = torch.tensor([len(indexes) for indexes in idxs_batch])
padded_list = genPadding(idxs_batch)
padded_variable = torch.LongTensor(padded_list)
return padded_variable, lengths
# return target tensor, padding mask, and maximum length
def outputVariable(batch, voc):
idxs_batch = [getIndexesFromSent(voc, sentence) for sentence in batch]
max_len = max([len(indexes) for indexes in idxs_batch])
padded_list = genPadding(idxs_batch)
mask = binaryMatrix(padded_list)
mask = torch.ByteTensor(mask)
padded_variable = torch.LongTensor(padded_list)
return padded_variable, mask, max_len
# converts batch into train data
def batch_to_data(voc, batch):
batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True)
input_batch = []
output_batch = []
for pair in batch:
input_batch.append(pair[0])
output_batch.append(pair[1])
inpt, lengths = inputVariable(input_batch, voc)
output, mask, max_len = outputVariable(output_batch, voc)
return inpt, lengths, output, mask, max_len
# example
batches = batch_to_data(vocabulary, [random.choice(pairs) for i in range(5)])
input_var, lengths, target_var, mask, max_len = batches
###Output
_____no_output_____
###Markdown
The Model: The model in this case revolves around 3 layers. 1. An encoder that embeds words into trainable vector representations and processes them with a bidirectional GRU. 2. An attention layer that prioritizes different parts of the sentence for "understanding"; for this we use a Luong attention layer (a toy shape check of this step follows below). 3. A decoder that converts the model's inner "thoughts" into output for the user!
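As a quick shape check of the 'dot' attention used below, here is a toy computation; the tensor sizes are made up and are not the training settings.
```
import torch

rnn_output = torch.randn(1, 3, 8)        # one decoder step: (1, batch, hidden)
encoder_outputs = torch.randn(5, 3, 8)   # (seq_len, batch, hidden)

scores = torch.sum(rnn_output * encoder_outputs, dim=2)   # (seq_len, batch)
weights = torch.softmax(scores.t(), dim=1).unsqueeze(1)   # (batch, 1, seq_len)
context = weights.bmm(encoder_outputs.transpose(0, 1))    # (batch, 1, hidden)
print(weights.shape, context.shape)
```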
###Code
# tensordash
!pip install tensor-dash
from tensordash.torchdash import Torchdash
histories = Torchdash(ModelName="Chatbot", email="[email protected]")
# encoder
class EncoderRNN(nn.Module):
def __init__(self, hidden_size, embedding, n_layers=1, dropout=0):
super(EncoderRNN, self).__init__()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embedding = embedding
self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers==1 else dropout), bidirectional=True)
def forward(self, input_sequence, input_lengths, hidden=None):
embedded = self.embedding(input_sequence)
packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
outputs, hidden = self.gru(packed, hidden)
outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs)
outputs = outputs[:, :, :self.hidden_size] + outputs[:, :, self.hidden_size:]
return outputs, hidden
# attention layer
class Attn(nn.Module):
def __init__(self, method, hidden_size):
super(Attn, self).__init__()
self.method = method
if self.method not in ['dot', 'general', 'concat']:
raise ValueError(self.method, "is not a valid attention method.")
self.hidden_size = hidden_size
if self.method == 'general':
self.attn = nn.Linear(self.hidden_size, self.hidden_size)
elif self.method == 'concat':
self.attn = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.v = nn.Parameter(torch.FloatTensor(self.hidden_size))
def dot_score(self, hidden, encoder_output):
return torch.sum(hidden * encoder_output, dim=2)
def general_score(self, hidden, encoder_output):
energy = self.attn(encoder_output)
return torch.sum(hidden * energy, dim=2)
def concat_score(self, hidden, encoder_output):
energy = self.attn(encoder_output)
return torch.sum(hidden * energy, dim=2)
def forward(self, hidden, encoder_outputs):
if self.method == 'general':
attn_energies = self.general_score(hidden, encoder_outputs)
elif self.method == 'concat':
attn_energies = self.concat_score(hidden, encoder_outputs)
elif self.method == 'dot':
attn_energies = self.dot_score(hidden, encoder_outputs)
attn_energies = attn_energies.t()
return nn.functional.softmax(attn_energies, dim=1).unsqueeze(1)
# decoder
class LuongAttnDecoderRNN(nn.Module):
def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1):
super(LuongAttnDecoderRNN, self).__init__()
self.attn_model = attn_model
self.hidden_size = hidden_size
self.output_size = output_size
self.n_layers = n_layers
self.dropout = dropout
self.embedding = embedding
self.embedding_dropout = nn.Dropout(dropout)
self.gru = nn.GRU(self.hidden_size, self.hidden_size, self.n_layers, dropout=(0 if self.n_layers == 1 else dropout))
self.concat = nn.Linear(self.hidden_size * 2, self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
self.attn = Attn(self.attn_model, self.hidden_size)
def forward(self, input_step, last_hidden, encoder_outputs):
embedded = self.embedding(input_step)
embedded = self.embedding_dropout(embedded)
rnn_output, hidden = self.gru(embedded, last_hidden)
attn_weights = self.attn(rnn_output, encoder_outputs)
context = attn_weights.bmm(encoder_outputs.transpose(0, 1))
rnn_output = rnn_output.squeeze(0)
context = context.squeeze(1)
concat_input = torch.cat((rnn_output, context), 1)
concat_output = torch.tanh(self.concat(concat_input))
output = self.out(concat_output)
output = nn.functional.softmax(output, dim=1)
return output, hidden
# loss function
def loss_func(inpt, target, mask):
n_total = mask.sum()
cross_entropy = -torch.log(torch.gather(inpt, 1, target.view(-1, 1)).squeeze(1))
loss = cross_entropy.masked_select(mask).mean()
loss = loss.to(device)
return loss, n_total.item()
# training functions
device = torch.device("cpu")
def train(input_variable, lengths, target_variable, mask, max_len, encoder, decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip):
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_variable = input_variable.to(device)
lengths = lengths.to(device)
target_variable = target_variable.to(device)
mask = mask.to(device)
loss = 0
print_losses = []
n_totals = 0
encoder_outputs, encoder_hidden = encoder(input_variable, lengths)
decoder_input = torch.LongTensor([[SRT for i in range (batch_size)]])
decoder_input = decoder_input.to(device)
decoder_hidden = encoder_hidden[:decoder.n_layers]
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
if use_teacher_forcing:
for t in range(max_len):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs)
decoder_input = target_variable[t].view(1, -1)
mask_loss, n_total = loss_func(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item() * n_total)
n_totals += n_total
else:
for t in range(max_len):
decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs)
_, topi = decoder_output.topk(1)
decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]])
decoder_input = decoder_input.to(device)
mask_loss, n_total = loss_func(decoder_output, target_variable[t], mask[t])
loss += mask_loss
print_losses.append(mask_loss.item() * n_total)
n_totals += n_total
loss.backward()
_ = nn.utils.clip_grad_norm_(encoder.parameters(), clip)
_ = nn.utils.clip_grad_norm_(decoder.parameters(), clip)
encoder_optimizer.step()
decoder_optimizer.step()
return sum(print_losses) / n_totals
def train_iterations(model_name, vocabulary, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, embedding, encoder_n_layers, decoder_n_layers, n_iterations, batch_size, print_rate, save_rate, clip):
training_batches = [batch_to_data(vocabulary, [random.choice(pairs) for i in range(batch_size)]) for ii in range(n_iterations)]
start_iteration = 1 # should be 1
print_loss = 0
for iteration in range(start_iteration, n_iterations + 1):
training_batch = training_batches[iteration - 1]
input_variable, lengths, target_variable, mask, max_len = training_batch
loss = train(input_variable, lengths, target_variable, mask, max_len, encoder, decoder, embedding, encoder_optimizer, decoder_optimizer, batch_size, clip)
print_loss += loss
# tensordash
histories.sendLoss(loss = loss, epoch = iteration, total_epochs = n_iterations+1)
if iteration % print_rate == 0:
print_loss_avg = print_loss / print_rate
train_loss.append(print_loss_avg)
print("Iteration {}; Percent complete: {:.1f}%; Average loss: {:.4f}".format(iteration, iteration/n_iterations*100, print_loss_avg))
print_loss = 0
if iteration % save_rate == 0:
directory = os.path.join("drive/MyDrive/Colab Notebooks/chatbot/saves", sel_user, model_name, "all", '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size))
if not os.path.exists(directory):
os.makedirs(directory)
torch.save({
'iteration': iteration,
'en': encoder.state_dict(),
'de': decoder.state_dict(),
'en_opt': encoder_optimizer.state_dict(),
'de_opt': decoder_optimizer.state_dict(),
'loss': loss,
'voc_dict': vocabulary.__dict__,
'embedding': embedding.state_dict()
}, os.path.join(directory, '{}_{}.tar'.format(iteration, 'checkpoint')))
# searcher
class GreedySearchDecoder(nn.Module):
def __init__(self, encoder, decoder, use_multinomial=False):
super(GreedySearchDecoder, self).__init__()
self.encoder = encoder
self.decoder = decoder
self.use_multinomial = use_multinomial
def forward(self, input_sequence, input_length, max_len):
encoder_outputs, encoder_hidden = self.encoder(input_sequence, input_length)
decoder_hidden = encoder_hidden[:decoder.n_layers]
decoder_input = torch.ones(1, 1, device=device, dtype=torch.long) * SRT
all_tokens = torch.zeros([0], device=device, dtype=torch.long)
all_scores = torch.zeros([0], device=device)
if not self.use_multinomial:
for i in range(max_len):
decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs)
decoder_scores, decoder_input = torch.max(decoder_output, dim=1)
all_tokens = torch.cat((all_tokens, decoder_input), dim=0)
all_scores = torch.cat((all_scores, decoder_scores), dim=0)
decoder_input = torch.unsqueeze(decoder_input, 0)
return all_tokens, all_scores
else:
for i in range(max_len):
decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs)
decoder_output_multi = decoder_output.data.view(-1).div(0.7).exp()
                decoder_input = torch.multinomial(decoder_output_multi, 1)  # sample from the temperature-scaled distribution
decoder_scores, _ = torch.max(decoder_output, dim=1)
all_tokens = torch.cat((all_tokens, decoder_input), dim=0)
all_scores = torch.cat((all_scores, decoder_scores), dim=0)
decoder_input = torch.unsqueeze(decoder_input, 0)
return all_tokens, all_scores
# training the model
# params
clip = 50.0
teacher_forcing = 0.9
alpha = 0.0001
decoder_learning = 5.0
n_iter = 500 # from 500
print_rate = 50
save_rate = 100
teacher_forcing_ratio = 1.0
model_name = 'cb_model'
attn_model = 'dot'
hidden_size = 512
encoder_n_layers = 2
decoder_n_layers = 2
dropout = 0.1
batch_size = 64
train_loss = []
embedding = nn.Embedding(vocabulary.num_words, hidden_size)
encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, vocabulary.num_words, decoder_n_layers, dropout)
encoder_optimizer = torch.optim.Adam(encoder.parameters(), lr=alpha)
decoder_optimizer = torch.optim.Adam(decoder.parameters(), lr=alpha * decoder_learning)
encoder.train()
decoder.train()
# the training function
train_iterations(model_name, vocabulary, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, embedding, encoder_n_layers, decoder_n_layers, n_iter, batch_size, print_rate, save_rate, clip)
# loading models
spec_filename = "500_checkpoint.tar"
load_filename = os.path.join("drive/MyDrive/Colab Notebooks/chatbot/saves", sel_user, model_name, "all", '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size), spec_filename)
checkpoint = torch.load(load_filename)
encoder_sd = checkpoint['en']
decoder_sd = checkpoint['de']
encoder_optimizer_sd = checkpoint['en_opt']
decoder_optimizer_sd = checkpoint['de_opt']
embedding_sd = checkpoint['embedding']
vocabulary.__dict__ = checkpoint['voc_dict']
embedding.load_state_dict(embedding_sd)
encoder.load_state_dict(encoder_sd)
decoder.load_state_dict(decoder_sd)
encoder_optimizer.load_state_dict(encoder_optimizer_sd)
decoder_optimizer.load_state_dict(decoder_optimizer_sd)
encoder.to(device)
decoder.to(device)
# evaluation
def evaluate(encoder, decoder, searcher, voc, sent, temperature=False):
idxs_batch = [getIndexesFromSent(voc, sent)]
lengths = torch.tensor([len(indexes) for indexes in idxs_batch])
input_batch = torch.LongTensor(idxs_batch).transpose(0, 1)
input_batch = input_batch.to(device)
lengths = lengths.to(device)
tokens, scores = searcher(input_batch, lengths, 12)
decoded_words = [voc.index_to_word[token.item()] for token in tokens]
return decoded_words
def do_evaluate(encoder, decoder, searcher, voc):
input_sent = input()
if input_sent == "exitexit":
print("Quit.")
exit()
input_sent = fixString(input_sent)
outputs = evaluate(encoder, decoder, searcher, voc, input_sent)
outputs[:] = [x for x in outputs if not (x=='EOS' or x=='PAD')]
print("Says:", ' '.join(outputs))
# change these to encoder, decoder when not loading
searcher = GreedySearchDecoder(encoder, decoder)
# evaluation when just trained
print("exitexit to stop.")
while True:
do_evaluate(encoder, decoder, searcher, vocabulary)
# todo:
# fix keyerrors for unknown words in input (probably isn't fixable)
# triage bugs noted in text comments/fix text comments?
###Output
_____no_output_____
###Markdown
Discord Implementation: Code for running a Discord bot with this model. This code does not run online but can be implemented server-side.
###Code
!pip install discord
TOKEN = input("Token: ")
import discord
client = discord.Client()
@client.event
async def on_ready():
print('Logged on as user {0.user}'.format(client))
@client.event
async def on_message(message):
if message.author == client.user:
return
if message.content.startswith('$hello'):
await message.channel.send('Hello!')
    if message.content.startswith('$hey'):
        content_str = message.content[4:]
        try:
            # do_evaluate() reads from stdin and returns None, so build the reply
            # directly with evaluate() on the message text instead
            outputs = evaluate(encoder, decoder, searcher, vocabulary, fixString(content_str))
            outputs[:] = [x for x in outputs if not (x == 'EOS' or x == 'PAD')]
            await message.channel.send(' '.join(outputs))
        except:
            await message.channel.send('Error, unknown word.')
client.run(TOKEN)
###Output
_____no_output_____ |
test/ry/2-features.ipynb | ###Markdown
Features and Objectives: This doc is mostly text, explaining the general concept of features, listing the ones defined in rai, and explaining how they define objectives for optimization. At the bottom there are also examples on the collision features. Features: We assume a single configuration $x$, or a whole set of configurations $\{x_1,..,x_T\}$. Each $x_i \in\mathbb{R}$ contains the DOFs of that configuration. A feature $\phi$ is a differentiable mapping $$\phi: x \mapsto \mathbb{R}^D$$ of a single configuration into some $D$-dimensional space, or a mapping $$\phi: (x_0,x_1,..,x_k) \mapsto \mathbb{R}^D$$ of a $(k+1)$-tuple of configurations to a $D$-dimensional space. The rai code implements many features; most of them are accessible via a feature symbol (FS). They are declared in https://github.com/MarcToussaint/rai/blob/master/rai/Kin/featureSymbols.h Here is a table of feature symbols, with the respective dimensionality $D$, the default order $k$, and a description: | FS | frames | $D$ | $k$ | description ||:---:|:---:|:---:|:---:|:---:|| position | {o1} | 3 || 3D position of o1 in world coordinates || positionDiff | {o1,o2} | 3 || difference of 3D positions of o1 and o2 in world coordinates || positionRel | {o1,o2} | 3 || 3D position of o1 in o2 coordinates || quaternion | {o1} | 4 || 4D quaternion of o1 in world coordinates\footnote{There are ways to handle the invariance w.r.t.\ quaternion sign properly.} || quaternionDiff | {o1,o2} | 4 || ... || quaternionRel | {o1,o2} | 4 || ... || pose | {o1} | 7 || 7D pose of o1 in world coordinates || poseDiff | {o1,o2} | 7 || ... || poseRel | {o1,o2} | 7 || ... || vectorX | {o1} | 3 || The x-axis of frame o1 rotated back to world coordinates || vectorXDiff | {o1,o2} | 3 || The difference of the above for two frames o1 and o2 || vectorXRel | {o1,o2} | 3 || The x-axis of frame o1 rotated as seen from the frame o2 || vectorY... | | | | same as above || scalarProductXX | {o1,o2} | 1 || The scalar product of the x-axis of frame o1 with the x-axis of frame o2 || scalarProduct... | {o1,o2} | | | as above || gazeAt | {o1,o2} | 2 | | The 2D projection of the origin of frame o2 onto the xy-plane of frame o1 || angularVel | {o1} | 3 | 1 | The angular velocity of frame o1 across two configurations || accumulatedCollisions | {} | 1 | | The sum of collision penetrations; when negative/zero, nothing is colliding || jointLimits | {} | 1 | | The sum of joint limit penetrations; when negative/zero, all joint limits are ok || distance | {o1,o2} | 1 | | The NEGATIVE distance between convex meshes o1 and o2, positive for penetration || qItself | {} | $n$ | | The configuration joint vector || aboveBox | {o1,o2} | 4 | | when all negative, o1 is above (inside support of) the box o2 || insideBox | {o1,o2} | 6 | | when all negative, o1 is inside the box o2 || standingAbove | | | | ? | A feature is typically defined by: * The feature symbol (`FS_...` in cpp; `FS....` in python) * The set of frames it refers to * Optionally: A target, which changes the zero-point of the feature (optimization typically tries to drive features to zero, see below) * Optionally: A scaling, that can also be a matrix to down-project a feature * Optionally: The order $k$, which can make the feature a velocity or acceleration feature. Target and scale redefine a feature to become $$ \phi(x) \gets \texttt{scale} \cdot (\phi(x) - \texttt{target})$$ The target needs to be a $D$-dim vector.
The scale can be a matrix, which projects features; e.g., a 3D position to just the $x$-position. The order of a feature is usually $k=0$, meaning that it is defined over a single configuration only. $k=1$ means that it is defined over two configurations (1st order Markov), and redefines the feature to become the difference or velocity $$ \phi(x_1,x_2) \gets \frac{1}{\tau}(\phi(x_2) - \phi(x_1))$$ $k=2$ means that it is defined over three configurations (2nd order Markov), and redefines the feature to become the acceleration $$ \phi(x_1,x_2,x_3) \equiv \frac{1}{\tau^2}(\phi(x_1) + \phi(x_3) - 2 \phi(x_2))$$ Examples: ```(FS.position, {'hand'})``` is the 3D position of the hand in world coordinates. ```(FS.positionRel, {'handL', 'handR'}, scale=[[0,0,1]], target=[0.1])``` is the z-position of the left hand measured in the frame of the right hand, with target 10 centimeters. ```(FS.position, {'handL'}, order=1)``` is the 3D velocity of the left hand in world coordinates. ```(FS.scalarProductXX, {'handL', 'handR'}, target=[1])``` says that the scalar product of the x-axes (e.g. directions of the index finger) of both hands should equal 1, which means they are aligned. ```(FS.scalarProductXY, {'handL', 'handR'})(FS.scalarProductXZ, {'handL', 'handR'})``` says that the x-axis of handL should be orthogonal (zero scalar product) to the y- and z-axis of handR. So this also describes aligning both x-axes. However, this formulation is much more robust, as it has good error gradients around the optimum. Objectives: Features are meant to define objectives in an optimization problem. An objective is * a feature * an indicator $\rho_k\in\{\texttt{ineq, eq, sos}\}$ that states whether the feature implies an inequality, an equality, or a sum-of-squares objective * and an index tuple $\pi_k \subseteq \{1,..,n\}$ that states which configurations this feature is defined over. Then, given a set $\{\phi_1,..,\phi_K\}$ of $K$ features, and a set $\{x_1,..,x_n\}$ of $n$ configurations, this defines the mathematical program \begin{align} \min_{x_1,..,x_n} \sum_{k : \rho_k=\texttt{sos}} \phi_k(x_{\pi_k})^T \phi_k(x_{\pi_k}) ~\text{s.t.}~ \mathop\forall_{k : \rho_k=\texttt{ineq}} \phi_k(x_{\pi_k}) \le 0 ~,\quad \mathop\forall_{k : \rho_k=\texttt{eq}} \phi_k(x_{\pi_k}) = 0 ~.\end{align} Code example for collision features: * Get list of collisions and proximities for the whole configuration * Get an accumulative, differentiable collision measure * Get proximity/penetration specifically for a pair of shapes * Other geometric collision features for a pair of shapes (witness points, normal, etc) -- all differentiable
###Code
import sys
sys.path += ['../build', '../../../build', '../../lib']
import numpy as np
import libry as ry
C = ry.Config()
C.addFile('../../../rai-robotModels/pr2/pr2.g');
C.addFile('../../../rai-robotModels/objects/kitchen.g');
C.view()
###Output
_____no_output_____
###Markdown
Let's evaluate the accumulative collision scalar and its Jacobian
###Code
coll = C.feature(ry.FS.accumulatedCollisions, [])
C.computeCollisions() #collisions/proxies are not automatically computed on set...State
coll.eval(C)
###Output
_____no_output_____
###Markdown
Let's move into collision and redo this
###Code
C.selectJointsByTag(["base"])
C.setJointState([1.5,1,0])
C.computeCollisions()
coll.eval(C)
###Output
_____no_output_____
###Markdown
We can get more verbose information like this:
###Code
C.getCollisions()
C.getCollisions(0) #only report proxies with distance<0 (penetrations)
###Output
_____no_output_____
###Markdown
The computeCollisions() method calls a collision detection engine (SWIFT++) for the whole configuration, checking all shapes that are collision-activated. The activation/deactivation of collision computations is a nuisance! The 'contact' flag in g-files specifies which shapes are activated by default, and if the value is negative, collisions with parent shapes are not included. (In the KOMO class, you can use activateCollisionPairs and deactivateCollisionPairs to modify these defaults in optimization problems... TODO: also in Config.) When you're interested in the distance or penetration of one specific pair of objects, you don't need to call computeCollisions(); instead, query a feature that calls the GJK (and other) algorithms directly, only for this pair:
###Code
dist = C.feature(ry.FS.distance, ['coll_wrist_r', '_10'])
dist.eval(C)
###Output
_____no_output_____
###Markdown
Note that this returns the NEGATIVE distance (because one typically wants to put an inequality (<=0) on this). The C++ code implements many more features of the collision geometry, including the normal, witness points, etc. Can be added to python easily on request.
###Code
C.view_close()
###Output
_____no_output_____
###Markdown
Features and ObjectivesThis doc is mostly text, explaining the general concept of features, listing the ones defined in rai, and explaining how they define objectives for optimization.At the bottom there are also examples on the collision features. FeaturesWe assume a single configuration $x$, or a whole set of configurations$\{x_1,..,x_T\}$. Each $x_i \in\mathbb{R}$ are the DOFs of thatconfiguration.A feature $\phi$ is a differentiable mapping$$\phi: x \mapsto \mathbb{R}^D$$of a single configuration into some $D$-dimensional space, or a mapping$$\phi: (x_0,x_2,..,x_k) \mapsto \mathbb{R}^D$$of a $(k+1)$-tuple of configurations to a $D$-dimensional space.The rai code implements many features, most of them are accessible viaa feature symbol (FS). They are declared inhttps://github.com/MarcToussaint/rai/blob/master/rai/Kin/featureSymbols.hHere is a table of feature symbols, with therespective dimensionality $D$, the default order $k$, and adescription| FS | frames | $D$ | $k$ | description ||:---:|:---:|:---:|:---:|:---:|| position | {o1} | 3 || 3D position of o1 in world coordinates || positionDiff | {o1,o2} | 3 || difference of 3D positions of o1 and o2 in world coordinates || positionRel | {o1,o2} | 3 || 3D position of o1 in o2 coordinates || quaternion | {o1} | 4 || 4D quaternion of o1 in world coordinates\footnote{There is ways to handle the invariance w.r.t.\ quaternion sign properly.} || quaternionDiff | {o1,o2} | 4 || ... || quaternionRel | {o1,o2} | 4 || ... || pose | {o1} | 7 || 7D pose of o1 in world coordinates || poseDiff | {o1,o2} | 7 || ... || poseRel | {o1,o2} | 7 || ... || vectorX | {o1} | 3 || The x-axis of frame o1 rotated back to world coordinates || vectorXDiff | {o1,o2} | 3 || The difference of the above for two frames o1 and o2 || vectorXRel | {o1,o2} | 3 || The x-axis of frame o1 rotated as to be seend from the frame o2 || vectorY... | | | | same as above || scalarProductXX | {o1,o2} | 1 || The scalar product of the x-axis fo frame o1 with the x-axis of frame o2 || scalarProduct... | {o1,o2} | | | as above || gazeAt | {o1,o2} | 2 | | The 2D projection of the origin of frame o2 onto the xy-plane of frame o1 || angularVel | {o1} | 3 | 1 | The angular velocity of frame o1 across two configurations || accumulatedCollisions | {} | 1 | | The sum of collision penetrations; when negative/zero, nothing is colliding || jointLimits | {} | 1 | | The sum of joint limit penetrations; when negative/zero, all joint limits are ok || distance | {o1,o1} | 1 | | The NEGATIVE distance between convex meshes o1 and o2, positive for penetration || qItself | {} | $n$ | | The configuration joint vector || aboveBox | {o1,o2} | 4 | | when all negative, o1 is above (inside support of) the box o2 || insideBox | {o1,o2} | 6 | | when all negative, o1 is inside the box o2 || standingAbove | | | | ? |A features is typically defined by* The feature symbol (`FS_...` in cpp; `FS....` in python)* The set of frames it refers to* Optionally: A target, which changes the zero-point of the features (optimization typically try to drive features to zero, see below)* Optionally: A scaling, that can also be a matrix to down-project a feature* Optionally: The order $k$, which can make the feature a velocity or acceleration featureTarget and scale redefine a feature to become$$ \phi(x) \gets \texttt{scale} \cdot (\phi(x) - \texttt{target})$$The target needs to be a $D$-dim vector. 
The scale can be a matrix, which projects features; e.g., and 3D position to just $x$-position.The order of a feature is usually $k=0$, meaning that it is defined over a single configuration only. $k=1$ means that it is defined over two configurations (1st oder Markov), and redefines the feature to become the difference or velocity$$ \phi(x_1,x_2) \gets \frac{1}{\tau}(\phi(x_2) - \phi(x_1))$$$k=2$ means that it is defined over three configurations (2nd order Markov), and redefines the feature to become the acceleration$$ \phi(x_1,x_2,x_3) \equiv \frac{1}{\tau^2}(\phi(x_1) + \phi(x_3) - 2 \phi(x_2))$$ Examples```(FS.position, {'hand'})```is the 3D position of the hand in world coordinates```(FS.positionRel, {'handL', 'handR'}, scale=[[0,0,1]], target=[0.1])```is the z-position position of the left hand measured in the frame of the right hand, with target 10centimeters.```(FS.position, {'handL'}, order=1)```is the 3D velocity of the left hand in world coordinates```(FS.scalarProductXX, {'handL', 'handR'}, target=[1])```says that the scalar product of the x-axes (e.g. directions of the index finger) of both hands should equal 1, which means they are aligned.```(FS.scalarProductXY, {'handL', 'handR'})(FS.scalarProductXZ, {'handL', 'handR'})```says that the the x-axis of handL should be orthogonal (zero scalar product) to the y- and z-axis of handR. So this also describes aligning both x-axes. However, this formulation is much more robust, as it has good error gradients around the optimum. ObjectivesFeatures are meant to define objectives in an optimization problem. An objective is* a feature* an indicator $\rho_k\in\{\texttt{ineq, eq, sos}\}$ that states whether the featuresimplies an inequality, an equality, or a sum-of-square objective* and an index tuple $\pi_k \subseteq \{1,..,n\}$ that states whichconfigurations this feature is defined over.Then, given a set$\{\phi_1,..,\phi_K\}$ of $K$ features, and a set $\{x_1,..,x_n\}$ of$n$ configurations, this defines the mathematical program\begin{align} \min_{x_1,..,x_n} \sum_{k : \rho_k=\texttt{sos}} \phi_k(x_{\pi_k})^T \phi_k(x_{\pi_k}) ~\text{s.t.}~ \mathop\forall_{k : \rho_k=\texttt{ineq}} \phi_k(x_{\pi_k}) \le 0 ~,\quad \mathop\forall_{k : \rho_k=\texttt{eq}} \phi_k(x_{\pi_k}) = 0 ~,\quad\end{align} Code example for collision features* Get list of collisions and proximities for the whole configuration* Get a accumulative, differentiable collision measure* Get proximity/penetration specifically for a pair of shapes* Other geometric collision features for a pair of shapes (witness points, normal, etc) -- all differentiable
###Code
import sys
sys.path.append('../../lib')
import numpy as np
import libry as ry
C = ry.Config()
C.addFile('../../../rai-robotModels/pr2/pr2.g');
C.addFile('../../../rai-robotModels/objects/kitchen.g');
C.view()
###Output
_____no_output_____
###Markdown
Let's evaluate the accumulative collision scalar and its Jacobian
###Code
coll = C.feature(ry.FS.accumulatedCollisions, [])
C.computeCollisions() #collisions/proxies are not automatically computed on set...State
coll.eval(C)
###Output
_____no_output_____
###Markdown
Let's move into collision and redo this
###Code
C.selectJointsByTag(["base"])
C.setJointState([1.5,1,0])
C.computeCollisions()
coll.eval(C)
###Output
_____no_output_____
###Markdown
We can get more verbose information like this:
###Code
C.getCollisions()
C.getCollisions(0) #only report proxies with distance<0 (penetrations)
###Output
_____no_output_____
###Markdown
The computeCollisions() method calls a collision detection engine (SWIFT++) for the whole configuration, checking all shapes that are collision-activated. The activation/deactivation of collision computations is a nuissance! the 'contact' flag in g-files specifies which shapes are activated by default, and if the value is negative, that collisions with parent shapes are not included. (In the KOMO class, you can use activateCollisionPairs and deactivateCollisionPairs to modify these defaults in optimization problems... TODO: also in Config)When you're interested in the distance or penetration of one specific pair of objects, you don't need to call computeCollisions() and instead query a feature that calls the GJK (and others) algorithm directly only for this pair:
###Code
dist = C.feature(ry.FS.distance, ['coll_wrist_r', '_10'])
dist.eval(C)
###Output
_____no_output_____
###Markdown
Note that this returns the NEGATIVE distance (because one typically wants to put an inequality (<=0) on this). The C++ code implements many more features of the collision geometry, including the normal, witness points, etc. Can be added to python easily on request.
###Code
C.view_close()
###Output
_____no_output_____
###Markdown
Features and ObjectivesThis doc is mostly text, explaining the general concept of features, listing the ones defined in rai, and explaining how they define objectives for optimization.At the bottom there are also examples on the collision features. FeaturesWe assume a single configuration $x$, or a whole set of configurations$\{x_1,..,x_T\}$. Each $x_i \in\mathbb{R}$ are the DOFs of thatconfiguration.A feature $\phi$ is a differentiable mapping$$\phi: x \mapsto \mathbb{R}^D$$of a single configuration into some $D$-dimensional space, or a mapping$$\phi: (x_0,x_2,..,x_k) \mapsto \mathbb{R}^D$$of a $(k+1)$-tuple of configurations to a $D$-dimensional space.The rai code implements many features, most of them are accessible viaa feature symbol (FS). They are declared inhttps://github.com/MarcToussaint/rai/blob/master/rai/Kin/featureSymbols.hHere is a table of feature symbols, with therespective dimensionality $D$, the default order $k$, and adescription| FS | frames | $D$ | $k$ | description ||:---:|:---:|:---:|:---:|:---:|| position | {o1} | 3 || 3D position of o1 in world coordinates || positionDiff | {o1,o2} | 3 || difference of 3D positions of o1 and o2 in world coordinates || positionRel | {o1,o2} | 3 || 3D position of o1 in o2 coordinates || quaternion | {o1} | 4 || 4D quaternion of o1 in world coordinates\footnote{There is ways to handle the invariance w.r.t.\ quaternion sign properly.} || quaternionDiff | {o1,o2} | 4 || ... || quaternionRel | {o1,o2} | 4 || ... || pose | {o1} | 7 || 7D pose of o1 in world coordinates || poseDiff | {o1,o2} | 7 || ... || poseRel | {o1,o2} | 7 || ... || vectorX | {o1} | 3 || The x-axis of frame o1 rotated back to world coordinates || vectorXDiff | {o1,o2} | 3 || The difference of the above for two frames o1 and o2 || vectorXRel | {o1,o2} | 3 || The x-axis of frame o1 rotated as to be seend from the frame o2 || vectorY... | | | | same as above || scalarProductXX | {o1,o2} | 1 || The scalar product of the x-axis fo frame o1 with the x-axis of frame o2 || scalarProduct... | {o1,o2} | | | as above || gazeAt | {o1,o2} | 2 | | The 2D projection of the origin of frame o2 onto the xy-plane of frame o1 || angularVel | {o1} | 3 | 1 | The angular velocity of frame o1 across two configurations || accumulatedCollisions | {} | 1 | | The sum of collision penetrations; when negative/zero, nothing is colliding || jointLimits | {} | 1 | | The sum of joint limit penetrations; when negative/zero, all joint limits are ok || distance | {o1,o1} | 1 | | The NEGATIVE distance between convex meshes o1 and o2, positive for penetration || qItself | {} | $n$ | | The configuration joint vector || aboveBox | {o1,o2} | 4 | | when all negative, o1 is above (inside support of) the box o2 || insideBox | {o1,o2} | 6 | | when all negative, o1 is inside the box o2 || standingAbove | | | | ? |A features is typically defined by* The feature symbol (`FS_...` in cpp; `FS....` in python)* The set of frames it refers to* Optionally: A target, which changes the zero-point of the features (optimization typically try to drive features to zero, see below)* Optionally: A scaling, that can also be a matrix to down-project a feature* Optionally: The order $k$, which can make the feature a velocity or acceleration featureTarget and scale redefine a feature to become$$ \phi(x) \gets \texttt{scale} \cdot (\phi(x) - \texttt{target})$$The target needs to be a $D$-dim vector. 
The scale can be a matrix, which projects features; e.g., and 3D position to just $x$-position.The order of a feature is usually $k=0$, meaning that it is defined over a single configuration only. $k=1$ means that it is defined over two configurations (1st oder Markov), and redefines the feature to become the difference or velocity$$ \phi(x_1,x_2) \gets \frac{1}{\tau}(\phi(x_2) - \phi(x_1))$$$k=2$ means that it is defined over three configurations (2nd order Markov), and redefines the feature to become the acceleration$$ \phi(x_1,x_2,x_3) \equiv \frac{1}{\tau^2}(\phi(x_1) + \phi(x_3) - 2 \phi(x_2))$$ Examples```(FS.position, {'hand'})```is the 3D position of the hand in world coordinates```(FS.positionRel, {'handL', 'handR'}, scale=[[0,0,1]], target=[0.1])```is the z-position position of the left hand measured in the frame of the right hand, with target 10centimeters.```(FS.position, {'handL'}, order=1)```is the 3D velocity of the left hand in world coordinates```(FS.scalarProductXX, {'handL', 'handR'}, target=[1])```says that the scalar product of the x-axes (e.g. directions of the index finger) of both hands should equal 1, which means they are aligned.```(FS.scalarProductXY, {'handL', 'handR'})(FS.scalarProductXZ, {'handL', 'handR'})```says that the the x-axis of handL should be orthogonal (zero scalar product) to the y- and z-axis of handR. So this also describes aligning both x-axes. However, this formulation is much more robust, as it has good error gradients around the optimum. ObjectivesFeatures are meant to define objectives in an optimization problem. An objective is* a feature* an indicator $\rho_k\in\{\texttt{ineq, eq, sos}\}$ that states whether the featuresimplies an inequality, an equality, or a sum-of-square objective* and an index tuple $\pi_k \subseteq \{1,..,n\}$ that states whichconfigurations this feature is defined over.Then, given a set$\{\phi_1,..,\phi_K\}$ of $K$ features, and a set $\{x_1,..,x_n\}$ of$n$ configurations, this defines the mathematical program\begin{align} \min_{x_1,..,x_n} \sum_{k : \rho_k=\texttt{sos}} \phi_k(x_{\pi_k})^T \phi_k(x_{\pi_k}) ~\text{s.t.}~ \mathop\forall_{k : \rho_k=\texttt{ineq}} \phi_k(x_{\pi_k}) \le 0 ~,\quad \mathop\forall_{k : \rho_k=\texttt{eq}} \phi_k(x_{\pi_k}) = 0 ~,\quad\end{align} Code example for collision features* Get list of collisions and proximities for the whole configuration* Get a accumulative, differentiable collision measure* Get proximity/penetration specifically for a pair of shapes* Other geometric collision features for a pair of shapes (witness points, normal, etc) -- all differentiable
###Code
import sys
sys.path.append('../../lib')
import numpy as np
import libry as ry
C = ry.Config()
C.addFile('../../../rai-robotModels/pr2/pr2.g');
C.addFile('../../../rai-robotModels/objects/kitchen.g');
C.view()
###Output
_____no_output_____
###Markdown
Let's evaluate the accumulative collision scalar and its Jacobian
###Code
coll = C.feature(ry.FS.accumulatedCollisions, [])
C.computeCollisions() #collisions/proxies are not automatically computed on set...State
coll.eval(C)
###Output
_____no_output_____
###Markdown
Let's move into collision and redo this
###Code
C.selectJointsByTag(["base"])
C.setJointState([1.5,1,0])
C.computeCollisions()
coll.eval(C)
###Output
_____no_output_____
###Markdown
We can get more verbose information like this:
###Code
C.getCollisions()
C.getCollisions(0) #only report proxies with distance<0 (penetrations)
###Output
_____no_output_____
###Markdown
The computeCollisions() method calls a collision detection engine (SWIFT++) for the whole configuration, checking all shapes that are collision-activated. The activation/deactivation of collision computations is a nuissance! the 'contact' flag in g-files specifies which shapes are activated by default, and if the value is negative, that collisions with parent shapes are not included. (In the KOMO class, you can use activateCollisionPairs and deactivateCollisionPairs to modify these defaults in optimization problems... TODO: also in Config)When you're interested in the distance or penetration of one specific pair of objects, you don't need to call computeCollisions() and instead query a feature that calls the GJK (and others) algorithm directly only for this pair:
###Code
dist = C.feature(ry.FS.distance, ['coll_wrist_r', '_10'])
dist.eval(C)
###Output
_____no_output_____
###Markdown
Note that this returns the NEGATIVE distance (because one typically wants to put an inequality (<=0) on this). The C++ code implements many more features of the collision geometry, including the normal, witness points, etc. Can be added to python easily on request.
###Code
C.view_close()
###Output
_____no_output_____
###Markdown
Features and ObjectivesThis doc is mostly text, explaining the general concept of features, listing the ones defined in rai, and explaining how they define objectives for optimization.At the bottom there are also examples on the collision features. FeaturesWe assume a single configuration $x$, or a whole set of configurations$\{x_1,..,x_T\}$. Each $x_i \in\mathbb{R}$ are the DOFs of thatconfiguration.A feature $\phi$ is a differentiable mapping$$\phi: x \mapsto \mathbb{R}^D$$of a single configuration into some $D$-dimensional space, or a mapping$$\phi: (x_0,x_2,..,x_k) \mapsto \mathbb{R}^D$$of a $(k+1)$-tuple of configurations to a $D$-dimensional space.The rai code implements many features, most of them are accessible viaa feature symbol (FS). They are declared inhttps://github.com/MarcToussaint/rai/blob/master/rai/Kin/featureSymbols.hHere is a table of feature symbols, with therespective dimensionality $D$, the default order $k$, and adescription| FS | frames | $D$ | $k$ | description ||:---:|:---:|:---:|:---:|:---:|| position | {o1} | 3 || 3D position of o1 in world coordinates || positionDiff | {o1,o2} | 3 || difference of 3D positions of o1 and o2 in world coordinates || positionRel | {o1,o2} | 3 || 3D position of o1 in o2 coordinates || quaternion | {o1} | 4 || 4D quaternion of o1 in world coordinates\footnote{There is ways to handle the invariance w.r.t.\ quaternion sign properly.} || quaternionDiff | {o1,o2} | 4 || ... || quaternionRel | {o1,o2} | 4 || ... || pose | {o1} | 7 || 7D pose of o1 in world coordinates || poseDiff | {o1,o2} | 7 || ... || poseRel | {o1,o2} | 7 || ... || vectorX | {o1} | 3 || The x-axis of frame o1 rotated back to world coordinates || vectorXDiff | {o1,o2} | 3 || The difference of the above for two frames o1 and o2 || vectorXRel | {o1,o2} | 3 || The x-axis of frame o1 rotated as to be seend from the frame o2 || vectorY... | | | | same as above || scalarProductXX | {o1,o2} | 1 || The scalar product of the x-axis fo frame o1 with the x-axis of frame o2 || scalarProduct... | {o1,o2} | | | as above || gazeAt | {o1,o2} | 2 | | The 2D projection of the origin of frame o2 onto the xy-plane of frame o1 || angularVel | {o1} | 3 | 1 | The angular velocity of frame o1 across two configurations || accumulatedCollisions | {} | 1 | | The sum of collision penetrations; when negative/zero, nothing is colliding || jointLimits | {} | 1 | | The sum of joint limit penetrations; when negative/zero, all joint limits are ok || distance | {o1,o1} | 1 | | The NEGATIVE distance between convex meshes o1 and o2, positive for penetration || qItself | {} | $n$ | | The configuration joint vector || aboveBox | {o1,o2} | 4 | | when all negative, o1 is above (inside support of) the box o2 || insideBox | {o1,o2} | 6 | | when all negative, o1 is inside the box o2 || standingAbove | | | | ? |A features is typically defined by* The feature symbol (`FS_...` in cpp; `FS....` in python)* The set of frames it refers to* Optionally: A target, which changes the zero-point of the features (optimization typically try to drive features to zero, see below)* Optionally: A scaling, that can also be a matrix to down-project a feature* Optionally: The order $k$, which can make the feature a velocity or acceleration featureTarget and scale redefine a feature to become$$ \phi(x) \gets \texttt{scale} \cdot (\phi(x) - \texttt{target})$$The target needs to be a $D$-dim vector. 
The scale can be a matrix, which projects features; e.g., and 3D position to just $x$-position.The order of a feature is usually $k=0$, meaning that it is defined over a single configuration only. $k=1$ means that it is defined over two configurations (1st oder Markov), and redefines the feature to become the difference or velocity$$ \phi(x_1,x_2) \gets \frac{1}{\tau}(\phi(x_2) - \phi(x_1))$$$k=2$ means that it is defined over three configurations (2nd order Markov), and redefines the feature to become the acceleration$$ \phi(x_1,x_2,x_3) \equiv \frac{1}{\tau^2}(\phi(x_1) + \phi(x_3) - 2 \phi(x_2))$$ Examples```(FS.position, {'hand'})```is the 3D position of the hand in world coordinates```(FS.positionRel, {'handL', 'handR'}, scale=[[0,0,1]], target=[0.1])```is the z-position position of the left hand measured in the frame of the right hand, with target 10centimeters.```(FS.position, {'handL'}, order=1)```is the 3D velocity of the left hand in world coordinates```(FS.scalarProductXX, {'handL', 'handR'}, target=[1])```says that the scalar product of the x-axes (e.g. directions of the index finger) of both hands should equal 1, which means they are aligned.```(FS.scalarProductXY, {'handL', 'handR'})(FS.scalarProductXZ, {'handL', 'handR'})```says that the the x-axis of handL should be orthogonal (zero scalar product) to the y- and z-axis of handR. So this also describes aligning both x-axes. However, this formulation is much more robust, as it has good error gradients around the optimum. ObjectivesFeatures are meant to define objectives in an optimization problem. An objective is* a feature* an indicator $\rho_k\in\{\texttt{ineq, eq, sos}\}$ that states whether the featuresimplies an inequality, an equality, or a sum-of-square objective* and an index tuple $\pi_k \subseteq \{1,..,n\}$ that states whichconfigurations this feature is defined over.Then, given a set$\{\phi_1,..,\phi_K\}$ of $K$ features, and a set $\{x_1,..,x_n\}$ of$n$ configurations, this defines the mathematical program\begin{align} \min_{x_1,..,x_n} \sum_{k : \rho_k=\texttt{sos}} \phi_k(x_{\pi_k})^T \phi_k(x_{\pi_k}) ~\text{s.t.}~ \mathop\forall_{k : \rho_k=\texttt{ineq}} \phi_k(x_{\pi_k}) \le 0 ~,\quad \mathop\forall_{k : \rho_k=\texttt{eq}} \phi_k(x_{\pi_k}) = 0 ~,\quad\end{align} Code example for collision features* Get list of collisions and proximities for the whole configuration* Get a accumulative, differentiable collision measure* Get proximity/penetration specifically for a pair of shapes* Other geometric collision features for a pair of shapes (witness points, normal, etc) -- all differentiable
###Code
import sys
sys.path.append('../../../build')
import numpy as np
import libry as ry
C = ry.Config()
C.addFile('../../../rai-robotModels/pr2/pr2.g');
C.addFile('../../../rai-robotModels/objects/kitchen.g');
C.view()
###Output
**ry-c++-log** /home/jay/git/optimization-course/rai/rai/ry/ry.cpp:init_LogToPythonConsole:34(0) initializing ry log callback
###Markdown
Let's evaluate the accumulative collision scalar and its Jacobian
###Code
coll = C.feature(ry.FS.accumulatedCollisions, [])
C.computeCollisions() #collisions/proxies are not automatically computed on set...State
coll.eval(C)
###Output
_____no_output_____
###Markdown
Let's move into collision and redo this
###Code
C.selectJointsByTag(["base"])
C.setJointState([1.5,1,0])
C.computeCollisions()
coll.eval(C)
###Output
_____no_output_____
###Markdown
We can get more verbose information like this:
###Code
C.getCollisions()
C.getCollisions(0) #only report proxies with distance<0 (penetrations)
###Output
_____no_output_____
###Markdown
The computeCollisions() method calls a collision detection engine (SWIFT++) for the whole configuration, checking all shapes that are collision-activated. The activation/deactivation of collision computations is a nuisance! The 'contact' flag in g-files specifies which shapes are activated by default, and if the value is negative, collisions with parent shapes are not included. (In the KOMO class, you can use activateCollisionPairs and deactivateCollisionPairs to modify these defaults in optimization problems... TODO: also in Config.) When you're interested in the distance or penetration of one specific pair of objects, you don't need to call computeCollisions(); instead, query a feature that calls the GJK (and other) algorithms directly, only for this pair:
###Code
dist = C.feature(ry.FS.distance, ['coll_wrist_r', '_10'])
dist.eval(C)
###Output
_____no_output_____
###Markdown
Note that this returns the NEGATIVE distance (because one typically wants to put an inequality (<=0) on this). The C++ code implements many more features of the collision geometry, including the normal, witness points, etc. These can be added to Python easily on request.
###Code
C.view_close()
###Output
_____no_output_____ |
client/workflows/examples-without-verta/notebooks/sklearn-census.ipynb | ###Markdown
Logistic Regression with Hyperparameter Optimization (scikit-learn) Imports
###Code
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
import itertools
import time
from multiprocessing import Pool
import numpy as np
import pandas as pd
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
###Output
_____no_output_____
###Markdown
--- Prepare Data
###Code
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.download(test_data_url)
df_train = pd.read_csv("census-train.csv")
X_train = df_train.iloc[:,:-1].values
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-4, 1e-1, 1, 10, 1e3],
'solver': ['liblinear', 'lbfgs'],
'max_iter': [15, 28],
}
# total models 20
# create hyperparam combinations
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Run Validation
###Code
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
def run_experiment(hyperparams):
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
print(hyperparams, end=' ')
print("Validation accuracy: {:.4f}".format(val_acc))
with Pool() as pool:
pool.map(run_experiment, hyperparam_sets)
###Output
_____no_output_____
###Markdown
Pick the best hyperparameters and train on the full data
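The cell below leaves `best_hyperparams` empty, to be filled in from the printed validation results. As a hedged sketch (not part of the original notebook), one way to pick it programmatically is to mirror `run_experiment()` but keep the score, reusing `hyperparam_sets`, `X_train`, `y_train`, `X_val_test`, and `y_val_test` from the cells above:
```python
# Hedged sketch: re-fit each candidate, keep its validation score, take the argmax.
from sklearn import linear_model

def score_candidate(hyperparams):
    candidate = linear_model.LogisticRegression(**hyperparams)
    candidate.fit(X_train, y_train)
    return candidate.score(X_val_test, y_val_test)

scored = [(score_candidate(hp), hp) for hp in hyperparam_sets]
best_val_acc, best_hyperparams = max(scored, key=lambda pair: pair[0])
print("Best validation accuracy: {:.4f}".format(best_val_acc), best_hyperparams)
```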
###Code
best_hyperparams = {}
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
Logistic Regression with Hyperparameter Optimization (scikit-learn) Imports
###Code
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
import itertools
import time
import numpy as np
import pandas as pd
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
###Output
_____no_output_____
###Markdown
--- Prepare Data
###Code
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.download(test_data_url)
df_train = pd.read_csv("census-train.csv")
X_train = df_train.iloc[:,:-1].values
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-4, 1e-1, 1, 10, 1e3],
'solver': ['liblinear', 'lbfgs'],
'max_iter': [15, 28],
}
# total models 20
# create hyperparam combinations
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Run Validation
###Code
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
def run_experiment(hyperparams):
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
print(hyperparams, end=' ')
print("Validation accuracy: {:.4f}".format(val_acc))
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
Pick the best hyperparameters and train on the full data
###Code
best_hyperparams = {}
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____ |
notebooks/building_production_ml_systems/solutions/2_hyperparameter_tuning_vertex.ipynb | ###Markdown
Hyperparameter tuning**Learning Objectives**1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job3. Submit a hyperparameter tuning job to Vertex AI IntroductionLet's see if we can improve upon that by tuning our hyperparameters.Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training. These include learning rate and batch size, but also model design parameters such as the type of activation function and the number of hidden units.Here are the four most common ways to find the ideal hyperparameters:1. Manual2. Grid Search3. Random Search4. Bayesian Optimization**1. Manual**Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point; then they observe the result and use that information to try a new set of hyperparameters that might beat the existing performance. Pros- Educational, builds up your intuition as a data scientist- Inexpensive because only one trial is conducted at a timeCons- Requires a lot of time and patience**2. Grid Search**On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter, then try every possible combination. Pros- Can run hundreds of trials in parallel using the cloud- Guaranteed to find the best solution within the search spaceCons- Expensive**3. Random Search**Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range. Pros- Can run hundreds of trials in parallel using the cloud- Requires fewer trials than Grid Search to find a good solutionCons- Expensive (but less so than Grid Search)**4. Bayesian Optimization**Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization). Pros- Picks values intelligently based on results from past trials- Less expensive because it requires fewer trials to get a good resultCons- Requires sequential trials for best results, so it takes longer**Vertex AI HyperTune**Vertex AI HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overviewsearch_algorithms) Grid Search and Random Search. When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
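As a small illustration (not used anywhere in this lab; the hyperparameter names and ranges below are made up), here is how Grid Search and Random Search generate candidate trials:
```python
# Hedged sketch: candidate generation for Grid Search vs. Random Search.
import itertools
import random

search_space = {"lr": [1e-4, 1e-3, 1e-2, 1e-1], "batch_size": [16, 32, 64]}

# Grid Search: every combination of the discrete values (4 x 3 = 12 trials).
grid_trials = [dict(zip(search_space, values))
               for values in itertools.product(*search_space.values())]

# Random Search: sample each hyperparameter independently from its range.
random.seed(0)
random_trials = [{"lr": 10 ** random.uniform(-4, -1),
                  "batch_size": random.choice([16, 32, 64])}
                 for _ in range(12)]

print(len(grid_trials), "grid trials;", len(random_trials), "random trials")
```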
###Code
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env TFVERSION=2.5
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Make code compatible with Vertex AI Training ServiceIn order to make our code compatible with Vertex AI Training Service we need to make the following changes:1. Upload data to Google Cloud Storage 2. Move code into a trainer Python package3. Submit training job with `gcloud` to train on Vertex AI Upload data to Google Cloud Storage (GCS)Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS. To do this, run the notebook [0_export_data_from_bq_to_gcs.ipynb](./0_export_data_from_bq_to_gcs.ipynb), which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
###Code
!gsutil ls gs://$BUCKET/taxifare/data
###Output
_____no_output_____
###Markdown
Move code into python packageIn the [previous lab](./1_training_at_scale.ipynb), we moved our code into a python package for training on Vertex AI. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory: - `__init__.py` - `model.py` - `task.py`
###Code
!ls -la taxifare/trainer
###Output
_____no_output_____
###Markdown
To use hyperparameter tuning in your training job, you must perform the following steps: 1. Specify the hyperparameter tuning configuration for your training job by including `parameters` in the `StudySpec` of your Hyperparameter Tuning Job. 2. Include the following code in your training application: - Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial (we already exposed these parameters as command-line arguments in the earlier lab). - Report your hyperparameter metrics during training. Note that while you could just report the metrics at the end of training, it is better to set up a callback, to take advantage of Early Stopping. - Read in the environment variable `$AIP_MODEL_DIR`, set by Vertex AI and containing the trial number, as our `output-dir`. As the training code will be submitted several times in a parallelized fashion, it is safer to use this variable than to try to assemble a unique id within the trainer code. Modify model.py
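Before looking at the full trainer, here is a minimal, hedged sketch of these three ingredients in isolation (argument names are illustrative; the real implementation is in the `model.py` and `task.py` cells below):
```python
# Hedged sketch of the per-trial pattern (placeholder values).
import argparse
import os

import hypertune

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)  # 1. hyperparameter as a CLI arg
parser.add_argument("--output_dir", default=os.getenv("AIP_MODEL_DIR"))  # 3. per-trial output dir
args, _ = parser.parse_known_args()

# 2. Report the metric Vertex AI optimizes; the tag must match the StudySpec metricId.
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="val_rmse",
    metric_value=0.0,  # placeholder; the real trainer reports logs["val_rmse"] each epoch
    global_step=0,
)
```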
###Code
%%writefile ./taxifare/trainer/model.py
import datetime
import logging
import os
import shutil
import hypertune
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import activations, callbacks, layers, models
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
def features_and_labels(row_data):
for unwanted_col in ["key"]:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if not isinstance(s, str):
s = s.numpy().decode("utf-8")
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff * londiff + latdiff * latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed["pickup_datetime"]
feature_columns = {
colname: fc.numeric_column(colname) for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ["pickup_longitude", "dropoff_longitude"]:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78) / 8.0, name=f"scale_{lon_col}"
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ["pickup_latitude", "dropoff_latitude"]:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37) / 8.0, name=f"scale_{lat_col}"
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed["euclidean"] = layers.Lambda(euclidean, name="euclidean")(
[
inputs["pickup_longitude"],
inputs["pickup_latitude"],
inputs["dropoff_longitude"],
inputs["dropoff_latitude"],
]
)
feature_columns["euclidean"] = fc.numeric_column("euclidean")
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed["hourofday"] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32
),
name="hourofday",
)(inputs["pickup_datetime"])
feature_columns["hourofday"] = fc.indicator_column(
fc.categorical_column_with_identity("hourofday", num_buckets=24)
)
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns["pickup_latitude"], latbuckets
)
b_dlat = fc.bucketized_column(
feature_columns["dropoff_latitude"], latbuckets
)
b_plon = fc.bucketized_column(
feature_columns["pickup_longitude"], lonbuckets
)
b_dlon = fc.bucketized_column(
feature_columns["dropoff_longitude"], lonbuckets
)
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns["pickup_and_dropoff"] = fc.embedding_column(pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ["pickup_datetime"]
NUMERIC_COLS = set(CSV_COLUMNS) - {LABEL_COLUMN, "key"} - set(STRING_COLS)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in NUMERIC_COLS
}
inputs.update(
{
colname: layers.Input(name=colname, shape=(), dtype="string")
for colname in STRING_COLS
}
)
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets
)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation="relu", name=f"h{layer}")(x)
output = layers.Dense(1, name="fare")(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss="mse", metrics=[rmse, "mse"])
return model
# TODO 1
# Instantiate the HyperTune reporting object
hpt = hypertune.HyperTune()
# Reporting callback
# TODO 1
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag="val_rmse",
metric_value=logs["val_rmse"],
global_step=epoch,
)
def train_and_evaluate(hparams):
batch_size = hparams["batch_size"]
nbuckets = hparams["nbuckets"]
lr = hparams["lr"]
nnsize = [int(s) for s in hparams["nnsize"].split()]
eval_data_path = hparams["eval_data_path"]
num_evals = hparams["num_evals"]
num_examples_to_train_on = hparams["num_examples_to_train_on"]
output_dir = hparams["output_dir"]
train_data_path = hparams["train_data_path"]
model_export_path = os.path.join(output_dir, "savedmodel")
checkpoint_path = os.path.join(output_dir, "checkpoints")
tensorboard_path = os.path.join(output_dir, "tensorboard")
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path, save_weights_only=True, verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb, HPTCallback()],
)
# Exporting the model with default serving function.
tf.saved_model.save(model, model_export_path)
return history
###Output
_____no_output_____
###Markdown
Modify task.py
###Code
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32,
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True,
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
default="32 8",
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10,
)
parser.add_argument(
"--lr", help="learning rate for optimizer", type=float, default=0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5,
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100,
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
default=os.getenv("AIP_MODEL_DIR"),
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True,
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
%%writefile taxifare/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name="taxifare_trainer",
version="0.1",
packages=find_packages(),
include_package_data=True,
description="Taxifare model training application.",
)
%%bash
cd taxifare
python ./setup.py sdist --formats=gztar
cd ..
%%bash
gsutil cp taxifare/dist/taxifare_trainer-0.1.tar.gz gs://${BUCKET}/taxifare/
###Output
_____no_output_____
###Markdown
Create HyperparameterTuningJobCreate a StudySpec object to hold the hyperparameter tuning configuration for your training job, and add the StudySpec to your hyperparameter tuning job. In your StudySpec `metric`, set the `metric_id` to a value representing your chosen metric.
###Code
%%bash
# Output directory and job name
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
BASE_OUTPUT_DIR=gs://${BUCKET}/taxifare_$TIMESTAMP
JOB_NAME=taxifare_$TIMESTAMP
echo ${BASE_OUTPUT_DIR} ${REGION} ${JOB_NAME}
# Vertex AI machines to use for training
PYTHON_PACKAGE_URI="gs://${BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
echo > ./config.yaml "displayName: $JOB_NAME
studySpec:
metrics:
- metricId: val_rmse
goal: MINIMIZE
parameters:
- parameterId: lr
doubleValueSpec:
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterId: nbuckets
integerValueSpec:
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterId: batch_size
discreteValueSpec:
values:
- 15
- 30
- 50
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
baseOutputDirectory:
outputUriPrefix: $BASE_OUTPUT_DIR
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
pythonPackageSpec:
args:
- --train_data_path=$TRAIN_DATA_PATH
- --eval_data_path=$EVAL_DATA_PATH
- --batch_size=$BATCH_SIZE
- --num_examples_to_train_on=$NUM_EXAMPLES_TO_TRAIN_ON
- --num_evals=$NUM_EVALS
- --nbuckets=$NBUCKETS
- --lr=$LR
- --nnsize=$NNSIZE
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris:
- $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
replicaCount: $REPLICA_COUNT"
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
JOB_NAME=taxifare_$TIMESTAMP
echo $REGION
echo $JOB_NAME
gcloud beta ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=config.yaml \
--max-trial-count=10 \
--parallel-trial-count=2
###Output
_____no_output_____
###Markdown
You could have also used the Vertex AI Python SDK to achieve the same, as below:
###Code
from datetime import datetime
from google.cloud import aiplatform
# Output directory and jobID
timestamp_str=datetime.strftime(datetime.now(), '%Y%m%d_%H%M%S')
BASE_OUTPUT_DIR=f"gs://{BUCKET}/taxifare_{timestamp_str}"
JOB_NAME=f"taxifare_{timestamp_str}"
print(BASE_OUTPUT_DIR, REGION, JOB_NAME)
# Vertex AI machines to use for training
PYTHON_PACKAGE_URIS=f"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths.
GCS_PROJECT_PATH=f"gs://{BUCKET}/taxifare"
DATA_PATH=f"{GCS_PROJECT_PATH}/data"
TRAIN_DATA_PATH=f"{DATA_PATH}/taxi-train*"
EVAL_DATA_PATH=f"{DATA_PATH}/taxi-valid*"
# custom container
IMAGE_NAME="taxifare_training_container"
IMAGE_URI=f"gcr.io/{PROJECT}/{IMAGE_NAME}"
def create_hyperparameter_tuning_job_python_package_sample(
project: str,
display_name: str,
executor_image_uri: str,
package_uri: str,
python_module: str,
location: str = REGION,
api_endpoint: str = f"{REGION}-aiplatform.googleapis.com",
):
# The Vertex AI services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.JobServiceClient(client_options=client_options)
# study_spec
metric = {
"metric_id": "val_rmse",
"goal": aiplatform.gapic.StudySpec.MetricSpec.GoalType.MINIMIZE,
}
parameter_lr = {
"parameter_id": "lr",
"double_value_spec": {"min_value": 0.0001, "max_value": 0.1},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LOG_SCALE,
}
parameter_nbuckets = {
"parameter_id": "nbuckets",
"integer_value_spec": {"min_value": 10, "max_value": 25},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
parameter_batchsize = {
"parameter_id": "batch_size",
"discrete_value_spec": {"values": [15, 30, 50]},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
# trial_job_spec
worker_pool_spec = {
"machine_spec": {
"machine_type": "n1-standard-4",
},
"replica_count": 1,
"python_package_spec": {
"executor_image_uri": executor_image_uri,
"package_uris": [package_uri],
"python_module": python_module,
"args": [
f"--eval_data_path={EVAL_DATA_PATH}",
f"--train_data_path={TRAIN_DATA_PATH}",
f"--batch_size={BATCH_SIZE}",
f"--num_examples_to_train_on={NUM_EXAMPLES_TO_TRAIN_ON}",
f"--num_evals={NUM_EVALS}",
f"--nbuckets={NBUCKETS}",
f"--lr={LR}",
f"--nnsize={NNSIZE}"
],
},
}
# hyperparameter_tuning_job
hyperparameter_tuning_job = {
"display_name": display_name,
"max_trial_count": 10,
"parallel_trial_count": 2,
"study_spec": {
"metrics": [metric],
"parameters": [
parameter_lr,
parameter_nbuckets,
parameter_batchsize,
],
"algorithm": aiplatform.gapic.StudySpec.Algorithm.ALGORITHM_UNSPECIFIED, # results in Bayesian optimization
# "median_automated_stopping_spec": {} # early stopping: only available in v1beta1 as of writing
},
"trial_job_spec": {
"worker_pool_specs": [worker_pool_spec],
"base_output_directory": {
'output_uri_prefix': BASE_OUTPUT_DIR,
},
},
}
parent = f"projects/{project}/locations/{location}"
response = client.create_hyperparameter_tuning_job(parent=parent, hyperparameter_tuning_job=hyperparameter_tuning_job)
print("response:", response)
create_hyperparameter_tuning_job_python_package_sample(
project=PROJECT,
display_name=JOB_NAME,
executor_image_uri=PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,
package_uri=PYTHON_PACKAGE_URIS,
python_module=PYTHON_MODULE)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning**Learning Objectives**1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job3. Submit a hyperparameter tuning job to Vertex AI IntroductionLet's see if we can improve upon that by tuning our hyperparameters.Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training. These include learning rate and batch size, but also model design parameters such as the type of activation function and the number of hidden units.Here are the four most common ways to find the ideal hyperparameters:1. Manual2. Grid Search3. Random Search4. Bayesian Optimization**1. Manual**Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point; then they observe the result and use that information to try a new set of hyperparameters that might beat the existing performance. Pros- Educational, builds up your intuition as a data scientist- Inexpensive because only one trial is conducted at a timeCons- Requires a lot of time and patience**2. Grid Search**On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter, then try every possible combination. Pros- Can run hundreds of trials in parallel using the cloud- Guaranteed to find the best solution within the search spaceCons- Expensive**3. Random Search**Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range. Pros- Can run hundreds of trials in parallel using the cloud- Requires fewer trials than Grid Search to find a good solutionCons- Expensive (but less so than Grid Search)**4. Bayesian Optimization**Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization). Pros- Picks values intelligently based on results from past trials- Less expensive because it requires fewer trials to get a good resultCons- Requires sequential trials for best results, so it takes longer**Vertex AI HyperTune**Vertex AI HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overviewsearch_algorithms) Grid Search and Random Search. When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
###Code
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env TFVERSION=2.5
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
###Output
_____no_output_____
###Markdown
Make code compatible with Vertex AI Training ServiceIn order to make our code compatible with Vertex AI Training Service we need to make the following changes:1. Upload data to Google Cloud Storage 2. Move code into a trainer Python package3. Submit training job with `gcloud` to train on Vertex AI Upload data to Google Cloud Storage (GCS)Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS. To do this, run the notebook [0_export_data_from_bq_to_gcs.ipynb](./0_export_data_from_bq_to_gcs.ipynb), which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
###Code
!gsutil ls gs://$BUCKET/taxifare/data
###Output
_____no_output_____
###Markdown
Move code into python packageIn the [previous lab](./1_training_at_scale.ipynb), we moved our code into a python package for training on Vertex AI. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory: - `__init__.py` - `model.py` - `task.py`
###Code
!ls -la taxifare/trainer
###Output
_____no_output_____
###Markdown
To use hyperparameter tuning in your training job, you must perform the following steps: 1. Specify the hyperparameter tuning configuration for your training job by including `parameters` in the `StudySpec` of your Hyperparameter Tuning Job. 2. Include the following code in your training application: - Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial (we already exposed these parameters as command-line arguments in the earlier lab). - Report your hyperparameter metrics during training. Note that while you could just report the metrics at the end of training, it is better to set up a callback, to take advantage of Early Stopping. - Read in the environment variable `$AIP_MODEL_DIR`, set by Vertex AI and containing the trial number, as our `output-dir`. As the training code will be submitted several times in a parallelized fashion, it is safer to use this variable than to try to assemble a unique id within the trainer code. Modify model.py
###Code
%%writefile ./taxifare/trainer/model.py
"""Data prep, train and evaluate DNN model."""
import datetime
import logging
import os
import hypertune
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import activations, callbacks, layers, models
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
# inputs are all float except for pickup_datetime which is a string
STRING_COLS = ["pickup_datetime"]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
def features_and_labels(row_data):
for unwanted_col in ["key"]:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if not isinstance(s, str):
s = s.numpy().decode("utf-8")
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff * londiff + latdiff * latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, numeric_cols, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed["pickup_datetime"]
feature_columns = {
colname: fc.numeric_column(colname) for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ["pickup_longitude", "dropoff_longitude"]:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78) / 8.0, name=f"scale_{lon_col}"
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ["pickup_latitude", "dropoff_latitude"]:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37) / 8.0, name=f"scale_{lat_col}"
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed["euclidean"] = layers.Lambda(euclidean, name="euclidean")(
[
inputs["pickup_longitude"],
inputs["pickup_latitude"],
inputs["dropoff_longitude"],
inputs["dropoff_latitude"],
]
)
feature_columns["euclidean"] = fc.numeric_column("euclidean")
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed["hourofday"] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32
),
name="hourofday",
)(inputs["pickup_datetime"])
feature_columns["hourofday"] = fc.indicator_column(
fc.categorical_column_with_identity("hourofday", num_buckets=24)
)
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns["pickup_latitude"], latbuckets
)
b_dlat = fc.bucketized_column(
feature_columns["dropoff_latitude"], latbuckets
)
b_plon = fc.bucketized_column(
feature_columns["pickup_longitude"], lonbuckets
)
b_dlon = fc.bucketized_column(
feature_columns["dropoff_longitude"], lonbuckets
)
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns["pickup_and_dropoff"] = fc.embedding_column(pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr, string_cols):
numeric_cols = set(CSV_COLUMNS) - {LABEL_COLUMN, "key"} - set(string_cols)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in numeric_cols
}
inputs.update(
{
colname: layers.Input(name=colname, shape=(), dtype="string")
for colname in string_cols
}
)
# transforms
transformed, feature_columns = transform(inputs, numeric_cols, nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation="relu", name=f"h{layer}")(x)
output = layers.Dense(1, name="fare")(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss="mse", metrics=[rmse, "mse"])
return model
# TODO 1
# Instantiate the HyperTune reporting object
hpt = hypertune.HyperTune()
# Reporting callback
# TODO 1
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag="val_rmse",
metric_value=logs["val_rmse"],
global_step=epoch,
)
def train_and_evaluate(hparams):
batch_size = hparams["batch_size"]
nbuckets = hparams["nbuckets"]
lr = hparams["lr"]
nnsize = [int(s) for s in hparams["nnsize"].split()]
eval_data_path = hparams["eval_data_path"]
num_evals = hparams["num_evals"]
num_examples_to_train_on = hparams["num_examples_to_train_on"]
output_dir = hparams["output_dir"]
train_data_path = hparams["train_data_path"]
model_export_path = os.path.join(output_dir, "savedmodel")
checkpoint_path = os.path.join(output_dir, "checkpoints")
tensorboard_path = os.path.join(output_dir, "tensorboard")
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr, STRING_COLS)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path, save_weights_only=True, verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb, HPTCallback()],
)
# Exporting the model with default serving function.
model.save(model_export_path)
return history
###Output
_____no_output_____
###Markdown
Modify task.py
###Code
%%writefile taxifare/trainer/task.py
"""Argument definitions for model training code in `trainer.model`."""
import argparse
import json
import os
from trainer import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32,
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True,
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
default="32 8",
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10,
)
parser.add_argument(
"--lr", help="learning rate for optimizer", type=float, default=0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5,
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100,
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
default=os.getenv("AIP_MODEL_DIR"),
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True,
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
%%writefile taxifare/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name="taxifare_trainer",
version="0.1",
packages=find_packages(),
include_package_data=True,
description="Taxifare model training application.",
)
%%bash
cd taxifare
python ./setup.py sdist --formats=gztar
cd ..
%%bash
gsutil cp taxifare/dist/taxifare_trainer-0.1.tar.gz gs://${BUCKET}/taxifare/
###Output
_____no_output_____
###Markdown
Create HyperparameterTuningJobCreate a StudySpec object to hold the hyperparameter tuning configuration for your training job, and add the StudySpec to your hyperparameter tuning job. In your StudySpec `metric`, set the `metric_id` to a value representing your chosen metric.
###Code
%%bash
# Output directory and job name
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
BASE_OUTPUT_DIR=gs://${BUCKET}/taxifare_$TIMESTAMP
JOB_NAME=taxifare_$TIMESTAMP
echo ${BASE_OUTPUT_DIR} ${REGION} ${JOB_NAME}
# Vertex AI machines to use for training
PYTHON_PACKAGE_URI="gs://${BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
echo > ./config.yaml "displayName: $JOB_NAME
studySpec:
metrics:
- metricId: val_rmse
goal: MINIMIZE
parameters:
- parameterId: lr
doubleValueSpec:
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterId: nbuckets
integerValueSpec:
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterId: batch_size
discreteValueSpec:
values:
- 15
- 30
- 50
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
baseOutputDirectory:
outputUriPrefix: $BASE_OUTPUT_DIR
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
pythonPackageSpec:
args:
- --train_data_path=$TRAIN_DATA_PATH
- --eval_data_path=$EVAL_DATA_PATH
- --batch_size=$BATCH_SIZE
- --num_examples_to_train_on=$NUM_EXAMPLES_TO_TRAIN_ON
- --num_evals=$NUM_EVALS
- --nbuckets=$NBUCKETS
- --lr=$LR
- --nnsize=$NNSIZE
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris:
- $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
replicaCount: $REPLICA_COUNT"
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
JOB_NAME=taxifare_$TIMESTAMP
echo $REGION
echo $JOB_NAME
gcloud beta ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=config.yaml \
--max-trial-count=10 \
--parallel-trial-count=2
###Output
_____no_output_____
###Markdown
You could have also used the Vertex AI Python SDK to achieve the same, as below:
###Code
from datetime import datetime
from google.cloud import aiplatform
# Output directory and jobID
timestamp_str=datetime.strftime(datetime.now(), '%Y%m%d_%H%M%S')
BASE_OUTPUT_DIR=f"gs://{BUCKET}/taxifare_{timestamp_str}"
JOB_NAME=f"taxifare_{timestamp_str}"
print(BASE_OUTPUT_DIR, REGION, JOB_NAME)
# Vertex AI machines to use for training
PYTHON_PACKAGE_URIS=f"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths.
GCS_PROJECT_PATH=f"gs://{BUCKET}/taxifare"
DATA_PATH=f"{GCS_PROJECT_PATH}/data"
TRAIN_DATA_PATH=f"{DATA_PATH}/taxi-train*"
EVAL_DATA_PATH=f"{DATA_PATH}/taxi-valid*"
# custom container
IMAGE_NAME="taxifare_training_container"
IMAGE_URI=f"gcr.io/{PROJECT}/{IMAGE_NAME}"
def create_hyperparameter_tuning_job_python_package_sample(
project: str,
display_name: str,
executor_image_uri: str,
package_uri: str,
python_module: str,
location: str = REGION,
api_endpoint: str = f"{REGION}-aiplatform.googleapis.com",
):
# The Vertex AI services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.JobServiceClient(client_options=client_options)
# study_spec
metric = {
"metric_id": "val_rmse",
"goal": aiplatform.gapic.StudySpec.MetricSpec.GoalType.MINIMIZE,
}
parameter_lr = {
"parameter_id": "lr",
"double_value_spec": {"min_value": 0.0001, "max_value": 0.1},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LOG_SCALE,
}
parameter_nbuckets = {
"parameter_id": "nbuckets",
"integer_value_spec": {"min_value": 10, "max_value": 25},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
parameter_batchsize = {
"parameter_id": "batch_size",
"discrete_value_spec": {"values": [15, 30, 50]},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
# trial_job_spec
worker_pool_spec = {
"machine_spec": {
"machine_type": "n1-standard-4",
},
"replica_count": 1,
"python_package_spec": {
"executor_image_uri": executor_image_uri,
"package_uris": [package_uri],
"python_module": python_module,
"args": [
f"--eval_data_path={EVAL_DATA_PATH}",
f"--train_data_path={TRAIN_DATA_PATH}",
f"--batch_size={BATCH_SIZE}",
f"--num_examples_to_train_on={NUM_EXAMPLES_TO_TRAIN_ON}",
f"--num_evals={NUM_EVALS}",
f"--nbuckets={NBUCKETS}",
f"--lr={LR}",
f"--nnsize={NNSIZE}"
],
},
}
# hyperparameter_tuning_job
hyperparameter_tuning_job = {
"display_name": display_name,
"max_trial_count": 10,
"parallel_trial_count": 2,
"study_spec": {
"metrics": [metric],
"parameters": [
parameter_lr,
parameter_nbuckets,
parameter_batchsize,
],
"algorithm": aiplatform.gapic.StudySpec.Algorithm.ALGORITHM_UNSPECIFIED, # results in Bayesian optimization
# "median_automated_stopping_spec": {} # early stopping: only available in v1beta1 as of writing
},
"trial_job_spec": {
"worker_pool_specs": [worker_pool_spec],
"base_output_directory": {
'output_uri_prefix': BASE_OUTPUT_DIR,
},
},
}
parent = f"projects/{project}/locations/{location}"
response = client.create_hyperparameter_tuning_job(parent=parent, hyperparameter_tuning_job=hyperparameter_tuning_job)
print("response:", response)
create_hyperparameter_tuning_job_python_package_sample(
project=PROJECT,
display_name=JOB_NAME,
executor_image_uri=PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,
package_uri=PYTHON_PACKAGE_URIS,
python_module=PYTHON_MODULE)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning**Learning Objectives**1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job3. Submit a hyperparameter tuning job to Vertex AI IntroductionLet's see if we can improve upon that by tuning our hyperparameters.Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training. These include learning rate and batch size, but also model design parameters such as the type of activation function and the number of hidden units.Here are the four most common ways to find the ideal hyperparameters:1. Manual2. Grid Search3. Random Search4. Bayesian Optimization**1. Manual**Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point; then they observe the result and use that information to try a new set of hyperparameters that might beat the existing performance. Pros- Educational, builds up your intuition as a data scientist- Inexpensive because only one trial is conducted at a timeCons- Requires a lot of time and patience**2. Grid Search**On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter, then try every possible combination. Pros- Can run hundreds of trials in parallel using the cloud- Guaranteed to find the best solution within the search spaceCons- Expensive**3. Random Search**Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range. Pros- Can run hundreds of trials in parallel using the cloud- Requires fewer trials than Grid Search to find a good solutionCons- Expensive (but less so than Grid Search)**4. Bayesian Optimization**Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization). Pros- Picks values intelligently based on results from past trials- Less expensive because it requires fewer trials to get a good resultCons- Requires sequential trials for best results, so it takes longer**Vertex AI HyperTune**Vertex AI HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overviewsearch_algorithms) Grid Search and Random Search. When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
###Code
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%env TFVERSION=2.5
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
###Output
Updated property [core/project].
Updated property [ai/region].
###Markdown
Make code compatible with Vertex AI Training ServiceIn order to make our code compatible with Vertex AI Training Service we need to make the following changes:1. Upload data to Google Cloud Storage 2. Move code into a trainer Python package3. Submit training job with `gcloud` to train on Vertex AI Upload data to Google Cloud Storage (GCS)Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS. To do this, run the notebook [0_export_data_from_bq_to_gcs.ipynb](./0_export_data_from_bq_to_gcs.ipynb), which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
###Code
!gsutil ls gs://$BUCKET/taxifare/data
###Output
gs://qwiklabs-gcp-00-eeb852ce8ccb/taxifare/data/taxi-train-000000000000.csv
gs://qwiklabs-gcp-00-eeb852ce8ccb/taxifare/data/taxi-valid-000000000000.csv
###Markdown
Move code into python packageIn the [previous lab](./1_training_at_scale.ipynb), we moved our code into a python package for training on Vertex AI. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory: - `__init__.py` - `model.py` - `task.py`
###Code
!ls -la taxifare/trainer
###Output
total 20
drwxr-xr-x 2 jupyter jupyter 4096 Oct 4 18:39 .
drwxr-xr-x 5 jupyter jupyter 4096 Oct 4 18:39 ..
-rw-r--r-- 1 jupyter jupyter 0 Oct 4 18:39 __init__.py
-rw-r--r-- 1 jupyter jupyter 7165 Oct 4 18:39 model.py
-rw-r--r-- 1 jupyter jupyter 1728 Oct 4 18:39 task.py
###Markdown
To use hyperparameter tuning in your training job, you must perform the following steps: 1. Specify the hyperparameter tuning configuration for your training job by including `parameters` in the `StudySpec` of your Hyperparameter Tuning Job. 2. Include the following code in your training application: - Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial (we already exposed these parameters as command-line arguments in the earlier lab). - Report your hyperparameter metrics during training. Note that while you could just report the metrics at the end of training, it is better to set up a callback, to take advantage of Early Stopping. - Read in the environment variable `$AIP_MODEL_DIR`, set by Vertex AI and containing the trial number, as our `output-dir`. As the training code will be submitted several times in a parallelized fashion, it is safer to use this variable than to try to assemble a unique id within the trainer code. Modify model.py
###Code
%%writefile ./taxifare/trainer/model.py
import datetime
import logging
import os
import shutil
import hypertune
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import activations, callbacks, layers, models
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
def features_and_labels(row_data):
for unwanted_col in ["key"]:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
shuffle_buffer_size=1000000,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if not isinstance(s, str):
s = s.numpy().decode("utf-8")
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff * londiff + latdiff * latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed["pickup_datetime"]
feature_columns = {
colname: fc.numeric_column(colname) for colname in NUMERIC_COLS
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ["pickup_longitude", "dropoff_longitude"]:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78) / 8.0, name=f"scale_{lon_col}"
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ["pickup_latitude", "dropoff_latitude"]:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37) / 8.0, name=f"scale_{lat_col}"
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed["euclidean"] = layers.Lambda(euclidean, name="euclidean")(
[
inputs["pickup_longitude"],
inputs["pickup_latitude"],
inputs["dropoff_longitude"],
inputs["dropoff_latitude"],
]
)
feature_columns["euclidean"] = fc.numeric_column("euclidean")
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed["hourofday"] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32
),
name="hourofday",
)(inputs["pickup_datetime"])
feature_columns["hourofday"] = fc.indicator_column(
fc.categorical_column_with_identity("hourofday", num_buckets=24)
)
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns["pickup_latitude"], latbuckets
)
b_dlat = fc.bucketized_column(
feature_columns["dropoff_latitude"], latbuckets
)
b_plon = fc.bucketized_column(
feature_columns["pickup_longitude"], lonbuckets
)
b_dlon = fc.bucketized_column(
feature_columns["dropoff_longitude"], lonbuckets
)
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
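# Cross the bucketized pickup cell with the dropoff cell so the model can learn pickup-dropoff (route) interactions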
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns["pickup_and_dropoff"] = fc.embedding_column(pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ["pickup_datetime"]
NUMERIC_COLS = set(CSV_COLUMNS) - {LABEL_COLUMN, "key"} - set(STRING_COLS)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in NUMERIC_COLS
}
inputs.update(
{
colname: layers.Input(name=colname, shape=(), dtype="string")
for colname in STRING_COLS
}
)
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets
)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation="relu", name=f"h{layer}")(x)
output = layers.Dense(1, name="fare")(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss="mse", metrics=[rmse, "mse"])
return model
# TODO 1
# Instantiate the HyperTune reporting object
hpt = hypertune.HyperTune()
# Reporting callback
# TODO 1
class HPTCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
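# Report this epoch's validation RMSE back to Vertex AI so the tuning service can compare trials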
global hpt
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag="val_rmse",
metric_value=logs["val_rmse"],
global_step=epoch,
)
def train_and_evaluate(hparams):
batch_size = hparams["batch_size"]
nbuckets = hparams["nbuckets"]
lr = hparams["lr"]
nnsize = [int(s) for s in hparams["nnsize"].split()]
eval_data_path = hparams["eval_data_path"]
num_evals = hparams["num_evals"]
num_examples_to_train_on = hparams["num_examples_to_train_on"]
output_dir = hparams["output_dir"]
train_data_path = hparams["train_data_path"]
model_export_path = os.path.join(output_dir, "savedmodel")
checkpoint_path = os.path.join(output_dir, "checkpoints")
tensorboard_path = os.path.join(output_dir, "tensorboard")
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(
checkpoint_path, save_weights_only=True, verbose=1
)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb, HPTCallback()],
)
# Exporting the model with default serving function.
tf.saved_model.save(model, model_export_path)
return history
###Output
_____no_output_____
###Markdown
Modify task.py
###Code
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help="Batch size for training steps",
type=int,
default=32,
)
parser.add_argument(
"--eval_data_path",
help="GCS location pattern of eval files",
required=True,
)
parser.add_argument(
"--nnsize",
help="Hidden layer sizes (provide space-separated sizes)",
default="32 8",
)
parser.add_argument(
"--nbuckets",
help="Number of buckets to divide lat and lon with",
type=int,
default=10,
)
parser.add_argument(
"--lr", help="learning rate for optimizer", type=float, default=0.001
)
parser.add_argument(
"--num_evals",
help="Number of times to evaluate model on eval data training.",
type=int,
default=5,
)
parser.add_argument(
"--num_examples_to_train_on",
help="Number of examples to train on.",
type=int,
default=100,
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
default=os.getenv("AIP_MODEL_DIR"),
)
parser.add_argument(
"--train_data_path",
help="GCS location pattern of train files containing eval URLs",
required=True,
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
%%writefile taxifare/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name="taxifare_trainer",
version="0.1",
packages=find_packages(),
include_package_data=True,
description="Taxifare model training application.",
)
%%bash
cd taxifare
python ./setup.py sdist --formats=gztar
cd ..
%%bash
gsutil cp taxifare/dist/taxifare_trainer-0.1.tar.gz gs://${BUCKET}/taxifare/
###Output
_____no_output_____
###Markdown
Create HyperparameterTuningJobCreate a StudySpec object to hold the hyperparameter tuning configuration for your training job, and add the StudySpec to your hyperparameter tuning job.In your StudySpec `metric`, set the `metric_id` to a value representing your chosen metric.
###Code
%%bash
# Output directory and job name
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
BASE_OUTPUT_DIR=gs://${BUCKET}/taxifare_$TIMESTAMP
JOB_NAME=taxifare_$TIMESTAMP
echo ${BASE_OUTPUT_DIR} ${REGION} ${JOB_NAME}
# Vertex AI machines to use for training
PYTHON_PACKAGE_URI="gs://${BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
echo > ./config.yaml "displayName: $JOB_NAME
studySpec:
metrics:
- metricId: val_rmse
goal: MINIMIZE
parameters:
- parameterId: lr
doubleValueSpec:
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterId: nbuckets
integerValueSpec:
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterId: batch_size
discreteValueSpec:
values:
- 15
- 30
- 50
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
baseOutputDirectory:
outputUriPrefix: $BASE_OUTPUT_DIR
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
pythonPackageSpec:
args:
- --train_data_path=$TRAIN_DATA_PATH
- --eval_data_path=$EVAL_DATA_PATH
- --batch_size=$BATCH_SIZE
- --num_examples_to_train_on=$NUM_EXAMPLES_TO_TRAIN_ON
- --num_evals=$NUM_EVALS
- --nbuckets=$NBUCKETS
- --lr=$LR
- --nnsize=$NNSIZE
executorImageUri: $PYTHON_PACKAGE_EXECUTOR_IMAGE_URI
packageUris:
- $PYTHON_PACKAGE_URI
pythonModule: $PYTHON_MODULE
replicaCount: $REPLICA_COUNT"
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
JOB_NAME=taxifare_$TIMESTAMP
echo $REGION
echo $JOB_NAME
gcloud beta ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=config.yaml \
--max-trial-count=10 \
--parallel-trial-count=2
###Output
_____no_output_____
###Markdown
You could have also used the Vertex AI Python SDK to achieve the same, as below:
###Code
from datetime import datetime
from google.cloud import aiplatform
# Output directory and jobID
timestamp_str=datetime.strftime(datetime.now(), '%Y%m%d_%H%M%S')
BASE_OUTPUT_DIR=f"gs://{BUCKET}/taxifare_{timestamp_str}"
JOB_NAME=f"taxifare_{timestamp_str}"
print(BASE_OUTPUT_DIR, REGION, JOB_NAME)
# Vertex AI machines to use for training
PYTHON_PACKAGE_URIS=f"gs://{BUCKET}/taxifare/taxifare_trainer-0.1.tar.gz"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest"
PYTHON_MODULE="trainer.task"
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths.
GCS_PROJECT_PATH=f"gs://{BUCKET}/taxifare"
DATA_PATH=f"{GCS_PROJECT_PATH}/data"
TRAIN_DATA_PATH=f"{DATA_PATH}/taxi-train*"
EVAL_DATA_PATH=f"{DATA_PATH}/taxi-valid*"
# custom container
IMAGE_NAME="taxifare_training_container"
IMAGE_URI=f"gcr.io/{PROJECT}/{IMAGE_NAME}"
def create_hyperparameter_tuning_job_python_package_sample(
project: str,
display_name: str,
executor_image_uri: str,
package_uri: str,
python_module: str,
location: str = REGION,
api_endpoint: str = f"{REGION}-aiplatform.googleapis.com",
):
# The Vertex AI services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.JobServiceClient(client_options=client_options)
# study_spec
metric = {
"metric_id": "val_rmse",
"goal": aiplatform.gapic.StudySpec.MetricSpec.GoalType.MINIMIZE,
}
parameter_lr = {
"parameter_id": "lr",
"double_value_spec": {"min_value": 0.0001, "max_value": 0.1},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LOG_SCALE,
}
parameter_nbuckets = {
"parameter_id": "nbuckets",
"integer_value_spec": {"min_value": 10, "max_value": 25},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
parameter_batchsize = {
"parameter_id": "batch_size",
"discrete_value_spec": {"values": [15, 30, 50]},
"scale_type": aiplatform.gapic.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,
}
# trial_job_spec
worker_pool_spec = {
"machine_spec": {
"machine_type": "n1-standard-4",
},
"replica_count": 1,
"python_package_spec": {
"executor_image_uri": executor_image_uri,
"package_uris": [package_uri],
"python_module": python_module,
"args": [
f"--eval_data_path={EVAL_DATA_PATH}",
f"--train_data_path={TRAIN_DATA_PATH}",
f"--batch_size={BATCH_SIZE}",
f"--num_examples_to_train_on={NUM_EXAMPLES_TO_TRAIN_ON}",
f"--num_evals={NUM_EVALS}",
f"--nbuckets={NBUCKETS}",
f"--lr={LR}",
f"--nnsize={NNSIZE}"
],
},
}
# hyperparameter_tuning_job
hyperparameter_tuning_job = {
"display_name": display_name,
"max_trial_count": 10,
"parallel_trial_count": 2,
"study_spec": {
"metrics": [metric],
"parameters": [
parameter_lr,
parameter_nbuckets,
parameter_batchsize,
],
"algorithm": aiplatform.gapic.StudySpec.Algorithm.ALGORITHM_UNSPECIFIED, # results in Bayesian optimization
# "median_automated_stopping_spec": {} # early stopping: only available in v1beta1 as of writing
},
"trial_job_spec": {
"worker_pool_specs": [worker_pool_spec],
"base_output_directory": {
'output_uri_prefix': BASE_OUTPUT_DIR,
},
},
}
parent = f"projects/{project}/locations/{location}"
response = client.create_hyperparameter_tuning_job(parent=parent, hyperparameter_tuning_job=hyperparameter_tuning_job)
print("response:", response)
create_hyperparameter_tuning_job_python_package_sample(
project=PROJECT,
display_name=JOB_NAME,
executor_image_uri=PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,
package_uri=PYTHON_PACKAGE_URIS,
python_module=PYTHON_MODULE)
###Output
_____no_output_____ |
Mini Projects/Feature Selection/feature-selection-for-classification-problems.ipynb | ###Markdown
Importing necessary libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import random
import plotly.express as px
pd.set_option("display.max_rows", None, "display.max_columns", None)
from sklearn.model_selection import train_test_split
import imblearn #Major library - Please ensure this is installed
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
#-------------------------------------------------------------------
import statsmodels #Install if not present readily
import xgboost as xgb
from sklearn.linear_model import Lasso,LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import roc_auc_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
import warnings
warnings.filterwarnings("ignore")
random.seed(100)
###Output
_____no_output_____
###Markdown
Loading Data In this notebook, we are using the credit card fraud detection dataset. Since fraud occurs rarely, the target variable is severely imbalanced, making it a perfect case to work through with the different sampling & feature selection methods prescribed below. A link to, and detailed description of, the original data can be found here: https://www.kaggle.com/mlg-ulb/creditcardfraud
###Code
dataset = pd.read_csv(r"../input/creditcardfraud/creditcard.csv")
#------------------------------------------------------------------------------------------------
#Summary
print('Total Shape :',dataset.shape)
dataset.head()
###Output
Total Shape : (284807, 31)
###Markdown
About the dataset:1. The dataset consists of 29 principal components already extracted in the source dataset. The column names have been anonymized for business confidentiality purposes 2. The Time column is just a level (identifier) column, and the Class column is the target variable we aim to predict 3. Since the features are the principal components themselves, we do not need to apply any scaling methods to them Null Check
###Code
pd.DataFrame(dataset.isnull().sum()).T
###Output
_____no_output_____
###Markdown
Minority Class contribution in the dataset
###Code
print('Total fraud(Class = 1) and not-fraud(Class = 0) :\n',dataset['Class'].value_counts())
print('Percentage of minority samples over total Data :',100 * dataset[dataset['Class']==1].shape[0]/dataset.shape[0],'%')
###Output
Total fraud(Class = 1) and not-fraud(Class = 0) :
0 284315
1 492
Name: Class, dtype: int64
Percentage of minority samples over total Data : 0.1727485630620034 %
###Markdown
Insight:1. The % contribution of Class 1, i.e. fraud, is abysmally low (~0.17%), so the model will not be able to learn the patterns of a fraud properly and the prediction quality will be poor. 2. To remediate this, we have an array of sampling techniques at our disposal that help us overcome the problem of imbalanced classification Note (Important) : 1. For this dataset, since we have already established that sampled data works better for a classification model, we will proceed with sampled data for the next step of feature selection (the feature selection techniques work for both sampled and un-sampled data) 2. The features in this data are the principal components, not the raw features themselves. It is not general practice to run feature selection algorithms on principal components, but since we have 29 of them (and we don't know how much variance each one explains, so we assume the eigenvalues/explained variances are spread across many PCs), we'd still want to eliminate the PC features with negligible impact UDF for 3-D plotting of the sampled sets
###Code
def plot_3d(df,col1,col2,col3,hue_elem,name):
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df[col1], df[col2], df[col3], c=df[hue_elem], marker='o')
title = 'Scatter plot for :' + name
ax.set_title(title)
ax.set_xlabel(col1+' Label')
ax.set_ylabel(col2+' Label')
ax.set_zlabel(col3+' Label')
plt.show()
###Output
_____no_output_____
###Markdown
Splitting the data
###Code
# Test Train Split for modelling purpose
X = dataset.loc[:,[cols for cols in dataset.columns if ('Class' not in cols) & ('Time' not in cols)]] #Removing time since its a level column
y = dataset.loc[:,[cols for cols in dataset.columns if 'Class' in cols]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,random_state=100)
#----------------------------------------------------------------------------------------------------
print('Total Shape of Train X:',X_train.shape)
print('Total Shape of Train Y:',y_train.shape)
###Output
Total Shape of Train X: (190820, 29)
Total Shape of Train Y: (190820, 1)
###Markdown
ADASYN Oversampling (the cell below oversamples the minority class with ADASYN)Reference Links - https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.ADASYN.html
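For comparison, plain random undersampling of the majority class (see https://imbalanced-learn.org/stable/references/generated/imblearn.under_sampling.RandomUnderSampler.html) could be sketched roughly as follows; this is an illustrative alternative only, and the `sampling_strategy=0.10` value simply mirrors the ADASYN cell below:
```python
# Illustrative sketch: random undersampling instead of ADASYN oversampling
from imblearn.under_sampling import RandomUnderSampler

rus = RandomUnderSampler(sampling_strategy=0.10, random_state=100)
X_train_rus, y_train_rus = rus.fit_resample(X_train, y_train)
print('Resampled training shape:', X_train_rus.shape)
```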
###Code
# transform the dataset
from imblearn.over_sampling import ADASYN
adasyn = ADASYN(sampling_strategy=0.10,n_neighbors=5,random_state=100,n_jobs=-1)
X_train_adasyn, y_train_adasyn = adasyn.fit_resample(X_train, y_train)
#-----------------------------------------------------------------------------------
train_adasyn = X_train_adasyn.join(y_train_adasyn)
print('Total datapoints :',train_adasyn.shape)
print('Percentage of minority samples over Training Data :',
100 * train_adasyn[train_adasyn['Class']==1].shape[0]/train_adasyn.shape[0],'%')
#--------------------------------------------------------------------------------------
plot_3d(train_adasyn,'V3','V1','V2','Class','ADASYN')
###Output
Total datapoints : (209560, 30)
Percentage of minority samples over Training Data : 9.104313800343578 %
###Markdown
Passing the ADASYN-resampled data into the model for training
###Code
## Final X-Y pair of training to pass
X_train_final = X_train_adasyn.copy()
y_train_final = y_train_adasyn.copy()
#-----------------------------------------------------------------------------
train_final = X_train_final.join(y_train_final)
print('Percentage of minority samples over Final Training Data :',
100 * train_final[train_final['Class']==1].shape[0]/train_final.shape[0],'%')
train_final.head(1)
###Output
_____no_output_____
###Markdown
Baseline - Logistic Regression and XGBoost models on fairly balanced data (~9% minority class) with no feature selection
###Code
lr_clf = LogisticRegression(solver='saga',random_state=100)
lr_clf.fit(X_train_final,y_train_final)
pred = lr_clf.predict(X_test)
#-----------------------------------------------
score = roc_auc_score(y_test, pred)
print('1. ROC AUC: %.3f' % score)
print('2. Accuracy :',accuracy_score(y_test, pred))
print('3. Classification Report -\n',classification_report(y_test, pred))
print('4. Confusion Matrix - \n',confusion_matrix(y_test, pred))
import xgboost as xgb
xgb_clf = xgb.XGBClassifier(random_state=100,n_jobs=-1)
xgb_clf.fit(X_train_final,y_train_final)
xgb_pred = xgb_clf.predict(X_test)
#-----------------------------------------------
score = roc_auc_score(y_test, xgb_pred)
print('1. ROC AUC: %.3f' % score)
print('2. Accuracy :',accuracy_score(y_test, xgb_pred))
print('3. Classification Report -\n',classification_report(y_test, xgb_pred))
print('4. Confusion Matrix - \n',confusion_matrix(y_test, xgb_pred))
###Output
[04:29:00] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
1. ROC AUC: 0.912
2. Accuracy : 0.9994999308414994
3. Classification Report -
precision recall f1-score support
0 1.00 1.00 1.00 93834
1 0.86 0.82 0.84 153
accuracy 1.00 93987
macro avg 0.93 0.91 0.92 93987
weighted avg 1.00 1.00 1.00 93987
4. Confusion Matrix -
[[93814 20]
[ 27 126]]
###Markdown
Feature Selection Techniques:1. Quality Based: 1. Variance Inflation Factor (VIF)** 2. Correlation (Pearson/Spearman)** (Not applicable for classification problems)2. Performance (Fit of a model) based: 1. Intrinsic Techniques: - Lasso/Logistic Regression Feature Selection** - XGBoost/Random Forest Feature Selection 2. Extrinsic Techniques (Wrapper Based Methods): - Recursive Feature Elimination w/ Cross Validation (RFECV) - Relative Importance** - Boruta ** - If satisfying the basic assumptions Variance Inflation Factor (VIF) to detect multi-collinearity1. Multi-collinearity: when two or more features are highly correlated with each other2. Not removing multi-collinear features results in a violation of the linear assumptions in many modelling algorithms, and hence unnatural predictions3. ASSUMPTIONS - every assumption of linear regression is valid hereRelevant Links - https://www.statsmodels.org/stable/generated/statsmodels.stats.outliers_influence.variance_inflation_factor.html
###Code
# Import library for VIF
from statsmodels.stats.outliers_influence import variance_inflation_factor
#------------------------------------------------------------------------------------
def calc_vif(X):
# Calculating VIF
vif = pd.DataFrame()
vif["variables"] = X.columns
vif["VIF"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
return(vif)
#------------------------------------------------------------------------------------
X_VIF = calc_vif(X_train_final)
X_VIF = X_VIF.sort_values(['VIF'],ascending=False) #Sorting by descending order
X_VIF[X_VIF['VIF']>4] #Filtering for above 4
###Output
_____no_output_____
###Markdown
Insight : 1. The VIF has a range of [1, inf)2. Columns with VIF > 5 (4 in some cases, as above) are considered multi-collinear and should be removed sequentially, re-checking the VIF after each removal, until no high-VIF features remain (a minimal sketch of this loop is shown below) Lasso/Logistic Regression Feature Selection 1. ASSUMPTION : strictly the same as those of linear regression 2. Lasso for regression problems, logistic regression with regularization for classificationRelevant Links - 1. https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html 2. https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectFromModel.html
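Returning to the VIF insight above, a minimal sketch of that sequential-elimination loop (reusing the `calc_vif` helper defined earlier; the cut-off of 5 is a common rule of thumb, not a hard rule) could look like this:
```python
# Illustrative sketch: drop the worst VIF offender one at a time and re-check
X_reduced = X_train_final.copy()
while True:
    vif = calc_vif(X_reduced).sort_values('VIF', ascending=False)
    if vif['VIF'].iloc[0] <= 5:
        break
    X_reduced = X_reduced.drop(columns=[vif['variables'].iloc[0]])
print('Features retained after VIF filtering:', list(X_reduced.columns))
```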
###Code
sel_ = SelectFromModel(LogisticRegression(C=1, penalty='l1',solver='saga'))
sel_.fit(X_train_final, y_train_final)
#--------------------------------------------------------------------------------
selected_feat = X_train_final.columns[(sel_.get_support())]
selected_feat
###Output
_____no_output_____
###Markdown
Tree-Based Feature Selection1. The tree model is chosen to match the modelling step. Example - if the final model is XGBoost, the same model can be used for feature selection for better reliability and consistency. 2. It calculates feature importances based on Gini purity gain, node coverage, frequency, gain in MSE, etc.Relevant Links - https://xgboost.readthedocs.io/en/latest/python/python_api.htmlmodule-xgboost.training
###Code
my_model = xgb.XGBClassifier(random_state=100)
my_model.fit(X_train_final,y_train_final)
#----------------------------------------------------------------------------------------------------------
feature_importances = pd.DataFrame(my_model.feature_importances_,
index = X_train_final.columns,
columns=['importance']).sort_values('importance', ascending=False)
feature_importances['Features'] = feature_importances.index
feature_importances = feature_importances[['Features','importance']]
feature_importances.reset_index(inplace=True)
feature_importances.drop(columns={'index'},inplace=True)
#----------------------------------------------------------------------------------------------------------
print(feature_importances.head(5))
###Output
[04:30:48] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
Features importance
0 V14 0.387940
1 V4 0.069391
2 V10 0.059476
3 V12 0.049156
4 V17 0.041138
###Markdown
Insights:1. The above are the top 5 features contributing to the prediction of 'Class', with V14 the highest at roughly 39% importance, followed by V4 at roughly 7%2. A threshold for the features to select can be tuned by an iterative process (feed them into the model and check the evaluation). Ex - pick all features having at least 1% importance (a one-line sketch of this filter is shown below) 3. The same method can be used for other tree-based models like CART, RF, LGBM etc. RFECV (Recursive Feature Elimination with Cross Validation) 1. A wrapper method, hence it uses a model within itself (any model passed by the user)2. For this example, we will pass the XGB model we fit aboveRelevant Links - https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html
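For example, the "at least 1% importance" filter mentioned above is a one-liner on the `feature_importances` frame built in the previous cell (the 0.01 threshold is purely illustrative):
```python
# Keep only features contributing at least 1% importance (illustrative cut-off)
important_feats = feature_importances.loc[feature_importances['importance'] >= 0.01, 'Features'].tolist()
print(len(important_feats), 'features kept:', important_feats)
```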
###Code
from sklearn.feature_selection import RFECV
warnings.filterwarnings("ignore")
estimator = xgb.XGBClassifier(random_state=100,n_jobs=-1)
selector = RFECV(estimator, step=2, cv=3, n_jobs=-1, scoring = 'f1_weighted') #For example purposes, step=2; use step=1 in real projects for a finer search
selector.fit(X_train_final,y_train_final)
###Output
[05:07:16] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:08:21] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:09:24] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:10:21] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
###Markdown
Post-Processing for RFECV to extract out the exact columns to pick
###Code
selector_mask = list(selector.support_)
print('1 : ',selector_mask)
print('Length - ',len(selector_mask))
col_list = list(X_train_final.columns)
print('2 : ',col_list)
#--------------------------------------------------------------------------------------------
pass_idx = []
for n, i in enumerate(selector_mask):
if i == True:
selector_mask[n] = 1
elif i == False:
selector_mask[n] = 0
selector_mask
#--------------------------------------------------------------------------------------------
for n,item in enumerate(selector_mask):
if item == 1:
a = n
pass_idx = pass_idx + [n]
print('3 : ',pass_idx,'\n')
final_features = []
for i in pass_idx:
final_features = final_features + [X_train_final.columns[i]]
#--------------------------------------------------------------------------------------------
print('Final Features :')
final_features #The features recommended by the RFECV algorithm
###Output
1 : [True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, False, True, False, True, False, True, True, True, False, True]
Length - 29
2 : ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount']
3 : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 22, 24, 25, 26, 28]
Final Features :
###Markdown
Boruta Feature Selection1. It works on the principle of shadow feature creation & multiple Bernoulli trials 2. It is an automated version of XGB feature selection (it dynamically chooses the threshold)Relevant Links - https://pypi.org/project/Boruta/
###Code
from boruta import BorutaPy
#------------------------------------------------------------------------
###initialize Boruta
xgb = xgb.XGBClassifier(random_state=100)
boruta = BorutaPy(
estimator = xgb,
n_estimators = 'auto',
max_iter = 250 # number of trials to perform
)
#------------------------------------------------------------------------
### fit Boruta (it accepts np.array, not pd.DataFrame)
boruta.fit(np.array(X_train_final), np.array(y_train_final))
### print results
green_area = X_train_final.columns[boruta.support_].to_list()
blue_area = X_train_final.columns[boruta.support_weak_].to_list()
print('features in the green area:', green_area)
print('features in the blue area:', blue_area)
###Output
[05:11:20] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:13:01] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:15:35] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:18:06] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:20:37] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:23:09] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:25:40] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[05:28:11] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
features in the green area: ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount']
features in the blue area: []
###Markdown
Insights:1. Boruta is generally a conservative algorithm and returns fewer features than other algorithms2. The green area features are those which are absolutely necessary for the model3. The blue area features are those which are optional for the model. In the above example, there are no optional features Improvement on baseline results with the selected features (L1-selected for logistic regression, Boruta green-area for XGBoost)
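Relating to point 3 above: if any optional (blue-area) features had been returned and you wanted to keep them as well, you could simply combine the two lists before filtering the training frame, for example:
```python
# Confirmed (green) plus tentative (blue) features; blue_area happens to be empty here
boruta_feats = green_area + blue_area
X_train_boruta = X_train_final[boruta_feats]
```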
###Code
lr_clf = LogisticRegression(solver='saga',max_iter=10000,random_state=100)
lr_clf.fit(X_train_final[selected_feat],y_train_final) #X filtered for the L1-selected features
pred = lr_clf.predict(X_test[selected_feat]) #X filtered for the L1-selected features
#----------------------------------------------------------------------------------------
score = roc_auc_score(y_test, pred)
print('1. ROC AUC: %.3f' % score)
print('2. Accuracy :',accuracy_score(y_test, pred))
print('3. Classification Report -\n',classification_report(y_test, pred))
print('4. Confusion Matrix - \n',confusion_matrix(y_test, pred))
import xgboost as xgb
xgb_clf = xgb.XGBClassifier(random_state=100)
xgb_clf.fit(X_train_final[green_area],y_train_final)
xgb_pred = xgb_clf.predict(X_test[green_area])
#-----------------------------------------------
score = roc_auc_score(y_test, xgb_pred)
print('1. ROC AUC: %.3f' % score)
print('2. Accuracy :',accuracy_score(y_test, xgb_pred))
print('3. Classification Report -\n',classification_report(y_test, xgb_pred))
print('4. Confusion Matrix - \n',confusion_matrix(y_test, xgb_pred))
###Output
[05:40:45] WARNING: ../src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
1. ROC AUC: 0.912
2. Accuracy : 0.9994999308414994
3. Classification Report -
precision recall f1-score support
0 1.00 1.00 1.00 93834
1 0.86 0.82 0.84 153
accuracy 1.00 93987
macro avg 0.93 0.91 0.92 93987
weighted avg 1.00 1.00 1.00 93987
4. Confusion Matrix -
[[93814 20]
[ 27 126]]
|
yt_videos_colabs/PythonInvest_com_2_Sentiment_Analysis_of_Financial_News_.ipynb | ###Markdown
> **Financial News NLP Analysis*** **What?** Extracting financial news through an API and getting its sentiment* **Why?** Trace news coverage for your favourite stocks (or industry) and check whether strongly positive/negative sentiment is correlated with the stock's performance* **How?** * *NewsAPI* in Python * *Vader* library for the sentiment generation * Example: Tesla stock in April-2021Details in the article: https://pythoninvest.com/long-read/sentiment-analysis-of-financial-news 1) IMPORTS
###Code
!pip install newsapi-python
!pip install yfinance
import nltk
### Needed the first time the script runs (downloads the VADER lexicon)
nltk.download('vader_lexicon')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from newsapi import NewsApiClient
#from newsapi.newsapi_client import NewsApiClient
from datetime import date, timedelta, datetime
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sia = SentimentIntensityAnalyzer()
# Show full output in Colab
# https://stackoverflow.com/questions/54692405/output-truncation-in-google-colab
pd.set_option('display.max_colwidth',1000)
###Output
_____no_output_____
###Markdown
2) Obtain an Access Key for the NewsAPI * You can get a new FREE key on the website https://newsapi.org/* NEWS_API_KEY = personal API Key
###Code
# Init news api
NEWS_API_KEY = '2adc9646b17746ffbd42e9526c1443e1'
# '1900869fa01647fca0bdc19b4550daa0'
###Output
_____no_output_____
###Markdown
3) The News API example
###Code
#https://newsapi.org/docs/endpoints/everything
newsapi = NewsApiClient(api_key= NEWS_API_KEY)
keywrd = 'Tesla stock'
#my_date = datetime.strptime('10-Apr-2021','%d-%b-%Y')
my_date = (datetime.now() - timedelta(days=7)).date()
articles = newsapi.get_everything(q = keywrd,
from_param = my_date.isoformat(),
to = (my_date + timedelta(days = 1)).isoformat(),
language="en",
#sources = ",".join(sources_list),
sort_by="relevancy",
page_size = 100)
articles
###Output
_____no_output_____
###Markdown
4) Sentiment
###Code
PHRASES = ['Well, this week news broke that they had been in talks with Twitter for a $4 billion acquisition, so it looks like they’re still pretty desirable.',\
'Wow, how things change.',\
'Traveloka are poised to become public companies in coming months, kickstarting a coming-out party for Southeast Asia’s long-overlooked internet scene.',\
'Former DHS Secretary Janet Napolitano spoke with Yahoo Finance about comprehensive immigration reform.']
for phrase in PHRASES:
print(f'{phrase}')
print(sia.polarity_scores(phrase))
###Output
Well, this week news broke that they had been in talks with Twitter for a $4 billion acquisition, so it looks like they’re still pretty desirable.
{'neg': 0.084, 'neu': 0.603, 'pos': 0.313, 'compound': 0.7624}
Wow, how things change.
{'neg': 0.0, 'neu': 0.441, 'pos': 0.559, 'compound': 0.5859}
Traveloka are poised to become public companies in coming months, kickstarting a coming-out party for Southeast Asia’s long-overlooked internet scene.
{'neg': 0.0, 'neu': 0.783, 'pos': 0.217, 'compound': 0.5719}
Former DHS Secretary Janet Napolitano spoke with Yahoo Finance about comprehensive immigration reform.
{'neg': 0.0, 'neu': 0.857, 'pos': 0.143, 'compound': 0.25}
###Markdown
5) NEWS + Sentiment 
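Once the Data Frame of scored articles is built (see `get_articles_sentiments` below), you may want to bucket the compound scores into coarse labels. The ±0.05 cut-offs in this sketch follow a common VADER convention but are ultimately an arbitrary choice:
```python
# Hypothetical helper: map a VADER compound score to a coarse label
def label_sentiment(compound, threshold=0.05):
    if compound >= threshold:
        return 'positive'
    if compound <= -threshold:
        return 'negative'
    return 'neutral'

# e.g. return_articles['Label'] = return_articles['Sentiment'].apply(label_sentiment)
```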
###Code
def get_articles_sentiments(keywrd, startd, sources_list = None, show_all_articles = False):
newsapi = NewsApiClient(api_key= NEWS_API_KEY)
if type(startd) == str:
my_date = datetime.strptime(startd,'%d-%b-%Y')
else:
my_date = startd
# business_en_sources = get_sources('business','en')
if sources_list:
articles = newsapi.get_everything(q = keywrd,
from_param = my_date.isoformat(),
to = (my_date + timedelta(days = 1)).isoformat(),
language="en",
sources = ",".join(sources_list),
sort_by="relevancy",
page_size = 100)
else:
articles = newsapi.get_everything(q = keywrd,
from_param = my_date.isoformat(),
to = (my_date + timedelta(days = 1)).isoformat(),
language="en",
sort_by="relevancy",
page_size = 100)
article_content = ''
date_sentiments = {}
date_sentiments_list = []
seen = set()
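# Skip duplicate articles: each title is scored only once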
for article in articles['articles']:
if str(article['title']) in seen:
continue
else:
seen.add(str(article['title']))
article_content = str(article['title']) + '. ' + str(article['description'])
sentiment = sia.polarity_scores(article_content)['compound']
date_sentiments.setdefault(my_date, []).append(sentiment)
date_sentiments_list.append((sentiment, article['url'],article['title'],article['description']))
date_sentiments_l = sorted(date_sentiments_list, key=lambda tup: tup[0],reverse=True)
sent_list = list(date_sentiments.values())[0]
return pd.DataFrame(date_sentiments_list, columns=['Sentiment','URL','Title','Description'])
# Easy version when we don't filter the business source -- seems to be relevant though, but the description
# Get all sources in en
dt = (datetime.now() - timedelta(days=7)).strftime("%d-%b-%Y")
return_articles = get_articles_sentiments(keywrd= 'Tesla stock', startd = dt, sources_list = None, show_all_articles= True)
return_articles.Sentiment.hist(bins=30,grid=False)
print(return_articles.Sentiment.mean())
print(return_articles.Sentiment.count())
print(return_articles.Description)
return_articles.sort_values(by='Sentiment', ascending=True)[['Sentiment','URL', 'Description','Title']].head(2)
return_articles.sort_values(by='Sentiment', ascending=False)[['Sentiment','URL', 'Description','Title']].head(2)
###Output
_____no_output_____ |
Online Certificate Course in Data Science and Machine Learning rearranged/03 pandas/Pandas Dataframe-Part1.ipynb | ###Markdown
Selection and Indexing
###Code
df.loc['A']
df.iloc[0]
df['W']
df[['W','Z']]
type(df[['W','Z']])
df
df['New'] = df['W']+df['Y']
df
df.drop('New',axis=1,inplace=True)
df
###Output
_____no_output_____
###Markdown
Adding a Row
###Code
df.loc['F'] = df.loc['A']+df.loc['B']
df
df.drop('F',axis=0,inplace=True)
df
df.loc['F'] = df.loc['A']+df.loc['B']
df
newind = 'DEL UP UK TN AP KL'.split()
newind
df['States'] = newind
df
df.reset_index()
df.set_index('States',inplace=True)
df
###Output
_____no_output_____
###Markdown
Multi-Index Levels
###Code
outside = ['North', 'North', 'North', 'South', 'South', 'South']
inside = newind
hier_index = list(zip(outside,inside))
hier_index
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index
df.index = hier_index
df
df.xs('North')
###Output
_____no_output_____
###Markdown
Data Input & Output CSV Input
###Code
df = pd.read_csv('C:\\Users\\AEL04\\Downloads\\example.csv')
df
df2 = pd.read_csv('C:/Users/AEL04/Downloads/example.csv')
df2
pwd
df3 = pd.read_csv('example.csv')
df3.to_csv('example3.csv',index=False)
df4 = pd.read_csv('example3.csv')
df4
###Output
_____no_output_____
###Markdown
Excel Input
###Code
df = pd.read_excel('Excel_Sample.xlsx',sheet_name='Sheet1')
df
df.to_excel('Excel_Sample2.xlsx',sheet_name='Sheet1',index=False)
pd.read_csv('population_india_census2011.csv',encoding='unicode_escape')
df = pd.read_csv('https://raw.githubusercontent.com/ishant707/Covid19/master/covid_19_world.csv')
df.head(10)
df.shape
df.tail()
###Output
_____no_output_____ |
_notebooks/2020-09-30-Gradient-descent-simple-example.ipynb | ###Markdown
GD (Gradient Descent)> Getting your hands dirty with a bit of calculus to implement SGD and undestand it- toc:true- branch: master- badges: true- comments: true- author: Juan Cruz Alric- categories: [deep-learning, jupyter, fastai] GD is the key that allow us to have a model that can get better and better and look for that perfection. For this we need a way to adjust the parameters so that we can get a better performance from each iteration.We could look at each individual feature and come up with a set of parameters for each one, such that the highest parameters are associated with those features most likely to be important for a particular output.This can be represented as a function and set of parameter values for each possible output instance the probability of being correct:x= featuresp=parameters```def pr_eight(x,p) = (x*p).sum()``` x is represented as a vector, with all of the rows stacked up end to end into a single long line (x=[2,3,2,4,3,4,5,6,....,n]) and p is also a vector. If we have this function we only need a way of updating those "p" values until they are good as we can make them. To be more specific, here are the steps that we are going to require, to turn this function into a machine learning classifier:1. *Initialize* the weights.1. For each feature, use these weights to *predict* the output.1. Based on these predictions, calculate how good the model is (its *loss*).1. Calculate the *gradient*, which measures for each weight, how changing that weight would change the loss1. *Step* (that is, change) all the weights based on that calculation.1. Go back to the step 2, and *repeat* the process.1. Iterate until you decide to *stop* the training process (for instance, because the model is good enough or you don't want to wait any longer). - Initialize: We initialize the parameters to random values- Loss: testing the effectiveness of any current parameter assigment in terms of the actual performance. We need number that will return a small number if the performance was good or a large one if the performance was bad.- Step: A simple way to figure out whether a weight should be increased or decrease a bit. The best way to do this is by calculating the "gradients". - Stop: Once you decided how many epochs (iterations) to train the model for, we apply that decision. Train until we ran out of time or the accuracy of the model starts to get worst. Simple case
###Code
from fastai.vision.all import *
from fastbook import *
def f(x): return x**2
# we can use this functions to create the graph
plot_function(f, 'x', 'x**2')
# We can pick a random value
plot_function(f, 'x', 'x**2')
plt.scatter(-1.5, f(-1.5), color='red');
###Output
_____no_output_____
###Markdown
If we increase x by just a tiny amount, we can see that we descend from the current spot
###Code
# We can pick a random value
plot_function(f, 'x', 'x**2')
plt.scatter(-1.5, f(-1.5), color='red', alpha=0.5);
plt.scatter(-1, f(-1), color='red');
###Output
_____no_output_____
###Markdown
We try and get even lower
###Code
# We can pick a random value
plot_function(f, 'x', 'x**2')
plt.scatter(-1.5, f(-1.5), color='red', alpha=0.5);
plt.scatter(-1, f(-1), color='red', alpha=0.5);
plt.scatter(-0.8, f(-0.8), color='red', alpha=0.5);
plt.scatter(-0.7, f(-0.7), color='red');
###Output
_____no_output_____
###Markdown
Calculating the gradient The "one magic step" is the bit where we calculate the gradients. As we mentioned, we use calculus as a performance optimization; it allows us to more quickly calculate whether our loss will go up or down when we adjust our parameters up or down. In other words, the gradients tell us how much we have to change each weight to make our model better. Did you study calculus in school? If so, you may remember that the derivative of a function tells you how much a change in its parameter will change its result. If not, don't worry: just stop for a minute and go and watch this awesome video made by 3blue1brown https://www.youtube.com/watch?v=9vKqVkMQHKk&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x&index=2Now that you have refreshed your memory about derivatives, we can continue Remember the function x^2? Well, its derivative is another function that calculates the change, rather than the value. For instance, the derivative of x^2 at the value 5 tells us how rapidly the function changes at the value 5. When we know how our function will change, then we know what to do to make it smaller. This is the **key to machine learning**: having a way to change the parameters of a function to make it smaller.One important thing to be aware of is that our function has lots of weights that we need to adjust, so when we calculate the derivative we won't get back one number, but lots of them: a gradient for every weight. But there is nothing mathematically tricky here; you can calculate the derivative with respect to one weight, treat all the other ones as constant, then repeat that for each other weight. This is how all of the gradients are calculated, for every weight. Well... the best of all of this is that... PyTorch is able to automatically compute the derivative of nearly any function! And it's surprisingly fast 1) Let's pick a tensor value which we want gradients at:
###Code
xt = tensor(5.).requires_grad_()
###Output
_____no_output_____
###Markdown
**requires_grad_** is a method brought to us by PyTorch. We use it to tell PyTorch that we want to calculate gradients with respect to that variable at that specific value. This makes PyTorch remember to keep track of how to compute gradients for the other calculations we perform on it later. Now let's calculate the function with that specific value
###Code
yt = f(xt)
yt
###Output
_____no_output_____
###Markdown
Finally, we tell PyTorch to calculate the gradient for us
###Code
yt.backward()
xt.grad
###Output
_____no_output_____
###Markdown
If you remember your high school calculus rules, the derivative of x**2 is 2*x, and we have x=5, so the gradient should be 2*5=10, which is what PyTorch calculated for us! Let's now do it with a vector instead of only 1 number
###Code
xt = tensor([3., 5., 15.]).requires_grad_()
xt
###Output
_____no_output_____
###Markdown
Let's change the function above so it sums over all the numbers in the vector
###Code
def f(x): return (x**2).sum()
yt = f(xt)
yt
yt.backward()
xt.grad
###Output
_____no_output_____
###Markdown
The gradients only tell us the slope of our function; they don't actually tell us exactly how far to adjust the parameters. But they give us some idea of how far: if the slope is very large, that may suggest that we have more adjustments to do, whereas if the slope is very small, that may suggest that we are close to the optimal value. Adjusting using the Learning rate Deciding how to modify our parameters based on the values of the gradients is a crucial part of the deep learning process. We will multiply the gradient by some small number aka "the learning rate (LR)". Commonly picked values range between 0.001 and 0.1. Once you have picked an LR, you can adjust your parameters using this simple rule: p -= gradient(p) * lr ----> this is known as stepping your parameters. **What happens if you pick a learning rate that is too small?** It can mean having to do a lot of steps :( **What happens if you pick a learning rate that is too high?** Well, it could actually result in the loss getting worse and bouncing back up Let's work on an end-to-end simple example Let's take a look at GD and see how finding a minimum can be used to train a model to fit data better Let's start with a simple model. Imagine you were measuring the speed of a roller coaster as it went over the top of a hump. It would start fast, and then get slower as it went up the hill; it would be slowest at the top, and it would then speed up again as it went downhill. You want to build a model of how the speed changes over time. If you were measuring the speed manually every second for 20 seconds, it might look something like this:
###Code
time = torch.arange(0,20).float()
time
speed = torch.randn(20)*3 + 0.75*(time-9.5)**2 + 1
plt.scatter(time, speed);
###Output
_____no_output_____
###Markdown
Let's guess that it is a quadratic function of the form: a*(time**2) + (b*time) + c
###Code
def f(t, params):
a,b,c = params
return a*(t**2) + (b*t) + c
###Output
_____no_output_____
###Markdown
This greatly simplifies the problem, since every quadratic function is fully defined by the three parameters a, b, and c. Thus, to find the best quadratic function, we only need to find the best values for a, b, and c. We need to define first what we mean by "best." We define this precisely by choosing a loss function, which will return a value based on a prediction and a target, where lower values of the function correspond to "better" predictions. For continuous data, it's common to use mean squared error:
###Code
def mse(preds, targets): return ((preds-targets)**2).mean()
###Output
_____no_output_____
###Markdown
Now let's implement the 7-step process from the beginning of the post **Step 1: Initialize the parameters** We are going to initialize each parameter with a random value and tell PyTorch that we want to track their gradients using _requires_grad_()
###Code
params = torch.randn(3).requires_grad_()
###Output
_____no_output_____
###Markdown
We can clone the original parameters to have them just in case
###Code
original_parameters = params.clone()
###Output
_____no_output_____
###Markdown
**Step 2: Calculate the predictions**
###Code
preds = f(time,params)
###Output
_____no_output_____
###Markdown
Let's see how close the predictions are to our real targets
###Code
def show_preds(preds, ax=None):
if ax is None: ax=plt.subplots()[1]
ax.scatter(time, speed)
ax.scatter(time, to_np(preds), color='red')
ax.set_ylim(-300,100)
show_preds(preds)
###Output
_____no_output_____
###Markdown
Wow, terrible! Our random values think that the roller coaster is going backwards... look at the negative speeds. Can we do a better job? Well, let's calculate the loss **Step 3: Calculate the loss**
###Code
loss = mse(preds, speed)
loss
###Output
_____no_output_____
###Markdown
Our goal is now to improve this. To do that, we'll need to know the gradients. **Step 4: Calculate the gradients**
###Code
loss.backward()
params.grad
###Output
_____no_output_____
###Markdown
We can now pick a learning rate and use these gradients to adjust the parameters
###Code
lr = 0.00001
params.data -= lr * params.grad.data
params.grad = None
###Output
_____no_output_____
###Markdown
Let's see if the loss has improved:
###Code
preds = f(time,params)
mse(preds, speed)
show_preds(preds)
###Output
_____no_output_____
###Markdown
We need to repeat this a few times
###Code
def apply_step(params, prn=True):
preds = f(time, params)
loss = mse(preds, speed)
loss.backward()
params.data -= lr * params.grad.data
params.grad = None
if prn: print(loss.item())
return preds
###Output
_____no_output_____
###Markdown
**Step 6: Repeat the process** Now we repeat this process a bunch of times and see if we get any improvements
###Code
for i in range(20): apply_step(params)
# Lets use the original parameters and try to do the whole process again but
# this time with a graph
params = original_parameters.detach().requires_grad_()
_,axs = plt.subplots(1,4,figsize=(12,3))
for ax in axs: show_preds(apply_step(params, False), ax)
plt.tight_layout()
###Output
_____no_output_____ |
samples/02_power_users_developers/openstreetmap_exploration.ipynb | ###Markdown
Exploring OpenStreetMap using Pandas and the Python APIThis notebook is based around a simple tool named OSM Runner that queries the OpenStreetMap (OSM) Overpass API and returns a Spatial Data Frame. Using the Python API inside of a Jupyter Notebook, we can develop map-driven tools to explore OSM with the full capabilities of the ArcGIS platform at our disposal. Be sure to update the GIS connection information in the cell below before proceeding. This Notebook was written for an environment that does not have access to arcpy.
###Code
import time
from osm_runner import Runner # pip install osm-runner
import pandas as pd
from arcgis.features import FeatureLayer, GeoAccessor, GeoSeriesAccessor
from arcgis.geoenrichment import enrich
from arcgis import dissolve_boundaries
from arcgis.geometry import project
from arcgis.gis import GIS
# Organization Login
gis = GIS('http://www.arcgis.com', 'username', 'password')
###Output
_____no_output_____
###Markdown
Build Data Frames from Feature Layers & Extract Bounding BoxLet's assume we want to compare recycling amenities in OSM across 2 major cities. The first step will be to turn the boundaries for each city into a Data Frame via the GeoAccessor method from_layer(). Once we have a Data Frame for each city, we will use the Project operation of the Geometry Service in our GIS to get the envelope required to fetch data from Open Street Map.
###Code
dc_fl = FeatureLayer('https://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_DATA/Administrative_Other_Boundaries_WebMercator/MapServer/10')
dc_df = GeoAccessor.from_layer(dc_fl)
display(dc_df.head())
dc_extent = dc_df.spatial.full_extent
dc_coords = project([[dc_extent[0], dc_extent[1]], [dc_extent[2], dc_extent[3]]], in_sr=3857, out_sr=4326)
dc_bounds = f"({dc_coords[0]['y']},{dc_coords[0]['x']},{dc_coords[1]['y']},{dc_coords[1]['x']})"
pr_fl = FeatureLayer('https://carto2.apur.org/apur/rest/services/OPENDATA/QUARTIER/MapServer/0')
pr_df = GeoAccessor.from_layer(pr_fl)
display(pr_df.head())
pr_extent = pr_df.spatial.full_extent
pr_coords = project([[pr_extent[0], pr_extent[1]], [pr_extent[2], pr_extent[3]]], in_sr=2154, out_sr=4326)
pr_bounds = f"({pr_coords[0]['y']},{pr_coords[0]['x']},{pr_coords[1]['y']},{pr_coords[1]['x']})"
###Output
_____no_output_____
###Markdown
Overview of the area in Washington DC to be Collected
###Code
dc_map = gis.map('Washington DC')
dc_map.draw(dc_df.iloc[0].SHAPE)
dc_map.draw(dc_df.spatial.bbox)
display(dc_map)
print(f'Searching Area: {round(dc_df.spatial.bbox.area / 1000000)} Square Kilometers')
###Output
_____no_output_____
###Markdown
Overview of the Area in Paris to be Collected
###Code
pr_map = gis.map('Paris')
pr_dis = dissolve_boundaries(pr_fl).query().sdf.iloc[0].SHAPE
pr_map.draw(pr_dis)
pr_map.draw(pr_df.spatial.bbox)
display(pr_map)
print(f'Searching Area: {round(pr_df.spatial.bbox.area / 1000000)} Square Kilometers')
###Output
_____no_output_____
###Markdown
Collecting Demographics with [Geoenrichment](https://developers.arcgis.com/rest/geoenrichment/api-reference/geoenrichment-service-overview.htm)In addition to the useful GeoAccessor methods and properties we can access via a Data Frame, we may also pass a Data Frame to the [enrich()](https://developers.arcgis.com/python/api-reference/arcgis.geoenrichment.htmlenrich) method to learn something useful about the area we are studying. Even before we fetch data from OpenStreetMap, it should be clear from the differences in population size, population density, and the study area being assessed, that any comparison between DC and Paris would not be fair. At the end of the notebook we will set up a solution to help us find a city that might be more comparable to Paris or DC.
###Code
try:
dc_e = enrich(dc_df, gis=gis)
display(dc_e.head())
pr_e = enrich(pr_df, gis=gis)
display(pr_e.head())
print(f'DC Population: {dc_e.TOTPOP.sum()}')
print(f'DC Density: {int(round(dc_e.TOTPOP.sum() / (dc_df.spatial.area / 1000000)))} per Square Kilometer')
print(f'Paris Population: {pr_e.TOTPOP.sum()}')
print(f'Paris Density: {int(round(pr_e.TOTPOP.sum() / (pr_dis.area / 1000000)))} per Square Kilometer')
except RuntimeError:
print('Your GIS Connection Does Not Support Geoenrichment')
###Output
_____no_output_____
###Markdown
Fetch Open Street Map Data within Boundaries as Data Frame. For our purposes, we will only be looking for recycling amenities. We could have collected all amenities by simply passing the string 'amenity' as the third argument. Or we might have tried to find all of the known surveillance cameras in our extent by passing {'man_made': ['surveillance']}. Please consult the OSM Wiki for more information on what features you can extract. We are adding the following results to the first 2 maps we created above.
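For instance, the surveillance-camera query mentioned above would be the same `gen_osm_df` call with a different tag filter. An illustrative sketch (using the `runner` instance created in the next cell and the DC bounds computed earlier):
```python
# Illustrative: fetch surveillance cameras instead of recycling amenities
cams_df = runner.gen_osm_df('point', dc_bounds, {'man_made': ['surveillance']})
```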
###Code
runner = Runner()
dc_osm_df = runner.gen_osm_df('point', dc_bounds, {'amenity': ["recycling"]})
dc_osm_df.columns = dc_osm_df.columns.str.replace("recycling:", "rec")
dc_osm_df.SHAPE = dc_osm_df.geom
dc_osm_df.spatial.plot(map_widget=dc_map, renderer_type='u', col='recycling_type')
pr_osm_df = runner.gen_osm_df('point', pr_bounds, {'amenity': ["recycling"]})
pr_osm_df.columns = pr_osm_df.columns.str.replace("recycling:", "rec")
pr_osm_df.SHAPE = pr_osm_df.geom
pr_osm_df.spatial.plot(map_widget=pr_map, renderer_type='u', col='recycling_type')
display(dc_osm_df.head(n=1))
display(pr_osm_df.head(n=1))
###Output
_____no_output_____
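###Markdown
As a rough follow-up to the point above about DC and Paris not being directly comparable, the sketch below normalises the raw recycling counts by the enriched population totals and the areas computed earlier. It is only a minimal illustration and assumes the geoenrichment cell ran successfully, so `dc_e`, `pr_e` and `pr_dis` exist; the derived column names are made up for this example.
###Code
# Minimal sketch: put the two extracts on a more comparable footing.
# Assumes dc_e / pr_e (with a TOTPOP field) and pr_dis were created above.
comparison = pd.DataFrame({
    'recycling_points': [len(dc_osm_df), len(pr_osm_df)],
    'population': [dc_e.TOTPOP.sum(), pr_e.TOTPOP.sum()],
    'area_km2': [dc_df.spatial.area / 1000000, pr_dis.area / 1000000],
}, index=['DC', 'Paris'])
comparison['points_per_100k_people'] = (comparison.recycling_points / comparison.population * 100000).round(1)
comparison['points_per_km2'] = (comparison.recycling_points / comparison.area_km2).round(2)
display(comparison)
###Output
_____no_output_____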
###Markdown
General Attribute Comparison
We can use basic Data Frame methods to get a general idea of the differences between the recycling features in DC and Paris. While the completeness (more unique sources and operators) of the Paris data may partially be the result of there being many more records, we see a number of non-profit agencies operating these Parisian facilities and government efforts in Paris focused on documenting these features. An interesting question might be whether the large discrepancy (both in raw counts and specificity in the details) is the result of OSM simply being used more in Europe, or the result of a different set of values toward environmental stewardship.
###Code
# Values for DC
print(f'Total Records found for DC Data Frame: {len(dc_osm_df)}')
print(f'Total Attributes Defined in DC Data Frame: {len(list(dc_osm_df))}')
print('#' * 25)
print(f'Top 5 Operators ({dc_osm_df.operator.nunique()} Unique)')
print('#' * 25)
print(dc_osm_df.operator.value_counts()[:5].to_string())
print('#' * 25)
print(f'Top 5 Sources ({dc_osm_df.source.nunique()} Unique)')
print('#' * 25)
print(dc_osm_df.source.value_counts()[:5].to_string())
# Values for Paris
print(f'Total Records found for Paris Data Frame: {len(pr_osm_df)}')
print(f'Total Attributes Defined in Paris Data Frame: {len(list(pr_osm_df))}')
print('#' * 25)
print(f'Top 5 Operators ({pr_osm_df.operator.nunique()} Unique)')
print('#' * 25)
print(pr_osm_df.operator.value_counts()[:5].to_string())
print('#' * 25)
print(f'Top 5 Sources ({pr_osm_df.source.nunique()} Unique)')
print('#' * 25)
print(pr_osm_df.source.value_counts()[:5].to_string())
###Output
Total Records found for Paris Data Frame: 1265
Total Attributes Defined in Paris Data Frame: 82
#########################
Top 5 Operators (11 Unique)
#########################
Eco-Emballages 40
Le Relais 27
Ecotextile 4
WWF 3
Issy en Transition 2
#########################
Top 5 Sources (46 Unique)
#########################
survey 254
GPSO data.gouv.fr 2015-02 107
data.issy.com 15/06/2016 64
GPSO data.gouv.fr 2015-02;survey 54
cadastre-dgi-fr source : Direction Générale des Impôts - Cadastre. Mise à jour : 2011 13
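###Markdown
Before reading too much into the operator and source listings, it can help to see how completely each extract is tagged overall. The snippet below is a simple pandas sketch that reports the share of non-null values per column; the columns themselves depend entirely on which tags OSM contributors used in each city.
###Code
# Share of non-null values per attribute, as a rough completeness measure.
dc_completeness = dc_osm_df.notna().mean().sort_values(ascending=False)
pr_completeness = pr_osm_df.notna().mean().sort_values(ascending=False)
print('10 most complete attributes in the DC extract:')
print(dc_completeness.head(10).round(2).to_string())
print('10 most complete attributes in the Paris extract:')
print(pr_completeness.head(10).round(2).to_string())
###Output
_____no_output_____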
###Markdown
Using the Map to Drive Our Exploration
Perhaps we are interested in finding shops that do not have recycling options nearby. The following 2 cells can be used as a way to explore OSM interactively within the Jupyter Notebook. Locate a place, drag the map around, and then run the last cell to plot a heat map of recycling amenities and all of the shops within the extent of the map. With only a few lines of code, we have the beginning of a site selection tool that also exposes all of the analytical power of Python and the ArcGIS platform.
###Code
search_map = gis.map('Berlin', 12)
display(search_map)
###Output
_____no_output_____
###Markdown
Get Data Frame for Map Extent & Plot
###Code
extent = search_map.extent
coords = project([[extent['xmin'], extent['ymin']], [extent['xmax'], extent['ymax']]], in_sr=3857, out_sr=4326)
bounds = f"({coords[0]['y']},{coords[0]['x']},{coords[1]['y']},{coords[1]['x']})"
try:
runner = Runner()
shop_df = runner.gen_osm_df('point', bounds, {'shop': ['coffee', 'street_vendor', 'convenience']})
recy_df = runner.gen_osm_df('point', bounds, {'amenity': ['recycling']})
# Move Geometries to SHAPE Column to Support Plot
shop_df.SHAPE = shop_df.geom
recy_df.SHAPE = recy_df.geom
print(f'OSM Coffee Shops Features Within Current Map View: {len(shop_df)}')
print(f'OSM Recycling Features Within Current Map View: {len(recy_df)}')
recy_df.spatial.plot(map_widget=search_map, renderer_type='h')
shop_df.spatial.plot(map_widget=search_map)
except KeyError as e:
    # Catch the specific KeyError first; a bare `except Exception` above it would shadow this branch.
    print('Try Moving the Map Around & Running This Cell Again')
    print(e)
except Exception as e:
    print("We Likely Didn't Find Any Features in this Extent.")
    print(e)
###Output
OSM Coffee Shops Features Within Current Map View: 636
OSM Recycling Features Within Current Map View: 941
###Markdown
Export the Data to ArcGIS Online or Portal for Further Analysis
Finally, the GeoAccessor gives us a convenient method for pushing our Data Frame into a Hosted Feature Layer within Portal or ArcGIS Online so that we can do further analysis or share the information with other people in our organization. We could have also moved our results into a database with the to_featureclass() method.
###Code
recycling_hfl = recy_df.spatial.to_featurelayer(f'OSM_Recycling_{round(time.time())}', gis=gis, tags='OSM')
shops_hfl = shop_df.spatial.to_featurelayer(f'OSM_Shops_{round(time.time())}', gis=gis, tags='OSM')
display(recycling_hfl)
display(shops_hfl)
###Output
_____no_output_____
###Markdown
Exploring OpenStreetMap using Pandas and the Python API
This notebook is based around a simple tool named OSM Runner that queries the OpenStreetMap (OSM) Overpass API and returns a Spatial Data Frame. Using the Python API inside of a Jupyter Notebook, we can develop map-driven tools to explore OSM with the full capabilities of the ArcGIS platform at our disposal. Be sure to update the GIS connection information in the cell below before proceeding. This Notebook was written for an environment that does not have access to arcpy.
###Code
import time
from osm_runner import Runner # pip install osm-runner
import pandas as pd
from arcgis.features import FeatureLayer, GeoAccessor, GeoSeriesAccessor
from arcgis.geoenrichment import enrich
from arcgis import dissolve_boundaries
from arcgis.geometry import project
from arcgis.gis import GIS
# Organization Login
gis = GIS('http://www.arcgis.com', 'username', 'password')
###Output
_____no_output_____
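###Markdown
If you prefer not to hard-code credentials, the cell below sketches two alternative ways to connect. An anonymous connection is enough for browsing public content, but the geoenrichment and publishing steps later in this notebook require an authenticated account; the profile name shown is just a placeholder.
###Code
# Sketch of alternative connections (optional).
# An anonymous GIS works for public content only.
anon_gis = GIS()
print(anon_gis)
# If you have stored credentials in a local profile, you can reuse them:
# gis = GIS(profile='my_portal_profile')  # placeholder profile name
###Output
_____no_output_____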
###Markdown
Build Data Frames from Feature Layers & Extract Bounding Box
Let's assume we want to compare recycling amenities in OSM across 2 major cities. The first step will be to turn the boundaries for each city into a Data Frame via the GeoAccessor method from_layer(). Once we have a Data Frame for each city, we will use the Project operation of the Geometry Service in our GIS to get the envelope required to fetch data from OpenStreetMap.
###Code
dc_fl = FeatureLayer('https://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_DATA/Administrative_Other_Boundaries_WebMercator/MapServer/10')
dc_df = GeoAccessor.from_layer(dc_fl)
display(dc_df.head())
dc_extent = dc_df.spatial.full_extent
dc_coords = project([[dc_extent[0], dc_extent[1]], [dc_extent[2], dc_extent[3]]], in_sr=3857, out_sr=4326)
dc_bounds = f"({dc_coords[0]['y']},{dc_coords[0]['x']},{dc_coords[1]['y']},{dc_coords[1]['x']})"
pr_fl = FeatureLayer('https://carto2.apur.org/apur/rest/services/OPENDATA/QUARTIER/MapServer/0')
pr_df = GeoAccessor.from_layer(pr_fl)
display(pr_df.head())
pr_extent = pr_df.spatial.full_extent
pr_coords = project([[pr_extent[0], pr_extent[1]], [pr_extent[2], pr_extent[3]]], in_sr=2154, out_sr=4326)
pr_bounds = f"({pr_coords[0]['y']},{pr_coords[0]['x']},{pr_coords[1]['y']},{pr_coords[1]['x']})"
###Output
_____no_output_____
###Markdown
Overview of the area in Washington DC to be Collected
###Code
dc_map = gis.map('Washington DC')
dc_map.draw(dc_df.iloc[0].SHAPE)
dc_map.draw(dc_df.spatial.bbox)
display(dc_map)
print(f'Searching Area: {round(dc_df.spatial.bbox.area / 1000000)} Square Kilometers')
###Output
_____no_output_____
###Markdown
Overview of the Area in Paris to be Collected
###Code
pr_map = gis.map('Paris')
pr_dis = dissolve_boundaries(pr_fl).query().sdf.iloc[0].SHAPE
pr_map.draw(pr_dis)
pr_map.draw(pr_df.spatial.bbox)
display(pr_map)
print(f'Searching Area: {round(pr_df.spatial.bbox.area / 1000000)} Square Kilometers')
###Output
_____no_output_____
###Markdown
Collecting Demographics with [Geoenrichment](https://developers.arcgis.com/rest/geoenrichment/api-reference/geoenrichment-service-overview.htm)
In addition to the useful GeoAccessor methods and properties we can access via a Data Frame, we may also pass a Data Frame to the [enrich()](https://developers.arcgis.com/python/api-reference/arcgis.geoenrichment.html#enrich) method to learn something useful about the area we are studying. Even before we fetch data from OpenStreetMap, it should be clear from the differences in population size, population density, and the study area being assessed, that any comparison between DC and Paris would not be fair. At the end of the notebook we will set up a solution to help us find a city that might be more comparable to Paris or DC.
###Code
try:
dc_e = enrich(dc_df, gis=gis)
display(dc_e.head())
pr_e = enrich(pr_df, gis=gis)
display(pr_e.head())
print(f'DC Population: {dc_e.TOTPOP.sum()}')
print(f'DC Density: {int(round(dc_e.TOTPOP.sum() / (dc_df.spatial.area / 1000000)))} per Square Kilometer')
print(f'Paris Population: {pr_e.TOTPOP.sum()}')
print(f'Paris Density: {int(round(pr_e.TOTPOP.sum() / (pr_dis.area / 1000000)))} per Square Kilometer')
except RuntimeError:
print('Your GIS Connection Does Not Support Geoenrichment')
###Output
_____no_output_____
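###Markdown
The printed values above can also be collected into a small summary table, which makes the population and density gap between the two study areas easier to see at a glance. This is a minimal pandas sketch and assumes the enrich() calls succeeded, so `dc_e` and `pr_e` exist.
###Code
# Compact side-by-side summary of the enriched values printed above.
summary = pd.DataFrame({
    'population': [dc_e.TOTPOP.sum(), pr_e.TOTPOP.sum()],
    'area_km2': [dc_df.spatial.area / 1000000, pr_dis.area / 1000000],
}, index=['DC', 'Paris'])
summary['density_per_km2'] = (summary.population / summary.area_km2).round(0)
display(summary)
###Output
_____no_output_____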
###Markdown
Fetch OpenStreetMap Data within Boundaries as a Data Frame
For our purposes, we will only be looking for recycling amenities. We could have collected all amenities by simply passing the string 'amenity' as the third argument. Or we might have tried to find all of the known surveillance cameras in our extent by passing {'man_made': ['surveillance']}. Please consult the OSM Wiki for more information on what features you can extract. We are adding the following results to the first 2 maps we created above.
###Code
runner = Runner()
dc_osm_df = runner.gen_osm_df('point', dc_bounds, {'amenity': ["recycling"]})
dc_osm_df.columns = dc_osm_df.columns.str.replace("recycling:", "rec")
dc_osm_df.SHAPE = dc_osm_df.geom
dc_osm_df.spatial.plot(map_widget=dc_map, renderer_type='u', col='recycling_type')
pr_osm_df = runner.gen_osm_df('point', pr_bounds, {'amenity': ["recycling"]})
pr_osm_df.columns = pr_osm_df.columns.str.replace("recycling:", "rec")
pr_osm_df.SHAPE = pr_osm_df.geom
pr_osm_df.spatial.plot(map_widget=pr_map, renderer_type='u', col='recycling_type')
display(dc_osm_df.head(n=1))
display(pr_osm_df.head(n=1))
###Output
_____no_output_____
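###Markdown
Since the renderers above are keyed on the `recycling_type` tag, a quick value count shows how the two cities' recycling points are categorised. This is just a pandas sketch; the categories (for example container versus centre) depend on what contributors actually mapped.
###Code
# How the recycling amenities are categorised in each extract.
print('DC recycling_type counts:')
print(dc_osm_df.recycling_type.value_counts(dropna=False).to_string())
print('Paris recycling_type counts:')
print(pr_osm_df.recycling_type.value_counts(dropna=False).to_string())
###Output
_____no_output_____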
###Markdown
General Attribute Comparison
We can use basic Data Frame methods to get a general idea of the differences between the recycling features in DC and Paris. While the completeness (more unique sources and operators) of the Paris data may partially be the result of there being many more records, we see a number of non-profit agencies operating these Parisian facilities and government efforts in Paris focused on documenting these features. An interesting question might be whether the large discrepancy (both in raw counts and specificity in the details) is the result of OSM simply being used more in Europe, or the result of a different set of values toward environmental stewardship.
###Code
# Values for DC
print(f'Total Records found for DC Data Frame: {len(dc_osm_df)}')
print(f'Total Attributes Defined in DC Data Frame: {len(list(dc_osm_df))}')
print('#' * 25)
print(f'Top 5 Operators ({dc_osm_df.operator.nunique()} Unique)')
print('#' * 25)
print(dc_osm_df.operator.value_counts()[:5].to_string())
print('#' * 25)
print(f'Top 5 Sources ({dc_osm_df.source.nunique()} Unique)')
print('#' * 25)
print(dc_osm_df.source.value_counts()[:5].to_string())
# Values for Paris
print(f'Total Records found for Paris Data Frame: {len(pr_osm_df)}')
print(f'Total Attributes Defined in Paris Data Frame: {len(list(pr_osm_df))}')
print('#' * 25)
print(f'Top 5 Operators ({pr_osm_df.operator.nunique()} Unique)')
print('#' * 25)
print(pr_osm_df.operator.value_counts()[:5].to_string())
print('#' * 25)
print(f'Top 5 Sources ({pr_osm_df.source.nunique()} Unique)')
print('#' * 25)
print(pr_osm_df.source.value_counts()[:5].to_string())
###Output
Total Records found for Paris Data Frame: 1265
Total Attributes Defined in Paris Data Frame: 82
#########################
Top 5 Operators (11 Unique)
#########################
Eco-Emballages 40
Le Relais 27
Ecotextile 4
WWF 3
Issy en Transition 2
#########################
Top 5 Sources (46 Unique)
#########################
survey 254
GPSO data.gouv.fr 2015-02 107
data.issy.com 15/06/2016 64
GPSO data.gouv.fr 2015-02;survey 54
cadastre-dgi-fr source : Direction Générale des Impôts - Cadastre. Mise à jour : 2011 13
###Markdown
Using the Map to Drive Our Exploration
Perhaps we are interested in finding shops that do not have recycling options nearby. The following 2 cells can be used as a way to explore OSM interactively within the Jupyter Notebook. Locate a place, drag the map around, and then run the last cell to plot a heat map of recycling amenities and all of the shops within the extent of the map. With only a few lines of code, we have the beginning of a site selection tool that also exposes all of the analytical power of Python and the ArcGIS platform.
###Code
search_map = gis.map('Berlin', 12)
display(search_map)
###Output
_____no_output_____
###Markdown
Get Data Frame for Map Extent & Plot
###Code
extent = search_map.extent
coords = project([[extent['xmin'], extent['ymin']], [extent['xmax'], extent['ymax']]], in_sr=3857, out_sr=4326)
bounds = f"({coords[0]['y']},{coords[0]['x']},{coords[1]['y']},{coords[1]['x']})"
try:
runner = Runner()
shop_df = runner.gen_osm_df('point', bounds, {'shop': ['coffee', 'street_vendor', 'convenience']})
recy_df = runner.gen_osm_df('point', bounds, {'amenity': ['recycling']})
# Move Geometries to SHAPE Column to Support Plot
shop_df.SHAPE = shop_df.geom
recy_df.SHAPE = recy_df.geom
print(f'OSM Coffee Shops Features Within Current Map View: {len(shop_df)}')
print(f'OSM Recycling Features Within Current Map View: {len(recy_df)}')
recy_df.spatial.plot(map_widget=search_map, renderer_type='h')
shop_df.spatial.plot(map_widget=search_map)
except KeyError as e:
    # Catch the specific KeyError first; a bare `except Exception` above it would shadow this branch.
    print('Try Moving the Map Around & Running This Cell Again')
    print(e)
except Exception as e:
    print("We Likely Didn't Find Any Features in this Extent.")
    print(e)
###Output
OSM Coffee Shops Features Within Current Map View: 636
OSM Recycling Features Within Current Map View: 941
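###Markdown
To push the "shops without recycling nearby" idea one step further, the sketch below computes the distance from every shop to its nearest recycling point with a simple haversine formula. It assumes each point geometry in the `geom` column exposes longitude/latitude under the 'x'/'y' keys in WGS84, which is how the OSM extracts above are requested; the 250 m threshold is purely illustrative.
###Code
import numpy as np
# Rough nearest-recycling-point check; the geometry access is an assumption (see note above).
def to_lonlat(df):
    return np.array([[g['x'], g['y']] for g in df.geom])
def haversine_m(lon1, lat1, lon2, lat2):
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * np.arcsin(np.sqrt(a))
shops = to_lonlat(shop_df)
recy = to_lonlat(recy_df)
# Distance from every shop to every recycling point (fine at this scale).
dists = haversine_m(shops[:, None, 0], shops[:, None, 1], recy[None, :, 0], recy[None, :, 1])
nearest = dists.min(axis=1)
threshold = 250  # metres, purely illustrative
print(f'Shops with no recycling amenity within {threshold} m: {(nearest > threshold).sum()} of {len(nearest)}')
###Output
_____no_output_____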
###Markdown
Export the Data to ArcGIS Online or Portal for Further Analysis
Finally, the GeoAccessor gives us a convenient method for pushing our Data Frame into a Hosted Feature Layer within Portal or ArcGIS Online so that we can do further analysis or share the information with other people in our organization. We could have also moved our results into a database with the to_featureclass() method.
###Code
recycling_hfl = recy_df.spatial.to_featurelayer(f'OSM_Recycling_{round(time.time())}', gis=gis, tags='OSM')
shops_hfl = shop_df.spatial.to_featurelayer(f'OSM_Shops_{round(time.time())}', gis=gis, tags='OSM')
display(recycling_hfl)
display(shops_hfl)
###Output
_____no_output_____ |
11 - Introduction to Python/5_Conditional Statements/3_Else if, for Brief - ELIF (11:16)/Else If, for Brief - Elif - Solution_Py3.ipynb | ###Markdown
Else if, for Brief - ELIF
*Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*
Assign 200 to x. Create the following piece of code: If x > 200, print out "Big"; If x > 100 and x <= 200, print out "Average"; and If x <= 100, print out "Small". Use the If, Elif, and Else keywords in your code. Change the initial value of x to see how your output will vary.
###Code
x = 200
if x > 200:
print ("Big")
elif x > 100 and x <= 200:
print ("Average")
else:
print ("Small")
###Output
Average
###Markdown
Keep the first two conditions of the previous code. Add a new ELIF statement, so that, eventually, the program prints "Small" if x >= 0 and x <= 100, and "Negative" if x < 0. Let x carry the value of 50 and then of -50 to check if your code is correct.
###Code
x = 200
if x > 200:
print ("Big")
elif x > 100 and x <= 200:
print ("Average")
elif x >= 0 and x <= 100:
print ("Small")
else:
print ("Negative")
###Output
Average
|
examples/sql_test.ipynb | ###Markdown
SQL Examples
Let us now experiment with SQL databases. CAPlot supports every SQL database that **SQLAlchemy** supports. As an example, we're going to work with a **SQLite** database that contains two tables: one with the data from `variants.tsv.gz` and another with the data from `samples.tsv.gz`. To specify any SQL database as the source, we have to use its URL, which is more or less the same across DBMSs, albeit with a different prefix. Since our database file is stored at `data/db.sqlite`, we have to enter `sqlite:///data/db.sqlite` as the source. Another thing to note is that `loadQuery` is mandatory when you are working with a SQL database, since CAPlot needs a single table.
Setup
###Code
from bokeh.io import output_notebook
output_notebook()
import caplot
###Output
_____no_output_____
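###Markdown
Before handing the URL to CAPlot, it can be useful to confirm that the connection string and query work at all. The cell below is a small sanity check done directly with SQLAlchemy and pandas, assuming the `samples` table described above exists in `data/db.sqlite`; the commented URLs only illustrate how other back-ends differ by prefix.
###Code
import pandas as pd
import sqlalchemy
# Quick sanity check of the connection URL and a load query.
engine = sqlalchemy.create_engine('sqlite:///data/db.sqlite')
print(pd.read_sql('SELECT * FROM samples LIMIT 5', engine))
# Other back-ends only change the URL prefix, for example:
#   'postgresql://user:password@host:5432/dbname'
#   'mysql+pymysql://user:password@host:3306/dbname'
###Output
_____no_output_____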
###Markdown
Examples
PCA
###Code
plot = caplot.PCA(source='sqlite:///data/db.sqlite', loadQuery='SELECT * FROM samples')
plot.subplots = ['pcaMAF-scores_1', 'pcaMAF-scores_2']
plot.coloringColumn = 'pheno-superpopulation'
plot.coloringStyle = 'Categorical'
plot.coloringPalette = 'Category10'
plot.Show()
###Output
_____no_output_____
###Markdown
Manhattan
###Code
plot = caplot.Manhattan(source='sqlite:///data/db.sqlite', loadQuery='SELECT * FROM variants')
plot.contig = 'locus-contig'
plot.position = 'locus-position'
plot.pvalue = 'LogReg3-p_value'
plot.filter = 'SELECT * FROM data WHERE "maf">0.2'
plot.Show()
###Output
_____no_output_____ |
_pages/AI/TensorFlow/src/UDSL-DeepLearning/day5_RNN.ipynb | ###Markdown
Preprocess data : names
###Code
import csv
import random

import numpy as np
import tensorflow as tf

data_name = set()  # set of unique names
max_len = 0  # maximum name length
with open('../data/woman_name_dataset.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line_count = 0
    for row in csv_reader:
        if line_count == 0:
            line_count += 1  # the first row holds the column names, so skip it
        else:
            tmp_name = row[1].split()[0]  # split() breaks on whitespace and returns a list
            data_name.add(row[1].split()[0])
            if len(tmp_name) > max_len:
                max_len = len(tmp_name)  # track the longest name seen in the data
data_name = list(data_name)  # a list is easier to work with than a set
print('name total : {}'.format(len(data_name)))
print('maximum name length : {}'.format(max_len))
###Output
name total : 1219
maximum name length : 11
###Markdown
Preprocess data : Characters
###Code
# preprocess the characters so each one can be mapped to a one-hot vector
chars = set()  # will hold {a, b, ..., z}
for name in data_name:
    for char in name:
        chars.add(char)
chars = list(np.sort(list(chars)))  # np.sort returns a numpy array, so convert back to a list
print('{} alphabets : '.format(len(chars)), chars)
###Output
26 alphabets : ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
###Markdown
Define function to convert name to onehot
###Code
def name_to_onehot(names, chars, max_len):
    # len(names) : batch size
    # the +1 on the last axis adds an end-of-name marker, matching the placeholder size
    onehot = np.zeros((len(names), max_len, len(chars)+1))
    for idx_1, name in enumerate(names):  # idx_1 indexes the names in the batch
        for idx_2 in range(max_len):  # idx_2 indexes the character positions
            if idx_2 < len(name):
                idx_3 = chars.index(name[idx_2])  # position of this character in the alphabet
                onehot[idx_1, idx_2, idx_3] = 1
            else:
                onehot[idx_1, idx_2, -1] = 1  # the last slot marks the end of the name
    return onehot

onehot_ex = name_to_onehot(['jane'], chars, max_len)
###Output
_____no_output_____
###Markdown
Define dimension and Placeholders
###Code
num_data = len(data_name)
seq_len = max_len - 1
dim_data = len(chars) + 1  # size of the one-hot vector
ph_input_name = tf.placeholder(dtype=tf.float32, shape=[None, seq_len, dim_data])  # batch size, maximum sequence length, one-hot length
ph_output_name = tf.placeholder(dtype=tf.float32, shape=[None, seq_len, dim_data])
###Output
_____no_output_____
###Markdown
Define weight variables
###Code
dim_rnn_cell = 128  # size of the LSTM hidden state
stddev = 0.02
with tf.variable_scope('weights'):  # group the variables under a named scope instead of using bare names like w1, w2, w3
W_i = tf.get_variable('W_i', dtype=tf.float32,
initializer=tf.random_normal([dim_data, dim_rnn_cell],
stddev = stddev))
# data : 1 by dim_data
# W_i : dim_data by dim_rnn_cell
b_i = tf.get_variable('b_i', dtype=tf.float32,
initializer=tf.random_normal([dim_rnn_cell],
stddev = stddev))
# b_i : 1 by dim_rnn_cell
    # h = data @ W_i + b_i : (batch, dim_data) x (dim_data, dim_rnn_cell) + (1, dim_rnn_cell)
# h : 1 by dim_rnn_cell
W_o = tf.get_variable('W_o', dtype=tf.float32,
initializer=tf.random_normal([dim_rnn_cell, dim_data],
stddev = stddev))
b_o = tf.get_variable('b_o', dtype=tf.float32,
initializer=tf.random_normal([dim_data],
stddev = stddev))
# LSTM : batch by dim_rnn_cell : batch by dim_data
# I want one hot encoding vector!
# LSTM CELL * W_o(dim_rnn_cell by dim_data) + b_o(dim_data)
###Output
_____no_output_____
###Markdown
Define RNN for training
###Code
#n_dim_rnn_cell : LSTM Hidden Cell Size
def name_rnn_train(_x, _seq_len, _dim_data, _dim_rnn_cell):
# _x : ph_input_name : Batch(0), seq_len(1), dim_data(2)
_x_split = tf.transpose(_x, [1, 0, 2]) # seq_len, batch, dim_data
_x_split = tf.reshape(_x_split, [-1, _dim_data])
# x_split : seq_len*batch by dim_data
# use tf.AUTO_REUSE,
# Load Variables
with tf.variable_scope('weights', reuse= tf.AUTO_REUSE):
_W_i = tf.get_variable('W_i')
_b_i = tf.get_variable('b_i')
_W_o = tf.get_variable('W_o')
_b_o = tf.get_variable('b_o')
# Linear Operation for Input
    _h_split = tf.matmul(_x_split, _W_i) + _b_i  # use the bias loaded from the scope, not the global b_i
_h_split = tf.split(_h_split, _seq_len, axis=0)
# Define LSTM Cell && RNN
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
_rnn_cell = tf.nn.rnn_cell.BasicLSTMCell(_dim_rnn_cell)
_output, _state = tf.nn.static_rnn(_rnn_cell, _h_split, dtype=tf.float32)
_total_out = []
for _tmp_out in _output:
_tmp_out = tf.matmul(_tmp_out, _W_o) + _b_o
_total_out.append(_tmp_out)
return tf.transpose(tf.stack(_total_out), [1, 0, 2])
###Output
_____no_output_____
###Markdown
Define result graph
###Code
result_name = name_rnn_train(ph_input_name, seq_len, dim_data, dim_rnn_cell)
print('result_shape :', result_name.shape)
###Output
result_shape : (?, 10, 27)
###Markdown
Define Loss function
###Code
def name_loss(_gt_name, _result_name, _seq_len):
total_loss = 0
    # _result_name : (batch, seq_len, dim_data)
    # take a (batch, dim_data) slice per time step and accumulate the loss
for i in range(_seq_len):
gt_char = _gt_name[:, i, :] # batch, dim_data
result_char = _result_name[:, i, :] # batch, dim_data
tmp_loss = tf.nn.softmax_cross_entropy_with_logits(labels=gt_char,
logits=result_char)
tmp_loss = tf.reduce_mean(tmp_loss)
total_loss += tmp_loss
return total_loss
rnn_loss = name_loss(ph_output_name, result_name, seq_len)
###Output
_____no_output_____
###Markdown
Define Optimizer and Get Ready
###Code
learning_rate = 1e-3
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(rnn_loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver(var_list=tf.trainable_variables())  # save every trainable variable
print('Now ready to start the session')
###Output
Now ready to start the session
###Markdown
Session Run
###Code
max_epoch = 300
batch_size = 64
num_batch = int(num_data/batch_size)
with tf.Session() as sess:
sess.run(init)
for _epoch in range(max_epoch):
random.seed(_epoch)
batch_shuffle = list(range(num_data))
random.shuffle(batch_shuffle)
total_train_loss = 0
for i in range(num_batch):
batch_idx = [batch_shuffle[idx] for idx in range(i*batch_size,
(i+1)*batch_size)]
batch_names = [name for name in data_name if data_name.index(name) in batch_idx]
batch_onehots = name_to_onehot(batch_names, chars, max_len)
input_onehot = batch_onehots[:, 0:(max_len-1), :] # a b y s
output_onehot = batch_onehots[:, 1:max_len, :] # b y s
train_feed_dict = {ph_input_name: input_onehot,
ph_output_name: output_onehot}
sess.run(optimizer, feed_dict = train_feed_dict)
curr_loss = sess.run(rnn_loss, feed_dict=train_feed_dict)
total_train_loss += curr_loss/num_batch
print('epoch : {}, train_loss : {}'.format(_epoch+1, total_train_loss))
model_save_path = saver.save(sess, './RNN_model/model.ckpt', global_step=_epoch+1)
print('Model saved in file: {}'.format(model_save_path))
###Output
epoch : 1, train_loss : 28.88234760886744
Model saved in file: ./RNN_model/model.ckpt-1
epoch : 2, train_loss : 18.6269992025275
Model saved in file: ./RNN_model/model.ckpt-2
epoch : 3, train_loss : 17.659534353958936
Model saved in file: ./RNN_model/model.ckpt-3
epoch : 4, train_loss : 17.120001742714333
Model saved in file: ./RNN_model/model.ckpt-4
epoch : 5, train_loss : 16.72336307324861
Model saved in file: ./RNN_model/model.ckpt-5
epoch : 6, train_loss : 16.349830225894326
Model saved in file: ./RNN_model/model.ckpt-6
epoch : 7, train_loss : 16.048421508387516
Model saved in file: ./RNN_model/model.ckpt-7
epoch : 8, train_loss : 15.76255958958676
Model saved in file: ./RNN_model/model.ckpt-8
epoch : 9, train_loss : 15.47641242177863
Model saved in file: ./RNN_model/model.ckpt-9
epoch : 10, train_loss : 15.2146880501195
Model saved in file: ./RNN_model/model.ckpt-10
epoch : 11, train_loss : 14.931576126500183
Model saved in file: ./RNN_model/model.ckpt-11
epoch : 12, train_loss : 14.657982173718906
Model saved in file: ./RNN_model/model.ckpt-12
epoch : 13, train_loss : 14.41134151659514
Model saved in file: ./RNN_model/model.ckpt-13
epoch : 14, train_loss : 14.167578496431048
Model saved in file: ./RNN_model/model.ckpt-14
epoch : 15, train_loss : 13.912194302207544
Model saved in file: ./RNN_model/model.ckpt-15
epoch : 16, train_loss : 13.690634526704486
Model saved in file: ./RNN_model/model.ckpt-16
epoch : 17, train_loss : 13.52023942847001
Model saved in file: ./RNN_model/model.ckpt-17
epoch : 18, train_loss : 13.393568540874282
Model saved in file: ./RNN_model/model.ckpt-18
epoch : 19, train_loss : 13.29529074618691
Model saved in file: ./RNN_model/model.ckpt-19
epoch : 20, train_loss : 13.192663795069645
Model saved in file: ./RNN_model/model.ckpt-20
epoch : 21, train_loss : 13.123580731843646
Model saved in file: ./RNN_model/model.ckpt-21
epoch : 22, train_loss : 13.063399666234067
Model saved in file: ./RNN_model/model.ckpt-22
epoch : 23, train_loss : 12.950805162128649
Model saved in file: ./RNN_model/model.ckpt-23
epoch : 24, train_loss : 12.87385458695261
Model saved in file: ./RNN_model/model.ckpt-24
epoch : 25, train_loss : 12.826507467972608
Model saved in file: ./RNN_model/model.ckpt-25
epoch : 26, train_loss : 12.725944117495889
Model saved in file: ./RNN_model/model.ckpt-26
epoch : 27, train_loss : 12.661800986842104
Model saved in file: ./RNN_model/model.ckpt-27
epoch : 28, train_loss : 12.572609901428223
Model saved in file: ./RNN_model/model.ckpt-28
epoch : 29, train_loss : 12.502071481002002
Model saved in file: ./RNN_model/model.ckpt-29
epoch : 30, train_loss : 12.447570449427555
Model saved in file: ./RNN_model/model.ckpt-30
epoch : 31, train_loss : 12.354412229437578
Model saved in file: ./RNN_model/model.ckpt-31
epoch : 32, train_loss : 12.265427639609888
Model saved in file: ./RNN_model/model.ckpt-32
epoch : 33, train_loss : 12.214245444849919
Model saved in file: ./RNN_model/model.ckpt-33
epoch : 34, train_loss : 12.128501239575838
Model saved in file: ./RNN_model/model.ckpt-34
epoch : 35, train_loss : 12.044278195029811
Model saved in file: ./RNN_model/model.ckpt-35
epoch : 36, train_loss : 11.96508096393786
Model saved in file: ./RNN_model/model.ckpt-36
epoch : 37, train_loss : 11.883403175755552
Model saved in file: ./RNN_model/model.ckpt-37
epoch : 38, train_loss : 11.806849931415758
Model saved in file: ./RNN_model/model.ckpt-38
epoch : 39, train_loss : 11.722761907075583
Model saved in file: ./RNN_model/model.ckpt-39
epoch : 40, train_loss : 11.627640272441663
Model saved in file: ./RNN_model/model.ckpt-40
epoch : 41, train_loss : 11.536088792901289
Model saved in file: ./RNN_model/model.ckpt-41
epoch : 42, train_loss : 11.440069851122406
Model saved in file: ./RNN_model/model.ckpt-42
epoch : 43, train_loss : 11.352227512158848
Model saved in file: ./RNN_model/model.ckpt-43
epoch : 44, train_loss : 11.264506340026854
Model saved in file: ./RNN_model/model.ckpt-44
epoch : 45, train_loss : 11.205529815272282
Model saved in file: ./RNN_model/model.ckpt-45
epoch : 46, train_loss : 11.084731553730213
Model saved in file: ./RNN_model/model.ckpt-46
epoch : 47, train_loss : 10.989706089622096
Model saved in file: ./RNN_model/model.ckpt-47
epoch : 48, train_loss : 10.917656195791142
Model saved in file: ./RNN_model/model.ckpt-48
epoch : 49, train_loss : 10.81281667006643
Model saved in file: ./RNN_model/model.ckpt-49
epoch : 50, train_loss : 10.74394376654374
Model saved in file: ./RNN_model/model.ckpt-50
epoch : 51, train_loss : 10.660006472938939
Model saved in file: ./RNN_model/model.ckpt-51
epoch : 52, train_loss : 10.583453278792533
Model saved in file: ./RNN_model/model.ckpt-52
epoch : 53, train_loss : 10.498629620200708
Model saved in file: ./RNN_model/model.ckpt-53
epoch : 54, train_loss : 10.401640390094958
Model saved in file: ./RNN_model/model.ckpt-54
epoch : 55, train_loss : 10.333252304478698
Model saved in file: ./RNN_model/model.ckpt-55
epoch : 56, train_loss : 10.261377535368265
Model saved in file: ./RNN_model/model.ckpt-56
epoch : 57, train_loss : 10.176645479704204
Model saved in file: ./RNN_model/model.ckpt-57
epoch : 58, train_loss : 10.083495190269067
Model saved in file: ./RNN_model/model.ckpt-58
epoch : 59, train_loss : 10.010421200802451
Model saved in file: ./RNN_model/model.ckpt-59
epoch : 60, train_loss : 9.93174056002968
Model saved in file: ./RNN_model/model.ckpt-60
epoch : 61, train_loss : 9.864012216266833
Model saved in file: ./RNN_model/model.ckpt-61
epoch : 62, train_loss : 9.794260878311963
Model saved in file: ./RNN_model/model.ckpt-62
epoch : 63, train_loss : 9.707831131784541
Model saved in file: ./RNN_model/model.ckpt-63
epoch : 64, train_loss : 9.618537953025418
Model saved in file: ./RNN_model/model.ckpt-64
epoch : 65, train_loss : 9.56138886903462
Model saved in file: ./RNN_model/model.ckpt-65
epoch : 66, train_loss : 9.48062465065404
Model saved in file: ./RNN_model/model.ckpt-66
epoch : 67, train_loss : 9.402303243938245
Model saved in file: ./RNN_model/model.ckpt-67
epoch : 68, train_loss : 9.33735169862446
Model saved in file: ./RNN_model/model.ckpt-68
epoch : 69, train_loss : 9.265261097958215
Model saved in file: ./RNN_model/model.ckpt-69
epoch : 70, train_loss : 9.1894420322619
Model saved in file: ./RNN_model/model.ckpt-70
epoch : 71, train_loss : 9.12452963778847
Model saved in file: ./RNN_model/model.ckpt-71
epoch : 72, train_loss : 9.049896691974842
Model saved in file: ./RNN_model/model.ckpt-72
epoch : 73, train_loss : 9.001408325998408
Model saved in file: ./RNN_model/model.ckpt-73
epoch : 74, train_loss : 8.913863884775262
Model saved in file: ./RNN_model/model.ckpt-74
epoch : 75, train_loss : 8.860353871395715
Model saved in file: ./RNN_model/model.ckpt-75
epoch : 76, train_loss : 8.799816959782653
Model saved in file: ./RNN_model/model.ckpt-76
epoch : 77, train_loss : 8.724560812899941
Model saved in file: ./RNN_model/model.ckpt-77
epoch : 78, train_loss : 8.650997161865234
Model saved in file: ./RNN_model/model.ckpt-78
epoch : 79, train_loss : 8.571906190169486
Model saved in file: ./RNN_model/model.ckpt-79
epoch : 80, train_loss : 8.521329503310355
Model saved in file: ./RNN_model/model.ckpt-80
epoch : 81, train_loss : 8.446852081700376
Model saved in file: ./RNN_model/model.ckpt-81
epoch : 82, train_loss : 8.397269901476408
Model saved in file: ./RNN_model/model.ckpt-82
epoch : 83, train_loss : 8.32365083694458
Model saved in file: ./RNN_model/model.ckpt-83
epoch : 84, train_loss : 8.27613017433568
Model saved in file: ./RNN_model/model.ckpt-84
epoch : 85, train_loss : 8.19275464509663
Model saved in file: ./RNN_model/model.ckpt-85
epoch : 86, train_loss : 8.130727391493947
Model saved in file: ./RNN_model/model.ckpt-86
epoch : 87, train_loss : 8.085842609405518
Model saved in file: ./RNN_model/model.ckpt-87
epoch : 88, train_loss : 8.007001073736893
Model saved in file: ./RNN_model/model.ckpt-88
epoch : 89, train_loss : 7.954338876824628
Model saved in file: ./RNN_model/model.ckpt-89
epoch : 90, train_loss : 7.886636909685636
Model saved in file: ./RNN_model/model.ckpt-90
epoch : 91, train_loss : 7.826610364412005
Model saved in file: ./RNN_model/model.ckpt-91
epoch : 92, train_loss : 7.76870484101145
Model saved in file: ./RNN_model/model.ckpt-92
epoch : 93, train_loss : 7.719228568829989
Model saved in file: ./RNN_model/model.ckpt-93
epoch : 94, train_loss : 7.667482225518478
Model saved in file: ./RNN_model/model.ckpt-94
epoch : 95, train_loss : 7.609826213435122
Model saved in file: ./RNN_model/model.ckpt-95
epoch : 96, train_loss : 7.5507249330219475
Model saved in file: ./RNN_model/model.ckpt-96
epoch : 97, train_loss : 7.488030232881245
Model saved in file: ./RNN_model/model.ckpt-97
epoch : 98, train_loss : 7.442277858131811
Model saved in file: ./RNN_model/model.ckpt-98
epoch : 99, train_loss : 7.383285120913857
Model saved in file: ./RNN_model/model.ckpt-99
epoch : 100, train_loss : 7.345773897672954
Model saved in file: ./RNN_model/model.ckpt-100
epoch : 101, train_loss : 7.2901706193622795
Model saved in file: ./RNN_model/model.ckpt-101
epoch : 102, train_loss : 7.237842760587993
Model saved in file: ./RNN_model/model.ckpt-102
epoch : 103, train_loss : 7.197460626301012
Model saved in file: ./RNN_model/model.ckpt-103
epoch : 104, train_loss : 7.134299102582428
Model saved in file: ./RNN_model/model.ckpt-104
epoch : 105, train_loss : 7.080920846838701
Model saved in file: ./RNN_model/model.ckpt-105
epoch : 106, train_loss : 7.030462039144415
Model saved in file: ./RNN_model/model.ckpt-106
epoch : 107, train_loss : 6.988008800305817
Model saved in file: ./RNN_model/model.ckpt-107
epoch : 108, train_loss : 6.946906290556255
Model saved in file: ./RNN_model/model.ckpt-108
epoch : 109, train_loss : 6.891653788717168
Model saved in file: ./RNN_model/model.ckpt-109
epoch : 110, train_loss : 6.858407296632466
Model saved in file: ./RNN_model/model.ckpt-110
epoch : 111, train_loss : 6.8052655772158985
Model saved in file: ./RNN_model/model.ckpt-111
epoch : 112, train_loss : 6.7650082236842115
Model saved in file: ./RNN_model/model.ckpt-112
epoch : 113, train_loss : 6.71976493534289
Model saved in file: ./RNN_model/model.ckpt-113
epoch : 114, train_loss : 6.6692928765949455
Model saved in file: ./RNN_model/model.ckpt-114
epoch : 115, train_loss : 6.639791915291233
Model saved in file: ./RNN_model/model.ckpt-115
epoch : 116, train_loss : 6.605656648937024
Model saved in file: ./RNN_model/model.ckpt-116
epoch : 117, train_loss : 6.5564421603554175
Model saved in file: ./RNN_model/model.ckpt-117
epoch : 118, train_loss : 6.510971797140021
Model saved in file: ./RNN_model/model.ckpt-118
epoch : 119, train_loss : 6.47663618388929
Model saved in file: ./RNN_model/model.ckpt-119
epoch : 120, train_loss : 6.435058894910311
Model saved in file: ./RNN_model/model.ckpt-120
epoch : 121, train_loss : 6.3965314312985075
Model saved in file: ./RNN_model/model.ckpt-121
epoch : 122, train_loss : 6.362196244691547
Model saved in file: ./RNN_model/model.ckpt-122
epoch : 123, train_loss : 6.316203242854068
Model saved in file: ./RNN_model/model.ckpt-123
epoch : 124, train_loss : 6.286868496945028
Model saved in file: ./RNN_model/model.ckpt-124
epoch : 125, train_loss : 6.242510067789178
Model saved in file: ./RNN_model/model.ckpt-125
epoch : 126, train_loss : 6.219652803320635
Model saved in file: ./RNN_model/model.ckpt-126
epoch : 127, train_loss : 6.190161880693937
Model saved in file: ./RNN_model/model.ckpt-127
epoch : 128, train_loss : 6.1480917177702255
Model saved in file: ./RNN_model/model.ckpt-128
epoch : 129, train_loss : 6.111247037586413
Model saved in file: ./RNN_model/model.ckpt-129
epoch : 130, train_loss : 6.081362674110815
Model saved in file: ./RNN_model/model.ckpt-130
epoch : 131, train_loss : 6.054022161584151
Model saved in file: ./RNN_model/model.ckpt-131
epoch : 132, train_loss : 6.02355758767379
Model saved in file: ./RNN_model/model.ckpt-132
epoch : 133, train_loss : 5.991408373180189
Model saved in file: ./RNN_model/model.ckpt-133
epoch : 134, train_loss : 5.967622505991082
Model saved in file: ./RNN_model/model.ckpt-134
epoch : 135, train_loss : 5.959697271648206
Model saved in file: ./RNN_model/model.ckpt-135
epoch : 136, train_loss : 5.914266912560715
Model saved in file: ./RNN_model/model.ckpt-136
epoch : 137, train_loss : 5.873075485229493
Model saved in file: ./RNN_model/model.ckpt-137
epoch : 138, train_loss : 5.851713205638684
Model saved in file: ./RNN_model/model.ckpt-138
epoch : 139, train_loss : 5.8298776777167065
Model saved in file: ./RNN_model/model.ckpt-139
epoch : 140, train_loss : 5.794826181311357
Model saved in file: ./RNN_model/model.ckpt-140
epoch : 141, train_loss : 5.777587463981226
Model saved in file: ./RNN_model/model.ckpt-141
epoch : 142, train_loss : 5.746028498599403
Model saved in file: ./RNN_model/model.ckpt-142
epoch : 143, train_loss : 5.725667200590435
Model saved in file: ./RNN_model/model.ckpt-143
epoch : 144, train_loss : 5.704853760568719
Model saved in file: ./RNN_model/model.ckpt-144
epoch : 145, train_loss : 5.681902684663472
Model saved in file: ./RNN_model/model.ckpt-145
epoch : 146, train_loss : 5.646559037660298
Model saved in file: ./RNN_model/model.ckpt-146
epoch : 147, train_loss : 5.6308164847524536
Model saved in file: ./RNN_model/model.ckpt-147
epoch : 148, train_loss : 5.6113778917413
Model saved in file: ./RNN_model/model.ckpt-148
epoch : 149, train_loss : 5.5798296175505
Model saved in file: ./RNN_model/model.ckpt-149
epoch : 150, train_loss : 5.56727459556178
Model saved in file: ./RNN_model/model.ckpt-150
epoch : 151, train_loss : 5.546431240282561
Model saved in file: ./RNN_model/model.ckpt-151
epoch : 152, train_loss : 5.523883844676769
Model saved in file: ./RNN_model/model.ckpt-152
epoch : 153, train_loss : 5.506038590481407
Model saved in file: ./RNN_model/model.ckpt-153
epoch : 154, train_loss : 5.491564725574694
Model saved in file: ./RNN_model/model.ckpt-154
epoch : 155, train_loss : 5.467124738191302
Model saved in file: ./RNN_model/model.ckpt-155
epoch : 156, train_loss : 5.443775453065571
Model saved in file: ./RNN_model/model.ckpt-156
epoch : 157, train_loss : 5.421084981215627
Model saved in file: ./RNN_model/model.ckpt-157
epoch : 158, train_loss : 5.423319063688581
Model saved in file: ./RNN_model/model.ckpt-158
epoch : 159, train_loss : 5.397743425871197
Model saved in file: ./RNN_model/model.ckpt-159
epoch : 160, train_loss : 5.383857325503699
Model saved in file: ./RNN_model/model.ckpt-160
epoch : 161, train_loss : 5.374525647414358
Model saved in file: ./RNN_model/model.ckpt-161
epoch : 162, train_loss : 5.352555199673301
Model saved in file: ./RNN_model/model.ckpt-162
epoch : 163, train_loss : 5.338991014580977
Model saved in file: ./RNN_model/model.ckpt-163
epoch : 164, train_loss : 5.317482973399915
Model saved in file: ./RNN_model/model.ckpt-164
epoch : 165, train_loss : 5.300774674666555
Model saved in file: ./RNN_model/model.ckpt-165
epoch : 166, train_loss : 5.278806636207983
Model saved in file: ./RNN_model/model.ckpt-166
epoch : 167, train_loss : 5.2593744177567325
Model saved in file: ./RNN_model/model.ckpt-167
epoch : 168, train_loss : 5.264723351127222
Model saved in file: ./RNN_model/model.ckpt-168
epoch : 169, train_loss : 5.239978639703047
Model saved in file: ./RNN_model/model.ckpt-169
epoch : 170, train_loss : 5.222865606609144
Model saved in file: ./RNN_model/model.ckpt-170
epoch : 171, train_loss : 5.198700327622263
Model saved in file: ./RNN_model/model.ckpt-171
epoch : 172, train_loss : 5.186485014463727
Model saved in file: ./RNN_model/model.ckpt-172
epoch : 173, train_loss : 5.177536286805806
Model saved in file: ./RNN_model/model.ckpt-173
epoch : 174, train_loss : 5.170638912602475
Model saved in file: ./RNN_model/model.ckpt-174
epoch : 175, train_loss : 5.150406586496454
Model saved in file: ./RNN_model/model.ckpt-175
epoch : 176, train_loss : 5.135734382428621
Model saved in file: ./RNN_model/model.ckpt-176
epoch : 177, train_loss : 5.124523940839266
Model saved in file: ./RNN_model/model.ckpt-177
epoch : 178, train_loss : 5.114203553450735
Model saved in file: ./RNN_model/model.ckpt-178
epoch : 179, train_loss : 5.105977133700723
Model saved in file: ./RNN_model/model.ckpt-179
epoch : 180, train_loss : 5.089974001834266
Model saved in file: ./RNN_model/model.ckpt-180
epoch : 181, train_loss : 5.078289935463352
Model saved in file: ./RNN_model/model.ckpt-181
epoch : 182, train_loss : 5.072560034300152
Model saved in file: ./RNN_model/model.ckpt-182
epoch : 183, train_loss : 5.057972657053094
Model saved in file: ./RNN_model/model.ckpt-183
epoch : 184, train_loss : 5.044691136008815
Model saved in file: ./RNN_model/model.ckpt-184
epoch : 185, train_loss : 5.042502604032817
Model saved in file: ./RNN_model/model.ckpt-185
epoch : 186, train_loss : 5.027126839286403
Model saved in file: ./RNN_model/model.ckpt-186
epoch : 187, train_loss : 5.026148093374152
Model saved in file: ./RNN_model/model.ckpt-187
epoch : 188, train_loss : 5.002026808889289
Model saved in file: ./RNN_model/model.ckpt-188
epoch : 189, train_loss : 5.000458365992496
Model saved in file: ./RNN_model/model.ckpt-189
epoch : 190, train_loss : 4.979876719023053
Model saved in file: ./RNN_model/model.ckpt-190
epoch : 191, train_loss : 4.973548512709767
Model saved in file: ./RNN_model/model.ckpt-191
epoch : 192, train_loss : 4.975726955815365
Model saved in file: ./RNN_model/model.ckpt-192
epoch : 193, train_loss : 4.963386359967684
Model saved in file: ./RNN_model/model.ckpt-193
epoch : 194, train_loss : 4.950365869622482
Model saved in file: ./RNN_model/model.ckpt-194
epoch : 195, train_loss : 4.939387321472168
Model saved in file: ./RNN_model/model.ckpt-195
epoch : 196, train_loss : 4.943730203728927
Model saved in file: ./RNN_model/model.ckpt-196
epoch : 197, train_loss : 4.929129098591051
Model saved in file: ./RNN_model/model.ckpt-197
epoch : 198, train_loss : 4.9202615838301815
Model saved in file: ./RNN_model/model.ckpt-198
epoch : 199, train_loss : 4.9202090062593165
Model saved in file: ./RNN_model/model.ckpt-199
epoch : 200, train_loss : 4.902339282788729
Model saved in file: ./RNN_model/model.ckpt-200
epoch : 201, train_loss : 4.896803755509226
Model saved in file: ./RNN_model/model.ckpt-201
epoch : 202, train_loss : 4.883091826187937
Model saved in file: ./RNN_model/model.ckpt-202
epoch : 203, train_loss : 4.8809738159179705
Model saved in file: ./RNN_model/model.ckpt-203
epoch : 204, train_loss : 4.870889412729364
Model saved in file: ./RNN_model/model.ckpt-204
epoch : 205, train_loss : 4.8575966734635205
Model saved in file: ./RNN_model/model.ckpt-205
epoch : 206, train_loss : 4.8516448924416
Model saved in file: ./RNN_model/model.ckpt-206
epoch : 207, train_loss : 4.8522500489887435
Model saved in file: ./RNN_model/model.ckpt-207
epoch : 208, train_loss : 4.855400236029374
Model saved in file: ./RNN_model/model.ckpt-208
epoch : 209, train_loss : 4.849351431194105
Model saved in file: ./RNN_model/model.ckpt-209
epoch : 210, train_loss : 4.834192426581133
Model saved in file: ./RNN_model/model.ckpt-210
epoch : 211, train_loss : 4.830471314881978
Model saved in file: ./RNN_model/model.ckpt-211
epoch : 212, train_loss : 4.816604990708201
Model saved in file: ./RNN_model/model.ckpt-212
epoch : 213, train_loss : 4.813752099087363
Model saved in file: ./RNN_model/model.ckpt-213
epoch : 214, train_loss : 4.818528024773849
Model saved in file: ./RNN_model/model.ckpt-214
epoch : 215, train_loss : 4.798924847653038
Model saved in file: ./RNN_model/model.ckpt-215
epoch : 216, train_loss : 4.800941266511615
Model saved in file: ./RNN_model/model.ckpt-216
epoch : 217, train_loss : 4.789627853192781
Model saved in file: ./RNN_model/model.ckpt-217
epoch : 218, train_loss : 4.781093296251799
Model saved in file: ./RNN_model/model.ckpt-218
epoch : 219, train_loss : 4.78604733316522
Model saved in file: ./RNN_model/model.ckpt-219
epoch : 220, train_loss : 4.780632445686742
Model saved in file: ./RNN_model/model.ckpt-220
epoch : 221, train_loss : 4.770376782668263
Model saved in file: ./RNN_model/model.ckpt-221
epoch : 222, train_loss : 4.7636329500298755
Model saved in file: ./RNN_model/model.ckpt-222
epoch : 223, train_loss : 4.764173306916889
Model saved in file: ./RNN_model/model.ckpt-223
epoch : 224, train_loss : 4.758184483176784
Model saved in file: ./RNN_model/model.ckpt-224
epoch : 225, train_loss : 4.754288271853799
Model saved in file: ./RNN_model/model.ckpt-225
epoch : 226, train_loss : 4.753107296793084
Model saved in file: ./RNN_model/model.ckpt-226
epoch : 227, train_loss : 4.744985655734414
Model saved in file: ./RNN_model/model.ckpt-227
epoch : 228, train_loss : 4.740304971996106
Model saved in file: ./RNN_model/model.ckpt-228
epoch : 229, train_loss : 4.730224433698152
Model saved in file: ./RNN_model/model.ckpt-229
epoch : 230, train_loss : 4.72966174075478
Model saved in file: ./RNN_model/model.ckpt-230
epoch : 231, train_loss : 4.724961331016139
Model saved in file: ./RNN_model/model.ckpt-231
epoch : 232, train_loss : 4.724763820045872
Model saved in file: ./RNN_model/model.ckpt-232
epoch : 233, train_loss : 4.719883743085359
Model saved in file: ./RNN_model/model.ckpt-233
epoch : 234, train_loss : 4.715274308857166
Model saved in file: ./RNN_model/model.ckpt-234
epoch : 235, train_loss : 4.70762804934853
Model saved in file: ./RNN_model/model.ckpt-235
epoch : 236, train_loss : 4.7023653984069815
Model saved in file: ./RNN_model/model.ckpt-236
epoch : 237, train_loss : 4.6993486253838785
Model saved in file: ./RNN_model/model.ckpt-237
epoch : 238, train_loss : 4.704232768008583
Model saved in file: ./RNN_model/model.ckpt-238
epoch : 239, train_loss : 4.706562293203254
Model saved in file: ./RNN_model/model.ckpt-239
epoch : 240, train_loss : 4.6888444549159
Model saved in file: ./RNN_model/model.ckpt-240
epoch : 241, train_loss : 4.690007109391062
Model saved in file: ./RNN_model/model.ckpt-241
epoch : 242, train_loss : 4.678484916687012
Model saved in file: ./RNN_model/model.ckpt-242
epoch : 243, train_loss : 4.674560095134534
Model saved in file: ./RNN_model/model.ckpt-243
epoch : 244, train_loss : 4.674003425397371
Model saved in file: ./RNN_model/model.ckpt-244
epoch : 245, train_loss : 4.6703734397888175
Model saved in file: ./RNN_model/model.ckpt-245
epoch : 246, train_loss : 4.670478870994166
Model saved in file: ./RNN_model/model.ckpt-246
epoch : 247, train_loss : 4.665914359845613
Model saved in file: ./RNN_model/model.ckpt-247
epoch : 248, train_loss : 4.663475413071481
Model saved in file: ./RNN_model/model.ckpt-248
epoch : 249, train_loss : 4.6565738226238045
Model saved in file: ./RNN_model/model.ckpt-249
epoch : 250, train_loss : 4.655475139617919
Model saved in file: ./RNN_model/model.ckpt-250
epoch : 251, train_loss : 4.661262938850805
Model saved in file: ./RNN_model/model.ckpt-251
epoch : 252, train_loss : 4.650692211954218
Model saved in file: ./RNN_model/model.ckpt-252
epoch : 253, train_loss : 4.643667045392489
Model saved in file: ./RNN_model/model.ckpt-253
epoch : 254, train_loss : 4.636515943627608
Model saved in file: ./RNN_model/model.ckpt-254
epoch : 255, train_loss : 4.641063439218622
Model saved in file: ./RNN_model/model.ckpt-255
epoch : 256, train_loss : 4.637724374469958
Model saved in file: ./RNN_model/model.ckpt-256
epoch : 257, train_loss : 4.636434580150403
Model saved in file: ./RNN_model/model.ckpt-257
epoch : 258, train_loss : 4.626754334098415
Model saved in file: ./RNN_model/model.ckpt-258
epoch : 259, train_loss : 4.632980346679687
Model saved in file: ./RNN_model/model.ckpt-259
epoch : 260, train_loss : 4.625953599026328
Model saved in file: ./RNN_model/model.ckpt-260
epoch : 261, train_loss : 4.623058394381875
Model saved in file: ./RNN_model/model.ckpt-261
epoch : 262, train_loss : 4.612949697594894
Model saved in file: ./RNN_model/model.ckpt-262
epoch : 263, train_loss : 4.609185996808503
Model saved in file: ./RNN_model/model.ckpt-263
epoch : 264, train_loss : 4.615394491898386
Model saved in file: ./RNN_model/model.ckpt-264
epoch : 265, train_loss : 4.6094802053351165
Model saved in file: ./RNN_model/model.ckpt-265
epoch : 266, train_loss : 4.608047234384637
Model saved in file: ./RNN_model/model.ckpt-266
epoch : 267, train_loss : 4.608236764606676
Model saved in file: ./RNN_model/model.ckpt-267
epoch : 268, train_loss : 4.60210923144692
Model saved in file: ./RNN_model/model.ckpt-268
epoch : 269, train_loss : 4.594307196767707
Model saved in file: ./RNN_model/model.ckpt-269
epoch : 270, train_loss : 4.603555829901445
Model saved in file: ./RNN_model/model.ckpt-270
epoch : 271, train_loss : 4.597077821430407
Model saved in file: ./RNN_model/model.ckpt-271
epoch : 272, train_loss : 4.593521168357448
Model saved in file: ./RNN_model/model.ckpt-272
epoch : 273, train_loss : 4.5939915054722835
Model saved in file: ./RNN_model/model.ckpt-273
epoch : 274, train_loss : 4.595139152125308
Model saved in file: ./RNN_model/model.ckpt-274
epoch : 275, train_loss : 4.600436511792633
Model saved in file: ./RNN_model/model.ckpt-275
epoch : 276, train_loss : 4.60313290043881
Model saved in file: ./RNN_model/model.ckpt-276
epoch : 277, train_loss : 4.589922679098028
Model saved in file: ./RNN_model/model.ckpt-277
epoch : 278, train_loss : 4.573727607727051
Model saved in file: ./RNN_model/model.ckpt-278
epoch : 279, train_loss : 4.578500597100508
Model saved in file: ./RNN_model/model.ckpt-279
epoch : 280, train_loss : 4.5803439742640455
Model saved in file: ./RNN_model/model.ckpt-280
epoch : 281, train_loss : 4.578212110619797
Model saved in file: ./RNN_model/model.ckpt-281
epoch : 282, train_loss : 4.569742077275326
Model saved in file: ./RNN_model/model.ckpt-282
epoch : 283, train_loss : 4.574651366785953
Model saved in file: ./RNN_model/model.ckpt-283
epoch : 284, train_loss : 4.577548704649272
Model saved in file: ./RNN_model/model.ckpt-284
epoch : 285, train_loss : 4.570560605902422
Model saved in file: ./RNN_model/model.ckpt-285
epoch : 286, train_loss : 4.567535099230315
Model saved in file: ./RNN_model/model.ckpt-286
epoch : 287, train_loss : 4.5695067706860995
Model saved in file: ./RNN_model/model.ckpt-287
epoch : 288, train_loss : 4.560104570890728
Model saved in file: ./RNN_model/model.ckpt-288
epoch : 289, train_loss : 4.5579379232306225
Model saved in file: ./RNN_model/model.ckpt-289
epoch : 290, train_loss : 4.5630634709408415
Model saved in file: ./RNN_model/model.ckpt-290
epoch : 291, train_loss : 4.556521164743524
Model saved in file: ./RNN_model/model.ckpt-291
epoch : 292, train_loss : 4.555634122145804
Model saved in file: ./RNN_model/model.ckpt-292
epoch : 293, train_loss : 4.556780890414589
Model saved in file: ./RNN_model/model.ckpt-293
epoch : 294, train_loss : 4.559654035066304
Model saved in file: ./RNN_model/model.ckpt-294
epoch : 295, train_loss : 4.558537583602102
Model saved in file: ./RNN_model/model.ckpt-295
epoch : 296, train_loss : 4.548728189970317
Model saved in file: ./RNN_model/model.ckpt-296
epoch : 297, train_loss : 4.5539240084196395
Model saved in file: ./RNN_model/model.ckpt-297
epoch : 298, train_loss : 4.549852120248895
Model saved in file: ./RNN_model/model.ckpt-298
epoch : 299, train_loss : 4.545313609273811
Model saved in file: ./RNN_model/model.ckpt-299
epoch : 300, train_loss : 4.547696414746737
Model saved in file: ./RNN_model/model.ckpt-300
###Markdown
Define RNN to test
###Code
ph_test_input_name = tf.placeholder(dtype=tf.float32, shape=[1, 1, dim_data])
# one character at a time: batch size 1, sequence length 1
ph_h = tf.placeholder(dtype=tf.float32, shape=[1, dim_rnn_cell])
# hidden state of the LSTM
ph_c = tf.placeholder(dtype=tf.float32, shape=[1, dim_rnn_cell])
# cell state of LSTM
def name_rnn_test(_x, _dim_data, _dim_rnn_cell, _prev_h, _prev_c): # ph_h, ph_c
_x_split = tf.transpose(_x, [1, 0, 2]) # seq_len, batch, dim_data
_x_split = tf.reshape(_x_split, [-1, _dim_data])
with tf.variable_scope('weights', reuse=tf.AUTO_REUSE):
_W_i = tf.get_variable('W_i')
_b_i = tf.get_variable('b_i')
_W_o = tf.get_variable('W_o')
_b_o = tf.get_variable('b_o')
    _h_split = tf.matmul(_x_split, _W_i) + _b_i  # use the bias loaded from the scope, not the global b_i
_h_split = tf.split(_h_split, 1, axis=0) # 1 is the seq_len
with tf.variable_scope('rnn', reuse=tf.AUTO_REUSE):
_rnn_cell = tf.nn.rnn_cell.BasicLSTMCell(_dim_rnn_cell)
_output, _state = tf.nn.static_rnn(_rnn_cell, _h_split, dtype=tf.float32,
initial_state = (_prev_h, _prev_c))
_total_out = []
for _tmp_out in _output:
_tmp_out = tf.matmul(_tmp_out, _W_o) + _b_o
_total_out.append(_tmp_out)
    return tf.transpose(tf.stack(_total_out), [1, 0, 2]), _state  # also return the LSTM state so it can be fed back in at the next step
###Output
_____no_output_____
###Markdown
Run Test
###Code
test_result_name, test_state = name_rnn_test(ph_test_input_name, dim_data, dim_rnn_cell,
ph_h, ph_c)
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, './RNN_model/model.ckpt-300')
total_name = ''
prev_char = 'a'
total_name += prev_char
prev_state = (np.zeros((1, dim_rnn_cell)), np.zeros((1, dim_rnn_cell)))
for i in range(seq_len):
input_onehot = np.zeros((1, 1, dim_data)) # make a space
prev_char_idx = chars.index(prev_char)
input_onehot[:, :, prev_char_idx] = 1
test_feed_dict = {ph_test_input_name: input_onehot,
ph_h: prev_state[0], ph_c: prev_state[1]}
curr_result, curr_state = sess.run([test_result_name, test_state], test_feed_dict)
if np.argmax(curr_result) == dim_data-1:
break
else:
softmax_result = sess.run(tf.nn.softmax(test_result_name), test_feed_dict)
softmax_result = np.squeeze(softmax_result)
softmax_result = softmax_result[:dim_data-1]/sum(softmax_result[:dim_data-1])
prev_char = np.random.choice(chars, 1, p=softmax_result)
total_name += prev_char[0]
prev_state = curr_state
print('Result Name :', total_name)
###Output
_____no_output_____ |
examples/getting-started-movielens/03-Training-with-TF.ipynb | ###Markdown
Getting Started MovieLens: Training with TensorFlow
Overview
We observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.
Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4).
Learning objectives
This notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.
1. Use the **NVTabular dataloader** with a TensorFlow Keras model
2. Leverage **multi-hot encoded input features**
MovieLens25M
The [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network.
NVTabular dataloader for TensorFlow
We've identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up by 9x of the same training workflow with the NVTabular dataloader. The NVTabular dataloader's features are:
- removing the bottleneck of item-by-item dataloading
- enabling larger-than-memory datasets by streaming from disk
- reading data directly into GPU memory and removing CPU-GPU communication
- preparing batches asynchronously on the GPU to avoid CPU-GPU communication
- supporting the commonly used .parquet format
- easy integration into existing TensorFlow pipelines by using a similar API
- works with tf.keras models
More information in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loaders are initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` is the actual data, containing the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped).- if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch.- and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
_____no_output_____
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres__nnzs"])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
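To make the `__values`/`__nnzs` convention concrete, here is a minimal sketch (not a cell from the original notebook; the genre IDs and lengths are invented) that stitches the two flat tensors back into one ragged row of genre IDs per example with `tf.RaggedTensor.from_row_lengths`:

```python
import tensorflow as tf

# invented example data: 8 genre IDs spread over 3 datapoints
genre_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
genre_nnzs = tf.constant([5, 2, 1])                   # plays the role of genres__nnzs

# one ragged row per datapoint: [[3, 7, 1, 4, 4], [9, 2], [5]]
ragged_genres = tf.RaggedTensor.from_row_lengths(genre_values, row_lengths=genre_nnzs)
print(ragged_genres)
```

In a real batch the same construction applies, with `genres__values` as the values and the flattened `genres__nnzs` as the row lengths.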
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer, and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col + "__values"] = tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,))
inputs[col + "__nnzs"] = tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implements a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` placeholders and pre-initialized `tf.feature_column` objects and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the `__values` and `__nnzs` inputs to define a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
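As a rough illustration of the "combine via averaging" step, the sketch below (invented table size and IDs; this is not NVTabular's internal implementation) averages the per-genre embedding vectors of each example with a segment mean:

```python
import tensorflow as tf

embedding_table = tf.random.normal([21, 16])     # (vocab_size, embedding_dim), invented
values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])   # flattened genre IDs (like genres__values)
nnzs = tf.constant([5, 2, 1])                    # genres per example (like genres__nnzs)

# map every genre ID to the example it belongs to: [0, 0, 0, 0, 0, 1, 1, 2]
segment_ids = tf.repeat(tf.range(tf.size(nnzs)), nnzs)
per_genre_emb = tf.gather(embedding_table, values)                  # (8, 16)
per_example_emb = tf.math.segment_mean(per_genre_emb, segment_ids)  # (3, 16)
print(per_example_emb.shape)
```

The result is one fixed-size vector per example, which is what gets concatenated with the single-hot embeddings.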
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
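If you want to verify that number programmatically, a one-liner over the shapes dictionary does it (this assumes `EMBEDDING_TABLE_SHAPES` maps each column to a `(cardinality, embedding_size)` pair, as described above):

```python
# sum of the embedding output dimensions = width of the concatenated embedding output
concat_width = sum(dim for _, dim in EMBEDDING_TABLE_SHAPES.values())
print(concat_width)  # 1040 for the shapes shown above (16 + 512 + 512)
```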
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the plotting dependencies (pydot and graphviz) for this to work
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/movielens_tf/1/model.savedmodel/assets
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32, so we need to change the output datatypes of our NVTabular workflow to int32.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function, `export_tensorflow_ensemble`, to save the NVTabular workflow, the TensorFlow model, and the Triton Inference Server (IS) config files. We provide the model, the workflow, a name for the ensemble model, the output path, and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/models/movielens_tf/1/model.savedmodel/assets
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly sample each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed-up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets have terabytes in size with billion examples but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures have normally large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains, how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use **NVTabular dataloader** with TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies, as well. For example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up by 9x of the same training workflow with NVTabular dataloader. NVTabular dataloader’s features are:- removing bottleneck of item-by-item dataloading- enabling larger than memory dataset by streaming from disk- reading data directly into GPU memory and remove CPU-GPU communication- preparing batch asynchronously in GPU to avoid CPU-GPU communication- supporting commonly used .parquet format- easy integration into existing TensorFlow pipelines by using similar API - works with tf.keras modelsMore information in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
# avoid numba warnings
from numba import config
config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note, that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loaders are initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look on the input features.We can see, that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batchsize (as usually).For the multi-hot categorical feature `genres`, we receive two Tensors `genres__values` and `genres__nnzs`.`genres__values` are the actual data, containing the genre IDs. Note that the Tensor has more values than the batch_size. The reason is, that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` are a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres_nnzs` is `1`, then the 8th value in `genres__values` are associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
2021-12-02 01:17:48.483489: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-02 01:17:48.490106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22755 MB memory: -> device: 0, name: Quadro GV100, pci bus id: 0000:15:00.0, compute capability: 7.0
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
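For a quick sanity check of this layout, the following sketch (assuming `batch[0]["genres"]` really is the `(values, nnzs)` pair used above, and run before `del batch`) rebuilds one ragged row of genre IDs per example:

```python
# values: flattened genre IDs, nnzs: number of genres per example
values, nnzs = batch[0]["genres"]
row_lengths = tf.cast(tf.reshape(nnzs, [-1]), tf.int64)
ragged = tf.RaggedTensor.from_row_lengths(tf.reshape(values, [-1]), row_lengths)
print(ragged.shape)  # (batch_size, None)
```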
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical features is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The output of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implemented a custom TensorFlow layer `layers.DenseFeatures`, which takes as an input the different `tf.Keras.Input` and pre-initialized `tf.feature_column` and automatically concatenate them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` to define a `RaggedTensor` and combine them. `DenseFeatures` can handle numeric inputs, as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the dependencies
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
2021-12-02 01:18:14.791643: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved TensorFlow model to disk, and in the previous notebook `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, TensorFlow model and Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, workflow, a model name for ensemble model, path and output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
_____no_output_____
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly sample each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed-up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets have terabytes in size with billion examples but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures have normally large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains, how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use **NVTabular dataloader** with TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies, as well. For example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up by 9x of the same training workflow with NVTabular dataloader. NVTabular dataloader’s features are:- removing bottleneck of item-by-item dataloading- enabling larger than memory dataset by streaming from disk- reading data directly into GPU memory and remove CPU-GPU communication- preparing batch asynchronously in GPU to avoid CPU-GPU communication- supporting commonly used .parquet format- easy integration into existing TensorFlow pipelines by using similar API - works with tf.keras modelsMore information in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note, that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loaders are initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look on the input features.We can see, that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batchsize (as usually).For the multi-hot categorical feature `genres`, we receive two Tensors `genres__values` and `genres__nnzs`.`genres__values` are the actual data, containing the genre IDs. Note that the Tensor has more values than the batch_size. The reason is, that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` are a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres_nnzs` is `1`, then the 8th value in `genres__values` are associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
_____no_output_____
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer, and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implemented a custom TensorFlow layer `layers.DenseFeatures`, which takes as an input the different `tf.Keras.Input` and pre-initialized `tf.feature_column` and automatically concatenate them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` to define a `RaggedTensor` and combine them. `DenseFeatures` can handle numeric inputs, as well, but MovieLens does not provide numerical input features.
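As a loose illustration of what "define a `RaggedTensor` and combine them" amounts to (illustration only, with invented data; this is not the layer's actual code), per-genre embeddings can be averaged along the ragged dimension like this:

```python
import tensorflow as tf

table = tf.random.normal([21, 16])                               # (vocab_size, emb_dim), invented
ragged_ids = tf.ragged.constant([[3, 7, 1, 4, 4], [9, 2], [5]])  # genre IDs per example
# look up an embedding for every genre ID while keeping the ragged structure
emb = tf.ragged.map_flat_values(lambda ids: tf.gather(table, ids), ragged_ids)
combined = tf.reduce_mean(emb, axis=1)                           # (3, 16) dense tensor
print(combined.shape)
```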
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the dependencies
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/movielens_tf/1/model.savedmodel/assets
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved TensorFlow model to disk, and in the previous notebook `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, TensorFlow model and Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, workflow, a model name for ensemble model, path and output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/models/movielens_tf/1/model.savedmodel/assets
###Markdown
Getting Started MovieLens: Training with TensorFlowThis notebook was created using the latest stable [merlin-tensorflow-training](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/merlin/containers/merlin-tensorflow-training/tags) container. OverviewWe observed that TensorFlow training pipelines can be slow, as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up of 9x for the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API - works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
# avoid numba warnings
from numba import config
config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note, that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import time
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loaders are initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look on the input features.We can see, that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batchsize (as usually).For the multi-hot categorical feature `genres`, we receive two Tensors `genres__values` and `genres__nnzs`.`genres__values` are the actual data, containing the genre IDs. Note that the Tensor has more values than the batch_size. The reason is, that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` are a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres_nnzs` is `1`, then the 8th value in `genres__values` are associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
2021-12-02 01:17:48.483489: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-02 01:17:48.490106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22755 MB memory: -> device: 0, name: Quadro GV100, pci bus id: 0000:15:00.0, compute capability: 7.0
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
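One more way to look at the `__values`/`__nnzs` pair (a toy sketch with invented numbers, not notebook code): cumulative sums of the lengths give offsets into the flat value tensor, so any single example's genres can be sliced out directly.

```python
import tensorflow as tf

values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])           # flattened genre IDs
nnzs = tf.constant([5, 2, 1])                            # genres per example
row_splits = tf.concat([[0], tf.cumsum(nnzs)], axis=0)   # offsets: [0, 5, 7, 8]

i = 1  # pick the second example in the batch
print(values[row_splits[i]:row_splits[i + 1]])           # -> [9 2]
```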
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical features is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The output of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implemented a custom TensorFlow layer `layers.DenseFeatures`, which takes as an input the different `tf.Keras.Input` and pre-initialized `tf.feature_column` and automatically concatenate them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` to define a `RaggedTensor` and combine them. `DenseFeatures` can handle numeric inputs, as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the dependencies
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
The plot is similar to the following figure (model-architecture diagram omitted in this text export). Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
EPOCHS = 1
start = time.time()
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=EPOCHS)
t_final = time.time() - start
total_rows = train_dataset_tf.num_rows_processed + valid_dataset_tf.num_rows_processed
print(
f"run_time: {t_final} - rows: {total_rows * EPOCHS} - epochs: {EPOCHS} - dl_thru: {(EPOCHS * total_rows) / t_final}"
)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
2021-12-02 01:18:14.791643: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved TensorFlow model to disk, and in the previous notebook `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, TensorFlow model and Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, workflow, a model name for ensemble model, path and output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
_____no_output_____
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly sample each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed-up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets have terabytes in size with billion examples but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures have normally large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains, how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use **NVTabular dataloader** with TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies, as well. For example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up by 9x of the same training workflow with NVTabular dataloader. NVTabular dataloader’s features are:- removing bottleneck of item-by-item dataloading- enabling larger than memory dataset by streaming from disk- reading data directly into GPU memory and remove CPU-GPU communication- preparing batch asynchronously in GPU to avoid CPU-GPU communication- supporting commonly used .parquet format- easy integration into existing TensorFlow pipelines by using similar API - works with tf.keras modelsMore information in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
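###Markdown
As a quick, illustrative check (this cell is not part of the original notebook), the total number of trainable embedding parameters implied by these shapes can be computed directly from the dictionary.
###Code
# Each entry is (cardinality, embedding_size), so one embedding table holds
# cardinality * embedding_size trainable parameters.
sum(cardinality * dim for cardinality, dim in EMBEDDING_TABLE_SHAPES.values())
###Output
_____no_output_____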
###Markdown
Initializing NVTabular Dataloader for TensorFlow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot features and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
_____no_output_____
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
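A minimal illustration of this layout (the following cell is not part of the original notebook): assuming the `batch` from above is still in scope, the two flat tensors can be combined back into a `tf.RaggedTensor`, which makes the per-datapoint grouping explicit.
###Code
# Illustrative sketch only: rebuild the ragged genres representation from the two
# flat tensors (values and per-datapoint lengths) returned by the dataloader.
values, nnzs = batch[0]["genres"]
genres_ragged = tf.RaggedTensor.from_row_lengths(
    tf.reshape(values, [-1]),
    tf.cast(tf.reshape(nnzs, [-1]), tf.int64),
)
genres_ragged[:5]
###Output
_____no_output_____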
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define a dictionary and a list for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implements a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` placeholders and pre-initialized `tf.feature_column` objects and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` into a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
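###Markdown
To make the arithmetic explicit (illustrative cell, not part of the original notebook), the width of the concatenated embedding output can be recomputed as the sum of the per-feature embedding dimensions.
###Code
# Sum the second entry (embedding output dimension) of each shape tuple;
# this should match the last dimension of x_emb_output above.
sum(dim for _, dim in EMBEDDING_TABLE_SHAPES.values())
###Output
_____no_output_____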
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# Plotting the model graph requires the pydot and graphviz packages to be installed
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/movielens_tf/1/model.savedmodel/assets
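###Markdown
As an illustrative sanity check (not part of the original notebook), the exported SavedModel directory can be loaded back with the generic TensorFlow SavedModel API to confirm it is readable.
###Code
# Load the SavedModel we just wrote to disk and list its serving signatures.
reloaded = tf.saved_model.load(MODEL_PATH_TEMP_TF)
list(reloaded.signatures.keys())
###Output
_____no_output_____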
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, the TensorFlow model and the Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, the workflow, a model name for the ensemble model, the output path and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/models/movielens_tf/1/model.savedmodel/assets
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API that works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
# avoid numba warnings
from numba import config
config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for TensorFlow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot features and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
2021-12-02 01:17:48.483489: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-02 01:17:48.490106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22755 MB memory: -> device: 0, name: Quadro GV100, pci bus id: 0000:15:00.0, compute capability: 7.0
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
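###Markdown
For completeness (illustrative cell, not part of the original notebook), the same consistency check can be written as an explicit comparison between the sum of the per-datapoint counts and the number of flat values.
###Code
# Compare the total of the per-datapoint counts with the length of the flat values tensor.
values, nnzs = batch[0]["genres"]
int(tf.reduce_sum(nnzs)) == int(tf.shape(values)[0])
###Output
_____no_output_____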
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define a dictionary and a list for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implements a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` placeholders and pre-initialized `tf.feature_column` objects and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` into a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# Plotting the model graph requires the pydot and graphviz packages to be installed
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
2021-12-02 01:18:14.791643: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, the TensorFlow model and the Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, the workflow, a model name for the ensemble model, the output path and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
_____no_output_____
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API that works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
# avoid numba warnings
from numba import config
config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for TensorFlow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot features and the NVTabular TensorFlow data loader.
###Code
import os
import time
import tensorflow as tf
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
2022-04-27 22:12:40.128861: I tensorflow/core/platform/cpu_feature_guard.cc:152] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-04-27 22:12:41.479738: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 16254 MB memory: -> device: 0, name: Quadro GV100, pci bus id: 0000:15:00.0, compute capability: 7.0
2022-04-27 22:12:41.480359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 30382 MB memory: -> device: 1, name: Quadro GV100, pci bus id: 0000:2d:00.0, compute capability: 7.0
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define a dictionary and a list for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int64, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implements a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` placeholders and pre-initialized `tf.feature_column` objects and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` into a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# Plotting the model graph requires the pydot and graphviz packages to be installed
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
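###Markdown
If `pydot` and `graphviz` are not available for `plot_model`, a plain-text overview of the layers works as well (illustrative cell, not part of the original notebook).
###Code
# Print layer types, output shapes and parameter counts as text.
model.summary()
###Output
_____no_output_____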
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
EPOCHS = 1
start = time.time()
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=EPOCHS)
t_final = time.time() - start
total_rows = train_dataset_tf.num_rows_processed + valid_dataset_tf.num_rows_processed
print(
f"run_time: {t_final} - rows: {total_rows * EPOCHS} - epochs: {EPOCHS} - dl_thru: {(EPOCHS * total_rows) / t_final}"
)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
2022-04-27 22:13:04.741886: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, the TensorFlow model and the Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, the workflow, a model name for the ensemble model, the output path and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API that works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for TensorFlow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot features and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped). - if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch. - and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
_____no_output_____
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define a dictionary and a list for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implements a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` placeholders and pre-initialized `tf.feature_column` objects and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` into a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# Plotting the model graph requires the pydot and graphviz packages to be installed
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/movielens_tf/1/model.savedmodel/assets
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function to save the NVTabular workflow, the TensorFlow model and the Triton Inference Server (IS) config files via `export_tensorflow_ensemble`. We provide the model, the workflow, a model name for the ensemble model, the output path and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/models/movielens_tf/1/model.savedmodel/assets
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example, which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API that works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped).- if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch.- and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
_____no_output_____
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres__nnzs"])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
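###Markdown
To make this flat representation concrete, here is a minimal sketch (with made-up toy values, not the actual batch) showing how the two flat tensors can be recombined into one ragged row per datapoint with `tf.RaggedTensor.from_row_lengths`:
###Code
import tensorflow as tf

# hypothetical flat representation: 8 genre IDs spread over 3 datapoints
toy_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
toy_nnzs = tf.constant([5, 2, 1])                    # plays the role of genres__nnzs

# row i of the result gets the next toy_nnzs[i] consecutive values
tf.RaggedTensor.from_row_lengths(toy_values, row_lengths=toy_nnzs)
###Output
_____no_output_____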
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col + "__values"] = tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,))
inputs[col + "__nnzs"] = tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implemented a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` tensors and pre-initialized `tf.feature_column`s and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` to define a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
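###Markdown
As noted in the architecture description above, the embeddings of a multi-hot feature are combined by averaging. Below is a minimal standalone sketch of that idea (toy embedding table and toy IDs, not the actual `DenseFeatures` implementation), working directly on the flat values/nnzs representation:
###Code
import tensorflow as tf

# toy setup: 20 hypothetical genre IDs, embedding dimension 4
toy_embedding_table = tf.random.normal([20, 4])
toy_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
toy_nnzs = tf.constant([5, 2, 1])                    # plays the role of genres__nnzs

# map each value to the datapoint (row) it belongs to: [0, 0, 0, 0, 0, 1, 1, 2]
segment_ids = tf.repeat(tf.range(tf.size(toy_nnzs)), toy_nnzs)
# look up one embedding per genre ID, then average the embeddings per datapoint
per_value_emb = tf.gather(toy_embedding_table, toy_values)
tf.math.segment_mean(per_value_emb, segment_ids)  # shape (3, 4)
###Output
_____no_output_____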
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the dependencies
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=1)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/movielens_tf/1/model.savedmodel/assets
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function, `export_tensorflow_ensemble`, to save the NVTabular workflow, the TensorFlow model, and the Triton Inference Server (IS) config files. We provide the model, the workflow, a model name for the ensemble model, the path, and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
INFO:tensorflow:Assets written to: /root/nvt-examples/models/movielens_tf/1/model.savedmodel/assets
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API- works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
# avoid numba warnings
from numba import config
config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import time
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped).- if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch.- and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
2021-12-02 01:17:48.483489: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-02 01:17:48.490106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22755 MB memory: -> device: 0, name: Quadro GV100, pci bus id: 0000:15:00.0, compute capability: 7.0
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
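###Markdown
To make this flat representation concrete, here is a minimal sketch (with made-up toy values, not the actual batch) showing how the two flat tensors can be recombined into one ragged row per datapoint with `tf.RaggedTensor.from_row_lengths`:
###Code
import tensorflow as tf

# hypothetical flat representation: 8 genre IDs spread over 3 datapoints
toy_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
toy_nnzs = tf.constant([5, 2, 1])                    # plays the role of genres__nnzs

# row i of the result gets the next toy_nnzs[i] consecutive values
tf.RaggedTensor.from_row_lengths(toy_values, row_lengths=toy_nnzs)
###Output
_____no_output_____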
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int64, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implemented a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` tensors and pre-initialized `tf.feature_column`s and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` to define a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
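###Markdown
As noted in the architecture description above, the embeddings of a multi-hot feature are combined by averaging. Below is a minimal standalone sketch of that idea (toy embedding table and toy IDs, not the actual `DenseFeatures` implementation), working directly on the flat values/nnzs representation:
###Code
import tensorflow as tf

# toy setup: 20 hypothetical genre IDs, embedding dimension 4
toy_embedding_table = tf.random.normal([20, 4])
toy_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
toy_nnzs = tf.constant([5, 2, 1])                    # plays the role of genres__nnzs

# map each value to the datapoint (row) it belongs to: [0, 0, 0, 0, 0, 1, 1, 2]
segment_ids = tf.repeat(tf.range(tf.size(toy_nnzs)), toy_nnzs)
# look up one embedding per genre ID, then average the embeddings per datapoint
per_value_emb = tf.gather(toy_embedding_table, toy_values)
tf.math.segment_mean(per_value_emb, segment_ids)  # shape (3, 4)
###Output
_____no_output_____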
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the dependencies
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
EPOCHS = 1
start = time.time()
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=EPOCHS)
t_final = time.time() - start
total_rows = train_dataset_tf.num_rows_processed + valid_dataset_tf.num_rows_processed
print(
f"run_time: {t_final} - rows: {total_rows * EPOCHS} - epochs: {EPOCHS} - dl_thru: {(EPOCHS * total_rows) / t_final}"
)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
2021-12-02 01:18:14.791643: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function, `export_tensorflow_ensemble`, to save the NVTabular workflow, the TensorFlow model, and the Triton Inference Server (IS) config files. We provide the model, the workflow, a model name for the ensemble model, the path, and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
_____no_output_____
###Markdown
Getting Started MovieLens: Training with TensorFlow OverviewWe observed that TensorFlow training pipelines can be slow as the dataloader is a bottleneck. The native dataloader in TensorFlow randomly samples each item from the dataset, which is very slow. The window dataloader in TensorFlow is not much faster. In our experiments, we are able to speed up existing TensorFlow pipelines by 9x using a highly optimized dataloader.Applying deep learning models to recommendation systems faces unique challenges in comparison to other domains, such as computer vision and natural language processing. The datasets and common model architectures have unique characteristics, which require custom solutions. Recommendation system datasets are terabytes in size with billions of examples, but each example is represented by only a few bytes. For example, the [Criteo CTR dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/), the largest publicly available dataset, is 1.3TB with 4 billion examples. The model architectures normally have large embedding tables for the users and items, which do not fit on a single GPU. You can read more in our [blogpost](https://medium.com/nvidia-merlin/why-isnt-your-recommender-system-training-faster-on-gpu-and-what-can-you-do-about-it-6cb44a711ad4). Learning objectivesThis notebook explains how to use the NVTabular dataloader to accelerate TensorFlow training.1. Use the **NVTabular dataloader** with a TensorFlow Keras model2. Leverage **multi-hot encoded input features** MovieLens25MThe [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) is a popular dataset for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well, for example which genres a movie has. Although we may not improve state-of-the-art results with our neural network architecture, the purpose of this notebook is to explain how to integrate multi-hot categorical features into a neural network. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The dataloader cannot prepare the next batch fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API- works with tf.keras modelsMore information is available in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49).
###Code
# External dependencies
import os
import glob
import nvtabular as nvt
###Output
_____no_output_____
###Markdown
We define our base input directory, containing the data.
###Code
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/movielens/data/")
)
# path to save the models
MODEL_BASE_DIR = os.environ.get("MODEL_BASE_DIR", os.path.expanduser("~/nvt-examples/"))
# avoid numba warnings
from numba import config
config.CUDA_LOW_OCCUPANCY_WARNINGS = 0
###Output
_____no_output_____
###Markdown
Defining Hyperparameters First, we define the data schema and differentiate between single-hot and multi-hot categorical features. Note that we do not have any numerical input features.
###Code
BATCH_SIZE = 1024 * 32 # Batch Size
CATEGORICAL_COLUMNS = ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = ["genres"] # Multi-hot
NUMERIC_COLUMNS = []
# Output from ETL-with-NVTabular
TRAIN_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "train", "*.parquet")))
VALID_PATHS = sorted(glob.glob(os.path.join(INPUT_DATA_DIR, "valid", "*.parquet")))
###Output
_____no_output_____
###Markdown
In the previous notebook, we used NVTabular for ETL and stored the workflow to disk. We can load the NVTabular workflow to extract important metadata for our training pipeline.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
###Output
_____no_output_____
###Markdown
The embedding table shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form `(cardinality, embedding_size)`.
###Code
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(workflow)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
###Markdown
Initializing NVTabular Dataloader for Tensorflow We import TensorFlow and some NVTabular TF extensions, such as custom TensorFlow layers supporting multi-hot and the NVTabular TensorFlow data loader.
###Code
import os
import time
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.7" # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
from nvtabular.framework_utils.tensorflow import layers
###Output
_____no_output_____
###Markdown
First, we take a look at our data loader and how the data is represented as tensors. The NVTabular data loader is initialized as usual, and we specify both single-hot and multi-hot categorical features as cat_names. The data loader will automatically recognize the single/multi-hot columns and represent them accordingly.
###Code
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
)
valid_dataset_tf = KerasSequenceLoader(
VALID_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=False,
buffer_size=0.06,
parts_per_chunk=1,
)
###Output
_____no_output_____
###Markdown
Let's generate a batch and take a look at the input features.We can see that the single-hot categorical features (`userId` and `movieId`) have a shape of `(32768, 1)`, which is the batch size (as usual).For the multi-hot categorical feature `genres`, we receive two Tensors, `genres__values` and `genres__nnzs`.`genres__values` contains the actual data, i.e. the genre IDs. Note that the Tensor has more values than the batch size. The reason is that one datapoint in the batch can contain more than one genre (multi-hot).`genres__nnzs` is a supporting Tensor, describing how many genres are associated with each datapoint in the batch.For example,- if the first value in `genres__nnzs` is `5`, then the first 5 values in `genres__values` are associated with the first datapoint in the batch (movieId/userId).- if the second value in `genres__nnzs` is `2`, then the 6th and the 7th values in `genres__values` are associated with the second datapoint in the batch (continuing after the previous value stopped).- if the third value in `genres__nnzs` is `1`, then the 8th value in `genres__values` is associated with the third datapoint in the batch.- and so on
###Code
batch = next(iter(train_dataset_tf))
batch[0]
###Output
2021-12-02 01:17:48.483489: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-02 01:17:48.490106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22755 MB memory: -> device: 0, name: Quadro GV100, pci bus id: 0000:15:00.0, compute capability: 7.0
###Markdown
We can see that the sum of `genres__nnzs` is equal to the shape of `genres__values`.
###Code
tf.reduce_sum(batch[0]["genres"][1])
###Output
_____no_output_____
###Markdown
As each datapoint can have a different number of genres, it is more efficient to represent the genres as two flat tensors: One with the actual values (`genres__values`) and one with the length for each datapoint (`genres__nnzs`).
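###Markdown
To make this flat representation concrete, here is a minimal sketch (with made-up toy values, not the actual batch) showing how the two flat tensors can be recombined into one ragged row per datapoint with `tf.RaggedTensor.from_row_lengths`:
###Code
import tensorflow as tf

# hypothetical flat representation: 8 genre IDs spread over 3 datapoints
toy_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
toy_nnzs = tf.constant([5, 2, 1])                    # plays the role of genres__nnzs

# row i of the result gets the next toy_nnzs[i] consecutive values
tf.RaggedTensor.from_row_lengths(toy_values, row_lengths=toy_nnzs)
###Output
_____no_output_____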
###Code
del batch
###Output
_____no_output_____
###Markdown
Defining Neural Network Architecture We will define a common neural network architecture for tabular data.* Single-hot categorical features are fed into an Embedding Layer* Each value of a multi-hot categorical feature is fed into an Embedding Layer and the multiple Embedding outputs are combined via averaging* The outputs of the Embedding Layers are concatenated* The concatenated layers are fed through multiple feed-forward layers (Dense Layers with ReLU activations)* The final output is a single number with a sigmoid activation function First, we will define some dictionary/lists for our network architecture.
###Code
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
###Output
_____no_output_____
###Markdown
We create `tf.keras.Input` tensors for all 4 input features.
###Code
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = (tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
###Output
_____no_output_____
###Markdown
Next, we initialize Embedding Layers with `tf.feature_column.embedding_column`.
###Code
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layers
###Output
_____no_output_____
###Markdown
NVTabular implemented a custom TensorFlow layer, `layers.DenseFeatures`, which takes as input the different `tf.keras.Input` tensors and pre-initialized `tf.feature_column`s and automatically concatenates them into a flat tensor. In the case of multi-hot categorical features, `DenseFeatures` organizes the inputs `__values` and `__nnzs` to define a `RaggedTensor` and combines them. `DenseFeatures` can handle numeric inputs as well, but MovieLens does not provide numerical input features.
###Code
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x_emb_output
###Output
_____no_output_____
###Markdown
We can see that the output shape of the concatenated layer is equal to the sum of the individual Embedding output dimensions (1040 = 16+512+512).
###Code
EMBEDDING_TABLE_SHAPES
###Output
_____no_output_____
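###Markdown
As noted in the architecture description above, the embeddings of a multi-hot feature are combined by averaging. Below is a minimal standalone sketch of that idea (toy embedding table and toy IDs, not the actual `DenseFeatures` implementation), working directly on the flat values/nnzs representation:
###Code
import tensorflow as tf

# toy setup: 20 hypothetical genre IDs, embedding dimension 4
toy_embedding_table = tf.random.normal([20, 4])
toy_values = tf.constant([3, 7, 1, 4, 4, 9, 2, 5])  # plays the role of genres__values
toy_nnzs = tf.constant([5, 2, 1])                    # plays the role of genres__nnzs

# map each value to the datapoint (row) it belongs to: [0, 0, 0, 0, 0, 1, 1, 2]
segment_ids = tf.repeat(tf.range(tf.size(toy_nnzs)), toy_nnzs)
# look up one embedding per genre ID, then average the embeddings per datapoint
per_value_emb = tf.gather(toy_embedding_table, toy_values)
tf.math.segment_mean(per_value_emb, segment_ids)  # shape (3, 4)
###Output
_____no_output_____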
###Markdown
We add multiple Dense Layers. Finally, we initialize the `tf.keras.Model` and add the optimizer.
###Code
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid", name="output")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
model.compile("sgd", "binary_crossentropy")
# You need to install the dependencies
tf.keras.utils.plot_model(model)
###Output
_____no_output_____
###Markdown
Training the deep learning model We can train our model with `model.fit`. We need to use a Callback to add the validation dataloader.
###Code
validation_callback = KerasSequenceValidater(valid_dataset_tf)
EPOCHS = 1
start = time.time()
history = model.fit(train_dataset_tf, callbacks=[validation_callback], epochs=EPOCHS)
t_final = time.time() - start
total_rows = train_dataset_tf.num_rows_processed + valid_dataset_tf.num_rows_processed
print(
f"run_time: {t_final} - rows: {total_rows * EPOCHS} - epochs: {EPOCHS} - dl_thru: {(EPOCHS * total_rows) / t_final}"
)
MODEL_NAME_TF = os.environ.get("MODEL_NAME_TF", "movielens_tf")
MODEL_PATH_TEMP_TF = os.path.join(MODEL_BASE_DIR, MODEL_NAME_TF, "1/model.savedmodel")
model.save(MODEL_PATH_TEMP_TF)
###Output
2021-12-02 01:18:14.791643: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Function `_wrapped_model` contains input name(s) movieId, userId with unsupported characters which will be renamed to movieid, userid in the SavedModel.
###Markdown
Before moving to the next notebook, `04a-Triton-Inference-with-TF.ipynb`, we need to generate the Triton Inference Server configurations and save the models in the correct format. We just saved the TensorFlow model to disk, and in the previous notebook, `02-ETL-with-NVTabular`, we saved the NVTabular workflow. Let's load the workflow. The TensorFlow input layers expect the input datatype to be int32. Therefore, we need to change the output datatypes to int32 for our NVTabular workflow.
###Code
workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes["userId"] = "int32"
workflow.output_dtypes["movieId"] = "int32"
MODEL_NAME_ENSEMBLE = os.environ.get("MODEL_NAME_ENSEMBLE", "movielens")
# model path to save the models
MODEL_PATH = os.environ.get("MODEL_PATH", os.path.join(MODEL_BASE_DIR, "models"))
###Output
_____no_output_____
###Markdown
NVTabular provides a function, `export_tensorflow_ensemble`, to save the NVTabular workflow, the TensorFlow model, and the Triton Inference Server (IS) config files. We provide the model, the workflow, a model name for the ensemble model, the path, and the output column.
###Code
# Creates an ensemble triton server model, where
# model: The tensorflow model that should be served
# workflow: The nvtabular workflow used in preprocessing
# name: The base name of the various triton models
from nvtabular.inference.triton import export_tensorflow_ensemble
export_tensorflow_ensemble(model, workflow, MODEL_NAME_ENSEMBLE, MODEL_PATH, ["rating"])
###Output
_____no_output_____ |
example/transformer/load-transformer.ipynb | ###Markdown
Malaya provides a basic interface for pretrained Transformer encoder models specific to Malay, local social media slang and Manglish; we call it Transformer-Bahasa. This interface does not allow us to do custom training. If you want to download a pretrained Transformer-Bahasa model and use it for custom transfer learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/, along with some notebooks to help you get started.Or you can simply use [hugging-face transformers](https://huggingface.co/models?filter=malay) to try transformer models from Malaya; simply check the available models from here, https://huggingface.co/models?filter=malay
###Code
from IPython.core.display import Image, display
display(Image('huggingface.png', width=500))
%%time
import malaya
###Output
CPU times: user 4.85 s, sys: 1.27 s, total: 6.12 s
Wall time: 7.45 s
###Markdown
list Transformer-Bahasa available
###Code
malaya.transformer.available_model()
###Output
_____no_output_____
###Markdown
1. `bert` - BERT architecture from google.2. `tiny-bert` - BERT architecture from google with smaller parameters.3. `albert` - ALBERT architecture from google.4. `tiny-albert` - ALBERT architecture from google with smaller parameters.5. `xlnet` - XLNET architecture from google.6. `alxlnet` - Malaya architecture, unpublished model. Load XLNET-BahasaFeel free to use other models.
###Code
xlnet = malaya.transformer.load(model = 'xlnet')
strings = ['Kerajaan galakkan rakyat naik public transport tapi parking kat lrt ada 15. Reserved utk staff rapid je dah berpuluh. Park kereta tepi jalan kang kene saman dgn majlis perbandaran. Kereta pulak senang kene curi. Cctv pun tak ada. Naik grab dah 5-10 ringgit tiap hari. Gampang juga',
'Alaa Tun lek ahhh npe muka masam cmni kn agong kata usaha kerajaan terdahulu sejak selepas merdeka',
"Orang ramai cakap nurse kerajaan garang. So i tell u this. Most of our local ppl will treat us as hamba abdi and they don't respect us as a nurse"]
###Output
_____no_output_____
###Markdown
I have random sentences copied from Twitter, searched using the `kerajaan` keyword. VectorizationChange a string or a batch of strings to a latent space / vector representation.
###Code
v = xlnet.vectorize(strings)
v.shape
###Output
_____no_output_____
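###Markdown
As a quick sanity check on these vectors, here is a small sketch (assuming `v` is a 2-D array with one row vector per input string, as suggested by `v.shape` above) that compares the sentences by cosine similarity:
###Code
import numpy as np

def cosine_similarity_matrix(x):
    # pairwise cosine similarity between row vectors
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    unit = x / np.clip(norms, 1e-12, None)
    return unit @ unit.T

cosine_similarity_matrix(np.asarray(v))
###Output
_____no_output_____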
###Markdown
Attention Attention is used to get which part of the sentence gives the impact. Methods available for attention,- `'last'` - attention from the last layer.- `'first'` - attention from the first layer.- `'mean'` - average attentions from all layers. You can give a list of strings or a string to get the attention; in this documentation, I just want to use a string.
###Code
xlnet.attention(strings[1], method = 'last')
xlnet.attention(strings[1], method = 'first')
xlnet.attention(strings[1], method = 'mean')
###Output
_____no_output_____
###Markdown
Visualize Attention Before using attention visualization, we need to load D3 into our jupyter notebook first. This visualization borrows from https://github.com/jessevig/bertviz.
###Code
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
}
});
xlnet.visualize_attention('nak makan ayam dgn husein')
###Output
_____no_output_____
###Markdown
_I attached a printscreen, readthedocs cannot visualize the javascript._
###Code
from IPython.core.display import Image, display
display(Image('xlnet-attention.png', width=300))
###Output
_____no_output_____
###Markdown
Malaya provides a basic interface for pretrained Transformer encoder models specific to Malay, local social media slang and Manglish; we call it Transformer-Bahasa. This interface does not allow us to do custom training. If you want to download a pretrained Transformer-Bahasa model and use it for custom transfer learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/, along with some notebooks to help you get started.
###Code
%%time
import malaya
###Output
CPU times: user 6.64 s, sys: 1.53 s, total: 8.17 s
Wall time: 11.9 s
###Markdown
list Transformer-Bahasa available
###Code
malaya.transformer.available_model()
###Output
_____no_output_____
###Markdown
1. `bert` is the original BERT google architecture with `base` and `small` sizes.2. `xlnet` is the original XLNET google architecture with `base` size.3. `albert` is the A-Lite BERT google + toyota architecture with `base` size. Load XLNET-BahasaFeel free to use other models.
###Code
xlnet = malaya.transformer.load(model = 'xlnet')
strings = ['Kerajaan galakkan rakyat naik public transport tapi parking kat lrt ada 15. Reserved utk staff rapid je dah berpuluh. Park kereta tepi jalan kang kene saman dgn majlis perbandaran. Kereta pulak senang kene curi. Cctv pun tak ada. Naik grab dah 5-10 ringgit tiap hari. Gampang juga',
'Alaa Tun lek ahhh npe muka masam cmni kn agong kata usaha kerajaan terdahulu sejak selepas merdeka',
"Orang ramai cakap nurse kerajaan garang. So i tell u this. Most of our local ppl will treat us as hamba abdi and they don't respect us as a nurse"]
###Output
_____no_output_____
###Markdown
I have random sentences copied from Twitter, searched using the `kerajaan` keyword. VectorizationChange a string or a batch of strings to a latent space / vector representation.
###Code
v = xlnet.vectorize(strings)
v.shape
###Output
_____no_output_____
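###Markdown
As a quick sanity check on these vectors, here is a small sketch (assuming `v` is a 2-D array with one row vector per input string, as suggested by `v.shape` above) that compares the sentences by cosine similarity:
###Code
import numpy as np

def cosine_similarity_matrix(x):
    # pairwise cosine similarity between row vectors
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    unit = x / np.clip(norms, 1e-12, None)
    return unit @ unit.T

cosine_similarity_matrix(np.asarray(v))
###Output
_____no_output_____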
###Markdown
Attention Attention is used to get which part of the sentence gives the impact. Methods available for attention,- `'last'` - attention from the last layer.- `'first'` - attention from the first layer.- `'mean'` - average attentions from all layers. You can give a list of strings or a string to get the attention; in this documentation, I just want to use a string.
###Code
xlnet.attention(strings[1], method = 'last')
xlnet.attention(strings[1], method = 'first')
xlnet.attention(strings[1], method = 'mean')
###Output
_____no_output_____
###Markdown
Visualize Attention Before using attention visualization, we need to load D3 into our jupyter notebook first. This visualization borrows from https://github.com/jessevig/bertviz.
###Code
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
}
});
xlnet.visualize_attention('nak makan ayam dgn husein')
###Output
_____no_output_____
###Markdown
_I attached a printscreen, readthedocs cannot visualize the javascript._
###Code
from IPython.core.display import Image, display
display(Image('xlnet-attention.png', width=300))
###Output
_____no_output_____
###Markdown
Transformer This tutorial is available as an IPython notebook at [Malaya/example/transformer](https://github.com/huseinzol05/Malaya/tree/master/example/transformer). Malaya provides a basic interface for pretrained Transformer encoder models specific to Malay, local social media slang and Manglish; we call it Transformer-Bahasa. Below is the list of datasets we pretrained on,Standard Bahasa dataset, 1. [Malay-dataset/dumping](https://github.com/huseinzol05/Malay-Dataset/tree/master/dumping).2. [Malay-dataset/pure-text](https://github.com/huseinzol05/Malay-Dataset/tree/master/pure-text).Bahasa social media,1. [Malay-dataset/dumping/instagram](https://github.com/huseinzol05/Malay-Dataset/tree/master/dumping/instagram).2. [Malay-dataset/dumping/twitter](https://github.com/huseinzol05/Malay-Dataset/tree/master/dumping/twitter).Singlish / Manglish,1. [Malay-dataset/dumping/singlish](https://github.com/huseinzol05/Malay-Dataset/tree/master/dumping/singlish-text).2. [Malay-dataset/dumping/singapore-news](https://github.com/huseinzol05/Malay-Dataset/tree/master/dumping/singapore-news).**This interface does not allow us to do custom training**. If you want to download a pretrained Transformer-Bahasa model and use it for custom transfer learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/, along with some notebooks to help you get started.Or you can simply use [hugging-face transformers](https://huggingface.co/models?filter=ms) to try transformer models from Malaya; simply check the available models from here, https://huggingface.co/models?filter=ms
###Code
from IPython.core.display import Image, display
display(Image('huggingface.png', width=500))
%%time
import malaya
###Output
CPU times: user 4.88 s, sys: 641 ms, total: 5.52 s
Wall time: 4.5 s
###Markdown
list Transformer-Bahasa available
###Code
malaya.transformer.available_transformer()
strings = ['Kerajaan galakkan rakyat naik public transport tapi parking kat lrt ada 15. Reserved utk staff rapid je dah berpuluh. Park kereta tepi jalan kang kene saman dgn majlis perbandaran. Kereta pulak senang kene curi. Cctv pun tak ada. Naik grab dah 5-10 ringgit tiap hari. Gampang juga',
'Alaa Tun lek ahhh npe muka masam cmni kn agong kata usaha kerajaan terdahulu sejak selepas merdeka',
"Orang ramai cakap nurse kerajaan garang. So i tell u this. Most of our local ppl will treat us as hamba abdi and they don't respect us as a nurse"]
###Output
_____no_output_____
###Markdown
Load XLNET-Bahasa
###Code
xlnet = malaya.transformer.load(model = 'xlnet')
###Output
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/xlnet.py:70: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/__init__.py:81: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/xlnet.py:253: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/xlnet.py:253: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/modeling.py:686: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
INFO:tensorflow:memory input None
INFO:tensorflow:Use float type <dtype: 'float32'>
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/modeling.py:693: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/modeling.py:797: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/layers/core.py:271: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/modeling.py:99: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/__init__.py:94: The name tf.InteractiveSession is deprecated. Please use tf.compat.v1.InteractiveSession instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/__init__.py:95: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/__init__.py:96: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/__init__.py:100: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/malaya/transformers/xlnet/__init__.py:103: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
INFO:tensorflow:Restoring parameters from /Users/huseinzolkepli/Malaya/xlnet-model/base/xlnet-base/model.ckpt
###Markdown
I have random sentences copied from Twitter, searched using the `kerajaan` keyword. VectorizationChange a string or a batch of strings to a latent space / vector representation.```pythondef vectorize(self, strings: List[str]): """ Vectorize string inputs. Parameters ---------- strings : List[str] Returns ------- result: np.array """```
###Code
v = xlnet.vectorize(strings)
v.shape
###Output
_____no_output_____
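###Markdown
As a quick sanity check on these vectors, here is a small sketch (assuming `v` is a 2-D array with one row vector per input string, as suggested by `v.shape` above) that compares the sentences by cosine similarity:
###Code
import numpy as np

def cosine_similarity_matrix(x):
    # pairwise cosine similarity between row vectors
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    unit = x / np.clip(norms, 1e-12, None)
    return unit @ unit.T

cosine_similarity_matrix(np.asarray(v))
###Output
_____no_output_____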
###Markdown
Attention ```pythondef attention(self, strings: List[str], method: str = 'last', **kwargs): """ Get attention string inputs from bert attention. Parameters ---------- strings : List[str] method : str, optional (default='last') Attention layer supported. Allowed values: * ``'last'`` - attention from last layer. * ``'first'`` - attention from first layer. * ``'mean'`` - average attentions from all layers. Returns ------- result : List[List[Tuple[str, float]]] """``` You can give list of strings or a string to get the attention, in this documentation, I just want to use a string.
###Code
xlnet.attention([strings[1]], method = 'last')
xlnet.attention([strings[1]], method = 'first')
xlnet.attention([strings[1]], method = 'mean')
###Output
_____no_output_____
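###Markdown
Since `attention` returns a `List[List[Tuple[str, float]]]`, one list of `(word, weight)` pairs per input string, we can for example sort those pairs to see which words carry the most weight. A small sketch:
###Code
attn = xlnet.attention([strings[1]], method = 'mean')
# top-5 (word, weight) pairs by attention weight for the single input string
sorted(attn[0], key=lambda pair: pair[1], reverse=True)[:5]
###Output
_____no_output_____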
###Markdown
Visualize Attention Before using attention visualization, we need to load D3 into our jupyter notebook first. This visualization borrows from https://github.com/jessevig/bertviz.```pythondef visualize_attention(self, string: str): """ Visualize attention. Parameters ---------- string : str """```
###Code
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
}
});
xlnet.visualize_attention('nak makan ayam dgn husein')
###Output
_____no_output_____
###Markdown
_I attached a printscreen, readthedocs cannot visualize the javascript._
###Code
from IPython.core.display import Image, display
display(Image('xlnet-attention.png', width=300))
###Output
_____no_output_____
###Markdown
**All attention models are able to use these interfaces.** Load ELECTRA-BahasaFeel free to use other models.
###Code
electra = malaya.transformer.load(model = 'electra')
electra.attention([strings[1]], method = 'last')
###Output
_____no_output_____
###Markdown
Malaya provides a basic interface for pretrained Transformer encoder models specific to Malay, local social media slang and Manglish; we call it Transformer-Bahasa. This interface does not allow us to do custom training. If you want to download a pretrained Transformer-Bahasa model and use it for custom transfer learning, you can download it here, https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/, along with some notebooks to help you get started.Or you can simply use [hugging-face transformers](https://huggingface.co/models?filter=malay) to try transformer models from Malaya; simply check the available models from here, https://huggingface.co/models?filter=malay
###Code
from IPython.core.display import Image, display
display(Image('huggingface.png', width=500))
%%time
import malaya
###Output
CPU times: user 4.93 s, sys: 1.31 s, total: 6.25 s
Wall time: 8 s
###Markdown
list Transformer-Bahasa available
###Code
malaya.transformer.available_transformer()
###Output
_____no_output_____
###Markdown
1. `bert` - BERT architecture from google.2. `tiny-bert` - BERT architecture from google with smaller parameters.3. `albert` - ALBERT architecture from google.4. `tiny-albert` - ALBERT architecture from google with smaller parameters.5. `xlnet` - XLNET architecture from google.6. `alxlnet` Malaya architecture, unpublished model, A-lite XLNET.7. `electra` ELECTRA architecture from google.8. `small-electra` ELECTRA architecture from google with smaller parameters.
###Code
strings = ['Kerajaan galakkan rakyat naik public transport tapi parking kat lrt ada 15. Reserved utk staff rapid je dah berpuluh. Park kereta tepi jalan kang kene saman dgn majlis perbandaran. Kereta pulak senang kene curi. Cctv pun tak ada. Naik grab dah 5-10 ringgit tiap hari. Gampang juga',
'Alaa Tun lek ahhh npe muka masam cmni kn agong kata usaha kerajaan terdahulu sejak selepas merdeka',
"Orang ramai cakap nurse kerajaan garang. So i tell u this. Most of our local ppl will treat us as hamba abdi and they don't respect us as a nurse"]
###Output
_____no_output_____
###Markdown
Load XLNET-Bahasa
###Code
xlnet = malaya.transformer.load(model = 'xlnet')
###Output
INFO:tensorflow:memory input None
INFO:tensorflow:Use float type <dtype: 'float32'>
INFO:tensorflow:Restoring parameters from /Users/huseinzolkepli/Malaya/xlnet-model/base/xlnet-base/model.ckpt
###Markdown
I have random sentences copied from Twitter, searched using the `kerajaan` keyword. VectorizationChange a string or a batch of strings to a latent-space / vector representation.
###Code
v = xlnet.vectorize(strings)
v.shape
###Output
_____no_output_____
###Markdown
Attention Attention is used to get which part of the sentence gives the impact. Methods available for attention:- `'last'` - attention from the last layer.- `'first'` - attention from the first layer.- `'mean'` - average attention from all layers. You can give a list of strings or a single string to get the attention; in this documentation, I just use a string.
###Code
xlnet.attention([strings[1]], method = 'last')
xlnet.attention([strings[1]], method = 'first')
xlnet.attention([strings[1]], method = 'mean')
###Output
_____no_output_____
###Markdown
Visualize Attention Before using attention visualization, we need to load D3 into our jupyter notebook first. This visualization is borrowed from https://github.com/jessevig/bertviz .
###Code
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
}
});
xlnet.visualize_attention('nak makan ayam dgn husein')
###Output
_____no_output_____
###Markdown
_I attached a screenshot; readthedocs cannot render the javascript._
###Code
from IPython.core.display import Image, display
display(Image('xlnet-attention.png', width=300))
###Output
_____no_output_____
###Markdown
**All attention models are able to use these interfaces.** Load ELECTRA-BahasaFeel free to use other models.
###Code
electra = malaya.transformer.load(model = 'electra')
electra.attention([strings[1]], method = 'last')
###Output
_____no_output_____ |
modules/python-loops/5-exercise-loop-over-sequence.ipynb | ###Markdown
Exercise: Looping over a listIn the prior exercise you created code to prompt the user for a list of planets. In this exercise you will complete the application by displaying the planets the user entered.Below is the code for the prior exercise:
###Code
new_planet = ''
planets = []
while new_planet.lower() != 'done':
if new_planet:
planets.append(new_planet)
new_planet = input('Enter a new planet ')
###Output
_____no_output_____
###Markdown
Displaying the list of planets`planets` stores the planets the user entered. You will use a `for` loop to display the entries.Create a `for` loop to iterate over the `planets` list. You can use `planet` as the name of the variable for each planet. Inside the `for` loop, use `print` to display each `planet`.
###Code
for planet in planets:
print(planet)
###Output
_____no_output_____ |
notebooks/event2mind_sandbox.ipynb | ###Markdown
0. Preprocess Text
###Code
# note: `nlp` is the spaCy pipeline loaded further down in this notebook (spacy.load('en_coref_md'))
doc3 = nlp('I want a pony so badly!')
doc3[1:4].merge()  # merge the span 'want a pony' into a single token
for token in doc3:
print(token, token.dep_)
import spacy
import textacy
nlp = spacy.load('en_coref_md')
text = 'November was a trying month… on the 7th Dante had a major accident. 5 minutes before school and he and some friends are climbing the fence, I tell him it’s not a good idea and to get down. I turn back to talk to Jodi (on of my best mom friend’s at the school) and Dante comes to me screaming with his hand full of blood. I run him into my classroom and get him to the sink, as I turn on the water to clean the area the flap of his thumb lifts away and I see the bone. Shit. This isn’t something I can fix here, I grab my first aid kit and wrap it like crazy because it’s bleeding like crazy. I phone James and tell him to get to the ER as Dante is screaming and freaking out in the background as I’m trying to usher him back to the car as he’s bleeding like a stuffed pig. Unfortunately in the ER I learned that my child doesn’t take to freezing, an hour of gel freezing and he still felt the 2 needles as they went in, 15 minutes later and he felt the last 2 stitches of 8. He needed more because his finger still had gaps, the doctor didn’t want to cause him anymore pain so he glued them. It was an intense and deep gash that spiraled all the way up his thumb. I was trying to stay strong for him but I did break down as he screamed and cried, I was left to emotionally drained that day. James was able to take the remainder of the day off and stay with him. He missed 2 more days of school and then had an extra long weekend due to the holiday and the pro day but for 2 weeks he couldn’t write (of course it was his right hand.) 3 doctor visits later and he finally got them out full last week, the first visit the doctor wanted them in longer because of the severity. 2nd time he could only get 6 out because the glue had gotten on the last 2 stitches and he didn’t want to have to dig them out so we had to soak and dissolve the glue for 3 days. 3rd time the last 2 came out. Even now he’s slowly regaining his writing skills as there was some nerve damage.'
text = 'So I have had a good day today. I found out we got the other half of our funding for my travel grant, which paid for my friend to come with me. So that’s good, she and I will both get some money back. I took my dogs to the pet store so my girl dog could get a new collar, but she wanted to beat everyone up. This is an ongoing issue with her. She’s so little and cute too but damn she acts like she’s gonna go for the jugular with everyone she doesn’t know! She did end up with a cute new collar tho, it has pineapples on it. I went to the dentist and she’s happy with my Invisalign progress. We have three more trays and then she does an impression to make sure my teeth are where they need to be before they get the rest of the trays. YAY! And I don’t have to make another payment until closer to the end of my treatment. I had some work emails with the festival, and Jessie was bringing up some important points, and one of our potential artists was too expensive to work with, so Mutual Friend was asking for names for some other people we could work with. So I suggested like, three artists, and Jessie actually liked the idea of one of them doing it. Which is nice. I notice she is very encouraging at whatever I contribute to our collective. It’s sweet. I kind of know this is like, the only link we have with each other right now besides social media, so it seems like she’s trying to make sure I know she still wants me to be involved and doesn’t have bad feelings for me. And there was a short period when I was seriously thinking of leaving the collective and not working with this festival anymore. I was so sad, and felt so upset, and didn’t know what to do about Jessie. It felt really close to me throwing in the towel. But I hung on through the festival and it doesn’t seem so bad from this viewpoint now with more time that has passed. And we have been gentle, if reserved, with each other. I mean her last personal email to me however many weeks ago wasn’t very nice. But it seems like we’ve been able to put it aside for work reasons. I dunno. I still feel like if anything was gonna get mended between us, she would need to make the first moves on that. I really don’t want to try reaching out and get rejected even as a friend again. I miss her though. And sometimes I think she misses me. But I don’t want to approach her assuming we both miss each other and have her turn it on me again and make out like all these things are all in my head. I don’t know about that butch I went on a date with last night. I feel more of a friend vibe from her, than a romantic one. I can’t help it, I am just not attracted to butches. And I don’t know how to flirt with them. And I don’t think of them in a sexy way. But I WOULD like another butch buddy. I mean yeah maybe Femmes do play games, or maybe I just chased all the wrong Femmes. Maybe I’ll just leave this and not think about it much until I get back to town in January.'
text = 'well, i tried to get an x-ray of my neck today but, when i got to the medical center and stood in line to wait to get checked in, i was told my doctor hadn’t sent over the orders for it! and the thing is he said he had already sent it when i asked him if i needed anything to get it done. i don’t like being lied to. so, since i have to go to the medical center thursday morning for a consultation for p/t, i’ll just go on back over and get the xray. the lady there said monday and tuesday are really busy days and thursday would be much better.'
text = 'I can’t help but feel annoyed, angry, disheartened, let down… and yet in another way I want to say “you don’t deserve to know him.” Dante’s growing into such an amazing child and yet it seems our family dwindles like crazy, he brings up James’ sister “aunty Tammy” and asks why we never see her. I say she’s busy because she has 2 of her own little boys, but that’s not the case. James’ sister had this dream of being an amazing aunt to Dante and she has done nothing to be in his life. Birthday gifts, Christmas, there’s no communication or her ever asking about him, not that I even speak to her much but she just doesn’t care to be an active part in his world which pisses me off to no end. James’ mother couldn’t get herself clean to stay in his life… she’s non existent to him. It boils my blood because when he was born she was so proud, got clean for a while, and then couldn’t hack it (ended up visiting and left her morphine out where our very smart 2 year old brought us a handful of pills and asked if they were candy.) That was the last time she saw him and her memory has since been forgotten. You couldn’t even get clean to be in your grandchild’s life? She was always a pathetic excuse for a mother, James’ childhood simply enrages me, the idea of a child living the way he did because of her ways makes me sick. She doesn’t deserve to know my child. My own brother “uncle Jason” is seen in passing about 5 times a year, he’s good with Dante, pleasant enough considering my brother has so many anger issues. He’s also a drug addict so it’s not like I would ever allow him time alone with Dante, not that he’d ever want to spend time with him. Christmas is coming up and in a way it’s bittersweet. My half cousin Tianna’s two girls (Stella and Piper) have two sets of everything, tons of aunts and uncles, and they have a huge loving family unit. Dante doesn’t have that, yes his grandparents love him like crazy but his family connection is like mine. When I was young I only had one set of grandparents, my dad was adopted and his mother wasn’t around at all… in a way my grandparents adopted him as well when he married my young at the age of 18. I had my uncle George and my Omi and Opi and my mom and dad and my brother. I remember when George met Denise and I met Tianna (Denise’s child from her first marriage.) I remember the day that I learned that George had proposed to Denise and that they were getting married, I cried and my mom thought I was happy. I wasn’t happy, I was devastated that suddenly I had to share my family (horrible to thing to cry about right?) Tianna already had 2 sets of grandparents, she had tons of aunts, and now she was getting my uncle whom I loved and thought was the coolest guy around as a dad. I was so angry. I never really got to meet Tianna’s dad’s side of the family, I met her grandparents a few times but they never remembered my name which really hurt and annoyed me. I joined soccer and Tianna’s dad was the coach, he wasn’t nice to me which drove the wedge deeper. It hurts… it hurts that my child has the same issue that I did although I really don’t think he’s realized that he’s different. His “grandpa Morgan” isn’t his real grandpa, more like a man who took on his father and tried to “raise” him to be a man, he obviously didn’t stay with James’ mom but he’s still in our life and I’m grateful that he’s there. Unfortunately he’s not around much, we see him 2-3 times a year because he lives in Golden. 
Uncle Adam is James’ childhood best friend, a good guy who’s more of a businessman who lives to the beat of his own drum and wouldn’t know what to do with a child if his life depended on it. He’s around but again it’s only a few times a year when he visits from Calgary. I want to say sometimes you get to choose your own family, but even the family I chose for him and thought would be around forever, the people who were there when he was born, grew, shared so many moments with are no longer around. They don’t seem to care either, it’s not like “aunty Kat” ever talks to me or asks about him. Seems like moving meant the end of our friendship and our “family ties.”'
text = "Sheila was run over by a truck. She herself didn't see that coming. I told her she should take care of herself, but I know she'll just go and do her thing regardless of what I say. What a conundrum! This makes me wish I had never signed up to be friends with her, although I do love the girl."
preprocessed = textacy.preprocess.normalize_whitespace(text)
preprocessed = textacy.preprocess.preprocess_text(preprocessed, fix_unicode=True, no_contractions=True, no_accents=True)
doc = nlp(preprocessed)
###Output
_____no_output_____
###Markdown
1. Extract People Extract Named Entities
###Code
for ent in doc.ents:
print(ent.text, ent.label_)
people = set([ent.text for ent in doc.ents if ent.label_ == 'PERSON'])
people
textacy.text_utils.keyword_in_context(doc.text, 'Christmas')
###Output
_____no_output_____
###Markdown
Named Entity Relations
###Code
for ent in doc.ents:
if ent.label_ == 'PERSON':
token = ent.root
print(ent.text, token.dep_, token.head.text)
###Output
Sheila nsubjpass run
###Markdown
Coreference Resolution AllenNLP
###Code
from allennlp.models.archival import load_archive
from allennlp.predictors.predictor import Predictor
archive = load_archive('../data/coref-model.tar.gz')
predictor = Predictor.from_archive(archive)
coref = predictor.predict(document = doc.text)
for cluster in coref['clusters']:
spans = [doc[first:last+1] for first, last in cluster]
print(spans)
def coref_resolved(doc, coref):
    # annotate each coreference mention with its cluster index, e.g. "[Sheila(0)]"
    resolved = [token.text for token in doc]
    for cluster_n, cluster in enumerate(coref['clusters']):
        for first, last in cluster:
            if first == last:
                resolved[first] = '[' + doc[first].text + '(' + str(cluster_n) + ')]'
            else:
                resolved[first] = '[' + doc[first].text
                resolved[last] = doc[last].text + '(' + str(cluster_n) + ')]'
    return ' '.join(resolved)
coref_resolved(doc, coref)
###Output
_____no_output_____
###Markdown
NeuralCoref(maybe not quite as good? maybe it is. certainly easier.)(I think I'll use this together with named entity recognition to ID unique people)
###Code
doc._.has_coref
doc._.coref_clusters
doc.text
doc._.coref_resolved
###Output
_____no_output_____
###Markdown
Use NEM & Coref to ID unique peopleNEM can tell which clusters are peopleNEM can give clusters better namesNEM can link clusters togetherNEM can tell whether a cluster contains a name
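A compact sketch of that idea — label each coref cluster with a PERSON entity it contains, if any — could look like the following (it assumes `doc` has both `.ents` and `._.coref_clusters` populated, as in the cells above):

```python
# Sketch: name coref clusters by the PERSON entity they contain, if any.
person_names = {ent.text for ent in doc.ents if ent.label_ == 'PERSON'}
for cluster in (doc._.coref_clusters or []):
    name = next((m.text for m in cluster.mentions if m.text in person_names), None)
    print(name or '<unnamed>', '->', [m.text for m in cluster.mentions])
```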
###Code
doc._.coref_clusters
doc._.coref_resolved
people = set([ent.text for ent in doc.ents if ent.label_ == 'PERSON'])
people
[ent for ent in doc.ents if ent.label_ == 'PERSON']
# for now assuming all names are unique identifiers
class Person:
    def __init__(self, name, pronouns=None, mentions=None, user=False):
        self.name = name
        # self.gender = gender
        self.pronouns = pronouns
        self.mentions = mentions if mentions is not None else []  # avoid a shared mutable default
        self.statements = []  # per-instance, not shared across all Person objects
        self.user = user
import spacy
import textacy
print('loading en_coref_md...')
nlp = spacy.load('en_coref_sm')
print('done')
# for now assuming all names are unique identifiers
class Person:
    def __init__(self, name, refs=None):
        self.name = name
        self.refs = refs if refs is not None else []  # avoid a shared mutable default
        self.statements = []
# UPGRADE AT SOME POINT TO EXTRACT GENDER, ACCOUNT FOR CLUSTERS WITHOUT NAMES
# UPGRADE TO INCLUDE I, USER
# assumes names are unique identifiers
# assumes misspellings are diff people
# MEMORYLESS FOR NOW; each change to text means a whole new model
# Set extensions later, for keeping track of which tokens are what
class Model:
def __init__(self, text):
self.raw = text
preprocessed = textacy.preprocess.normalize_whitespace(text)
preprocessed = textacy.preprocess.preprocess_text(preprocessed, fix_unicode=True, no_contractions=True, no_accents=True)
self.doc = nlp(preprocessed)
self.people = []
self.extract_people()
self.resolved_text = self.get_resolved_text()
self.resolved_doc = nlp(self.resolved_text)
self.extract_statements()
def get_person_by_name(self, name):
for person in self.people:
if person.name == name:
return person
return None
def extract_people(self):
namedrops = [ent for ent in self.doc.ents if ent.label_ == 'PERSON']
names = set([namedrop.text for namedrop in namedrops])
# for clusters that include namedrops
if self.doc._.coref_clusters != None:
for cluster in self.doc._.coref_clusters:
name = None
for mention in cluster.mentions:
mention_text = mention.root.text
if mention_text in names:
name = mention_text
if name != None:
person = self.get_person_by_name(name)
if person == None:
self.people += [Person(name, refs=cluster.mentions)]
else:
person.refs = list(set(person.refs + cluster.mentions))
# for named entities without clusters (single mentions)
for namedrop in namedrops:
person = self.get_person_by_name(namedrop.text)
if person == None:
self.people += [Person(namedrop.text, refs=[namedrop])]
else:
person.refs = list(set(person.refs + [namedrop]))
# for user (first person refs)
refs = []
for token in self.doc:
pronoun = token.tag_ in ['PRP', 'PRP$']
first_person = token.text.lower() in ['i', 'me', 'my', 'mine', 'myself']
if pronoun and first_person:
start = token.i - token.n_lefts
end = token.i + token.n_rights + 1
ref = self.doc[start:end]
refs += [ref]
self.people += [Person('User', refs)]
def get_resolved_text(self):
resolved_text = [token.text_with_ws for token in self.doc]
for person in self.people:
for ref in person.refs:
# determine resolved value
# resolved_value = '[' + person.name.upper() + ']'
resolved_value = person.name.upper()
if ref.root.tag_ == 'PRP$':
resolved_value += '\'s'
if ref.text_with_ws[-1] == ' ':
resolved_value += ' '
# set first token to value, remaining tokens to ''
resolved_text[ref.start] = resolved_value
for i in range(ref.start+1, ref.end):
resolved_text[i] = ''
return ''.join(resolved_text)
def extract_statements(self):
for person in self.people:
statements = []
for ref in person.refs:
head = ref.root.head
if head.pos_ == 'VERB':
for statement in textacy.extract.semistructured_statements(self.resolved_doc, person.name, head.lemma_):
statements += [statement]
person.statements = list(set(person.statements + statements))
model = Model(text)  # the Model class above extracts people and statements in __init__
print()
for person in model.people:
    print(person.name, person.refs)
model.people[0].statements
for person in model.people:
print('PERSON', person.name)
for entity, cue, fragment in person.statements:
print(entity, '-', cue, '-', fragment)
herself = model.get_person_by_name('Sheila').refs[5]
print(herself.start, herself.end, herself.text)
model.resolved_text
###Output
_____no_output_____
###Markdown
Using NeuralCoref Scores to Improve Coref Resolution https://modelzoo.co/model/neuralcoref Extract Events Extract Subject Verb Object Triples Words
###Code
svo_triples = textacy.extract.subject_verb_object_triples(doc)
for subj, verb, obj in svo_triples:
print(subj, '-', verb, '-', obj)
###Output
I - told - her
she - should take - care
I - do love - girl
###Markdown
Phrases
###Code
svo_triples = textacy.extract.subject_verb_object_triples(doc)
for subj, verb, obj in svo_triples:
subj_phrase = ' '.join([token.text for token in subj.root.subtree])
obj_phrase = ' '.join([token.text for token in obj.root.subtree])
# start, end = textacy.spacier.utils.get_span_for_verb_auxiliaries(verb.root)
# verb_phrase = doc[start:end+1]
print(subj, '-', verb, '-', obj)
###Output
I - told - her
she - should take - care
I - do love - girl
###Markdown
Extract Semistructured Statements
###Code
doc = nlp('Uncle Tim was an old person')
text = 'So I have had a good day today. I found out we got the other half of our funding for my travel grant, which paid for my friend to come with me. So that’s good, she and I will both get some money back. I took my dogs to the pet store so my girl dog could get a new collar, but she wanted to beat everyone up. This is an ongoing issue with her. She’s so little and cute too but damn she acts like she’s gonna go for the jugular with everyone she doesn’t know! She did end up with a cute new collar tho, it has pineapples on it. I went to the dentist and she’s happy with my Invisalign progress. We have three more trays and then she does an impression to make sure my teeth are where they need to be before they get the rest of the trays. YAY! And I don’t have to make another payment until closer to the end of my treatment. I had some work emails with the festival, and Jessie was bringing up some important points, and one of our potential artists was too expensive to work with, so Mutual Friend was asking for names for some other people we could work with. So I suggested like, three artists, and Jessie actually liked the idea of one of them doing it. Which is nice. I notice she is very encouraging at whatever I contribute to our collective. It’s sweet. I kind of know this is like, the only link we have with each other right now besides social media, so it seems like she’s trying to make sure I know she still wants me to be involved and doesn’t have bad feelings for me. And there was a short period when I was seriously thinking of leaving the collective and not working with this festival anymore. I was so sad, and felt so upset, and didn’t know what to do about Jessie. It felt really close to me throwing in the towel. But I hung on through the festival and it doesn’t seem so bad from this viewpoint now with more time that has passed. And we have been gentle, if reserved, with each other. I mean her last personal email to me however many weeks ago wasn’t very nice. But it seems like we’ve been able to put it aside for work reasons. I dunno. I still feel like if anything was gonna get mended between us, she would need to make the first moves on that. I really don’t want to try reaching out and get rejected even as a friend again. I miss her though. And sometimes I think she misses me. But I don’t want to approach her assuming we both miss each other and have her turn it on me again and make out like all these things are all in my head. I don’t know about that butch I went on a date with last night. I feel more of a friend vibe from her, than a romantic one. I can’t help it, I am just not attracted to butches. And I don’t know how to flirt with them. And I don’t think of them in a sexy way. But I WOULD like another butch buddy. I mean yeah maybe Femmes do play games, or maybe I just chased all the wrong Femmes. Maybe I’ll just leave this and not think about it much until I get back to town in January.'
preprocessed = textacy.preprocess.preprocess_text(text, fix_unicode=True, no_contractions=True, no_accents=True)
doc = nlp(preprocessed)
verbs = textacy.spacier.utils.get_main_verbs_of_sent([sent for sent in doc.sents][0])
print(verbs)
verb_lemmas = [verb.lemma_ for verb in verbs]
print(verb_lemmas)
for person in model.people:
    for mention in person.refs:
print(mention, mention.root.head, mention.root.head.pos_)
doc
res = nlp(model.resolved_text)
res
###Output
_____no_output_____
###Markdown
Verb parents of People
###Code
# doc
statements = []
for person in model.people:
    for mention in person.refs:
head = mention.root.head
# print(person.name, mention.text, head.lemma_)
if head.pos_ == 'VERB':
for statement in textacy.extract.semistructured_statements(doc, mention.text, head.lemma_):
statements += [statement]
for statement in set(statements):
print(statement)
# RESOLVED DOC
statements = []
for person in model.people:
    for mention in person.refs:
head = mention.root.head
# print(person.name, mention.text, head.lemma_)
if head.pos_ == 'VERB':
for statement in textacy.extract.semistructured_statements(res, person.name, head.lemma_):
statements += [statement]
for statement in set(statements):
print(statement)
###Output
_____no_output_____
###Markdown
People children of main verbs
###Code
# doc
verbs = []
for sent in doc.sents:
verbs += textacy.spacier.utils.get_main_verbs_of_sent(sent)
statements = []
for person in model.people:
    for mention in person.refs:
for verb in set(verbs):
for statement in textacy.extract.semistructured_statements(doc, mention.text, verb.lemma_):
statements += [statement]
for statement in set(statements):
print(statement)
# RESOLVED DOC
verbs = []
for sent in doc.sents:
verbs += textacy.spacier.utils.get_main_verbs_of_sent(sent)
statements = []
for person in model.people:
for verb in set(verbs):
for statement in textacy.extract.semistructured_statements(res, person.name, verb.lemma_):
statements += [statement]
for statement in set(statements):
print(statement)
###Output
(Sheila, get, some money back)
(Sheila, have had, a good day today)
###Markdown
AllenNLP OIE(meh, doesn't seem to outperform extract_semistructured?)
###Code
from allennlp.models.archival import load_archive
from allennlp.predictors.predictor import Predictor
archive = load_archive('../data/openie-model.tar.gz')
oie_predictor = Predictor.from_archive(archive)
oie_predictor.predict(sentence='I feel sad because it is raining outside')
### In Resolved
predictions = []
for sent in res.sents:
    print('sent:', sent)
predictions += [oie_predictor.predict(sentence=sent.text)]
model = Model(text)
model.resolved_text
predictions = []
for sent in model.resolved_doc.sents:
print('SENT:', sent)
prediction = oie_predictor.predict(sentence=sent.text)
predictions += [prediction]
for verb in prediction['verbs']:
print(verb['description'])
print()
descriptions = []
for prediction in predictions:
for verb in prediction['verbs']:
descriptions += [verb['description']]
print(verb['description'])
print()
predictions[11]
oie_predictor.predict(
sentence="I feel bad."
)
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/openie-model.2018-08-20.tar.gz")
predictor.predict(
sentence="John decided to run for office next month."
)
###Output
12/06/2018 21:16:45 - INFO - allennlp.common.file_utils - https://s3-us-west-2.amazonaws.com/allennlp/models/openie-model.2018-08-20.tar.gz not found in cache, downloading to /tmp/tmpzzo9jnen
100%|██████████| 65722182/65722182 [00:18<00:00, 3518115.57B/s]
12/06/2018 21:17:04 - INFO - allennlp.common.file_utils - copying /tmp/tmpzzo9jnen to cache at /home/russell/.allennlp/cache/dd04ba717be48bea13525e4293a243477876cdb0f0166abb8b09b5ed2e17cb3e.d68991c3e6de7fbcb5cf3e605d0e298f12cb857ca9d70aa8683abc886aa49edd
12/06/2018 21:17:04 - INFO - allennlp.common.file_utils - creating metadata file for /home/russell/.allennlp/cache/dd04ba717be48bea13525e4293a243477876cdb0f0166abb8b09b5ed2e17cb3e.d68991c3e6de7fbcb5cf3e605d0e298f12cb857ca9d70aa8683abc886aa49edd
12/06/2018 21:17:04 - INFO - allennlp.common.file_utils - removing temp file /tmp/tmpzzo9jnen
12/06/2018 21:17:04 - INFO - allennlp.models.archival - loading archive file https://s3-us-west-2.amazonaws.com/allennlp/models/openie-model.2018-08-20.tar.gz from cache at /home/russell/.allennlp/cache/dd04ba717be48bea13525e4293a243477876cdb0f0166abb8b09b5ed2e17cb3e.d68991c3e6de7fbcb5cf3e605d0e298f12cb857ca9d70aa8683abc886aa49edd
12/06/2018 21:17:04 - INFO - allennlp.models.archival - extracting archive file /home/russell/.allennlp/cache/dd04ba717be48bea13525e4293a243477876cdb0f0166abb8b09b5ed2e17cb3e.d68991c3e6de7fbcb5cf3e605d0e298f12cb857ca9d70aa8683abc886aa49edd to temp dir /tmp/tmpxls8z_09
12/06/2018 21:17:05 - INFO - allennlp.common.params - type = default
12/06/2018 21:17:05 - INFO - allennlp.data.vocabulary - Loading token dictionary from /tmp/tmpxls8z_09/vocabulary.
12/06/2018 21:17:05 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.models.model.Model'> from params {'binary_feature_dim': 100, 'encoder': {'hidden_size': 300, 'input_size': 200, 'num_layers': 8, 'recurrent_dropout_probability': 0.1, 'type': 'alternating_lstm', 'use_highway': True}, 'initializer': [['tag_projection_layer.*weight', {'type': 'orthogonal'}]], 'text_field_embedder': {'tokens': {'embedding_dim': 100, 'trainable': True, 'type': 'embedding'}}, 'type': 'srl'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fcab0068240>}
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.type = srl
12/06/2018 21:17:05 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.models.semantic_role_labeler.SemanticRoleLabeler'> from params {'binary_feature_dim': 100, 'encoder': {'hidden_size': 300, 'input_size': 200, 'num_layers': 8, 'recurrent_dropout_probability': 0.1, 'type': 'alternating_lstm', 'use_highway': True}, 'initializer': [['tag_projection_layer.*weight', {'type': 'orthogonal'}]], 'text_field_embedder': {'tokens': {'embedding_dim': 100, 'trainable': True, 'type': 'embedding'}}} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fcab0068240>}
12/06/2018 21:17:05 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.text_field_embedders.text_field_embedder.TextFieldEmbedder'> from params {'tokens': {'embedding_dim': 100, 'trainable': True, 'type': 'embedding'}} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fcab0068240>}
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.type = basic
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.embedder_to_indexer_map = None
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.allow_unmatched_keys = False
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.token_embedders = None
12/06/2018 21:17:05 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.token_embedders.token_embedder.TokenEmbedder'> from params {'embedding_dim': 100, 'trainable': True, 'type': 'embedding'} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fcab0068240>}
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.type = embedding
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.num_embeddings = None
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.vocab_namespace = tokens
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.embedding_dim = 100
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.pretrained_file = None
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.projection_dim = None
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.trainable = True
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.padding_index = None
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.max_norm = None
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.norm_type = 2.0
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.scale_grad_by_freq = False
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.text_field_embedder.tokens.sparse = False
12/06/2018 21:17:05 - INFO - allennlp.common.from_params - instantiating class <class 'allennlp.modules.seq2seq_encoders.seq2seq_encoder.Seq2SeqEncoder'> from params {'hidden_size': 300, 'input_size': 200, 'num_layers': 8, 'recurrent_dropout_probability': 0.1, 'type': 'alternating_lstm', 'use_highway': True} and extras {'vocab': <allennlp.data.vocabulary.Vocabulary object at 0x7fcab0068240>}
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.type = alternating_lstm
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.batch_first = True
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.stateful = False
12/06/2018 21:17:05 - INFO - allennlp.common.params - Converting Params object to dict; logging of default values will not occur when dictionary parameters are used subsequently.
12/06/2018 21:17:05 - INFO - allennlp.common.params - CURRENTLY DEFINED PARAMETERS:
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.hidden_size = 300
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.input_size = 200
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.num_layers = 8
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.recurrent_dropout_probability = 0.1
12/06/2018 21:17:05 - INFO - allennlp.common.params - model.encoder.use_highway = True
12/06/2018 21:17:06 - INFO - allennlp.common.params - model.binary_feature_dim = 100
12/06/2018 21:17:06 - INFO - allennlp.common.params - model.embedding_dropout = 0.0
12/06/2018 21:17:06 - INFO - allennlp.common.params - model.initializer = [['tag_projection_layer.*weight', {'type': 'orthogonal'}]]
12/06/2018 21:17:06 - INFO - allennlp.common.params - model.initializer.list.list.type = orthogonal
12/06/2018 21:17:06 - INFO - allennlp.common.params - Converting Params object to dict; logging of default values will not occur when dictionary parameters are used subsequently.
12/06/2018 21:17:06 - INFO - allennlp.common.params - CURRENTLY DEFINED PARAMETERS:
12/06/2018 21:17:06 - INFO - allennlp.common.params - model.label_smoothing = None
12/06/2018 21:17:06 - INFO - allennlp.common.params - model.ignore_span_metric = False
12/06/2018 21:17:06 - INFO - allennlp.nn.initializers - Initializing parameters
12/06/2018 21:17:06 - INFO - allennlp.nn.initializers - Initializing tag_projection_layer._module.weight using tag_projection_layer.*weight intitializer
12/06/2018 21:17:06 - INFO - allennlp.nn.initializers - Done initializing parameters; the following parameters are using their default initialization from their code
12/06/2018 21:17:06 - INFO - allennlp.nn.initializers - binary_feature_embedding.weight
12/06/2018 21:17:06 - INFO - allennlp.nn.initializers - encoder._module.layer_0.input_linearity.bias
12/06/2018 21:17:06 - INFO - allennlp.nn.initializers - encoder._module.layer_0.input_linearity.weight
###Markdown
Decomposable Attention
###Code
from allennlp.predictors import Predictor
predictor = Predictor.from_path("https://s3-us-west-2.amazonaws.com/allennlp/models/decomposable-attention-elmo-2018.02.19.tar.gz")
prediction = predictor.predict(
hypothesis="Two women are sitting on a blanket near some rocks talking about politics.",
premise="Two women are wandering along the shore drinking iced tea."
)
prediction
type(prediction['premise_tokens'][0])
import pandas as pd
doc = nlp("I guess I am feeling kinda tired. I feel overwhelmed, a bit, maybe hungry. I dunno. I find myself wanting something, but I'm not sure what it is. I feel stressed certainly, too much to do maybe? But I'm not totally sure what I should be doing? Now it's a lot later and it's really time for me to get to bed...but a part of me wants to stay up, nonetheless")
results = pd.DataFrame([], columns=['premise', 'hypothesis', 'entailment', 'contradiction', 'neutral', 'e+c'])
i = 0
for premise in doc.sents:
# entailment, contradiction, neutral = None
for hypothesis in doc.sents:
if (premise != hypothesis):
prediction = predictor.predict(hypothesis=hypothesis.text, premise=premise.text)
entailment, contradiction, neutral = prediction['label_probs']
results.loc[i] = [premise.text, hypothesis.text, entailment, contradiction, neutral, (entailment + (1 - contradiction)) / 2]
i += 1
results.sort_values(by='e+c', ascending=False).loc[results['neutral'] < .5]
hypothesis = 'I feel stressed'
results = pd.DataFrame([], columns=['premise', 'hypothesis', 'entailment', 'contradiction', 'neutral'])
i = 0
for premise in doc.sents:
prediction = predictor.predict(hypothesis=hypothesis, premise=premise.text)
entailment, contradiction, neutral = prediction['label_probs']
results.loc[i] = [premise.text, hypothesis, entailment, contradiction, neutral]
i += 1
results.sort_values(by='entailment', ascending=False)
# snippet that appears to come from spaCy's keras_parikh_entailment example;
# KerasSimilarityShim must be imported from that example code for this cell to run
def demo(shape):
    nlp = spacy.load('en_vectors_web_lg')
    nlp.add_pipe(KerasSimilarityShim.load(nlp.path / 'similarity', nlp, shape[0]))
doc1 = nlp(u'The king of France is bald.')
doc2 = nlp(u'France has no king.')
print("Sentence 1:", doc1)
print("Sentence 2:", doc2)
entailment_type, confidence = doc1.similarity(doc2)
print("Entailment type:", entailment_type, "(Confidence:", confidence, ")")
from textacy.vsm import Vectorizer
vectorizer = Vectorizer(
tf_type='linear', apply_idf=True, idf_type='smooth', norm='l2',
min_df=3, max_df=0.95, max_n_terms=100000
)
topic_model = textacy.tm.TopicModel('nmf', n_topics=20)  # renamed so it does not clobber the coref `model` above
# topic_model.fit(doc_term_matrix)  # fitting would need a document-term matrix built with the vectorizer above
import textacy.keyterms
terms = textacy.keyterms.key_terms_from_semantic_network(doc)
terms
terms = textacy.keyterms.sgrank(doc)
terms
doc.text
import textacy.lexicon_methods
textacy.lexicon_methods.download_depechemood(data_dir='data')
textacy.lexicon_methods.emotional_valence(words=[word for word in doc], dm_data_dir='data/DepecheMood_V1.0')
from event2mind_hack import load_event2mind_archive
from allennlp.predictors.predictor import Predictor
archive = load_event2mind_archive('data/event2mind.tar.gz')
predictor = Predictor.from_archive(archive)
prediction = predictor.predict(  # keep the result; it is unpacked into DataFrames below
    source="PersonX drops a hint"
)
import math
math.exp(-1)
import pandas as pd
import math
xintent = pd.DataFrame({
'tokens': prediction['xintent_top_k_predicted_tokens'],
'p_log': prediction['xintent_top_k_log_probabilities']
})
xintent['p'] = xintent['p_log'].apply(math.exp)
xintent.sort_values(by='p', ascending=False)
xreact = pd.DataFrame({
'tokens': prediction['xreact_top_k_predicted_tokens'],
'p_log': prediction['xreact_top_k_log_probabilities']
})
xreact['p'] = xreact['p_log'].apply(math.exp)
xreact.sort_values(by='p', ascending=False)
oreact = pd.DataFrame({
'tokens': prediction['oreact_top_k_predicted_tokens'],
'p_log': prediction['oreact_top_k_log_probabilities']
})
oreact['p'] = oreact['p_log'].apply(math.exp)
oreact.sort_values(by='p', ascending=False)
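# Possible refactor (sketch): the same log-probability -> probability conversion
# is repeated for xintent, xreact and oreact above, so a small helper keeps it
# in one place.  Assumes `prediction`, `pd` and `math` as defined above.
def topk_frame(prediction, prefix):
    frame = pd.DataFrame({
        'tokens': prediction[prefix + '_top_k_predicted_tokens'],
        'p_log': prediction[prefix + '_top_k_log_probabilities'],
    })
    frame['p'] = frame['p_log'].apply(math.exp)
    return frame.sort_values(by='p', ascending=False)

topk_frame(prediction, 'xintent')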
###Output
_____no_output_____ |
graph_prep.ipynb | ###Markdown
Examine Distribution of Utility Bill Dates in ARIS data
###Code
dfaris = pd.read_pickle('data/aris_records.pkl')
dfaris.head()
dfaris['Thru_year'] = [x.year for x in dfaris.Thru]
dfaris.head()
site_yr = list(set(zip(dfaris['Site ID'], dfaris.Thru_year)))
len(site_yr)
dfsu = pd.DataFrame(site_yr, columns=['site_id', 'year'])
dfsu.head()
dfsu.year.hist()
df_yr_ct = dfsu.groupby('site_id').count()
df_yr_ct.year.hist()
xlabel('Number of Years of data')
ylabel('Number of Sites')
df_yr_ct.query('year > 8')
len(df_yr_ct)
###Output
_____no_output_____
###Markdown
Prep for ECI/EUI Comparison Graphs
###Code
df = pickle.load(open('df_processed.pkl', 'rb'))
ut = pickle.load(open('util_obj.pkl', 'rb'))
df.head()
last_complete_year = 2017
df1 = df.query('fiscal_year == @last_complete_year')
# Get Total Utility cost by building. This includes non-energy utilities as well.
df2 = df1.pivot_table(index='site_id', values=['cost'], aggfunc=np.sum)
df2['fiscal_year'] = last_complete_year
df2.reset_index(inplace=True)
df2.set_index(['site_id', 'fiscal_year'], inplace=True)
df2 = bu.add_month_count_column_by_site(df2, df1)
df2.head()
df2.query('month_count==12').head()
df.sum()
df.service_type.unique()
reload(bu)
# Filter down to only services that are energy services.
energy_services = bu.missing_energy_services([])
df4 = df.query('service_type==@energy_services').copy()
# Sum Energy Costs and Usage
df5 = pd.pivot_table(df4, index=['site_id', 'fiscal_year'], values=['cost', 'mmbtu'], aggfunc=np.sum)
df5.head()
# Add a column showing number of months present in each fiscal year.
df5 = bu.add_month_count_column_by_site(df5, df4)
df5.head()
dfe = df4.query("service_type=='Electricity'").groupby(['site_id', 'fiscal_year']).sum()[['mmbtu']]
dfe.rename(columns={'mmbtu': 'elec_mmbtu'}, inplace = True)
df5 = df5.merge(dfe, how='left', left_index=True, right_index=True)
df5['elec_mmbtu'] = df5['elec_mmbtu'].fillna(0.0)
df5['heat_mmbtu'] = df5.mmbtu - df5.elec_mmbtu
df5.head()
# Create a DataFrame with site, year, month and degree-days, but only one row
# for each site/year/month combo.
dfd = df4[['site_id', 'fiscal_year', 'fiscal_mo']].copy()
dfd.drop_duplicates(inplace=True)
ut.add_degree_days_col(dfd)
# Use the agg function below so that a NaN will be returned for the year
# if any monthly values are NaN
dfd = dfd.groupby(['site_id', 'fiscal_year']).agg({'degree_days': lambda x: np.sum(x.values)})[['degree_days']]
dfd.head()
df5 = df5.merge(dfd, how='left', left_index=True, right_index=True)
df5.head()
# Add in some needed building like square footage, primary function
# and building category.
df_bldg = ut.building_info_df()
df_bldg.head()
# Shrink to just the needed fields and remove index
df_info = df_bldg[['sq_ft', 'site_category', 'primary_func']].copy().reset_index()
# Remove the index from df5 so that merging is easier.
df5.reset_index(inplace=True)
# merge in building info
df5 = df5.merge(df_info, how='left')
df5.head()
df5.tail()
# Look at one that is missing from Building Info to see if
# Left join worked.
df5.query('site_id == "TWOCOM"')
df5['eui'] = df5.mmbtu * 1e3 / df5.sq_ft                                  # energy use index, kBtu / ft^2
df5['eci'] = df5.cost / df5.sq_ft                                         # energy cost index, $ / ft^2
df5['specific_eui'] = df5.heat_mmbtu * 1e6 / df5.degree_days / df5.sq_ft  # heating Btu / ft^2 / degree-day
# Restrict to full years
df5 = df5.query("month_count == 12").copy()
df5.head()
df5 = df5[['site_id', 'fiscal_year', 'eui', 'eci', 'specific_eui', 'site_category', 'primary_func']].copy()
df5.head()
df5.to_pickle('df5.pkl')
pd.read_pickle('df5.pkl').head()
site_id = '03'
df = pd.read_pickle('df_processed.pkl', compression='bz2')
df_utility_cost = pd.read_pickle('df_utility_cost.pkl')
df_usage = pd.read_pickle('df_usage.pkl')
util_obj = pickle.load(open('util_obj.pkl', 'rb'))
df_utility_cost.head()
df_usage.head()
###Output
_____no_output_____ |
guides/ipynb/keras_tuner/distributed_tuning.ipynb | ###Markdown
Distributed hyperparameter tuning**Authors:** Tom O'Malley, Haifeng Jin**Date created:** 2019/10/24**Last modified:** 2021/06/02**Description:** Tuning the hyperparameters of the models with multiple GPUs and multiple machines. IntroductionKerasTuner makes it easy to perform distributed hyperparameter search. Nochanges to your code are needed to scale up from running single-threadedlocally to running on dozens or hundreds of workers in parallel. DistributedKerasTuner uses a chief-worker model. The chief runs a service to which theworkers report results and query for the hyperparameters to try next. The chiefshould be run on a single-threaded CPU instance (or alternatively as a separateprocess on one of the workers). Configuring distributed modeConfiguring distributed mode for KerasTuner only requires setting threeenvironment variables:**KERASTUNER_TUNER_ID**: This should be set to "chief" for the chief process.Other workers should be passed a unique ID (by convention, "tuner0", "tuner1",etc).**KERASTUNER_ORACLE_IP**: The IP address or hostname that the chief serviceshould run on. All workers should be able to resolve and access this address.**KERASTUNER_ORACLE_PORT**: The port that the chief service should run on. Thiscan be freely chosen, but must be a port that is accessible to the otherworkers. Instances communicate via the [gRPC](https://www.grpc.io) protocol.The same code can be run on all workers. Additional considerations fordistributed mode are:- All workers should have access to a centralized file system to which they canwrite their results.- All workers should be able to access the necessary training and validationdata needed for tuning.- To support fault-tolerance, `overwrite` should be kept as `False` in`Tuner.__init__` (`False` is the default).Example bash script for chief service (sample code for `run_tuning.py` atbottom of page):```export KERASTUNER_TUNER_ID="chief"export KERASTUNER_ORACLE_IP="127.0.0.1"export KERASTUNER_ORACLE_PORT="8000"python run_tuning.py```Example bash script for worker:```export KERASTUNER_TUNER_ID="tuner0"export KERASTUNER_ORACLE_IP="127.0.0.1"export KERASTUNER_ORACLE_PORT="8000"python run_tuning.py``` Data parallelism with `tf.distribute`KerasTuner also supports data parallelism via[tf.distribute](https://www.tensorflow.org/tutorials/distribute/keras). Dataparallelism and distributed tuning can be combined. For example, if you have 10workers with 4 GPUs on each worker, you can run 10 parallel trials with eachtrial training on 4 GPUs by using[tf.distribute.MirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy).You can also run each trial on TPUs via[tf.distribute.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy).Currently[tf.distribute.MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy)is not supported, but support for this is on the roadmap. Example codeWhen the enviroment variables described above are set, the example below willrun distributed tuning and use data parallelism within each trial via`tf.distribute`. The example loads MNIST from `tensorflow_datasets` and uses[Hyperband](https://arxiv.org/pdf/1603.06560.pdf) for the hyperparametersearch.
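For quick local experiments, the same three variables can also be set from Python before the tuner is constructed; the sketch below simply mirrors the bash scripts above and is not required when the variables are exported in the shell.

```python
# Sketch: configure the chief/worker role from Python instead of bash.
# Values mirror the example scripts above.
import os

os.environ.setdefault("KERASTUNER_TUNER_ID", "chief")      # or "tuner0", "tuner1", ...
os.environ.setdefault("KERASTUNER_ORACLE_IP", "127.0.0.1")
os.environ.setdefault("KERASTUNER_ORACLE_PORT", "8000")
```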
###Code
import keras_tuner as kt
import tensorflow as tf
import numpy as np
def build_model(hp):
"""Builds a convolutional model."""
inputs = tf.keras.Input(shape=(28, 28, 1))
x = inputs
for i in range(hp.Int("conv_layers", 1, 3, default=3)):
x = tf.keras.layers.Conv2D(
filters=hp.Int("filters_" + str(i), 4, 32, step=4, default=8),
kernel_size=hp.Int("kernel_size_" + str(i), 3, 5),
activation="relu",
padding="same",
)(x)
if hp.Choice("pooling" + str(i), ["max", "avg"]) == "max":
x = tf.keras.layers.MaxPooling2D()(x)
else:
x = tf.keras.layers.AveragePooling2D()(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ReLU()(x)
if hp.Choice("global_pooling", ["max", "avg"]) == "max":
x = tf.keras.layers.GlobalMaxPooling2D()(x)
else:
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
optimizer = hp.Choice("optimizer", ["adam", "sgd"])
model.compile(
optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
return model
tuner = kt.Hyperband(
hypermodel=build_model,
objective="val_accuracy",
max_epochs=2,
factor=3,
hyperband_iterations=1,
distribution_strategy=tf.distribute.MirroredStrategy(),
directory="results_dir",
project_name="mnist",
overwrite=True,
)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape the images to have the channel dimension.
x_train = (x_train.reshape(x_train.shape + (1,)) / 255.0)[:1000]
y_train = y_train.astype(np.int64)[:1000]
x_test = (x_test.reshape(x_test.shape + (1,)) / 255.0)[:100]
y_test = y_test.astype(np.int64)[:100]
tuner.search(
x_train,
y_train,
steps_per_epoch=600,
validation_data=(x_test, y_test),
validation_steps=100,
callbacks=[tf.keras.callbacks.EarlyStopping("val_accuracy")],
)
###Output
_____no_output_____
###Markdown
Distributed hyperparameter tuning**Authors:** Tom O'Malley, Haifeng Jin**Date created:** 2019/10/24**Last modified:** 2021/06/02**Description:** Tuning the hyperparameters of the models with multiple GPUs and multiple machines. IntroductionKerasTuner makes it easy to perform distributed hyperparameter search. Nochanges to your code are needed to scale up from running single-threadedlocally to running on dozens or hundreds of workers in parallel. DistributedKerasTuner uses a chief-worker model. The chief runs a service to which theworkers report results and query for the hyperparameters to try next. The chiefshould be run on a single-threaded CPU instance (or alternatively as a separateprocess on one of the workers). Configuring distributed modeConfiguring distributed mode for KerasTuner only requires setting threeenvironment variables:**KERASTUNER_TUNER_ID**: This should be set to "chief" for the chief process.Other workers should be passed a unique ID (by convention, "tuner0", "tuner1",etc).**KERASTUNER_ORACLE_IP**: The IP address or hostname that the chief serviceshould run on. All workers should be able to resolve and access this address.**KERASTUNER_ORACLE_PORT**: The port that the chief service should run on. Thiscan be freely chosen, but must be a port that is accessible to the otherworkers. Instances communicate via the [gRPC](https://www.grpc.io) protocol.The same code can be run on all workers. Additional considerations fordistributed mode are:- All workers should have access to a centralized file system to which they canwrite their results.- All workers should be able to access the necessary training and validationdata needed for tuning.- To support fault-tolerance, `overwrite` should be kept as `False` in`Tuner.__init__` (`False` is the default).Example bash script for chief service (sample code for `run_tuning.py` atbottom of page):```export KERASTUNER_TUNER_ID="chief"export KERASTUNER_ORACLE_IP="127.0.0.1"export KERASTUNER_ORACLE_PORT="8000"python run_tuning.py```Example bash script for worker:```export KERASTUNER_TUNER_ID="tuner0"export KERASTUNER_ORACLE_IP="127.0.0.1"export KERASTUNER_ORACLE_PORT="8000"python run_tuning.py``` Data parallelism with `tf.distribute`KerasTuner also supports data parallelism via[tf.distribute](https://www.tensorflow.org/tutorials/distribute/keras). Dataparallelism and distributed tuning can be combined. For example, if you have 10workers with 4 GPUs on each worker, you can run 10 parallel trials with eachtrial training on 4 GPUs by using[tf.distribute.MirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy).You can also run each trial on TPUs via[tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy).Currently[tf.distribute.MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy)is not supported, but support for this is on the roadmap. Example codeWhen the enviroment variables described above are set, the example below willrun distributed tuning and use data parallelism within each trial via`tf.distribute`. The example loads MNIST from `tensorflow_datasets` and uses[Hyperband](https://arxiv.org/pdf/1603.06560.pdf) for the hyperparametersearch.
###Code
import kerastuner as kt
import tensorflow as tf
import numpy as np
def build_model(hp):
"""Builds a convolutional model."""
inputs = tf.keras.Input(shape=(28, 28, 1))
x = inputs
for i in range(hp.Int("conv_layers", 1, 3, default=3)):
x = tf.keras.layers.Conv2D(
filters=hp.Int("filters_" + str(i), 4, 32, step=4, default=8),
kernel_size=hp.Int("kernel_size_" + str(i), 3, 5),
activation="relu",
padding="same",
)(x)
if hp.Choice("pooling" + str(i), ["max", "avg"]) == "max":
x = tf.keras.layers.MaxPooling2D()(x)
else:
x = tf.keras.layers.AveragePooling2D()(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ReLU()(x)
if hp.Choice("global_pooling", ["max", "avg"]) == "max":
x = tf.keras.layers.GlobalMaxPooling2D()(x)
else:
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
optimizer = hp.Choice("optimizer", ["adam", "sgd"])
model.compile(
optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
return model
tuner = kt.Hyperband(
hypermodel=build_model,
objective="val_accuracy",
max_epochs=2,
factor=3,
hyperband_iterations=1,
distribution_strategy=tf.distribute.MirroredStrategy(),
directory="results_dir",
project_name="mnist",
overwrite=True,
)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape the images to have the channel dimension.
x_train = (x_train.reshape(x_train.shape + (1,)) / 255.0)[:1000]
y_train = y_train.astype(np.int64)[:1000]
x_test = (x_test.reshape(x_test.shape + (1,)) / 255.0)[:100]
y_test = y_test.astype(np.int64)[:100]
tuner.search(
x_train,
y_train,
steps_per_epoch=600,
validation_data=(x_test, y_test),
validation_steps=100,
callbacks=[tf.keras.callbacks.EarlyStopping("val_accuracy")],
)
###Output
_____no_output_____
###Markdown
Distributed hyperparameter tuning**Authors:** Tom O'Malley, Haifeng Jin**Date created:** 2019/10/24**Last modified:** 2021/06/02**Description:** Tuning the hyperparameters of the models with multiple GPUs and multiple machines.
###Code
!pip install keras-tuner -q
###Output
_____no_output_____
###Markdown
IntroductionKerasTuner makes it easy to perform distributed hyperparameter search. Nochanges to your code are needed to scale up from running single-threadedlocally to running on dozens or hundreds of workers in parallel. DistributedKerasTuner uses a chief-worker model. The chief runs a service to which theworkers report results and query for the hyperparameters to try next. The chiefshould be run on a single-threaded CPU instance (or alternatively as a separateprocess on one of the workers). Configuring distributed modeConfiguring distributed mode for KerasTuner only requires setting threeenvironment variables:**KERASTUNER_TUNER_ID**: This should be set to "chief" for the chief process.Other workers should be passed a unique ID (by convention, "tuner0", "tuner1",etc).**KERASTUNER_ORACLE_IP**: The IP address or hostname that the chief serviceshould run on. All workers should be able to resolve and access this address.**KERASTUNER_ORACLE_PORT**: The port that the chief service should run on. Thiscan be freely chosen, but must be a port that is accessible to the otherworkers. Instances communicate via the [gRPC](https://www.grpc.io) protocol.The same code can be run on all workers. Additional considerations fordistributed mode are:- All workers should have access to a centralized file system to which they canwrite their results.- All workers should be able to access the necessary training and validationdata needed for tuning.- To support fault-tolerance, `overwrite` should be kept as `False` in`Tuner.__init__` (`False` is the default).Example bash script for chief service (sample code for `run_tuning.py` atbottom of page):```export KERASTUNER_TUNER_ID="chief"export KERASTUNER_ORACLE_IP="127.0.0.1"export KERASTUNER_ORACLE_PORT="8000"python run_tuning.py```Example bash script for worker:```export KERASTUNER_TUNER_ID="tuner0"export KERASTUNER_ORACLE_IP="127.0.0.1"export KERASTUNER_ORACLE_PORT="8000"python run_tuning.py``` Data parallelism with `tf.distribute`KerasTuner also supports data parallelism via[tf.distribute](https://www.tensorflow.org/tutorials/distribute/keras). Dataparallelism and distributed tuning can be combined. For example, if you have 10workers with 4 GPUs on each worker, you can run 10 parallel trials with eachtrial training on 4 GPUs by using[tf.distribute.MirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy).You can also run each trial on TPUs via[tf.distribute.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy).Currently[tf.distribute.MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy)is not supported, but support for this is on the roadmap. Example codeWhen the enviroment variables described above are set, the example below willrun distributed tuning and use data parallelism within each trial via`tf.distribute`. The example loads MNIST from `tensorflow_datasets` and uses[Hyperband](https://arxiv.org/abs/1603.06560) for the hyperparametersearch.
###Code
import keras_tuner as kt
import tensorflow as tf
import numpy as np
def build_model(hp):
"""Builds a convolutional model."""
inputs = tf.keras.Input(shape=(28, 28, 1))
x = inputs
for i in range(hp.Int("conv_layers", 1, 3, default=3)):
x = tf.keras.layers.Conv2D(
filters=hp.Int("filters_" + str(i), 4, 32, step=4, default=8),
kernel_size=hp.Int("kernel_size_" + str(i), 3, 5),
activation="relu",
padding="same",
)(x)
if hp.Choice("pooling" + str(i), ["max", "avg"]) == "max":
x = tf.keras.layers.MaxPooling2D()(x)
else:
x = tf.keras.layers.AveragePooling2D()(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.ReLU()(x)
if hp.Choice("global_pooling", ["max", "avg"]) == "max":
x = tf.keras.layers.GlobalMaxPooling2D()(x)
else:
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
optimizer = hp.Choice("optimizer", ["adam", "sgd"])
model.compile(
optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
return model
tuner = kt.Hyperband(
hypermodel=build_model,
objective="val_accuracy",
max_epochs=2,
factor=3,
hyperband_iterations=1,
distribution_strategy=tf.distribute.MirroredStrategy(),
directory="results_dir",
project_name="mnist",
overwrite=True,
)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Reshape the images to have the channel dimension.
x_train = (x_train.reshape(x_train.shape + (1,)) / 255.0)[:1000]
y_train = y_train.astype(np.int64)[:1000]
x_test = (x_test.reshape(x_test.shape + (1,)) / 255.0)[:100]
y_test = y_test.astype(np.int64)[:100]
tuner.search(
x_train,
y_train,
steps_per_epoch=600,
validation_data=(x_test, y_test),
validation_steps=100,
callbacks=[tf.keras.callbacks.EarlyStopping("val_accuracy")],
)
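# Optional follow-up (an addition, not part of the original example): once the
# search completes, the best hyperparameters found can be retrieved with the
# standard KerasTuner API.
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)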
###Output
_____no_output_____ |
Womens E-Commerce Clothing Review/Model/ClothingReview.ipynb | ###Markdown
*Using TextBlob to calculate sentiment polarity, which lies in the range [-1, 1], where 1 means a positive sentiment and -1 means a negative sentiment; also calculating the word count and review length for each review.*
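As a quick standalone illustration of that scale (an added sketch; the example sentences are made up and `textblob` is assumed to be installed):

```python
from textblob import TextBlob

# Polarity is close to +1 for clearly positive text and negative for clearly negative text.
print(TextBlob("I love this dress, the fit is perfect").sentiment.polarity)
print(TextBlob("Terrible quality, very disappointed").sentiment.polarity)
```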
###Code
df['Polarity'] = df['Review Text'].apply(lambda x: TextBlob(x).sentiment.polarity)
df['word_count'] = df['Review Text'].apply(lambda x: len(str(x).split()))
df['review_len'] = df['Review Text'].apply(lambda x: len(str(x)))
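# Spot-check sample reviews at the extremes of the polarity scale:
# fully positive (1), neutral (0), and strongly negative (<= -0.7).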
cl = df.loc[df.Polarity == 1, ['Review Text']].sample(5).values
for c in cl:
print(c[0])
cl = df.loc[df.Polarity == 0, ['Review Text']].sample(5).values
for c in cl:
print(c[0])
cl = df.loc[df.Polarity <= -0.7, ['Review Text']].sample(5).values
for c in cl:
print(c[0])
###Output
What a disappointment and for the price, it's outrageous!
Received this product with a gaping hole in it. very disappointed in the quality and the quality control at the warehouse
Awful color, horribly wrinkled and just a mess...so disappointed
The button fell off when i took it out of the bag, and i noticed that all of the thread had unraveled. will be returning :-(
Cut out design, no seems or hems.
very disappointed in retailer
###Markdown
*Distribution of review sentiment polarity*---
###Code
features = ['Polarity', 'Age', 'review_len', 'word_count']
titles = ['Polarity Distribution', 'Age Distribution', 'Review length Distribution', 'Word Count Distribution']
colors = ['#ff6678', '#3399ff', '#00ff00', '#ff6600']
for feature, title, color in zip(features, titles, colors):
sns.displot(x=df[feature], bins=50, color=color)
plt.title(title, size=15)
plt.xlabel(feature)
plt.show()
###Output
_____no_output_____
###Markdown
*The vast majority of the sentiment polarity scores are greater than zero, meaning most reviews are fairly positive. Most reviewers are in their 30s to 40s.*
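A quick numeric check of both observations (an added sketch that only summarises columns created above):

```python
# Share of reviews with positive polarity, and the age distribution of reviewers.
print((df['Polarity'] > 0).mean())
print(df['Age'].describe())
```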
###Code
sns.countplot(x = 'Rating', palette='inferno', data=df)
plt.title('Rating Distribution', size=15)
plt.xlabel('Ratings')
plt.show()
###Output
_____no_output_____
###Markdown
*The ratings align with the polarity scores; that is, most of the ratings are high, in the 4 to 5 range.*
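One way to quantify that alignment (an added sketch, not part of the original analysis):

```python
# Correlation between the star rating and the TextBlob polarity score.
print(df['Rating'].corr(df['Polarity']))
```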
###Code
sns.countplot(x='Division Name', palette='inferno', data=df)
plt.title('Division distribution', size=15)
plt.show()
###Output
_____no_output_____
###Markdown
The General division has the largest number of reviews, and the Initmates division has the fewest.
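The exact counts behind the bar chart could also be read off directly (an added check):

```python
# Number of reviews per division.
print(df['Division Name'].value_counts())
```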
###Code
plt.figure(figsize=(8, 5))
sns.countplot(x='Department Name', palette='inferno', data=df)
plt.title('Department Name', size=15)
plt.show()
plt.figure(figsize=(8, 10))
sns.countplot(y='Class Name', palette='inferno', data=df)
plt.title('Class Distribution', size=15)
plt.show()
plt.figure(figsize=(10, 6))
sns.boxplot(x='Department Name', y='Polarity', width=0.5, palette='viridis', data=df)
plt.title('Sentiment Polarity v/s Department Name', size=15)
plt.show()
###Output
_____no_output_____
###Markdown
*The highest sentiment polarity score was reached by every department except the Trend department, while the lowest sentiment polarity score was recorded in the Tops department. The Trend department also has the lowest median polarity score. Recall that the Trend department has the fewest reviews, which explains why its score distribution is not as wide as those of the other departments.*
###Code
plt.figure(figsize=(10, 6))
sns.boxplot(x='Department Name', y='Rating', width=0.5, palette='viridis', data=df)
plt.title('Rating v/s Department Name', size=15)
plt.show()
###Output
_____no_output_____
###Markdown
*Except for the Trend department, all the other departments' median rating was 5. Overall, the ratings are high and the sentiment is positive in this review data set.*
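If desired, the medians behind the boxplot can be confirmed numerically (an added sketch):

```python
# Median rating per department.
print(df.groupby('Department Name')['Rating'].median())
```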
###Code
recommended = df.loc[df['Recommended IND'] == 1, 'Polarity']
not_recommended = df.loc[df['Recommended IND'] == 0, 'Polarity']
plt.figure(figsize=(8, 6))
sns.histplot(x=recommended, color=colors[1], label='Recommended')
sns.histplot(x=not_recommended, color=colors[3], label='Not Recommended')
plt.title('Distribution of Sentiment polarity of reviews based on Recommendation', size=15)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
*Clothes with higher polarity scores are clearly more likely to be recommended.*
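A simple group summary makes the same point (an added sketch):

```python
# Mean polarity of recommended (1) versus not-recommended (0) reviews.
print(df.groupby('Recommended IND')['Polarity'].mean())
```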
###Code
plt.figure(figsize=(8, 8))
g = sns.jointplot(x='Rating', y='Polarity', kind='kde', color=colors[3], data=df)
g.plot_joint(sns.kdeplot, fill=True, color=colors[3], zorder=0, levels=6)
plt.show()
plt.figure(figsize=(10, 8))
g = sns.jointplot(x='Age', y='Polarity', kind='kde', color=colors[1], data=df)
g.plot_joint(sns.kdeplot, fill=True, color=colors[1], zorder=0, levels=6)
plt.show()
###Output
_____no_output_____ |
notebooks/IK1_model.ipynb | ###Markdown
It's then necessary to check whether Acromine produced the correct results; any errors must be fixed manually.
###Code
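# Review the most frequent candidate longforms proposed by the miner, then keep
# only the genuine expansions (the selection below was made by hand).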
top = miner.top(15)
top
longforms = miner.get_longforms(cutoff=2.9)
longforms
longforms = [lf for i, lf in enumerate(longforms) if i in [1, 2, 3, 4, 5, 6, 7]]
longforms.extend([top[3], top[7], top[11]])
longforms.sort(key=lambda x: -x[1])
longforms
longforms, scores = zip(*longforms)
grounding_map = {}
for longform in longforms:
grounding = gilda_ground(longform)
if grounding[0]:
grounding_map[longform] = f'{grounding[0]}:{grounding[1]}'
grounding_map
result = ground_with_gui(longforms, scores, grounding_map=grounding_map)
result
grounding_map, names, pos_labels = ({'medetomidine': 'CHEBI:CHEBI:48552',
'mediator': 'ungrounded',
'mediterranean': 'ungrounded',
'metathesis electrodialysis': 'ungrounded',
'microendoscopic discectomy': 'ungrounded',
'minimal effective dose': 'ungrounded',
'minimal erythema dose': 'ungrounded',
'morphine equivalent dose': 'ungrounded',
'multiple epiphyseal dysplasia': 'MESH:D010009',
'mycoepoxydiene': 'PUBCHEM:11300750'},
{'MESH:D010009': 'Osteochondrodysplasias',
'PUBCHEM:11300750': 'Mycoepoxydiene'},
['CHEBI:CHEBI:48552', 'PUBCHEM:11300750'])
grounding_dict = {'MED': grounding_map}
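# Train a logistic-regression-based disambiguation model for the shortform 'MED',
# using the hand-checked groundings above as labels.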
classifier = AdeftClassifier('MED', pos_labels)
len(texts)
param_grid = {'C': [100.0], 'max_features': [1000]}
labeler = AdeftLabeler(grounding_dict)
corpus = labeler.build_from_texts(shortform_texts)
texts, labels = zip(*corpus)
classifier.cv(texts, labels, param_grid, cv=5, n_jobs=8)
classifier.stats
disamb = AdeftDisambiguator(classifier, grounding_dict, names)
disamb.disambiguate(texts[2])
disamb.dump('MED', '../results')
from adeft.disambiguate import load_disambiguator
d = load_disambiguator('HIR', '../results')
d.disambiguate(texts[0])
a = load_disambiguator('AR')
a.disambiguate('Androgen')
logit = d.classifier.estimator.named_steps['logit']
logit.classes_
###Output
_____no_output_____ |
Week05/Homework03.ipynb | ###Markdown
Your name here. Your Workshop section here. Homework 3: Arrays, File I/O and Plotting **Submit this notebook to bCourses to receive a grade for this Workshop.** Please complete homework activities in code cells in this IPython notebook. Be sure to comment your code well so that anyone who reads it can follow it and use it. Enter your name in the cell at the top of the notebook. When you are ready to submit it, you should download it as a Python notebook (click "File", "Download as", "Notebook (.ipynb)") and upload it on bCourses under the Assignments tab. Please also save the notebook as PDF and upload to bCourses. Problem 1: Sunspots [Adapted from Newman, Exercise 3.1] At this link (and also in your current directory on datahub) you will find a file called `sunspots.txt`, which contains the observed number of sunspots on the Sun for each month since January 1749. The file contains two columns of numbers, the first being the month and the second being the sunspot number. a. Write a program that reads in the data and makes a graph of sunspots as a function of time. Adjust the $x$ axis so that the data fills the whole horizontal width of the graph. b. Modify your code to display two subplots in a single figure: The plot from Part 1 with all the data, and a second subplot with the first 1000 data points on the graph. c. Write a function `running_average(y, r)` that takes an array or list $y$ and calculates the running average of the data, defined by $$ Y_k = \frac{1}{2r+1} \sum_{m=-r}^r y_{k+m},$$ where $y_k$ are the sunspot numbers in our case. Use this function and modify your second subplot (the one with the first 1000 data points) to plot both the original data and the running average on the same graph, again over the range covered by the first 1000 data points. Use $r=5$, but make sure your program allows the user to easily change $r$. The next two parts may require you to google for how to do things. Make a strong effort to do these parts on your own without asking for help. If you do ask for help from a GSI or friend, first ask them to point you to the resource they used, and do your best to learn the necessary techniques from that resource yourself. Finding and learning from online documentation and forums is a very important skill. (Hint: Stack Exchange/Stack Overflow is often a great resource.) d. Add legends to each of your subplots, but make them partially transparent, so that you can still see any data that they might overlap. *Note: In your program, you should only have to change $r$ for the running average in one place to adjust both the graph and the legend.* e. Since the $x$ and $y$ axes in both subplots have the same units, add shared $x$ and $y$ labels to your plot that are centered on the horizontal and vertical dimensions of your figure, respectively. Also add a single title to your figure. When you are finished, your plot should look something like this:
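For reference, the running-average formula above can be implemented in a few lines. This is only an illustrative sketch, not the required solution; the function name comes from the assignment, while the slicing-based implementation is just one possible approach.

```python
import numpy as np

def running_average(y, r):
    """Return the indices k and running averages Y_k for r <= k < len(y) - r."""
    y = np.asarray(y, dtype=float)
    ks = np.arange(r, len(y) - r)  # the average is undefined near the edges
    Y = np.array([y[k - r:k + r + 1].mean() for k in ks])
    return ks, Y
```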
###Code
# Don't rerun this snippet of code.
# If you accidentally do, close and reopen the notebook (without saving)
# to get the image back. If all else fails, redownload the notebook.
# from IPython.display import Image
# Image(filename="samplecode/sunspots.png")
###Output
_____no_output_____
###Markdown
Hints * The running average is not defined for the first and last few points that you're taking a running average over. (Why is that?) Notice, for instance, that the black curve in the plot above doesn't extend quite as far on either side as the red curve. For making your plot, it might be helpful if your `running_average` function returns an array of the $x$-values $x_k$ (or their corresponding indices $k$) along with an array of the $y$-values $Y_k$ that you compute for the running average. * You can use the LaTeX code `$\pm$` for the $\pm$ symbol in the legend. You can also just write `+/-` if you prefer. Problem 2: Variety Plot In this problem, you will reproduce the following as a single figure with four subplots, as best you can:
###Code
# Don't rerun this snippet of code.
# If you accidentally do, close and reopen the notebook (without saving)
# to get the image back. If all else fails, redownload the notebook.
# from IPython.display import Image
# Image(filename="samplecode/variety_plot.png")
###Output
_____no_output_____ |