site/ru/tutorials/keras/basic_text_classification.ipynb
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Movie review text classification View on TensorFlow.org Run in Google Colab View source on GitHub In this interactive tutorial we will build a model that classifies a movie review as *positive* or *negative* based on its text. This is an example of *binary* (two-class) classification, an important and widely applicable kind of machine learning problem. We will use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing the model. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API for building and training models in TensorFlow. For a more advanced text classification workflow using `tf.keras`, see the [Text classification guide](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Download the IMDB dataset The IMDB dataset comes packaged with TensorFlow and is available through the `load_data` method. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer stands for a specific word in a dictionary. The following code downloads the dataset (or automatically uses a cached copy if you have already downloaded it):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
The argument `num_words=10000` keeps only the 10,000 most frequently occurring words in the training data; all rarer words are discarded. This keeps the size of the data manageable. Explore the data Let's take a moment to look at what information is available. The data comes preprocessed: each example is an array of integers representing the words of the review. Each *label* is an integer of 0 or 1, where 0 is a negative review and 1 is a positive review.
###Code
print("Тренировочных записей: {}, меток: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
The text of the reviews has already been converted to integers, where each integer represents a specific word in a dictionary. Here is what the first review looks like:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Movie reviews may have different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must all be the same length, we will need to resolve this shortly.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Convert the integers back to words It is also useful to know how to convert the integers back into text. Let's write a helper function that uses a dictionary mapping integers back to words, so we can display a review as readable text:
###Code
# Build a dictionary mapping words to integer indices
word_index = imdb.get_word_index()
# The first few indices are reserved for special tokens
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # Вместо редких слов, не вошедших в набор из 10,000, будет указано UNK
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Now we can use the `decode_review` function to display the text of the first review:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Prepare the data The reviews, which are arrays of integers, must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways: * *One-hot encoding* converts the arrays into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except at indices 3 and 5, which are ones. The first layer of the network would then have to be a `Dense` layer able to handle floating-point vector data. This approach is memory intensive, since it requires a matrix of size `num_words * num_reviews` (a short sketch of this approach is shown after the padding cell below). * Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an *Embedding* layer that handles this shape as the first layer of the network. In this tutorial we use the second approach. Since all movie reviews must have the same length, we use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
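###Markdown
For comparison, here is a minimal sketch of the multi-hot (one-hot) encoding alternative described above. It is an aside rather than part of the tutorial pipeline, and the helper name `multi_hot_encode` is our own illustrative choice, not a Keras API; it mainly shows why that approach needs a `num_reviews * num_words` floating-point matrix in memory.
###Code
import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    # One row per review, with a 1.0 at every word index that occurs in that review
    results = np.zeros((len(sequences), dimension), dtype=np.float32)
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0
    return results

# The sequence [3, 5] becomes a 10,000-dimensional vector of zeros
# with ones at indices 3 and 5:
encoded = multi_hot_encode([[3, 5]])
print(encoded.shape)    # (1, 10000)
print(encoded[0, 3:6])  # [1. 0. 1.]
###Output
_____no_output_____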
###Markdown
Let's now look at the length of the examples:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
And let's check what the first review looks like after it has been padded to a standard length:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Build the model A neural network is created by stacking layers, which requires two main architectural decisions: * How many layers will the model use? * How many *hidden units* will each layer use? In this example, the input data consists of arrays of word indices (integers), and the predictions will be labels of 0 or 1. Let's build a model for this problem:
###Code
# The input size is the vocabulary used in the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
###Output
_____no_output_____
###Markdown
The layers are stacked sequentially to build the classifier: 1. The first layer is an `Embedding` layer. It takes the integer-encoded words and looks up the embedding vector for each word index. These vectors are learned as the model trains. They add one dimension to the output array, giving dimensions of `(batch, sequence, embedding)` 2. The next layer, `GlobalAveragePooling1D`, returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length in a simple way 3. This fixed-length vector is piped through a fully connected `Dense` layer with 16 hidden units 4. The last layer is also fully connected, but with a single output node. Using the `sigmoid` activation function, it produces a float between 0 and 1 representing a probability, or the model's confidence Hidden units The model above has two intermediate, or *hidden*, layers between the input and the output. The number of outputs (units, nodes, or neurons) is the dimension of the layer's representational space, in other words, the amount of freedom the network is allowed when learning an internal representation. If a model has more hidden units and/or more layers, it can learn more complex representations. However, this makes the network more computationally expensive and may lead to learning unwanted patterns, that is, patterns that improve performance on the training data but not on the validation data. This is called *overfitting*, and we will return to it later. Loss function and optimizer The model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we will use the `binary_crossentropy` loss function. This is not the only possible choice: you could, for instance, use `mean_squared_error`. But `binary_crossentropy` is generally better for dealing with probabilities, as it measures the "distance" between probability distributions, or, in our case, between the ground truth and the predictions (a small hand-computed illustration follows the next cell). Later, when we explore regression problems (for example, predicting house prices), we will see how to use another loss function called mean squared error (MSE). For now, configure the model with the *Adam optimizer* and *binary cross-entropy* loss:
###Code
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
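###Markdown
As a quick aside (not part of the original tutorial flow), here is a small hand computation illustrating what `binary_crossentropy` measures: for a true label y and a predicted probability p, the per-example loss is -(y*log(p) + (1-y)*log(1-p)), so a confident wrong prediction is penalized far more heavily than a confident correct one. The helper below is only an illustration, not the function Keras calls internally.
###Code
import numpy as np

def binary_crossentropy_example(y_true, y_pred, eps=1e-7):
    # Clip predictions away from 0 and 1 to avoid taking log(0)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

# A positive review (label 1) predicted with high vs. low confidence:
print(binary_crossentropy_example(1.0, 0.95))  # ~0.05: small loss for a confident correct prediction
print(binary_crossentropy_example(1.0, 0.05))  # ~3.00: large loss for a confident wrong prediction
###Output
_____no_output_____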
###Markdown
Create a validation set During training, we want to check the accuracy of the model on data it has not seen before. Let's create a *validation set* by setting apart 10,000 examples from the original training data. Why not use the test set right away? Our goal is to develop and tune the model using only the training data, and then use the test set just once to evaluate the accuracy.
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Train the model Train the model for 40 epochs in mini-batches of 512 samples (a *batch* is a set of examples processed together). This means 40 iterations (passes) over all of the samples in the `x_train` and `y_train` tensors. While training, we will monitor the model's loss and accuracy on the 10,000 samples from the validation set:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate the model Now that training has finished, let's see how the model performs. It will return two values: *loss* (a number representing the error, so lower is better) and *accuracy*.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
As you can see, this fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model could get closer to 95%. Create a graph of accuracy and loss over time `model.fit()` returns a `History` object that contains all of the metrics logged during training:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
There are four entries in total, one for each monitored metric during training and validation. We can use them to plot the training and validation loss and accuracy for comparison:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" означает "blue dot", синяя точка
plt.plot(epochs, loss, 'bo', label='Потери обучения')
# "b" означает "solid blue line", непрерывная синяя линия
plt.plot(epochs, val_loss, 'b', label='Потери проверки')
plt.title('Потери во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Потери')
plt.legend()
plt.show()
plt.clf() # Очистим график
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Точность обучения')
plt.plot(epochs, val_acc, 'b', label='Точность проверки')
plt.title('Точность во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Точность')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Классификация текста обзоров фильмов Читай на TensorFlow.org Запусти в Google Colab Изучай код на GitHub Note: Вся информация в этом разделе переведена с помощью русскоговорящего Tensorflow сообщества на общественных началах. Поскольку этот перевод не является официальным, мы не гарантируем что он на 100% аккуратен и соответствует [официальной документации на английском языке](https://www.tensorflow.org/?hl=en). Если у вас есть предложение как исправить этот перевод, мы будем очень рады увидеть pull request в [tensorflow/docs](https://github.com/tensorflow/docs) репозиторий GitHub. Если вы хотите помочь сделать документацию по Tensorflow лучше (сделать сам перевод или проверить перевод подготовленный кем-то другим), напишите нам на [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs). В этом интерактивном уроке мы построим модель, которая будет классифицировать обзор фильма как *позитивный* или *негативный* на основе текста. Это пример *бинарной* классификации (по двум классам), важной, и широко применяющейся задачи машинного обучения.Мы воспользуемся [датасетом IMDB](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), который содержит тексты 50,000 обзоров фильмов из [Internet Movie Database](https://www.imdb.com/). Они разделены на 25,000 обзоров для обучения, и 25,000 для проверки модели. Тренировочные и проверочные датасеты *сбалансированы*, т.е. содержат одинаковое количество позитивных и негативных обзоров.Данное руководство использует [tf.keras](https://www.tensorflow.org/guide/keras), высокоуровневый API для создания и обучения моделей в TensorFlow. Чтобы сделать более сложную по структуре классификацую текста при помощи `tf.keras`, читай [Руководство по классификации текстов](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Загружаем датасет IMDBДатасет IMDB доступен сразу в TensorFlow при помощи метода `load_data`. Он уже подготовлен таким образом, что обзоры (последовательности слов) были конвертированы в последовательность целых чисел, где каждое целое представляет конкретное слово в массиве.Давай напишем пару строчек кода чтобы загрузить датасет (или автоматически используем копию из кэша, если ты уже скачал этот набор данных):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Аргумент `num_words=10000` позволяет нам ограничиться только 10,000 наиболее часто встречающимися словами из тренировочного сета. Все редкие слова исключаются. Это поможет нам держать объем данных в разумных пределах. Знакомимся с даннымиДавай посмотрим какая информация нам доступна. Данные уже подготовлены: каждый пример - это массив целых чисел, которые представляют слова из обзоров. Каждая метка *label* является целым числом 0 или 1: 0 - негативный обзор, 1 - позитивный.
###Code
print("Тренировочных записей: {}, меток: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
Текст обзоров уже был конвертирован в целые числа, где каждое целок представляет слово из словаря. Вот пример того, как выглядит первый обзор:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Разные обзоры фильмов также имеют разное количество слов. Код ниже поможет нам узнать количество слов в первом и втором обзоре. Поскольку нейросеть может принимать только данные одинаковой длины, то нам предстоит как-то решить эту задачу.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Конвертируем целые обратно в словаНе будет лишним также знать, как конвертировать целые числа из массива обратно в текст. Напишем вспомогательную функцию, с помощью который мы сможем запрашивать из этого словаря объект, который содержит указанные числа и отображать их в виде слов:
###Code
# Назначим словарь, который будет отображать слова из массива данных
word_index = imdb.get_word_index()
# Зарезервируем первые несколько значений
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # Вместо редких слов, не вошедших в набор из 10,000, будет указано UNK
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Теперь мы можем легко воспользоваться функцией `decode_review` для отображения текста первого обзора фильма:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Подготавливаем данныеОбзоры фильмов из массива целых чисел должны быть конвертированы в тензоры прежде, чем они будут пропущены через нейросеть. Эта конвертация может быть сделана несколькими способами:* *One-hot encoding* конвертирует массивы в векторы 0 и 1. Например, последовательность [3, 5] станет 10,000-мерным вектором, полностью состоящим из нулей кроме показателей 3 и 5, которые будут представлены единицами. Затем, нам нужно будет создать первый `Dense` слой в нашей сети, который сможет принимать векторые данные с плавающей запятой. Такой подход очень требователен к объему памяти, несмотря на то, что требует указать размеры матрицы `num_words * num_reviews`* Другой способ - сделать все массивы одинаковыми по длине, а затем создать тензор целых чисел с указанием `max_length * num_reviews`. Мы можем использовать *Embedding* (пер. "Встроенный") слой, который может использовать эти параметры в качестве первого слоя нашей сетиВ этом руководстве мы попробуем второй способ.Поскольку все обзоры фильмов должны быть одинаковой длины, то мы используем функцию [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences), чтобы привести все длины к одному значению:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
###Markdown
Давай теперь посмотрим на длину наших примеров:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
А также проверим как выглядит первый стандартизированный по длине обзор фильма:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Строим модельНейронная сеть создается посредством стека (наложения) слоев - это требует ответов на два вопроса об архитектуре самой модели:* Сколько слоев будет использовано в модели?* Сколько *скрытых блоков* будет использовано для каждого слоя?В этом примере, входные данные состоят из массива слов (целых чисел). Получаемые предсказания будут в виде меток 0 или 1. Давай построим модель, которая будет решать нашу задачу:
###Code
# Размер входных данных - количество слов, использованных в обзорах фильмов (10,000 слов)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
###Output
_____no_output_____
###Markdown
Для создания классификатора все слои проходят процесс стека, или наложения:1. Первый `Embedding` слой принимает переведенные в целые числа слова и ищет соответствующий вектор для каждой пары слово/число. Модель обучается на этих векторах. Векторы увеличивают размер получаемого массива на 1, в результате чего мы получаем измерения: `(batch, sequence, embedding)`2. Следующий слой `GlobalAveragePooling1D` возвращает получаемый вектор заданной длины для каждого примера, усредняя размер ряда. Это позволит модели легко принимать данные разной длины3. Этот вектор пропускается через полносвязный `Dense` слой с 16 скрытыми блоками4. Последний слой также является полносвязным, но с всего одним выходящим нодом. При помощи функции активации `sigmoid` (Сигмоида) мы будем получать число с плавающей запятой между 0 и 1, которое будет показывать вероятность или уверенность модели Скрытые блокиВышеописанная модель имеет 2 промежуточных или *скрытых* слоя, между входом и выходом данных. Количество выходов (блоков, нодов или нейронов) является размером репрезентативного пространства слоя. Другими словами, количество свободы, которая разрешена сети во время обучения.Если модель имеет больше скрытых блоков, и/или больше слоев, то тогда нейросеть может обучиться более сложным представлениям. Однако в этом случае это будет дороже с точки зрения вычислительных ресурсов и может привести к обучению нежелательных паттернов - паттернов, которые улучшают показатели на тренировочных данных, но не на проверочных. Это называется *переобучением*, и мы обязательно познакомимся с этим явлением далее. Функция потерь и оптимизаторДля модели нам необходимо указать функцию потерь и оптимизатор для обучения. Поскольку наша задача является примером бинарной классификации и модель будет показывать вероятность (слой из единственного блока с сигмоидой в качестве функции активации), то мы воспользуемся функцией потерь `binary_crossentropy` (пер. "Перекрестная энтропия").Это не единственный выбор для нашей функции потерь: ты можешь, например, выбрать `mean_squared_error`. Но обычно `binary_crossentropy` лучше справляется с вероятностями - она измеряет "дистанцию" между распределениями вероятностей, или, как в нашем случае, между эталоном и предсказаниями.Далее, по мере знакомства с задачами регрессии (например, предсказание цен на недвижимость), мы посмотрим как использовать другую функцию потерь, которая называется среднеквадратичская ошибка (MSE).А сейчас, настроим нашу модель: мы будем использовать *оптимизатор Адама* и *перекрестную энтропию* для потерь:
###Code
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Создадим проверочный набор данныхВо время обучения мы хотим проверить точность нашей модели на данных, которых она еще не видела. Давай создадим *проверочный сет* данных, выделив 10,000 примеров из оригинального тренировочного сета в отдельный набор.Почему мы не используем проверочный набор прямо сейчас? Наша цель - разработать и настроить нашу модель, используя только данные для обучения, и только потом использовать проверочный сет всего один раз чтобы оценить точность.
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Обучаем модельНачнем тренировку нашей модели с 40 эпох при помощи мини-батчей по 512 образцов (*батч* - набор, пакет данных). Это означает, что мы сделаем 40 итераций (или проходов) по всем образцам данных в тензорах `x_train` и `y_train`. После обучения мы узнаем потери и точность нашей модели, показав ей 10,000 образцов из проверочного набора данных:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Оценим точность моделиТеперь когда обучение прошло успешно, давай посмотрим какие результаты показывает модель.Она будет возвращать 2 значения: потери *loss* (число, которое показывает ошибку, чем оно ниже, тем лучше), и точность *accuracy*.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
Как мы видим, этот достаточно наивный подход достиг точности около 87%. Если бы мы использовали более сложные методы, то модель приблизилась бы к отметке в 95%. Построим временной график точности и потерьМетод `model.fit()` возвращает объект `History`, который содержит все показатели, которые были записаны в лог во время обучения:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
Здесь всего четыре показателя, по одному для каждой отслеживаемой метрики во время обучения и проверки. Мы можем использовать их, чтобы построить графики потерь и точности обоих стадий для сравнения:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" означает "blue dot", синяя точка
plt.plot(epochs, loss, 'bo', label='Потери обучения')
# "b" означает "solid blue line", непрерывная синяя линия
plt.plot(epochs, val_loss, 'b', label='Потери проверки')
plt.title('Потери во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Потери')
plt.legend()
plt.show()
plt.clf() # Очистим график
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Точность обучения')
plt.plot(epochs, val_acc, 'b', label='Точность проверки')
plt.title('Точность во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Точность')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Классификация текста обзоров фильмов Читай на TensorFlow.org Запусти в Google Colab Изучай код на GitHub В этом интерактивном уроке мы построим модель, которая будет классифицировать обзор фильма как *позитивный* или *негативный* на основе текста. Это пример *бинарной* классификации (по двум классам), важной, и широко применяющейся задачи машинного обучения.Мы воспользуемся [датасетом IMDB](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), который содержит тексты 50,000 обзоров фильмов из [Internet Movie Database](https://www.imdb.com/). Они разделены на 25,000 обзоров для обучения, и 25,000 для проверки модели. Тренировочные и проверочные датасеты *сбалансированы*, т.е. содержат одинаковое количество позитивных и негативных обзоров.Данное руководство использует [tf.keras](https://www.tensorflow.org/guide/keras), высокоуровневый API для создания и обучения моделей в TensorFlow. Чтобы сделать более сложную по структуре классификацую текста при помощи `tf.keras`, читай [Руководство по классификации текстов](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
from __future__ import unicode_literals
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Загружаем датасет IMDBДатасет IMDB доступен сразу в TensorFlow при помощи метода `load_data`. Он уже подготовлен таким образом, что обзоры (последовательности слов) были конвертированы в последовательность целых чисел, где каждое целое представляет конкретное слово в массиве.Давай напишем пару строчек кода чтобы загрузить датасет (или автоматически используем копию из кэша, если ты уже скачал этот набор данных):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Аргумент `num_words=10000` позволяет нам ограничиться только 10,000 наиболее часто встречающимися словами из тренировочного сета. Все редкие слова исключаются. Это поможет нам держать объем данных в разумных пределах. Знакомимся с даннымиДавай посмотрим какая информация нам доступна. Данные уже подготовлены: каждый пример - это массив целых чисел, которые представляют слова из обзоров. Каждая метка *label* является целым числом 0 или 1: 0 - негативный обзор, 1 - позитивный.
###Code
print("Тренировочных записей: {}, меток: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
Текст обзоров уже был конвертирован в целые числа, где каждое целок представляет слово из словаря. Вот пример того, как выглядит первый обзор:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Разные обзоры фильмов также имеют разное количество слов. Код ниже поможет нам узнать количество слов в первом и втором обзоре. Поскольку нейросеть может принимать только данные одинаковой длины, то нам предстоит как-то решить эту задачу.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Конвертируем целые обратно в словаНе будет лишним также знать, как конвертировать целые числа из массива обратно в текст. Напишем вспомогательную функцию, с помощью который мы сможем запрашивать из этого словаря объект, который содержит указанные числа и отображать их в виде слов:
###Code
# Назначим словарь, который будет отображать слова из массива данных
word_index = imdb.get_word_index()
# Зарезервируем первые несколько значений
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # Вместо редких слов, не вошедших в набор из 10,000, будет указано UNK
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Теперь мы можем легко воспользоваться функцией `decode_review` для отображения текста первого обзора фильма:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Подготавливаем данныеОбзоры фильмов из массива целых чисел должны быть конвертированы в тензоры прежде, чем они будут пропущены через нейросеть. Эта конвертация может быть сделана несколькими способами:* *One-hot encoding* конвертирует массивы в векторы 0 и 1. Например, последовательность [3, 5] станет 10,000-мерным вектором, полностью состоящим из нулей кроме показателей 3 и 5, которые будут представлены единицами. Затем, нам нужно будет создать первый `Dense` слой в нашей сети, который сможет принимать векторые данные с плавающей запятой. Такой подход очень требователен к объему памяти, несмотря на то, что требует указать размеры матрицы `num_words * num_reviews`* Другой способ - сделать все массивы одинаковыми по длине, а затем создать тензор целых чисел с указанием `max_length * num_reviews`. Мы можем использовать *Embedding* (пер. "Встроенный") слой, который может использовать эти параметры в качестве первого слоя нашей сетиВ этом руководстве мы попробуем второй способ.Поскольку все обзоры фильмов должны быть одинаковой длины, то мы используем функцию [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences), чтобы привести все длины к одному значению:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
###Markdown
Давай теперь посмотрим на длину наших примеров:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
А также проверим как выглядит первый стандартизированный по длине обзор фильма:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Строим модельНейронная сеть создается посредством стека (наложения) слоев - это требует ответов на два вопроса об архитектуре самой модели:* Сколько слоев будет использовано в модели?* Сколько *скрытых блоков* будет использовано для каждого слоя?В этом примере, входные данные состоят из массива слов (целых чисел). Получаемые предсказания будут в виде меток 0 или 1. Давай построим модель, которая будет решать нашу задачу:
###Code
# Размер входных данных - количество слов, использованных в обзорах фильмов (10,000 слов)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
###Output
_____no_output_____
###Markdown
Для создания классификатора все слои проходят процесс стека, или наложения:1. Первый `Embedding` слой принимает переведенные в целые числа слова и ищет соответствующий вектор для каждой пары слово/число. Модель обучается на этих векторах. Векторы увеличивают размер получаемого массива на 1, в результате чего мы получаем измерения: `(batch, sequence, embedding)`2. Следующий слой `GlobalAveragePooling1D` возвращает получаемый вектор заданной длины для каждого примера, усредняя размер ряда. Это позволит модели легко принимать данные разной длины3. Этот вектор пропускается через полносвязный `Dense` слой с 16 скрытыми блоками4. Последний слой также является полносвязным, но с всего одним выходящим нодом. При помощи функции активации `sigmoid` (Сигмоида) мы будем получать число с плавающей запятой между 0 и 1, которое будет показывать вероятность или уверенность модели Скрытые блокиВышеописанная модель имеет 2 промежуточных или *скрытых* слоя, между входом и выходом данных. Количество выходов (блоков, нодов или нейронов) является размером репрезентативного пространства слоя. Другими словами, количество свободы, которая разрешена сети во время обучения.Если модель имеет больше скрытых блоков, и/или больше слоев, то тогда нейросеть может обучиться более сложным представлениям. Однако в этом случае это будет дороже с точки зрения вычислительных ресурсов и может привести к обучению нежелательных паттернов - паттернов, которые улучшают показатели на тренировочных данных, но не на проверочных. Это называется *переобучением*, и мы обязательно познакомимся с этим явлением далее. Функция потерь и оптимизаторДля модели нам необходимо указать функцию потерь и оптимизатор для обучения. Поскольку наша задача является примером бинарной классификации и модель будет показывать вероятность (слой из единственного блока с сигмоидой в качестве функции активации), то мы воспользуемся функцией потерь `binary_crossentropy` (пер. "Перекрестная энтропия").Это не единственный выбор для нашей функции потерь: ты можешь, например, выбрать `mean_squared_error`. Но обычно `binary_crossentropy` лучше справляется с вероятностями - она измеряет "дистанцию" между распределениями вероятностей, или, как в нашем случае, между эталоном и предсказаниями.Далее, по мере знакомства с задачами регрессии (например, предсказание цен на недвижимость), мы посмотрим как использовать другую функцию потерь, которая называется среднеквадратичская ошибка (MSE).А сейчас, настроим нашу модель: мы будем использовать *оптимизатор Адама* и *перекрестную энтропию* для потерь:
###Code
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Создадим проверочный набор данныхВо время обучения мы хотим проверить точность нашей модели на данных, которых она еще не видела. Давай создадим *проверочный сет* данных, выделив 10,000 примеров из оригинального тренировочного сета в отдельный набор.Почему мы не используем проверочный набор прямо сейчас? Наша цель - разработать и настроить нашу модель, используя только данные для обучения, и только потом использовать проверочный сет всего один раз чтобы оценить точность.
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Обучаем модельНачнем тренировку нашей модели с 40 эпох при помощи мини-батчей по 512 образцов (*батч* - набор, пакет данных). Это означает, что мы сделаем 40 итераций (или проходов) по всем образцам данных в тензорах `x_train` и `y_train`. После обучения мы узнаем потери и точность нашей модели, показав ей 10,000 образцов из проверочного набора данных:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Оценим точность моделиТеперь когда обучение прошло успешно, давай посмотрим какие результаты показывает модель.Она будет возвращать 2 значения: потери *loss* (число, которое показывает ошибку, чем оно ниже, тем лучше), и точность *accuracy*.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
Как мы видим, этот достаточно наивный подход достиг точности около 87%. Если бы мы использовали более сложные методы, то модель приблизилась бы к отметке в 95%. Построим временной график точности и потерьМетод `model.fit()` возвращает объект `History`, который содержит все показатели, которые были записаны в лог во время обучения:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
Здесь всего четыре показателя, по одному для каждой отслеживаемой метрики во время обучения и проверки. Мы можем использовать их, чтобы построить графики потерь и точности обеих стадий для сравнения:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" означает "blue dot", синяя точка
plt.plot(epochs, loss, 'bo', label='Потери обучения')
# "b" означает "solid blue line", непрерывная синяя линия
plt.plot(epochs, val_loss, 'b', label='Потери проверки')
plt.title('Потери во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Потери')
plt.legend()
plt.show()
plt.clf() # Очистим график
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Точность обучения')
plt.plot(epochs, val_acc, 'b', label='Точность проверки')
plt.title('Точность во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Точность')
plt.legend()
plt.show()
###Output
_____no_output_____
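###Markdown
Обычно на таких графиках видно, что потери на обучающих данных продолжают снижаться с каждой эпохой, а потери на проверочных данных в какой-то момент начинают расти - это и есть переобучение, о котором говорилось выше. Один из возможных способов борьбы с ним - остановить обучение раньше. Ниже приведен набросок с коллбеком `keras.callbacks.EarlyStopping`; этого шага нет в оригинальном руководстве, а значение `patience=2` выбрано произвольно для примера.
###Code
# Набросок: остановить обучение, как только потери на проверочном наборе перестанут уменьшаться
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)
# Обучение новой модели с этим коллбеком могло бы выглядеть так:
# model.fit(partial_x_train, partial_y_train,
#           epochs=40, batch_size=512,
#           validation_data=(x_val, y_val),
#           callbacks=[early_stop], verbose=1)
###Output
_____no_output_____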
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Классификация текста обзоров фильмов Читай на TensorFlow.org Запусти в Google Colab Изучай код на GitHub Note: Вся информация в этом разделе переведена с помощью русскоговорящего Tensorflow сообщества на общественных началах. Поскольку этот перевод не является официальным, мы не гарантируем что он на 100% аккуратен и соответствует [официальной документации на английском языке](https://www.tensorflow.org/?hl=en). Если у вас есть предложение как исправить этот перевод, мы будем очень рады увидеть pull request в [tensorflow/docs](https://github.com/tensorflow/docs) репозиторий GitHub. Если вы хотите помочь сделать документацию по Tensorflow лучше (сделать сам перевод или проверить перевод подготовленный кем-то другим), напишите нам на [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). В этом интерактивном уроке мы построим модель, которая будет классифицировать обзор фильма как *позитивный* или *негативный* на основе текста. Это пример *бинарной* классификации (по двум классам), важной, и широко применяющейся задачи машинного обучения.Мы воспользуемся [датасетом IMDB](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), который содержит тексты 50,000 обзоров фильмов из [Internet Movie Database](https://www.imdb.com/). Они разделены на 25,000 обзоров для обучения, и 25,000 для проверки модели. Тренировочные и проверочные датасеты *сбалансированы*, т.е. содержат одинаковое количество позитивных и негативных обзоров.Данное руководство использует [tf.keras](https://www.tensorflow.org/guide/keras), высокоуровневый API для создания и обучения моделей в TensorFlow. Чтобы сделать более сложную по структуре классификацую текста при помощи `tf.keras`, читай [Руководство по классификации текстов](https://developers.google.com/machine-learning/guides/text-classification/).
###Code
# keras.datasets.imdb is broken in 1.13 and 1.14, by np 1.16.3
!pip install tf_nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Загружаем датасет IMDBДатасет IMDB доступен сразу в TensorFlow при помощи метода `load_data`. Он уже подготовлен таким образом, что обзоры (последовательности слов) были конвертированы в последовательность целых чисел, где каждое целое представляет конкретное слово в массиве.Давай напишем пару строчек кода чтобы загрузить датасет (или автоматически используем копию из кэша, если ты уже скачал этот набор данных):
###Code
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Аргумент `num_words=10000` позволяет нам ограничиться только 10,000 наиболее часто встречающимися словами из тренировочного сета. Все редкие слова исключаются. Это поможет нам держать объем данных в разумных пределах. Знакомимся с даннымиДавай посмотрим какая информация нам доступна. Данные уже подготовлены: каждый пример - это массив целых чисел, которые представляют слова из обзоров. Каждая метка *label* является целым числом 0 или 1:0 - негативный обзор, 1 - позитивный.
###Code
print("Тренировочных записей: {}, меток: {}".format(len(train_data), len(train_labels)))
###Output
_____no_output_____
###Markdown
Текст обзоров уже был конвертирован в целые числа, где каждое целое представляет слово из словаря. Вот пример того, как выглядит первый обзор:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Разные обзоры фильмов также имеют разное количество слов. Код ниже поможет нам узнать количество слов в первом и втором обзоре. Поскольку нейросеть может принимать только данные одинаковой длины, то нам предстоит как-то решить эту задачу.
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
Конвертируем целые обратно в словаНе будет лишним также знать, как конвертировать целые числа из массива обратно в текст. Напишем вспомогательную функцию, с помощью который мы сможем запрашивать из этого словаря объект, который содержит указанные числа и отображать их в виде слов:
###Code
# Назначим словарь, который будет отображать слова из массива данных
word_index = imdb.get_word_index()
# Зарезервируем первые несколько значений
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # Вместо редких слов, не вошедших в набор из 10,000, будет указано UNK
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
###Output
_____no_output_____
###Markdown
Теперь мы можем легко воспользоваться функцией `decode_review` для отображения текста первого обзора фильма:
###Code
decode_review(train_data[0])
###Output
_____no_output_____
###Markdown
Подготавливаем данныеОбзоры фильмов из массива целых чисел должны быть конвертированы в тензоры прежде, чем они будут пропущены через нейросеть. Эта конвертация может быть сделана несколькими способами:* *One-hot encoding* конвертирует массивы в векторы 0 и 1. Например, последовательность [3, 5] станет 10,000-мерным вектором, полностью состоящим из нулей кроме показателей 3 и 5, которые будут представлены единицами. Затем, нам нужно будет создать первый `Dense` слой в нашей сети, который сможет принимать векторые данные с плавающей запятой. Такой подход очень требователен к объему памяти, несмотря на то, что требует указать размеры матрицы `num_words * num_reviews`* Другой способ - сделать все массивы одинаковыми по длине, а затем создать тензор целых чисел с указанием `max_length * num_reviews`. Мы можем использовать *Embedding* (пер. "Встроенный") слой, который может использовать эти параметры в качестве первого слоя нашей сетиВ этом руководстве мы попробуем второй способ.Поскольку все обзоры фильмов должны быть одинаковой длины, то мы используем функцию [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences), чтобы привести все длины к одному значению:
###Code
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
###Output
_____no_output_____
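###Markdown
Для сравнения ниже приведен небольшой набросок первого способа (multi-hot кодирование), о котором говорилось выше. Это лишь иллюстрация: функция `multi_hot_sequences` здесь условная и дальше в руководстве не используется.
###Code
# Набросок multi-hot кодирования (первый способ из описания выше).
# Приведен только для сравнения со способом pad_sequences и дальше не используется.
def multi_hot_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))  # матрица нулей размера (число обзоров, размер словаря)
    for i, indices in enumerate(sequences):
        results[i, indices] = 1.0                    # единицы на позициях встретившихся слов
    return results
# Пример: последовательность [3, 5] превращается в вектор с единицами на позициях 3 и 5
multi_hot_sequences([[3, 5]], dimension=10)
###Output
_____no_output_____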
###Markdown
Давай теперь посмотрим на длину наших примеров:
###Code
len(train_data[0]), len(train_data[1])
###Output
_____no_output_____
###Markdown
А также проверим как выглядит первый стандартизированный по длине обзор фильма:
###Code
print(train_data[0])
###Output
_____no_output_____
###Markdown
Строим модельНейронная сеть создается посредством стека (наложения) слоев - это требует ответов на два вопроса об архитектуре самой модели:* Сколько слоев будет использовано в модели?* Сколько *скрытых блоков* будет использовано для каждого слоя?В этом примере, входные данные состоят из массива слов (целых чисел). Получаемые предсказания будут в виде меток 0 или 1. Давай построим модель, которая будет решать нашу задачу:
###Code
# Размер входных данных - количество слов, использованных в обзорах фильмов (10,000 слов)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
###Output
_____no_output_____
###Markdown
Для создания классификатора все слои проходят процесс стека, или наложения:1. Первый `Embedding` слой принимает переведенные в целые числа слова и ищет соответствующий вектор для каждой пары слово/число. Модель обучается на этих векторах. Векторы увеличивают размер получаемого массива на 1, в результате чего мы получаем измерения: `(batch, sequence, embedding)`2. Следующий слой `GlobalAveragePooling1D` возвращает получаемый вектор заданной длины для каждого примера, усредняя размер ряда. Это позволит модели легко принимать данные разной длины3. Этот вектор пропускается через полносвязный `Dense` слой с 16 скрытыми блоками4. Последний слой также является полносвязным, но с всего одним выходящим нодом. При помощи функции активации `sigmoid` (Сигмоида) мы будем получать число с плавающей запятой между 0 и 1, которое будет показывать вероятность или уверенность модели Скрытые блокиВышеописанная модель имеет 2 промежуточных или *скрытых* слоя, между входом и выходом данных. Количество выходов (блоков, нодов или нейронов) является размером репрезентативного пространства слоя. Другими словами, количество свободы, которая разрешена сети во время обучения.Если модель имеет больше скрытых блоков, и/или больше слоев, то тогда нейросеть может обучиться более сложным представлениям. Однако в этом случае это будет дороже с точки зрения вычислительных ресурсов и может привести к обучению нежелательных паттернов - паттернов, которые улучшают показатели на тренировочных данных, но не на проверочных. Это называется *переобучением*, и мы обязательно познакомимся с этим явлением далее. Функция потерь и оптимизаторДля модели нам необходимо указать функцию потерь и оптимизатор для обучения. Поскольку наша задача является примером бинарной классификации и модель будет показывать вероятность (слой из единственного блока с сигмоидой в качестве функции активации), то мы воспользуемся функцией потерь `binary_crossentropy` (пер. "Перекрестная энтропия").Это не единственный выбор для нашей функции потерь: ты можешь, например, выбрать `mean_squared_error`. Но обычно `binary_crossentropy` лучше справляется с вероятностями - она измеряет "дистанцию" между распределениями вероятностей, или, как в нашем случае, между эталоном и предсказаниями.Далее, по мере знакомства с задачами регрессии (например, предсказание цен на недвижимость), мы посмотрим как использовать другую функцию потерь, которая называется среднеквадратичская ошибка (MSE).А сейчас, настроим нашу модель: мы будем использовать *оптимизатор Адама* и *перекрестную энтропию* для потерь:
###Code
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Создадим проверочный набор данныхВо время обучения мы хотим проверить точность нашей модели на данных, которых она еще не видела. Давай создадим *проверочный сет* данных, выделив 10,000 примеров из оригинального тренировочного сета в отдельный набор.Почему мы не используем проверочный набор прямо сейчас? Наша цель - разработать и настроить нашу модель, используя только данные для обучения, и только потом использовать проверочный сет всего один раз чтобы оценить точность.
###Code
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
###Output
_____no_output_____
###Markdown
Обучаем модельНачнем тренировку нашей модели с 40 эпох при помощи мини-батчей по 512 образцов (*батч* - набор, пакет данных). Это означает, что мы сделаем 40 итераций (или проходов) по всем образцам данных в тензорах `x_train` и `y_train`. После обучения мы узнаем потери и точность нашей модели, показав ей 10,000 образцов из проверочного набора данных:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
###Output
_____no_output_____
###Markdown
Оценим точность моделиТеперь когда обучение прошло успешно, давай посмотрим какие результаты показывает модель.Она будет возвращать 2 значения: потери *loss* (число, которое показывает ошибку, чем оно ниже, тем лучше), и точность *accuracy*.
###Code
results = model.evaluate(test_data, test_labels)
print(results)
###Output
_____no_output_____
###Markdown
Как мы видим, этот достаточно наивный подход достиг точности около 87%. Если бы мы использовали более сложные методы, то модель приблизилась бы к отметке в 95%. Построим временной график точности и потерьМетод `model.fit()` возвращает объект `History`, который содержит все показатели, которые были записаны в лог во время обучения:
###Code
history_dict = history.history
history_dict.keys()
###Output
_____no_output_____
###Markdown
Здесь всего четыре показателя, по одному для каждой отслеживаемой метрики во время обучения и проверки. Мы можем использовать их, чтобы построить графики потерь и точности обоих стадий для сравнения:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" означает "blue dot", синяя точка
plt.plot(epochs, loss, 'bo', label='Потери обучения')
# "b" означает "solid blue line", непрерывная синяя линия
plt.plot(epochs, val_loss, 'b', label='Потери проверки')
plt.title('Потери во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Потери')
plt.legend()
plt.show()
plt.clf() # Очистим график
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']
plt.plot(epochs, acc, 'bo', label='Точность обучения')
plt.plot(epochs, val_acc, 'b', label='Точность проверки')
plt.title('Точность во время обучения и проверки')
plt.xlabel('Эпохи')
plt.ylabel('Точность')
plt.legend()
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/slide_deck-checkpoint.ipynb | ###Markdown
Ford GoBike System Data exploration by Moaz Magdy Investigation Overview> The goal of this presentation is to investigate which features are best for predicting the trip duration. The features of interest were: trip duration, member gender, user type, trip start time, and member age. Dataset Overview> This dataset includes trip information from Ford GoBike, a bike-sharing system in California's San Francisco Bay Area established in 2013. The data includes trips from the Ford GoBike system for Feb. 2019.
###Code
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# suppress warnings from final output
import warnings
warnings.simplefilter("ignore")
# load in the dataset into a pandas dataframe
data = pd.read_csv('modified_data.csv')
###Output
_____no_output_____
###Markdown
Distribution of the main variable of interest: trip duration. The distribution is highly right-skewed with several extreme values. So, we will consider trip durations greater than the maximum whisker (Q3 + 1.5 IQR) as outliers and drop them, then apply a log transformation to the data.
###Code
# Distribution of the duration_min feature
plt.figure(figsize= [8,5])
bin_size = 1
bins = np.arange(0, data.duration_min.max() + bin_size, bin_size)
plt.hist(data.duration_min, bins= bins)
plt.xlim([0,100])
plt.xlabel('Trip duration (Minutes)')
plt.ylabel('Count')
plt.title('Distribution of the Trip duration');
# Calculate Q3
duration_q3 = data.duration_min.quantile(0.75)
# Calculate IQR
duration_IQR = data.duration_min.quantile(0.75) - data.duration_min.quantile(0.25)
# Get the index of outliers
outliers_indx = data.query('duration_min > @duration_q3 + @duration_IQR * 1.5').index
# Drop the outliers
data.drop(index= outliers_indx, inplace= True)
# Distribution of the duration_sec feature after log transformation
bin_size = 0.05
bins = 10 ** np.arange(0, np.log10(data.duration_min.max()) + bin_size, bin_size)
plt.figure(figsize= (8,5))
plt.hist(data.duration_min, bins= bins)
plt.xscale('log')
plt.xticks([0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000], ['0.1', '0.3', '1', '3', '10', '30', '100', '300', '1000'])
plt.xlabel('Trip duration (Minutes)')
plt.ylabel('Count')
plt.title('Distribution of the Trip duration after log transformation and removing outliers');
###Output
_____no_output_____
###Markdown
When plotted on a log scale, the trip duration distribution looks unimodal, with one large peak around 10 minutes. Distribution of user type feature. The proportion plot for the user_type feature shows that about 90% of users are subscribers.
###Code
# Creat a proportion plot for user_type
n_user = data['user_type'].value_counts().sum()
max_user_type = data['user_type'].value_counts()[0]
max_prop = max_user_type/n_user
tick_props = np.arange(0, max_prop + 0.1, 0.1)
tick_names = ['{:0.2f}'.format(v) for v in tick_props]
plt.figure(figsize= (4,5))
sns.countplot(data = data, x='user_type', color= sns.color_palette()[0])
plt.yticks(tick_props * n_user, tick_names)
plt.xlabel('User Type')
plt.ylabel('Proportion')
plt.title('Proportions of customers and subscribers');
###Output
_____no_output_____
###Markdown
Distribution of member gender feature. The proportions plot for the member gender feature shows that about 74% of members are males and 24% are females.
###Code
# Creat a proportion plot for member_gender
n_user = data['member_gender'].value_counts().sum()
max_user_gender = data['member_gender'].value_counts()[0]
max_prop = max_user_gender/n_user
tick_props = np.arange(0, max_prop + 0.05, 0.05)
tick_names = ['{:0.2f}'.format(v) for v in tick_props]
plt.figure(figsize= (4,5))
sns.countplot(data = data, x='member_gender', color= sns.color_palette()[0])
plt.yticks(tick_props * n_user, tick_names)
plt.xlabel('Member Gender')
plt.ylabel('Proportion')
plt.title('Proportions of user gender');
###Output
_____no_output_____
###Markdown
Distribution of trip start time. The histogram of the number of trips over the day shows that the distribution is bimodal, with one big peak around 8 am (1 hour after sunrise) and one big peak around 5 pm (1 hour before sunset).
###Code
# Histogram for number of trips over 'time_of_day'.
bin_size = 1
bins = np.arange(-0.5, data.time_of_day.max() + bin_size, bin_size)
plt.figure(figsize= (12,5))
sns.distplot(data['time_of_day'], color= sns.color_palette()[0], kde= False)
# plt.hist(data.time_of_day, color= base_color, bins= bins, )
plt.xlabel('Time of day (hour)')
plt.title('Distribution of trip start time over the day')
xticks = np.arange(0, 24,1)
plt.xticks(xticks, ['{}'.format(v) for v in xticks])
sunrise_avg = 7 #Source: (https://www.sunrise-and-sunset.com/en/sun/united-states/california__mo/2019/february)
sunset_avg = 17.8 #Source: (https://www.sunrise-and-sunset.com/en/sun/united-states/california__mo/2019/february)
# plot a vertical time to indicate the sunrise.
plt.axvline(x = sunrise_avg, ymin=0 , ymax = 20000, color = 'red', linestyle = '-', label = 'Sunrise');
# plot a vertical time to indicate the sunset.
plt.axvline(x = sunset_avg, ymin=0 , ymax = 20000, color = 'red', linestyle = '--', label = 'Sunset');
plt.legend();
###Output
_____no_output_____
###Markdown
Distribution of member age. The distribution of member age is right-skewed, with most users below 40 years, few users between 40 and 60, and very few users above 60.
###Code
# Histogram for member_age
plt.figure(figsize= [8,5])
bin_size = 2
bins = np.arange(data.member_age.min(), data.member_age.max() + bin_size, bin_size)
plt.hist(x= data['member_age'],bins= bins)
plt.xlabel('Member Age')
plt.ylabel('Count')
plt.title('Distribution of member age');
###Output
_____no_output_____
###Markdown
Effect of member gender on trip duration. Member gender seems to have a little effect on trip durations, such that females tend to ride slightly longer than males.
###Code
# Create a boxplot between member_gender and trip duration
plt.figure(figsize= (5,6))
sns.boxplot(data= data, x= 'member_gender', y= 'duration_min', color= sns.color_palette()[0])
plt.xlabel('Member Gender')
plt.ylabel('Trip duration (Minutes)');
###Output
_____no_output_____
###Markdown
Effect of user type on trip duration. User type seems to have a strong effect on trip duration. The trip duration for customers is longer than that for subscribers regardless of day of week.
###Code
# Create a boxplot between day_of_week, duration_min, and user_type.
plt.figure(figsize= (14,6))
sns.boxplot(data= data, x = 'day_of_week', y= 'duration_min', hue= 'user_type',
order = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
plt.legend(loc = [1,0.8], title= 'User Type')
plt.xlabel('Day of Week')
plt.ylabel('Trip duration (Minutes)')
plt.title('Relationship between day of week vs. trip duration vs. user type');
###Output
_____no_output_____
###Markdown
Effect of member age on trip duration. Surprisingly, the member age doesn't seem to have an effect on the trip duration.
###Code
plt.figure(figsize=[8,5])
sns.regplot(data= data.sample(500), x='member_age', y= 'duration_min')
plt.xlabel('Member Age')
plt.ylabel('Trip duration (Minutes)')
plt.title('Regression plot for Member age vs. Trip duration');
###Output
_____no_output_____
###Markdown
Prosper Loan Data Exploration by Arshi Saleh Investigation Overview My goal in this investigation is to help an investor determine the key factors to keep in mind while making investments using Prosper, which is an online personal loan service based on peer-to-peer lending. In most cases investors are looking for higher returns, which come with higher risks. At the end of the investigation I want to be able to help investors get better returns with a relatively lower risk. Dataset Overview The Prosper loan data set has 113937 rows and 81 columns. I will be working with 32 columns, which are a mix of numeric and categorical datatypes, and one column has a bool datatype. There are 112935 rows in the dataset after the initial restructuring, which includes melting the columns CreditGrade and ProsperRating (Alpha) into one (ProsperCreditRating), as both of them provide ratings: the first for the time period before 2009 and the second for the period after July 2009. We have created the CreditScore feature using the average of CreditScoreRangeUpper & CreditScoreRangeLower.
###Code
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
# suppress warnings from final output
import warnings
warnings.simplefilter("ignore")
# load in the dataset into a pandas dataframe
prosper_loan=pd.read_csv('prosperLoanData.csv')
#CreditGrade and ProsperRating (Alpha) both provide the borrower ratings
#CreditGrade is pre 2009 and ProsperRating is post July,2009
#Hence to get a tidy dataset we can melt the two columns
#create a copy of the dataset
prosper_loan_sub=prosper_loan.copy()
#remove extra columns and melt into a single column
prosper_loan_sub= pd.melt(prosper_loan_sub, id_vars=['ListingKey','Term','LoanStatus','LenderYield','EstimatedReturn','Occupation','EmploymentStatus',
'LoanOriginalAmount','ListingCreationDate','ListingCategory (numeric)',
'CreditScoreRangeLower','CreditScoreRangeUpper','AmountDelinquent','TotalTrades',
'DelinquenciesLast7Years','InquiriesLast6Months','BankcardUtilization','DebtToIncomeRatio','IncomeRange',
'IncomeVerifiable','StatedMonthlyIncome','OnTimeProsperPayments','PublicRecordsLast10Years',
'PublicRecordsLast12Months','ProsperPrincipalBorrowed','ProsperPrincipalOutstanding',
'TotalCreditLinespast7years','FirstRecordedCreditLine','TotalProsperLoans','ProsperPaymentsLessThanOneMonthLate',
'ProsperPaymentsOneMonthPlusLate'],
value_vars =['CreditGrade', 'ProsperRating (Alpha)'], var_name='stages', value_name='ProsperCreditRating')
prosper_loan_sub=prosper_loan_sub.drop(['stages'], axis = 1)
#drop duplicate rows
prosper_loan_sub = prosper_loan_sub.dropna(subset=['ProsperCreditRating']).drop_duplicates('ListingKey')
#reset index
prosper_loan_sub.reset_index(drop=True, inplace=True)
#change the name of the column 'ListingCategory (numeric)'
prosper_loan_sub.rename(columns={'ListingCategory (numeric)':'ListingCategory'}, inplace=True)
#Merge categories Full-time and Part-time with category Employed
prosper_loan_sub["EmploymentStatus"].replace({"Full-time": "Employed", "Part-time": "Employed"}, inplace=True)
#the values in the ListingCategory are in numeric form, I have converted to string so that we get a better idea about
#the various listing categories
prosper_loan_sub["ListingCategory"].replace({0: "Not Available", 1: "Debt Consolidation", 2:"Home Improvement"
,3:"Business", 4:"Personal Loan", 5:"Student Use",
6:"Auto", 7:"Other", 8:"Baby&Adoption", 9:"Boat",
10:"Cosmetic Procedure", 11:"Engagement Ring",12:"Green Loans",
13:"Household Expenses", 14:"Large Purchases",
15:" Medical/Dental", 16:"Motorcycle", 17:" RV", 18:"Taxes",
19:"Vacation", 20:"Wedding Loans"}, inplace=True)
#convert column IncomeRange into ordered categorical data for ordinal data
# this method requires pandas v0.21 or later
level_order = ["$100,000+", "$75,000-99,999", "$50,000-74,999", "$25,000-49,999",
"$1-24,999", "$0", "Not employed","Not displayed"]
ordered_cat_income = pd.api.types.CategoricalDtype(ordered = True, categories = level_order)
prosper_loan_sub['IncomeRange'] = prosper_loan_sub['IncomeRange'].astype(ordered_cat_income)
#convert column ProsperCreditRating into ordered categorical data for ordinal data
# this method requires pandas v0.21 or later
level_order = ["AA", "A", "B", "C", "D", "E","HR","NC"] #here HR stands for high risk(credit score is below 540) and NC stands for No Score
ordered_cat_rating = pd.api.types.CategoricalDtype(ordered = True, categories = level_order)
prosper_loan_sub['ProsperCreditRating'] = prosper_loan_sub['ProsperCreditRating'].astype(ordered_cat_rating)
#find the average CreditScore using the columns CreditScoreRangeLower and CreditScoreRangeUpper
prosper_loan_sub['CreditScore']=(prosper_loan_sub['CreditScoreRangeLower']+prosper_loan_sub['CreditScoreRangeUpper'])/2
###Output
_____no_output_____
###Markdown
Loan Status Distribution (proportion) We can observe that more than 45% of the loans are current, followed by more than 30% completed loans. Loans that have defaulted are around 10% and those charged off are around 5%. The proportion of borrowers who are late in their payments is quite small. This builds investor confidence.
###Code
# get proportion taken by most common group for derivation
# of tick marks
n_points = prosper_loan_sub.shape[0]
max_count = prosper_loan_sub['LoanStatus'].value_counts().max()
max_prop = max_count / n_points
# generate tick mark locations and names
tick_props = np.arange(0, max_prop, 0.05)
tick_names = ['{:0.2f}'.format(v) for v in tick_props]
#plot and explore the LoanStatus column
plt.figure(figsize = [12, 5])
base_color = sb.color_palette()[0]
loan_status_order = prosper_loan_sub['LoanStatus'].value_counts().index
sb.countplot(data = prosper_loan_sub, x = 'LoanStatus', color = base_color, order = loan_status_order)
plt.yticks(tick_props * n_points, tick_names)
plt.ylabel('Proportion')
plt.title('LoanStatus Distribution (proportion)')
plt.xticks(rotation = 90);
###Output
_____no_output_____
###Markdown
Distribution of CreditScore, ProsperCreditRating, IncomeRange and LoanOriginalAmount for LoanStatus='Completed' To have a better understanding of the borrowers that have successfully closed their loans, I have explored the CreditScore, ProsperCreditRating, IncomeRange and LoanOriginalAmount for borrowers who have already closed their loans. We can see that CreditScore peaks at around 700, more borrowers have ProsperCreditRatings of D and C, the income range mostly lies above 25,000 dollars, and the loan original amount is less than 5,000 dollars in most cases.
###Code
#extract rows which have LoanStatus = Completed
loan_completed=prosper_loan_sub.query('LoanStatus == "Completed"')
#set figure size
plt.figure(figsize = [20,15])
default_color = sb.color_palette()[0]
#plot the creditscore for completed loans
plt.subplot(2, 2, 1) #2 row, 2 column, subplot 1
bin_edges = np.arange(loan_completed['CreditScore'].min(), loan_completed['CreditScore'].max()+10, 10)
plt.hist(data = loan_completed, x = 'CreditScore', color = default_color, bins=bin_edges)
plt.xlabel('CreditScore', fontsize=15)
plt.ylabel('Frequency', fontsize=15)
plt.title('CreditScore Histogram for completed Loans', fontsize=15)
#plot the ProsperCreditRating foe completed loans
plt.subplot(2, 2, 2) #2 row, 2 column, subplot 2
sb.countplot(data = loan_completed, x = 'ProsperCreditRating', color = default_color)
plt.title('Prosper Credit Rating for completed Loans', fontsize=15)
#plot the Income Range for the completed loans
plt.subplot(2, 2, 3) #2 row, 2 column, subplot 3
sb.countplot(data = loan_completed, x = 'IncomeRange', color = default_color)
plt.title('Income Range for completed Loans', fontsize=15)
plt.xticks(rotation=30)
#plot the LoanOriginal Amount for completed loans
plt.subplot(2, 2, 4) #2 row, 2 column, subplot 4
bin_edges = np.arange(loan_completed['LoanOriginalAmount'].min(), loan_completed['LoanOriginalAmount'].max()+100, 500)
plt.hist(data = loan_completed, x = 'LoanOriginalAmount', color = default_color, bins=bin_edges)
plt.xlabel('LoanOriginalAmount', fontsize=15)
plt.ylabel('Frequency', fontsize=15)
plt.title('LoanOriginalAmount Histogram for completed Loans', fontsize=15)
;
###Output
_____no_output_____
###Markdown
IncomeRange v/s Average Debt To Income Ratio & Average ProsperPaymentsOneMonthPlusLate Borrowers who are not employed have the highest debt-to-income ratio, followed by the 1-24,999 dollar income range. Borrowers in the 75,000-99,999 and 50,000-74,999 dollar income ranges have a low average debt-to-income ratio and a smaller average ProsperPaymentsOneMonthPlusLate value. We can also observe that the error bars are longer for the not-employed group in both plots.
###Code
#plot the IncomeRange v/s Average Debt To Income Ratio & Average ProsperPaymentsOneMonthPlusLate
#set the base color and size a barplot
plt.figure(figsize = [15, 5])
base_color = sb.color_palette()[0]
#plot Debt to Income Ratio and Income Range
#subplot 1
plt.subplot(1, 2, 1)
sb.barplot(data = prosper_loan_sub, x = 'IncomeRange', y = 'DebtToIncomeRatio', color = base_color)
plt.ylabel('Average Debt to Income Ratio')
plt.title('Avg Debt to Income Ratio v/s Income Range')
plt.xticks(rotation = 90);
#plot ProsperPaymentsOneMonthPlusLate and IncomeRange
#subplot 2
plt.subplot(1, 2, 2)
sb.barplot(data = prosper_loan_sub, x = 'IncomeRange', y = 'ProsperPaymentsOneMonthPlusLate', color = base_color)
plt.ylabel('Average ProsperPaymentsOneMonthPlusLate')
plt.title('Avg ProsperPaymentsOneMonthPlusLate v/s Income Range')
plt.xticks(rotation = 90);
###Output
_____no_output_____
###Markdown
ProsperCreditRating v/s Estimated Return and ProsperCreditRating v/s LenderYield We can see that Estimated Return has better returns for ProsperRatings B, C and D. HR has a higher peak than B and C, but it also has a huge number of outliers with low and negative results. LenderYield gives us a different picture where HR has the best returns, followed by E and D. Since LenderYield does not consider the estimated loss, the risk factor of the HR category is not clearly visible in this plot.
###Code
#plot ProsperCreditRating v/s Estimated Return and ProsperCreditRating v/s LenderYield as subplots to compare
plt.figure(figsize = [15, 6])
base_color = sb.color_palette()[1]
#plot ProsperCreditRating v/s Estimated Return
plt.subplot(1, 2, 1)
g=sb.violinplot(data = prosper_loan_sub, x = 'EstimatedReturn', y = 'ProsperCreditRating', color = base_color,
inner = None)
g.set_title('ProsperCreditRating v/s Estimated Return')
#plot ProsperCreditRating v/s LenderYield
plt.subplot(1, 2, 2)
g=sb.violinplot(data = prosper_loan_sub, x = 'LenderYield', y = 'ProsperCreditRating', color = base_color,
inner = None)
g.set_title('ProsperCreditRating v/s LenderYield');
###Output
_____no_output_____
###Markdown
LoanOriginalAmount v/s mean of Estimated Return We can see from this plot that the estimated return is higher for smaller values of LoanOriginalAmount, with peaks around $5,000 and $7,000-8,000, but the standard error of the mean is larger for amounts greater than 30,000 dollars.
###Code
#plot a line plot for LoanOriginalAmount v/s mean of Estimated Return
plt.figure(figsize = [10, 6])
# set bin edges, compute centers
bin_size = 1000
xbin_edges = np.arange(prosper_loan_sub['LoanOriginalAmount'].min(), prosper_loan_sub['LoanOriginalAmount'].max()+bin_size, bin_size)
xbin_centers = (xbin_edges + bin_size/2)[:-1]
# compute statistics in each bin
#https://www.geeksforgeeks.org/python-pandas-dataframe-sem/
data_xbins = pd.cut(prosper_loan_sub['LoanOriginalAmount'], xbin_edges, right = False, include_lowest = True)
y_means = prosper_loan_sub['EstimatedReturn'].groupby(data_xbins).mean()
y_sems = prosper_loan_sub['EstimatedReturn'].groupby(data_xbins).sem() #calculate the standard error
# plot the summarized data
with sb.color_palette("copper"): #https://seaborn.pydata.org/generated/seaborn.color_palette.html
with sb.axes_style("whitegrid"):
plt.errorbar(x = xbin_centers, y = y_means, yerr = y_sems)
plt.xlabel('LoanOriginalAmount ($)')
plt.ylabel('EstimatedReturn')
plt.title('LoanOriginalAmount v/s mean of Estimated Return');
###Output
_____no_output_____
###Markdown
LoanOriginalAmount v/s AmountDelinquent v/s DelinquenciesLast7Years We can observe that borrowers with lower values of LoanOriginalAmount have more delinquencies in the last 7 years compared to borrowers who had high values of original loan amounts. A similar relationship is observed between LoanOriginalAmount and AmountDelinquent: borrowers with lower values of original loan amount have a higher amount delinquent compared to borrowers with larger original loan amounts. Most of the borrowers having a high number of delinquencies in the last 7 years have relatively low delinquent amounts. The probable reason can be that when lending higher amounts there is a tighter check on the financials of the borrower.
###Code
# plot a pairgrid plot to explore relationship between LoanOriginalAmount, AmountDelinquent and DelinquenciesLast7Years
borrower_stats=['LoanOriginalAmount', 'AmountDelinquent','DelinquenciesLast7Years']
with sb.color_palette("RdBu"):
g = sb.PairGrid(data = prosper_loan_sub, vars = borrower_stats,
hue_kws={"marker": ["s"]}) #https://seaborn.pydata.org/generated/seaborn.PairGrid.html
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter, edgecolor="w", s=40, linewidths=0.5)
#set the title of the plot
g.fig.suptitle('Relation between LoanOriginalAmount v/s AmountDelinquent v/s DelinquenciesLast7Years', fontsize=12)
g.fig.subplots_adjust(top=.9); #https://stackoverflow.com/questions/28638158/seaborn-facetgrid-how-to-leave-proper-space-on-top-for-suptitle
###Output
_____no_output_____
###Markdown
LoanStatus v/s Occupation v/s Estimated Return This plot is very interesting: we can see that administrative assistants have higher estimated returns for all categories, and LoanStatus "Chargedoff" and "Defaulted" have higher estimated returns compared to other LoanStatus categories. I have included Other in this chart, which was previously excluded in the univariate exploration because there are a huge number of borrowers who do not disclose their occupation. In this plot we can observe that Professional and Other have similar boxplots in all cases except FinalPaymentInProgress. This reflects that Professional is also a generalised category like Other.
###Code
#group all the different PastDue LoanStatus into single category PastDue
prosper_loan_sub["LoanStatus"].replace({"Past Due (1-15 days)": "PastDue", "Past Due (31-60 days)": "PastDue",
"Past Due (61-90 days)": "PastDue", "Past Due (91-120 days)": "PastDue",
"Past Due (16-30 days)": "PastDue", "Past Due (>120 days)": "PastDue"}, inplace=True)
#exclude "Cancelled" from the LoanStatus
loan_df=prosper_loan_sub[prosper_loan_sub['LoanStatus'].isin(["Current","Completed","Chargedoff","Defaulted","PastDue","FinalPaymentInProgress"])]
#set the order for occupation and we will limit this variable to the top five most frequent occupations
occupation_order=prosper_loan_sub['Occupation'].value_counts().head(6).index
#facet bivariate plots to create a multivariate visualization
#plot boxplot for LoanStatus v/s Occupation v/s Estimated Return
with sb.axes_style("whitegrid"):
g = sb.FacetGrid(data = loan_df, col = 'LoanStatus', height = 4, col_wrap = 3)
g.map(sb.boxplot, 'Occupation', 'EstimatedReturn',order=occupation_order)
g.fig.suptitle('LoanStatus v/s Occupation v/s Estimated Return', fontsize=12)
g.fig.subplots_adjust(top=.9)
#https://www.drawingfromdata.com/how-to-rotate-axis-labels-in-seaborn-and-matplotlib
for axes in g.axes.flat:
axes.set_xticklabels(axes.get_xticklabels(), rotation=90, horizontalalignment='right');
###Output
_____no_output_____
###Markdown
Explore Relation between IncomeRange, ProsperCreditRating, ProsperPaymentsOneMonthPlusLate, ProsperPaymentsLessThanOneMonthLate, LoanOriginalAmount and InquiriesLast6Months We can observe that the income ranges 50,000-74,999 and 25,000-49,999 dollars and ProsperCreditRating D and E have the most outliers in relation to ProsperPaymentsOneMonthPlusLate. InquiriesLast6Months has the most outliers for ProsperCreditRating HR and IncomeRange Not Displayed, followed by 50,000-74,999 dollars. ProsperPaymentsLessThanOneMonthLate has the most outliers for ProsperCreditRating AA, followed by HR and D, and for IncomeRange 50,000-74,999, 75,000-99,999 and 100,000+ dollars.
###Code
#explore relationship between numerical & categorical variables using pairgrid
#plot a violin plot to understand the relationship between IncomeRange, ProsperCreditRating ProsperPaymentsOneMonthPlusLate, ProsperPaymentsLessThanOneMonthLate
#LoanOriginalAmount and InquiriesLast6Months
#https://seaborn.pydata.org/generated/seaborn.PairGrid.html
g = sb.PairGrid(data = prosper_loan_sub, x_vars = ['ProsperPaymentsOneMonthPlusLate', 'InquiriesLast6Months',
'ProsperPaymentsLessThanOneMonthLate', 'LoanOriginalAmount'],
y_vars = ['IncomeRange','ProsperCreditRating'])
g.map(sb.violinplot, inner = 'quartile')
g.fig.suptitle('IncomeRange v/s ProsperCreditRating v/s ProsperPaymentsOneMonthPlusLate v/s ProsperPaymentsLessThanOneMonthLate v/s LoanOriginalAmount and InquiriesLast6Months',
fontsize=12)
g.fig.subplots_adjust(top=.9);
###Output
_____no_output_____ |
_rmd/extra_binom_power/binomial_power.ipynb | ###Markdown
Power calculations for a binomial proportion Background In an ideal statistical world no hypothesis test would be run before a [power analysis](https://en.wikipedia.org/wiki/Power_of_a_test) has been carried out to determine a reasonable [sample size](https://en.wikipedia.org/wiki/Sample_size_determination). Most researchers carrying out statistical hypothesis testing only focus on the type-I error: the probability of erroneously rejecting a null hypothesis. Since the null hypothesis is synonymous with the "no effects" regime, this amounts to controlling the number of spurious discoveries. Sophisticated techniques have also been developed to control the type-I error under the problem of [multiple comparisons](https://en.wikipedia.org/wiki/Multiple_comparisons_problem). This focus on type-I error leads to a significant problem in academic research: inflated effect size estimates. Unfortunately, advanced statistical methods give researchers a false confidence that their "significant" discoveries are "true", in the sense that they are probably not *completely* spurious. The power of the test is the probability of rejecting the null when it is false. In the "no effects" regime this is equivalent to correctly concluding there is some real effect. Power is inversely related to effect size bias because results that are statistically significant go through a filter: only realized statistics below a certain p-value are examined. If the power of a test is large, then the distribution of the statistic conditional on statistical significance will be similar to its unconditional distribution.[[^1]] Generally speaking, a test with a large power, say 80%, will have a fairly small effect size bias. Power analysis is by no means absent from empirical research. It is essential for grant funding applications in biomedical research and the design of clinical trials. I suspect that as statistics have moved towards "big data" and "analytics" the cultural emphasis on a well-designed statistical procedure has shifted to "mining" for hidden results. In such situations estimating the power is impossible because it's not even clear what the hypothesis test is! Another reason power is less focused on is that it is harder to carry out. A researcher needs to make several assumptions that are normally not needed for calculating type-I errors, including the estimated signal size and sometimes other nuisance parameters. This post is about how to calculate the number of samples that will be needed to obtain a specified power for a test of a binomial proportion. Anytime there is a binary outcome which is being measured in aggregate (between groups, systems, cities, etc.) we can ask whether this proportion is different than some amount. For example, is there a difference in the rate of high-school completion between sexes, or does a medical test have a better true positive rate than an existing tool? Constructing [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) for [binomial proportions](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval) is statistically tricky because a binomial proportion's distribution is often unknown. Asymptotically this proportion will actually have a normal distribution, and this normal approximation is often used in practice. This post will provide analytic ways of calculating the sample size for binomial confidence intervals for both normal and more advanced methods. There are several key recommendations.
First, under no circumstances should a naive version of the normal approximation method be used. Preliminaries To quickly establish some notation that will be used throughout the post: we are considering the proportion of successes in a binomial trial: $p=y / n$, where $y \sim \text{Binom}(\pi, n)$. Therefore $E(p)=\pi$ and $Var(p)=\pi(1-\pi)/n = \sigma^2_\pi$. Asymptotically it is known that $p$ can be transformed so that it is normally distributed: $z=(p - \pi)/\sigma_\pi \sim N(0,1)$. Suppose we measure the proportions from two different IID groups 1 and 2: $p_1$ and $p_2$, $E(p_i)=\pi_i$, with $n_1$ and $n_2$ samples. If we want to establish whether group 1 has a higher proportion than group 2, we will specify a null and alternative hypothesis as follows:$$\begin{align*}H_0 &: \pi_1 \leq \pi_2 \\H_A &: \pi_1 > \pi_2, \hspace{3mm} \pi_1 - \pi_2 = \Delta > 0 \\z_0 &= \frac{p_1 - p_2}{\sigma_{\pi_0}} \\\sigma_{\pi}^2 &= \frac{\pi_1(1-\pi_1)}{n_1} + \frac{\pi_2(1-\pi_2)}{n_2} \\\sigma_{\pi_0}^2 &= \sigma_{\pi}^2 | H_0 = \frac{\pi_1(1-\pi_1)}{n}(w_1 + w_2) \\n &= n_1+n_2, \hspace{3mm} w_i = n / n_i\end{align*}$$Note that in the null I assume $\pi_1=\pi_2$ rather than $\pi_1 < \pi_2$. If $z_0 > t_\alpha$ we will reject the null, where $P(z_0 > t_\alpha | H_0) = \alpha$. The type-II error will be denoted as $\beta$ so that $P(z_A > t_\alpha | H_A) = 1-\beta$ is the power. (1) Normal approximation approach Assume that $n$ is large enough that we can use the z-score formula seen in the preliminaries. Then it is easy enough to derive an approximation for the power calculation as follows:$$\begin{align*}z_A = z | H_A &= \frac{p_1 - p_2 - \Delta}{\sigma_{\pi_A}} \sim N(0,1) \\\sigma_{\pi_A}^2 &= \frac{\pi_1(1-\pi_1)}{n_1} + \frac{(\pi_1-\Delta)(1-(\pi_1-\Delta))}{n_2} \\&= \sigma_{\pi_0}^2 + \epsilon, \hspace{3mm} \epsilon=\frac{\Delta(2\pi_1-1)-\Delta^2}{n_2}\end{align*}$$Because $\epsilon(\Delta,\pi_1,n_2)$ is $O(n_2^{-1})$ it is often ignored in the analysis since for large enough $n_2$ it is basically zero. If one sets $\epsilon=0$, then there is a closed-form solution to solving for a specific power:$$\begin{align*}P( z > t_\alpha) &= P\Bigg( \frac{p_1 - p_2}{\sigma_{\pi_0}} > t_\alpha \Bigg) \\&= P\Bigg( \frac{p_1 - p_2}{\sigma_{\pi_0}}-\frac{\Delta}{\sigma_{\pi_0}}+\frac{\Delta}{\sigma_{\pi_0}} > t_\alpha | H_A \Bigg) \\&= P\Bigg( z_A > t_\alpha - \frac{\Delta}{\sigma_{\pi_0}} | H_A \Bigg) \\1-\beta &= \Phi(\Delta / \sigma_{\pi_0}-t_\alpha) \\\frac{1}{\sigma_{\pi_0}}&= \frac{(\Phi^{-1}_{1-\beta} + t_\alpha)}{\Delta} \\\frac{n_1n_2}{n} &= \frac{\pi_1(1-\pi_1)(\Phi^{-1}_{1-\beta} + t_\alpha)^2}{\Delta^2}\end{align*}$$If the two groups have the same sample size: $n_1 = n_2$, then the exact formula is:$$\begin{align}n^* &= 2\cdot \frac{\pi_1(1-\pi_1)(\Phi^{-1}_{1-\beta} + t_\alpha)^2}{\Delta^2} \label{eq:simple1}\end{align}$$The terms in equation \eqref{eq:simple1} are intuitive. Higher power ($\Phi^{-1}_{1-\beta}$) or a lower type-I error rate ($t_\alpha$) require an increasing number of samples, the latter because the threshold to become significant is higher. Increases in the binomial variance ($\pi(1-\pi)$) or a smaller difference in the proportions ($\Delta$) require more samples as there is more noise and less signal, respectively. Though equation \eqref{eq:simple1} is elegant, it has several limitations. First, its variance is slightly off since it ignores the $\epsilon$ term. Second, it assumes an equal number of samples between groups 1 and 2.
Instead it is much better to analytically solve for $n_1$,$$\begin{align}\arg\min_{n_1} \hspace{3mm} \big[ 1/\sigma_{\pi_A}(n_1,n_2,\pi_1,\Delta) - (\Phi^{-1}_{1-\beta} + t_\alpha)/\Delta \big]^2 \label{eq:argmin1}\end{align}$$Where $n_2$, $\pi_1$, $\Delta$, $\alpha$, and $\beta$ are treated as fixed parameters and the power can be directly targeted by $n_1$ as equation \eqref{eq:argmin1} shows. The code block below will compare approaches \eqref{eq:simple1} and \eqref{eq:argmin1} to see how well their predicted power lines up with reality.
###Code
import os
import numpy as np
import pandas as pd
from scipy import stats
from scipy.stats import norm
import plotnine
from plotnine import *
seed = 1234 # Use throughout post
from scipy.optimize import minimize_scalar
from time import time
np.seterr(invalid='ignore')
def sig_n12(n1, pi1, Delta, n2=None):
pi2 = pi1 - Delta
assert (pi2 > 0)
if n2 is None:
n2 = n1
else:
n2 = np.log(n2)
return np.sqrt(pi1*(1-pi1)/np.exp(n1) + pi2*(1-pi2)/np.exp(n2))
def power_eq1(pi1, Delta, alpha, power):
return 2*pi1*(1-pi1)*(norm.ppf(power) + norm.ppf(1-alpha))**2 / Delta**2
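# Quick sanity check of equation (1) with illustrative numbers (not from the original post):
# pi1=0.5, Delta=0.1, alpha=0.05 and 80% power should require roughly 309 samples per group.
assert abs(power_eq1(pi1=0.5, Delta=0.1, alpha=0.05, power=0.8) - 309.13) < 0.5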
def power_eq2(pi1, Delta, alpha, power, n2=None):
"""
POWER FOR ONE-SIDED HYPOTHESIS TEST, DELTA>0
Delta=E(p1)-E(p2), alpha=E(z>t|H_0), beta=E(z<t|H_0)
"""
rhs = (norm.ppf(power) + norm.ppf(1-alpha))/Delta
n1star = np.exp(minimize_scalar(fun=lambda x: (1/sig_n12(x, pi1=pi1, Delta=Delta, n2=n2) - rhs)**2).x)
n2star = n2
if n2 is None:
n2star = n1star
lhs = 1/sig_n12(n1=np.log(n1star), pi1=pi1, Delta=Delta, n2=n2star)
assert np.abs(lhs - rhs) < 1e-5
return n1star, n2star
nsim = 100000
alpha = 0.05
t_alpha = norm.ppf(1-alpha)
pi_seq = np.round(np.arange(0.25,0.51,0.05),2)
delta_seq = np.round(np.arange(0.05,0.25,0.05),2)
power_seq = np.round(np.arange(0.25,1,0.05),2)
np.random.seed(seed)
count = 0
holder = []
for pi in pi_seq:
for delta in delta_seq:
for power in power_seq:
count += 1
n_eq1 = power_eq1(pi, delta, alpha, power)
n_eq2 = power_eq2(pi, delta, alpha, power)[0]
# Generate data for different groups
phat1_g1 = np.random.binomial(n_eq1, pi, size=nsim)/n_eq1
phat1_g2 = np.random.binomial(n_eq1, pi-delta, size=nsim)/n_eq1
phat2_g1 = np.random.binomial(n_eq2, pi, size=nsim)/n_eq2
phat2_g2 = np.random.binomial(n_eq2, pi-delta, size=nsim)/n_eq2
se1 = np.sqrt(phat1_g1*(1-phat1_g1)/n_eq1 + phat1_g2*(1-phat1_g2)/n_eq1)
se2 = np.sqrt(phat2_g1*(1-phat2_g1)/n_eq2 + phat2_g2*(1-phat2_g2)/n_eq2)
z1 = (phat1_g1-phat1_g2)/se1
z2 = (phat2_g1-phat2_g2)/se2
p1, p2 = np.mean(z1 > t_alpha), np.mean(z2 > t_alpha)
tmp = pd.DataFrame({'pi':pi, 'Delta':delta,'power':power,'p1':p1, 'p2':p2, 'n1':n_eq1, 'n2':n_eq2},index=[count])
holder.append(tmp)
sim1 = pd.concat(holder).melt(['pi','Delta','power'],None,'method')
sim1 = sim1.assign(metric=lambda x: x.method.str[0], approach=lambda x: x.method.str[1]).drop(columns = 'method')
sim1 = sim1.pivot_table('value',['pi','Delta','power','approach'],'metric').reset_index()
plotnine.options.figure_size = (8,4)
gg_sim1 = (ggplot(sim1, aes(x='power',y='p',color='Delta')) + theme_bw() +
geom_jitter(random_state=seed, height=0,width=0.005,alpha=0.75,size=2) +
labs(x='Expected power', y='Actual power') +
geom_abline(slope=1, intercept=0,linetype='--',color='blue') +
facet_wrap('~approach',labeller=labeller(approach={'1':'Eq. (1)','2':'Eq. (2)'})) +
ggtitle('Figure 1: Comparison of normal approximation power calculations') +
scale_color_continuous(name=' Δ'))
print(gg_sim1)
plotnine.options.figure_size = (7,3.5)
gg_power_n = (ggplot(sim1.assign(err=lambda x: x.p - x.power), aes(x='np.log(n)',y='err')) + theme_bw() +
geom_point() +
labs(x='log(n)', y='Actual less expected power') +
facet_wrap('~approach',labeller=labeller(approach={'1':'Eq. (1)','2':'Eq. (2)'})) +
ggtitle('Figure 2: Power calculation accuracy and sample size'))
print(gg_power_n)
###Output
_____no_output_____
###Markdown
Figure 1 shows that equation \eqref{eq:simple1} over predicts the number of samples that are needed (actual power exceeds predicted power), especially for large values of $\Delta$. This is to be expected since the variance formula ignores this term. In contrast \eqref{eq:argmin1} provides much tighter predictions. However both approaches suffer from inaccurate power estimates for small sample sizes, as Figure 2 shows. This is to be expected because the normal approximation becomes increasingly imprecise for small sample sizes. The underlying reason for the discrepancy is that the lower-bound does not provide the actual coverage as expected:$$\begin{align*}&\text{Coverage of lower bound} \\P(\hat p_1 &> \hat p_2 - \sigma_{\hat \pi} \cdot t_\alpha | H_0 ) \neq 1-\alpha, \text{when $n$ is small}\end{align*}$$The simulations below show the coverage errors as a function of sample size using equation \eqref{eq:argmin1}.
###Code
nsim = 100000
pi_seq = np.round(np.arange(0.01,1,0.01),2)
n_seq = 2**np.arange(7,11,1)
holder = []
for pi in pi_seq:
for n in n_seq:
phat1 = np.random.binomial(n, pi, size=nsim)/n
phat2 = np.random.binomial(n, pi, size=nsim)/n
se = np.sqrt(phat1*(1-phat1)/n + phat2*(1-phat2)/n)
coverage = np.mean(phat1 > phat2 - se*t_alpha)
tmp = pd.DataFrame({'pi':pi,'n':n,'coverage':coverage},index=[0])
holder.append(tmp)
sim_coverage = pd.concat(holder).reset_index(None,True).assign(n=lambda x: pd.Categorical(x.n, x.n.unique()))
plotnine.options.figure_size = (5, 4)
gg_coverage = (ggplot(sim_coverage, aes(x='pi',y='coverage',color='n'))+
theme_bw() + geom_line() +
labs(x='π',y='Coverage') +
ggtitle('Figure 3: Coverage of Gaussian approx. lower bound\nFor the 95% level') +
scale_color_discrete(name='Sample size') +
theme(legend_position=(0.5,0.35)) +
scale_y_continuous(limits=[0.85,1.0],breaks=list(np.arange(0.85,1.01,0.05))) +
geom_hline(yintercept=0.95,linetype='--'))
gg_coverage
###Output
_____no_output_____
###Markdown
Figure 3 shows that for small sample sizes and small true proportion values ($\pi$) the lower bound for the Gaussian approximation does a poor job at actually containing the true parameter ($\pi$) at a coverage of $1-\alpha$, as would be expected if the distribution were exact. (2) Exact binomial distribution Wilson Method If we want to either....
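As a placeholder sketch for this section, the Wilson score interval is one common alternative to the Gaussian lower bound. The helper below is only an illustration (the name `wilson_interval` is mine and is not used elsewhere in the post):
###Code
# Sketch: Wilson score interval for a single binomial proportion (illustration only).
# Uses the standard Wilson formula; the one-sided critical value matches the post's t_alpha convention.
def wilson_interval(y, n, alpha=0.05):
    z = norm.ppf(1 - alpha)
    phat = y / n
    center = (phat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return center - half, center + half
# Example: Wilson bounds for 40 successes in 100 trials
wilson_interval(40, 100)
###Output
_____no_output_____
###Markdown
Arcsin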
###Code
# p_seq = np.round(np.arange(0.01, 1, 0.01),2)
# n = 1000
# nsim = 2500
# np.random.seed(seed)
# df_arcsin = pd.concat([pd.DataFrame({'phat':[np.random.binomial(n,p)/n for z in range(nsim)],'p':p}) for p in p_seq])
# df_arcsin = df_arcsin.assign(ptrans= lambda x: np.arcsin(np.sqrt(x.phat)))
# df_arcsin_var = df_arcsin.groupby('p').ptrans.std().reset_index().rename(columns={'ptrans':'var_trans'})
# gg_arcsin = (ggplot(df_arcsin_var, aes(x='p',y='var_trans')) + theme_bw() +
# geom_point() +
# geom_hline(yintercept=np.sqrt(1/(4*n)),color='blue',linetype='--') +
# labs(x='True proportion',y='Standard deviation of outcome') +
# ggtitle('Variation in arcsin() transform'))
# gg_arcsin
###Output
_____no_output_____
notebook_for_all_climate_analysis_&_trip_analysis.ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect

# Additional dependencies used later in this notebook
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
inspector = inspect(engine)
column_measurement = inspector.get_columns('measurement')
column_measurement
column_station = inspector.get_columns('station')
column_station
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
one_year = dt.date(2017,8,23) - dt.timedelta(days=365)
one_year
# Design a query to retrieve the last 12 months of precipitation data and plot the results
#results = session.query(Measurement.prcp,Measurement.date).order_by(Measurement.date.desc()).all()
#Measurement of precipiation 12 months prior
result1 = session.query(Measurement.prcp,Measurement.date).\
filter(Measurement.date < '2017-08-23').filter(Measurement.date >= '2016-08-23').\
order_by(Measurement.date).all()
# Calculate the date 1 year ago from the last data point in the database
precipitation = [result[0] for result in result1]
date = [result[1] for result in result1]
# Use Pandas Plotting with Matplotlib to plot the data
result1
precipitation
plt.plot(date, precipitation, color='maroon')
plt.title("Precipitation score for the last 12 months",fontsize=12)
plt.xlabel("Date",fontsize=12)
plt.ylabel("Precipitation",fontsize=12)
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
plt.tight_layout()
plt.savefig("Precipitation scores for the last 12 months.png")
plt.show()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(result1, columns=['Precipitation', 'Date'])
df.set_index('Date', inplace=True)
df.to_csv("Precipation data in the last 12 months.csv")
df.head(10)
# Use Pandas to calculate the summary statistics for the precipitation data
stat = df.describe()
stat.to_csv("Precipitation statistic for the last 12 months.csv")
stat
# Design a query to show how many stations are available in this dataset?
stations = session.query(Measurement.station).distinct().\
order_by(Measurement.station).all()
stations
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
station_result = session.query(Measurement.station, func.count(Measurement.station)).group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
station_result
station_activity_df = pd.DataFrame(station_result, columns = ["Station", "Count of Measurement"])
station_activity_df.set_index('Station', inplace=True)
station_activity_df.to_csv("Station Activity.csv")
# Using the station id from the previous query, calculate the lowest temperature recorded, highest temperature recorded, average temperature of the most active station (in a tuple)
USC00519281 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519281').all()
USC00519397 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519397').all()
USC00513117 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00513117').all()
USC00519523 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00519523').all()
USC00516128 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00516128').all()
USC00514830 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00514830').all()
USC00511918 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00511918').all()
USC00517948 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00517948').all()
USC00518838 = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).filter(Measurement.station == 'USC00518838').all()
#station with most activities - min, max, avg tobs
print(USC00519281)
print(USC00519397)
print(USC00513117)
print(USC00519523)
print(USC00516128)
print(USC00514830)
print(USC00511918)
print(USC00517948)
print(USC00518838)
result2 = session.query(Measurement.tobs, Measurement.date).\
filter(Measurement.station =='USC00519281').\
filter(Measurement.date < '2017-08-23').filter(Measurement.date > '2016-08-23').\
order_by(Measurement.date).all()
# Calculate the date 1 year ago from the last data point in the database
result2
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temp = [result[0] for result in result2]
temp
plt.hist(temp)
plt.xlabel("Temperature")
plt.ylabel("Frequency")
###Output
_____no_output_____
###Markdown

###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
tmin, tavg, tmax = calc_temps('2016-08-13','2016-08-23')[0]
tmin
my_trip = session.query(func.avg(Measurement.tobs),Measurement.date).\
group_by(Measurement.date).filter(Measurement.date <= '2016-08-23').filter(Measurement.date >= '2016-08-13').\
order_by(Measurement.date).all()
trip_temp = [result[0] for result in my_trip]
trip_date = [result[1] for result in my_trip]
my_trip
# Plot the results from your previous query as a bar chart.
plt.bar(trip_date, trip_temp, color= 'mediumseagreen')
plt.title("Trip Avgerage Temperature per day",fontsize=14)
plt.xlabel("Trip Date",fontsize=12)
plt.ylabel("Average Temperature",fontsize=12)
plt.ylim(70, 82)
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelrotation=60 )
#plt.errorbar(tmin, tmax, ecolor='black')
plt.tight_layout()
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
yerr = tmax - tmin
xpos = 1
fig, ax = plt.subplots(figsize=plt.figaspect(2.))
bar = ax.bar(xpos, tmax, yerr=yerr, alpha=0.5, color='coral', align="center")
ax.set(xticks=range(xpos), xticklabels="a", title="Trip Avg Temp", ylabel="Temp (F)")
# fig.autofmt_xdate()
fig.tight_layout()
fig.show()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
###Output
_____no_output_____
###Markdown
Optional Challenge Assignment
###Code
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
my_trip_normal = session.query(func.min(Measurement.tobs),func.avg(Measurement.tobs),func.max(Measurement.tobs),func.strftime("%m-%d", Measurement.date)).\
group_by(Measurement.date).filter(Measurement.date<= '2016-08-23').filter(Measurement.date>= '2016-08-13').\
order_by(Measurement.date).all()
trip_tmin = [result[0] for result in my_trip_normal]
trip_tavg = [result[1] for result in my_trip_normal]
trip_tmax = [result[2] for result in my_trip_normal]
trip_date = [result[3] for result in my_trip_normal]
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
my_trip_normal
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
mytrip_df = pd.DataFrame({
"My Trip Date": trip_date,
"Min Temp ": trip_tmin,
"Avg Temp":trip_tavg,
"Max Temp": trip_tmax
})
mytrip_df.set_index('My Trip Date', inplace=True)
mytrip_df.to_csv("Trip_normal.csv")
mytrip_df
# Plot the daily normals as an area plot with `stacked=False`
mytrip_df.plot(kind='area', stacked=False, x_compat=True, alpha=.2)
plt.tight_layout()
###Output
_____no_output_____
Spark_and_Python_For_Big_Data_with_PySpark/04-Spark_for_Machine_Learning/1-Linear_Rgression/Linear_Regression_Code_Along.ipynb | ###Markdown
Linear Regression Code Along Basically what I do here is examine a dataset with Ecommerce Customer Data for a company's website and mobile app. Then we want to see if we can build a regression model that will predict the customer's yearly spend on the company's product.
###Code
from pyspark.sql import SparkSession
from pyspark.ml.regression import LinearRegression
spark = SparkSession.builder.appName('lr_example').getOrCreate()
data = spark.read.csv('Ecommerce_Customers.csv', inferSchema=True,
header=True)
data.printSchema()
data.describe().show()
for item in data.head(1)[0]:
print(item)
for item in data.head(2)[1]:
print(item)
###Output
[email protected]
4547 Archer CommonDiazchester, CA 06566-8576
DarkGreen
31.92627202636016
11.109460728682564
37.268958868297744
2.66403418213262
392.2049334443264
###Markdown
Setting Up DataFrame for Machine Learning A few things we need to do before Spark can accept the data!- It needs to be in the form of two columns("label","features")- Import VectorAssembler and Vectors
###Code
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
data.columns
# Note: the label column ('Yearly Amount Spent') is excluded from the feature vector;
# including it would leak the target into the features and make the evaluation meaningless.
assembler = VectorAssembler(inputCols=['Avg Session Length', 'Time on App',
                                       'Time on Website', 'Length of Membership'],
                            outputCol='features')
output = assembler.transform(data)
output.printSchema()
output.select('features').show()
output.head(1)
final_data = output.select('features', 'Yearly Amount Spent')
final_data.show()
train_data, test_data = final_data.randomSplit([0.7, 0.3])
train_data.describe().show()
test_data.describe().show()
lr = LinearRegression(labelCol='Yearly Amount Spent')
lr_model = lr.fit(train_data)
test_results = lr_model.evaluate(test_data)
test_results.r2
test_results.rootMeanSquaredError
test_results.residuals.show()
final_data.describe().show()
test_results.r2
unlabeled_data = test_data.select('features')
unlabeled_data.show()
predictions = lr_model.transform(unlabeled_data)
predictions.show()
print("RMSE: {}".format(test_results.rootMeanSquaredError))
print("MSE: {}".format(test_results.meanSquaredError))
###Output
RMSE: 2.3640679527554053e-13
MSE: 5.588817285245134e-26
_notebooks/2022-03-11-FinalProject.ipynb | ###Markdown
"Music Popularity Analysis" > "Analysis of Music Data from the Corgis Music Dataset. Investigating possible correlations between song characteristics and song popularity, and further investigation of artist popularity in respective geographic locations. Date: 3/11/2022"- toc: true- branch: master- badges: true- comments: true- author: Brandon Chi & Alexandra Lansing- categories: [fastpages, jupyter] IntroductionWelcome to our project on music popularity! Music is something that we both grew up with and are very passionate about so we thought it would be interesting to explore the characteristics of popular music and hopefully derive some meaningful conclusions about what makes people so passionate about music and what has made the music industry as big as it is today. This is what guided our initial research question: What characteristics of a song (ex: a song's time signature, tempo, mode) make it so appealing to people? The basis of how we ranked songs was based on the song's 'hotttness' rating that was a float value from -1 to 1. However, upon some statistcal analysis to show correlation between different characteristics and popularity rating, we didn't find anything that convinced us that a significant correlation existed. Out of the song characteristics in our dataset, song duration showed the most telling correlation, where a slight relationship was found between song duration and song/artist popularity. Our exploration of this characteristic is displayed in our methods. From here we knew we needed to expand our investigation past song characteristic data.Upon furthering our analysis, we noticed that within the music CORGIS dataset were entries for the latitude and longitude representing the home of the artist. This inspired us to shift our research question to the scope of cultural representation within the music industry. Are there certain regions around the world that dominate the production of popular music? We are strong believers that music embodies an indvidual's culture and roots and We can see many examples of how an artist's cultural roots and upbringing influence the music that they create. Therefore, we were curious to see if certain areas around the world are not being represented within the industry and with it, their culture. This train of thought led us to our modified research question: Are certain regions around the world dominating the production of popular music and more so are cultures around the world being equally represented within the music indstury?
###Code
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
from shapely.geometry import Point
import geopandas as gpd
from geopandas import GeoDataFrame
from geopy.geocoders import Nominatim
###Output
/opt/conda/lib/python3.9/site-packages/geopandas/_compat.py:106: UserWarning: The Shapely GEOS version (3.9.1-CAPI-1.14.2) is incompatible with the GEOS version PyGEOS was compiled with (3.10.1-CAPI-1.16.0). Conversions between both will be slow.
warnings.warn(
###Markdown
Methods & Results
###Code
musicDF=pd.read_csv('music.csv')
###Output
_____no_output_____
###Markdown
We began our investigation of the dataset by taking a look at the data provided. Our initial goal was to find correlations between the dataset columns, so we explored the entire dataset’s shape, the first 5 rows, and the last 5 rows to see the information provided.
###Code
print("Dimensions of the dataframe")
musicDF.shape
print("First 5 rows: ")
musicDF.head()
print("Last 5 rows: ")
musicDF.tail()
print("Column names")
musicDF.columns
###Output
Column names
###Markdown
It appeared that this dataset provides information about both the song artists and the songs themselves. Interestingly, most columns describe characteristics of the songs (e.g., key, time signature, loudness, tempo). From this finding, we decided to investigate potential correlations between song characteristics and the popularity of songs/artists. Finding relationships between these factors might shed light on how certain aspects of songs make songs and artists more likable.
###Code
musicDF.describe()
###Output
_____no_output_____
###Markdown
Exploring the statistics from the columns of this dataset gives us a better idea of the range of each column's data, and further highlights possible outliers in the set. The standard deviation tells us about the spread of the data provided. As seen in the description above, the standard deviation varies quite significantly across the columns, with some columns measuring around 945 while others measure only about 1.2. We decided to start creating visualizations to explore this spread of data further and find potential correlations.
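To make that spread concrete, one quick check (a sketch; it assumes musicDF loaded as above) is to rank the numeric columns by their standard deviation:

```python
# Rank numeric columns by standard deviation to see which features vary the most
print(musicDF.std(numeric_only=True).sort_values(ascending=False))
```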
###Code
musicDF.info()
musicDF.plot(kind='scatter',x='song.loudness',y='song.hotttnesss', title = 'song hottnesss vs. song loudness');
###Output
_____no_output_____
###Markdown
(Figure 1) This scatter plot has a rather unconventional shape, but it is important to notice that there is a slight hint that higher values of song.loudness are associated with popularity. As you can see, there are songs that have the same loudness as others but are not nearly as popular, so loudness isn't the only characteristic we can base our analysis on; still, for the most part, the more popular songs tend to have a loudness rating in the range [-10, 0].
###Code
musicDF['song.hotttnesss'].plot(kind='box',title='Box Plot of Song hotttnesss')
###Output
_____no_output_____
###Markdown
(Figure 2) In this box plot, it appears that the median measurement of song popularity lies at 0, and the minimum lies at the first quartile, -1. The third quartile sits at about 0.35, and the maximum is 1. It seems as though the first quartile has a larger range than the third quartile. Outliers are expected to be seen in the first-quartile range and also at the maximum.
###Code
X = musicDF[['artist.hotttnesss']]
y = musicDF['song.tempo']
reg = LinearRegression().fit(X, y)
ytrain = reg.intercept_ + reg.coef_ * X
plt.plot(X,y,'ro',X,ytrain,'b-')
plt.title("Linear Regression of song tempo in comparison to artist hotttness rating")
###Output
_____no_output_____
###Markdown
(Figure 3) We ran a linear regression to examine the relationship between artist hotttnesss and the tempo of a song. The fitted line has a slope of almost 0, and the surrounding points don't trend towards a y=x or y=-x shape, making it difficult to conclude that a song's tempo influences the artist hotttnesss rating very much. The nearly constant, y=C-like shape makes us think that we would have to analyze a combination of different song characteristics to find a more convincing correlation. As learned in class, we made a linear regression fit using the LinearRegression class from 'sklearn.linear_model' and the matplotlib.pyplot library. Following a similar method as the assignment, we first created a LinearRegression object using the constructor, with the fit based on the artist.hotttnesss rating and song tempo. Then we computed the fitted values (ytrain) using the regression intercept and regression coefficient. Finally, we plotted the regression using matplotlib along with the fitted line. Notice that there is not much to suggest a positive or negative correlation; the trend of the points and the line is relatively flat, suggesting a neutral relationship. This is just one of the methods that we used to analyze correlation.
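To put a number on how flat this relationship is, a quick follow-up check (a sketch; it uses musicDF, X, y, and reg from the cell above) is to compute the Pearson correlation and the $R^2$ of the fit:

```python
from scipy.stats import pearsonr

# Pearson correlation between artist hotttnesss and song tempo
r, p_value = pearsonr(musicDF['artist.hotttnesss'], musicDF['song.tempo'])
print(f"Pearson r = {r:.3f} (p-value = {p_value:.3g})")

# R^2 of the regression fitted above
print(f"R^2 of the linear fit: {reg.score(X, y):.4f}")
```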
###Code
tempoDF = musicDF.sort_values(by='song.tempo', ascending=False).reset_index()
barData = tempoDF.loc[0:50,['song.tempo','artist.name']]
barData.plot(kind='bar',
x='artist.name',
y='song.tempo',
figsize=(12,5),
ylabel="song tempo",
title="song tempo vs. arist name")
###Output
_____no_output_____
###Markdown
(Figure 4) We first sorted the songs by tempo (a little over 250 seems to be the fastest tempo in this dataset). Then we made a bar plot showing the artist names associated with the fastest songs. This information should be used in combination with the rest of the data: now that we have the artists with the fastest songs, we want to see whether there is some correlation between the speed of a song and the overall popularity ranking. For additional analysis we will run tests on the artists themselves to see if they are popular.
###Code
musicDF.plot(kind='scatter', x='song.hotttnesss', y='song.duration', color='skyblue', title = 'song duration vs song hotttnesss')
###Output
_____no_output_____
###Markdown
(Figure 5) Taking a look now at song duration, we investigated if the length of the songs themselves would affect song and artist popularity. In this scatter plot, we found that there seemed to be several outlying datapoints in the set, so further cleaning was needed.
###Code
musicDF['song.hotttnesss']
music_df_song_h= musicDF[musicDF['song.hotttnesss'] > 0.0]
music_df_song_h.plot(kind='scatter', x='song.hotttnesss', y='song.duration', color='deepskyblue', title = 'song duration vs song hotttnesss')
###Output
_____no_output_____
###Markdown
(Figure 6) We extracted the song hotness measurements that ranged from 0 to 1. In this scatter plot, the rightmost points represent the most popular songs in the dataset. It appears that as song hotness increases, the song duration narrows towards a more specific range of lengths. The "hottest" songs seem to fall between 150 and 300 seconds. This result made sense to us as music listeners, because most of the songs we listen to fall between 3 and 5 minutes. The narrow range of duration of popular songs is an interesting find; perhaps this is because listeners only have so much patience before they become bored of a song (hence the longer-duration songs sitting towards the left of the plot).
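One way to quantify the 150-300 second observation (a sketch; it assumes music_df_song_h from the cell above) is to look at the duration quartiles of the top quarter of songs by hotness:

```python
# Duration quartiles for the top quarter of songs by hotttnesss
cutoff = music_df_song_h['song.hotttnesss'].quantile(0.75)
hottest = music_df_song_h[music_df_song_h['song.hotttnesss'] >= cutoff]
print(hottest['song.duration'].quantile([0.25, 0.5, 0.75]))
```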
###Code
musicDF['artist.hotttnesss']
music_df_song_h.plot(kind='scatter', x='artist.hotttnesss', y='song.duration', color='teal', title = 'song duration vs. artist hotttnesss')
###Output
_____no_output_____
###Markdown
(Figure 7) After finding these results about song duration and song hotness, we wanted to take a further look at how song duration might affect artist hotness. Taking a look at the artist hotness data, all values appeared to fall between 0 and 1.2. We then created a scatter plot to visualize this spread and possible correlations. The correlation between song duration and artist hotness was not as evident as the slight correlation between song duration and song hotness. It appeared that most artists in the middle range of hotness had songs with lengths mostly ranging between 1 and 550 seconds, with some outliers around 1000 and 1700 seconds. The most popular artists towards the right of the plot seem to have created songs in the 150-300 second range, which reflects the result we found earlier about song duration and song hotness. This seems to be an interesting result, but it appears the correlation here is too weak to draw truly significant conclusions.
###Code
# collapse-show
def filterLatLong(d):
latLongDF = d[d['artist.latitude'] != 0.0]
latLongDF = latLongDF[latLongDF['artist.longitude'] != 0.0]
return latLongDF.reset_index()
music_df = filterLatLong(musicDF)
song_df = music_df.sort_values('song.hotttnesss', axis=0, ascending=False, inplace=False).reset_index(drop=True)
artist_df = music_df.sort_values('artist.hotttnesss', axis=0, ascending=False, inplace=False).reset_index(drop=True)
# top 10%
top10PercentSong = song_df.head(int(len(song_df) * .1))
restSong = song_df.tail(int(len(song_df)*.5))
top10PercentArtist = artist_df.head(int(len(song_df) * .1))
restArtist = artist_df.tail(int(len(song_df)*.5))
def createMap(df):
geometry = [Point(xy) for xy in zip(df['artist.longitude'], df['artist.latitude'])]
gdf = GeoDataFrame(df, geometry=geometry)
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
gdf.plot(ax=world.plot(figsize=(10, 6)), marker='o', color='red', markersize=5)
print("Top 10 Percent of Songs")
createMap(top10PercentSong)
###Output
Top 10 Percent of Songs
###Markdown
(Figure 8) Top 10 Percent of Songs on map
###Code
print("Bottom 50 Percent of Songs")
createMap(restSong)
###Output
_____no_output_____
###Markdown
(Figure 9) Bottom 50 Percent of Songs
###Code
print("Top 10 Percent of Artists")
createMap(top10PercentArtist)
###Output
Top 10 Percent of Artists
###Markdown
(Figure 10) Top 10 Percent of Artists
###Code
print("Bottom 50 Percent of Artists")
createMap(restArtist)
###Output
Bottom 50 Percent of Artists
###Markdown
(Figure 11) Bottom 50 Percent of Artists We utilized the shapely.geometry, geopandas, and GeoDataFrame libraries to plot our points on a map. Within this analysis, we separated the maps into two categories: rankings by song popularity and rankings by artist popularity. The key point is that we can look for trends distinguishing the most and least popular music, using the head() and tail() methods built into the pandas DataFrame together with the sort_values() method. The functionality to create the maps was extracted into helper functions for cleaner visualization generation.
###Code
geolocator = Nominatim(user_agent="geoapiExercises")
# hide
def getLocationFromCoordinates(lat,long): #takes in lat and long as a float => converts to string
location = geolocator.reverse(str(lat) + ',' + str(long))
return location
def getStateCountryName():
dFreq={}
for k,v in top10PercentSong.iterrows():
location = getLocationFromCoordinates(v['artist.latitude'],v['artist.longitude'])
if location != None:
splitLocation = location[0].split(',')
stateCountryInd = len(splitLocation)-3
dFreq[splitLocation[stateCountryInd]] = dFreq[splitLocation[stateCountryInd]] + 1 if splitLocation[stateCountryInd] in dFreq else 1
return dFreq
print(getStateCountryName())
###Output
{' District of Columbia': 1, ' Illinois': 12, ' British Columbia': 4, ' Michigan': 8, ' England': 48, ' North Carolina': 4, ' Ontario': 7, ' Surrey County': 1, ' Tennessee': 8, ' New York': 43, ' California': 72, ' France métropolitaine': 5, ' Massachusetts': 5, ' Colorado': 2, ' Alabama': 7, ' Arkansas': 1, ' Skåne län': 3, ' Minnesota': 3, ' Puerto Rico': 2, ' Georgia': 7, 'Montgomery County': 6, ' Pennsylvania': 4, ' Victoria': 3, ' Washington': 6, ' Nova Scotia': 2, ' New Jersey': 8, ' Connecticut': 4, ' Florida': 8, ' Woodlands County': 1, ' Iowa': 2, ' Bahia': 1, ' Sundsvalls kommun': 4, ' Texas': 12, ' Wisconsin': 1, 'Fresno County': 2, ' Leinster': 1, ' Αποκεντρωμένη Διοίκηση Θεσσαλίας - Στερεάς Ελλάδος': 6, ' Saint Michael': 2, ' Oklahoma': 2, ' Northeastern Ontario': 2, ' County Roscommon': 3, ' Ohio': 7, ' Mississippi': 3, ' Mação': 3, ' Missouri': 2, ' Alba / Scotland': 2, ' Oyo': 2, ' Indiana': 1, ' Auvergne-Rhône-Alpes': 1, ' Berlin': 3, 'Yavapai County': 1, ' West Virginia': 2, ' Park County': 1, ' Pohjois-Suomen aluehallintovirasto': 2, ' Stockholms län': 2, ' Manner-Suomi': 2, ' القاهرة': 1, ' Umbria': 1, 'Fremont County': 1, ' Ciudad Autónoma de Buenos Aires': 4, ' Louisiana': 2, ' Grong': 1, 'Cercle de Goundam': 1, ' Virginia': 1, ' South Carolina': 2, ' West Midlands': 1, ' Nordrhein-Westfalen': 1, ' Västra Götalands län': 1}
Image/clamp.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:```conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -ysource activate pydeck-eejupyter nbextension install --sys-prefix --symlink --overwrite --py pydeckjupyter nbextension enable --sys-prefix --py pydeck```then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
view_state = pdk.ViewState(longitude=-121.753, latitude=46.855, zoom=9)
ee_layers.append(EarthEngineLayer(ee_object=image, vis_params={'min':0,'max':4300}))
ee_layers.append(EarthEngineLayer(ee_object=clamped, vis_params={'min':0,'max':4300}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# ee.Image.clamp() example.
# Clamp the values of all bands in an image to lie within the specified range.
# Values below the low value of that range are set to low value, values above
# the high value of that range are set to the high value.
image = ee.Image('CGIAR/SRTM90_V4')
clamped = image.clamp(1000, 2000)
Map.setCenter(-121.753, 46.855, 9)
Map.addLayer(image, {'min': 0, 'max': 4300}, 'Full stretch')
Map.addLayer(clamped, {'min': 0, 'max': 4300}, 'Clamped')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
Max/RBM-ORBM-Single-Models.ipynb | ###Markdown
2 Dimension Square Separation In this notebook I was able to separate two overlapping squares. Log Likelihood We want the log likelihood of producing the dataset $ \mathcal{D} $ for a given image from dreams of the image. Wait, the dreams? Or should I look at the reconstructions? For looking at the RBM I should be able to get away with a reconstruction: $$ LL_{\mathcal{D}} = \sum_{i} v_i \log( \sigma_i) + (1 - v_i) \log(1 - \sigma_i) $$ $$\log P\big(v\big|h_{a}\big) = \begin{cases}\log( \sigma_i) & \text{if $v_i=1$}\\\log(1 - \sigma_i) & \text{if $v_i = 0$}\end{cases}$$
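As a tiny worked example of the formula (not from the original notebook): for $v = (1, 0, 1)$ and reconstruction sigmoids $\sigma = (0.9, 0.2, 0.6)$ the log likelihood is $\log 0.9 + \log 0.8 + \log 0.6 \approx -0.84$. The same check in code:

```python
import numpy as np

v = np.array([1, 0, 1])             # observed visible units
sigma = np.array([0.9, 0.2, 0.6])   # sigmoid activations of the reconstruction
ll = np.sum(v * np.log(sigma) + (1 - v) * np.log(1 - sigma))
print(ll)  # approximately -0.84
```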
###Code
from scipy.special import expit
from rbmpy.rbm import RBM
from rbmpy.sampler import VanillaSampler, PartitionedSampler, ApproximatedSampler, LayerWiseApproxSampler,ApproximatedMulDimSampler, ContinuousSampler
from rbmpy.trainer import VanillaTrainier
from rbmpy.performance import Result
import numpy as np
import rbmpy.datasets, rbmpy.performance, rbmpy.plotter, rbmpy.mnist, pickle, rbmpy.rbm, os, logging, rbmpy.sampler,math
from sklearn.linear_model import Perceptron
from sklearn.neural_network import BernoulliRBM
import rbmpy.plotter as pp
from numpy import newaxis
from collections import Counter
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
logger = logging.getLogger()
# Set the logging level to logging.DEBUG
logger.setLevel(logging.INFO)
%matplotlib inline
from IPython.core.debugger import Tracer; debug_here = Tracer()
# Helper Methods
def squash_images(imgs):
squashed = np.array(imgs)
old_shape = squashed.shape
squashed = squashed.reshape(old_shape[0], old_shape[1] * old_shape[2])
return squashed
def inflate_images(imgs):
inflated = np.array(imgs)
old_shape = inflated.shape
size= math.sqrt(old_shape[1])
inflated = inflated.reshape(old_shape[0], size, size)
return inflated
def gen_square(xy,sq_shape, img_size):
"""Square image starting at i, of sq_size within img_size. i must be < (sq_size + img_size)"""
img = np.zeros(img_size)
x = xy[0]
y = xy[1]
x2 = x + sq_shape[0]
y2 = y + sq_shape[1]
img[x:x2,y:y2] = 1
return img
def gen_training(sq_shape, img_size):
if img_size[0] != img_size[1]:
logger.warn("Unsquashing will not work with none squares yet!")
training = []
for x in range(img_size[0]- 1):
for y in range(img_size[1]-1):
training.append(gen_square((x,y), sq_shape, img_size))
return np.array(training)
def ll_score(v, v_prime):
if v == 1:
return np.log(v_prime)
elif v == 0:
return np.log(1 - v_prime)
else:
raise NotImplementedError()
ll_score = np.vectorize(ll_score)
def evaluate_model(training, model):
    s = VanillaSampler(model)
    results = []
    # score the supplied training set against its (stochastic) reconstruction repeatedly
    for i in range(5000):
        results.append(ll_score(squash_images(training), s.reconstruction_given_visible(squash_images(training), return_sigmoid=True)).sum())
    npr = np.array(results)
    return npr
# return np.median(npr,axis=0), np.min(npr, axis = 0), np.max(npr,axis = 0), np.mean(npr,axis = 0)
def plot_eval(train,model):
# look at the reconstructions
dreams = []
for i in range(16):
dreams.append(s.dream(model).reshape(5,5))
pp.images(np.array(dreams))
# Lets also look at it's weights
pp.images(rbmpy.rbm.weights_into_hiddens(model.weights)[:10], cmap='Greys',title= "Hinton Diagrams",filename="Results/Weights.png")
result = evaluate_model(train,model)
plt.plot(result)
plt.show()
print("mean{:.2f} Worst {:.2f} Best {:.2f}".format( np.mean(result), np.min(result), np.max(result)))
pp.images(inflate_images(squash_images(train) - s.reconstruction_given_visible(squash_images(train))))
train = gen_training((2,2),(5,5))
# np.random.shuffle(train)
pp.images(train, title="Training Set", filename="Results/Training.png")
###Output
_____no_output_____
###Markdown
Train and Evaluate the Traditional Model
###Code
model = RBM(25,25,16)
s = VanillaSampler(model)
t = VanillaTrainier(model, s)
t.train(200, squash_images(train), learning_rate=0.05, use_visible_bias = False)
# plot the 16 centers
plot_eval(train, model)
###Output
_____no_output_____
###Markdown
Make the sampler and random Composite
###Code
help(s.dream)
s = VanillaSampler(model)
dream1 = s.dream(model, num_gibbs = 500)
dream2 = s.dream(model, num_gibbs = 500)
phi_1 = np.dot(s.visible_to_hidden(dream1), model.weights)
phi_2 = np.dot(s.visible_to_hidden(dream2), model.weights)
pp.image(expit(phi_1).reshape(5,5))
pp.image(expit(phi_2).reshape(5,5))
# pp.image((expit(phi_1) + expit(phi_2)).reshape(5,5))
comp = expit(phi_1 + phi_2)
pp.image(comp.reshape(5,5))
orbm_sampler = ApproximatedSampler(model.weights,model.weights,model.hidden_bias, model.hidden_bias)
rand_h = np.random.randint(0,2,size=( model.num_hid()))
left, right = orbm_sampler.v_to_v(rand_h,rand_h, comp)
plt.suptitle("ORBM")
pp.image(left.reshape(5,5))
pp.image(right.reshape(5,5))
rbm_sampler = VanillaSampler(model)
plt.suptitle("RBM")
pp.image(rbm_sampler.reconstruction_given_visible(comp).reshape(5,5))
a = ApproximatedMulDimSampler(model.weights,model.weights, model.hidden_bias,model.hidden_bias)
data = model.visible.copy()
np.random.shuffle(data)
item_one = inflate_images(data)[0]
item_two = inflate_images(data)[1]
composite_v = np.maximum(item_one,item_two )
pp.image(item_one+ item_two,cmap='Paired',show_colorbar=False)
rand_h = np.random.randint(0,2,10)
approx= ApproximatedSampler(model.weights, model.weights, model.hidden_bias, model.hidden_bias)
reconstruction = approx.v_to_v(rand_h,rand_h, composite_v.reshape(25),num_gibbs=500)
pp.image(reconstruction[0].reshape(5,5),show_colorbar=False, title="V'_a")
pp.image(reconstruction[1].reshape(5,5), show_colorbar=False, title = "V'_b" )
pp.image(reconstruction[0].reshape(5,5) + reconstruction[1].reshape(5,5),title="Composite Recon" ,cmap ='Paired',show_colorbar=False)
pp.image(s.reconstruction_given_visible(composite_v.reshape(25)).reshape(5,5),show_colorbar=False)
###Output
_____no_output_____
###Markdown
Make a composite training set
###Code
def gen_composite_training(sq_shape, img_size, static_xy):
training = []
for x in range(img_size[0]-1):
for y in range(img_size[1]-1):
training.append(np.maximum(gen_square((x,y), sq_shape, img_size),gen_square(static_xy, sq_shape, img_size)))
return np.array(training)
comp = gen_composite_training((2,2),(5,5),(1,1))
pp.images(comp)
rand_h = np.random.randint(0,2,35)
approx= ApproximatedSampler(model.weights, model.weights, model.hidden_bias, model.hidden_bias)
for current_img in comp:
reconstruction = approx.v_to_v(rand_h,rand_h,current_img.reshape(25),num_gibbs=1000)
pp.images(np.array([current_img,reconstruction[0].reshape(5,5), reconstruction[1].reshape(5,5), s.reconstruction_given_visible(current_img.reshape(25)).reshape(5,5)]))
###Output
_____no_output_____
notebooks/Solutions/DATAPREP_03a_b_MV_Handling_BasicApproaches_Lab_Solution.ipynb | ###Markdown
Missing value (MV) handling - the basic approaches Perform listwise deletion and feature deletion on the provided data frame x. Reminder: - listwise deletion: all rows containing any MV will be removed - feature deletion: all features/columns containing any MV will be removed
###Code
import pandas as pd
import numpy as np
from sklearn.impute import SimpleImputer
x= pd.DataFrame([[1, 2,0], [np.nan, 3,1], [7, 6,0],[1, 6,0],[np.nan, np.nan,1], [3, np.nan,0]])
print("Original data: \n",x)
#Listwise deletion
x_deleted_rows=x.dropna()
print("\nListwise deletion")
print("Data after deleting rows with missing values: \n",x_deleted_rows)
#Feature deletion
x_deleted_columns=x.dropna(axis=1)
print("\nFeature deletion")
print("Data after deleting features with missing values: \n",x_deleted_columns)
###Output
Original data:
0 1 2
0 1.0 2.0 0
1 NaN 3.0 1
2 7.0 6.0 0
3 1.0 6.0 0
4 NaN NaN 1
5 3.0 NaN 0
Listwise deletion
Data after deleting rows with missing values:
0 1 2
0 1.0 2.0 0
2 7.0 6.0 0
3 1.0 6.0 0
Feature deletion
Data after deleting features with missing values:
2
0 0
1 1
2 0
3 0
4 1
5 0
###Markdown
Univariate feature imputationImputes missing values in a feature using only non-missing values in that feature (and no other features)The SimpleImputer class provides basic strategies for imputing missing values. Missing values can be imputed with a provided constant value, or using the statistics (mean, median or most frequent) of each column in which the missing values are located. This class also allows for different missing values encodings.Based on the provided array x, perform average imputation using the SimpleImputer class and applying different strategies:- mean- median- constant- most frequentFit and transform the data and print the corresponding result for each strategy!
###Code
import numpy as np
from sklearn.impute import SimpleImputer
x= np.array([[1, 2], [np.nan, 3], [7, 6],[1, 6]])
print("Original data: \n",x)
#Average Imputation using strategy='mean'
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
transformed_x=imp.fit_transform(x)
print("Transformed data (mean imputation): \n",transformed_x)
#Average Imputation using strategy='median'
imp = SimpleImputer(missing_values=np.nan, strategy='median')
transformed_x=imp.fit_transform(x)
print("Transformed data (median imputation): \n",transformed_x)
#Average Imputation using strategy='constant'
imp = SimpleImputer(missing_values=np.nan, strategy='constant',fill_value=17)
transformed_x=imp.fit_transform(x)
print("Transformed data (constant imputation): \n",transformed_x)
#Average Imputation using strategy='most_frequent'
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
transformed_x=imp.fit_transform(x)
print("Transformed data (mostfrequent value imputation): \n",transformed_x)
###Output
Original data:
[[ 1. 2.]
[nan 3.]
[ 7. 6.]
[ 1. 6.]]
Transformed data (mean imputation):
[[1. 2.]
[3. 3.]
[7. 6.]
[1. 6.]]
Transformed data (median imputation):
[[1. 2.]
[1. 3.]
[7. 6.]
[1. 6.]]
Transformed data (constant imputation):
[[ 1. 2.]
[17. 3.]
[ 7. 6.]
[ 1. 6.]]
Transformed data (mostfrequent value imputation):
[[1. 2.]
[1. 3.]
[7. 6.]
[1. 6.]]
###Markdown
Univariate imputation can also be applied to string values.Use the SimpleImputer class again in combination with a "most frequent" strategy.You should get the ValueError shown below - and hopefully no other errors ;)Try to fix it!
###Code
#Univariate imputation with string values
#first try
x_string= np.array([["Mike", 2], [np.nan, 3], ["Peter", 6],["Peter", 6]])
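# Without dtype='object', numpy coerces everything to a fixed-width unicode dtype (np.nan is stored as the string 'nan'),
# which SimpleImputer does not support with missing_values=np.nan - hence the ValueError; the second try below fixes this.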
print("Original data: \n",x_string)
#Average Imputation using strategy='most_frequent'
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
transformed_x_string=imp.fit_transform(x_string)
print("Transformed data (mostfrequent value imputation): \n",transformed_x_string)
#Univariate imputation with string values
#second try
x_string= np.array([["Mike", 2], [np.nan, 3], ["Peter", 6],["Peter", 6]], dtype="object")
print("Original data: \n",x_string)
#Average Imputation using strategy='most_frequent'
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
transformed_x_string=imp.fit_transform(x_string)
print("Transformed data (most frequent value imputation): \n",transformed_x_string)
print("Int values keep their type: ",transformed_x_string[1,1]," + 4 = ",transformed_x_string[1,1]+4)
###Output
Original data:
[['Mike' 2]
[nan 3]
['Peter' 6]
['Peter' 6]]
Transformed data (most frequent value imputation):
[['Mike' 2]
['Peter' 3]
['Peter' 6]
['Peter' 6]]
Int values keep their type: 3 + 4 = 7
###Markdown
Use the same array again (including a missing value in the second example) and apply a "constant" imputation, this time replacing the MV with your favourite name!
###Code
#Univariate imputation with string values
#Similarly with constant values
x_string= np.array([["Mike", 2], [np.nan, 3], ["Peter", 6],["Peter", 6]], dtype="object")
print("Original data: \n",x_string)
#Average Imputation using strategy='constant'
imp = SimpleImputer(missing_values=np.nan, strategy='constant',fill_value="Hugo")
transformed_x_string=imp.fit_transform(x_string)
print("Transformed data (constant value imputation): \n",transformed_x_string)
###Output
Original data:
[['Mike' 2]
[nan 3]
['Peter' 6]
['Peter' 6]]
Transformed data (constant value imputation):
[['Mike' 2]
['Hugo' 3]
['Peter' 6]
['Peter' 6]]
|
docs/notebooks/debugging-a-pipeline.ipynb | ###Markdown
Debugging a pipeline
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
`creme` encourages users to make use of pipelines. The biggest pain point of pipelines is that it can be hard to understand what's happening to the data, especially when the pipeline is complex. Fortunately the `Pipeline` class has a `debug_one` method that can help out.Let's look at a fairly complex pipeline for predicting the number of bikes in 5 bike stations from the city of Toulouse. It doesn't matter if you understand the pipeline or not; the point of this notebook is to learn how to introspect a pipeline.
###Code
import datetime as dt
from creme import compose
from creme import datasets
from creme import feature_extraction
from creme import linear_model
from creme import metrics
from creme import preprocessing
from creme import stats
from creme import stream
X_y = datasets.fetch_bikes()
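# simulate_qa replays the stream with delayed labels: each observation first arrives as a 'question' without its
# target and, 30 minutes of simulated time later, as an 'answer' with the target available for training.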
X_y = stream.simulate_qa(X_y, on='moment', lag=dt.timedelta(minutes=30))
def add_time_feature(x):
return {
**x,
'hour': x['moment'].hour,
'day': x['moment'].weekday()
}
model = add_time_feature
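# Transformer union: keep the raw weather features and add two running target aggregates
# (mean per station/hour and an exponentially weighted mean per station) before scaling and regression.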
model |= (
compose.Whitelister('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean()) +
feature_extraction.TargetAgg(by='station', how=stats.EWMean())
)
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
metric = metrics.MAE()
for i, (is_question, x, y) in enumerate(X_y):
# Question
if is_question:
y_pred = model.predict_one(x)
metric.update(y, y_pred)
# Answer
else:
model = model.fit_one(x, y)
if i >= 30000 and i % 30000 == 0:
print(i, metric)
###Output
30000 MAE: 17.579727
60000 MAE: 9.881766
90000 MAE: 7.355954
120000 MAE: 6.100533
150000 MAE: 5.337824
180000 MAE: 4.855611
210000 MAE: 4.477027
240000 MAE: 4.19225
270000 MAE: 3.959311
300000 MAE: 3.809473
330000 MAE: 3.680278
360000 MAE: 3.568833
###Markdown
We can start by looking at what the pipeline looks like by drawing it.
###Code
model.draw()
###Output
_____no_output_____
###Markdown
As mentioned above, the `Pipeline` class has a `debug_one` method. You can use it at any point you want to visualize what happens to an input `x`. For example, let's see what happens to the last seen `x`.
###Code
model.debug_one(x)
###Output
0. Input
--------
moment: 2016-10-05 09:57:18 (datetime)
station: pomme (str)
clouds: 88 (int)
description: overcast clouds (str)
humidity: 84 (int)
pressure: 1017.34 (float)
temperature: 17.45 (float)
wind: 1.95 (float)
1. add_time_feature
-------------------
moment: 2016-10-05 09:57:18 (datetime)
station: pomme (str)
clouds: 88 (int)
description: overcast clouds (str)
humidity: 84 (int)
pressure: 1017.34 (float)
temperature: 17.45 (float)
wind: 1.95 (float)
hour: 9 (int)
day: 2 (int)
2. Transformer union
--------------------
2.0 ['clouds', 'humidity', 'pressure', 'temperature', 'wind']
-------------------------------------------------------------
clouds: 88 (int)
temperature: 17.45 (float)
wind: 1.95 (float)
humidity: 84 (int)
pressure: 1017.34 (float)
2.1 target_mean_by_station_and_hour
-----------------------------------
target_mean_by_station_and_hour: 7.8939597315436245 (float)
2.2 target_ewm_0.5_by_station
-----------------------------
target_ewm_0.5_by_station: 11.803715633226826 (float)
pressure: 1017.34 (float)
clouds: 88 (int)
temperature: 17.45 (float)
target_mean_by_station_and_hour: 7.8939597315436245 (float)
target_ewm_0.5_by_station: 11.803715633226826 (float)
wind: 1.95 (float)
humidity: 84 (int)
3. StandardScaler
-----------------
pressure: 0.049162926633057616 (float)
clouds: 1.5477798751348644 (float)
temperature: -0.5193787632453031 (float)
target_mean_by_station_and_hour: -0.26012478963975966 (float)
target_ewm_0.5_by_station: 0.19213729904519417 (float)
wind: -0.6942615120308012 (float)
humidity: 1.1636526975326305 (float)
4. LinearRegression
-------------------
12.002955119736493
###Markdown
Debugging a pipeline
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
`creme` encourages users to make use of pipelines. The biggest pain point of pipelines is that it can be hard to understand what's happening to the data, especially when the pipeline is complex. Fortunately the `Pipeline` class has a `debug_one` method that can help out.Let's look at a fairly complex pipeline for predicting the number of bikes in 5 bike stations from the city of Toulouse. It doesn't matter if you understand the pipeline or not; the point of this notebook is to learn how to introspect a pipeline.
###Code
import datetime as dt
from creme import compose
from creme import datasets
from creme import feature_extraction
from creme import linear_model
from creme import metrics
from creme import preprocessing
from creme import stats
from creme import stream
X_y = datasets.fetch_bikes()
X_y = stream.simulate_qa(X_y, on='moment', lag=dt.timedelta(minutes=30))
def add_time_features(x):
return {
**x,
'hour': x['moment'].hour,
'day': x['moment'].weekday()
}
model = add_time_features
model |= (
compose.Whitelister('clouds', 'humidity', 'pressure', 'temperature', 'wind') +
feature_extraction.TargetAgg(by=['station', 'hour'], how=stats.Mean()) +
feature_extraction.TargetAgg(by='station', how=stats.EWMean())
)
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
metric = metrics.MAE()
for i, (is_question, x, y) in enumerate(X_y):
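    # Progressive validation: predict each sample while its label is still unknown, then learn from it once it arrives.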
# Question
if is_question:
y_pred = model.predict_one(x)
metric.update(y, y_pred)
# Answer
else:
model = model.fit_one(x, y)
if i >= 30000 and i % 30000 == 0:
print(i, metric)
###Output
30000 MAE: 17.579727
60000 MAE: 9.881766
90000 MAE: 7.355954
120000 MAE: 6.100533
150000 MAE: 5.337824
180000 MAE: 4.855611
210000 MAE: 4.477027
240000 MAE: 4.19225
270000 MAE: 3.959311
300000 MAE: 3.809473
330000 MAE: 3.680278
360000 MAE: 3.568833
###Markdown
We can start by looking at what the pipeline looks like by drawing it.
###Code
model.draw()
###Output
_____no_output_____
###Markdown
As mentioned above, the `Pipeline` class has a `debug_one` method. You can use it at any point you want to visualize what happens to an input `x`. For example, let's see what happens to the last seen `x`.
###Code
model.debug_one(x)
###Output
0. Input
--------
moment: 2016-10-05 09:57:18 (datetime)
station: pomme (str)
clouds: 88 (int)
description: overcast clouds (str)
humidity: 84 (int)
pressure: 1017.34 (float)
temperature: 17.45 (float)
wind: 1.95 (float)
1. add_time_features
--------------------
moment: 2016-10-05 09:57:18 (datetime)
station: pomme (str)
clouds: 88 (int)
description: overcast clouds (str)
humidity: 84 (int)
pressure: 1017.34 (float)
temperature: 17.45 (float)
wind: 1.95 (float)
hour: 9 (int)
day: 2 (int)
2. Transformer union
--------------------
2.0 ['clouds', 'humidity', 'pressure', 'temperature', 'wind']
-------------------------------------------------------------
wind: 1.95 (float)
humidity: 84 (int)
pressure: 1017.34 (float)
clouds: 88 (int)
temperature: 17.45 (float)
2.1 target_mean_by_station_and_hour
-----------------------------------
target_mean_by_station_and_hour: 7.8939597315436245 (float)
2.2 target_ewm_0.5_by_station
-----------------------------
target_ewm_0.5_by_station: 11.803715633226826 (float)
target_ewm_0.5_by_station: 11.803715633226826 (float)
target_mean_by_station_and_hour: 7.8939597315436245 (float)
wind: 1.95 (float)
humidity: 84 (int)
pressure: 1017.34 (float)
clouds: 88 (int)
temperature: 17.45 (float)
3. StandardScaler
-----------------
target_ewm_0.5_by_station: 0.19213729904519417 (float)
target_mean_by_station_and_hour: -0.26012478963975966 (float)
wind: -0.6942615120308012 (float)
humidity: 1.1636526975326305 (float)
pressure: 0.049162926633057616 (float)
clouds: 1.5477798751348644 (float)
temperature: -0.5193787632453031 (float)
4. LinearRegression
-------------------
9.11417211815334 * 0.19213729904519417 (target_ewm_0.5_by_station) +
0.21050435957123156 * -0.26012478963975966 (target_mean_by_station_and_hour) +
0.21462575107627266 * -0.6942615120308012 (wind) +
0.4410345311539121 * 1.1636526975326305 (humidity) +
-0.15489862777915547 * 0.049162926633057616 (pressure) +
-0.24393617308125604 * 1.5477798751348644 (clouds) +
-0.5149019633474102 * -0.5193787632453031 (temperature)
12.002955119736493
|
Best solution (Bronze medal - 100th place)/99-jigsaw-fold1-xlm-roberta-large-best.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil, glob
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-192-ratio-2-upper/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
usecols=['comment_text', 'toxic', 'lang'])
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())
base_data_path = 'fold_1/'
fold_n = 1
# Unzip files
!tar -xvf /kaggle/input/jigsaw-data-split-roberta-192-ratio-2-upper/fold_1.tar.gz
###Output
Train samples: 400830
###Markdown
Model parameters
###Code
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 128,
"EPOCHS": 4,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": None,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Learning rate schedule
###Code
lr_min = 1e-7
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
step_size = len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 1
decay = .9997
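# The schedule (plotted below) ramps up from lr_start to lr_max over the warmup steps,
# then decays exponentially (factor `decay` per step) towards lr_min.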
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps, hold_max_steps,
lr_start, lr_max, lr_min, decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 1e-07 to 9.84e-06 to 1.06e-06
###Markdown
Model
###Code
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
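    # Pool by taking the hidden state of the first token (<s>, XLM-R's equivalent of [CLS]) and feed it to a sigmoid head.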
cls_token = last_hidden_state[:, 0, :]
output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
model = Model(inputs=[input_ids, attention_mask], outputs=output)
return model
###Output
_____no_output_____
###Markdown
Train
###Code
# Load data
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train_int.npy').reshape(x_train.shape[1], 1).astype(np.float32)
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32)
#################### ADD TAIL ####################
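# The *_tail arrays (presumably the last MAX_LEN tokens of each comment) are stacked next to the head crops,
# doubling the number of training sequences, so the label array is duplicated to match.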
x_train = np.hstack([x_train, np.load(base_data_path + 'x_train_tail.npy')])
y_train = np.vstack([y_train, y_train])
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)
# Step functions
@tf.function
def train_step(data_iter):
def train_step_fn(x, y):
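        # Per-replica step: forward pass under GradientTape, binary cross-entropy loss, gradient update, metric accumulation.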
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(step_size):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_auc.update_state(y, probabilities)
valid_loss.update_state(loss)
for _ in tf.range(valid_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
# Train model
with strategy.scope():
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=lambda:
exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps, hold_max_steps, lr_start,
lr_max, lr_min, decay))
loss_fn = losses.binary_crossentropy
train_auc = metrics.AUC()
valid_auc = metrics.AUC()
train_loss = metrics.Sum()
valid_loss = metrics.Sum()
metrics_dict = {'loss': train_loss, 'auc': train_auc,
'val_loss': valid_loss, 'val_auc': valid_auc}
history = custom_fit(model, metrics_dict, train_step, valid_step, train_data_iter, valid_data_iter,
step_size, valid_step_size, config['BATCH_SIZE'], config['EPOCHS'],
config['ES_PATIENCE'], save_last=False)
# model.save_weights('model.h5')
# Make predictions
x_train = np.load(base_data_path + 'x_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy')
train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
k_fold.loc[k_fold[f'fold_{fold_n}'] == 'train', f'pred_{fold_n}'] = np.round(train_preds)
k_fold.loc[k_fold[f'fold_{fold_n}'] == 'validation', f'pred_{fold_n}'] = np.round(valid_preds)
valid_df[f'pred_{fold_n}'] = valid_ml_preds
# Fine-tune on validation set
#################### ADD TAIL ####################
x_valid_ml_tail = np.hstack([x_valid_ml, np.load(database_base_path + 'x_valid_tail.npy')])
y_valid_ml_tail = np.vstack([y_valid_ml, y_valid_ml])
valid_step_size_tail = x_valid_ml_tail.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_ml_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_valid_ml_tail, y_valid_ml_tail,
config['BATCH_SIZE'], AUTO, seed=SEED))
train_ml_data_iter = iter(train_ml_dist_ds)
history_ml = custom_fit(model, metrics_dict, train_step, valid_step, train_ml_data_iter, valid_data_iter,
valid_step_size_tail, valid_step_size, config['BATCH_SIZE'], 1,
config['ES_PATIENCE'], save_last=False)
# Join history
for key in history_ml.keys():
history[key] += history_ml[key]
model.save_weights('model_ml.h5')
# Make predictions
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
valid_df[f'pred_ml_{fold_n}'] = valid_ml_preds
### Delete data dir
shutil.rmtree(base_data_path)
###Output
Train for 5010 steps, validate for 62 steps
EPOCH 1/4
time: 1714.6s loss: 0.2386 auc: 0.9608 val_loss: 0.2656 val_auc: 0.9225
EPOCH 2/4
time: 1519.6s loss: 0.1597 auc: 0.9821 val_loss: 0.2932 val_auc: 0.9180
EPOCH 3/4
time: 1519.4s loss: 0.1406 auc: 0.9860 val_loss: 0.2942 val_auc: 0.9134
EPOCH 4/4
time: 1519.4s loss: 0.1351 auc: 0.9870 val_loss: 0.3031 val_auc: 0.9151
Training finished
Train for 125 steps, validate for 62 steps
EPOCH 1/1
time: 1621.8s loss: 7.0298 auc: 0.9598 val_loss: 0.1199 val_auc: 0.9831
Training finished
###Markdown
Model loss graph
###Code
plot_metrics(history)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
display(evaluate_model_single_fold(k_fold, 1, label_col='toxic_int').style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
train_set = k_fold[k_fold[f'fold_{fold_n}'] == 'train']
validation_set = k_fold[k_fold[f'fold_{fold_n}'] == 'validation']
plot_confusion_matrix(train_set['toxic_int'], train_set[f'pred_{fold_n}'],
validation_set['toxic_int'], validation_set[f'pred_{fold_n}'])
###Output
_____no_output_____
###Markdown
Model evaluation by language
###Code
display(evaluate_model_single_fold_lang(valid_df, 1).style.applymap(color_map))
# ML fine-tunned preds
display(evaluate_model_single_fold_lang(valid_df, 1, pred_col='pred_ml').style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
pd.set_option('max_colwidth', 120)
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
###Output
English validation set
###Markdown
Test set predictions
###Code
x_test = np.load(database_base_path + 'x_test.npy')
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
display(submission.describe())
display(submission.head(10))
###Output
_____no_output_____ |
introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-transfer-learning-highlevel.ipynb | ###Markdown
Image classification transfer learning demo (SageMaker SDK)1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon sagemaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on imagenet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel.To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook* The S3 bucket that you want to use for training and model data* The Amazon sagemaker image classification docker image which need not be changed
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
s3_client = boto3.client("s3")
sess = sagemaker.Session()
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/image/caltech-256/"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-train.rec", "caltech-256-60-train.rec"
)
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-val.rec", "caltech-256-60-val.rec"
)
s3_client.upload_file(
"caltech-256-60-train.rec", output_bucket, output_prefix + "/train_rec/caltech-256-60-train.rec"
)
s3_client.upload_file(
    "caltech-256-60-val.rec",
    output_bucket,
    output_prefix + "/validation_rec/caltech-256-60-val.rec",
)
from sagemaker import image_uris
training_image = image_uris.retrieve(region=sess.boto_region_name, framework="image-classification")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is a [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import boto3
# The algorithm supports four channels (train, validation, train_lst, validation_lst); here we use the two RecordIO channels: train and validation
s3train = f"s3://{output_bucket}/{output_prefix}/train_rec/"
s3validation = f"s3://{output_bucket}/{output_prefix}/validation_rec/"
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sageMaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in distributed settings. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the s3 folder in which the training output is stored
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p3.2xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions,'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as the actual image.* **num_classes**: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
Inference***A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationEvaluate the image through the network for inference. The network outputs class probabilities and typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon sagemaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on imagenet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook* The S3 bucket that you want to use for training and model data* The Amazon sagemaker image classification docker image which need not be changed
###Code
%%time
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = 'ic-transfer-learning'
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'image-classification', repo_version="latest")
print (training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/tutorials/basic/record_io.html) and the other is a [lst format](https://mxnet.incubator.apache.org/how_to/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
###Code
import os
import urllib.request
import boto3
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
def upload_to_s3(channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# # caltech-256
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
upload_to_s3('validation', 'caltech-256-60-val.rec')
upload_to_s3('train', 'caltech-256-60-train.rec')
# The algorithm supports four channels (train, validation, train_lst, validation_lst); here we use the two RecordIO channels: train and validation
s3train = 's3://{}/{}/train/'.format(bucket, prefix)
s3validation = 's3://{}/{}/validation/'.format(bucket, prefix)
# upload the rec files to the train and validation channels
!aws s3 cp caltech-256-60-train.rec $s3train --quiet
!aws s3 cp caltech-256-60-val.rec $s3validation --quiet
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sageMaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in distributed settings. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the s3 folder in which the training output is stored
###Code
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
ic = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
train_volume_size = 50,
train_max_run = 360000,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sess)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions,'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as the actual image.* **num_classes**: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode
###Code
ic.set_hyperparameters(num_layers=18,
use_pretrained_model=1,
image_shape = "3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype='float32')
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(s3train, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3validation, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
Inference***A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator
###Code
ic_classifier = ic.deploy(initial_instance_count = 1,
instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Download test image
###Code
!wget -O /tmp/test.jpg http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg
file_name = '/tmp/test.jpg'
# test image
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationEvaluate the image through the network for inference. The network outputs class probabilities and typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
ic_classifier.content_type = 'application/x-image'
result = json.loads(ic_classifier.predict(payload))
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat', 'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101', 'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101', 'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire', 'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks', 'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor', 'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe', 'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell', 'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern', 'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight', 'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan', 'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose', 'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock', 'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus', 'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass', 'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris', 'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder', 'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning', 'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope', 'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels', 'necktie', 'octopus', 'ostrich', 'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder', 'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card', 'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator', 'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus', 'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music', 'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile', 'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass', 'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan', 'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee', 'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato', 'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops', 'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn', 'vcr', 'video-projector', 'washing-machine', 'watch-101', 'waterfall', 'watermelon', 'welding-mask', 'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101', 'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo (SageMaker SDK)1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon sagemaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on imagenet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel.To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook* The S3 bucket that you want to use for training and model data* The Amazon sagemaker image classification docker image which need not be changed
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
s3_client = boto3.client("s3")
sess = sagemaker.Session()
data_bucket = f"sagemaker-sample-files"
data_prefix = "datasets/image/caltech-256/"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-train.rec", "caltech-256-60-train.rec"
)
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-val.rec", "caltech-256-60-val.rec"
)
s3_client.upload_file(
"caltech-256-60-train.rec", output_bucket, output_prefix + "/train_rec/caltech-256-60-train.rec"
)
s3_client.upload_file(
    "caltech-256-60-val.rec",
    output_bucket,
    output_prefix + "/validation_rec/caltech-256-60-val.rec",
)
from sagemaker import image_uris
training_image = image_uris.retrieve(region=sess.boto_region_name, framework="image-classification")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is a [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import boto3
# The algorithm supports four channels (train, validation, train_lst, validation_lst); here we use the two RecordIO channels: train and validation
s3train = f"s3://{output_bucket}/{output_prefix}/train_rec/"
s3validation = f"s3://{output_bucket}/{output_prefix}/validation_rec/"
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sageMaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in distributed settings. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the s3 folder in which the training output is stored
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions,'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as the actual image.* **num_classes**: This is the number of output classes for the new dataset. Imagenet was trained with 1000 output classes but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
InferenceA trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator.
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationEvaluate the image by running it through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
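# Optional (added sketch, not part of the original example): beyond the single argmax
# class, it can be instructive to look at the top few predictions. This uses only the
# `probabilities` list and `object_categories` defined above.
top_k = 5
top_indices = np.argsort(probabilities)[::-1][:top_k]
for rank, idx in enumerate(top_indices, start=1):
    print(f"{rank}. {object_categories[idx]}: {probabilities[idx]:.4f}")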
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
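# Optional (added, hedged): newer versions of the SageMaker Python SDK also expose
# delete_model() on the predictor to remove the backing model resource; uncomment the
# line below only if your SDK version provides it.
# ic_classifier.delete_model()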
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel.To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook.* The S3 bucket that you want to use for training and model data.* The Amazon SageMaker image classification Docker image, which need not be changed.
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
s3_client = boto3.client("s3")
sess = sagemaker.Session()
data_bucket = "sagemaker-sample-files"
data_prefix = "datasets/image/caltech-256/"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-train.rec", "caltech-256-60-train.rec"
)
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-val.rec", "caltech-256-60-val.rec"
)
s3_client.upload_file(
"caltech-256-60-train.rec", output_bucket, output_prefix + "/train_rec/caltech-256-60-train.rec"
)
# upload the validation RecordIO file to the validation channel prefix
s3_client.upload_file(
    "caltech-256-60-val.rec",
    output_bucket,
    output_prefix + "/validation_rec/caltech-256-60-val.rec",
)
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, "image-classification", repo_version="latest")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is an [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both of these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import boto3
# Four channels: train, validation, train_lst, and validation_lst
s3train = f"s3://{output_bucket}/{output_prefix}/train_rec/"
s3validation = f"s3://{output_bucket}/{output_prefix}/validation_rec/"
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section explains these parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm runs in a distributed setting. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the S3 folder in which the training output is stored.
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', training will be done in mixed-precision mode and will be faster than float32 mode.
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
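# A quick sanity check (added sketch, not part of the original notebook): with these
# settings a single-instance job sees roughly num_training_samples / mini_batch_size
# batches per epoch; in distributed training the effective batch is N * mini_batch_size.
import math
num_training_samples, mini_batch_size, num_instances = 15420, 128, 1
batches_per_epoch = math.ceil(num_training_samples / (num_instances * mini_batch_size))
print(f"Approximate batches per epoch: {batches_per_epoch}")  # ~121 with the values above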
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
InferenceA trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator.
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationEvaluate the image by running it through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel.To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook.* The S3 bucket that you want to use for training and model data.* The Amazon SageMaker image classification Docker image, which need not be changed.
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
sess = sagemaker.Session()
data_bucket = f"jumpstart-cache-prod-{region}"
data_prefix = "1p-notebooks-datasets/caltech"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, "image-classification", repo_version="latest")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is an [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both of these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import os
import urllib.request
import boto3
# Four channels: train, validation, train_lst, and validation_lst
s3train = f"s3://{data_bucket}/{data_prefix}/train_rec/"
s3validation = f"s3://{data_bucket}/{data_prefix}/validation_rec/"
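# Optional sanity check (added sketch, not part of the original notebook): list a few
# objects under the train prefix to confirm the RecordIO data is where we expect it
# before launching training. Assumes the bucket/prefix defined above are readable
# from this account and region.
s3_check = boto3.client("s3")
listing = s3_check.list_objects_v2(Bucket=data_bucket, Prefix=f"{data_prefix}/train_rec/", MaxKeys=5)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])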
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section explains these parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm runs in a distributed setting. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the S3 folder in which the training output is stored.
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', training will be done in mixed-precision mode and will be faster than float32 mode.
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
InferenceA trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator.
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3 = boto3.client("s3")
s3.download_file(data_bucket, data_prefix + "/images/sample_bath_tub_image.jpg", file_name)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationEvaluate the image by running it through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo (SageMaker SDK)1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [Caltech-256 dataset](https://paperswithcode.com/dataset/caltech-256). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel.To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook.* The S3 bucket that you want to use for training and model data.* The Amazon SageMaker image classification Docker image, which need not be changed.
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
s3_client = boto3.client("s3")
sess = sagemaker.Session()
data_bucket = "sagemaker-sample-files"
data_prefix = "datasets/image/caltech-256/"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-train.rec", "caltech-256-60-train.rec"
)
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-val.rec", "caltech-256-60-val.rec"
)
s3_client.upload_file(
"caltech-256-60-train.rec", output_bucket, output_prefix + "/train_rec/caltech-256-60-train.rec"
)
# upload the validation RecordIO file to the validation channel prefix
s3_client.upload_file(
    "caltech-256-60-val.rec",
    output_bucket,
    output_prefix + "/validation_rec/caltech-256-60-val.rec",
)
from sagemaker import image_uris
training_image = image_uris.retrieve(region=sess.boto_region_name, framework="image-classification")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is an [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both of these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import boto3
# Four channels: train, validation, train_lst, and validation_lst
s3train = f"s3://{output_bucket}/{output_prefix}/train_rec/"
s3validation = f"s3://{output_bucket}/{output_prefix}/validation_rec/"
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section explains these parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm runs in a distributed setting. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the S3 folder in which the training output is stored.
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p3.2xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', training will be done in mixed-precision mode and will be faster than float32 mode.
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
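# Optional check (added sketch, not part of the original notebook): inspect the completed
# training job through the low-level boto3 API. The latest_training_job attribute comes
# from the SageMaker Python SDK; exact attribute names can vary between SDK versions.
sm_client = boto3.client("sagemaker")
job_name = ic.latest_training_job.job_name
job_description = sm_client.describe_training_job(TrainingJobName=job_name)
print(job_name, job_description["TrainingJobStatus"])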
###Output
_____no_output_____
###Markdown
InferenceA trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator.
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationEvaluate the image by running it through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
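# Extra check (added sketch, not part of the original example): since the test image comes
# from the "008.bathtub" category, we can also print the probability the model assigns to
# the "bathtub" class specifically and compare it with the argmax class above.
bathtub_probability = probabilities[object_categories.index("bathtub")]
print(f"Probability assigned to 'bathtub': {bathtub_probability:.4f}")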
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel.To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook.* The S3 bucket that you want to use for training and model data.* The Amazon SageMaker image classification Docker image, which need not be changed.
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
s3_client = boto3.client("s3")
sess = sagemaker.Session()
data_bucket = "sagemaker-sample-files"
data_prefix = "datasets/image/caltech-256/"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-train.rec", "caltech-256-60-train.rec"
)
s3_client.download_file(
data_bucket, data_prefix + "caltech-256-60-val.rec", "caltech-256-60-val.rec"
)
s3_client.upload_file(
"caltech-256-60-train.rec", output_bucket, output_prefix + "/train_rec/caltech-256-60-train.rec"
)
# upload the validation RecordIO file to the validation channel prefix
s3_client.upload_file(
    "caltech-256-60-val.rec",
    output_bucket,
    output_prefix + "/validation_rec/caltech-256-60-val.rec",
)
from sagemaker import image_uris
training_image = image_uris.retrieve(region=sess.boto_region_name, framework="image-classification")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images, with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is an [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both of these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import boto3
# Four channels: train, validation, train_lst, and validation_lst
s3train = f"s3://{output_bucket}/{output_prefix}/train_rec/"
s3validation = f"s3://{output_bucket}/{output_prefix}/validation_rec/"
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section explains these parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first are the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm runs in a distributed setting. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the S3 folder in which the training output is stored.
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', training will be done in mixed-precision mode and will be faster than float32 mode.
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
InferenceA trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator.
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = [
"ak47",
"american-flag",
"backpack",
"baseball-bat",
"baseball-glove",
"basketball-hoop",
"bat",
"bathtub",
"bear",
"beer-mug",
"billiards",
"binoculars",
"birdbath",
"blimp",
"bonsai-101",
"boom-box",
"bowling-ball",
"bowling-pin",
"boxing-glove",
"brain-101",
"breadmaker",
"buddha-101",
"bulldozer",
"butterfly",
"cactus",
"cake",
"calculator",
"camel",
"cannon",
"canoe",
"car-tire",
"cartman",
"cd",
"centipede",
"cereal-box",
"chandelier-101",
"chess-board",
"chimp",
"chopsticks",
"cockroach",
"coffee-mug",
"coffin",
"coin",
"comet",
"computer-keyboard",
"computer-monitor",
"computer-mouse",
"conch",
"cormorant",
"covered-wagon",
"cowboy-hat",
"crab-101",
"desk-globe",
"diamond-ring",
"dice",
"dog",
"dolphin-101",
"doorknob",
"drinking-straw",
"duck",
"dumb-bell",
"eiffel-tower",
"electric-guitar-101",
"elephant-101",
"elk",
"ewer-101",
"eyeglasses",
"fern",
"fighter-jet",
"fire-extinguisher",
"fire-hydrant",
"fire-truck",
"fireworks",
"flashlight",
"floppy-disk",
"football-helmet",
"french-horn",
"fried-egg",
"frisbee",
"frog",
"frying-pan",
"galaxy",
"gas-pump",
"giraffe",
"goat",
"golden-gate-bridge",
"goldfish",
"golf-ball",
"goose",
"gorilla",
"grand-piano-101",
"grapes",
"grasshopper",
"guitar-pick",
"hamburger",
"hammock",
"harmonica",
"harp",
"harpsichord",
"hawksbill-101",
"head-phones",
"helicopter-101",
"hibiscus",
"homer-simpson",
"horse",
"horseshoe-crab",
"hot-air-balloon",
"hot-dog",
"hot-tub",
"hourglass",
"house-fly",
"human-skeleton",
"hummingbird",
"ibis-101",
"ice-cream-cone",
"iguana",
"ipod",
"iris",
"jesus-christ",
"joy-stick",
"kangaroo-101",
"kayak",
"ketch-101",
"killer-whale",
"knife",
"ladder",
"laptop-101",
"lathe",
"leopards-101",
"license-plate",
"lightbulb",
"light-house",
"lightning",
"llama-101",
"mailbox",
"mandolin",
"mars",
"mattress",
"megaphone",
"menorah-101",
"microscope",
"microwave",
"minaret",
"minotaur",
"motorbikes-101",
"mountain-bike",
"mushroom",
"mussels",
"necktie",
"octopus",
"ostrich",
"owl",
"palm-pilot",
"palm-tree",
"paperclip",
"paper-shredder",
"pci-card",
"penguin",
"people",
"pez-dispenser",
"photocopier",
"picnic-table",
"playing-card",
"porcupine",
"pram",
"praying-mantis",
"pyramid",
"raccoon",
"radio-telescope",
"rainbow",
"refrigerator",
"revolver-101",
"rifle",
"rotary-phone",
"roulette-wheel",
"saddle",
"saturn",
"school-bus",
"scorpion-101",
"screwdriver",
"segway",
"self-propelled-lawn-mower",
"sextant",
"sheet-music",
"skateboard",
"skunk",
"skyscraper",
"smokestack",
"snail",
"snake",
"sneaker",
"snowmobile",
"soccer-ball",
"socks",
"soda-can",
"spaghetti",
"speed-boat",
"spider",
"spoon",
"stained-glass",
"starfish-101",
"steering-wheel",
"stirrups",
"sunflower-101",
"superman",
"sushi",
"swan",
"swiss-army-knife",
"sword",
"syringe",
"tambourine",
"teapot",
"teddy-bear",
"teepee",
"telephone-box",
"tennis-ball",
"tennis-court",
"tennis-racket",
"theodolite",
"toaster",
"tomato",
"tombstone",
"top-hat",
"touring-bike",
"tower-pisa",
"traffic-light",
"treadmill",
"triceratops",
"tricycle",
"trilobite-101",
"tripod",
"t-shirt",
"tuning-fork",
"tweezer",
"umbrella-101",
"unicorn",
"vcr",
"video-projector",
"washing-machine",
"watch-101",
"waterfall",
"watermelon",
"welding-mask",
"wheelbarrow",
"windmill",
"wine-bottle",
"xylophone",
"yarmulke",
"yo-yo",
"zebra",
"airplanes-101",
"car-side-101",
"faces-easy-101",
"greyhound",
"tennis-shoes",
"toad",
"clutter",
]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3_client.download_file(
"sagemaker-sample-files",
"datasets/image/caltech-256/256_ObjectCategories/008.bathtub/008_0007.jpg",
file_name,
)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationRun the image through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook* The S3 bucket that you want to use for training and model data* The Amazon SageMaker image classification Docker image, which need not be changed
###Code
%%time
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = 'ic-transfer-learning'
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'image-classification', repo_version="latest")
print (training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is a [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/).
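As an aside (not part of the original walkthrough), here is a minimal sketch of what the lst format looks like and how RecordIO files are typically produced with MXNet's `im2rec.py`; the file and folder names below are assumptions for illustration only.
###Code
# Illustration only -- nothing here is needed by this notebook, which downloads prepackaged .rec files below.
# An .lst file is a tab-separated listing of (integer index, numeric class label, relative image path):
example_lst_lines = [
    "0\t0.000000\tak47/001_0001.jpg",
    "1\t0.000000\tak47/001_0002.jpg",
    "2\t1.000000\tamerican-flag/002_0001.jpg",
]
print("\n".join(example_lst_lines))
# im2rec.py can generate such a list and pack the images into RecordIO, roughly like this
# (the prefix and image-root paths are assumptions):
#   python im2rec.py --list --recursive caltech-256-60-train 256_ObjectCategories/
#   python im2rec.py --resize 256 --quality 95 caltech-256-60-train 256_ObjectCategories/
###Output
_____no_output_____
###Markdown
The next cell skips that step and simply downloads the prepackaged RecordIO files and uploads them to the S3 bucket.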
###Code
import os
import urllib.request
import boto3
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.request.urlretrieve(url, filename)
def upload_to_s3(channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
# # caltech-256
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
upload_to_s3('validation', 'caltech-256-60-val.rec')
upload_to_s3('train', 'caltech-256-60-train.rec')
# Two channels: train and validation (recordio files)
s3train = 's3://{}/{}/train/'.format(bucket, prefix)
s3validation = 's3://{}/{}/validation/'.format(bucket, prefix)
# upload the rec files to the train and validation channels
!aws s3 cp caltech-256-60-train.rec $s3train --quiet
!aws s3 cp caltech-256-60-val.rec $s3validation --quiet
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first kind is the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in a distributed setting. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the S3 folder in which the training output is stored
###Code
s3_output_location = 's3://{}/{}/output'.format(bucket, prefix)
ic = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
train_volume_size = 50,
train_max_run = 360000,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sess)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode.
###Code
ic.set_hyperparameters(num_layers=18,
use_pretrained_model=1,
image_shape = "3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype='float32')
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(s3train, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3validation, distribution='FullyReplicated',
content_type='application/x-recordio', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
Inference***A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator
###Code
ic_classifier = ic.deploy(initial_instance_count = 1,
instance_type = 'ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Download test image
###Code
!wget -O /tmp/test.jpg http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg
file_name = '/tmp/test.jpg'
# test image
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationRun the image through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
ic_classifier.content_type = 'application/x-image'
result = json.loads(ic_classifier.predict(payload))
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['ak47', 'american-flag', 'backpack', 'baseball-bat', 'baseball-glove', 'basketball-hoop', 'bat', 'bathtub', 'bear', 'beer-mug', 'billiards', 'binoculars', 'birdbath', 'blimp', 'bonsai-101', 'boom-box', 'bowling-ball', 'bowling-pin', 'boxing-glove', 'brain-101', 'breadmaker', 'buddha-101', 'bulldozer', 'butterfly', 'cactus', 'cake', 'calculator', 'camel', 'cannon', 'canoe', 'car-tire', 'cartman', 'cd', 'centipede', 'cereal-box', 'chandelier-101', 'chess-board', 'chimp', 'chopsticks', 'cockroach', 'coffee-mug', 'coffin', 'coin', 'comet', 'computer-keyboard', 'computer-monitor', 'computer-mouse', 'conch', 'cormorant', 'covered-wagon', 'cowboy-hat', 'crab-101', 'desk-globe', 'diamond-ring', 'dice', 'dog', 'dolphin-101', 'doorknob', 'drinking-straw', 'duck', 'dumb-bell', 'eiffel-tower', 'electric-guitar-101', 'elephant-101', 'elk', 'ewer-101', 'eyeglasses', 'fern', 'fighter-jet', 'fire-extinguisher', 'fire-hydrant', 'fire-truck', 'fireworks', 'flashlight', 'floppy-disk', 'football-helmet', 'french-horn', 'fried-egg', 'frisbee', 'frog', 'frying-pan', 'galaxy', 'gas-pump', 'giraffe', 'goat', 'golden-gate-bridge', 'goldfish', 'golf-ball', 'goose', 'gorilla', 'grand-piano-101', 'grapes', 'grasshopper', 'guitar-pick', 'hamburger', 'hammock', 'harmonica', 'harp', 'harpsichord', 'hawksbill-101', 'head-phones', 'helicopter-101', 'hibiscus', 'homer-simpson', 'horse', 'horseshoe-crab', 'hot-air-balloon', 'hot-dog', 'hot-tub', 'hourglass', 'house-fly', 'human-skeleton', 'hummingbird', 'ibis-101', 'ice-cream-cone', 'iguana', 'ipod', 'iris', 'jesus-christ', 'joy-stick', 'kangaroo-101', 'kayak', 'ketch-101', 'killer-whale', 'knife', 'ladder', 'laptop-101', 'lathe', 'leopards-101', 'license-plate', 'lightbulb', 'light-house', 'lightning', 'llama-101', 'mailbox', 'mandolin', 'mars', 'mattress', 'megaphone', 'menorah-101', 'microscope', 'microwave', 'minaret', 'minotaur', 'motorbikes-101', 'mountain-bike', 'mushroom', 'mussels', 'necktie', 'octopus', 'ostrich', 'owl', 'palm-pilot', 'palm-tree', 'paperclip', 'paper-shredder', 'pci-card', 'penguin', 'people', 'pez-dispenser', 'photocopier', 'picnic-table', 'playing-card', 'porcupine', 'pram', 'praying-mantis', 'pyramid', 'raccoon', 'radio-telescope', 'rainbow', 'refrigerator', 'revolver-101', 'rifle', 'rotary-phone', 'roulette-wheel', 'saddle', 'saturn', 'school-bus', 'scorpion-101', 'screwdriver', 'segway', 'self-propelled-lawn-mower', 'sextant', 'sheet-music', 'skateboard', 'skunk', 'skyscraper', 'smokestack', 'snail', 'snake', 'sneaker', 'snowmobile', 'soccer-ball', 'socks', 'soda-can', 'spaghetti', 'speed-boat', 'spider', 'spoon', 'stained-glass', 'starfish-101', 'steering-wheel', 'stirrups', 'sunflower-101', 'superman', 'sushi', 'swan', 'swiss-army-knife', 'sword', 'syringe', 'tambourine', 'teapot', 'teddy-bear', 'teepee', 'telephone-box', 'tennis-ball', 'tennis-court', 'tennis-racket', 'theodolite', 'toaster', 'tomato', 'tombstone', 'top-hat', 'touring-bike', 'tower-pisa', 'traffic-light', 'treadmill', 'triceratops', 'tricycle', 'trilobite-101', 'tripod', 't-shirt', 'tuning-fork', 'tweezer', 'umbrella-101', 'unicorn', 'vcr', 'video-projector', 'washing-machine', 'watch-101', 'waterfall', 'watermelon', 'welding-mask', 'wheelbarrow', 'windmill', 'wine-bottle', 'xylophone', 'yarmulke', 'yo-yo', 'zebra', 'airplanes-101', 'car-side-101', 'faces-easy-101', 'greyhound', 'tennis-shoes', 'toad', 'clutter']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____
###Markdown
Image classification transfer learning demo1. [Introduction](Introduction)2. [Prerequisites and Preprocessing](Prerequisites-and-Preprocessing)3. [Fine-tuning the Image classification model](Fine-tuning-the-Image-classification-model)4. [Training parameters](Training-parameters)5. [Start the training](Start-the-training)6. [Inference](Inference) IntroductionWelcome to our end-to-end example of the distributed image classification algorithm in transfer learning mode. In this demo, we will use the Amazon SageMaker image classification algorithm in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to learn to classify a new dataset. In particular, the pre-trained model will be fine-tuned using the [caltech-256 dataset](http://www.vision.caltech.edu/Image_Datasets/Caltech256/). This notebook was tested in Amazon SageMaker Studio on an ml.t3.medium instance with the Python 3 (Data Science) kernel. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. Prerequisites and Preprocessing Permissions and environment variablesHere we set up the linkage and authentication to AWS services. There are three parts to this:* The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook* The S3 bucket that you want to use for training and model data* The Amazon SageMaker image classification Docker image, which need not be changed
###Code
%%time
import boto3
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
print(role)
region = boto3.Session().region_name
sess = sagemaker.Session()
data_bucket = f"jumpstart-cache-prod-{region}"
data_prefix = "1p-notebooks-datasets/caltech"
output_bucket = sess.default_bucket()
output_prefix = "ic-transfer-learning"
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, "image-classification", repo_version="latest")
print(training_image)
###Output
_____no_output_____
###Markdown
Fine-tuning the Image classification modelThe caltech 256 dataset [1] consists of images from 257 categories (the last one being a clutter category) and has 30k images with a minimum of 80 images and a maximum of about 800 images per category. The image classification algorithm can take two types of input formats. The first is a [recordio format](https://mxnet.incubator.apache.org/faq/recordio.html) and the other is a [lst format](https://mxnet.incubator.apache.org/faq/recordio.html?highlight=im2rec). Files for both these formats are available at http://data.dmlc.ml/mxnet/data/caltech-256/. In this example, we will use the recordio format for training and use the training/validation split [specified here](http://data.dmlc.ml/mxnet/data/caltech-256/). Data in this notebook was downloaded from [MXNet's caltech-256 training dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec) and [MXNet's caltech-256 validation dataset](http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec) and stored in the `data_bucket`.>[1] Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
###Code
import os
import urllib.request
import boto3
# Two channels: train and validation (recordio files)
s3train = f"s3://{data_bucket}/{data_prefix}/train_rec/"
s3validation = f"s3://{data_bucket}/{data_prefix}/validation_rec/"
###Output
_____no_output_____
###Markdown
Once we have the data available in the correct format for training, the next step is to actually train the model using the data. Before training the model, we need to set up the training parameters. The next section will explain the parameters in detail. TrainingNow that we are done with all the setup that is needed, we are ready to train our image classifier. To begin, let us create a ``sagemaker.estimator.Estimator`` object. This estimator will launch the training job. Training parametersThere are two kinds of parameters that need to be set for training. The first kind is the parameters for the training job. These include:* **Training instance count**: This is the number of instances on which to run the training. When the number of instances is greater than one, the image classification algorithm will run in a distributed setting. * **Training instance type**: This indicates the type of machine on which to run the training. Typically, we use GPU instances for this training. * **Output path**: This is the S3 folder in which the training output is stored
###Code
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"
ic = sagemaker.estimator.Estimator(
training_image,
role,
instance_count=1,
instance_type="ml.p2.xlarge",
train_volume_size=50,
train_max_run=360000,
input_mode="File",
output_path=s3_output_location,
sagemaker_session=sess,
)
###Output
_____no_output_____
###Markdown
Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:* **num_layers**: The number of layers (depth) for the network. We use 18 in this sample, but other values such as 50 or 152 can be used.* **use_pretrained_model**: Set to 1 to use a pretrained model for transfer learning.* **image_shape**: The input image dimensions, 'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be the same as in the actual image.* **num_classes**: This is the number of output classes for the new dataset. ImageNet was trained with 1000 output classes, but the number of output classes can be changed for fine-tuning. For caltech, we use 257 because it has 256 object categories + 1 clutter class.* **num_training_samples**: This is the total number of training samples. It is set to 15420 for the caltech dataset with the current split.* **mini_batch_size**: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size, where N is the number of hosts on which training is run.* **epochs**: Number of training epochs.* **learning_rate**: Learning rate for training.* **precision_dtype**: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode.
###Code
ic.set_hyperparameters(
num_layers=18,
use_pretrained_model=1,
image_shape="3,224,224",
num_classes=257,
num_training_samples=15420,
mini_batch_size=128,
epochs=2,
learning_rate=0.01,
precision_dtype="float32",
)
###Output
_____no_output_____
###Markdown
Input data specificationSet the data type and channels used for training
###Code
train_data = sagemaker.session.s3_input(
s3train,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
validation_data = sagemaker.session.s3_input(
s3validation,
distribution="FullyReplicated",
content_type="application/x-recordio",
s3_data_type="S3Prefix",
)
data_channels = {"train": train_data, "validation": validation_data}
###Output
_____no_output_____
###Markdown
Start the trainingStart training by calling the fit method in the estimator
###Code
ic.fit(inputs=data_channels, logs=True)
###Output
_____no_output_____
###Markdown
Inference***A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class of the image. You can deploy the created model by using the deploy method in the estimator
###Code
ic_classifier = ic.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
List of object categories
###Code
object_categories = ["ak47", "american-flag", "backpack", "baseball-bat", "baseball-glove", "basketball-hoop", "bat", "bathtub", "bear", "beer-mug", "billiards", "binoculars", "birdbath", "blimp", "bonsai-101", "boom-box", "bowling-ball", "bowling-pin", "boxing-glove", "brain-101", "breadmaker", "buddha-101", "bulldozer", "butterfly", "cactus", "cake", "calculator", "camel", "cannon", "canoe", "car-tire", "cartman", "cd", "centipede", "cereal-box", "chandelier-101", "chess-board", "chimp", "chopsticks", "cockroach", "coffee-mug", "coffin", "coin", "comet", "computer-keyboard", "computer-monitor", "computer-mouse", "conch", "cormorant", "covered-wagon", "cowboy-hat", "crab-101", "desk-globe", "diamond-ring", "dice", "dog", "dolphin-101", "doorknob", "drinking-straw", "duck", "dumb-bell", "eiffel-tower", "electric-guitar-101", "elephant-101", "elk", "ewer-101", "eyeglasses", "fern", "fighter-jet", "fire-extinguisher", "fire-hydrant", "fire-truck", "fireworks", "flashlight", "floppy-disk", "football-helmet", "french-horn", "fried-egg", "frisbee", "frog", "frying-pan", "galaxy", "gas-pump", "giraffe", "goat", "golden-gate-bridge", "goldfish", "golf-ball", "goose", "gorilla", "grand-piano-101", "grapes", "grasshopper", "guitar-pick", "hamburger", "hammock", "harmonica", "harp", "harpsichord", "hawksbill-101", "head-phones", "helicopter-101", "hibiscus", "homer-simpson", "horse", "horseshoe-crab", "hot-air-balloon", "hot-dog", "hot-tub", "hourglass", "house-fly", "human-skeleton", "hummingbird", "ibis-101", "ice-cream-cone", "iguana", "ipod", "iris", "jesus-christ", "joy-stick", "kangaroo-101", "kayak", "ketch-101", "killer-whale", "knife", "ladder", "laptop-101", "lathe", "leopards-101", "license-plate", "lightbulb", "light-house", "lightning", "llama-101", "mailbox", "mandolin", "mars", "mattress", "megaphone", "menorah-101", "microscope", "microwave", "minaret", "minotaur", "motorbikes-101", "mountain-bike", "mushroom", "mussels", "necktie", "octopus", "ostrich", "owl", "palm-pilot", "palm-tree", "paperclip", "paper-shredder", "pci-card", "penguin", "people", "pez-dispenser", "photocopier", "picnic-table", "playing-card", "porcupine", "pram", "praying-mantis", "pyramid", "raccoon", "radio-telescope", "rainbow", "refrigerator", "revolver-101", "rifle", "rotary-phone", "roulette-wheel", "saddle", "saturn", "school-bus", "scorpion-101", "screwdriver", "segway", "self-propelled-lawn-mower", "sextant", "sheet-music", "skateboard", "skunk", "skyscraper", "smokestack", "snail", "snake", "sneaker", "snowmobile", "soccer-ball", "socks", "soda-can", "spaghetti", "speed-boat", "spider", "spoon", "stained-glass", "starfish-101", "steering-wheel", "stirrups", "sunflower-101", "superman", "sushi", "swan", "swiss-army-knife", "sword", "syringe", "tambourine", "teapot", "teddy-bear", "teepee", "telephone-box", "tennis-ball", "tennis-court", "tennis-racket", "theodolite", "toaster", "tomato", "tombstone", "top-hat", "touring-bike", "tower-pisa", "traffic-light", "treadmill", "triceratops", "tricycle", "trilobite-101", "tripod", "t-shirt", "tuning-fork", "tweezer", "umbrella-101", "unicorn", "vcr", "video-projector", "washing-machine", "watch-101", "waterfall", "watermelon", "welding-mask", "wheelbarrow", "windmill", "wine-bottle", "xylophone", "yarmulke", "yo-yo", "zebra", "airplanes-101", "car-side-101", "faces-easy-101", "greyhound", "tennis-shoes", "toad", "clutter"]
###Output
_____no_output_____
###Markdown
Download test image
###Code
# test image
file_name = "/tmp/test.jpg"
s3 = boto3.client("s3")
s3.download_file(data_bucket, data_prefix + "/images/sample_bath_tub_image.jpg", file_name)
from IPython.display import Image
Image(file_name)
###Output
_____no_output_____
###Markdown
EvaluationRun the image through the network for inference. The network outputs class probabilities and, typically, one selects the class with the maximum probability as the final class output.**Note:** The output class detected by the network may not be accurate in this example. To limit the time taken and cost of training, we have trained the model only for a couple of epochs. If the network is trained for more epochs (say 20), then the output class will be more accurate.
###Code
import json
import numpy as np
with open(file_name, "rb") as f:
payload = f.read()
payload = bytearray(payload)
# Invoke the deployed model to compute prediction
prediction = ic_classifier.predict(payload, initial_args={"ContentType": "application/x-image"})
# prediction is a JSON string. Load it into a Python object.
probabilities = json.loads(prediction)
# find the class with maximum probability and print the class index
predicted_category_index = np.argmax(probabilities)
predicted_category_name = object_categories[predicted_category_index]
confidence = probabilities[predicted_category_index]
print(f"Result: label - {predicted_category_name}, probability - {confidence}")
###Output
_____no_output_____
###Markdown
Clean upWhen we're done with the endpoint, we can just delete it and the backing instances will be released. Run the following cell to delete the endpoint.
###Code
ic_classifier.delete_endpoint()
###Output
_____no_output_____ |
graphs/all graphs.ipynb | ###Markdown
Trial
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
Correlation testing + heatmaps
###Code
df = pd.read_csv("allyear.csv")
df.drop(['county'],axis=1,inplace=True)
df.reset_index()
df
corr = df.corr()
corr
plt.figure(figsize=(12, 12))
# plot the correlation heatmap
#sns.heatmap(corr, vmin=-1, vmax=1, cmap='coolwarm')
sns.heatmap(corr, vmin=-1, vmax=1, cmap='RdBu')
###Output
_____no_output_____
###Markdown
metric cutoff testing
###Code
array = ['population','unemployedRate','povertyRate','medianIncome','avgmealmon','homeless','pplweekmon','drivealone','carpooled','publicTrans','walked','meanHouseIncome','yeshealth','privatehealth','publichealth','nohealth']
combos=[]
opvars = []
removed = []
x=0
for i in array[x:]:
for j in array[x+1:]:
column_1 = df[i]
column_2 = df[j]
combos.append((i,j))
x=x+1
for i in combos:
c1=df[i[0]]
c2=df[i[1]]
corr = abs(c1.corr(c2))
if corr > 0.95:
snapcol = df['POP_SNAP']
corr_snap1 = abs(c1.corr(snapcol))
corr_snap2 = abs(c2.corr(snapcol))
if (corr_snap2>corr_snap1 and i[1] not in opvars):
if (i[0] not in opvars):
opvars.append(i[1])
else:
index = opvars.index(i[0])
opvars[index] = i[1]
elif (corr_snap1>corr_snap2 and i[0] not in opvars):
if (i[1] not in opvars):
opvars.append(i[0])
else:
index = opvars.index(i[1])
opvars[index] = i[0]
print(opvars)
opvars = ['population', 'privatehealth', 'medianIncome', 'yeshealth']
opvars.append('POP_SNAP')
###Output
_____no_output_____
###Markdown
Developing metric using multiple linear regression
###Code
df = df[opvars]
df
###Output
_____no_output_____
###Markdown
training model
###Code
x = df.drop(['POP_SNAP'],axis=1).values
y = df['POP_SNAP'].values
# splitting dataset into training and test set
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.3, random_state=0)
# training model on training set
ml = LinearRegression()
ml.fit(x_train,y_train)
# predict test set results
y_pred=ml.predict(x_test)
print(y_pred)
# testing one value
#ml.predict([[106540,22656,5923]])
# scatterplot of results
plt.figure(figsize=(12,10))
plt.scatter(y_test,y_pred)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.title('Actual vs Predicted')
# define our input
X2 = sm.add_constant(x)
# create a OLS model
model = sm.OLS(y, X2)
# fit the data
est = model.fit()
print(est.summary())
import pylab
# check for the normality of the residuals
sm.qqplot(est.resid, line='s')
pylab.show()
# also check that the mean of the residuals is approx. 0.
mean_residuals = sum(est.resid)/ len(est.resid)
print("The mean of the residuals is {:.4}".format(mean_residuals))
###Output
_____no_output_____
###Markdown
testing 2020 values to predict SNAP data
###Code
df20 = pd.read_csv('mod4-2020.csv')
df20
pred20=[]
for index, row in df20.iterrows():
pred = int(ml.predict([[row['population'], row['privatehealth'], row['medianIncome'], row['yeshealth']]]))
pred20.append(pred)
print(pred20)
predequation = []
for index, row in df20.iterrows():
equation = int(11630-4672+0.1360*row['population']-0.6704*row['privatehealth']-0.1119*row['medianIncome']+0.4657*row['yeshealth'])
predequation.append(equation)
print(predequation)
snap20 =[]
for i in df20['POP_SNAP']:
snap20.append(i)
errors=[]
j=0
while j<len(pred20):
err = abs((pred20[j]-snap20[j])/snap20[j])
errors.append(err)
j=j+1
avgerr = sum(errors)/len(errors)
print(avgerr)
residuals = []
k=0
while k<len(pred20):
residual = snap20[k]-pred20[k]
residuals.append(residual)
k=k+1
avgres = sum(residuals)/len(pred20)
print(avgres)
df20['snapPred20'] = pred20
df20
###Output
_____no_output_____
###Markdown
bar graph pred vs actual 2020 SNAP
###Code
# set width of bars
plt.figure(figsize=(70, 28))
barWidth = 0.27
#Set position of bar on X axis
r1 = np.arange(len(pred20))
r2 = [x + barWidth for x in r1]
# Make the plot
plt.bar(r1, pred20, color='#08366A', width=barWidth, edgecolor='white', label='prediction')
plt.bar(r2, snap20, color='#EA8E70', width=barWidth, edgecolor='white', label='actual')
# Add xticks on the middle of the group bars
plt.ylabel('number of snap recipients', fontweight='bold', fontsize=50)
plt.yticks(fontsize=40)
plt.xlabel('counties', fontweight='bold', fontsize=50)
plt.xticks([r + barWidth for r in range(len(pred20))], ['Atlantic', 'Bergen', 'Burlington','Camden','Cape May',
'Cumberland','Essex','Gloucester','Hudson','Hunterdon',
'Mercer','Middlesex','Monmouth','Morris','Ocean','Passaic',
'Salem','Somerset','Sussex','Union','Warren'], fontsize=20)
plt.xticks(fontsize=28)
# Create legend & Show graphic
plt.legend(fontsize=40)
plt.title('2020 SNAP predicted vs actual', fontsize=70)
plt.show()
###Output
_____no_output_____
###Markdown
testing 2021 data to predict SNAP data
###Code
df21 = pd.read_csv('mod4-2021.csv')
df21
pred21=[]
for index, row in df21.iterrows():
pred = int(ml.predict([[row['pop2021'], row['privatehealth'], row['medianIncome'], row['yeshealth']]]))
pred21.append(pred)
print(pred21)
snap21 =[]
for i in df21['POP_SNAP']:
snap21.append(i)
errors=[]
j=0
while j<len(pred21):
err = abs((pred21[j]-snap21[j])/snap21[j])
errors.append(err)
j=j+1
avgerr = sum(errors)/len(errors)
print(avgerr)
df21['snapPred21'] = pred21
df21
pop21 =[]
for i in df21['pop2021']:
pop21.append(i)
percent = []
for m in range(len(pop21)):
percent.append((snap21[m]/pop21[m])*100)
df21['percent'] = percent
df21
###Output
_____no_output_____
###Markdown
Choropleth Map of 2021 data using GeoPandas and Matplotlib
###Code
# racial demographics per county
import geopandas as gpd
df21.head()
file_path = 'County_Boundaries/County_Boundaries_of_NJ.shp'
map_df = gpd.read_file(file_path)
map_df.plot();
race = pd.read_csv('race19.csv')
race
merged = map_df.set_index('COUNTY').join(race.set_index('county'))
merged.head()
merged['percent'] = percent
counties = ['Atlantic', 'Bergen', 'Burlington','Camden','Cape May','Cumberland','Essex','Gloucester','Hudson','Hunterdon','Mercer','Monmouth','Ocean','Passaic','Salem','Somerset','Sussex','Union','Warren','Morris','Middlesex']
merged['counties'] = counties
merged
merged['percent']
merged['coords'] = merged['geometry'].apply(lambda x: x.centroid.coords[:])
merged['coords'] = [coords[0] for coords in merged['coords']]
# selects variable/feature of the map
feature = 'percent'
# minimal and maximal values of linear colormap
vmin, vmax = 0, 20
fig, ax = plt.subplots(1, figsize=(30, 25))
# plot map with the feature, indicate customizables
merged.plot(column=feature, cmap='Blues', linewidth=1.0, ax=ax, edgecolor='0.5')
ax.set_title('2021 Percent Distribution of Food Insecurity', fontsize=20)
ax.axis('off')
# data normalization before returning RGBA colors from the given colormap
sm = plt.cm.ScalarMappable(cmap='Blues', norm=plt.Normalize(vmin=vmin, vmax=vmax))
#sm._A = []
cbar = fig.colorbar(sm);
# label counties; the three darkest counties (taken from the commented-out selection that used to be here,
# so this particular list is an assumption) get white text in the second loop so the labels stay readable
subset = merged.loc[['CUMBERLAND', 'HUDSON', 'SOMERSET']]
for _, row in merged.drop(subset.index).iterrows():
plt.text(s=(row['counties']), x = row['coords'][0], y = row['coords'][1],
horizontalalignment='center', fontdict = {'size': 12, 'weight':'bold'}, color='black')
for _, row in subset.iterrows():
plt.text(s=(row['counties']), x = row['coords'][0], y = row['coords'][1],
horizontalalignment='center', fontdict = {'size': 12, 'weight':'bold'}, color='white')
###Output
_____no_output_____
###Markdown
line graph of 2015-2021 for Bergen, Camden, Essex, and Sussex counties
###Code
# sample df
bergen = pd.read_csv('bergen.csv')
bergen
# reading csv files and specifying variables
bg=pd.DataFrame({'year': bergen['year'], 'snap': bergen['POP_SNAP']})
camden = pd.read_csv('camden.csv')
cm=pd.DataFrame({'year': camden['year'], 'snap': camden['POP_SNAP']})
essex = pd.read_csv('essex.csv')
ex=pd.DataFrame({'year': essex['year'], 'snap': essex['POP_SNAP']})
sussex = pd.read_csv('sussex.csv')
sx=pd.DataFrame({'year': sussex['year'], 'snap': sussex['POP_SNAP']})
# initialize a figure
plt.figure(figsize=(15, 15))
# do a 2x2 chart
plt.subplot(221)
plt.plot('year', 'snap', data=bg[bg['year'] <= 2020], marker='o', color='orange',linewidth=2, label='actual')
plt.plot('year', 'snap', data=bg[bg['year'] >= 2020], marker='o', color='blue', linewidth=2, linestyle = '--', label='prediction')
plt.title('Bergen', fontsize=15, loc='left', style='italic')
plt.subplot(222)
plt.plot('year', 'snap', data=cm[cm['year'] <= 2020], marker='o', color='orange',linewidth=2, label='actual')
plt.plot('year', 'snap', data=cm[cm['year'] >= 2020], marker='o', color='blue', linewidth=2, linestyle = '--', label='prediction')
plt.title('Camden', fontsize=15, loc='left', style='italic')
plt.legend(fontsize=15)
plt.subplot(223)
plt.plot('year', 'snap', data=ex[ex['year'] <= 2020], marker='o', color='orange',linewidth=2, label='actual')
plt.plot('year', 'snap', data=ex[ex['year'] >= 2020], marker='o', color='blue', linewidth=2, linestyle = '--', label='prediction')
plt.title('Essex', fontsize=15, loc='left', style='italic')
plt.subplot(224)
plt.plot('year', 'snap', data=sx[sx['year'] <= 2020], marker='o', color='orange',linewidth=2, label='actual')
plt.plot('year', 'snap', data=sx[sx['year'] >= 2020], marker='o', color='blue', linewidth=2, linestyle = '--', label='prediction')
plt.title('Sussex', fontsize=15, loc='left', style='italic')
# adding a figure-level title:
plt.suptitle('Counties')
# showing the graph
plt.show()
###Output
_____no_output_____
###Markdown
plotting race distributions in NJ
###Code
race = pd.read_csv('race2.csv')
race
cm = sns.light_palette('teal', as_cmap=True)
corace = race.style.background_gradient(cmap=cm, low=0, high=1, axis=0)
corace
race['perSNAP']=percent
race
racecorr = race.corr()
racecorr
plt.figure(figsize=(9, 8))
sns.heatmap(racecorr, vmin=-1, vmax=1, cmap='RdBu', annot=True)
###Output
_____no_output_____ |
talks/uc2017/Mapping, Visualization, and Analysis Using ArcGIS API for Python/Mapping, Visualization, and Analysis Using ArcGIS API for Python.ipynb | ###Markdown
Mapping, Visualization and Analysis using ArcGIS API for Python Atma Mani Rohit Singh ArcGIS API for Python A quick introduction * Python API to your Web GIS * Powerful, modern and easy to use * Implemented using REST + local capabilities API Overview A Pythonic platform for geospatial analysis It all starts with your GIS
###Code
from arcgis.gis import GIS
from getpass import getpass
gis = GIS('https://deldev.maps.arcgis.com', 'deldev', getpass())
###Output
········
###Markdown
Mapping* Map Widget* WebMap* WebScene Map widget
###Code
m = gis.map('San Diego')
m
###Output
_____no_output_____
###Markdown
Map widget properties
###Code
m
m.zoom
m.zoom = 15
m.extent
from arcgis.geocoding import geocode
redlands = geocode('Redlands, CA')[0]
m
m.extent = redlands['extent']
###Output
_____no_output_____
###Markdown
In-built basemaps
###Code
m
import time
for basemap in m.basemaps:
print(basemap)
m.basemap = basemap
time.sleep(2)
m.basemaps
###Output
_____no_output_____
###Markdown
Searching for layers
###Code
items = gis.content.search('San Diego')
for item in items:
display(item)
trolley_stations = items[1]
sd_attractions = items[0]
landsat_item = gis.content.search('Landsat Multispectral',
'Imagery Layer', outside_org=True)[0]
landsat_item
###Output
_____no_output_____
###Markdown
Adding layers
###Code
sdmap = gis.map('San Diego', zoomlevel=12)
sdmap
sdmap.add_layer(sd_attractions)
sdmap.add_layer(trolley_stations)
sdmap.add_layer(landsat_item)
###Output
_____no_output_____
###Markdown
Drawing on map
###Code
m = gis.map('Redlands, CA')
m
m.draw(redlands['location'])
###Output
_____no_output_____
###Markdown
Popups
###Code
m = gis.map('Redlands, CA')
m
m.draw(redlands['location'], popup={'title': 'Redlands, CA',
'content': 'City of Redlands'})
###Output
_____no_output_____
###Markdown
Popups - rich content
###Code
m = gis.map('Redlands, CA')
m
url = 'https://upload.wikimedia.org/wikipedia/en/thumb/6/6e/Esri_logo.svg/1280px-Esri_logo.svg.png'
m.draw(redlands['location'],
popup={'title': 'Esri Headquarters',
'content': "<img src='{}' width='240px'/>".format(url)})
###Output
_____no_output_____
###Markdown
Symbology
###Code
m = gis.map('Redlands, CA')
m
finish_symbol = {"angle":0,
"xoffset":12,
"yoffset":12,
"type":"esriPMS",
"url":"http://static.arcgis.com/images/Symbols/Basic/CheckeredFlag.png",
"contentType":"image/png",
"width":24,
"height":24} # See https://developers.arcgis.com/javascript/3/samples/portal_symbols/
m.draw(redlands['location'], symbol=finish_symbol)
###Output
_____no_output_____
###Markdown
Digitizing input
###Code
m = gis.map('Redlands, CA')
m
from arcgis.geometry import lengths
def calc_dist(map1, g):
print("Computing length of drawn polyline...")
length = lengths(g['spatialReference'], [g], "", "geodesic")
print("Length: " + str(length[0]) + " m.")
# Set calc_dist as the callback function to be invoked when a polyline is drawn on the map
m.on_draw_end(calc_dist)
m.draw("freehandpolyline")
###Output
_____no_output_____
###Markdown
Web Maps
###Code
stamen_watercolor = gis.content.search('Stamen Watercolor owner:dkensok',
'Web Map',
outside_org=True)[0]
stamen_watercolor
m2 = gis.map(stamen_watercolor)
m2
ny = geocode('Central Park, New York, NY')[0]
m2.extent = ny['extent']
###Output
_____no_output_____
###Markdown
Web Scenes
###Code
pictometry = gis.content.search('Pictometry',
'Web Scene', outside_org=True)[0]
pictometry
arcgis.mapping.WebScene(pictometry)
###Output
_____no_output_____ |
RHDocentes_Fotografia.ipynb | ###Markdown
RHDocentes - February 2022**Notebook 1** - A Snapshot of ISELStudy organized into 3 notebooks:* **This notebook** - A Snapshot of ISEL * [The next one](https://github.com/arjoca/RHDocentes/blob/main/RHDocentes_Futuro.ipynb) - The Future and the Dynamics of Retirements* [The last one](https://github.com/arjoca/RHDocentes/blob/main/RHDocentes_Corrige.ipynb) - Correcting the Inequalities Sources of information* File "Afetacao_Financeira_ADs_Cursos_2021_v01.xlsx"* File "RAIDES_0.xlsx"Information that, after filtering and anonymization, produced the data stored in the file [clean_data.xlsx](https://github.com/arjoca/RHDocentes/blob/main/data/clean_data.xlsx), used in this study. Preparation Importing modules and reading data
###Code
# Install modules
!pip install kora -q
!pip install -U kaleido
# Import modules
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
import plotly.express as px
import kora.install.orca
# Read data
base_url = 'https://raw.githubusercontent.com/arjoca/RHDocentes/main/data/'
docentes = pd.read_csv(base_url + 'docentes.csv', encoding= 'unicode_escape',
parse_dates=['Data Nascimento'], infer_datetime_format=True)
alunos_per_curso = pd.read_csv(base_url + 'alunos.csv', encoding= 'unicode_escape', index_col=0)
horas = pd.read_csv(base_url + 'horas.csv', encoding= 'unicode_escape', index_col=0)
###Output
_____no_output_____
###Markdown
Initializations
###Code
# View dataframes as interactive tables
%load_ext google.colab.data_table
# Names of the teaching-staff categories
cat_prof_coord = ['Prof. Coordenador', 'Prof. Coordenador c/ Agreg. ', 'Professor Coordenador Principal']
cat_convidado = ['Assistente Convidado', 'Professor Adjunto Convidado']
cat_adjunto = ['Professor Adjunto']
cat_assistente = ['Assistente do 2. Trienio', 'Assistente', 'Equiparado Assistente 2. Trienio']
cat_monitor = ['Monitor']
cat_quadro = cat_prof_coord + cat_adjunto
cat_todas = cat_quadro + cat_convidado + cat_assistente + cat_monitor
###Output
_____no_output_____
###Markdown
Definition of helper functions
###Code
# Draw a bar chart/table with a reference line
def plot_table_series(s, title='', ref=None, size=(17,5)):
fig, ax = plt.subplots(1, 1, figsize=size)
s.name = ''
df = s.to_frame()
df.plot(kind='bar', table=np.round(df.T, 1), ax=ax, legend=None)
if ref is not None:
ax.axhline(y=ref, linewidth=2, color='#d62728')
ax.xaxis.set_visible(False)
ax.set_title(title);
# Labels for the teaching-staff categories
def categorizar(x):
if x in cat_prof_coord: return "Coordenador"
elif x in cat_adjunto: return "Adjunto"
elif x in cat_convidado: return "Convidado"
else: return "Outra"
###Output
_____no_output_____
###Markdown
ISEL Today
###Code
# Contact hours used to determine the relationships between programmes and departments
horas_per_dept = horas.sum()
horas_per_curso = horas.sum(axis=1)
pesos_dept_per_curso = horas.T / horas_per_curso
pesos_curso_per_dept = horas / horas_per_dept
# Students per department
alunos_per_dept = pd.DataFrame(np.dot(pesos_dept_per_curso, alunos_per_curso),
index=pesos_dept_per_curso.index, columns=['Alunos'])
alunos_per_curso = alunos_per_curso['Alunos']
alunos_per_dept = alunos_per_dept['Alunos']
alunos_per_dept.name = ''
alunos_per_dept = alunos_per_dept.sort_index()
# ETIs and students/ETI ratios
eti_per_dept = docentes.groupby(['Departamento'])['ETI'].sum()
eti_per_dept.name = ''
alunos_per_eti_isel = alunos_per_dept.sum() / eti_per_dept.sum()
alunos_per_eti_dept = alunos_per_dept / eti_per_dept
###Output
_____no_output_____
###Markdown
Global numbers
###Code
# Students per department
plot_table_series(alunos_per_dept, 'Alunos por departamento')
# ETIs per department
plot_table_series(eti_per_dept, 'ETIs por departamento')
###Output
_____no_output_____
###Markdown
Overview Distribution of students across departments and programmes
###Code
# Distribution of students
dept_curso_alunos = pesos_dept_per_curso * alunos_per_curso
dept_curso_alunos = dept_curso_alunos.stack().reset_index()
dept_curso_alunos.rename(columns={'level_0': 'Departamento', 0: 'Alunos'}, inplace=True)
# Interactive treemap with the distribution of students (ISEL -> departments -> programmes)
fig = px.treemap(np.round(dept_curso_alunos, 0),
path=[px.Constant('ISEL'), 'Departamento', 'Curso'], values='Alunos')
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25), title={
'text': "Distribuição dos Alunos pelos Departamentos/Cursos",
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'})
# Static figure
fig.write_image('temp_image.png')
Image('temp_image.png',width=1600, height=800)
###Output
_____no_output_____
###Markdown
Distribution of contact hours across departments and programmes
###Code
# Distribution of contact hours
horas_stacked = horas.stack().reset_index()
horas_stacked.rename(columns={'level_1': 'Departamento', 0: 'Horas'}, inplace=True)
# Interactive treemap with the distribution of contact hours (ISEL -> departments -> programmes)
fig = px.treemap(np.round(horas_stacked, 0),
path=[px.Constant('ISEL'), 'Departamento', 'Curso'], values='Horas')
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25), title={
'text': "Distribuição das Horas de Contacto pelos Departamentos/Cursos",
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'})
# Static figure
fig.write_image('temp_image.png')
Image('temp_image.png',width=1600, height=800)
###Output
_____no_output_____
###Markdown
Distribution of ETIs across departments and categories
###Code
# Distribution of ETIs by department and category
dist_eti = docentes.drop(columns='Data Nascimento')
dist_eti['Categoria'] = dist_eti['Categoria'].apply(categorizar)
# Interactive sunburst with the distribution of ETIs (ISEL -> departments -> categories)
fig = px.sunburst(dist_eti, path=[px.Constant('ISEL'), 'Departamento', 'Categoria'], values='ETI')
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25),
title={'text': "Distribuição dos ETIs pelos Departamentos/Categorias",
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'},
height=800)
# Static figure
fig.write_image('temp_image.png')
Image('temp_image.png',width=800, height=800)
# Interactive sunburst with the distribution of ETIs (ISEL -> categories -> departments)
fig = px.sunburst(dist_eti, path=[px.Constant('ISEL'), 'Categoria', 'Departamento'], values='ETI')
fig.update_traces(root_color="lightgrey")
fig.update_layout(margin = dict(t=50, l=25, r=25, b=25),
title={
'text': "Distribuição dos ETIs pelas Categorias/Departamentos",
'x': 0.5,
'xanchor': 'center',
'yanchor': 'top'},
height=800)
fig.write_image('temp_image.png')
Image('temp_image.png',width=800, height=800)
###Output
_____no_output_____
###Markdown
Inequalities between departments**Metric:** Students/ETI ratio (ETI = full-time equivalent teaching staff)**Observation:** Students/ETI = (Students/Contact_Hour) x (Contact_Hours/ETI)**In other words**, a high Students/ETI ratio can have one of two causes (or both):* Too many students per contact hour (**teaching overload due to large classes**)* Too many contact hours per ETI (**teaching overload due to an excessive teaching load**) Students/ETI ratio by department
###Code
# Students/ETI ratio by department
plot_table_series(alunos_per_dept / eti_per_dept, 'Rácio Alunos/ETI', ref=alunos_per_eti_isel)
###Output
_____no_output_____
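###Markdown
A quick numerical sanity check of the decomposition above (a sketch; it only uses the per-department series already computed earlier in the notebook):
###Code
# Check that Students/ETI == (Students per contact hour) x (Contact hours per ETI), department by department
lhs = (alunos_per_dept / eti_per_dept).sort_index()
rhs = ((alunos_per_dept / horas_per_dept) * (horas_per_dept / eti_per_dept)).sort_index()
print(np.allclose(lhs, rhs))
###Output
_____no_output_____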
###Markdown
First factor: students per contact hour (relative class size)
###Code
# Relative efficiency of the departments/programmes
alunos_per_hora_isel = alunos_per_dept.sum() / horas_per_dept.sum()
horas_per_eti_isel = horas_per_curso.sum() / eti_per_dept.sum()
eff_alunos_per_hora_dept = 100 * alunos_per_dept / horas_per_dept / alunos_per_hora_isel
eff_horas_per_eti_dept = 100 * horas_per_dept / eti_per_dept / horas_per_eti_isel
eff_alunos_per_hora_curso = 100 * alunos_per_curso / horas_per_curso / alunos_per_hora_isel
# Students per contact hour
plot_table_series(eff_alunos_per_hora_dept, 'Alunos por hora de contacto', ref=100)
###Output
_____no_output_____
###Markdown
Second factor: contact hours per ETI (relative teaching load)
###Code
# Contact hours per ETI
plot_table_series(eff_horas_per_eti_dept, 'Horas de contacto por ETI', ref=100)
###Output
_____no_output_____
###Markdown
Aggregate view of the two factors
###Code
# Figure visualizing the two factors
fig, ax = plt.subplots(1, 1, figsize=(17,5))
df = pd.DataFrame(data={'Horas/ETI':eff_horas_per_eti_dept,'Alunos/Hora':eff_alunos_per_hora_dept})
df.plot(kind='bar', ax=ax)
ax.grid(True)
###Output
_____no_output_____
###Markdown
Analysis by programme (relative class size) Bachelor's programmes
###Code
# Figure with the relative efficiency of each programme (bachelor's)
plot_table_series(eff_alunos_per_hora_curso[:11], 'Alunos por hora de contacto', ref=100)
###Output
_____no_output_____
###Markdown
Master's programmes
###Code
# Figure with the relative efficiency of each programme (master's)
plot_table_series(eff_alunos_per_hora_curso[11:], 'Alunos por hora de contacto', ref=100)
###Output
_____no_output_____
###Markdown
Missing ETIs
###Code
# VIANA (ideal number of ETIs given the number of students)
# FANA (ETIs missing given the number of students)
alunos_per_eti_ideal = alunos_per_eti_isel
viana = alunos_per_dept / alunos_per_eti_ideal
fana = viana - eti_per_dept
# Figure with the ETI deficit/surplus per department
plot_table_series(fana, 'Défice/Superávite de ETIs por Departamento', ref=0)
# Percentage of missing/excess ETIs relative to the departments' current ETIs
s = 100*fana/eti_per_dept
plot_table_series(s, 'Percentagem de ETIs em Falta/Excesso relativamente aos atuais ETIs dos Departamentos', ref=0)
###Output
_____no_output_____ |
1D Numerov Schrodinger Solver.ipynb | ###Markdown
A numerical 1D Schrödinger solutionRevised from initial work in comp phys class.Based on: "TANG_DONGJIAO thesis.pdf"Would be good to reconcile these two and publish to http://www.compadre.org/picup/TODO: check agreement with theory
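A sketch of the matrix form implemented below (my summary of the matrix Numerov method, not a quotation from the thesis): discretizing $-\frac{\hbar^2}{2m}\psi'' + V\psi = E\psi$ with Numerov's formula gives $-\frac{\hbar^2}{2m}B^{-1}A\,\psi + V\psi = E\psi$, where $A = \frac{1}{dx^2}\,\mathrm{tridiag}(1,-2,1)$ approximates the second derivative, $B = \frac{1}{12}\,\mathrm{tridiag}(1,10,1)$ is the Numerov weighting, and $V$ is the diagonal potential matrix; the code builds exactly these matrices and diagonalizes $H = -\frac{\hbar^2}{2m}B^{-1}A + V$ with `scipy.linalg.eigh`.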
###Code
import numpy as np
from scipy.linalg import eigh, inv
import matplotlib.pyplot as plt
%matplotlib inline
N = 1000
x, dx = np.linspace(-1,1,N,retstep=True)
#dx = dx*0.1
# Finite square well
V_0 = np.zeros(N)
V_0[:] = 450
V_0[int(N/2 - N/6):int(N/2+N/6)] = 0
plt.plot(x,V_0)
plt.ylim(V_0.min() - 0.1*V_0.max(),V_0.max()*1.1)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m) * inv(B) @ A + V  # Numerov: matrix product B^{-1} A plus the diagonal potential
energy, evecs = eigh(H,eigvals=(0,20))
E0 = energy[0] # ground state energy
states = [evecs[:,i] for i in range(20)]
plt.plot(energy,".")
plt.fill_between(range(21),E0,E0+V_0.max(), color='c', alpha=0.25) # Shade the bound states
for i,state in enumerate(states[0:17]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*2000 + energy[i])
plt.title("Finite square well")
#plt.fill_between(x,0,V,color='k',alpha=0.1) # shade in the potential well
###Output
_____no_output_____
###Markdown
SHO
###Code
# Harmonic oscillator potential
V_0 = 250*x**2
plt.plot(x,V_0)
plt.ylim(-50,400)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m) * inv(B) @ A + V  # Numerov: matrix product B^{-1} A plus the diagonal potential
energy, evecs = eigh(H,eigvals=(0,30))
E0 = energy[0]
states = [evecs[:,i] for i in range(30)]
plt.plot(energy,".")
plt.fill_between(range(31),E0,E0+V_0.max(), color='c', alpha=0.25)
###Output
_____no_output_____
###Markdown
The bound-state energies (below the cutoff) are clearly evenly spaced (linear in $n$, as expected for a harmonic oscillator); above the cutoff we see the ∞-well-like solutions.
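A quick check against theory (a sketch; it assumes `m`, `hbar` and the SHO `energy` array from the cells above are still in scope): $V=250x^2$ corresponds to $\tfrac12 m\omega^2x^2$ with $\omega=\sqrt{2\cdot 250/m}$, so the bound levels should follow $E_n=\hbar\omega(n+\tfrac12)$.
###Code
# Compare the lowest computed eigenvalues with the analytic harmonic-oscillator levels
w = np.sqrt(2 * 250 / m)          # V = 250 x^2  =>  (1/2) m w^2 = 250
n = np.arange(8)
E_theory = hbar * w * (n + 0.5)
print(np.round(energy[:8], 2))    # numerical bound-state energies
print(np.round(E_theory, 2))      # analytic levels E_n = hbar*w*(n + 1/2)
###Output
_____no_output_____
###Markdown
The low-lying levels should land close to the analytic values; agreement degrades for the highest bound states, which feel the box walls at $x=\pm1$.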
###Code
for i,state in enumerate(states[0:8]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*1000 + energy[i])
plt.title("Harmonic oscillator")
plt.ylim(E0,E0+100)
plt.fill_between(x,E0,E0+V_0,color='k',alpha=0.1)
###Output
_____no_output_____
###Markdown
Periodic wells:
###Code
N = 1000
x, dx = np.linspace(-1,1,N,retstep=True)
V_0 = np.zeros(N)
# periodic wells
V_0[:] = 1000
L = N/12 # well width (in grid points)
S = N/10 # spacing between successive wells
a = N/4  # offset of the first well
for i in range(5):
V_0[int(i*S+a):int(i*S+a+L)] = 0
plt.plot(x,V_0)
plt.ylim(-50,3050)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m) * inv(B) @ A + V  # Numerov: matrix product B^{-1} A plus the diagonal potential
energy, evecs = eigh(H,eigvals=(0,30))
E0 = energy[0]
states = [evecs[:,i] for i in range(30)]
plt.plot(energy,".")
plt.fill_between(range(31),E0,E0+V_0.max(), color='c', alpha=0.25)
plt.figure(figsize=(16,6))
for i,state in enumerate(states[0:15]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*3000 + energy[i])
plt.fill_between(x,E0,E0+V_0,color='k',alpha=0.1)
#plt.plot(E0+V_0) TODO
plt.title("Bandgaps in periodic structure")
###Output
_____no_output_____
###Markdown
Bandgaps! For Students: explore the symmetry of these states. Q: Are there five degenerate states because each state has the particle in only one well? Q: Why does each cluster of states start to have a slope in the E vs. graph?
###Code
for i,state in enumerate(states[0:5]):
plt.subplot(5,1,i+1)
plt.plot(x, state**2)
for i,state in enumerate(states[20:25]):
plt.subplot(5,1,i+1)
plt.plot(x, state**2)
plt.figure(figsize=(10,3))
plt.plot(x,states[24]**2)
plt.plot(x,states[20]**2)
###Output
_____no_output_____ |
notebooks/04_DataVist.ipynb | ###Markdown
04 Analyze and visualize dataAnalyze and visualize the data
###Code
# Data manipulation
import pandas as pd
import numpy as np
import seaborn as sns
import ast
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import norm, skew
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
#import nltk
#import pandas as pd
df = pd.read_csv('export_df_all_divorce_dropna.csv')
df.columns
# rename feature columns
df.rename(columns = {'age_diff':'age difference', 'zodiac_sp':'zodiac (spouse)',
'num_of_child_cl':'num of child', 'num_of_child_sp_cl':'num of child (spouse)',
'num_of_role':'num of role', 'num_of_role_sp': 'num of role (spouse)',
'geo_distance': 'geo distance', 'num_of_m': 'num of marrage', 'num_of_m_sp': 'num of marrage (spouse)',
'age_m_1':'age at 1st marriage', 'age_m_sp_1': 'age at 1st marriage (spouse)',
'age_div_1':'age at 1st divorce', 'age_div_sp_1': 'age at 1st divorce (spouse)', }, inplace=True)
df.columns
###Output
_____no_output_____
###Markdown
load data
###Code
#df = df.drop(["Unnamed: 0"], axis = 1)
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Data visulization **1.** overview
###Code
df.hist(figsize=(15,12));
df_new = df[['name','bday','sex','age','age difference', 'zodiac',
'zodiac (spouse)','num of marrage', 'num of marrage (spouse)',
'num of child', 'num of child (spouse)',
'num of role', 'num of role (spouse)',
'geohash', 'geo distance',
'age at 1st marriage', 'age at 1st divorce', 'age at 1st marriage (spouse)', 'age at 1st divorce (spouse)',
'divorce']]
fig, ax = plt.subplots(figsize=(12,12)) # Sample figsize in inches
sns.heatmap(df_new.corr(), annot=True, fmt=".2f");
###Output
_____no_output_____
###Markdown
**2.** Skewness in numerical featureCheck the skew for the numerical feature.
###Code
numeric_feats = df.dtypes[df.dtypes != "object"].index
# Check the skew of all numerical features
skewed_feats = df[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
skewness = pd.DataFrame({'Skew' :skewed_feats})
skewness.plot.bar()
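# Optional sketch (not in the original analysis): log1p is a common way to tame
# strong positive skew; applied here only to non-negative, highly skewed columns
# and only on a copy, so the original frame `df` is untouched.
pos_cols = [c for c in numeric_feats if (df[c].dropna() >= 0).all()]
high_skew = [c for c in pos_cols if abs(skew(df[c].dropna())) > 1]
df_log = df.copy()
df_log[high_skew] = np.log1p(df_log[high_skew])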
###Output
_____no_output_____
###Markdown
**3.** Divorce rate. Here, I want to define the [divorce rate](https://en.wikipedia.org/wiki/Divorce_demography) used in this project: the "divorce rate" measures the number of divorces per 100 married persons, so that all unmarried persons are left out of the calculation. For example, if the data has 10,000 people with 5,000 married women and 500 of those couples divorce, the divorce rate is 10 divorces per 100 married women, or 10%.$$\text{Divorce rate} = \frac{\text{Number of divorces}\cdot 100}{\text{Number of married persons}}$$
###Code
# calculate the divorce rate
a = (df[df['divorce']==1]['divorce'].sum())/(df[df['num of marrage']!=0]['divorce'].notnull().sum())*100
"Overall divorce rate = {0:8.2f} %".format(a)#
b = df[(df['sex']=='M') & (df['divorce']==1)]['divorce'].sum()/(df[(df['sex']=='M')&(df['num of marrage']!=0)]['divorce'].notnull().sum())*100
"Divorce rate for male actor = {0:8.2f} %".format(a)#
c = df[(df['sex']=='F') & (df['divorce']==1)]['divorce'].sum()/(df[(df['sex']=='F')&(df['num of marrage']!=0)]['divorce'].notnull().sum())*100
"Divorce rate for female actor= {0:8.2f} %".format(a)#
# Set up a factorplot
g = sns.factorplot("sex", "divorce", data=df, kind="bar", palette="muted", legend=False, size=5, aspect=1)
# Show plot
plt.show()
plt.show()
###Output
_____no_output_____
###Markdown
Calculate the divorce rate for US general population (based on [US Census data in 2017](https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?pid=ACS_17_1YR_S1201&prodType=table))
###Code
print("Divorce rate for US general population (2017)")
print('Divorce rate (total): {0:8.2f}%'.format(10.9/47.8*100))
print('Divorce rate (male): {0:8.2f}%'.format(9.6/49.3*100))
print('Divorce rate (female): {0:8.2f}%'.format(12.1/46.4*100))
###Output
Divorce rate for US general population (2017)
Divorce rate (total): 22.80%
Divorce rate (male): 19.47%
Divorce rate (female): 26.08%
###Markdown
**4.** Divorce vs. Age at 1st marriage
###Code
#sns.distplot('age', data = df)
ax = sns.catplot(x="divorce", y="age at 1st marriage", hue="sex",
kind="violin", split=True, data=df);
ax.set(ylim=(5, 70))
#sns.jointplot(x='divorce', y='age', data=df);
axes = sns.catplot(x="divorce", y="age at 1st marriage", hue="sex", kind="box", showfliers=False, data=df);
axes.set(ylim=(5, 70))
a = df[(df['divorce']==0) & (df['sex']=='M')]['age at 1st marriage'].median()
b = df[(df['divorce']==0) & (df['sex']=='F')]['age at 1st marriage'].median()
c = df[(df['divorce']==1) & (df['sex']=='M')]['age at 1st marriage'].median()
d = df[(df['divorce']==1) & (df['sex']=='F')]['age at 1st marriage'].median()
print("Median age at 1st marriage for non-divorce male actor : {}".format(a))
print("Median age at 1st marriage for non-divorce female actor : {}".format(b))
print("Median age at 1st marriage for divorce male actor : {}".format(c))
print("Median age at 1st marriage for divorce female actor : {}".format(d))
## In 2018, the median age at first marriage was almost 30 for men and almost 28 for women.
###Output
Median age at 1st marriage for non-divorce male actor : 32.0
Median age at 1st marriage for non-divorce female actor : 31.0
Median age at 1st marriage for divorce male actor : 27.0
Median age at 1st marriage for divorce female actor : 25.0
###Markdown
**5.** Divorce vs. Age at 1st marriage (spouse)
###Code
axes = sns.catplot(x="divorce", y= 'age at 1st marriage (spouse)', hue="sex",
kind="violin", split=True, data=df);
#sns.jointplot(x='divorce', y='age', data=df);
axes.set(ylim=(5, 70))
axes = sns.catplot(x="divorce", y="age at 1st marriage (spouse)", hue="sex", kind="box", showfliers=False, data=df);
axes.set(ylim=(5, 70))
a = df[(df['divorce']==0) & (df['sex']=='M')]['age at 1st marriage (spouse)'].median()
b = df[(df['divorce']==0) & (df['sex']=='F')]['age at 1st marriage (spouse)'].median()
c = df[(df['divorce']==1) & (df['sex']=='M')]['age at 1st marriage (spouse)'].median()
d = df[(df['divorce']==1) & (df['sex']=='F')]['age at 1st marriage (spouse)'].median()
print("Median age at 1st marriage for non-divorce male actor's spouse : {}".format(a))
print("Median age at 1st marriage for non-divorce female actor's spouse : {}".format(b))
print("Median age at 1st marriage for divorce male actor's spouse : {}".format(c))
print("Median age at 1st marriage for divorce female actor's spouse : {}".format(d))
###Output
Median age at 1st marriage for non-divorce male actor's spouse : 29.0
Median age at 1st marriage for non-divorce female actor's spouse : 32.0
Median age at 1st marriage for divorce male actor's spouse : 25.0
Median age at 1st marriage for divorce female actor's spouse : 28.0
###Markdown
**6.** Divorce vs. Age
###Code
axes = sns.catplot(x="divorce", y= 'age', hue="sex",
kind="violin", split=True, data=df);
axes.set(ylim=(5, 180))
# Sort the dataframe by target
df_M = df.loc[df['sex'] == 'M']
df_F = df.loc[df['sex'] == 'F']
ax = sns.distplot(df_M[df_M['divorce']==0]['age'].dropna(), hist=True, label="M")
ax = sns.distplot(df_F[df_F['divorce']==0]['age'].dropna(), hist=True, label="F")
ax.legend()
ax = sns.distplot(df_M[df_M['divorce']==1]['age'].dropna(), hist=True, label="M")
ax = sns.distplot(df_F[df_F['divorce']==1]['age'].dropna(), hist=True, label="F")
ax.legend()
###Output
_____no_output_____
###Markdown
**7.** number of child
###Code
ax = sns.countplot(x="num_of_m", hue = 'sex', data=df)
###Output
_____no_output_____
###Markdown
**8.** number of role
###Code
ax = sns.countplot(x="num_of_role", hue = 'sex', data=df)
###Output
_____no_output_____
###Markdown
More data analysis: calculate the years between the first marriage and the first divorce
###Code
# create a new dataframe
df_year = df[['year_m', 'year_div','sex']]
df_year.head()
# calculate number of marriages and number of divorces
num_year_m = []
num_year_div = []
for i in range(len(df_year)):
try:
num_year_m.append(len(ast.literal_eval(df_year['year_m'][i])))
except:
num_year_m.append(np.nan)
try:
num_year_div.append(len(ast.literal_eval(df_year['year_div'][i])))
except:
num_year_div.append(np.nan)
df_num_year_m = pd.DataFrame({'num_year_m':num_year_m, 'num_year_div':num_year_div})
df_year = pd.concat([df_year, df_num_year_m], axis =1)
df_year['mar_stage'] = df_year['num_year_m'] - df_year['num_year_div']
df_year.sort_values(by=['mar_stage'], ascending=False).head()
#print(df_year['mar_stage'].max())
###Output
_____no_output_____
###Markdown
Calculate the marriage stage, which is the number of marriages minus the number of divorces:
- single and never married: mar_stage = 0, num_year_m = 0
- single and has divorced: mar_stage = 0, num_year_m != 0
- not single, but has divorced: mar_stage > 0, num_year_div != 0

However, there are outliers with mar_stage > 1 (e.g. four marriages but only two divorces).
###Code
# mar_stage = 0 (single), mar_stage >=1 (not single)
ax = sns.countplot(x="mar_stage", data=df_year)
###Output
_____no_output_____
###Markdown
The average length of a first marriage that ends in divorce
###Code
year_m_1 = []
year_div_1 = []
year_diff_1 = []
for i in range(len(df_year)):
try:
a = ast.literal_eval(df_year['year_m'][i])[0]
year_m_1.append(a)
except:
year_m_1.append(np.nan)
try:
b = ast.literal_eval(df_year['year_div'][i])[0]
year_div_1.append(b)
except:
year_div_1.append(np.nan)
try:
year_diff_1.append(b-a)
except:
year_diff_1.append(np.nan)
df_year_m_div_1 = pd.DataFrame({'year_m_1':year_m_1, 'year_div_1':year_div_1,'year_diff_1':year_diff_1 })
df_year = pd.concat([df_year, df_year_m_div_1], axis =1)
df_year.head()
###Output
_____no_output_____
###Markdown
Remove outliers: remove rows with year_diff_1 < 0
###Code
df_year = df_year[df_year['year_diff_1']>=0].sort_values(by=['year_diff_1'], ascending=True);
axes = sns.catplot(x="mar_stage", y= "year_diff_1", hue="sex",
kind="violin", split=True, data=df_year[df_year['mar_stage']<=1]);
ax = sns.boxplot(x ="sex", y="year_diff_1",
# hue="mar_stage", palette=["m", "g"],
data=df_year[df_year['mar_stage']<=1], showfliers=False)
ax.set(xlabel='sex', ylabel='years last in the first marriage')
ax.set_ylim(-2,27)
plt.show()
ax = sns.boxplot(x ="mar_stage", y="year_diff_1",
# hue="mar_stage", palette=["m", "g"],
data=df_year[df_year['mar_stage']<=1], showfliers=False)
ax.set(xlabel='current marriage stage (0: single, 1: married)', ylabel='years between 1st marriage and divorce')
plt.show()
a = df_year[df_year['mar_stage']<=1]['year_diff_1'].median()
print("The average length of a first marriage that ends in divorce : {}".format(a))
###Output
The average length of a first marriage that ends in divorce : 6.0
###Markdown
The average length of time between the first divorce and the second marriage
###Code
year_m_2 = []
year_div_1 = []
year_diff_2 = []
for i in range(len(df_year)):
try:
a = ast.literal_eval(df_year['year_m'][i])[1] # 2nd marriage year
year_m_2.append(a)
except:
year_m_2.append(np.nan)
try:
b = ast.literal_eval(df_year['year_div'][i])[0] # 1st divorce year
year_div_1.append(b)
except:
year_div_1.append(np.nan)
try:
year_diff_2.append(a-b) # 2nd marriage year - 1st divorce year
except:
year_diff_2.append(np.nan)
df_year = df_year.drop(['year_m_2','year_diff_2'], axis=1, errors='ignore')  # errors='ignore' so this also works on a fresh run
df_year_m_div_2 = pd.DataFrame({'year_m_2':year_m_2, 'year_diff_2':year_diff_2 })
df_year = pd.concat([df_year, df_year_m_div_2], axis =1)
df_year.head()
###Output
_____no_output_____
###Markdown
Create a new view of the data that only contains rows with 'year_diff_2' >= 0 (i.e. the second marriage came after the first divorce)
###Code
#df_year.sort_values(by=['year_diff_2'], ascending=True).head(20)
#df_year_2 = df_year[df_year['year_diff_2']>=0]
df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)].head()
sns.catplot(x="mar_stage", y= "year_diff_2", hue="sex",
kind="violin", split=True,
data=df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)]);
ax = sns.boxplot(x ="mar_stage", y="year_diff_2",
# hue="mar_stage", palette=["m", "g"],
data=df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)], showfliers=False)
ax.set(xlabel='current marriage stage (0: single, 1: married)', ylabel='years between 1st divorce and 2nd marriage ')
plt.show()
ax = sns.boxplot(x ="sex", y="year_diff_2",
# hue="mar_stage", palette=["m", "g"],
data=df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)], showfliers=False)
ax.set(xlabel='sex', ylabel='years of re-marriage')
ax.set_ylim(-2,27)
plt.show()
ax = sns.boxplot(x ="mar_stage", y="year_diff_1",
# hue="mar_stage", palette=["m", "g"],
data=df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)], showfliers=False)
ax.set(xlabel='current marriage stage (0: single, 1: married)', ylabel='years between 1st marriage and divorce')
plt.show()
ax = sns.boxplot(x ="sex", y="year_diff_1",
# hue="mar_stage", palette=["m", "g"],
data=df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)], showfliers=False)
ax.set(xlabel='sex', ylabel='years last in the first marriage')
ax.set_ylim(-2,27)
plt.show()
b = df_year[(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)]['year_diff_2'].median()
print("The average length of the first divorce that ends in the second marriage : {}".format(b))
b = df_year[(df_year['sex']=='M')&(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)]['year_diff_2'].median()
print("The average length of the first divorce that ends in the second marriage (male) : {}".format(b))
b = df_year[(df_year['sex']=='F')&(df_year['mar_stage']<=1) & (df_year['year_diff_2']>=0)]['year_diff_2'].median()
print("The average length of the first divorce that ends in the second marriage (female) : {}".format(b))
###Output
The average length of the first divorce that ends in the second marriage (female) : 7.0
|
scripts/4K.ipynb | ###Markdown
Signed changes
###Code
import numpy as np
from scipy.io import loadmat
import os
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
sns.set_palette("colorblind")
sns.set_context("poster")
import sys
from data_utils import get_per_mouse_boutons, load_data
def compute_angle(x, y, in_degrees=True):
angle_in_radians = np.arccos(x@y / (x@x * y@y)**0.5)
if in_degrees:
return angle_in_radians * 180/np.pi
else:
return angle_in_radians
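# Quick sanity check (my addition): orthogonal vectors should give an angle of ~90 degrees.
print(compute_angle(np.array([1.0, 0.0]), np.array([0.0, 1.0])))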
# Compute angle between avg. response for specific stimulus
# and NP/NR boutons
def get_learning_angles(stim, expt = 'AFC'):
responses = ['exc', 'inh']
    hab_rec = {} # angle between avg. habituation and avg. recall vectors
for resp in responses:
Xr, yr = get_per_mouse_boutons("rec", f"../data/per_mouse/{expt}_{resp}_{stim}/")
Xa, ya = get_per_mouse_boutons("acq", f"../data/per_mouse/{expt}_{resp}_{stim}/")
Xh, yh = get_per_mouse_boutons("hab", f"../data/per_mouse/{expt}_{resp}_{stim}/")
mouse_ids = Xr.keys()
n_mice = len(mouse_ids)
n_trials = 15
hab_rec[resp] = np.zeros((n_mice, ))
for m, mouse_id in enumerate(mouse_ids):
# Mean during habituation vs mean during recall
hab_rec[resp][m] = compute_angle(Xh[mouse_id][:,12:21].mean((0,1)), Xr[mouse_id][:,12:21].mean((0,1)))
return hab_rec
pc_angles = {}
afc_angles = {}
for stim in ['cs1', 'cs2']:
pc_angles[stim] = get_learning_angles(stim, 'Pseudo')
for stim in ['csm', 'csp']:
afc_angles[stim] = get_learning_angles(stim, 'AFC')
# plot them
color = 'tab:blue'
alpha = 0.5
s = 30
for i, resp in enumerate(['exc', 'inh']):
pc_avg = (pc_angles['cs1'][resp]+ pc_angles['cs2'][resp])/2
plt.bar(i, pc_avg.mean(), color=color, alpha=alpha)
plt.scatter([i] * len(pc_avg), pc_avg, s=s, alpha=alpha)
# means
plt.bar(3, afc_angles['csm']['exc'].mean(), color=color, alpha=alpha)
plt.bar(4, afc_angles['csm']['inh'].mean(), color=color, alpha=alpha)
plt.bar(6, afc_angles['csp']['exc'].mean(), color=color, alpha=alpha)
plt.bar(7, afc_angles['csp']['inh'].mean(), color=color, alpha=alpha)
# individual mice
plt.scatter([3]* len(afc_angles['csm']['exc']), afc_angles['csm']['exc'], s=s, alpha=alpha)
plt.scatter([4]* len(afc_angles['csm']['inh']), afc_angles['csm']['inh'], s=s, alpha=alpha)
plt.scatter([6]* len(afc_angles['csp']['exc']), afc_angles['csp']['exc'], s=s, alpha=alpha)
plt.scatter([7]* len(afc_angles['csp']['inh']), afc_angles['csp']['inh'], s=s, alpha=alpha)
plt.xticks([0, 1, 3, 4, 6, 7], ['PN', 'NR']*3)
plt.text(0, -50, "CS1/2", fontsize=20)
plt.text(3, -50, "CS-", fontsize=20)
plt.text(6, -50, "CS+", fontsize=20)
plt.ylabel(f"Learning $\theta$ (deg.)")
sns.despine()
plt.tight_layout()
###Output
_____no_output_____ |
Copy_of_LS_DS_112_Loading_Data.ipynb | ###Markdown
Lambda School Data Science - Loading, Cleaning and Visualizing DataObjectives for today:- Load data from multiple sources into a Python notebook - !curl method - CSV upload method- Create basic plots appropriate for different data types - Scatter Plot - Histogram - Density Plot - Pairplot- "Clean" a dataset using common Python libraries - Removing NaN values "Interpolation"
###Code
###Output
_____no_output_____
###Markdown
Part 1 - Loading DataData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.Data set sources:- https://archive.ics.uci.edu/ml/datasets.html- https://github.com/awesomedata/awesome-public-datasets- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags). Lecture example - flag data
###Code
import seaborn as sns
df = sns.load_dataset('titanic')
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
?pd.read_csv
??pd.read_csv
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
###Output
_____no_output_____
###Markdown
Yes, but what does it *mean*?This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).```1. name: Name of the country concerned2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW4. area: in thousands of square km5. population: in round millions6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others8. bars: Number of vertical bars in the flag9. stripes: Number of horizontal stripes in the flag10. colours: Number of different colours in the flag11. red: 0 if red absent, 1 if red present in the flag12. green: same for green13. blue: same for blue14. gold: same for gold (also yellow)15. white: same for white16. black: same for black17. orange: same for orange (also brown)18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)19. circles: Number of circles in the flag20. crosses: Number of (upright) crosses21. saltires: Number of diagonal crosses22. quarters: Number of quartered sections23. sunstars: Number of sun or star symbols24. crescent: 1 if a crescent moon symbol present, else 025. triangle: 1 if any triangles present, 0 otherwise26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 027. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise29. topleft: colour in the top-left corner (moving right to decide tie-breaks)30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)```Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...
###Code
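# One possible solution sketch for the exercise above (the variable name
# `flags_named` is my own choice, not from the original notebook):
cols = ['name','landmass','zone','area','population','language','religion','bars','stripes','colours',
        'red','green','blue','gold','white','black','orange','mainhue','circles','crosses','saltires',
        'quarters','sunstars','crescent','triangle','icon','animate','text','topleft','botright']
flags_named = pd.read_csv(flag_data_url, header=None, names=cols)
flags_named.head()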
###Output
_____no_output_____
###Markdown
Loading from a local CSV to Google Colab
###Code
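# Sketch of one common upload pattern (assumes a Google Colab runtime and a CSV chosen by the user):
import io
from google.colab import files
uploaded = files.upload()                           # opens a browser file picker
name = next(iter(uploaded))                         # name of the first uploaded file
local_df = pd.read_csv(io.BytesIO(uploaded[name]))  # read it straight from memory
local_df.head()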
###Output
_____no_output_____
###Markdown
Part 2 - Basic Visualizations Basic Data Visualizations Using Matplotlib
###Code
import matplotlib.pyplot as plt
# Scatter Plot
# Histogram
# Seaborn Density Plot
# Seaborn Pairplot
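# Minimal sketches of each plot type, using the `flags_named` frame built in the
# exercise sketch above (my own variable name, not from the original notebook):
import seaborn as sns
plt.scatter(flags_named['area'], flags_named['population']); plt.show()   # scatter plot
plt.hist(flags_named['colours']); plt.show()                              # histogram
sns.kdeplot(flags_named['area']); plt.show()                              # density plot
sns.pairplot(flags_named[['area', 'population', 'colours']]); plt.show()  # pairplot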
###Output
_____no_output_____
###Markdown
Create the same basic Visualizations using Pandas
###Code
# Pandas Histogram - Look familiar?
# Pandas Scatterplot
# Pandas Scatter Matrix - Usually doesn't look too great.
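# Pandas equivalents (sketch), again on the `flags_named` frame from above:
flags_named['colours'].hist()
flags_named.plot.scatter(x='area', y='population')
pd.plotting.scatter_matrix(flags_named[['area', 'population', 'colours']]);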
###Output
_____no_output_____
###Markdown
Part 3 - Deal with Missing Values Diagnose Missing Values. Let's use the Adult Dataset from UCI.
###Code
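# Sketch: the UCI Adult data marks missing values with '?', so they can be turned
# into NaNs on import (column names omitted for brevity; `adult` is my own name).
adult_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
adult = pd.read_csv(adult_url, header=None, skipinitialspace=True, na_values='?')
adult.isna().sum()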
###Output
_____no_output_____
###Markdown
Fill Missing Values
###Code
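# Sketch: one simple fill strategy -- replace NaNs with each column's mode
# (assumes the `adult` frame from the sketch in the previous cell).
adult = adult.fillna(adult.mode().iloc[0])
adult.isna().sum()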
###Output
_____no_output_____
###Markdown
Your assignment - pick a dataset and do something like the aboveThis is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar semi-clean source. You don't want the data that you're working with for this assignment to have any bigger issues than maybe not having headers or including missing values, etc.After you have chosen your dataset, do the following:- Import the dataset using the method that you are least comfortable with (!curl or CSV upload). - Make sure that your dataset has the number of rows and columns that you expect. - Make sure that your dataset has appropriate column names, rename them if necessary. - If your dataset uses markers like "?" to indicate missing values, replace them with NaNs during import.- Identify and fill missing values in your dataset (if any) - Don't worry about using methods more advanced than the `.fillna()` function for today.- Create one of each of the following plots using your dataset - Scatterplot - Histogram - Density Plot - Pairplot (note that pairplots will take a long time to load with large datasets or datasets with many columns)If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck!).If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
###Code
# coinmarket_data = 'https://coinmarketcap.com/'  -- how can I input the data from here and organize it into a similar format? I was getting errors when I tried to load the data
import numpy as np
import pandas as pd
from pandas.io.json import json_normalize
from requests import Request, Session
from requests.exceptions import ConnectionError, Timeout, TooManyRedirects
import json
from bs4 import BeautifulSoup
import requests
from tqdm import tqdm_notebook as tqdm
import time
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
def get_data(c,start='20130428',end='20190528'):
url = 'https://coinmarketcap.com/currencies/'+c+'/historical-data/?start='+start+'&end='+end
url=url.replace(' ','-')
content = requests.get(url).content
soup = BeautifulSoup(content,'html.parser')
# time.sleep(1)
table = soup.find('table', {'class': 'table'})
data = [[td.text.strip() for td in tr.findChildren('td')]
for tr in table.findChildren('tr')]
df = pd.DataFrame(data)
df.drop(df.index[0], inplace=True) # first row is empty
df[0] = pd.to_datetime(df[0]) # date
for i in range(1,7):
df[i] = pd.to_numeric(df[i].str.replace(",","").str.replace("-","")) # some vol is missing and has -
df.columns = ['Date','Open','High','Low','Close','Volume','Market Cap']
df['Name']=c
return df
df_total=pd.DataFrame()
df_list=[]
for c in tqdm(['bitcoin-cash','bitcoin','dash','dogecoin','ethereum','iota','litecoin','nem','neo']):
print(c)
try:
df_tmp=get_data(c)
df_list.append(df_tmp)
except:
print('failed to parse for :%s'%(c))
# df_total=pd.concat(df_list)
# df_total=df_total.sort_values(by=['Name','Date']).reset_index()
# df_total
print(len(df_list))
df_total=pd.concat(df_list)
df_total=df_total.sort_values(by=['Name','Date']).reset_index(drop=True)
df_total.to_csv('crypto_amit_may28.csv', index=False)
df_total.isna().sum()
pd.set_option('display.max_rows',500)
pd.set_option('display.max_columns',500)
df_total.isnull().sum() #still nan values?
de =df_total[df_total.Name=='ethereum']
#df_total[df_total.Name=='ethereum'].head()
de.head()
de.head().Open
import matplotlib.pyplot as plt
import numpy as np
de.hist()
#import matplotlib.pylab as plt
de.Open.plot();
#plt.scatter('Open', 'Close')
plt.scatter(de.Open, de.Close)
de['Volume'].plot.density();
###Output
_____no_output_____
###Markdown
Stretch Goals - Other types and sources of dataNot all data comes in a nice single file - for example, image classification involves handling lots of image files. You still will probably want labels for them, so you may have tabular data in addition to the image blobs - and the images may be reduced in resolution and even fit in a regular csv as a bunch of numbers.If you're interested in natural language processing and analyzing text, that is another example where, while it can be put in a csv, you may end up loading much larger raw data and generating features that can then be thought of in a more standard tabular fashion.Overall you will in the course of learning data science deal with loading data in a variety of ways. Another common way to get data is from a database - most modern applications are backed by one or more databases, which you can query to get data to analyze. We'll cover this more in our data engineering unit.How does data get in the database? Most applications generate logs - text files with lots and lots of records of each use of the application. Databases are often populated based on these files, but in some situations you may directly analyze log files. The usual way to do this is with command line (Unix) tools - command lines are intimidating, so don't expect to learn them all at once, but depending on your interests it can be useful to practice.One last major source of data is APIs: https://github.com/toddmotto/public-apisAPI stands for Application Programming Interface, and while originally meant e.g. the way an application interfaced with the GUI or other aspects of an operating system, now it largely refers to online services that let you query and retrieve data. You can essentially think of most of them as "somebody else's database" - you have (usually limited) access.*Stretch goal* - research one of the above extended forms of data/data loading. See if you can get a basic example working in a notebook. Image, text, or (public) APIs are probably more tractable - databases are interesting, but there aren't many publicly accessible and they require a great deal of setup.
###Code
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Loading, Cleaning and Visualizing DataObjectives for today:- Load data from multiple sources into a Python notebook - !curl method - CSV upload method- Create basic plots appropriate for different data types - Scatter Plot - Histogram - Density Plot - Pairplot- "Clean" a dataset using common Python libraries - Removing NaN values "Interpolation" Part 1 - Loading DataData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.Data set sources:- https://archive.ics.uci.edu/ml/datasets.html- https://github.com/awesomedata/awesome-public-datasets- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags). Lecture example - flag data
###Code
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
?pd.read_csv
??pd.read_csv
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
###Output
_____no_output_____
###Markdown
Yes, but what does it *mean*?This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).```1. name: Name of the country concerned2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW4. area: in thousands of square km5. population: in round millions6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others8. bars: Number of vertical bars in the flag9. stripes: Number of horizontal stripes in the flag10. colours: Number of different colours in the flag11. red: 0 if red absent, 1 if red present in the flag12. green: same for green13. blue: same for blue14. gold: same for gold (also yellow)15. white: same for white16. black: same for black17. orange: same for orange (also brown)18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)19. circles: Number of circles in the flag20. crosses: Number of (upright) crosses21. saltires: Number of diagonal crosses22. quarters: Number of quartered sections23. sunstars: Number of sun or star symbols24. crescent: 1 if a crescent moon symbol present, else 025. triangle: 1 if any triangles present, 0 otherwise26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 027. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise29. topleft: colour in the top-left corner (moving right to decide tie-breaks)30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)```Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...
###Code
column_headers = ['name','landmass','zone','area','population','language','religion','bars','stripes','colours',
                  'red','green','blue','gold','white','black','orange','mainhue','circles','crosses','saltires',
                  'quarters','sunstars','crescent','triangle','icon','animate','text','topleft','botright']
flag_data = pd.read_csv(flag_data_url, header=None, names=column_headers)
flag_data.head()
###Output
_____no_output_____
###Markdown
Loading from a local CSV to Google Colab Part 2 - Basic Visualizations Basic Data Visualizations Using Matplotlib
###Code
import matplotlib.pyplot as plt
# Scatter Plot
# Histogram
# Seaborn Density Plot
# Seaborn Pairplot
###Output
_____no_output_____
###Markdown
Create the same basic Visualizations using Pandas
###Code
# Pandas Histogram - Look familiar?
# Pandas Scatterplot
# Pandas Scatter Matrix - Usually doesn't look too great.
###Output
_____no_output_____
###Markdown
Part 3 - Deal with Missing Values Diagnose Missing Values. Let's use the Adult Dataset from UCI.
###Code
###Output
_____no_output_____
###Markdown
Fill Missing Values Your assignment - pick a dataset and do something like the aboveThis is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar semi-clean source. You don't want the data that you're working with for this assignment to have any bigger issues than maybe not having headers or including missing values, etc.After you have chosen your dataset, do the following:- Import the dataset using the method that you are least comfortable with (!curl or CSV upload). - Make sure that your dataset has the number of rows and columns that you expect. - Make sure that your dataset has appropriate column names, rename them if necessary. - If your dataset uses markers like "?" to indicate missing values, replace them with NaNs during import.- Identify and fill missing values in your dataset (if any) - Don't worry about using methods more advanced than the `.fillna()` function for today.- Create one of each of the following plots using your dataset - Scatterplot - Histogram - Density Plot - Pairplot (note that pairplots will take a long time to load with large datasets or datasets with many columns)If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck!).If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
###Code
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
plants_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/plants/plants.data'
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/plants/plants.data
wine_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data'
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data > wine.data
help(pd.read_csv)
?pd.read_csv
wine_data = pd.read_csv(wine_data_url, header=None)
wine_data.head()
wine_data = pd.read_csv('wine.data', header=None)  # the file has no header row
wine_data.head()
wine_data.count()
wine_data.isna().sum()
# wine_data['received']  # commented out: the wine data has no 'received' column (KeyError)
column_headers = ['total', 'received', 'time','speed']
wine_data = pd.read_csv(wine_data_url, names=column_headers)
wine_data.head()
column_headers = ['class', 'alcohol', 'malic acid', 'ash', 'alcalinity of ash', 'magnesium', 'total phenols',
                  'flavanoids', 'nonflavanoid phenols', 'proanthocyanins', 'color intensity', 'hue',
                  'OD280/OD315 of diluted wines', 'proline']
wine_data = pd.read_csv(wine_data_url, names=column_headers)
wine_data.head()
df = pd.read_csv(wine_data_url, names=column_headers)
df.head()
import matplotlib.pyplot as plt
# Scatter Plot
df.plot.scatter('alcohol', 'ash')
# Histogram
wine_data.hist();
wine_data.alcohol.hist();
wine_data.alcohol.hist(bins=20);
wine_data.alcohol.hist(bins=20);
wine_data.alcohol.hist(by=wine_data['class'])
df.plot.kde()
wine_data.plot.kde()
wine_data.ash.plot.kde()
# Seaborn Density plot
wine_data.ash.plot.density()
# Seaborn Pairplot
import seaborn as sns;
sns.set(style='ticks', color_codes=True)
g = sns.pairplot(wine_data)
wine_data.hist();
###Output
_____no_output_____
###Markdown
Stretch Goals - Other types and sources of dataNot all data comes in a nice single file - for example, image classification involves handling lots of image files. You still will probably want labels for them, so you may have tabular data in addition to the image blobs - and the images may be reduced in resolution and even fit in a regular csv as a bunch of numbers.If you're interested in natural language processing and analyzing text, that is another example where, while it can be put in a csv, you may end up loading much larger raw data and generating features that can then be thought of in a more standard tabular fashion.Overall you will in the course of learning data science deal with loading data in a variety of ways. Another common way to get data is from a database - most modern applications are backed by one or more databases, which you can query to get data to analyze. We'll cover this more in our data engineering unit.How does data get in the database? Most applications generate logs - text files with lots and lots of records of each use of the application. Databases are often populated based on these files, but in some situations you may directly analyze log files. The usual way to do this is with command line (Unix) tools - command lines are intimidating, so don't expect to learn them all at once, but depending on your interests it can be useful to practice.One last major source of data is APIs: https://github.com/toddmotto/public-apisAPI stands for Application Programming Interface, and while originally meant e.g. the way an application interfaced with the GUI or other aspects of an operating system, now it largely refers to online services that let you query and retrieve data. You can essentially think of most of them as "somebody else's database" - you have (usually limited) access.*Stretch goal* - research one of the above extended forms of data/data loading. See if you can get a basic example working in a notebook. Image, text, or (public) APIs are probably more tractable - databases are interesting, but there aren't many publicly accessible and they require a great deal of setup.
###Code
deck_of_cards = 'https://deckofcardsapi.com/api/deck/new/shuffle/?deck_count=1'
!curl https://deckofcardsapi.com/api/deck/new/shuffle/?deck_count=1
###Output
curl: (6) Could not resolve host: https
###Markdown
Lambda School Data Science - Loading DataData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.Data set sources:- https://archive.ics.uci.edu/ml/datasets.html- https://github.com/awesomedata/awesome-public-datasets- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags). Lecture example - flag data
###Code
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
###Output
_____no_output_____
###Markdown
Yes, but what does it *mean*?This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).```1. name: Name of the country concerned2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW4. area: in thousands of square km5. population: in round millions6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others8. bars: Number of vertical bars in the flag9. stripes: Number of horizontal stripes in the flag10. colours: Number of different colours in the flag11. red: 0 if red absent, 1 if red present in the flag12. green: same for green13. blue: same for blue14. gold: same for gold (also yellow)15. white: same for white16. black: same for black17. orange: same for orange (also brown)18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)19. circles: Number of circles in the flag20. crosses: Number of (upright) crosses21. saltires: Number of diagonal crosses22. quarters: Number of quartered sections23. sunstars: Number of sun or star symbols24. crescent: 1 if a crescent moon symbol present, else 025. triangle: 1 if any triangles present, 0 otherwise26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 027. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise29. topleft: colour in the top-left corner (moving right to decide tie-breaks)30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)```Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1... Your assignment - pick a dataset and do something like the aboveThis is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar "clean" source.If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck).If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
###Code
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
import pandas as pd
import numpy as np
zoo_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/zoo/zoo.data'
# I had to put in this attribute list manually as I couldn't work out an easy way to get it from the site
attributes = ['animal name','hair','feathers','eggs','milk','airborne','aquatic',
'predator','toothed','backbone','breathes','venomous','fins','legs','tail','domestic','catsize','type']
df = pd.read_csv(zoo_data_url,header=None,names = attributes)
df.head()
#the data has no missing values
sum(df.isna().sum())
# Here I am going to use one of the APIs from the resource below. I randomly chose the Oxford Dictionaries API and read how to use it.
# To look up words they have to be stripped down to their 'lemma', i.e. their root, so 'sorting' would become 'sort'.
# To do this I will have to download nltk to tokenize and lemmatise a random block of text I generated from www.randomtext.me
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem import WordNetLemmatizer
nltk.download('punkt') #this import was required after an error message
nltk.download('wordnet') #this import was required after an error message
random_text = '''Hey hello one more tapir while more away when much vulture that jeez some as near more one
crud much and despite jeez bounced due far freely far until wherever and together oh much dove
hey some stupid far vain juggled perceptible because some less on alas
jeepers crane strove crud the shark quetzal contrary inconsiderate
mocking shut gecko however much '''
text_words = nltk.word_tokenize(random_text)
lemma_list = []
for word in text_words:
lemmaword = WordNetLemmatizer().lemmatize(word, pos="v")
lemma_list.append(lemmaword)
lemma_list[:5]
# Making functions to look up and parse entries in dictionaries. I only requested a few features, but could have retrieved many more.
# Many of the entries have missing values.
import ast
import requests
import json
def oxlookup(word):
app_id = 'dbb2134d'
app_key = '5c84de9b610305a7944afa6820f97a06'
language = 'en'
word_id = word
url = 'https://od-api.oxforddictionaries.com:443/api/v1/entries/' + language + '/' + word_id.lower()
r = requests.get(url, headers = {'app_id': app_id, 'app_key': app_key})
return (json.dumps(r.json()))
def parselookup(jsondump):
my_dict = ast.literal_eval(jsondump)
results_dict = my_dict['results'][0]
lexical_entries = results_dict['lexicalEntries']
idnum = results_dict['id']
language = results_dict['language']
no_entries = len(lexical_entries)
try:
primary_definition_domain = lexical_entries[0]['entries'][0]['senses'][0]['domains'][0]
except:
primary_definition_domain=np.nan
try:
etymologies = lexical_entries[0]['entries'][0]['etymologies'][0]
except:
etymologies=np.nan
primary_definition = lexical_entries[0]['entries'][0]['senses'][0]['short_definitions'][0]
try:
example = lexical_entries[0]['entries'][0]['senses'][0]['examples'][0]['text']
except:
example = np.nan
returnlist = [idnum,no_entries,etymologies,primary_definition_domain,primary_definition,example]
return returnlist
column_list = ['id_number','number_lexical_entries','etymologies','primary_definition_domain','primary_definition','example']
#looking up each word in the lemmatized random text list and slowing down the api so we don't violate the terms of service
import time
dictlist=[]
for word in lemma_list :
time.sleep(-time.time()%1)
templookup = oxlookup(word)
dictlist.append(templookup)
dictlist[:5]
#parsing dictionary and turning it into dataframe
blist=[]
for entry in dictlist:
bbb=parselookup(entry)
blist.append(bbb)
column_list = ['id_number','number_lexical_entries','etymologies','primary_definition_domain','primary_definition','example']
bdf = pd.DataFrame.from_records(blist,columns=column_list)
bdf.head(30)
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Loading, Cleaning and Visualizing DataObjectives for today:- Load data from multiple sources into a Python notebook - !curl method - CSV upload method- Create basic plots appropriate for different data types - Scatter Plot - Histogram - Density Plot - Pairplot- "Clean" a dataset using common Python libraries - Removing NaN values "Interpolation" Part 1 - Loading DataData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.Data set sources:- https://archive.ics.uci.edu/ml/datasets.html- https://github.com/awesomedata/awesome-public-datasets- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags). Lecture example - flag data
###Code
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
help(pd.read_csv)
?pd.read_csv
??pd.read_csv
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
###Output
_____no_output_____
###Markdown
Yes, but what does it *mean*?This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).```1. name: Name of the country concerned2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW4. area: in thousands of square km5. population: in round millions6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others8. bars: Number of vertical bars in the flag9. stripes: Number of horizontal stripes in the flag10. colours: Number of different colours in the flag11. red: 0 if red absent, 1 if red present in the flag12. green: same for green13. blue: same for blue14. gold: same for gold (also yellow)15. white: same for white16. black: same for black17. orange: same for orange (also brown)18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)19. circles: Number of circles in the flag20. crosses: Number of (upright) crosses21. saltires: Number of diagonal crosses22. quarters: Number of quartered sections23. sunstars: Number of sun or star symbols24. crescent: 1 if a crescent moon symbol present, else 025. triangle: 1 if any triangles present, 0 otherwise26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 027. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise29. topleft: colour in the top-left corner (moving right to decide tie-breaks)30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)```Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1...
###Code
###Output
_____no_output_____
###Markdown
Loading from a local CSV to Google Colab
###Code
###Output
_____no_output_____
###Markdown
Part 2 - Basic Visualizations Basic Data Visualizations Using Matplotlib
###Code
import matplotlib.pyplot as plt
# Scatter Plot
# Histogram
# Seaborn Density Plot
# Seaborn Pairplot
###Output
_____no_output_____
###Markdown
Create the same basic Visualizations using Pandas
###Code
# Pandas Histogram - Look familiar?
# Pandas Scatterplot
# Pandas Scatter Matrix - Usually doesn't look too great.
###Output
_____no_output_____
###Markdown
Part 3 - Deal with Missing Values Diagnose Missing Values. Let's use the Adult Dataset from UCI.
###Code
###Output
_____no_output_____
###Markdown
Fill Missing Values
###Code
###Output
_____no_output_____
###Markdown
Your assignment - pick a dataset and do something like the aboveThis is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar semi-clean source. You don't want the data that you're working with for this assignment to have any bigger issues than maybe not having headers or including missing values, etc.After you have chosen your dataset, do the following:- Import the dataset using the method that you are least comfortable with (!curl or CSV upload). - Make sure that your dataset has the number of rows and columns that you expect. - Make sure that your dataset has appropriate column names, rename them if necessary. - If your dataset uses markers like "?" to indicate missing values, replace them with NaNs during import.- Identify and fill missing values in your dataset (if any) - Don't worry about using methods more advanced than the `.fillna()` function for today.- Create one of each of the following plots using your dataset - Scatterplot - Histogram - Density Plot - Pairplot (note that pairplots will take a long time to load with large datasets or datasets with many columns)If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck!).If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
###Code
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
from google.colab import files
files.upload();
import pandas as pd
df = pd.read_csv("iris.data")
df.shape
df.info()
df.columns = ['sepal length in cm', 'sepal width in cm', 'petal length in cm', 'petal width in cm', 'class']
df.head()
df['class'].unique()
dummy = pd.get_dummies(df['class'])
df = dummy.join(df)
dummy
df.head()
#scatterplot
#histogram
#seaborn density plot
#seaborn pairplot
# Pandas Histogram - Look familiar
# Pandas Scatterplot
# Pandas Scatter Matrix - Usually doesn't look too great.
import matplotlib.pyplot as plt
plt.scatter(x = df['sepal length in cm'], y = df['petal width in cm']);
plt.hist(df['sepal length in cm']);
import seaborn as sns
sns.distplot(df['sepal length in cm']);
sns.kdeplot(df['sepal length in cm']);
sns.pairplot(df);
df.hist();
df.plot.scatter(x= 'sepal length in cm', y= 'petal width in cm');
pd.plotting.scatter_matrix(df);
###Output
_____no_output_____
###Markdown
Stretch Goals - Other types and sources of dataNot all data comes in a nice single file - for example, image classification involves handling lots of image files. You still will probably want labels for them, so you may have tabular data in addition to the image blobs - and the images may be reduced in resolution and even fit in a regular csv as a bunch of numbers.If you're interested in natural language processing and analyzing text, that is another example where, while it can be put in a csv, you may end up loading much larger raw data and generating features that can then be thought of in a more standard tabular fashion.Overall you will in the course of learning data science deal with loading data in a variety of ways. Another common way to get data is from a database - most modern applications are backed by one or more databases, which you can query to get data to analyze. We'll cover this more in our data engineering unit.How does data get in the database? Most applications generate logs - text files with lots and lots of records of each use of the application. Databases are often populated based on these files, but in some situations you may directly analyze log files. The usual way to do this is with command line (Unix) tools - command lines are intimidating, so don't expect to learn them all at once, but depending on your interests it can be useful to practice.One last major source of data is APIs: https://github.com/toddmotto/public-apisAPI stands for Application Programming Interface, and while originally meant e.g. the way an application interfaced with the GUI or other aspects of an operating system, now it largely refers to online services that let you query and retrieve data. You can essentially think of most of them as "somebody else's database" - you have (usually limited) access.*Stretch goal* - research one of the above extended forms of data/data loading. See if you can get a basic example working in a notebook. Image, text, or (public) APIs are probably more tractable - databases are interesting, but there aren't many publicly accessible and they require a great deal of setup.
###Code
###Output
_____no_output_____
###Markdown
Stretch Goals - Other types and sources of dataNot all data comes in a nice single file - for example, image classification involves handling lots of image files. You still will probably want labels for them, so you may have tabular data in addition to the image blobs - and the images may be reduced in resolution and even fit in a regular csv as a bunch of numbers.If you're interested in natural language processing and analyzing text, that is another example where, while it can be put in a csv, you may end up loading much larger raw data and generating features that can then be thought of in a more standard tabular fashion.Overall you will in the course of learning data science deal with loading data in a variety of ways. Another common way to get data is from a database - most modern applications are backed by one or more databases, which you can query to get data to analyze. We'll cover this more in our data engineering unit.How does data get in the database? Most applications generate logs - text files with lots and lots of records of each use of the application. Databases are often populated based on these files, but in some situations you may directly analyze log files. The usual way to do this is with command line (Unix) tools - command lines are intimidating, so don't expect to learn them all at once, but depending on your interests it can be useful to practice.One last major source of data is APIs: https://github.com/toddmotto/public-apisAPI stands for Application Programming Interface, and while originally meant e.g. the way an application interfaced with the GUI or other aspects of an operating system, now it largely refers to online services that let you query and retrieve data. You can essentially think of most of them as "somebody else's database" - you have (usually limited) access.*Stretch goal* - research one of the above extended forms of data/data loading. See if you can get a basic example working in a notebook. Image, text, or (public) APIs are probably more tractable - databases are interesting, but there aren't many publicly accessible and they require a great deal of setup.
###Code
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Loading, Cleaning and Visualizing DataObjectives for today:- Load data from multiple sources into a Python notebook - !curl method - CSV upload method- Create basic plots appropriate for different data types - Scatter Plot - Histogram - Density Plot - Pairplot- "Clean" a dataset using common Python libraries - Removing NaN values "Interpolation" Part 1 - Loading DataData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.Data set sources:- https://archive.ics.uci.edu/ml/datasets.html- https://github.com/awesomedata/awesome-public-datasets- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags). Lecture example - flag data
###Code
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url, header=None)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
#help(pd.read_csv)
?pd.read_csv
??pd.read_csv
# Alright, we can pass header=None to fix this
names=['botright', 'topleft', 'text', 'animate', 'icon','triangle','crescent',
'sunstars','quarters','saltires','crosses','circles', 'mainhue','orange'
,'black','white', 'gold', 'blue','green','red','colours','stripes','bars'
, 'religion','language','population','area','zone','landmass','name']
names.reverse()
flag_data = pd.read_csv(flag_data_url, header=None, names=names)
flag_data.head()
flag_data.count()
flag_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Yes, but what does it *mean*?This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).```1. name: Name of the country concerned2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW4. area: in thousands of square km5. population: in round millions6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others8. bars: Number of vertical bars in the flag9. stripes: Number of horizontal stripes in the flag10. colours: Number of different colours in the flag11. red: 0 if red absent, 1 if red present in the flag12. green: same for green13. blue: same for blue14. gold: same for gold (also yellow)15. white: same for white16. black: same for black17. orange: same for orange (also brown)18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)19. circles: Number of circles in the flag20. crosses: Number of (upright) crosses21. saltires: Number of diagonal crosses22. quarters: Number of quartered sections23. sunstars: Number of sun or star symbols24. crescent: 1 if a crescent moon symbol present, else 025. triangle: 1 if any triangles present, 0 otherwise26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 027. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise29. topleft: colour in the top-left corner (moving right to decide tie-breaks)30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)```Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1... Loading from a local CSV to Google Colab
###Code
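# A short sketch of the Colab upload flow named in the heading above (the cell was left
# blank). It assumes you are running inside Google Colab and that the uploaded file is
# named 'my_data.csv'; both the environment and the filename are assumptions.
import io
import pandas as pd
from google.colab import files

uploaded = files.upload()  # opens a file-picker widget in the browser
local_df = pd.read_csv(io.BytesIO(uploaded['my_data.csv']))
local_df.head()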
###Output
_____no_output_____
###Markdown
Part 2 - Basic Visualizations Basic Data Visualizations Using Matplotlib
###Code
import matplotlib.pyplot as plt
plt.scatter(flag_data['name'],flag_data['landmass'])
plt.show()
# Scatter Plot
plt.scatter(flag_data['name'],flag_data['population'])
plt.show()
plt.scatter(flag_data['language'],flag_data['population'])
plt.show()
# Histogram
plt.hist(flag_data['language'])
# Seaborn Density Plot
# Make default density plot
import seaborn as sns
sns.kdeplot(flag_data['language'])
plt.show()
sns.distplot(flag_data['language']);
import seaborn as sns
sns.pairplot(flag_data[['name','population','language']])
list1 = ['name','population', 'language']
list1[0:4]
sns.pairplot
###Output
_____no_output_____
###Markdown
Create the same basic Visualizations using Pandas
###Code
flag_data.hist(column='stripes')
flag_data.plot.hist(stacked=True, alpha=.5)
# Roughly 100 flags have no stripes, about 40 have two, etc.
# Pandas Scatterplot
# Pandas Scatter Matrix - Usually doesn't look too great.
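# A hedged sketch filling in the two placeholders above; the column choices
# ('area', 'population', 'colours', 'stripes') are just illustrative examples.
flag_data.plot.scatter(x='area', y='population')
plt.show()
pd.plotting.scatter_matrix(flag_data[['area', 'population', 'colours', 'stripes']], figsize=(8, 8))
plt.show()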
###Output
_____no_output_____
###Markdown
Part 3 - Deal with Missing Values Diagnose Missing ValuesLet's use the Ames Housing dataset, which the next cell loads from GitHub.
###Code
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/Ames%20Housing%20Data/train.csv')
###Output
_____no_output_____
###Markdown
Fill Missing Values
###Code
df.isnull().sum().sum()
df = df.fillna(0)
df.isnull().sum().sum()
df.shape
df.columns
df.head()
plt.scatter(df['GrLivArea'], df['SalePrice'])  # example columns; any two numeric Ames columns work here
plt.show()
###Output
_____no_output_____
###Markdown
Your assignment - pick a dataset and do something like the aboveThis is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar semi-clean source. You don't want the data that you're working with for this assignment to have any bigger issues than maybe not having headers or including missing values, etc.After you have chosen your dataset, do the following:- Import the dataset using the method that you are least comfortable with (!curl or CSV upload). - Make sure that your dataset has the number of rows and columns that you expect. - Make sure that your dataset has appropriate column names, rename them if necessary. - If your dataset uses markers like "?" to indicate missing values, replace them with NaNs during import.- Identify and fill missing values in your dataset (if any) - Don't worry about using methods more advanced than the `.fillna()` function for today.- Create one of each of the following plots using your dataset - Scatterplot - Histogram - Density Plot - Pairplot (note that pairplots will take a long time to load with large datasets or datasets with many columns)If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck!).If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
###Code
import pandas as pd
features = ['id','radius','texture','perimeter','area','smoothness','compactness','concavity','concavepoints','symetry','fractaldimension']
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data', header=None, names=features)
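# Hedged aside: this UCI file marks missing values with "?", so an alternative import
# that converts them to NaN up front would pass na_values='?', e.g.:
# pd.read_csv(<same URL>, header=None, names=features, na_values='?')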
df.head(15)
#df.shape
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
## With this data you can't make many assumptions or guesses unless you are a cell biologist, so I'll start with a pairplot
import seaborn as sns
sns.pairplot(df)
# What can be guessed from these graphs? I'm going to assume you need a better understanding of cancer/cell tissue to interpret this, and move on to a new dataset
import matplotlib.pyplot as plt
import seaborn as sns
sns.kdeplot(df['texture'])
plt.show()
# This shows a lot of 1 values in the texture column
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/reprocessed.hungarian.data', header=None)
#df["0"]= df["0"].str.split("t", n = 1, expand = True)
df.head(15)
#df.shape
###Output
_____no_output_____ |
notebooks/deprecated/.ipynb_checkpoints/fluctuation_matching_amber_BDD-checkpoint.ipynb | ###Markdown
Model: United-atom Model
###Code
from os import path, system
from shutil import copyfile
import re
import datetime
import time
import scipy.constants as constants
import pandas as pd
import numpy as np
from fluctmatch import enm, prm, ic_table, fluct_util
enm_rootfolder = '/home/yizaochen/codes/dna_rna/fluctmatch_sequence'
T = 310.0 # temperature, 310 K
RT = T * (constants.k * constants.N_A / (constants.calorie * constants.kilo)) # RT kcal/mol # https://en.wikipedia.org/wiki/KT_(energy)
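# Note: constants.k * constants.N_A is the gas constant R in J/(mol*K); dividing by constants.calorie * constants.kilo (J per kcal) gives RT ~ 0.62 kcal/mol at 310 K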
###Output
_____no_output_____
###Markdown
Calculate the variance of each bond through normal mode analysis (NMA)
###Code
rootdir = '/Users/yizao/PycharmProjects/ENM'
host = 'pnas_amber_16mer'
type_na = 'bdna+bdna'
nadir = path.join(rootdir, host, type_na)
agent = enm.ENMAgent(host, type_na)
nadir = path.join(enm_rootfolder, host, type_na)
charmminpfolder = path.join(nadir, 'charmm_inp')
charmmdatfolder = path.join(nadir, 'charmm_dat')
icfolder = path.join(nadir, 'ic')
datafolder = path.join(nadir, 'data')
###Output
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/input exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/mode_traj exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/ic exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/ic_fluct_mat exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/rtf_ic_str exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/data exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/data/backup exists
###Markdown
Part 0-1: Get initial na.avg.ic and na.fluct.ic
###Code
# IC fluct
icfluct_inp = path.join(charmminpfolder, 'ic_fluct.inp')
icfluct_dat = path.join(charmmdatfolder, 'ic_fluct.dat')
fluct_util.write_ic_fluct_inp(icfluct_inp, host, type_na)
fluct_util.exec_charmm(icfluct_inp, icfluct_dat)
mode0ic = path.join(icfolder, f'mode.0.ic')
nafluctic = path.join(datafolder, f'na.fluct.ic')
with open(mode0ic, 'r') as f:
context = f.read()
context = re.sub(r'-99 ', ' -99 ', context)
with open(nafluctic, 'w') as f:
f.write(context)
icfluct_0 = ic_table.ICTable(nafluctic, initial=True)
# IC Avg
icavg_inp = path.join(charmminpfolder, f'ic_avg.inp')
icavg_dat = path.join(charmmdatfolder, f'ic_avg.dat')
fluct_util.write_ic_avg_inp(icavg_inp, host, type_na, distance_average=False) # Important! Check Fix b0
fluct_util.exec_charmm(icavg_inp, icavg_dat)
mode0avgic = path.join(icfolder, f'mode.0.avg.ic')
naavgic = path.join(datafolder, f'na.avg.ic')
with open(mode0avgic, 'r') as f:
context = f.read()
context = re.sub(r'-99 ', ' -99 ', context)
with open(naavgic, 'w') as f:
f.write(context)
icavg_0 = ic_table.ICTable(naavgic, initial=True)
###Output
charmm< /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/ic_avg.inp > /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat/ic_avg.dat
###Markdown
Part 0-2: Get the initial equilibrium distance and force constant and write PRM
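A short note on the initial guesses in the cell below: assuming the usual quasi-harmonic relation, the equilibrium length is taken as the mean, $b_{0,i} = \langle b_i \rangle$, and the force constant as $k_{0,i} = k_B T / \langle \delta b_i^2 \rangle$, so bonds that fluctuate less receive stiffer initial springs.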
###Code
set_all_same = False # Set all force constants are 10 !!!
b_0 = icavg_0.values # Initial Guess of equilibrium bond length
k_0 = RT / np.square(icfluct_0.values) # Initial Guess of force constants
kbpair_0 = ic_table.KBPair(read_from_prm=False, icavg=icavg_0, icfluct=icfluct_0, rt=RT)
if set_all_same:
scalar = 10
all_k_1 = scalar * np.ones_like(k_0) # Set all force constants are 10 !!! Important
kbpair_0.set_d_k(all_k_1)
scratch_prm = path.join(datafolder, 'na_enm.prm') #!!! Important File
prm_agent = prm.PRM(host, type_na, kbpair_0, iternum=0)
prm_agent.write_prm(scratch_prm)
initial_prm = path.join(datafolder, 'na_enm_init.prm')
copyfile(scratch_prm, initial_prm)
###Output
_____no_output_____
###Markdown
Part 1: NMA Initialize
###Code
# NMA Initialize
nma_init_inp = path.join(charmminpfolder, 'nmainit.inp')
nma_init_dat = path.join(charmmdatfolder, 'nmainit.dat')
fluct_util.write_nmainit_inp(nma_init_inp, host, type_na, out_start_end_mode=None)
fluct_util.exec_charmm(nma_init_inp, nma_init_dat)
output_vib = path.join(datafolder, 'na_enm.vib')
initial_vib = path.join(datafolder, 'na_enm_init.vib')
copyfile(output_vib, initial_vib)
print(f'cp {output_vib} {initial_vib}')
###Output
charmm< /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/nmainit.inp > /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat/nmainit.dat
cp /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/data/na_enm.vib /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/data/na_enm_init.vib
###Markdown
Part 2: Fluctuation-Matching
###Code
start = 0
end = 300
fluct_util.fluct_match(host, type_na, start, end, icfluct_0, icavg_0, kbpair_0, nadir, out_start_end_mode=None)
###Output
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/diff_iters exists
/Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/data/backup exists
IterNum: 0
charmm< /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/nma.inp > /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat/nma.dat
IterNum: 1
charmm< /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/nma.inp > /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat/nma.dat
IterNum: 2
charmm< /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/nma.inp > /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat/nma.dat
IterNum: 3 through IterNum: 200 (each iteration echoed the same charmm command line shown above)
###Markdown
Part 3: Get a minimized structure according to the converged parameter file (prm)
###Code
get_minim_inp = path.join(charmminpfolder, 'get_minim_after_fluct.inp')
get_minim_dat = path.join(charmmdatfolder, 'get_minim_after_fluct.dat')
minim_crd = path.join(datafolder, 'minim_after_fm.crd')
print(f'cd {enm_rootfolder}')
print(f'vim {get_minim_inp}')
print(f'charmm_yz < {get_minim_inp} > {get_minim_dat}')
print(f'vmd -cor {minim_crd}')
###Output
cd /Users/yizao/PycharmProjects/ENM
vim /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/get_minim_after_fluct.inp
charmm_yz < /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_inp/get_minim_after_fluct.inp > /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/charmm_dat/get_minim_after_fluct.dat
vmd -cor /Users/yizao/PycharmProjects/ENM/pnas_amber_16mer/bdna+bdna/data/minim_after_fm.crd
###Markdown
Reload Function
###Code
from imp import reload
reload(fluct_util)
###Output
_____no_output_____ |
analyses/data-expungement/[EDA] CrimeSolv arrest crime statistics.ipynb | ###Markdown
Background In this analysis, we attempt to answer the question of how many individuals are potentially eligible for record expungement under Massachusetts law.
###Code
import numpy as np
import pandas as pd
import math
from datetime import date
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(style="ticks")
###Output
_____no_output_____
###Markdown
Data Source (Under 21 / Unarmed)The data source for this comes from the Massachusetts State Police CrimeSOLV that contains 'crime data from police agencies in Massachusetts which report data to the Crime Reporting Unit in the format of the "National Incident Based Reporting System." The majority of police agencies now use this way of reporting their crime data.'Unfortunately, there is no API for retrieving the data, but a report can be configured and downloaded as a CSV at https://masscrime.chs.state.ma.us. Using the reporting system, columns are selected and arranged, and filters applied, in the report-builder interface (screenshots in the original notebook). This report can then be downloaded in CSV list format for further analysis.
###Code
# Load data from downloaded CSV
arrests = pd.read_csv(
"data/CrimeSolve_arrests_by_age.csv", skiprows=5,
names=list(["Date","Offense","Armed","Age","Count","dummy"]),
index_col=False)
# Remove empty entries and drop dummy cols
arrests = arrests[arrests["Count"].notnull()] \
.drop(["Armed","dummy"], axis=1) \
.groupby(["Date", "Offense", "Age"]) \
.agg("sum").reset_index()
arrests.Count = pd.to_numeric(arrests['Count'], errors='coerce')
arrests.head()
###Output
_____no_output_____
###Markdown
Pre-processing (Categorize offenses, expungibility, etc)
###Code
# TODO: Read all of this from a metadata file
# Broad categories
crimes_against_person = list([
"Murder and Nonnegligent Manslaughter",
"Negligent Manslaughter",
"Justifiable Homicide",
"Kidnapping/Abduction",
"Rape",
"Sodomy",
"Sexual Assault With An Object",
"Fondling",
"Incest",
"Statutory Rape",
"Aggravated Assault",
"Simple Assault",
"Intimidation",
"Human Trafficking, Commercial Sex Acts",
"Human Trafficking, Involuntary Servitude"])
crimes_against_property = list([
"Arson",
"Bribery",
"Burglary/Breaking & Entering",
"Counterfeiting/Forgery",
"Destruction/Damage/Vandalism of Property",
"Embezzlement",
"Extortion/Blackmail",
"False Pretenses/Swindle/Confidence Game",
"Credit Card/Automatic Teller Fraud",
"Impersonation",
"Welfare Fraud",
"Wire Fraud",
"Identity Theft",
"Hacking/Computer Invasion",
"Robbery",
"Pocket-picking",
"Purse-snatching",
"Shoplifting",
"Theft From Building",
"Theft From Coin Operated Machine or Device",
"Theft From Motor Vehicle",
"Theft of Motor Vehicle Parts/Accessories",
"All Other Larceny",
"Motor Vehicle Theft",
"Stolen Property Offenses"
])
crimes_against_society = list([
"Drug/Narcotic Violations",
"Drug Equipment Violations",
"Betting/Wagering",
"Operating/Promoting/Assisting Gambling",
"Gambling Equipment Violations",
"Sports Tampering",
"Pornography/Obscene Material",
"Prostitution",
"Assisting or Promoting Prostitution",
"Purchasing Prostitution",
"Weapon Law Violations",
"Animal Cruelty"
])
group_b_offenses = list([
"Bad Checks",
"Curfew/Loitering/Vagrancy Violations",
"Disorderly Conduct",
"Driving Under the Influence",
"Drunkenness",
"Family Offenses (Nonviolent)",
"Liquor Law Violations",
"Peeping Tom",
"Runaway",
"Trespass of Real Property",
"All Other Offenses"
])
missing = list(["Missing"])
categories = { i:"Crimes Against Person" for i in crimes_against_person }
categories.update({ i: "Crimes Against Property" for i in crimes_against_property})
categories.update({ i: "Crimes Against Society" for i in crimes_against_society})
categories.update({ i: "Group B Offenses" for i in group_b_offenses})
categories.update({ i: "Missing" for i in missing})
# Add offense category
arrests["Category"] = arrests["Offense"].replace(categories)
# Add label for disqualifying offenses
disqualifying_offenses = list([
# Result / intent in Death or serious bodily injury
'Murder and Nonnegligent Manslaughter',
'Aggravated Assault',
'Negligent Manslaughter',
'Rape',
'Sodomy',
'Sexual Assault With An Object'
])
arrests["Disqualifying_Offense"] = arrests['Offense'].isin(disqualifying_offenses)
# Assign crime type - rough placeholder: default every offense to misdemeanor, then mark a few known felonies below
crime_type = {offense: "misdemeanor" for offense in arrests["Offense"].unique()}
crime_type["Arson"] = "felony"
crime_type["Robbery"] = "felony"
arrests["Offense_Type"] = arrests["Offense"].map(crime_type)
# Expungement eligibility based on age, offense and offense type (felony/misdemeanor) and time cutoff
def eligible(row):
if row["Age"] >= 21 or row["Disqualifying_Offense"]:
return False
cutoff = 7 if row["Offense_Type"] == 'felony' else 3
current_year = date.today().year
years_since = current_year - row["Date"]
return years_since >= cutoff
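# Hedged illustration of the rule above (example numbers, not taken from the data):
# an 18-year-old's 2015 misdemeanor arrest has a 3-year cutoff, so it counts as expungible from 2018 on;
# an 18-year-old's 2017 "Robbery" arrest (labeled a felony above) has a 7-year cutoff, so not until 2024.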
arrests["Expungible"] = arrests.apply(eligible, axis=1)
# Unique offenses that are not disqualified according to the logic above
arrests[arrests["Disqualifying_Offense"] == False]["Offense"].unique()
arrests.head()
# Write this dataframe back to disk for other visualizations
arrests.to_csv("output/arrest_expungibility.csv", index=False)
###Output
_____no_output_____
###Markdown
Basic Stats
###Code
arrests[arrests["Expungible"]]["Count"].sum()
crime_count = arrests[arrests["Expungible"]][["Offense", "Count"]] \
.groupby(["Offense"]) \
.sum() \
.sort_values("Count", ascending=False) \
.reset_index()
crime_count
g = sns.catplot(
x="Offense", y="Count", kind="bar",
data=crime_count,
height=5, aspect=3)
g.set_xticklabels(rotation=60, ha="right");
###Output
_____no_output_____
###Markdown
Visualizations
###Code
tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
for i in range(len(tableau20)):
r, g, b = tableau20[i]
tableau20[i] = (r / 255., g / 255., b / 255.)
def crime_plot(categories):
plt.figure(figsize=(12, 6))
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.xticks(fontsize=10)
plt.tick_params(axis="both", which="both", bottom=False, top=False,
labelbottom=True, left=False, right=False, labelleft=True)
for rank, column in enumerate(categories):
if column in plot_data.columns:
plt.plot(plot_data["Date"].values, plot_data[column].values, lw=2.5, color=tableau20[rank%20])
y_pos = plot_data[column].values[-1] - 0.5
plt.text(2016.5, y_pos, column, fontsize=11, color=tableau20[rank%20])
return plt
plot_data = arrests[arrests["Expungible"]] \
.drop(["Expungible", "Age"], axis=1) \
.groupby(["Category","Date", "Offense"]) \
.sum() \
.pivot_table(values="Count", index="Date", columns="Offense", fill_value=0) \
.reset_index()
###Output
_____no_output_____
###Markdown
Crimes Against Person
###Code
outliers = ["Simple Assault", "Intimidation"]
plt = crime_plot(outliers)
plt.title("Crimes Against Person")
plt.show();
plt = crime_plot([i for i in crimes_against_person if i not in outliers])
plt.title("Crimes Against Person")
plt.show();
###Output
_____no_output_____
###Markdown
Crimes Against Property
###Code
outliers = ["Shoplifting", "Destruction/Damage/Vandalism of Property",
"All Other Larceny", "Burglary/Breaking & Entering"]
plt = crime_plot(outliers)
plt.title("Crimes Against Property")
plt.show();
midliers = ["Stolen Property Offenses", "Motor Vehicle Theft",
"Theft From Motor Vehicle", "Robbery", "Theft From Building"]
plt = crime_plot(midliers)
plt.title("Crimes Against Property")
plt.show();
plt = crime_plot([i for i in crimes_against_property if i not in outliers + midliers])
plt.title("Crimes Against Property")
plt.show();
###Output
_____no_output_____
###Markdown
Crimes Against Society
###Code
outliers = ["Drug/Narcotic Violations", "Weapon Law Violations"]
plt = crime_plot(outliers)
plt.title("Crimes Against Society")
plt.show();
plt = crime_plot([i for i in crimes_against_society if i not in outliers])
plt.title("Crimes Against Society")
plt.show();
###Output
_____no_output_____
###Markdown
Group B Offenses
###Code
outliers = ["All Other Offenses", "Liquor Law Violations", "Disorderly Conduct"]
plt = crime_plot(outliers)
plt.title("Group B Offenses")
plt.show();
midliers = ["Drunkenness", "Driving Under the Influence",
"Trespass of Real Property"]
plt = crime_plot(midliers)
plt.title("Group B Offenses")
plt.show();
plt = crime_plot([i for i in group_b_offenses if i not in outliers + midliers])
plt.title("Group B Offenses")
plt.show();
###Output
_____no_output_____ |
data structure/recursion/Checking Palindrome.ipynb | ###Markdown
Palindrome A **palindrome** is a word that is the reverse of itself—that is, it is the same word when read forwards and backwards.For example:* "madam" is a palindrome* "abba" is a palindrome* "cat" is not* "a" is a trivial case of a palindromeThe goal of this exercise is to use recursion to write a function `is_palindrome` that takes a string as input and checks whether that string is a palindrome. (Note that this problem can also be solved with a non-recursive solution, but that's not the point of this exercise.)
###Code
def is_palindrome(input):
"""
Return True if input is palindrome, False otherwise.
Args:
input(str): input to be checked if it is palindrome
"""
# TODO: Write your recursive palindrome checker here
pass
# Test Cases
print ("Pass" if (is_palindrome("")) else "Fail")
print ("Pass" if (is_palindrome("a")) else "Fail")
print ("Pass" if (is_palindrome("madam")) else "Fail")
print ("Pass" if (is_palindrome("abba")) else "Fail")
print ("Pass" if not (is_palindrome("Udacity")) else "Fail")
###Output
_____no_output_____
###Markdown
Hide Solution
###Code
# Solution
def is_palindrome(input):
"""
Return True if input is palindrome, False otherwise.
Args:
input(str): input to be checked if it is palindrome
"""
if len(input) <= 1:
return True
else:
first_char = input[0]
last_char = input[-1]
# sub_input is input with first and last char removed
sub_input = input[1:-1]
return (first_char == last_char) and is_palindrome(sub_input)
print ("Pass" if (is_palindrome("")) else "Fail")
print ("Pass" if (is_palindrome("a")) else "Fail")
print ("Pass" if (is_palindrome("madam")) else "Fail")
print ("Pass" if (is_palindrome("abba")) else "Fail")
print ("Pass" if not (is_palindrome("Udacity")) else "Fail")
###Output
Pass
Pass
Pass
Pass
Pass
|
04-Milestone Project - 1/03-Milestone Project 1 - Complete Walkthrough Solution.ipynb | ###Markdown
Milestone Project 1: Full Walk-through Code SolutionBelow is the filled in code that goes along with the complete walk-through video. Check out the corresponding lecture videos for more information on this code! **Step 1: Write a function that can print out a board. Set up your board as a list, where each index 1-9 corresponds with a number on a number pad, so you get a 3 by 3 board representation.**
###Code
from IPython.display import clear_output
def display_board(board):
clear_output() # Remember, this only works in jupyter!
print(' | |')
print(' ' + board[7] + ' | ' + board[8] + ' | ' + board[9])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[4] + ' | ' + board[5] + ' | ' + board[6])
print(' | |')
print('-----------')
print(' | |')
print(' ' + board[1] + ' | ' + board[2] + ' | ' + board[3])
print(' | |')
###Output
_____no_output_____
###Markdown
**TEST Step 1:** run your function on a test version of the board list, and make adjustments as necessary
###Code
test_board = ['#','X','O','X','O','X','O','X','O','X']
display_board(test_board)
###Output
| |
X | O | X
| |
-----------
| |
O | X | O
| |
-----------
| |
X | O | X
| |
###Markdown
**Step 2: Write a function that can take in a player input and assign their marker as 'X' or 'O'. Think about using *while* loops to continually ask until you get a correct answer.**
###Code
def player_input():
marker = ''
while not (marker == 'X' or marker == 'O'):
marker = input('Player 1: Do you want to be X or O? ').upper()
if marker == 'X':
return ('X', 'O')
else:
return ('O', 'X')
###Output
_____no_output_____
###Markdown
**TEST Step 2:** run the function to make sure it returns the desired output
###Code
player_input()
###Output
Player 1: Do you want to be X or O? X
###Markdown
**Step 3: Write a function that takes in the board list object, a marker ('X' or 'O'), and a desired position (number 1-9) and assigns it to the board.**
###Code
def place_marker(board, marker, position):
board[position] = marker
###Output
_____no_output_____
###Markdown
**TEST Step 3:** run the place marker function using test parameters and display the modified board
###Code
place_marker(test_board,'$',8)
display_board(test_board)
###Output
| |
X | $ | X
| |
-----------
| |
O | X | O
| |
-----------
| |
X | O | X
| |
###Markdown
**Step 4: Write a function that takes in a board and checks to see if someone has won. **
###Code
def win_check(board,mark):
return ((board[7] == mark and board[8] == mark and board[9] == mark) or # across the top
(board[4] == mark and board[5] == mark and board[6] == mark) or # across the middle
(board[1] == mark and board[2] == mark and board[3] == mark) or # across the bottom
(board[7] == mark and board[4] == mark and board[1] == mark) or # down the middle
(board[8] == mark and board[5] == mark and board[2] == mark) or # down the middle
(board[9] == mark and board[6] == mark and board[3] == mark) or # down the right side
(board[7] == mark and board[5] == mark and board[3] == mark) or # diagonal
(board[9] == mark and board[5] == mark and board[1] == mark)) # diagonal
###Output
_____no_output_____
###Markdown
**TEST Step 4:** run the win_check function against our test_board - it should return True
###Code
win_check(test_board,'X')
###Output
_____no_output_____
###Markdown
**Step 5: Write a function that uses the random module to randomly decide which player goes first. You may want to lookup random.randint() Return a string of which player went first.**
###Code
import random
def choose_first():
if random.randint(0, 1) == 0:
return 'Player 2'
else:
return 'Player 1'
###Output
_____no_output_____
###Markdown
**Step 6: Write a function that returns a boolean indicating whether a space on the board is freely available.**
###Code
def space_check(board, position):
return board[position] == ' '
###Output
_____no_output_____
###Markdown
**Step 7: Write a function that checks if the board is full and returns a boolean value. True if full, False otherwise.**
###Code
def full_board_check(board):
for i in range(1,10):
if space_check(board, i):
return False
return True
###Output
_____no_output_____
###Markdown
**Step 8: Write a function that asks for a player's next position (as a number 1-9) and then uses the function from step 6 to check if its a free position. If it is, then return the position for later use. **
###Code
def player_choice(board):
position = 0
while position not in [1,2,3,4,5,6,7,8,9] or not space_check(board, position):
position = int(input('Choose your next position: (1-9) '))
return position
###Output
_____no_output_____
###Markdown
**Step 9: Write a function that asks the player if they want to play again and returns a boolean True if they do want to play again.**
###Code
def replay():
return input('Do you want to play again? Enter Yes or No: ').lower().startswith('y')
###Output
_____no_output_____
###Markdown
**Step 10: Here comes the hard part! Use while loops and the functions you've made to run the game!**
###Code
print('Welcome to Tic Tac Toe!')
while True:
# Reset the board
theBoard = [' '] * 10
player1_marker, player2_marker = player_input()
turn = choose_first()
print(turn + ' will go first.')
play_game = input('Are you ready to play? Enter Yes or No.')
if play_game.lower()[0] == 'y':
game_on = True
else:
game_on = False
while game_on:
if turn == 'Player 1':
# Player1's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player1_marker, position)
if win_check(theBoard, player1_marker):
display_board(theBoard)
print('Congratulations! You have won the game!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a draw!')
break
else:
turn = 'Player 2'
else:
# Player2's turn.
display_board(theBoard)
position = player_choice(theBoard)
place_marker(theBoard, player2_marker, position)
if win_check(theBoard, player2_marker):
display_board(theBoard)
print('Player 2 has won!')
game_on = False
else:
if full_board_check(theBoard):
display_board(theBoard)
print('The game is a draw!')
break
else:
turn = 'Player 1'
if not replay():
break
###Output
| |
| O | O
| |
-----------
| |
| |
| |
-----------
| |
X | X | X
| |
Congratulations! You have won the game!
Do you want to play again? Enter Yes or No: No
|
assignments/2019/assignment2/python_numpy_tutorial.ipynb | ###Markdown
CS 231n Python & NumPy Tutorial Python 3 and NumPy will be used extensively throughout this course, so it's important to be familiar with them. A good amount of the material in this notebook comes from Justin Johnson's Python & NumPy Tutorial:http://cs231n.github.io/python-numpy-tutorial/. At this moment, not everything from that tutorial is in this notebook and not everything from this notebook is in the tutorial. Python 3 If you're unfamiliar with Python 3, here are some of the most common changes from Python 2 to look out for. Print is a function
###Code
print("Hello!")
###Output
Hello!
###Markdown
Without parentheses, printing will not work.
###Code
print "Hello!"
###Output
_____no_output_____
###Markdown
Floating point division by default
###Code
5 / 2
###Output
_____no_output_____
###Markdown
To do integer division, we use two forward slashes:
###Code
5 // 2
###Output
_____no_output_____
###Markdown
No xrange The xrange from Python 2 is now merged into "range" for Python 3 and there is no xrange in Python 3. In Python 3, range(3) does not create a list of 3 elements as it would in Python 2, rather just creates a more memory efficient iterator.Hence, xrange in Python 3: Does not exist range in Python 3: Has very similar behavior to Python 2's xrange
###Code
for i in range(3):
print(i)
range(3)
# If need be, can use the following to get a similar behavior to Python 2's range:
print(list(range(3)))
###Output
[0, 1, 2]
###Markdown
NumPy "NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more" -https://docs.scipy.org/doc/numpy-1.10.1/user/whatisnumpy.html.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Let's run through an example showing how powerful NumPy is. Suppose we have two lists a and b, consisting of the first 100,000 non-negative numbers, and we want to create a new list c whose *i*th element is a[i] + 2 * b[i]. Without NumPy:
###Code
%%time
a = list(range(100000))
b = list(range(100000))
%%time
for _ in range(10):
c = []
for i in range(len(a)):
c.append(a[i] + 2 * b[i])
###Output
_____no_output_____
###Markdown
With NumPy:
###Code
%%time
a = np.arange(100000)
b = np.arange(100000)
%%time
for _ in range(10):
c = a + 2 * b
###Output
_____no_output_____
###Markdown
The result is 10 to 15 times (sometimes more) faster, and we could do it in fewer lines of code (and the code itself is more intuitive)! Regular Python is much slower due to type checking and other overhead of needing to interpret code and support Python's abstractions.For example, if we are doing some addition in a loop, constantly type checking in a loop will lead to many more instructions than just performing a regular addition operation. NumPy, using optimized pre-compiled C code, is able to avoid a lot of the overhead introduced.The process we used above is **vectorization**. Vectorization refers to applying operations to arrays instead of just individual elements (i.e. no loops). Why vectorize?1. Much faster2. Easier to read and fewer lines of code3. More closely assembles mathematical notationVectorization is one of the main reasons why NumPy is so powerful. ndarray ndarrays, n-dimensional arrays of homogenous data type, are the fundamental datatype used in NumPy. As these arrays are of the same type and are fixed size at creation, they offer less flexibility than Python lists, but can be substantially more efficient runtime and memory-wise. (Python lists are arrays of pointers to objects, adding a layer of indirection.)The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
###Code
# Can initialize ndarrays with Python lists, for example:
a = np.array([1, 2, 3]) # Create a rank 1 array
print('type:', type(a)) # Prints "<class 'numpy.ndarray'>"
print('shape:', a.shape) # Prints "(3,)"
print('a:', a) # Prints "1 2 3"
a_cpy= a.copy()
a[0] = 5 # Change an element of the array
print('a modified:', a)          # Prints "[5, 2, 3]"
print('a copy:', a_cpy)
b = np.array([[1, 2, 3],
[4, 5, 6]]) # Create a rank 2 array
print('shape:', b.shape) # Prints "(2, 3)"
print(b[0, 0], b[0, 1], b[1, 0]) # Prints "1 2 4"
###Output
type: <class 'numpy.ndarray'>
shape: (3,)
a: [1 2 3]
a modified: [5 2 3]
a copy: [1 2 3]
shape: (2, 3)
1 2 4
###Markdown
There are many other initializations that NumPy provides:
###Code
a = np.zeros((2, 2)) # Create an array of all zeros
print(a) # Prints "[[ 0. 0.]
# [ 0. 0.]]"
b = np.full((2, 2), 7) # Create a constant array
print(b) # Prints "[[ 7. 7.]
# [ 7. 7.]]"
c = np.eye(2) # Create a 2 x 2 identity matrix
print(c) # Prints "[[ 1. 0.]
# [ 0. 1.]]"
d = np.random.random((2, 2)) # Create an array filled with random values
print(d) # Might print "[[ 0.91940167 0.08143941]
# [ 0.68744134 0.87236687]]"
###Output
[[0. 0.]
[0. 0.]]
[[7 7]
[7 7]]
[[1. 0.]
[0. 1.]]
[[0.66004187 0.99373783]
[0.12856953 0.67825603]]
###Markdown
How do we create a 2 by 2 matrix of ones?
###Code
a = np.ones((2, 2)) # Create an array of all ones
print(a) # Prints "[[ 1. 1.]
# [ 1. 1.]]"
###Output
[[1. 1.]
[1. 1.]]
###Markdown
Useful to keep track of shape; helpful for debugging and knowing dimensions will be very useful when computing gradients, among other reasons.
###Code
nums = np.arange(8)
print(nums)
print(nums.shape)
nums = nums.reshape((2, 4))
print('Reshaped:\n', nums)
print(nums.shape)
# The -1 in reshape corresponds to an unknown dimension that numpy will figure out,
# based on all other dimensions and the array size.
# Can only specify one unknown dimension.
# For example, sometimes we might have an unknown number of data points, and
# so we can use -1 instead without worrying about the true number.
nums = nums.reshape((4, -1))
print('Reshaped with -1:\n', nums, '\nshape:\n', nums.shape)
# You can also flatten the array by using -1 reshape
print('Flatten:\n', nums.reshape(-1), '\nshape:\n', nums.reshape(-1).shape)
###Output
[0 1 2 3 4 5 6 7]
(8,)
Reshaped:
[[0 1 2 3]
[4 5 6 7]]
(2, 4)
Reshaped with -1:
[[0 1]
[2 3]
[4 5]
[6 7]]
shape:
(4, 2)
Flatten:
[0 1 2 3 4 5 6 7]
shape:
(8,)
###Markdown
NumPy supports an object-oriented paradigm, such that ndarray has a number of methods and attributes, with functions similar to ones in the outermost NumPy namespace. For example, we can do both:
###Code
nums = np.arange(8)
print(nums.min()) # Prints 0
print(np.min(nums)) # Prints 0
print(np.reshape(nums, (4, 2)))
###Output
0
0
[[0 1]
[2 3]
[4 5]
[6 7]]
###Markdown
Array Operations/Math NumPy supports many elementwise operations:
###Code
x = np.array([[1, 2],
[3, 4]], dtype=np.float64)
y = np.array([[5, 6],
[7, 8]], dtype=np.float64)
# Elementwise sum; both produce the array
# [[ 6.0 8.0]
# [10.0 12.0]]
print(np.array_equal(x + y, np.add(x, y)))
# Elementwise difference; both produce the array
# [[-4.0 -4.0]
# [-4.0 -4.0]]
print(np.array_equal(x - y, np.subtract(x, y)))
# Elementwise product; both produce the array
# [[ 5.0 12.0]
# [21.0 32.0]]
print(np.array_equal(x * y, np.multiply(x, y)))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print(np.sqrt(x))
###Output
True
True
True
[[1. 1.41421356]
[1.73205081 2. ]]
###Markdown
How do we elementwise divide between two arrays?
###Code
x = np.array([[1, 2], [3, 4]], dtype=np.float64)
y = np.array([[5, 6], [7, 8]], dtype=np.float64)
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print(x / y)
print(np.divide(x, y))
###Output
[[0.2 0.33333333]
[0.42857143 0.5 ]]
[[0.2 0.33333333]
[0.42857143 0.5 ]]
###Markdown
Note * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
###Code
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
v = np.array([9, 10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print(v.dot(w))
print(np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print(x.dot(v))
print(np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print(x.dot(y))
print(np.dot(x, y))
###Output
219
219
[29 67]
[29 67]
[[19 22]
[43 50]]
[[19 22]
[43 50]]
###Markdown
There are many useful functions built into NumPy, and often we're able to express them across specific axes of the ndarray:
###Code
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(np.sum(x)) # Compute sum of all elements; prints "21"
print(np.sum(x, axis=0)) # Compute sum of each column; prints "[5 7 9]"
print(np.sum(x, axis=1)) # Compute sum of each row; prints "[6 15]"
print(np.max(x, axis=1)) # Compute max of each row; prints "[3 6]"
###Output
21
[5 7 9]
[ 6 15]
[3 6]
###Markdown
How can we compute the index of the max value of each row? Useful, to say, find the class that corresponds to the maximum score for an input image.
###Code
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(np.argmax(x, axis=1)) # Compute index of max of each row; prints "[2 2]"
###Output
[2 2]
###Markdown
We can find indices of elements that satisfy some conditions by using `np.where`
###Code
print(np.where(nums > 5))
print(nums[np.where(nums > 5)])
###Output
(array([6, 7]),)
[6 7]
###Markdown
Note that the axis you apply the operation along will have its dimension removed from the shape.This is useful to keep in mind when you're trying to figure out what axis corresponds to what.For example:
###Code
x = np.array([[1, 2, 3],
[4, 5, 6]])
print('x ndim:', x.ndim)
print((x.max(axis=0)).ndim) # Taking the max over axis 0 has shape (3,)
# corresponding to the 3 columns.
# An array with rank 3
x = np.array([[[1, 2, 3],
[4, 5, 6]],
[[10, 23, 33],
[43, 52, 16]]
])
print('x ndim:', x.ndim) # Has shape (2, 2, 3)
print((x.max(axis=1)).ndim) # Taking the max over axis 1 has shape (2, 3)
print((x.max(axis=(1, 2))).ndim) # Can take max over multiple axes; prints [6 52]
###Output
x ndim: 2
1
x ndim: 3
2
1
###Markdown
Indexing NumPy also provides powerful indexing schemes.
###Code
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]])
print('Original:\n', a)
# Can select an element as you would in a 2 dimensional Python list
print('Element (0, 0) (a[0][0]):\n', a[0][0]) # Prints 1
# or as follows
print('Element (0, 0) (a[0, 0]) :\n', a[0, 0]) # Prints 1
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
print('Sliced (a[:2, 1:3]):\n', a[:2, 1:3])
# Steps are also supported in indexing. The following reverses the first row:
print('Reversing the first row (a[0, ::-1]) :\n', a[0, ::-1]) # Prints [4 3 2 1]
# slice by the first dimension, works for n-dimensional array where n >= 1
print('slice the first row by the [...] operator: \n', a[0, ...])
###Output
Original:
[[ 1 2 3 4]
[ 5 6 7 8]
[ 9 10 11 12]]
Element (0, 0) (a[0][0]):
1
Element (0, 0) (a[0, 0]) :
1
Sliced (a[:2, 1:3]):
[[2 3]
[6 7]]
Reversing the first row (a[0, ::-1]) :
[4 3 2 1]
slice the first row by the [...] operator:
[1 2 3 4]
###Markdown
Often, it's useful to select or modify one element from each row of a matrix. The following example employs **fancy indexing**, where we index into our array using an array of indices (say an array of integers or booleans):
###Code
# Create a new array from which we will select elements
a = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12]])
print(a) # prints "array([[ 1, 2, 3],
# [ 4, 5, 6],
# [ 7, 8, 9],
# [10, 11, 12]])"
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print(a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
# same as
for x, y in zip(np.arange(4), b):
print(a[x, y])
c = a[0]
c[0] = 100
print(a)
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print(a) # prints "array([[11, 2, 3],
# [ 4, 5, 16],
# [17, 8, 9],
# [10, 21, 12]])
###Output
[[ 1 2 3]
[ 4 5 6]
[ 7 8 9]
[10 11 12]]
[ 1 6 7 11]
1
6
7
11
[[100 2 3]
[ 4 5 6]
[ 7 8 9]
[ 10 11 12]]
[[110 2 3]
[ 4 5 16]
[ 17 8 9]
[ 10 21 12]]
###Markdown
We can also use boolean indexing/masks. Suppose we want to set all elements greater than MAX to MAX:
###Code
MAX = 5
nums = np.array([1, 4, 10, -1, 15, 0, 5])
print(nums > MAX) # Prints [False, False, True, False, True, False, False]
nums[nums > MAX] = MAX
print(nums) # Prints [1, 4, 5, -1, 5, 0, 5]
nums = np.array([1, 4, 10, -1, 15, 0, 5])
nums > 5
###Output
_____no_output_____
###Markdown
Note that the indices in fancy indexing can appear in any order and even multiple times:
###Code
nums = np.array([1, 4, 10, -1, 15, 0, 5])
print(nums[[1, 2, 3, 1, 0]]) # Prints [4 10 -1 4 1]
###Output
[ 4 10 -1 4 1]
###Markdown
Broadcasting Many of the operations we've looked at above involved arrays of the same rank. However, many times we might have a smaller array and use that multiple times to update an array of a larger dimensions. For example, consider the below example of shifting the mean of each column from the elements of the corresponding column:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x.shape) # Prints (2, 3)
col_means = x.mean(axis=0)
print(col_means) # Prints [2. 3.5 5.]
print(col_means.shape) # Prints (3,)
# Has a smaller rank than x!
mean_shifted = x - col_means
print('\n', mean_shifted)
print(mean_shifted.shape) # Prints (2, 3)
###Output
(2, 3)
[2. 3.5 5. ]
(3,)
[[-1. -1.5 -2. ]
[ 1. 1.5 2. ]]
(2, 3)
###Markdown
Or even just multiplying a matrix by 2:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x * 2) # Prints [[ 2 4 6]
# [ 6 10 14]]
###Output
[[ 2 4 6]
[ 6 10 14]]
###Markdown
Broadcasting two arrays together follows these rules:1. If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.2. The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.3. The arrays can be broadcast together if they are compatible in all dimensions.4. After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.5. In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension. For example, when subtracting the columns above, we had arrays of shape (2, 3) and (3,).1. These arrays do not have same rank, so we prepend the shape of the lower rank one to make it (1, 3).2. (2, 3) and (1, 3) are compatible (have the same size in the dimension, or if one of the arrays has size 1 in that dimension).3. Can be broadcast together!4. After broadcasting, each array behaves as if it had shape equal to (2, 3).5. The smaller array will behave as if it were copied along dimension 0. Let's try to subtract the mean of each row!
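As a quick sanity check of these rules (a small sketch, not part of the original tutorial), NumPy can report the broadcast result shape directly:

```python
import numpy as np

a = np.ones((2, 3))
b = np.ones((3,))                  # rule 1: treated as shape (1, 3)
print(np.broadcast(a, b).shape)    # (2, 3) -> compatible

c = np.ones((2,))
try:
    np.broadcast(a, c)             # (2, 3) vs (1, 2): trailing dims 3 and 2 clash
except ValueError as e:
    print("not broadcastable:", e)
```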
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
row_means = x.mean(axis=1)
print(row_means) # Prints [2. 5.]
mean_shifted = x - row_means
###Output
[2. 5.]
###Markdown
To figure out what's wrong, we print some shapes:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x.shape) # Prints (2, 3)
row_means = x.mean(axis=1)
print(row_means) # Prints [2. 5.]
print(row_means.shape) # Prints (2,)
###Output
(2, 3)
[2. 5.]
(2,)
###Markdown
What happened? Answer: If we follow broadcasting rule 1, then we'd prepend a 1 to the smaller rank array to get (1, 2). However, the last dimensions don't match now between (2, 3) and (1, 2), and so we can't broadcast. Take 2, reshaping the row means to get the desired behavior:
###Code
x = np.array([[1, 2, 3],
[3, 5, 7]])
print(x.shape) # Prints (2, 3)
row_means = x.mean(axis=1)
print('row_means shape:', row_means.shape)
print('expanded row_means shape: ', np.expand_dims(row_means, axis=1).shape)
mean_shifted = x - np.expand_dims(row_means, axis=1)
print(mean_shifted)
print(mean_shifted.shape) # Prints (2, 3)
###Output
(2, 3)
row_means shape: (2,)
expanded row_means shape: (2, 1)
[[-1. 0. 1.]
[-2. 0. 2.]]
(2, 3)
###Markdown
More broadcasting examples!
###Code
# Compute outer product of vectors
v = np.array([1, 2, 3]) # v has shape (3,)
w = np.array([4, 5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
# [[ 4 5]
# [ 8 10]
# [12 15]]
print(np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1, 2, 3], [4, 5, 6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
# [[2 4 6]
# [5 7 9]]
print(x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
# [[ 5 6 7]
# [ 9 10 11]]
print((x.T + w).T)
# Another solution is to reshape w to be a column vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print(x + np.reshape(w, (2, 1)))
###Output
[[ 4 5]
[ 8 10]
[12 15]]
[[2 4 6]
[5 7 9]]
[[ 5 6 7]
[ 9 10 11]]
[[ 5 6 7]
[ 9 10 11]]
###Markdown
Views vs. Copies Unlike a copy, in a **view** of an array, the data is shared between the view and the array. Sometimes, our results are copies of arrays, but other times they can be views. Understanding when each is generated is important to avoid any unforeseen issues.Views can be created from a slice of an array, changing the dtype of the same data area (using arr.view(dtype), not the result of arr.astype(dtype)), or even both.
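To illustrate the dtype-related point above with a small sketch: `arr.view(dtype)` gives a new array object over the same memory, while `arr.astype(dtype)` always copies:

```python
x = np.array([1, 2, 3], dtype=np.int64)

v = x.view(np.int64)     # a view: shares the same data buffer
v[0] = 100
print(x)                 # [100   2   3] -> x changed too

c = x.astype(np.int64)   # astype always returns a copy
c[1] = -1
print(x)                 # [100   2   3] -> x unchanged
```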
###Code
x = np.arange(5)
print('Original:\n', x) # Prints [0 1 2 3 4]
# Modifying the view will modify the array
view = x[1:3]
view[1] = -1
print('Array After Modified View:\n', x) # Prints [0 1 -1 3 4]
x = np.arange(5)
view = x[1:3]
view[1] = -1
# Modifying the array will modify the view
print('View Before Array Modification:\n', view) # Prints [1 -1]
x[2] = 10
print('Array After Modifications:\n', x) # Prints [0 1 10 3 4]
print('View After Array Modification:\n', view) # Prints [1 10]
###Output
View Before Array Modification:
[ 1 -1]
Array After Modifications:
[ 0 1 10 3 4]
View After Array Modification:
[ 1 10]
###Markdown
However, if we use fancy indexing, the result will actually be a copy and not a view:
###Code
x = np.arange(5)
print('Original:\n', x) # Prints [0 1 2 3 4]
# Modifying the result of the selection due to fancy indexing
# will not modify the original array.
copy = x[[1, 2]]
copy[1] = -1
print('Copy:\n', copy) # Prints [1 -1]
print('Array After Modified Copy:\n', x) # Prints [0 1 2 3 4]
# Another example involving fancy indexing
x = np.arange(5)
print('Original:\n', x) # Prints [0 1 2 3 4]
copy = x[x >= 2]
print('Copy:\n', copy) # Prints [2 3 4]
x[3] = 10
print('Modified Array:\n', x) # Prints [0 1 2 10 4]
print('Copy After Modified Array:\n', copy) # Prints [2 3 4]
###Output
Original:
[0 1 2 3 4]
Copy:
[2 3 4]
Modified Array:
[ 0 1 2 10 4]
Copy After Modified Array:
[2 3 4]
|
notebooks/maud_rise_data.ipynb | ###Markdown
Getting LLC4320 output near the Maud rise area in the southern oceanWant a smallish area with full water column data for u, v, SSH, salinity and potential temperature.Want about 3 months of data at the hourly resolution First relevant imports
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import xarray as xr
import intake
import numpy as np
import xgcm
from xmitgcm import llcreader
import dask
# import xesmf as xe
###Output
_____no_output_____
###Markdown
Using a dask clusterSkip this for now
###Code
# from dask.distributed import LocalCluster
# from dask.distributed import Client
# cluster = LocalCluster()
# # cluster.scale(8) # uses max of fig, but I think that is the default for locals
# # cluster.adapt(minimum=10, maximum=20) # gives up to the same capacity as server fig in cloud
# client = Client(cluster)
# cluster
###Output
_____no_output_____
###Markdown
Using xmitgcm llcreader to extract dataThis will create a xarray dataset.We directly request the time period we are interested in via the iter_start and _stop options.We try to select chunk options that make this more efficient.We also load all the grid information and transform to grid in lat/lon so we can easily subset our area/region later.
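As a rough sanity check (a sketch based on the assumption that LLC4320 uses a 25 s model timestep, so hourly output corresponds to 144 iterations; this is not stated in the notebook itself), the iteration range requested below covers about three months:

```python
# Hedged sanity check of the iter_start/iter_stop window used below.
iters_per_hour = 3600 // 25                      # assumed 25 s timestep
n_hours = (587520 - 273024) // iters_per_hour
print(n_hours, "hours =", n_hours / 24, "days")  # ~2184 hours, roughly 91 days
```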
###Code
model = llcreader.ECCOPortalLLC4320Model()
dm = model.get_dataset(varnames=['Eta', 'U', 'V', 'Theta', 'Salt'], k_levels=range(0, 90),
k_chunksize=10, iter_start=273024,
iter_stop=587520, read_grid=True, type='latlon') # getting surface, 3 months at hrly
###Output
_____no_output_____
###Markdown
This is how the dataset looks
###Code
dm
###Output
_____no_output_____
###Markdown
Subset the regionusing isel method after figuring out the i, j indices of the Maud rise area
###Code
# dm = dm.isel(i=slice(1650, 2330), j=slice(3190, 3780), i_g=slice(1650, 2330), j_g=slice(3190, 3780),)
dm = dm.isel(i=slice(1650+165, 2330-115), j=slice(3190+190, 3780-145), i_g=slice(1650+165, 2330-115), j_g=slice(3190+190, 3780-145),)
###Output
_____no_output_____
###Markdown
This is how much data we are going to be working with (in GB)
###Code
print("Full data set is ", dm.nbytes / 1e9)
print("One variable is ", dm['U'].nbytes/1e9)
###Output
321.794938196
80.19648
###Markdown
We start small so we also subset it in time
###Code
dm = dm.isel(time=slice(0, -1, 24)) # subset to daily
# dm = dm.isel(time=slice(0, -1, 24*30)) # subset monthly
# dm = dm.isel(time=0) # one time instance
###Output
_____no_output_____
###Markdown
And we see the size is much smaller
###Code
print(dm.nbytes/1e9)
print(dm['U'].nbytes/1e9)
###Output
0.265216732
0.03672
###Markdown
A quick look at the surface salinity (note how long it takes to finish the plot)
###Code
%%time
plt.figure()
dm['Salt'].isel(time=0, k=0).plot()
# dm['Salt'].isel(k=0).plot()
###Output
_____no_output_____
###Markdown
Now we try to get as much data as we canBut before that we will load them into memory; this will trigger all computations and transfer the data to our local memory space.This is a good indication of how long it takes to actually have this data on your local machine.First, we try with just one variable, $u$
###Code
with dask.config.set(scheduler='single-threaded'):
with dask.diagnostics.ProgressBar():
u = dm['U'].load()
# u = dask.compute(dm['U'], retries=10)
###Output
[########################################] | 100% Completed | 12min 40.2s
###Markdown
Now the full data set See the size estimate, and note the time it takes.This larger data set may suffer from the intermittency of the NASA data server.So if it is failing we will try something else below.
###Code
print(dm.nbytes/1e9)
with dask.config.set(scheduler='single-threaded'):
with dask.diagnostics.ProgressBar():
dsl = dm.load()
# dsl = dask.compute(dm, retries=50)
###Output
0.265216732
[########################################] | 100% Completed | 48min 36.7s
###Markdown
In case the above is failing, we can try this approachThe retries option in the dask compute method sometimes helps.See this post: https://github.com/MITgcm/xmitgcm/issues/210#issue-641376849
###Code
with dask.config.set(scheduler='single-threaded'):
with dask.diagnostics.ProgressBar():
# u = dm['U'].load()
u = dask.compute(dm['U'], retries=10)
with dask.config.set(scheduler='single-threaded'):
with dask.diagnostics.ProgressBar():
# dsl = dm.load()
dsl = dask.compute(dm, retries=50)
###Output
[############################ ] | 71% Completed | 14min 16.4s
|
09_Classification/Classification.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "../00_Data/day_night_images/training/"
image_dir_test = "../00_Data/day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 123
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
Avg brightness: 36.1795272727
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
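Once `estimate_label` is filled in below, a quick (hypothetical) way to see how well the chosen threshold does on the standardized training data is to compare its predictions against the true labels:

```python
# Simple accuracy check over the standardized training set (sketch).
correct = 0
for image, true_label in STANDARDIZED_LIST:
    if estimate_label(image) == true_label:
        correct += 1
print("Accuracy:", correct / len(STANDARDIZED_LIST))
```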
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
    ## Extract the average brightness feature from the RGB image
    avg = avg_brightness(rgb_image)
    # Use the avg brightness feature to predict a label (0, 1)
    predicted_label = 0
    ## Threshold (average V-channel value) that separates day and night images
    threshold = 110
    ## Return the predicted_label (0 or 1) based on whether the avg is
    # above or below the threshold
    if avg >= threshold:
        predicted_label = 1
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
estimate_label(test_im)
###Output
_____no_output_____ |
The Grandest Staircase Of Them All.ipynb | ###Markdown
The Grandest Staircase Of Them All==================================With her LAMBCHOP doomsday device finished, Commander Lambda is preparing for her debut on the galactic stage - but in order to make a grand entrance, she needs a grand staircase! As her personal assistant, you've been tasked with figuring out how to build the best staircase EVER. Lambda has given you an overview of the types of bricks available, plus a budget. You can buy different amounts of the different types of bricks (for example, 3 little pink bricks, or 5 blue lace bricks). Commander Lambda wants to know how many different types of staircases can be built with each amount of bricks, so she can pick the one with the most options. Each type of staircase should consist of 2 or more steps. No two steps are allowed to be at the same height - each step must be lower than the previous one. All steps must contain at least one brick. A step's height is classified as the total amount of bricks that make up that step.For example, when N = 3, you have only 1 choice of how to build the staircase, with the first step having a height of 2 and the second step having a height of 1: ( indicates a brick)21When N = 4, you still only have 1 staircase choice:31 But when N = 5, there are two ways you can build a staircase from the given bricks. The two staircases can have heights (4, 1) or (3, 2), as shown below:4132Write a function called solution(n) that takes a positive integer n and returns the number of different staircases that can be built from exactly n bricks. n will always be at least 3 (so you can have a staircase at all), but no more than 200, because Commander Lambda's not made of money!
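Before the dynamic-programming solution below, here is a brute-force cross-check for small n (a hypothetical helper, not part of the original submission): the answer is the number of ways to write n as a sum of at least two distinct positive integers.

```python
from itertools import combinations

def brute_force(n):
    # Count sets of >= 2 distinct step heights that sum to n.
    count = 0
    for k in range(2, n + 1):
        for parts in combinations(range(1, n), k):
            if sum(parts) == n:
                count += 1
    return count

print(brute_force(3), brute_force(4), brute_force(5))  # 1 1 2
```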
###Code
def solution(n):
    # comb_table[cap][stp] counts the ways to build a total of `cap` bricks
    # using distinct step heights chosen from 1..stp.
    comb_table = [[0]*n for _ in range(n + 1)]
    comb_table[0][0] = comb_table[1][1] = comb_table[2][2] = 1
    for stp in range(1, n):
        for cap in range(n+1):
            # either we skip a step of height `stp`...
            comb_table[cap][stp] = comb_table[cap][stp - 1]
            if cap >= stp:
                # ...or we use it, leaving cap - stp bricks for smaller steps
                comb_table[cap][stp] += comb_table[cap - stp][stp - 1]
    # ways to write n as a sum of distinct heights 1..n-1, i.e. staircases with >= 2 steps
    return comb_table[-1][-1]
for n in range(3, 201):
print(n, ':', solution(n))
# this could work, but has way too high complexity due to combinatoric explosion
n = 10
per = [[2, 1]]
for nn in range(n-3):
new_per = []
for el in per:
new_el = el.copy()
new_el[0]+=1
new_per.append(new_el)
for i in range(1, len(el)):
if el[i] + 1 <= el[i-1]:
new_el = el.copy()
new_el[i]+=1
new_per.append(new_el)
new_el = el.copy()
new_el.append(1)
new_per.append(new_el)
per = [list(x) for x in set(tuple(x) for x in new_per)]
per.sort()
per
###Output
_____no_output_____ |
notebooks/data_generation.ipynb | ###Markdown
data_generation.ipynb Purpose of this notebookThis notebook shows an example of a workflow to generate a set of molecules to study. Beginning with some principal molecules (in this case, Lithium Ethylene Carbonate (LiEC) and water (H2O), we first generate a small set of fragments and recombine these fragments to form new molecules. A similar principal-fragment-recombinant workflow was used to generate LIBE. What you getA collection of fragment molecule graphs (`all_frags`) and recombinant molecules (`combos`). This notebook will also generate the input files necessary to use BonDNet to study the thermodynamics of the possible recombination reactions (for more details on BonDNet, see the [package documentation](https://github.com/mjwen/bondnet)). What you DON'T getIn LIBE, the recombinant molecules were limited by several filters. Only fragments that could be formed by exergonic pathways were allowed to recombine, and the recombinant molecules generated were limited by prediction of their stability (using BonDNet). Such filters are not employed here.An additional limitation is that we do not here show the user how to perform DFT calculations on the fragment or recombinant molecules. This was not included because some users of LIBE may not have access to Q-Chem, the DFT code used to generate this dataset.
###Code
from pathlib import Path
import copy
from pymatgen.core.structure import Molecule
from pymatgen.analysis.graphs import MoleculeGraph
from pymatgen.analysis.local_env import OpenBabelNN, metal_edge_extender
from pymatgen.analysis.fragmenter import Fragmenter
import deliberate.recombination as recomb
molecules_dir = Path().resolve().parent / "molecules"
liec = Molecule.from_file((molecules_dir / "LiEC.xyz").as_posix())
h2o = Molecule.from_file((molecules_dir / "H2O.xyz").as_posix())
###Output
_____no_output_____
###Markdown
In a single-step fragmentation process (`depth=1`), all bonds are broken in the initial molecule (here, water), and the resulting molecule sub-graphs are gathered to generate a dictionary of fragments. The resulting dictionary (`water_frags`) has alphabetical formulas as keys (in this example, `H1 O1` and `H1` will be keys), and lists of MoleculeGraphs as values.
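Once `water_frags` has been created in the next cell, its structure can be inspected with a quick loop like this (a small sketch):

```python
# Keys are alphabetical formulas, values are lists of MoleculeGraph fragments.
for formula, frag_list in water_frags.unique_frag_dict.items():
    print(formula, "->", len(frag_list), "fragment(s)")
```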
###Code
water_frags = Fragmenter(h2o, depth=1)
print("Number of fragments from water:", water_frags.total_unique_fragments)
###Output
_____no_output_____
###Markdown
Because ethylene carbonate has a ring structure, we have to declare `open_rings=True`. This will use a cheap force-field method to generate an initial structure of the molecule with each ring-bond broken. We also include the fragments from `water_frags` so that duplicate fragments are not generated.
###Code
all_frags = Fragmenter(liec, depth=1, open_rings=True, prev_unique_frag_dict=water_frags.unique_frag_dict)
print("Total number of fragments (H2O + LiEC):", all_frags.total_unique_fragments)
charges = [-1, 0, 1]
all_molecule_graphs = list()
# Add all fragments
for _, fragment_list in all_frags.unique_frag_dict.items():
for fragment in fragment_list:
for charge in charges:
mg = copy.deepcopy(fragment)
mg.molecule.set_charge_and_spin(charge)
all_molecule_graphs.append(mg)
# Also add principal molecules
for charge in charges:
h2o_mg = MoleculeGraph.with_local_env_strategy(h2o, OpenBabelNN())
h2o_mg.molecule.set_charge_and_spin(charge)
all_molecule_graphs.append(h2o_mg)
liec_mg = MoleculeGraph.with_local_env_strategy(liec, OpenBabelNN())
liec_mg = metal_edge_extender(liec_mg)
liec_mg.molecule.set_charge_and_spin(charge)
all_molecule_graphs.append(liec_mg)
print("Total number of molecule graphs:", len(all_molecule_graphs))
###Output
_____no_output_____
###Markdown
After generating fragments, we then use those fragments (and the principal molecules) to generate new recombinant molecules. Details on this process can be found in `src/deliberate/recombination.py` in this repository. In brief, the process is:1. Each molecule graph in the initial set is examined to see what sites, if any, are available for bonding. This is based on valence rules - for instance, a carbon atom will be considered available if it has less than 4 bonds. Hydrogen and lithium are only allowed to recombine if they are not bonded to anything (they are isolated atoms)2. Each molecule is allowed to recombine with each other molecule (including itself) via all possible combinations of available sites.As a byproduct of this process, two files will be generated: `combos.txt` contains indices relevant to recombination "reactions", and `mol_graphs_recombination.json` contains all recombinant molecule graphs. NOTE: generating combinations is a rather slow process. The next cell may take several minutes to run! It should also be noted that generating recombinant molecules is inherently combinatorially and scales accordingly.
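The valence-based availability test described in step 1 is, in spirit, something like the following sketch (hypothetical helper name and assumed maximum bond counts; the actual rules live in `src/deliberate/recombination.py`):

```python
# Rough sketch of a valence-based "available site" check.
MAX_BONDS = {"C": 4, "O": 2, "N": 3, "F": 1, "P": 5, "S": 6, "H": 1, "Li": 1}

def available_sites(mol_graph):
    sites = []
    for i, site in enumerate(mol_graph.molecule):
        element = str(site.specie)
        degree = len(mol_graph.get_connected_sites(i))
        if element in ("H", "Li"):
            # H and Li may only recombine when they are isolated atoms
            if degree == 0:
                sites.append(i)
        elif degree < MAX_BONDS.get(element, 4):
            sites.append(i)
    return sites
```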
###Code
combos = recomb.generate_combinations(all_molecule_graphs, Path(".").resolve())
print("Number of recombinant molecules generated", len(combos))
###Output
_____no_output_____
###Markdown
In an actual workflow, we would use BonDNet to predict the bond dissociation energies for each bond formed via a recombination reaction. This is a way to predict which recombinant molecules should be expected to be stable. While we do not here demonstrate the use of BonDNet, the following cell will generate all files necessary to use BonDNet on this test dataset.
###Code
recomb.generate_bondnet_files(all_molecule_graphs, combos, recomb.parse_combinations_file(Path(".").resolve() / "combinations.txt"), Path("."))
###Output
_____no_output_____
###Markdown
Synthetic data generation exampleWe create synthetic files with the following function definition:```Pythonimport jsonimport urllib.parseimport boto3s3 = boto3.client('s3')def lambda_handler(event, context): size = 1024*1024*16 index = event['index'] nb_parts = 64 key = f'synthetic/pattern-1gb/file-{index:03}' bucket = 'cloudfuse-taxi-data' create_up_resp = s3.create_multipart_upload( Bucket=bucket, Key=key, ) pattern = b''.join([(ch%256).to_bytes(1, 'big') for ch in range(size)]) parts = [] for i in range(nb_parts): up_res = s3.upload_part( Body=pattern, Bucket=bucket, ContentLength=size, Key=key, PartNumber=i+1, UploadId=create_up_resp['UploadId'], ) parts.append({'ETag': up_res['ETag'], 'PartNumber': i+1}) s3.complete_multipart_upload( Bucket=bucket, Key=key, UploadId=create_up_resp['UploadId'], MultipartUpload={ 'Parts': parts }, ) ```We then invoke this function repeatedly to create a complete dataset of test files:
###Code
import boto3
import json
import base64
from joblib import Parallel, delayed
import os
region_name="us-east-2"
binary_name="lambda"
aws_profile=os.environ["AWS_PROFILE"] # Specify the profile you want to use from your .aws/credentials file with the AWS_PROFILE env variable
def invoke_function(index, show_logs = False):
session = boto3.Session(profile_name=aws_profile)
client = session.client('lambda', region_name = region_name)
inputParams = {
'index': index,
}
response = client.invoke(
FunctionName = "synth-file",
InvocationType = 'RequestResponse',
Payload = json.dumps(inputParams),
LogType='Tail' if show_logs else 'None'
)
if show_logs:
print(base64.b64decode(response['LogResult']).decode("utf-8") )
return json.load(response['Payload'])
nb_file = 100
res = Parallel(n_jobs=50)(delayed(invoke_function)(i) for i in range(nb_file))
###Output
_____no_output_____ |
docs/practices/cv/pointnet.ipynb | ###Markdown
**Point Cloud Processing: Implementing PointNet for Point Cloud Classification** **Author**: [Zhihao Cao](https://github.com/WhiteFireFox) **Date**: April 2022 **Abstract**: This example demonstrates how to implement PointNet based on Paddle 2.2 to perform point cloud classification on the ShapeNet dataset. 1. Environment setup This tutorial is written for PaddlePaddle 2.3.0-rc0; if your environment is not this version, please first refer to the official [installation guide](https://www.paddlepaddle.org.cn/install/quick).
###Code
import os
import numpy as np
import random
import h5py
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
print(paddle.__version__)
###Output
2.3.0-rc0
###Markdown
2. Dataset 2.1 About the data ShapeNet is a richly annotated, large-scale 3D shape dataset jointly released in 2015 by Stanford University, Princeton University, and the Toyota Technological Institute at Chicago. Official ShapeNet dataset link: [https://vision.princeton.edu/projects/2014/3DShapeNets/](https://vision.princeton.edu/projects/2014/3DShapeNets/) AIStudio link: [ShapeNet dataset (curated)](https://aistudio.baidu.com/aistudio/datasetdetail/70460) The ShapeNet data is stored as h5 files, whose keys are: - 1. data: the xyz coordinates of all points in a sample, - 2. label: the category the sample belongs to, e.g. airplane, - 3. pid: the part type of each point in the sample; for example, if a sample belongs to the airplane category, its points carry part types such as wing and fuselage. 2.2 Extract the dataset
###Code
!unzip data/data70460/shapenet_part_seg_hdf5_data.zip
!mv hdf5_data dataset
###Output
_____no_output_____
###Markdown
2.3 Data file lists All of the data files in the ShapeNet dataset.
###Code
train_list = ['ply_data_train0.h5', 'ply_data_train1.h5', 'ply_data_train2.h5', 'ply_data_train3.h5', 'ply_data_train4.h5', 'ply_data_train5.h5']
test_list = ['ply_data_test0.h5', 'ply_data_test1.h5']
val_list = ['ply_data_val0.h5']
###Output
_____no_output_____
###Markdown
2.4 Building the data generator Note: this reads the entire ShapeNet dataset into memory.
###Code
def make_data(mode='train', path='./dataset/', num_point=2048):
datas = []
labels = []
if mode == 'train':
for file_list in train_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
elif mode == 'test':
for file_list in test_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
else:
for file_list in val_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
return datas, labels
###Output
_____no_output_____
###Markdown
Note: the dataset is constructed by subclassing `paddle.io.Dataset`.
###Code
class PointDataset(paddle.io.Dataset):
def __init__(self, datas, labels):
super(PointDataset, self).__init__()
self.datas = datas
self.labels = labels
def __getitem__(self, index):
data = paddle.to_tensor(self.datas[index].T.astype('float32'))
label = paddle.to_tensor(self.labels[index].astype('int64'))
return data, label
def __len__(self):
return len(self.datas)
###Output
_____no_output_____
###Markdown
Note: data loading uses the `paddle.io.DataLoader` API provided by the PaddlePaddle framework, so that mini-batches are generated according to the batch size.
###Code
# Load the data
datas, labels = make_data(mode='train', num_point=2048)
train_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='val', num_point=2048)
val_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='test', num_point=2048)
test_dataset = PointDataset(datas, labels)
# Instantiate the data loaders
train_loader = paddle.io.DataLoader(
train_dataset,
batch_size=128,
shuffle=True,
drop_last=False
)
val_loader = paddle.io.DataLoader(
val_dataset,
batch_size=32,
shuffle=False,
drop_last=False
)
test_loader = paddle.io.DataLoader(
test_dataset,
batch_size=128,
shuffle=False,
drop_last=False
)
###Output
_____no_output_____
###Markdown
3. Defining the network PointNet is a point cloud processing network proposed by researchers at Stanford University. The paper introduces a spatial transformer network (T-Net) to handle the rotation problem of point clouds (note: a rotated point cloud still represents the same object, so a network component is needed to learn and resolve this rotation), and it adopts MaxPooling to extract global features of the point cloud to the greatest possible extent. 3.1 Defining the network structure
###Code
class PointNet(nn.Layer):
def __init__(self, name_scope='PointNet_', num_classes=16, num_point=2048):
super(PointNet, self).__init__()
self.input_transform_net = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.input_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 9,
weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
bias_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
)
)
self.mlp_1 = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU()
)
self.feature_transform_net = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.feature_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 64*64)
)
self.mlp_2 = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU()
)
self.fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(p=0.7),
nn.Linear(256, num_classes),
nn.LogSoftmax(axis=-1)
)
def forward(self, inputs):
batchsize = inputs.shape[0]
t_net = self.input_transform_net(inputs)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.input_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 3, 3])
x = paddle.transpose(inputs, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_1(x)
t_net = self.feature_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.feature_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 64, 64])
x = paddle.squeeze(x, axis=-1)
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_2(x)
x = paddle.max(x, axis=-1)
x = paddle.squeeze(x, axis=-1)
x = self.fc(x)
return x
###Output
_____no_output_____
###Markdown
3.2 Visualizing the network structure Note: the model structure is visualized with the `paddle.summary` API.
###Code
pointnet = PointNet()
paddle.summary(pointnet, (64, 3, 2048))
###Output
W0424 11:24:32.235721 117 gpu_context.cc:244] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.0, Runtime API Version: 10.1
W0424 11:24:32.240563 117 gpu_context.cc:272] device: 0, cuDNN Version: 7.6.
###Markdown
4. Training Note: during training, the `paddle.optimizer.Adam` optimizer is used for optimization, and `F.nll_loss` is used to compute the loss.
###Code
def train():
model = PointNet(num_classes=16, num_point=2048)
model.train()
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
epoch_num = 10
for epoch in range(epoch_num):
# train
print("===================================train===========================================")
for batch_id, data in enumerate(train_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
if batch_id % 20 == 0:
print("train: epoch: {}, batch_id: {}, loss is: {}, accuracy is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
loss.backward()
optim.step()
optim.clear_grad()
if epoch % 2 == 0:
paddle.save(model.state_dict(), './model/PointNet.pdparams')
paddle.save(optim.state_dict(), './model/PointNet.pdopt')
# validation
print("===================================val===========================================")
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(val_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
model.train()
if __name__ == '__main__':
train()
###Output
===================================train===========================================
train: epoch: 0, batch_id: 0, loss is: [8.693383], accuracy is: [0.015625]
train: epoch: 0, batch_id: 20, loss is: [1.2929151], accuracy is: [0.6015625]
train: epoch: 0, batch_id: 40, loss is: [0.8927105], accuracy is: [0.75]
train: epoch: 0, batch_id: 60, loss is: [0.7519456], accuracy is: [0.78125]
train: epoch: 0, batch_id: 80, loss is: [0.66354436], accuracy is: [0.8359375]
===================================val===========================================
validation: loss is: 0.39304283261299133, accuracy is: 0.867584764957428
===================================train===========================================
train: epoch: 1, batch_id: 0, loss is: [0.66547674], accuracy is: [0.796875]
train: epoch: 1, batch_id: 20, loss is: [0.5581873], accuracy is: [0.8125]
train: epoch: 1, batch_id: 40, loss is: [0.4634911], accuracy is: [0.8515625]
train: epoch: 1, batch_id: 60, loss is: [0.2632866], accuracy is: [0.8828125]
train: epoch: 1, batch_id: 80, loss is: [0.32553214], accuracy is: [0.8828125]
===================================val===========================================
validation: loss is: 0.2947256565093994, accuracy is: 0.9020127058029175
===================================train===========================================
train: epoch: 2, batch_id: 0, loss is: [0.30400345], accuracy is: [0.90625]
train: epoch: 2, batch_id: 20, loss is: [0.43601793], accuracy is: [0.875]
train: epoch: 2, batch_id: 40, loss is: [0.34586048], accuracy is: [0.859375]
train: epoch: 2, batch_id: 60, loss is: [0.35014084], accuracy is: [0.921875]
train: epoch: 2, batch_id: 80, loss is: [0.30653465], accuracy is: [0.8828125]
===================================val===========================================
validation: loss is: 0.21731847524642944, accuracy is: 0.9385592937469482
===================================train===========================================
train: epoch: 3, batch_id: 0, loss is: [0.36968467], accuracy is: [0.875]
train: epoch: 3, batch_id: 20, loss is: [0.37996972], accuracy is: [0.9140625]
train: epoch: 3, batch_id: 40, loss is: [0.25406647], accuracy is: [0.921875]
train: epoch: 3, batch_id: 60, loss is: [0.1649745], accuracy is: [0.953125]
train: epoch: 3, batch_id: 80, loss is: [0.16395089], accuracy is: [0.9609375]
===================================val===========================================
validation: loss is: 0.26106956601142883, accuracy is: 0.9226694703102112
===================================train===========================================
train: epoch: 4, batch_id: 0, loss is: [0.17851768], accuracy is: [0.9453125]
train: epoch: 4, batch_id: 20, loss is: [0.29574272], accuracy is: [0.9375]
train: epoch: 4, batch_id: 40, loss is: [0.22927402], accuracy is: [0.9375]
train: epoch: 4, batch_id: 60, loss is: [0.20726189], accuracy is: [0.9375]
train: epoch: 4, batch_id: 80, loss is: [0.16911985], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.11279569566249847, accuracy is: 0.9645127058029175
===================================train===========================================
train: epoch: 5, batch_id: 0, loss is: [0.27182847], accuracy is: [0.90625]
train: epoch: 5, batch_id: 20, loss is: [0.1203089], accuracy is: [0.953125]
train: epoch: 5, batch_id: 40, loss is: [0.25080964], accuracy is: [0.9140625]
train: epoch: 5, batch_id: 60, loss is: [0.18479557], accuracy is: [0.96875]
train: epoch: 5, batch_id: 80, loss is: [0.18184912], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.5406728982925415, accuracy is: 0.8646337985992432
===================================train===========================================
train: epoch: 6, batch_id: 0, loss is: [0.10653888], accuracy is: [0.96875]
train: epoch: 6, batch_id: 20, loss is: [0.2692457], accuracy is: [0.9375]
train: epoch: 6, batch_id: 40, loss is: [0.14836423], accuracy is: [0.9453125]
train: epoch: 6, batch_id: 60, loss is: [0.31164974], accuracy is: [0.9140625]
train: epoch: 6, batch_id: 80, loss is: [0.08737734], accuracy is: [0.96875]
===================================val===========================================
validation: loss is: 0.14123289287090302, accuracy is: 0.9555084705352783
===================================train===========================================
train: epoch: 7, batch_id: 0, loss is: [0.13292007], accuracy is: [0.96875]
train: epoch: 7, batch_id: 20, loss is: [0.19241312], accuracy is: [0.9296875]
train: epoch: 7, batch_id: 40, loss is: [0.08458131], accuracy is: [0.96875]
train: epoch: 7, batch_id: 60, loss is: [0.13493742], accuracy is: [0.953125]
train: epoch: 7, batch_id: 80, loss is: [0.1931592], accuracy is: [0.9296875]
===================================val===========================================
validation: loss is: 0.12743274867534637, accuracy is: 0.9671609997749329
===================================train===========================================
train: epoch: 8, batch_id: 0, loss is: [0.10084306], accuracy is: [0.9609375]
train: epoch: 8, batch_id: 20, loss is: [0.09640574], accuracy is: [0.96875]
train: epoch: 8, batch_id: 40, loss is: [0.10779642], accuracy is: [0.9609375]
train: epoch: 8, batch_id: 60, loss is: [0.12643482], accuracy is: [0.96875]
train: epoch: 8, batch_id: 80, loss is: [0.19140013], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.09421397000551224, accuracy is: 0.9719279408454895
===================================train===========================================
train: epoch: 9, batch_id: 0, loss is: [0.06287473], accuracy is: [0.9765625]
train: epoch: 9, batch_id: 20, loss is: [0.11913891], accuracy is: [0.9609375]
train: epoch: 9, batch_id: 40, loss is: [0.1325048], accuracy is: [0.953125]
train: epoch: 9, batch_id: 60, loss is: [0.13647752], accuracy is: [0.96875]
train: epoch: 9, batch_id: 80, loss is: [0.09159042], accuracy is: [0.9765625]
===================================val===========================================
validation: loss is: 0.22078344225883484, accuracy is: 0.929025411605835
###Markdown
V. Evaluation and Testing Note: the trained model is loaded with `model.load_dict` and then evaluated and tested on the test-set data.
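Besides the batched evaluation below, the saved weights can also be used for single-sample prediction. A small usage sketch (the random tensor is a hypothetical stand-in for one real test cloud):

```python
# Usage sketch: predict the class of a single point cloud from the saved weights.
import paddle

model = PointNet()
model.load_dict(paddle.load('./model/PointNet.pdparams'))
model.eval()

sample = paddle.rand([1, 3, 2048])       # stand-in for one cloud, channels-first
pred = model(sample)                     # [1, 16] log-probabilities
print('predicted class id:', int(paddle.argmax(pred, axis=-1).numpy()[0]))
```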
###Code
def evaluation():
model = PointNet()
model_state_dict = paddle.load('./model/PointNet.pdparams')
model.load_dict(model_state_dict)
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(test_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
if __name__ == '__main__':
evaluation()
###Output
validation: loss is: 0.14730410277843475, accuracy is: 0.9561118483543396
###Markdown
**Point Cloud Processing: Implementing PointNet for Point Cloud Classification** **Author**: [Zhihao Cao](https://github.com/WhiteFireFox) **Date**: 2022.1 **Abstract**: This example demonstrates how to implement PointNet with Paddle 2.2 for point-cloud classification on the ShapeNet dataset. I. Environment Setup This tutorial is written for Paddle 2.2; if your environment is a different version, please refer to the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) first.
###Code
import os
import numpy as np
import random
import h5py
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
print(paddle.__version__)
###Output
2.2.2
###Markdown
II. Dataset 2.1 Dataset overview ShapeNet is a richly annotated, large-scale 3D shape dataset released jointly in 2015 by Stanford University, Princeton University, and the Toyota Technological Institute at Chicago. Official link: [https://vision.princeton.edu/projects/2014/3DShapeNets/](https://vision.princeton.edu/projects/2014/3DShapeNets/) AIStudio link: [ShapeNet dataset (preprocessed)](https://aistudio.baidu.com/aistudio/datasetdetail/70460) The dataset is stored as h5 files whose keys are: - 1. data: the xyz coordinates of all points in the sample, - 2. label: the category the sample belongs to, e.g. airplane, - 3. pid: the part type of each point in the sample; for an airplane sample, for instance, its points belong to parts such as wings and fuselage. 2.2 Extracting the dataset
###Code
!unzip data/data70460/shapenet_part_seg_hdf5_data.zip
!mv hdf5_data dataset
###Output
Archive: data/data70460/shapenet_part_seg_hdf5_data.zip
creating: hdf5_data/
inflating: hdf5_data/ply_data_train5.h5
inflating: hdf5_data/ply_data_train1.h5
inflating: hdf5_data/ply_data_train3.h5
inflating: hdf5_data/ply_data_val0.h5
inflating: hdf5_data/ply_data_train0.h5
inflating: hdf5_data/ply_data_test1.h5
inflating: hdf5_data/ply_data_test0.h5
inflating: hdf5_data/ply_data_train4.h5
inflating: hdf5_data/ply_data_train2.h5
###Markdown
2.3 File lists All of the data files in the ShapeNet dataset.
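After the archive has been extracted, one of these files can be opened to confirm the three keys described in section 2.1. A short sketch (the shapes are indicative and depend on the file):

```python
# Sketch: inspect one extracted h5 file and the shapes behind its keys.
import h5py

with h5py.File('./dataset/ply_data_train0.h5', 'r') as f:
    print(list(f.keys()))      # expected: ['data', 'label', 'pid']
    print(f['data'].shape)     # (num_samples, num_points, 3) xyz coordinates
    print(f['label'].shape)    # (num_samples, 1) object category
    print(f['pid'].shape)      # (num_samples, num_points) per-point part id
```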
###Code
train_list = ['ply_data_train0.h5', 'ply_data_train1.h5', 'ply_data_train2.h5', 'ply_data_train3.h5', 'ply_data_train4.h5', 'ply_data_train5.h5']
test_list = ['ply_data_test0.h5', 'ply_data_test1.h5']
val_list = ['ply_data_val0.h5']
###Output
_____no_output_____
###Markdown
2.4 Building the data pipeline Note: the entire ShapeNet dataset is read into memory.
###Code
def make_data(mode='train', path='./dataset/', num_point=2048):
datas = []
labels = []
if mode == 'train':
for file_list in train_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
elif mode == 'test':
for file_list in test_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
else:
for file_list in val_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
return datas, labels
###Output
_____no_output_____
###Markdown
Note: the dataset is constructed by subclassing `paddle.io.Dataset`.
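Once defined in the next cell, the dataset can be indexed directly; each item is a (points, label) pair with the cloud transposed to channels-first. A quick usage sketch (assumes `datas` and `labels` returned by `make_data`):

```python
# Usage sketch: index the dataset and check per-sample shapes.
ds = PointDataset(datas, labels)
x, y = ds[0]
print(x.shape, y.shape)   # expected: [3, 2048] and [1]
```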
###Code
class PointDataset(paddle.io.Dataset):
def __init__(self, datas, labels):
super(PointDataset, self).__init__()
self.datas = datas
self.labels = labels
def __getitem__(self, index):
data = paddle.to_tensor(self.datas[index].T.astype('float32'))
label = paddle.to_tensor(self.labels[index].astype('int64'))
return data, label
def __len__(self):
return len(self.datas)
###Output
_____no_output_____
###Markdown
Note: the Paddle API `paddle.io.DataLoader` is used to load the data and generate mini-batches of the configured batch size.
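A quick way to verify the loaders instantiated in the next cell is to pull a single mini-batch and check its shapes (a sketch; with `batch_size=128` the first training batch should look as commented):

```python
# Sketch: fetch one mini-batch from the train loader and inspect its shapes.
for batch_points, batch_labels in train_loader():
    print(batch_points.shape, batch_labels.shape)   # expected: [128, 3, 2048] and [128, 1]
    break
```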
###Code
# Load the data
datas, labels = make_data(mode='train', num_point=2048)
train_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='val', num_point=2048)
val_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='test', num_point=2048)
test_dataset = PointDataset(datas, labels)
# Instantiate the data loaders
train_loader = paddle.io.DataLoader(
train_dataset,
batch_size=128,
shuffle=True,
drop_last=False
)
val_loader = paddle.io.DataLoader(
val_dataset,
batch_size=32,
shuffle=False,
drop_last=False
)
test_loader = paddle.io.DataLoader(
test_dataset,
batch_size=128,
shuffle=False,
drop_last=False
)
###Output
_____no_output_____
###Markdown
III. Defining the Network PointNet is a point-cloud processing network proposed by researchers at Stanford University. The paper introduces a spatial transformer network (T-Net) to handle point-cloud rotation (note: a rotated point cloud still represents the same object, so a dedicated sub-network is needed to learn and correct for this rotation), and it uses max pooling to effectively extract a global feature of the point cloud. 3.1 Defining the network structure
###Code
class PointNet(nn.Layer):
def __init__(self, name_scope='PointNet_', num_classes=16, num_point=2048):
super(PointNet, self).__init__()
self.input_transform_net = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.input_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 9,
weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
bias_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
)
)
self.mlp_1 = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU()
)
self.feature_transform_net = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.feature_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 64*64)
)
self.mlp_2 = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU()
)
self.fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(p=0.7),
nn.Linear(256, num_classes),
nn.LogSoftmax(axis=-1)
)
def forward(self, inputs):
batchsize = inputs.shape[0]
t_net = self.input_transform_net(inputs)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.input_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 3, 3])
x = paddle.transpose(inputs, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_1(x)
t_net = self.feature_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.feature_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 64, 64])
x = paddle.squeeze(x, axis=-1)
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_2(x)
x = paddle.max(x, axis=-1)
x = paddle.squeeze(x, axis=-1)
x = self.fc(x)
return x
###Output
_____no_output_____
###Markdown
3.2 Visualizing the network structure Note: the Paddle API `paddle.summary` is used to visualize the model structure.
###Code
pointnet = PointNet()
paddle.summary(pointnet, (64, 3, 2048))
###Output
---------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===========================================================================
Conv1D-34 [[64, 3, 2048]] [64, 64, 2048] 256
BatchNorm-34 [[64, 64, 2048]] [64, 64, 2048] 256
ReLU-52 [[64, 64, 2048]] [64, 64, 2048] 0
Conv1D-35 [[64, 64, 2048]] [64, 128, 2048] 8,320
BatchNorm-35 [[64, 128, 2048]] [64, 128, 2048] 512
ReLU-53 [[64, 128, 2048]] [64, 128, 2048] 0
Conv1D-36 [[64, 128, 2048]] [64, 1024, 2048] 132,096
BatchNorm-36 [[64, 1024, 2048]] [64, 1024, 2048] 4,096
ReLU-54 [[64, 1024, 2048]] [64, 1024, 2048] 0
MaxPool1D-7 [[64, 1024, 2048]] [64, 1024, 1] 0
Linear-28 [[64, 1024]] [64, 512] 524,800
ReLU-55 [[64, 512]] [64, 512] 0
Linear-29 [[64, 512]] [64, 256] 131,328
ReLU-56 [[64, 256]] [64, 256] 0
Linear-30 [[64, 256]] [64, 9] 2,313
Conv1D-37 [[64, 3, 2048]] [64, 64, 2048] 256
BatchNorm-37 [[64, 64, 2048]] [64, 64, 2048] 256
ReLU-57 [[64, 64, 2048]] [64, 64, 2048] 0
Conv1D-38 [[64, 64, 2048]] [64, 64, 2048] 4,160
BatchNorm-38 [[64, 64, 2048]] [64, 64, 2048] 256
ReLU-58 [[64, 64, 2048]] [64, 64, 2048] 0
Conv1D-39 [[64, 64, 2048]] [64, 64, 2048] 4,160
BatchNorm-39 [[64, 64, 2048]] [64, 64, 2048] 256
ReLU-59 [[64, 64, 2048]] [64, 64, 2048] 0
Conv1D-40 [[64, 64, 2048]] [64, 128, 2048] 8,320
BatchNorm-40 [[64, 128, 2048]] [64, 128, 2048] 512
ReLU-60 [[64, 128, 2048]] [64, 128, 2048] 0
Conv1D-41 [[64, 128, 2048]] [64, 1024, 2048] 132,096
BatchNorm-41 [[64, 1024, 2048]] [64, 1024, 2048] 4,096
ReLU-61 [[64, 1024, 2048]] [64, 1024, 2048] 0
MaxPool1D-8 [[64, 1024, 2048]] [64, 1024, 1] 0
Linear-31 [[64, 1024]] [64, 512] 524,800
ReLU-62 [[64, 512]] [64, 512] 0
Linear-32 [[64, 512]] [64, 256] 131,328
ReLU-63 [[64, 256]] [64, 256] 0
Linear-33 [[64, 256]] [64, 4096] 1,052,672
Conv1D-42 [[64, 64, 2048]] [64, 64, 2048] 4,160
BatchNorm-42 [[64, 64, 2048]] [64, 64, 2048] 256
ReLU-64 [[64, 64, 2048]] [64, 64, 2048] 0
Conv1D-43 [[64, 64, 2048]] [64, 128, 2048] 8,320
BatchNorm-43 [[64, 128, 2048]] [64, 128, 2048] 512
ReLU-65 [[64, 128, 2048]] [64, 128, 2048] 0
Conv1D-44 [[64, 128, 2048]] [64, 1024, 2048] 132,096
BatchNorm-44 [[64, 1024, 2048]] [64, 1024, 2048] 4,096
ReLU-66 [[64, 1024, 2048]] [64, 1024, 2048] 0
Linear-34 [[64, 1024]] [64, 512] 524,800
ReLU-67 [[64, 512]] [64, 512] 0
Linear-35 [[64, 512]] [64, 256] 131,328
ReLU-68 [[64, 256]] [64, 256] 0
Dropout-4 [[64, 256]] [64, 256] 0
Linear-36 [[64, 256]] [64, 16] 4,112
LogSoftmax-4 [[64, 16]] [64, 16] 0
===========================================================================
Total params: 3,476,825
Trainable params: 3,461,721
Non-trainable params: 15,104
---------------------------------------------------------------------------
Input size (MB): 1.50
Forward/backward pass size (MB): 11333.40
Params size (MB): 13.26
Estimated Total Size (MB): 11348.16
---------------------------------------------------------------------------
###Markdown
IV. Training Note: during training, the `paddle.optimizer.Adam` optimizer is used for optimization, and `F.nll_loss` is used to compute the loss.
###Code
def train():
model = PointNet(num_classes=16, num_point=2048)
model.train()
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
epoch_num = 10
for epoch in range(epoch_num):
# train
print("===================================train===========================================")
for batch_id, data in enumerate(train_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
if batch_id % 20 == 0:
print("train: epoch: {}, batch_id: {}, loss is: {}, accuracy is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
loss.backward()
optim.step()
optim.clear_grad()
if epoch % 2 == 0:
paddle.save(model.state_dict(), './model/PointNet.pdparams')
paddle.save(optim.state_dict(), './model/PointNet.pdopt')
# validation
print("===================================val===========================================")
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(val_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
model.train()
if __name__ == '__main__':
train()
###Output
===================================train===========================================
train: epoch: 0, batch_id: 0, loss is: [8.315134], accuracy is: [0.015625]
###Markdown
V. Evaluation and Testing Note: the trained model is loaded with `model.load_dict` and then evaluated and tested on the test-set data.
###Code
def evaluation():
model = PointNet()
model_state_dict = paddle.load('./model/PointNet.pdparams')
model.load_dict(model_state_dict)
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(test_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
if __name__ == '__main__':
evaluation()
###Output
_____no_output_____
###Markdown
**Point Cloud Processing: Implementing PointNet for Point Cloud Classification** **Author**: [Zhihao Cao](https://github.com/WhiteFireFox) **Date**: 2021.12 **Abstract**: This example demonstrates how to implement PointNet with Paddle 2.2 for point-cloud classification on the ShapeNet dataset. I. Environment Setup This tutorial is written for Paddle 2.2; if your environment is a different version, please refer to the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) first.
###Code
import os
import numpy as np
import random
import h5py
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
print(paddle.__version__)
###Output
2.2.1
###Markdown
II. Dataset 2.1 Dataset overview ShapeNet is a richly annotated, large-scale 3D shape dataset released jointly in 2015 by Stanford University, Princeton University, and the Toyota Technological Institute at Chicago. Official link: [https://vision.princeton.edu/projects/2014/3DShapeNets/](https://vision.princeton.edu/projects/2014/3DShapeNets/) AIStudio link: [ShapeNet dataset (preprocessed)](https://aistudio.baidu.com/aistudio/datasetdetail/70460) The dataset is stored as h5 files whose keys are: - 1. data: the xyz coordinates of all points in the sample, - 2. label: the category the sample belongs to, e.g. airplane, - 3. pid: the part type of each point in the sample; for an airplane sample, for instance, its points belong to parts such as wings and fuselage. 2.2 Extracting the dataset
###Code
!unzip data/data70460/shapenet_part_seg_hdf5_data.zip
!mv hdf5_data dataset
###Output
_____no_output_____
###Markdown
2.3 File lists All of the data files in the ShapeNet dataset.
###Code
train_list = ['ply_data_train0.h5', 'ply_data_train1.h5', 'ply_data_train2.h5', 'ply_data_train3.h5', 'ply_data_train4.h5', 'ply_data_train5.h5']
test_list = ['ply_data_test0.h5', 'ply_data_test1.h5']
val_list = ['ply_data_val0.h5']
###Output
_____no_output_____
###Markdown
2.4 Building the data pipeline Note: the entire ShapeNet dataset is read into memory.
###Code
def make_data(mode='train', path='./dataset/', num_point=2048):
datas = []
labels = []
if mode == 'train':
for file_list in train_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
elif mode == 'test':
for file_list in test_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
else:
for file_list in val_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
return datas, labels
###Output
_____no_output_____
###Markdown
Note: the dataset is constructed by subclassing `paddle.io.Dataset`.
###Code
class PointDataset(paddle.io.Dataset):
def __init__(self, datas, labels):
super(PointDataset, self).__init__()
self.datas = datas
self.labels = labels
def __getitem__(self, index):
data = paddle.to_tensor(self.datas[index].T.astype('float32'))
label = paddle.to_tensor(self.labels[index].astype('int64'))
return data, label
def __len__(self):
return len(self.datas)
###Output
_____no_output_____
###Markdown
Note: the Paddle API `paddle.io.DataLoader` is used to load the data and generate mini-batches of the configured batch size.
###Code
# Load the data
datas, labels = make_data(mode='train', num_point=2048)
train_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='val', num_point=2048)
val_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='test', num_point=2048)
test_dataset = PointDataset(datas, labels)
# Instantiate the data loaders
train_loader = paddle.io.DataLoader(
train_dataset,
batch_size=128,
shuffle=True,
drop_last=False
)
val_loader = paddle.io.DataLoader(
val_dataset,
batch_size=32,
shuffle=False,
drop_last=False
)
test_loader = paddle.io.DataLoader(
test_dataset,
batch_size=128,
shuffle=False,
drop_last=False
)
###Output
_____no_output_____
###Markdown
III. Defining the Network PointNet is a point-cloud processing network proposed by researchers at Stanford University. The paper introduces a spatial transformer network (T-Net) to handle point-cloud rotation (note: a rotated point cloud still represents the same object, so a dedicated sub-network is needed to learn and correct for this rotation), and it uses max pooling to effectively extract a global feature of the point cloud. 3.1 Defining the network structure
###Code
class PointNet(nn.Layer):
def __init__(self, name_scope='PointNet_', num_classes=16, num_point=2048):
super(PointNet, self).__init__()
self.input_transform_net = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.input_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 9,
weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
bias_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
)
)
self.mlp_1 = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU()
)
self.feature_transform_net = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.feature_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 64*64)
)
self.mlp_2 = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU()
)
self.fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(p=0.7),
nn.Linear(256, num_classes),
nn.LogSoftmax(axis=-1)
)
def forward(self, inputs):
batchsize = inputs.shape[0]
t_net = self.input_transform_net(inputs)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.input_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 3, 3])
x = paddle.transpose(inputs, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_1(x)
t_net = self.feature_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.feature_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 64, 64])
x = paddle.squeeze(x, axis=-1)
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_2(x)
x = paddle.max(x, axis=-1)
x = paddle.squeeze(x, axis=-1)
x = self.fc(x)
return x
###Output
_____no_output_____
###Markdown
3.2 Visualizing the network structure Note: the Paddle API `paddle.summary` is used to visualize the model structure.
###Code
pointnet = PointNet()
paddle.summary(pointnet, (64, 3, 2048))
###Output
W1108 18:17:31.528717 5445 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W1108 18:17:31.534310 5445 device_context.cc:465] device: 0, cuDNN Version: 7.6.
###Markdown
IV. Training Note: during training, the `paddle.optimizer.Adam` optimizer is used for optimization, and `F.nll_loss` is used to compute the loss.
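Because the loop below writes `PointNet.pdparams` and `PointNet.pdopt` every other epoch, training can be resumed from the latest checkpoint. A hedged sketch of how that could look (not part of the original notebook):

```python
# Sketch: resume training from the checkpoints written by the training loop.
model = PointNet(num_classes=16, num_point=2048)
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
model.set_state_dict(paddle.load('./model/PointNet.pdparams'))
optim.set_state_dict(paddle.load('./model/PointNet.pdopt'))
model.train()
```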
###Code
def train():
model = PointNet(num_classes=16, num_point=2048)
model.train()
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
epoch_num = 10
for epoch in range(epoch_num):
# train
print("===================================train===========================================")
for batch_id, data in enumerate(train_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
if batch_id % 20 == 0:
print("train: epoch: {}, batch_id: {}, loss is: {}, accuracy is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
loss.backward()
optim.step()
optim.clear_grad()
if epoch % 2 == 0:
paddle.save(model.state_dict(), './model/PointNet.pdparams')
paddle.save(optim.state_dict(), './model/PointNet.pdopt')
# validation
print("===================================val===========================================")
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(val_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
model.train()
if __name__ == '__main__':
train()
###Output
===================================train===========================================
train: epoch: 0, batch_id: 0, loss is: [7.559999], accuracy is: [0.0625]
train: epoch: 0, batch_id: 20, loss is: [1.2115248], accuracy is: [0.6484375]
train: epoch: 0, batch_id: 40, loss is: [0.6856382], accuracy is: [0.8046875]
train: epoch: 0, batch_id: 60, loss is: [0.58668905], accuracy is: [0.84375]
train: epoch: 0, batch_id: 80, loss is: [0.500105], accuracy is: [0.8515625]
===================================val===========================================
validation: loss is: 0.6364309787750244, accuracy is: 0.8358050584793091
===================================train===========================================
train: epoch: 1, batch_id: 0, loss is: [0.5509058], accuracy is: [0.8046875]
train: epoch: 1, batch_id: 20, loss is: [0.564171], accuracy is: [0.8359375]
train: epoch: 1, batch_id: 40, loss is: [0.49365884], accuracy is: [0.8359375]
train: epoch: 1, batch_id: 60, loss is: [0.3184696], accuracy is: [0.8984375]
train: epoch: 1, batch_id: 80, loss is: [0.4560991], accuracy is: [0.8515625]
===================================val===========================================
validation: loss is: 0.29481494426727295, accuracy is: 0.9141949415206909
===================================train===========================================
train: epoch: 2, batch_id: 0, loss is: [0.34659007], accuracy is: [0.9296875]
train: epoch: 2, batch_id: 20, loss is: [0.28600746], accuracy is: [0.890625]
train: epoch: 2, batch_id: 40, loss is: [0.46038467], accuracy is: [0.890625]
train: epoch: 2, batch_id: 60, loss is: [0.22319293], accuracy is: [0.9375]
train: epoch: 2, batch_id: 80, loss is: [0.18374936], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.21900080144405365, accuracy is: 0.9359109997749329
===================================train===========================================
train: epoch: 3, batch_id: 0, loss is: [0.16127768], accuracy is: [0.953125]
train: epoch: 3, batch_id: 20, loss is: [0.2118332], accuracy is: [0.9453125]
train: epoch: 3, batch_id: 40, loss is: [0.25717354], accuracy is: [0.9375]
train: epoch: 3, batch_id: 60, loss is: [0.1606617], accuracy is: [0.9453125]
train: epoch: 3, batch_id: 80, loss is: [0.3831357], accuracy is: [0.890625]
===================================val===========================================
validation: loss is: 0.15731117129325867, accuracy is: 0.9528601765632629
===================================train===========================================
train: epoch: 4, batch_id: 0, loss is: [0.22388156], accuracy is: [0.9296875]
train: epoch: 4, batch_id: 20, loss is: [0.15476276], accuracy is: [0.953125]
train: epoch: 4, batch_id: 40, loss is: [0.18755408], accuracy is: [0.953125]
train: epoch: 4, batch_id: 60, loss is: [0.19691831], accuracy is: [0.9375]
train: epoch: 4, batch_id: 80, loss is: [0.1511537], accuracy is: [0.9609375]
===================================val===========================================
validation: loss is: 0.11272283643484116, accuracy is: 0.9618644118309021
===================================train===========================================
train: epoch: 5, batch_id: 0, loss is: [0.18051876], accuracy is: [0.9296875]
train: epoch: 5, batch_id: 20, loss is: [0.18252423], accuracy is: [0.953125]
train: epoch: 5, batch_id: 40, loss is: [0.10009789], accuracy is: [0.96875]
train: epoch: 5, batch_id: 60, loss is: [0.18498154], accuracy is: [0.9453125]
train: epoch: 5, batch_id: 80, loss is: [0.08847393], accuracy is: [0.984375]
===================================val===========================================
validation: loss is: 0.11934908479452133, accuracy is: 0.9698092937469482
===================================train===========================================
train: epoch: 6, batch_id: 0, loss is: [0.14827338], accuracy is: [0.9453125]
train: epoch: 6, batch_id: 20, loss is: [0.14230463], accuracy is: [0.9609375]
train: epoch: 6, batch_id: 40, loss is: [0.15367788], accuracy is: [0.9609375]
train: epoch: 6, batch_id: 60, loss is: [0.11884344], accuracy is: [0.9453125]
train: epoch: 6, batch_id: 80, loss is: [0.09308159], accuracy is: [0.984375]
===================================val===========================================
validation: loss is: 0.14607110619544983, accuracy is: 0.960275411605835
===================================train===========================================
train: epoch: 7, batch_id: 0, loss is: [0.25773823], accuracy is: [0.9296875]
train: epoch: 7, batch_id: 20, loss is: [0.11836436], accuracy is: [0.96875]
train: epoch: 7, batch_id: 40, loss is: [0.286631], accuracy is: [0.953125]
train: epoch: 7, batch_id: 60, loss is: [0.07704206], accuracy is: [0.984375]
train: epoch: 7, batch_id: 80, loss is: [0.19048041], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.12925586104393005, accuracy is: 0.9608050584793091
===================================train===========================================
train: epoch: 8, batch_id: 0, loss is: [0.18118389], accuracy is: [0.9453125]
train: epoch: 8, batch_id: 20, loss is: [0.21135367], accuracy is: [0.9453125]
train: epoch: 8, batch_id: 40, loss is: [0.1625056], accuracy is: [0.9453125]
train: epoch: 8, batch_id: 60, loss is: [0.05222891], accuracy is: [0.984375]
train: epoch: 8, batch_id: 80, loss is: [0.18492831], accuracy is: [0.9375]
===================================val===========================================
validation: loss is: 0.11697262525558472, accuracy is: 0.9676907062530518
===================================train===========================================
train: epoch: 9, batch_id: 0, loss is: [0.17470701], accuracy is: [0.953125]
train: epoch: 9, batch_id: 20, loss is: [0.17707036], accuracy is: [0.9375]
train: epoch: 9, batch_id: 40, loss is: [0.11838087], accuracy is: [0.953125]
train: epoch: 9, batch_id: 60, loss is: [0.12307863], accuracy is: [0.96875]
train: epoch: 9, batch_id: 80, loss is: [0.05727548], accuracy is: [0.984375]
===================================val===========================================
validation: loss is: 0.1319371610879898, accuracy is: 0.960275411605835
###Markdown
五、评估与测试说明:通过`model.load_dict`的方式加载训练好的模型对测试集上的数据进行评估与测试。
###Code
def evaluation():
model = PointNet()
model_state_dict = paddle.load('./model/PointNet.pdparams')
model.load_dict(model_state_dict)
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(test_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
if __name__ == '__main__':
evaluation()
###Output
validation: loss is: 0.1707916259765625, accuracy is: 0.9437429904937744
###Markdown
**Point Cloud Processing: Implementing PointNet for Point Cloud Classification** **Author**: [Zhihao Cao](https://github.com/WhiteFireFox) **Date**: 2022.5 **Abstract**: This example demonstrates how to implement PointNet with PaddlePaddle 2.3.0 for point-cloud classification on the ShapeNet dataset. I. Environment Setup This tutorial is written for PaddlePaddle 2.3.0; if your environment is a different version, please refer to the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) first.
###Code
import os
import numpy as np
import random
import h5py
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
print(paddle.__version__)
###Output
2.3.0
###Markdown
II. Dataset 2.1 Dataset overview ShapeNet is a richly annotated, large-scale 3D shape dataset released jointly in 2015 by Stanford University, Princeton University, and the Toyota Technological Institute at Chicago. Official link: [https://vision.princeton.edu/projects/2014/3DShapeNets/](https://vision.princeton.edu/projects/2014/3DShapeNets/) AIStudio link: [ShapeNet dataset (preprocessed)](https://aistudio.baidu.com/aistudio/datasetdetail/70460) The dataset is stored as h5 files whose keys are: - 1. data: the xyz coordinates of all points in the sample, - 2. label: the category the sample belongs to, e.g. airplane, - 3. pid: the part type of each point in the sample; for an airplane sample, for instance, its points belong to parts such as wings and fuselage. 2.2 Extracting the dataset
###Code
!unzip data/data70460/shapenet_part_seg_hdf5_data.zip
!mv hdf5_data dataset
###Output
_____no_output_____
###Markdown
2.3 File lists All of the data files in the ShapeNet dataset.
###Code
train_list = ['ply_data_train0.h5', 'ply_data_train1.h5', 'ply_data_train2.h5', 'ply_data_train3.h5', 'ply_data_train4.h5', 'ply_data_train5.h5']
test_list = ['ply_data_test0.h5', 'ply_data_test1.h5']
val_list = ['ply_data_val0.h5']
###Output
_____no_output_____
###Markdown
2.4 Building the data pipeline Note: the entire ShapeNet dataset is read into memory.
###Code
def make_data(mode='train', path='./dataset/', num_point=2048):
datas = []
labels = []
if mode == 'train':
for file_list in train_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
elif mode == 'test':
for file_list in test_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
else:
for file_list in val_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
return datas, labels
###Output
_____no_output_____
###Markdown
Note: the dataset is constructed by subclassing `paddle.io.Dataset`.
###Code
class PointDataset(paddle.io.Dataset):
def __init__(self, datas, labels):
super(PointDataset, self).__init__()
self.datas = datas
self.labels = labels
def __getitem__(self, index):
data = paddle.to_tensor(self.datas[index].T.astype('float32'))
label = paddle.to_tensor(self.labels[index].astype('int64'))
return data, label
def __len__(self):
return len(self.datas)
###Output
_____no_output_____
###Markdown
Note: the Paddle API `paddle.io.DataLoader` is used to load the data and generate mini-batches of the configured batch size.
###Code
# Load the data
datas, labels = make_data(mode='train', num_point=2048)
train_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='val', num_point=2048)
val_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='test', num_point=2048)
test_dataset = PointDataset(datas, labels)
# Instantiate the data loaders
train_loader = paddle.io.DataLoader(
train_dataset,
batch_size=128,
shuffle=True,
drop_last=False
)
val_loader = paddle.io.DataLoader(
val_dataset,
batch_size=32,
shuffle=False,
drop_last=False
)
test_loader = paddle.io.DataLoader(
test_dataset,
batch_size=128,
shuffle=False,
drop_last=False
)
###Output
_____no_output_____
###Markdown
III. Defining the Network PointNet is a point-cloud processing network proposed by researchers at Stanford University. The paper introduces a spatial transformer network (T-Net) to handle point-cloud rotation (note: a rotated point cloud still represents the same object, so a dedicated sub-network is needed to learn and correct for this rotation), and it uses max pooling to effectively extract a global feature of the point cloud. 3.1 Defining the network structure
###Code
class PointNet(nn.Layer):
def __init__(self, name_scope='PointNet_', num_classes=16, num_point=2048):
super(PointNet, self).__init__()
self.input_transform_net = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.input_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 9,
weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
bias_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
)
)
self.mlp_1 = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU()
)
self.feature_transform_net = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.feature_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 64*64)
)
self.mlp_2 = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU()
)
self.fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(p=0.7),
nn.Linear(256, num_classes),
nn.LogSoftmax(axis=-1)
)
def forward(self, inputs):
batchsize = inputs.shape[0]
t_net = self.input_transform_net(inputs)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.input_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 3, 3])
x = paddle.transpose(inputs, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_1(x)
t_net = self.feature_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.feature_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 64, 64])
x = paddle.squeeze(x, axis=-1)
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_2(x)
x = paddle.max(x, axis=-1)
x = paddle.squeeze(x, axis=-1)
x = self.fc(x)
return x
###Output
_____no_output_____
###Markdown
3.2 Visualizing the network structure Note: the Paddle API `paddle.summary` is used to visualize the model structure.
###Code
pointnet = PointNet()
paddle.summary(pointnet, (64, 3, 2048))
###Output
W0509 16:16:31.949033 135 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0509 16:16:31.957976 135 device_context.cc:465] device: 0, cuDNN Version: 7.6.
###Markdown
IV. Training Note: during training, the `paddle.optimizer.Adam` optimizer is used for optimization, and `F.nll_loss` is used to compute the loss.
###Code
def train():
model = PointNet(num_classes=16, num_point=2048)
model.train()
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
epoch_num = 10
for epoch in range(epoch_num):
# train
print("===================================train===========================================")
for batch_id, data in enumerate(train_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
if batch_id % 20 == 0:
print("train: epoch: {}, batch_id: {}, loss is: {}, accuracy is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
loss.backward()
optim.step()
optim.clear_grad()
if epoch % 2 == 0:
paddle.save(model.state_dict(), './model/PointNet.pdparams')
paddle.save(optim.state_dict(), './model/PointNet.pdopt')
# validation
print("===================================val===========================================")
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(val_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
model.train()
if __name__ == '__main__':
train()
###Output
===================================train===========================================
train: epoch: 0, batch_id: 0, loss is: [8.135595], accuracy is: [0.046875]
train: epoch: 0, batch_id: 20, loss is: [0.96110815], accuracy is: [0.7265625]
train: epoch: 0, batch_id: 40, loss is: [0.77762437], accuracy is: [0.8046875]
train: epoch: 0, batch_id: 60, loss is: [0.575164], accuracy is: [0.84375]
train: epoch: 0, batch_id: 80, loss is: [0.60243726], accuracy is: [0.8359375]
===================================val===========================================
validation: loss is: 0.5027859807014465, accuracy is: 0.848895251750946
===================================train===========================================
train: epoch: 1, batch_id: 0, loss is: [0.5886416], accuracy is: [0.8359375]
train: epoch: 1, batch_id: 20, loss is: [0.59509534], accuracy is: [0.8515625]
train: epoch: 1, batch_id: 40, loss is: [0.43501458], accuracy is: [0.875]
train: epoch: 1, batch_id: 60, loss is: [0.5497817], accuracy is: [0.8515625]
train: epoch: 1, batch_id: 80, loss is: [0.2889481], accuracy is: [0.8984375]
===================================val===========================================
validation: loss is: 0.2470872551202774, accuracy is: 0.9263771176338196
===================================train===========================================
train: epoch: 2, batch_id: 0, loss is: [0.43095332], accuracy is: [0.8984375]
train: epoch: 2, batch_id: 20, loss is: [0.42620662], accuracy is: [0.8984375]
train: epoch: 2, batch_id: 40, loss is: [0.31073096], accuracy is: [0.8984375]
train: epoch: 2, batch_id: 60, loss is: [0.21410619], accuracy is: [0.9375]
train: epoch: 2, batch_id: 80, loss is: [0.23696409], accuracy is: [0.9296875]
===================================val===========================================
validation: loss is: 0.24663102626800537, accuracy is: 0.9278147220611572
===================================train===========================================
train: epoch: 3, batch_id: 0, loss is: [0.1000444], accuracy is: [0.96875]
train: epoch: 3, batch_id: 20, loss is: [0.2845613], accuracy is: [0.9296875]
train: epoch: 3, batch_id: 40, loss is: [0.46592], accuracy is: [0.859375]
train: epoch: 3, batch_id: 60, loss is: [0.3819336], accuracy is: [0.9140625]
train: epoch: 3, batch_id: 80, loss is: [0.08518291], accuracy is: [0.9765625]
===================================val===========================================
validation: loss is: 0.17066480219364166, accuracy is: 0.9491525292396545
===================================train===========================================
train: epoch: 4, batch_id: 0, loss is: [0.11713062], accuracy is: [0.9609375]
train: epoch: 4, batch_id: 20, loss is: [0.1716559], accuracy is: [0.953125]
train: epoch: 4, batch_id: 40, loss is: [0.15082854], accuracy is: [0.96875]
train: epoch: 4, batch_id: 60, loss is: [0.2787561], accuracy is: [0.96875]
train: epoch: 4, batch_id: 80, loss is: [0.11986132], accuracy is: [0.9609375]
===================================val===========================================
validation: loss is: 0.1389710158109665, accuracy is: 0.9608050584793091
===================================train===========================================
train: epoch: 5, batch_id: 0, loss is: [0.17427993], accuracy is: [0.9453125]
train: epoch: 5, batch_id: 20, loss is: [0.25355965], accuracy is: [0.9609375]
train: epoch: 5, batch_id: 40, loss is: [0.18881711], accuracy is: [0.9609375]
train: epoch: 5, batch_id: 60, loss is: [0.14433464], accuracy is: [0.953125]
train: epoch: 5, batch_id: 80, loss is: [0.13028377], accuracy is: [0.96875]
===================================val===========================================
validation: loss is: 0.09753856807947159, accuracy is: 0.9671609997749329
===================================train===========================================
train: epoch: 6, batch_id: 0, loss is: [0.12662013], accuracy is: [0.9765625]
train: epoch: 6, batch_id: 20, loss is: [0.1309431], accuracy is: [0.9609375]
train: epoch: 6, batch_id: 40, loss is: [0.29988244], accuracy is: [0.9453125]
train: epoch: 6, batch_id: 60, loss is: [0.114668], accuracy is: [0.9609375]
train: epoch: 6, batch_id: 80, loss is: [0.48784435], accuracy is: [0.9296875]
===================================val===========================================
validation: loss is: 0.16411711275577545, accuracy is: 0.9576271176338196
===================================train===========================================
train: epoch: 7, batch_id: 0, loss is: [0.12558301], accuracy is: [0.9609375]
train: epoch: 7, batch_id: 20, loss is: [0.1776012], accuracy is: [0.953125]
train: epoch: 7, batch_id: 40, loss is: [0.12831621], accuracy is: [0.9609375]
train: epoch: 7, batch_id: 60, loss is: [0.15245995], accuracy is: [0.953125]
train: epoch: 7, batch_id: 80, loss is: [0.08825297], accuracy is: [0.9609375]
===================================val===========================================
validation: loss is: 0.06742173433303833, accuracy is: 0.9809321761131287
===================================train===========================================
train: epoch: 8, batch_id: 0, loss is: [0.07868354], accuracy is: [0.96875]
train: epoch: 8, batch_id: 20, loss is: [0.1875119], accuracy is: [0.96875]
train: epoch: 8, batch_id: 40, loss is: [0.04444], accuracy is: [0.9921875]
train: epoch: 8, batch_id: 60, loss is: [0.08977574], accuracy is: [0.9765625]
train: epoch: 8, batch_id: 80, loss is: [0.13062863], accuracy is: [0.9765625]
===================================val===========================================
validation: loss is: 0.13399624824523926, accuracy is: 0.9661017060279846
===================================train===========================================
train: epoch: 9, batch_id: 0, loss is: [0.14676869], accuracy is: [0.953125]
train: epoch: 9, batch_id: 20, loss is: [0.16409941], accuracy is: [0.9609375]
train: epoch: 9, batch_id: 40, loss is: [0.08795467], accuracy is: [0.96875]
train: epoch: 9, batch_id: 60, loss is: [0.05970801], accuracy is: [0.984375]
train: epoch: 9, batch_id: 80, loss is: [0.2631768], accuracy is: [0.9296875]
===================================val===========================================
validation: loss is: 0.11335306614637375, accuracy is: 0.9682203531265259
###Markdown
V. Evaluation and Testing Note: the trained model is loaded with `model.load_dict` and then evaluated and tested on the test-set data.
###Code
def evaluation():
model = PointNet()
model_state_dict = paddle.load('./model/PointNet.pdparams')
model.load_dict(model_state_dict)
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(test_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
if __name__ == '__main__':
evaluation()
###Output
_____no_output_____
###Markdown
**Point Cloud Processing: Implementing PointNet for Point Cloud Classification** **Author**: [Zhihao Cao](https://github.com/WhiteFireFox) **Date**: 2021.11 **Abstract**: This example demonstrates how to implement PointNet with Paddle 2.2 for point-cloud classification on the ShapeNet dataset. I. Environment Setup This tutorial is written for Paddle 2.2; if your environment is a different version, please refer to the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) first.
###Code
import os
import numpy as np
import random
import h5py
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
print(paddle.__version__)
###Output
2.2.0
###Markdown
II. Dataset 2.1 Dataset overview ShapeNet is a richly annotated, large-scale 3D shape dataset released jointly in 2015 by Stanford University, Princeton University, and the Toyota Technological Institute at Chicago. Official link: [https://vision.princeton.edu/projects/2014/3DShapeNets/](https://vision.princeton.edu/projects/2014/3DShapeNets/) AIStudio link: [ShapeNet dataset (preprocessed)](https://aistudio.baidu.com/aistudio/datasetdetail/70460) The dataset is stored as h5 files whose keys are: - 1. data: the xyz coordinates of all points in the sample, - 2. label: the category the sample belongs to, e.g. airplane, - 3. pid: the part type of each point in the sample; for an airplane sample, for instance, its points belong to parts such as wings and fuselage. 2.2 Extracting the dataset
###Code
!unzip data/data70460/shapenet_part_seg_hdf5_data.zip
!mv hdf5_data dataset
###Output
_____no_output_____
###Markdown
2.3 File lists All of the data files in the ShapeNet dataset.
###Code
train_list = ['ply_data_train0.h5', 'ply_data_train1.h5', 'ply_data_train2.h5', 'ply_data_train3.h5', 'ply_data_train4.h5', 'ply_data_train5.h5']
test_list = ['ply_data_test0.h5', 'ply_data_test1.h5']
val_list = ['ply_data_val0.h5']
###Output
_____no_output_____
###Markdown
2.4 Building the data pipeline Note: the entire ShapeNet dataset is read into memory.
###Code
def make_data(mode='train', path='./dataset/', num_point=2048):
datas = []
labels = []
if mode == 'train':
for file_list in train_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
elif mode == 'test':
for file_list in test_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
else:
for file_list in val_list:
f = h5py.File(os.path.join(path, file_list), 'r')
datas.extend(f['data'][:, :num_point, :])
labels.extend(f['label'])
f.close()
return datas, labels
###Output
_____no_output_____
###Markdown
Note: the dataset is constructed by subclassing `paddle.io.Dataset`.
###Code
class PointDataset(paddle.io.Dataset):
def __init__(self, datas, labels):
super(PointDataset, self).__init__()
self.datas = datas
self.labels = labels
def __getitem__(self, index):
data = paddle.to_tensor(self.datas[index].T.astype('float32'))
label = paddle.to_tensor(self.labels[index].astype('int64'))
return data, label
def __len__(self):
return len(self.datas)
###Output
_____no_output_____
###Markdown
Note: the Paddle API `paddle.io.DataLoader` is used to load the data and generate mini-batches of the configured batch size.
###Code
# Load the data
datas, labels = make_data(mode='train', num_point=2048)
train_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='val', num_point=2048)
val_dataset = PointDataset(datas, labels)
datas, labels = make_data(mode='test', num_point=2048)
test_dataset = PointDataset(datas, labels)
# Instantiate the data loaders
train_loader = paddle.io.DataLoader(
train_dataset,
batch_size=128,
shuffle=True,
drop_last=False
)
val_loader = paddle.io.DataLoader(
val_dataset,
batch_size=32,
shuffle=False,
drop_last=False
)
test_loader = paddle.io.DataLoader(
test_dataset,
batch_size=128,
shuffle=False,
drop_last=False
)
###Output
_____no_output_____
###Markdown
III. Defining the Network PointNet is a point-cloud processing network proposed by researchers at Stanford University. The paper introduces a spatial transformer network (T-Net) to handle point-cloud rotation (note: a rotated point cloud still represents the same object, so a dedicated sub-network is needed to learn and correct for this rotation), and it uses max pooling to effectively extract a global feature of the point cloud. 3.1 Defining the network structure
###Code
class PointNet(nn.Layer):
def __init__(self, name_scope='PointNet_', num_classes=16, num_point=2048):
super(PointNet, self).__init__()
self.input_transform_net = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.input_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 9,
weight_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
bias_attr=paddle.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
)
)
self.mlp_1 = nn.Sequential(
nn.Conv1D(3, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU()
)
self.feature_transform_net = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU(),
nn.MaxPool1D(num_point)
)
self.feature_fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Linear(256, 64*64)
)
self.mlp_2 = nn.Sequential(
nn.Conv1D(64, 64, 1),
nn.BatchNorm(64),
nn.ReLU(),
nn.Conv1D(64, 128, 1),
nn.BatchNorm(128),
nn.ReLU(),
nn.Conv1D(128, 1024, 1),
nn.BatchNorm(1024),
nn.ReLU()
)
self.fc = nn.Sequential(
nn.Linear(1024, 512),
nn.ReLU(),
nn.Linear(512, 256),
nn.ReLU(),
nn.Dropout(p=0.7),
nn.Linear(256, num_classes),
nn.LogSoftmax(axis=-1)
)
def forward(self, inputs):
batchsize = inputs.shape[0]
t_net = self.input_transform_net(inputs)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.input_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 3, 3])
x = paddle.transpose(inputs, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_1(x)
t_net = self.feature_transform_net(x)
t_net = paddle.squeeze(t_net, axis=-1)
t_net = self.feature_fc(t_net)
t_net = paddle.reshape(t_net, [batchsize, 64, 64])
x = paddle.squeeze(x, axis=-1)
x = paddle.transpose(x, (0, 2, 1))
x = paddle.matmul(x, t_net)
x = paddle.transpose(x, (0, 2, 1))
x = self.mlp_2(x)
x = paddle.max(x, axis=-1)
x = paddle.squeeze(x, axis=-1)
x = self.fc(x)
return x
###Output
_____no_output_____
###Markdown
3.2 Visualizing the network structure Note: the Paddle API `paddle.summary` is used to visualize the model structure.
###Code
pointnet = PointNet()
paddle.summary(pointnet, (64, 3, 2048))
###Output
W1108 18:17:31.528717 5445 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W1108 18:17:31.534310 5445 device_context.cc:465] device: 0, cuDNN Version: 7.6.
###Markdown
IV. Training Note: during training, the `paddle.optimizer.Adam` optimizer is used for optimization, and `F.nll_loss` is used to compute the loss.
###Code
def train():
model = PointNet(num_classes=16, num_point=2048)
model.train()
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
epoch_num = 10
for epoch in range(epoch_num):
# train
print("===================================train===========================================")
for batch_id, data in enumerate(train_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
if batch_id % 20 == 0:
print("train: epoch: {}, batch_id: {}, loss is: {}, accuracy is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
loss.backward()
optim.step()
optim.clear_grad()
if epoch % 2 == 0:
paddle.save(model.state_dict(), './model/PointNet.pdparams')
paddle.save(optim.state_dict(), './model/PointNet.pdopt')
# validation
print("===================================val===========================================")
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(val_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
model.train()
if __name__ == '__main__':
train()
###Output
===================================train===========================================
train: epoch: 0, batch_id: 0, loss is: [7.559999], accuracy is: [0.0625]
train: epoch: 0, batch_id: 20, loss is: [1.2115248], accuracy is: [0.6484375]
train: epoch: 0, batch_id: 40, loss is: [0.6856382], accuracy is: [0.8046875]
train: epoch: 0, batch_id: 60, loss is: [0.58668905], accuracy is: [0.84375]
train: epoch: 0, batch_id: 80, loss is: [0.500105], accuracy is: [0.8515625]
===================================val===========================================
validation: loss is: 0.6364309787750244, accuracy is: 0.8358050584793091
===================================train===========================================
train: epoch: 1, batch_id: 0, loss is: [0.5509058], accuracy is: [0.8046875]
train: epoch: 1, batch_id: 20, loss is: [0.564171], accuracy is: [0.8359375]
train: epoch: 1, batch_id: 40, loss is: [0.49365884], accuracy is: [0.8359375]
train: epoch: 1, batch_id: 60, loss is: [0.3184696], accuracy is: [0.8984375]
train: epoch: 1, batch_id: 80, loss is: [0.4560991], accuracy is: [0.8515625]
===================================val===========================================
validation: loss is: 0.29481494426727295, accuracy is: 0.9141949415206909
===================================train===========================================
train: epoch: 2, batch_id: 0, loss is: [0.34659007], accuracy is: [0.9296875]
train: epoch: 2, batch_id: 20, loss is: [0.28600746], accuracy is: [0.890625]
train: epoch: 2, batch_id: 40, loss is: [0.46038467], accuracy is: [0.890625]
train: epoch: 2, batch_id: 60, loss is: [0.22319293], accuracy is: [0.9375]
train: epoch: 2, batch_id: 80, loss is: [0.18374936], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.21900080144405365, accuracy is: 0.9359109997749329
===================================train===========================================
train: epoch: 3, batch_id: 0, loss is: [0.16127768], accuracy is: [0.953125]
train: epoch: 3, batch_id: 20, loss is: [0.2118332], accuracy is: [0.9453125]
train: epoch: 3, batch_id: 40, loss is: [0.25717354], accuracy is: [0.9375]
train: epoch: 3, batch_id: 60, loss is: [0.1606617], accuracy is: [0.9453125]
train: epoch: 3, batch_id: 80, loss is: [0.3831357], accuracy is: [0.890625]
===================================val===========================================
validation: loss is: 0.15731117129325867, accuracy is: 0.9528601765632629
===================================train===========================================
train: epoch: 4, batch_id: 0, loss is: [0.22388156], accuracy is: [0.9296875]
train: epoch: 4, batch_id: 20, loss is: [0.15476276], accuracy is: [0.953125]
train: epoch: 4, batch_id: 40, loss is: [0.18755408], accuracy is: [0.953125]
train: epoch: 4, batch_id: 60, loss is: [0.19691831], accuracy is: [0.9375]
train: epoch: 4, batch_id: 80, loss is: [0.1511537], accuracy is: [0.9609375]
===================================val===========================================
validation: loss is: 0.11272283643484116, accuracy is: 0.9618644118309021
===================================train===========================================
train: epoch: 5, batch_id: 0, loss is: [0.18051876], accuracy is: [0.9296875]
train: epoch: 5, batch_id: 20, loss is: [0.18252423], accuracy is: [0.953125]
train: epoch: 5, batch_id: 40, loss is: [0.10009789], accuracy is: [0.96875]
train: epoch: 5, batch_id: 60, loss is: [0.18498154], accuracy is: [0.9453125]
train: epoch: 5, batch_id: 80, loss is: [0.08847393], accuracy is: [0.984375]
===================================val===========================================
validation: loss is: 0.11934908479452133, accuracy is: 0.9698092937469482
===================================train===========================================
train: epoch: 6, batch_id: 0, loss is: [0.14827338], accuracy is: [0.9453125]
train: epoch: 6, batch_id: 20, loss is: [0.14230463], accuracy is: [0.9609375]
train: epoch: 6, batch_id: 40, loss is: [0.15367788], accuracy is: [0.9609375]
train: epoch: 6, batch_id: 60, loss is: [0.11884344], accuracy is: [0.9453125]
train: epoch: 6, batch_id: 80, loss is: [0.09308159], accuracy is: [0.984375]
===================================val===========================================
validation: loss is: 0.14607110619544983, accuracy is: 0.960275411605835
===================================train===========================================
train: epoch: 7, batch_id: 0, loss is: [0.25773823], accuracy is: [0.9296875]
train: epoch: 7, batch_id: 20, loss is: [0.11836436], accuracy is: [0.96875]
train: epoch: 7, batch_id: 40, loss is: [0.286631], accuracy is: [0.953125]
train: epoch: 7, batch_id: 60, loss is: [0.07704206], accuracy is: [0.984375]
train: epoch: 7, batch_id: 80, loss is: [0.19048041], accuracy is: [0.9453125]
===================================val===========================================
validation: loss is: 0.12925586104393005, accuracy is: 0.9608050584793091
===================================train===========================================
train: epoch: 8, batch_id: 0, loss is: [0.18118389], accuracy is: [0.9453125]
train: epoch: 8, batch_id: 20, loss is: [0.21135367], accuracy is: [0.9453125]
train: epoch: 8, batch_id: 40, loss is: [0.1625056], accuracy is: [0.9453125]
train: epoch: 8, batch_id: 60, loss is: [0.05222891], accuracy is: [0.984375]
train: epoch: 8, batch_id: 80, loss is: [0.18492831], accuracy is: [0.9375]
===================================val===========================================
validation: loss is: 0.11697262525558472, accuracy is: 0.9676907062530518
===================================train===========================================
train: epoch: 9, batch_id: 0, loss is: [0.17470701], accuracy is: [0.953125]
train: epoch: 9, batch_id: 20, loss is: [0.17707036], accuracy is: [0.9375]
train: epoch: 9, batch_id: 40, loss is: [0.11838087], accuracy is: [0.953125]
train: epoch: 9, batch_id: 60, loss is: [0.12307863], accuracy is: [0.96875]
train: epoch: 9, batch_id: 80, loss is: [0.05727548], accuracy is: [0.984375]
===================================val===========================================
validation: loss is: 0.1319371610879898, accuracy is: 0.960275411605835
###Markdown
5. Evaluation and testing. Note: load the trained model via `model.load_dict` and evaluate/test it on the test-set data.
###Code
def evaluation():
model = PointNet()
model_state_dict = paddle.load('./model/PointNet.pdparams')
model.load_dict(model_state_dict)
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(test_loader()):
inputs, labels = data
predicts = model(inputs)
loss = F.nll_loss(predicts, labels)
acc = paddle.metric.accuracy(predicts, labels)
losses.append(loss.numpy())
accuracies.append(acc.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("validation: loss is: {}, accuracy is: {}".format(avg_loss, avg_acc))
if __name__ == '__main__':
evaluation()
###Output
validation: loss is: 0.1707916259765625, accuracy is: 0.9437429904937744
|
graphsage/doc/preprocess.ipynb | ###Markdown
###Code
# python -m graphsage.supervised_train --train_prefix ./example_data/toy-ppi --model graphsage_mean --sigmoid
# test networkx and visualization
import networkx as nx
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
G = nx.complete_graph(6)
nx.draw(G)
# download code and data
!git clone https://github.com/williamleif/GraphSAGE
import json
from networkx.readwrite import json_graph
import os
import numpy as np
import sys
CODE_ROOT = "GraphSAGE/graphsage"
sys.path.append("GraphSAGE")
def load_data():
data_path = 'GraphSAGE/example_data'
# DATA 1, 14755 nodes, 228431 links
G_data = json.load(open(data_path + '/toy-ppi-G.json'))
#G_data['nodes'] = G_data['nodes'][:100]
#G_data['links'] = G_data['links'][:100]
G = json_graph.node_link_graph(G_data)
conversion = lambda n : n
lab_conversion = lambda n : n
# DATA 2, (14755, 50) dtype('float64')
feats = np.load(data_path + '/toy-ppi-feats.npy')
# DATA 3, {"0": 0, "1": 1}, len: 14755
# node ids to integer values indexing feature tensor
# not really used here
id_map = json.load(open(data_path + "/toy-ppi-id_map.json"))
# DATA 4, dict, len: 14755, column 121
# from node ids to class values (integer or list)
# classification labels
class_map = json.load(open(data_path + "/toy-ppi-class_map.json"))
broken_count = 0
for node in G.nodes():
if not 'val' in G.nodes()[node] or not 'test' in G.nodes()[node]:
G.remove_node(node)
broken_count += 1
print("Removed {:d} nodes that lacked proper annotations due to networkx versioning issues".format(broken_count))
# edge: (0, 800)
# G[0]: the set of edges between a node and all of its neighboring nodes
# mark the relations (i.e. edges) that should be removed during training
for edge in G.edges():
if (G.nodes()[edge[0]]['val'] or G.nodes()[edge[1]]['val'] or
G.nodes()[edge[0]]['test'] or G.nodes()[edge[1]]['test']):
G[edge[0]][edge[1]]['train_removed'] = True
else:
G[edge[0]][edge[1]]['train_removed'] = False
from sklearn.preprocessing import StandardScaler
# ids of the training-set nodes, result only int, len: 9716
train_ids = np.array([id_map[str(n)] for n in G.nodes() \
if not G.nodes()[n]['val'] and not G.nodes()[n]['test']])
train_feats = feats[train_ids]
# feature scaling / standardization: z = (x - u) / s
# u is the mean of the training samples
# s is the standard deviation of the training samples
scaler = StandardScaler()
scaler.fit(train_feats)
feats = scaler.transform(feats)
walks = []
return G, feats, id_map, walks, class_map
def construct_placeholders(num_classes):
# Define placeholders
placeholders = {
'labels' : tf.compat.v1.placeholder(tf.float32, shape=(None, num_classes), name='labels'),
'dropout': tf.compat.v1.placeholder_with_default(0., shape=(), name='dropout'),
'batch' : tf.compat.v1.placeholder(tf.int32, shape=(None), name='batch1'),
'batch_size' : tf.compat.v1.placeholder(tf.int32, name='batch_size'),
}
return placeholders
train_data = load_data()
G = train_data[0]
features = train_data[1]
id_map = train_data[2]
context_pairs = train_data[3]
class_map = train_data[4]
# num_classes = 121
num_classes = len(list(class_map.values())[0])
# pad with dummy zero vector, row wise
features = np.vstack([features, np.zeros((features.shape[1],))])
placeholders = construct_placeholders(num_classes)
class NodeMinibatchIterator(object):
"""
This minibatch iterator iterates over nodes for supervised learning.
G -- networkx graph
id2idx -- dict mapping node ids to integer values indexing feature tensor
placeholders -- standard tensorflow placeholders object for feeding
label_map -- map from node ids to class values (integer or list)
num_classes -- number of output classes
batch_size -- size of the minibatches
max_degree -- maximum size of the downsampled adjacency lists
Taking the toy-ppi dataset as an example:
label_map is the output, with shape (14755, 121)
num_classes is the second dimension of label_map, i.e. 121
"""
def __init__(self, G, id2idx,
placeholders, label_map, num_classes,
batch_size=100, max_degree=25,
**kwargs):
self.G = G
self.nodes = G.nodes()
self.id2idx = id2idx
self.placeholders = placeholders
self.batch_size = batch_size
self.max_degree = max_degree
self.batch_num = 0
self.label_map = label_map
self.num_classes = num_classes
self.adj, self.deg = self.construct_adj()
self.test_adj = self.construct_test_adj()
self.val_nodes = [n for n in self.G.nodes() if self.G.nodes()[n]['val']]
self.test_nodes = [n for n in self.G.nodes() if self.G.nodes()[n]['test']]
# node ids excluded from training
self.no_train_nodes_set = set(self.val_nodes + self.test_nodes)
# node ids available for training
self.train_nodes = set(G.nodes()).difference(self.no_train_nodes_set)
# don't train on nodes that only have edges to test set
# keep only nodes that have at least one neighbor
self.train_nodes = [n for n in self.train_nodes if self.deg[id2idx[str(n)]] > 0]
def _make_label_vec(self, node):
label = self.label_map[node]
if isinstance(label, list):
label_vec = np.array(label)
else:
label_vec = np.zeros((self.num_classes))
class_ind = self.label_map[node]
label_vec[class_ind] = 1
return label_vec
def construct_adj(self):
# adjacency shape: (14756, 128), stores the neighbor ids of every node
adj = len(self.id2idx) * np.ones((len(self.id2idx)+1, self.max_degree))
# (14755,), stores the degree of every node
deg = np.zeros((len(self.id2idx),))
for nodeid in self.G.nodes():
if self.G.nodes()[nodeid]['test'] or self.G.nodes()[nodeid]['val']:
continue
# collect the ids of the neighbors of this training node
neighbors = np.array([self.id2idx[str(neighbor)]
for neighbor in self.G.neighbors(nodeid)
if (not self.G[nodeid][neighbor]['train_removed'])])
deg[self.id2idx[str(nodeid)]] = len(neighbors)
if len(neighbors) == 0:
continue
if len(neighbors) > self.max_degree:
neighbors = np.random.choice(neighbors, self.max_degree, replace=False)
elif len(neighbors) < self.max_degree:
neighbors = np.random.choice(neighbors, self.max_degree, replace=True)
adj[self.id2idx[str(nodeid)], :] = neighbors
return adj, deg
def construct_test_adj(self):
adj = len(self.id2idx) * np.ones((len(self.id2idx)+1, self.max_degree))
for nodeid in self.G.nodes():
# ids of all neighbor nodes; no restriction to train or test set here
neighbors = np.array([self.id2idx[str(neighbor)]
for neighbor in self.G.neighbors(nodeid)])
if len(neighbors) == 0:
continue
if len(neighbors) > self.max_degree:
neighbors = np.random.choice(neighbors, self.max_degree, replace=False)
elif len(neighbors) < self.max_degree:
neighbors = np.random.choice(neighbors, self.max_degree, replace=True)
adj[self.id2idx[str(nodeid)], :] = neighbors
return adj
def end(self):
return self.batch_num * self.batch_size >= len(self.train_nodes)
def batch_feed_dict(self, batch_nodes, val=False):
batch1id = batch_nodes
batch1 = [self.id2idx[n] for n in batch1id]
labels = np.vstack([self._make_label_vec(node) for node in batch1id])
feed_dict = dict()
feed_dict.update({self.placeholders['batch_size'] : len(batch1)})
feed_dict.update({self.placeholders['batch']: batch1})
feed_dict.update({self.placeholders['labels']: labels})
return feed_dict, labels
def node_val_feed_dict(self, size=None, test=False):
if test:
val_nodes = self.test_nodes
else:
val_nodes = self.val_nodes
if not size is None:
val_nodes = np.random.choice(val_nodes, size, replace=True)
# add a dummy neighbor
ret_val = self.batch_feed_dict(val_nodes)
return ret_val[0], ret_val[1]
def incremental_node_val_feed_dict(self, size, iter_num, test=False):
if test:
val_nodes = self.test_nodes
else:
val_nodes = self.val_nodes
val_node_subset = val_nodes[iter_num*size:min((iter_num+1)*size,
len(val_nodes))]
# add a dummy neighbor
ret_val = self.batch_feed_dict(val_node_subset)
return ret_val[0], ret_val[1], (iter_num+1)*size >= len(val_nodes), val_node_subset
def num_training_batches(self):
return len(self.train_nodes) // self.batch_size + 1
def next_minibatch_feed_dict(self):
start_idx = self.batch_num * self.batch_size
self.batch_num += 1
end_idx = min(start_idx + self.batch_size, len(self.train_nodes))
batch_nodes = self.train_nodes[start_idx : end_idx]
return self.batch_feed_dict(batch_nodes)
def incremental_embed_feed_dict(self, size, iter_num):
node_list = self.nodes
val_nodes = node_list[iter_num*size:min((iter_num+1)*size,
len(node_list))]
return self.batch_feed_dict(val_nodes), (iter_num+1)*size >= len(node_list), val_nodes
def shuffle(self):
""" Re-shuffle the training set.
Also reset the batch number.
"""
self.train_nodes = np.random.permutation(self.train_nodes)
self.batch_num = 0
"""
This minibatch iterator iterates over nodes for supervised learning.
G -- networkx graph
id2idx -- dict mapping node ids to integer values indexing feature tensor
placeholders -- standard tensorflow placeholders object for feeding
label_map -- map from node ids to class values (integer or list)
num_classes -- number of output classes
batch_size -- size of the minibatches
max_degree -- maximum size of the downsampled adjacency lists
"""
# instantiate the NodeMinibatch iterator
minibatch = NodeMinibatchIterator(G,
id_map,
placeholders,
class_map,
num_classes,
batch_size=512,
max_degree=128,
context_pairs = context_pairs)
# adjacency shape: (14756, 128), wrapped as a placeholder
adj_info_ph = tf.compat.v1.placeholder(tf.int32, shape=minibatch.adj.shape)
adj_info = tf.Variable(adj_info_ph, trainable=False, name="adj_info")
# the next step would be building the model; too much compatibility code would need to change, so stopping here for now
###Output
_____no_output_____ |
results_analysis/.ipynb_checkpoints/mnist_analysis-checkpoint.ipynb | ###Markdown
FedAVG```class Mnist_Cnn(nn.Module): def __init__(self): super(Mnist_Cnn, self).__init__() self.conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=5, stride=1, padding=2) self.pool = nn.MaxPool2d(4) self.fc1 = nn.Linear(2 * 7 * 7, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = torch.flatten(x, 1) x = self.fc1(x) return x```Centralized model ~ 94-95 % accuracy
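A minimal sketch of the standard FedAvg parameter-averaging step that these results refer to (not code from this notebook; `client_states` and `client_sizes` are assumed to be the clients' `state_dict`s and local dataset sizes $N_k$):
```
import torch

def fedavg_aggregate(client_states, client_sizes):
    # weighted average of every parameter tensor, weights N_k / sum_k N_k
    total = float(sum(client_sizes))
    global_state = {}
    for name in client_states[0]:
        global_state[name] = sum(
            (n_k / total) * state[name].float()
            for state, n_k in zip(client_states, client_sizes)
        )
    return global_state
```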
###Code
plot_client_training(settings_files, "fedavg", 0)
print_test_results(settings_files, "fedavg")
print_test_results(settings_files, "fedprox")
###Output
Run Test accuracy
0 95.01
1 93.16
2 93.47
3 93.59
4 93.87
Mean 93.82
Std 0.64
###Markdown
FedED```class Mnist_Cnn2(nn.Module): def __init__(self): super(Mnist_Cnn2, self).__init__() self.conv1 = nn.Conv2d(1, 16, 5, 1, 2) self.conv2 = nn.Conv2d(16, 32, 5, 1, 2) self.fc1 = nn.Linear(32 * 7 * 7, 128) self.fc2 = nn.Linear(128, 64) self.fc3 = nn.Linear(64, 10) self.pool1 = nn.MaxPool2d(2) self.pool2 = nn.MaxPool2d(2) def forward(self, x): x = self.pool1(F.relu(self.conv1(x))) x = self.pool2(F.relu(self.conv2(x))) x = torch.flatten(x, 1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x loss = nn.MSELoss()```$$ \mathbf{\hat{z}} = \sum_{k\in\mathcal{K}_t} \omega_k \mathbf{z}^k, \quad \omega^k = \frac{N^k}{\sum_{k\in\mathcal{K}_t} N^k}$$
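A minimal sketch of the logit aggregation in the formula above (an illustration under stated assumptions, not code from this notebook): `client_logits` is taken to have shape `(K, B, C)` for the active clients, and `client_sizes` holds the $N^k$:
```
import numpy as np

def aggregate_logits(client_logits, client_sizes):
    w = np.asarray(client_sizes, dtype=float)
    w = w / w.sum()                                  # omega^k = N^k / sum_k N^k
    return np.einsum('k,kbc->bc', w, client_logits)  # z_hat for each public example
```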
###Code
plot_client_training(settings_files, "feded", 0, "cnn1", "mse")
plot_student_results(settings_files, [500, 1000, 5000, 15000, 30000], "cnn1", "mse", 0)
###Output
_____no_output_____
###Markdown
FedED weight scheme 1$$ \mathbf{\hat{z}}_c = \sum_{k\in\mathcal{K}_t} \omega^c_k \mathbf{z}^k_c, \quad \omega^k_c = \frac{N_c^k}{\sum_{k\in\mathcal{K}_t} N_c^k}$$
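A sketch of weight scheme 1 under the same assumptions, with `class_counts` of shape `(K, C)` holding the per-client class counts $N_c^k$:
```
import numpy as np

def aggregate_logits_per_class(client_logits, class_counts):
    # omega^k_c = N_c^k / sum_k N_c^k, one weight per (client, class) pair
    w = class_counts / class_counts.sum(axis=0, keepdims=True)
    return np.einsum('kc,kbc->bc', w, client_logits)
```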
###Code
#plot_client_training("mnist", "feded", 1, "cnn2")
plot_student_results(settings_files, [500, 1000, 5000, 15000, 30000], "cnn2", "mse", 1)
###Output
_____no_output_____
###Markdown
FedED weight scheme 2: 1. Train an autoencoder $H_k$ on private data $\mathcal{D}_k$. 2. For each public data example $x_j$, record the loss $$ l_k(x_j) = MSE(H_k(x_j), x_j). $$ 3. Assign the weight $$\omega^j_k = \frac{1}{l_k(x_j)^6}.$$ 4. Form the weighted mean of the logits over the active clients $\mathcal{K}_t$ $$ \mathbf{\hat{z}}_j = \sum_{k\in\mathcal{K}_t} \omega^j_k \mathbf{z}^j_k. %\quad \omega^k_c = \frac{N_c^k}{\sum_{k\in\mathcal{K}_t} N_c^k}$$ 5. Normalize.Student loss: $$\mathbf{z}_j = F_S(x_j) \qquad MSE(\mathbf{z}_j, \mathbf{\hat{z}}_j)$$<!-- $$ \mathbf{\bar{z}}_c = \sum_{k\in\mathcal{K}_t} N_c^k \mathbf{z}^k_c \\ \mathbf{\hat{z}}_c = \frac{\mathbf{\bar{z}}_c}{\sum_c \mathbf{\bar{z}}_c}$$ -->
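A sketch of weight scheme 2 under the same assumptions, with `recon_losses` of shape `(K, B)` holding the per-example autoencoder losses $l_k(x_j)$ and `client_logits` of shape `(K, B, C)`:
```
import numpy as np

def aggregate_logits_autoencoder(recon_losses, client_logits):
    w = 1.0 / recon_losses ** 6                        # omega^j_k = 1 / l_k(x_j)^6
    z_hat = np.einsum('kb,kbc->bc', w, client_logits)  # weighted sum over clients
    return z_hat / w.sum(axis=0)[:, None]              # normalize
```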
###Code
plot_student_results(settings_files, [500, 1000, 5000, 15000, 30000], "cnn2", "mse", 2)
###Output
_____no_output_____
###Markdown
Student loss: $$CE(\mathbf{t}_j, \mathbf{\hat{t}}_j)$$
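A sketch of this student loss, assuming $\mathbf{t}_j$ and $\mathbf{\hat{t}}_j$ denote the softmax of the student logits and of the aggregated teacher logits respectively:
```
import torch.nn.functional as F

def student_ce_loss(student_logits, teacher_logits):
    log_p = F.log_softmax(student_logits, dim=-1)   # log t_j
    q = F.softmax(teacher_logits, dim=-1)           # t_hat_j
    return -(q * log_p).sum(dim=-1).mean()          # cross-entropy
```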
###Code
plot_student_results(settings_files, [500, 1000, 5000, 15000, 30000], "cnn2", "ce", 2)
###Output
_____no_output_____ |
Sandbox/energy-below-GS.ipynb | ###Markdown
Fix data
###Code
import numpy as np
import os
import json
origin = '../data/TFIM/logsweep/WF/reheating/'
dest = '../data/TFIM/logsweep/WF/reheating-fixed/'
files = sorted(os.listdir(origin))
#os.makedirs(dest)
for file in files[:1]:
with open(origin + file, 'rt') as origin_fp:
data = json.load(origin_fp)
print(*(f'{k} : {type(v)}' for k, v in data.items()), sep='\n')
for file in files:
with open(origin + file, 'rt') as origin_fp:
data = json.load(origin_fp)
norms = np.array(data['norm_check_samples'])
if len(norms) == 200:
norms = norms[100:]
energy_samples = np.array(data['energy_samples']) / norms
gsf_samples = np.array(data['gsf_samples']) / norms
data.update(energy_avg=np.mean(energy_samples))
data.update(energy_std=np.std(energy_samples))
data.update(gsf_avg=np.mean(gsf_samples))
data.update(gsf_std=np.std(gsf_samples))
data.update(energy_samples=energy_samples.tolist())
data.update(gsf_samples=gsf_samples.tolist())
with open(dest + file, 'wt') as dest_fp:
json.dump(data, dest_fp)
os.listdir(dest)
###Output
_____no_output_____ |
bindings/python/docs/Reference/Environment.ipynb | ###Markdown
Library ▸ Physics ▸ Environment Setup
###Code
import sys
! {sys.executable} -m pip install --quiet LibraryCorePy
! {sys.executable} -m pip install --quiet LibraryIOPy
! {sys.executable} -m pip install --quiet LibraryMathematicsPy
# ! {sys.executable} -m pip install --quiet LibraryPhysicsPy
import numpy
import Library.Core as Core
import Library.IO as IO
import Library.Mathematics as Mathematics
import Library.Physics as Physics
Scale = Physics.Time.Scale
Instant = Physics.Time.Instant
Duration = Physics.Time.Duration
Interval = Physics.Time.Interval
Date = Physics.Time.Date
Time = Physics.Time.Time
DateTime = Physics.Time.DateTime
Environment = Physics.Environment
Object = Physics.Environment.Object
Objects = Physics.Environment.Objects
Celestial = Physics.Environment.Objects.Celestial
Earth = Physics.Environment.Objects.CelestialBodies.Earth
###Output
_____no_output_____
###Markdown
--- Physics ▸ Environment **Physics ▸ Environment ▸ Constructors**
###Code
instant = Physics.Time.Instant.DateTime(Physics.Time.DateTime(2018, 1, 1, 0, 0, 0), Physics.Time.Scale.UTC) ;
objects = [Physics.Environment.Objects.CelestialBodies.Earth.Default()] ;
environment = Environment(instant, objects) ;
Environment.Undefined() ;
Environment.Default() ;
###Output
_____no_output_____
###Markdown
**Physics ▸ Environment ▸ Methods**
###Code
environment.isDefined() ;
environment.accessObjectWithName("Earth") ;
environment.getInstant() ;
environment.setInstant(Physics.Time.Instant.DateTime(Physics.Time.DateTime(2018, 1, 1, 0, 0, 0), Physics.Time.Scale.UTC)) ;
segment = Mathematics.Geometry.D3.Objects.Segment(Mathematics.Geometry.D3.Objects.Point(0.0, 0.0, 0.0), Mathematics.Geometry.D3.Objects.Point(7000e3, 0.0, 0.0))
geometry = Physics.Environment.Object.Geometry(segment, Physics.Coordinate.Frame.GCRF())
environment.intersects(geometry, []) ;
###Output
_____no_output_____
###Markdown
Physics ▸ Environment ▸ Object Physics ▸ Environment ▸ Objects Physics ▸ Environment ▸ Objects ▸ Celestial Physics ▸ Environment ▸ Objects ▸ Celestial ▸ Earth **Physics ▸ Environment ▸ Objects ▸ Celestial ▸ Earth ▸ Static Properties**
###Code
Earth.GravitationalParameter ;
Earth.EquatorialRadius ;
Earth.Flattening ;
Earth.C20 ;
Earth.J2 ;
Earth.Models.EGM2008.GravitationalParameter ;
Earth.Models.EGM2008.EquatorialRadius ;
Earth.Models.EGM2008.Flattening ;
Earth.Models.EGM2008.C20 ;
Earth.Models.EGM2008.J2 ;
Earth.Models.WGS84_EGM96.GravitationalParameter ;
Earth.Models.WGS84_EGM96.EquatorialRadius ;
Earth.Models.WGS84_EGM96.Flattening ;
Earth.Models.WGS84_EGM96.C20 ;
Earth.Models.WGS84_EGM96.J2 ;
Earth.Models.EGM96.GravitationalParameter ;
Earth.Models.EGM96.EquatorialRadius ;
Earth.Models.EGM96.Flattening ;
Earth.Models.EGM96.C20 ;
Earth.Models.EGM96.J2 ;
Earth.Models.WGS84.GravitationalParameter ;
Earth.Models.WGS84.EquatorialRadius ;
Earth.Models.WGS84.Flattening ;
Earth.Models.WGS84.C20 ;
Earth.Models.WGS84.J2 ;
###Output
_____no_output_____
###Markdown
Open Space Toolkit ▸ Physics ▸ Environment Setup
###Code
import numpy
import ostk.core as core
import ostk.io as io
import ostk.mathematics as mathematics
import ostk.physics as physics
Point = mathematics.geometry.d3.objects.Point
Segment = mathematics.geometry.d3.objects.Segment
Scale = physics.time.Scale
Instant = physics.time.Instant
Duration = physics.time.Duration
Interval = physics.time.Interval
Date = physics.time.Date
Time = physics.time.Time
DateTime = physics.time.DateTime
Frame = physics.coordinate.Frame
Environment = physics.Environment
Object = physics.environment.Object
Geometry = physics.environment.object.Geometry
Celestial = physics.environment.objects.Celestial
Earth = physics.environment.objects.celestial_bodies.Earth
###Output
_____no_output_____
###Markdown
--- Physics ▸ Environment **Physics ▸ Environment ▸ Constructors**
###Code
instant = Instant.date_time(DateTime(2018, 1, 1, 0, 0, 0), Scale.UTC) ;
objects = [Earth.default()] ;
environment = Environment(instant, objects) ;
Environment.undefined() ;
Environment.default() ;
###Output
_____no_output_____
###Markdown
**Physics ▸ Environment ▸ Methods**
###Code
environment.is_defined() ;
environment.access_object_with_name("Earth") ;
environment.get_instant() ;
environment.set_instant(Instant.date_time(DateTime(2018, 1, 1, 0, 0, 0), Scale.UTC)) ;
segment = Segment(Point(0.0, 0.0, 0.0), Point(7000e3, 0.0, 0.0))
geometry = Geometry(segment, Frame.GCRF())
environment.intersects(geometry, []) ;
###Output
_____no_output_____
###Markdown
Physics ▸ Environment ▸ Object Physics ▸ Environment ▸ Objects Physics ▸ Environment ▸ Objects ▸ Celestial Physics ▸ Environment ▸ Objects ▸ Celestial ▸ Earth **Physics ▸ Environment ▸ Objects ▸ Celestial ▸ Earth ▸ Static Properties**
###Code
Earth.gravitational_parameter ;
Earth.equatorial_radius ;
Earth.flattening ;
Earth.C20 ;
Earth.J2 ;
Earth.Models.EGM2008.gravitational_parameter ;
Earth.Models.EGM2008.equatorial_radius ;
Earth.Models.EGM2008.flattening ;
Earth.Models.EGM2008.C20 ;
Earth.Models.EGM2008.J2 ;
Earth.Models.WGS84_EGM96.gravitational_parameter ;
Earth.Models.WGS84_EGM96.equatorial_radius ;
Earth.Models.WGS84_EGM96.flattening ;
Earth.Models.WGS84_EGM96.C20 ;
Earth.Models.WGS84_EGM96.J2 ;
Earth.Models.EGM96.gravitational_parameter ;
Earth.Models.EGM96.equatorial_radius ;
Earth.Models.EGM96.flattening ;
Earth.Models.EGM96.C20 ;
Earth.Models.EGM96.J2 ;
Earth.Models.WGS84.gravitational_parameter ;
Earth.Models.WGS84.equatorial_radius ;
Earth.Models.WGS84.flattening ;
Earth.Models.WGS84.C20 ;
Earth.Models.WGS84.J2 ;
###Output
_____no_output_____ |
L3-Data-Handling-with-Pandas.ipynb | ###Markdown
Read-in the Data
###Code
import numpy as np
import pandas as pd
pop_df = pd.read_csv("./data/populations.txt", sep='\t')
pop_df.head(5)
pop_df.shape
###Output
_____no_output_____
###Markdown
Let us investigate the dataset by checking the number of variables/features and observations.**Number of variables/features = number of columns in DF**We have 4 variables in the dataset (including year).**Number of observations = number of rows in DF**We have 21 observations in the dataset (21 rows × 4 columns = 84 values in total). Let us check the names of the variables embedded in the dataset. Note that sometimes we do not have column names (variable names) in the dataset.
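For instance, the two counts can be read directly off the shape tuple (rows first, then columns):
```
n_observations, n_variables = pop_df.shape
print(n_observations, n_variables)
```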
###Code
pop_df.columns
pop_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 21 entries, 0 to 20
Data columns (total 5 columns):
year 21 non-null int64
hare 21 non-null float64
lynx 21 non-null float64
carrot 21 non-null int64
fox 21 non-null int64
dtypes: float64(2), int64(3)
memory usage: 968.0 bytes
###Markdown
We only need the values to feed into the models - we can access the values this way.
###Code
pop_df.values
###Output
_____no_output_____
###Markdown
Data types are also an important characteristic of the data; we can access them this way.
###Code
pop_df.dtypes
###Output
_____no_output_____
###Markdown
We can access columns (Pandas series) using their labels:
###Code
hare_df = pop_df["hare"]
hare_df
###Output
_____no_output_____
###Markdown
Or alternatively using the label as a property of the dataframe:
###Code
pop_df.hare
###Output
_____no_output_____
###Markdown
Data ExplorationData exploration is easier with Pandas. The usual numeric operations are available for dataframes or series:
###Code
print ("Mean Hare Population: ", hare_df.mean())
print ("Mean Populations: \n", pop_df[["hare","lynx","carrot"]].mean())
print ("\n")
print ("Standard Deviations: \n", pop_df[["hare","lynx","carrot"]].std())
###Output
Mean Populations:
hare 34080.952381
lynx 20166.666667
carrot 42400.000000
dtype: float64
Standard Deviations:
hare 21413.981859
lynx 16655.999920
carrot 3404.555771
dtype: float64
###Markdown
The describe() method provides a detailed description of variables:
###Code
pop_df[["hare","lynx","carrot"]].describe()
pop_df.describe()
###Output
_____no_output_____
###Markdown
A better way to do correlation analysis:
###Code
pop_df[["hare","lynx","carrot"]].corr()
###Output
_____no_output_____
###Markdown
Also sorting is done easily:
###Code
pop_df.sort_values(by=['hare'])
###Output
_____no_output_____
###Markdown
More examples of accessing and manipulating data in dataframes:
###Code
# finding all instances when the population of hares is above 50k
hare_above_50K = pop_df.hare>50000
print (hare_above_50K)
print ("\n")
print (pop_df[hare_above_50K])
print ("\n")
print (pop_df[hare_above_50K].year)
# finding all instances when the population of one of the animal species is above 50k
above_50K = (pop_df["hare"]>50000) | (pop_df["lynx"]>50000)
print (pop_df[above_50K])
#print pop_df[hare_above_50K].year
###Output
year hare lynx carrot
2 1902 70200.0 9800.0 41500
3 1903 77400.0 35200.0 38200
4 1904 36300.0 59400.0 40600
12 1912 57000.0 12300.0 43800
13 1913 76600.0 19500.0 40900
14 1914 52300.0 45700.0 39400
15 1915 19500.0 51100.0 39000
###Markdown
We know that the *year* column is only an identifier, so we may not need it in the analysis.
###Code
pop2 = pop_df.drop("year", axis=1)
pop2
###Output
_____no_output_____
###Markdown
When necessary, we can convert a dataframe (or a series) into a Numpy array:
###Code
poptable = np.array(pop2)
poptable
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.plot(pop_df["year"], pop_df["hare"])
###Output
_____no_output_____
###Markdown
We can also visualize multiple variables/features in one figure. But you need to make sure:- All data visualized in the same figure should be of the same data type (e.g. you cannot mix continuous and categorical data types in the same figure);- You do not want your visualization to be too busy - below is a good example - but it is **highly discouraged** to include more than **5** variables/features in the same figure.
###Code
plt.plot(pop_df["year"], pop2, label=['Hares','Lynxes','Carrots'])
plt.legend( ('Hares','Lynxes','Carrots') )
plt.ylabel('Population')
plt.xlabel('Year')
plt.show()
###Output
_____no_output_____
###Markdown
Line charts work well for visualizing **continuous** variables (particularly time series like the ones we have here), but they are not useful for **categorical** or discrete variables. Below is one way of looking at the distribution of such a variable.
###Code
plt.hist(pop_df["carrot"], bins=8, alpha=0.5)
plt.xlabel('Carrots')
plt.ylabel('Count')
plt.title('Histogram of Carrot Populations')
plt.axis([36000, 49000, 0, 6])
#plt.grid(True)
###Output
_____no_output_____
###Markdown
Pandas has its own versatile "plot" method that can handle most types of charts:
###Code
pop_df.plot(x="year", title="Populations")
###Output
_____no_output_____
###Markdown
When we want to investigate the cross-variable relationship, we can use **scatterplot** as following.
###Code
pop_df.plot(x="carrot", y="lynx", kind="scatter")
###Output
_____no_output_____
###Markdown
Data Exploration - Interpretation. This is where data analysts bring the most value. Q: Can you explain the relationship between 'lynx' and 'carrot'? Do they have any linear relationship? A: There is no clear relationship for carrot values between 35,000 and 42,000; beyond that the points start to resemble a linear relationship. It is also important to make best-guess assumptions as to why the data looks this way. For instance, how could you explain the stronger relationship between lynx and carrot at higher carrot counts? A boxplot is another visualization tool for investigating the distribution of continuous variables. In a boxplot: - the box spans the **interquartile range** (first to third quartile) of the variable; - the whiskers (above and below the box) extend towards the **rest of the range**, typically up to 1.5 times the interquartile range, with more extreme points drawn as outliers; - the line inside the box is the **median** of the variable. You can use the boxplot to investigate the distribution of a variable - much like checking the 'bell curve' in a distribution chart. For instance, in the chart below, both 'hare' and 'lynx' are right-skewed, while 'carrot' has a roughly normal (but narrow) distribution.
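To see the numbers the boxplot is drawn from, you can compute the quartiles directly; the 0.5 row is the median shown inside each box:
```
pop_df[["hare", "lynx", "carrot"]].quantile([0.25, 0.5, 0.75])
```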
###Code
pop_df.boxplot(column=["hare","lynx","carrot"], return_type='axes')
fox_col = np.random.randint(low=5000, high=20000, size=21)
fox_col
pop_df["fox"] = pd.Series(fox_col, index=pop_df.index)
pop_df
pop_df.plot(x="year", y="fox", kind="area", title="Fox Population")
pd.plotting.scatter_matrix(pop_df[["hare","lynx","carrot"]], figsize=(14,14), hist_kwds={'bins':8}, alpha=.5, marker='o', s=50)
###Output
_____no_output_____ |
Assignment_D_4.ipynb | ###Markdown
###Code
# Find the first Armstrong (narcissistic) number in the range [lower, upper]:
# a number that equals the sum of its own digits, each raised to the power
# of the number of digits.
upper = 702648265
lower = 1042000
for num in range(lower, upper + 1):
    n_digits = len(str(num))
    digit_power_sum = 0
    temp = num
    while temp > 0:
        digit = temp % 10
        digit_power_sum += digit ** n_digits
        temp //= 10
    if num == digit_power_sum:
        print(digit_power_sum)
        break
###Output
1741725
|
notebooks/9_Visualizing-NER.ipynb | ###Markdown
___3. NLP in Practice___ Visualizing Named EntitiesBesides viewing Part of Speech dependencies with `style='dep'`, **displaCy** offers a `style='ent'` visualizer:
###Code
# Perform standard imports
import spacy
nlp = spacy.load('en_core_web_sm')
# Import the displaCy library
from spacy import displacy
doc = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million. '
u'By contrast, Sony sold only 7 thousand Walkman music players.')
displacy.render(doc, style='ent', jupyter=True)
###Output
_____no_output_____
###Markdown
___ Viewing Sentences Line by LineUnlike the **displaCy** dependency parse, the NER viewer has to take in a Doc object with an `ents` attribute. For this reason, we can't just pass a list of spans to `.render()`, we have to create a new Doc from each `span.text`:
###Code
for sent in doc.sents:
displacy.render(nlp(sent.text), style='ent', jupyter=True)
###Output
_____no_output_____
###Markdown
**NOTE**: If a span does not contain any entities, displaCy will issue a harmless warning:
###Code
doc2 = nlp(u'Over the last quarter Apple sold nearly 20 thousand iPods for a profit of $6 million. '
u'By contrast, my kids sold a lot of lemonade.')
for sent in doc2.sents:
displacy.render(nlp(sent.text), style='ent', jupyter=True)
###Output
_____no_output_____
###Markdown
**WORKAROUND:** We can avert this with an additional bit of code:
###Code
for sent in doc2.sents:
docx = nlp(sent.text)
if docx.ents:
displacy.render(docx, style='ent', jupyter=True)
else:
print(docx.text)
###Output
_____no_output_____
###Markdown
___ Viewing Specific EntitiesYou can pass a list of entity types to restrict the visualization:
###Code
options = {'ents': ['ORG', 'PRODUCT']}
displacy.render(doc, style='ent', jupyter=True, options=options)
###Output
_____no_output_____
###Markdown
___ Customizing Colors and EffectsYou can also pass background color and gradient options:
###Code
colors = {'ORG': 'linear-gradient(90deg, #aa9cfc, #fc9ce7)', 'PRODUCT': 'radial-gradient(yellow, green)'}
options = {'ents': ['ORG', 'PRODUCT'], 'colors':colors}
displacy.render(doc, style='ent', jupyter=True, options=options)
###Output
_____no_output_____
###Markdown
For more on applying CSS background colors and gradients, visit https://www.w3schools.com/css/css3_gradients.asp ___ Creating Visualizations Outside of JupyterIf you're using another Python IDE or writing a script, you can choose to have spaCy serve up HTML separately.Instead of `displacy.render()`, use `displacy.serve()`:
###Code
displacy.serve(doc, style='ent', options=options)
###Output
Serving on port 5000...
Using the 'ent' visualizer
|
spinup/algos/pytorch/lstm_ddpg/.ipynb_checkpoints/Non-stationary-checkpoint.ipynb | ###Markdown
Non-stationary
###Code
from copy import deepcopy
import itertools
import numpy as np
import torch
from torch.optim import Adam
import pybulletgym
import gym
import time
import spinup.algos.pytorch.td3.core as core
from spinup.utils.logx import EpochLogger
class ReplayBuffer:
"""
A simple FIFO experience replay buffer for TD3 agents.
"""
def __init__(self, obs_dim, act_dim, size):
self.obs_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.obs2_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.act_buf = np.zeros(core.combined_shape(size, act_dim), dtype=np.float32)
self.rew_buf = np.zeros(size, dtype=np.float32)
self.done_buf = np.zeros(size, dtype=np.float32)
self.ptr, self.size, self.max_size = 0, 0, size
def store(self, obs, act, rew, next_obs, done):
self.obs_buf[self.ptr] = obs
self.obs2_buf[self.ptr] = next_obs
self.act_buf[self.ptr] = act
self.rew_buf[self.ptr] = rew
self.done_buf[self.ptr] = done
self.ptr = (self.ptr+1) % self.max_size
self.size = min(self.size+1, self.max_size)
def sample_batch(self, batch_size=32):
idxs = np.random.randint(0, self.size, size=batch_size)
batch = dict(obs=self.obs_buf[idxs],
obs2=self.obs2_buf[idxs],
act=self.act_buf[idxs],
rew=self.rew_buf[idxs],
done=self.done_buf[idxs])
return {k: torch.as_tensor(v, dtype=torch.float32) for k,v in batch.items()}
class POMDPWrapper(gym.ObservationWrapper):
def __init__(self, env_name):
super().__init__(gym.make(env_name))
# Remove velocity info
# OpenAIGym
# 1. MuJoCo
if env_name == "HalfCheetah-v3" or env_name == "HalfCheetah-v2":
self.remain_obs_idx = np.arange(0, 8)
elif env_name == "Ant-v3" or env_name == "Ant-v2":
self.remain_obs_idx = list(np.arange(0, 13)) + list(np.arange(27, 111))
elif env_name == 'Walker2d-v3' or env_name == "Walker2d-v2":
self.remain_obs_idx = np.arange(0, 8)
elif env_name == 'Hopper-v3' or env_name == "Hopper-v2":
self.remain_obs_idx = np.arange(0, 5)
elif env_name == "InvertedPendulum-v2":
self.remain_obs_idx = np.arange(0, 2)
elif env_name == "InvertedDoublePendulum-v2":
self.remain_obs_idx = list(np.arange(0, 5)) + list(np.arange(8, 11))
elif env_name == "Swimmer-v3" or env_name == "Swimmer-v2":
self.remain_obs_idx = np.arange(0, 3)
elif env_name == "Thrower-v2":
self.remain_obs_idx = list(np.arange(0, 7)) + list(np.arange(14, 23))
elif env_name == "Striker-v2":
self.remain_obs_idx = list(np.arange(0, 7)) + list(np.arange(14, 23))
elif env_name == "Pusher-v2":
self.remain_obs_idx = list(np.arange(0, 7)) + list(np.arange(14, 23))
elif env_name == "Reacher-v2":
self.remain_obs_idx = list(np.arange(0, 6)) + list(np.arange(8, 11))
elif env_name == 'Humanoid-v3' or env_name == "Humanoid-v2":
self.remain_obs_idx = list(np.arange(0, 22)) + list(np.arange(45, 185)) + list(np.arange(269, 376))
elif env_name == 'HumanoidStandup-v2':
self.remain_obs_idx = list(np.arange(0, 22)) + list(np.arange(45, 185)) + list(np.arange(269, 376))
# PyBulletGym
# 1. MuJoCo
elif env_name == 'HalfCheetahMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 8)
elif env_name == 'AntMuJoCoEnv-v0':
self.remain_obs_idx = list(np.arange(0, 13)) + list(np.arange(27, 111))
elif env_name == 'Walker2DMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 8)
elif env_name == 'HopperMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 7)
elif env_name == 'InvertedPendulumMuJoCoEnv-v0':
self.remain_obs_idx = np.arange(0, 3)
elif env_name == 'InvertedDoublePendulumMuJoCoEnv-v0':
self.remain_obs_idx = list(np.arange(0, 5)) + list(np.arange(8, 11))
# 2. Roboschool
elif env_name == 'HalfCheetahPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,26)) - set(np.arange(3,6)))
elif env_name == 'AntPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,28)) - set(np.arange(3,6)))
elif env_name == 'Walker2DPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,22)) - set(np.arange(3,6)))
elif env_name == 'HopperPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,15)) - set(np.arange(3,6)))
elif env_name == 'InvertedPendulumPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,5)) - set([1,4]))
elif env_name == 'InvertedDoublePendulumPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,9)) - set([1,5,8]))
elif env_name == 'ReacherPyBulletEnv-v0':
self.remain_obs_idx = list(set(np.arange(0,9)) - set([6,8]))
else:
raise ValueError('POMDP for {} is not defined!'.format(env_name))
# Redefine observation_space
obs_low = np.array([-np.inf for i in range(len(self.remain_obs_idx))], dtype="float32")
obs_high = np.array([np.inf for i in range(len(self.remain_obs_idx))], dtype="float32")
self.observation_space = gym.spaces.Box(obs_low, obs_high)
def observation(self, obs):
return obs.flatten()[self.remain_obs_idx]
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence
class MLPCritic(nn.Module):
def __init__(self, obs_dim, act_dim, hidden_sizes=(128, 128)):
super(MLPCritic, self).__init__()
self.obs_dim = obs_dim
self.act_dim = act_dim
self.layers = nn.ModuleList()
layer_size = [obs_dim+act_dim]+list(hidden_sizes) + [1]
for h in range(len(layer_size)-2):
self.layers += [nn.Linear(layer_size[h], layer_size[h+1]), nn.ReLU()]
self.layers += [nn.Linear(layer_size[-2], layer_size[-1]), nn.Identity()]
def forward(self, obs, act):
cat_input = torch.cat([obs, act], dim=-1)
x = cat_input
for layer in self.layers:
x = layer(x)
return torch.squeeze(x, -1) # Critical to ensure q has right shape.
class MLPActor(nn.Module):
def __init__(self, obs_dim, act_dim, act_limit, hidden_sizes=(128, 128)):
super(MLPActor, self).__init__()
self.obs_dim = obs_dim
self.act_dim = act_dim
self.act_limit = act_limit
self.layers = nn.ModuleList()
layer_size = [obs_dim]+list(hidden_sizes) + [act_dim]
for h in range(len(layer_size)-2):
self.layers += [nn.Linear(layer_size[h], layer_size[h+1]), nn.ReLU()]
self.layers += [nn.Linear(layer_size[-2], layer_size[-1]), nn.Tanh()]
def forward(self, obs):
x = obs
for layer in self.layers:
x = layer(x)
return self.act_limit * x
class MLPActorCritic(nn.Module):
def __init__(self, obs_dim, act_dim, act_limit, hidden_sizes=(128, 128)):
super(MLPActorCritic, self).__init__()
self.q1 = MLPCritic(obs_dim, act_dim)
self.q2 = MLPCritic(obs_dim, act_dim)
self.pi = MLPActor(obs_dim, act_dim, act_limit=1)
def act(self, obs):
with torch.no_grad():
return self.pi(obs).numpy()
cuda = torch.device('cuda')
def td3(env_name, actor_critic=core.MLPActorCritic, ac_kwargs=dict(), seed=0,
steps_per_epoch=4000, epochs=100, replay_size=int(1e6), gamma=0.99,
polyak=0.995, pi_lr=1e-3, q_lr=1e-3, batch_size=100, start_steps=10000,
update_after=1000, update_every=50, act_noise=0.1, target_noise=0.2,
noise_clip=0.5, policy_delay=2, num_test_episodes=5, max_ep_len=1000,
nonstationary_env = True,
gravity_change_pattern = 'gravity_averagely_equal',
partially_observable = False,
logger_kwargs=dict(), save_freq=1):
"""
Twin Delayed Deep Deterministic Policy Gradient (TD3)
Args:
env_fn : A function which creates a copy of the environment.
The environment must satisfy the OpenAI Gym API.
actor_critic: The constructor method for a PyTorch Module with an ``act``
method, a ``pi`` module, a ``q1`` module, and a ``q2`` module.
The ``act`` method and ``pi`` module should accept batches of
observations as inputs, and ``q1`` and ``q2`` should accept a batch
of observations and a batch of actions as inputs. When called,
these should return:
=========== ================ ======================================
Call Output Shape Description
=========== ================ ======================================
``act`` (batch, act_dim) | Numpy array of actions for each
| observation.
``pi`` (batch, act_dim) | Tensor containing actions from policy
| given observations.
``q1`` (batch,) | Tensor containing one current estimate
| of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
``q2`` (batch,) | Tensor containing the other current
| estimate of Q* for the provided observations
| and actions. (Critical: make sure to
| flatten this!)
=========== ================ ======================================
ac_kwargs (dict): Any kwargs appropriate for the ActorCritic object
you provided to TD3.
seed (int): Seed for random number generators.
steps_per_epoch (int): Number of steps of interaction (state-action pairs)
for the agent and the environment in each epoch.
epochs (int): Number of epochs to run and train agent.
replay_size (int): Maximum length of replay buffer.
gamma (float): Discount factor. (Always between 0 and 1.)
polyak (float): Interpolation factor in polyak averaging for target
networks. Target networks are updated towards main networks
according to:
.. math:: \\theta_{\\text{targ}} \\leftarrow
\\rho \\theta_{\\text{targ}} + (1-\\rho) \\theta
where :math:`\\rho` is polyak. (Always between 0 and 1, usually
close to 1.)
pi_lr (float): Learning rate for policy.
q_lr (float): Learning rate for Q-networks.
batch_size (int): Minibatch size for SGD.
start_steps (int): Number of steps for uniform-random action selection,
before running real policy. Helps exploration.
update_after (int): Number of env interactions to collect before
starting to do gradient descent updates. Ensures replay buffer
is full enough for useful updates.
update_every (int): Number of env interactions that should elapse
between gradient descent updates. Note: Regardless of how long
you wait between updates, the ratio of env steps to gradient steps
is locked to 1.
act_noise (float): Stddev for Gaussian exploration noise added to
policy at training time. (At test time, no noise is added.)
target_noise (float): Stddev for smoothing noise added to target
policy.
noise_clip (float): Limit for absolute value of target policy
smoothing noise.
policy_delay (int): Policy will only be updated once every
policy_delay times for each update of the Q-networks.
num_test_episodes (int): Number of episodes to test the deterministic
policy at the end of each epoch.
max_ep_len (int): Maximum length of trajectory / episode / rollout.
logger_kwargs (dict): Keyword args for EpochLogger.
save_freq (int): How often (in terms of gap between epochs) to save
the current policy and value function.
"""
logger = EpochLogger(**logger_kwargs)
logger.save_config(locals())
torch.manual_seed(seed)
np.random.seed(seed)
# Wrapper environment if using POMDP
if partially_observable == True:
env, test_env = POMDPWrapper(env_name), POMDPWrapper(env_name)
else:
env, test_env = gym.make(env_name), gym.make(env_name)
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
# Action limit for clamping: critically, assumes all dimensions share the same bound!
act_limit = env.action_space.high[0]
# Create actor-critic module and target networks
mlp_c1 = MLPCritic(obs_dim, act_dim)
mlp_c2 = MLPCritic(obs_dim, act_dim)
mlp_a = MLPActor(obs_dim, act_dim, act_limit)
mlp_c1_targ = deepcopy(mlp_c1)
mlp_c2_targ = deepcopy(mlp_c2)
mlp_a_targ = deepcopy(mlp_a)
mlp_c1.cuda()
mlp_c2.cuda()
mlp_a.cuda()
mlp_c1_targ.cuda()
mlp_c2_targ.cuda()
mlp_a_targ.cuda()
# Freeze target networks with respect to optimizers (only update via polyak averaging)
for p in mlp_c1_targ.parameters():
p.requires_grad = False
for p in mlp_c2_targ.parameters():
p.requires_grad = False
for p in mlp_a_targ.parameters():
p.requires_grad = False
# List of parameters for both Q-networks (save this for convenience)
q_params = itertools.chain(mlp_c1.parameters(), mlp_c2.parameters())
# Experience buffer
replay_buffer = ReplayBuffer(obs_dim=obs_dim, act_dim=act_dim, size=replay_size)
# # Count variables (protip: try to get a feel for how different size networks behave!)
# var_counts = tuple(core.count_vars(module) for module in [ac.pi, ac.q1, ac.q2])
# logger.log('\nNumber of parameters: \t pi: %d, \t q1: %d, \t q2: %d\n'%var_counts)
# Set up function for computing TD3 Q-losses
def compute_loss_q(data):
o, a, r, o2, d = data['obs'].to(device=cuda), data['act'].to(device=cuda), data['rew'].to(device=cuda), data['obs2'].to(device=cuda), data['done'].to(device=cuda)
q1 = mlp_c1(o, a)
q2 = mlp_c2(o, a)
# Bellman backup for Q functions
with torch.no_grad():
pi_targ = mlp_a_targ(o2)
a2 = pi_targ
# Target Q-values
q1_pi_targ = mlp_c1_targ(o2, a2)
q2_pi_targ = mlp_c2_targ(o2, a2)
q_pi_targ = torch.min(q1_pi_targ, q2_pi_targ)
backup = r + gamma * (1 - d) * q_pi_targ
# MSE loss against Bellman backup
loss_q1 = ((q1 - backup)**2).mean()
loss_q2 = ((q2 - backup)**2).mean()
loss_q = loss_q1 + loss_q2
# Useful info for logging
loss_info = dict(Q1Vals=q1.detach().cpu().numpy(),
Q2Vals=q2.detach().cpu().numpy())
return loss_q, loss_info
# Set up function for computing TD3 pi loss
def compute_loss_pi(data):
o = data['obs'].to(device=cuda)
q1_pi = mlp_c1(o, mlp_a(o))
return -q1_pi.mean()
# Set up optimizers for policy and q-function
pi_optimizer = Adam(mlp_a.parameters(), lr=pi_lr)
q_optimizer = Adam(q_params, lr=q_lr)
# # Set up model saving
# logger.setup_pytorch_saver(ac)
def update(data, timer):
# First run one gradient descent step for Q1 and Q2
q_optimizer.zero_grad()
loss_q, loss_info = compute_loss_q(data)
loss_q.backward()
q_optimizer.step()
# Record things
logger.store(LossQ=loss_q.item(), **loss_info)
# Freeze Q-networks so you don't waste computational effort
# computing gradients for them during the policy learning step.
for p in q_params:
p.requires_grad = False
# Next run one gradient descent step for pi.
pi_optimizer.zero_grad()
loss_pi = compute_loss_pi(data)
loss_pi.backward()
pi_optimizer.step()
# Unfreeze Q-networks so you can optimize it at next DDPG step.
for p in q_params:
p.requires_grad = True
# Record things
logger.store(LossPi=loss_pi.item())
# Finally, update target networks by polyak averaging.
with torch.no_grad():
for p, p_targ in zip(mlp_a.parameters(), mlp_a_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
for p, p_targ in zip(mlp_c1.parameters(), mlp_c1_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
for p, p_targ in zip(mlp_c2.parameters(), mlp_c2_targ.parameters()):
p_targ.data.mul_(polyak)
p_targ.data.add_((1 - polyak) * p.data)
def get_action(o, noise_scale):
o = torch.tensor(o).view(1, -1).float().to(device=cuda)
with torch.no_grad():
a = mlp_a(o)
a = a.cpu().numpy().flatten()
a += noise_scale * np.random.randn(act_dim)
return np.clip(a, -act_limit, act_limit)
def test_agent():
for j in range(num_test_episodes):
o, d, ep_ret, ep_len = test_env.reset(), False, 0, 0
while not(d or (ep_len == max_ep_len)):
# Take deterministic actions at test time (noise_scale=0)
o, r, d, _ = test_env.step(get_action(o, 0))
ep_ret += r
ep_len += 1
logger.store(TestEpRet=ep_ret, TestEpLen=ep_len)
# Prepare for interaction with environment
total_steps = steps_per_epoch * epochs
start_time = time.time()
o, ep_ret, ep_len = env.reset(), 0, 0
# Main loop: collect experience in env and update/log each epoch
for t in range(total_steps):
# Until start_steps have elapsed, randomly sample actions
# from a uniform distribution for better exploration. Afterwards,
# use the learned policy (with some noise, via act_noise).
if t > start_steps:
a = get_action(o, act_noise)
else:
a = env.action_space.sample()
if nonstationary_env == True:
gravity_cycle = 1000
gravity_base = -9.81
if gravity_change_pattern == 'gravity_averagely_equal':
gravity = gravity_base * 1 / 2 * (np.cos(2 * np.pi / gravity_cycle * t) + 1) + gravity_base / 2
elif gravity_change_pattern == 'gravity_averagely_easier':
gravity = gravity_base * 1 / 2 * (np.cos(2 * np.pi / gravity_cycle * t) + 1)
elif gravity_change_pattern == 'gravity_averagely_harder':
gravity = gravity_base * 1 / 2 * (-np.cos(2 * np.pi / gravity_cycle * t) + 1) + gravity_base
else:
pass
if 'PyBulletEnv' in env_name:
env.env._p.setGravity(0, 0, gravity)
elif 'Roboschool' in env_name:
pass
else:
env.model.opt.gravity[2] = gravity
# Step the env
o2, r, d, _ = env.step(a)
ep_ret += r
ep_len += 1
# Ignore the "done" signal if it comes from hitting the time
# horizon (that is, when it's an artificial terminal signal
# that isn't based on the agent's state)
d = False if ep_len==max_ep_len else d
# Store experience to replay buffer
replay_buffer.store(o, a, r, o2, d)
# Super critical, easy to overlook step: make sure to update
# most recent observation!
o = o2
# End of trajectory handling
if d or (ep_len == max_ep_len):
logger.store(EpRet=ep_ret, EpLen=ep_len)
o, ep_ret, ep_len = env.reset(), 0, 0
# Update handling
if t >= update_after and t % update_every == 0:
for j in range(update_every):
batch = replay_buffer.sample_batch(batch_size)
update(data=batch, timer=j)
# End of epoch handling
if (t+1) % steps_per_epoch == 0:
epoch = (t+1) // steps_per_epoch
# # Save model
# if (epoch % save_freq == 0) or (epoch == epochs):
# logger.save_state({'env': env}, None)
# Test the performance of the deterministic version of the agent.
test_agent()
# Log info about epoch
logger.log_tabular('Epoch', epoch)
logger.log_tabular('EpRet', with_min_and_max=True)
logger.log_tabular('TestEpRet', with_min_and_max=True)
logger.log_tabular('EpLen', average_only=True)
logger.log_tabular('TestEpLen', average_only=True)
logger.log_tabular('TotalEnvInteracts', t)
logger.log_tabular('Q1Vals', with_min_and_max=True)
logger.log_tabular('Q2Vals', with_min_and_max=True)
logger.log_tabular('LossPi', average_only=True)
logger.log_tabular('LossQ', average_only=True)
logger.log_tabular('Time', time.time()-start_time)
logger.dump_tabular()
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_HalfCheetah_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HopperPyBulletEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_HopperPyBulletEnv_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HopperPyBulletEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_False_HopperPyBulletEnv_NoTargSmooth_NoDelayUpdate_POMDP_True'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'AntMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_False_Ant_NoTargSmooth_NoDelayUpdate_POMDP_True'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'AntMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_Ant_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetahMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_False_HalfCheetah_NoTargSmooth_NoDelayUpdate_POMDP_False'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetahMuJoCoEnv-v0', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':False,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': False,
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'partially_observable': True,
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
partially_observable=args['partially_observable'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'exp_name': 'td3_NonStationary_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
logger_kwargs=logger_kwargs)
args = {'env': 'HalfCheetah-v2', 'hid': 256, 'l': 2, 'gamma': 0.99,
'seed': 0, 'epochs': 50,
'nonstationary_env':True,
'gravity_change_pattern': 'gravity_averagely_equal',
'exp_name': 'td3_HalfCheetah_NoTargSmooth_NoDelayUpdate'}
from spinup.utils.run_utils import setup_logger_kwargs
logger_kwargs = setup_logger_kwargs(args['exp_name'], args['seed'])
td3(env_name=args['env'], actor_critic=core.MLPActorCritic,
ac_kwargs=dict(hidden_sizes=[args['hid']]*args['l']),
gamma=args['gamma'], seed=args['seed'], epochs=args['epochs'],
nonstationary_env=args['nonstationary_env'],
gravity_change_pattern=args['gravity_change_pattern'],
logger_kwargs=logger_kwargs)
###Output
Warning: Log dir c:\users\lingheng\google drive\git_repos\spinningup-new\data\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate_s0 already exists! Storing info there anyway.
Logging data to c:\users\lingheng\google drive\git_repos\spinningup-new\data\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate_s0\progress.txt
Saving config:
{
"ac_kwargs": {
"hidden_sizes": [
256,
256
]
},
"act_noise": 0.1,
"actor_critic": "MLPActorCritic",
"batch_size": 100,
"env_fn": "<function <lambda> at 0x0000021F944728B8>",
"epochs": 50,
"exp_name": "td3_HalfCheetah_NoTargSmooth_NoDelayUpdate",
"gamma": 0.99,
"logger": {
"<spinup.utils.logx.EpochLogger object at 0x0000021FDE4066C8>": {
"epoch_dict": {},
"exp_name": "td3_HalfCheetah_NoTargSmooth_NoDelayUpdate",
"first_row": true,
"log_current_row": {},
"log_headers": [],
"output_dir": "c:\\users\\lingheng\\google drive\\git_repos\\spinningup-new\\data\\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate\\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate_s0",
"output_file": {
"<_io.TextIOWrapper name='c:\\\\users\\\\lingheng\\\\google drive\\\\git_repos\\\\spinningup-new\\\\data\\\\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate\\\\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate_s0\\\\progress.txt' mode='w' encoding='cp1252'>": {
"mode": "w"
}
}
}
},
"logger_kwargs": {
"exp_name": "td3_HalfCheetah_NoTargSmooth_NoDelayUpdate",
"output_dir": "c:\\users\\lingheng\\google drive\\git_repos\\spinningup-new\\data\\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate\\td3_HalfCheetah_NoTargSmooth_NoDelayUpdate_s0"
},
"max_ep_len": 1000,
"noise_clip": 0.5,
"num_test_episodes": 10,
"pi_lr": 0.001,
"policy_delay": 2,
"polyak": 0.995,
"q_lr": 0.001,
"replay_size": 1000000,
"save_freq": 1,
"seed": 0,
"start_steps": 10000,
"steps_per_epoch": 4000,
"target_noise": 0.2,
"update_after": 1000,
"update_every": 50
}
---------------------------------------
| Epoch | 1 |
| AverageEpRet | -303 |
| StdEpRet | 114 |
| MaxEpRet | -147 |
| MinEpRet | -416 |
| AverageTestEpRet | -582 |
| StdTestEpRet | 3.42 |
| MaxTestEpRet | -578 |
| MinTestEpRet | -591 |
| EpLen | 1e+03 |
| TestEpLen | 1e+03 |
| TotalEnvInteracts | 4e+03 |
| AverageQ1Vals | 1.12 |
| StdQ1Vals | 2.05 |
| MaxQ1Vals | 12.5 |
| MinQ1Vals | -8.99 |
| AverageQ2Vals | 1.12 |
| StdQ2Vals | 2.05 |
| MaxQ2Vals | 12.1 |
| MinQ2Vals | -8.91 |
| LossPi | -1.85 |
| LossQ | 0.634 |
| Time | 80.6 |
---------------------------------------
---------------------------------------
| Epoch | 2 |
| AverageEpRet | -305 |
| StdEpRet | 99.1 |
| MaxEpRet | -176 |
| MinEpRet | -436 |
| AverageTestEpRet | -488 |
| StdTestEpRet | 6.44 |
| MaxTestEpRet | -475 |
| MinTestEpRet | -498 |
| EpLen | 1e+03 |
| TestEpLen | 1e+03 |
| TotalEnvInteracts | 8e+03 |
| AverageQ1Vals | 5.38 |
| StdQ1Vals | 5.51 |
| MaxQ1Vals | 25.7 |
| MinQ1Vals | -19.5 |
| AverageQ2Vals | 5.39 |
| StdQ2Vals | 5.52 |
| MaxQ2Vals | 25.6 |
| MinQ2Vals | -18.1 |
| LossPi | -6.9 |
| LossQ | 1.73 |
| Time | 192 |
---------------------------------------
---------------------------------------
| Epoch | 3 |
| AverageEpRet | -455 |
| StdEpRet | 178 |
| MaxEpRet | -256 |
| MinEpRet | -664 |
| AverageTestEpRet | -643 |
| StdTestEpRet | 89.3 |
| MaxTestEpRet | -535 |
| MinTestEpRet | -831 |
| EpLen | 1e+03 |
| TestEpLen | 1e+03 |
| TotalEnvInteracts | 1.2e+04 |
| AverageQ1Vals | 18.5 |
| StdQ1Vals | 12.1 |
| MaxQ1Vals | 44.6 |
| MinQ1Vals | -17.4 |
| AverageQ2Vals | 18.5 |
| StdQ2Vals | 12.1 |
| MaxQ2Vals | 44.3 |
| MinQ2Vals | -17.3 |
| LossPi | -20.6 |
| LossQ | 2.7 |
| Time | 339 |
---------------------------------------
---------------------------------------
| Epoch | 4 |
| AverageEpRet | 579 |
| StdEpRet | 442 |
| MaxEpRet | 1.19e+03 |
| MinEpRet | -30 |
| AverageTestEpRet | 1.17e+03 |
| StdTestEpRet | 248 |
| MaxTestEpRet | 1.37e+03 |
| MinTestEpRet | 468 |
| EpLen | 1e+03 |
| TestEpLen | 1e+03 |
| TotalEnvInteracts | 1.6e+04 |
| AverageQ1Vals | 24.9 |
| StdQ1Vals | 11.9 |
| MaxQ1Vals | 60.5 |
| MinQ1Vals | -12.1 |
| AverageQ2Vals | 24.9 |
| StdQ2Vals | 12 |
| MaxQ2Vals | 61.3 |
| MinQ2Vals | -11.5 |
| LossPi | -26.3 |
| LossQ | 3.8 |
| Time | 481 |
---------------------------------------
---------------------------------------
| Epoch | 5 |
| AverageEpRet | 845 |
| StdEpRet | 939 |
| MaxEpRet | 2.06e+03 |
| MinEpRet | -273 |
| AverageTestEpRet | 1.51e+03 |
| StdTestEpRet | 109 |
| MaxTestEpRet | 1.64e+03 |
| MinTestEpRet | 1.28e+03 |
| EpLen | 1e+03 |
| TestEpLen | 1e+03 |
| TotalEnvInteracts | 2e+04 |
| AverageQ1Vals | 30.9 |
| StdQ1Vals | 17.4 |
| MaxQ1Vals | 90.2 |
| MinQ1Vals | -32.1 |
| AverageQ2Vals | 30.9 |
| StdQ2Vals | 17.4 |
| MaxQ2Vals | 90.6 |
| MinQ2Vals | -31.9 |
| LossPi | -32.4 |
| LossQ | 5.11 |
| Time | 605 |
---------------------------------------
---------------------------------------
| Epoch | 6 |
| AverageEpRet | 2.29e+03 |
| StdEpRet | 253 |
| MaxEpRet | 2.47e+03 |
| MinEpRet | 1.86e+03 |
| AverageTestEpRet | 2.57e+03 |
| StdTestEpRet | 617 |
| MaxTestEpRet | 2.9e+03 |
| MinTestEpRet | 745 |
| EpLen | 1e+03 |
| TestEpLen | 1e+03 |
| TotalEnvInteracts | 2.4e+04 |
| AverageQ1Vals | 46.6 |
| StdQ1Vals | 29.3 |
| MaxQ1Vals | 127 |
| MinQ1Vals | -46.3 |
| AverageQ2Vals | 46.6 |
| StdQ2Vals | 29.3 |
| MaxQ2Vals | 127 |
| MinQ2Vals | -44.9 |
| LossPi | -48.5 |
| LossQ | 8.78 |
| Time | 729 |
---------------------------------------
|
Proyecto/Notebooks/Defenses/CIFAR10 adversarial training.ipynb | ###Markdown
MobileNet v2 with CIFAR10
Libraries
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm
from PIL import Image
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, Dataset
from torchinfo import summary
import torchvision
from torchvision import models
import torchvision.transforms as transforms
import torchattacks
from utils.evaluation import NormalizationLayer, get_topk_accuracy
from utils.evaluation import plot_adversarial, get_same_predictions, get_different_predictions
from utils.mobilenetv2 import build_mobilenet_v2
from utils.training import train
import warnings
warnings.filterwarnings('ignore')
# Reproducibility
random.seed(42)
np.random.seed(42)
torch.manual_seed(42)
torch.backends.cudnn.deterministic = True
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
device
###Output
_____no_output_____
###Markdown
Model
I will use a modified version of MobileNet v2 adapted to CIFAR10; see [PyTorch models trained on CIFAR-10 dataset](https://github.com/huyvnphan/PyTorch_CIFAR10). There is in fact a pre-trained version already, but the PyTorch version it relies on is not compatible with TorchAttacks, so the simplest option is to train it from scratch. The output consists of unnormalized scores; to obtain probabilities you have to apply a softmax to the output. Because of the way [Adversarial-Attacks-PyTorch](https://github.com/Harry24k/adversarial-attacks-pytorch) works, the input images passed to it must be in the range [0,1], but the pre-trained PyTorch models expect normalized images, which are not in [0,1]. The way to resolve this is to add a normalization layer at the beginning. See [Demo - White Box Attack (Imagenet)](https://nbviewer.jupyter.org/github/Harry24k/adversarial-attacks-pytorch/blob/master/demos/White%20Box%20Attack%20%28ImageNet%29.ipynb) for an example with the models trained on ImageNet. The only thing that changes is that the means and std will be different; see [How to use models](https://github.com/huyvnphan/PyTorch_CIFAR10how-to-use-pretrained-models).
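The `NormalizationLayer` used in the next cell is imported from the local `utils.evaluation` module, whose source is not included in this notebook. A minimal sketch of what such a layer might look like (the class name and buffer handling here are assumptions, not the actual helper):

```python
import torch
import torch.nn as nn

class NormalizationLayerSketch(nn.Module):
    """Hypothetical stand-in for utils.evaluation.NormalizationLayer."""
    def __init__(self, mean, std):
        super().__init__()
        # Buffers move with the model (.to(device)) but are not trained
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        # x arrives in [0, 1]; the wrapped classifier receives normalized inputs
        return (x - self.mean) / self.std
```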
###Code
# The trained network is the one I will use to generate the adversarial examples
mobilenet_v2 = nn.Sequential(
NormalizationLayer(mean=[0.4914, 0.4822, 0.4465], std=[0.2471, 0.2435, 0.2616]),
build_mobilenet_v2(pretrained=False))
mobilenet_v2.load_state_dict(torch.load('models/mobilenet_v2.pt'))
mobilenet_v2.eval()
# Move it to the GPU, if one is available
mobilenet_v2 = mobilenet_v2.to(device)
summary(mobilenet_v2)
# This is the network I will train
mobilenet_v2_adversarial = nn.Sequential(
NormalizationLayer(mean=[0.4914, 0.4822, 0.4465], std=[0.2471, 0.2435, 0.2616]),
build_mobilenet_v2(pretrained=False))
# Move it to the GPU, if one is available
mobilenet_v2_adversarial = mobilenet_v2_adversarial.to(device)
###Output
_____no_output_____
###Markdown
Dataset & dataloader
###Code
transform = transforms.Compose([transforms.ToTensor()])
batch_size = 128
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=4)
print(f'Trainset: {len(trainset)}')
print(f'Testset: {len(testset)}')
###Output
Files already downloaded and verified
Files already downloaded and verified
Trainset: 50000
Testset: 10000
###Markdown
Adversarial examples
I will randomly sample 5,000 images from the training set and then apply 4 attack algorithms to each image, so in the end I will have a total of 20,000 adversarial examples.
###Code
indexes = random.sample(range(50000), 5000)
imgs = np.array([trainset.__getitem__(i)[0].numpy() for i in indexes])
labels = np.array([trainset.__getitem__(i)[1] for i in indexes])
len(set(indexes)), np.unique(labels, return_counts=True)
imgs_subset = torch.tensor(imgs)
labels_subset = torch.tensor(labels)
trainset_subset = TensorDataset(imgs_subset, labels_subset)
trainloader_subset = DataLoader(trainset_subset, batch_size=32, shuffle=False, num_workers=4)
###Output
_____no_output_____
###Markdown
FGSM
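For reference (this equation is not part of the original notebook text): FGSM takes a single signed-gradient step of size $\varepsilon$ in the direction that increases the loss, and implementations such as torchattacks then clip the result back to the valid pixel range. With the `eps=1/255` used below:

$$x_{\mathrm{adv}} = \mathrm{clip}_{[0,1]}\Big(x + \varepsilon \cdot \mathrm{sign}\big(\nabla_x \mathcal{L}(\theta, x, y)\big)\Big), \qquad \varepsilon = 1/255.$$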
###Code
%%time
attack = torchattacks.FGSM(mobilenet_v2, eps=1/255)
attack.set_return_type('float')
attack.save(trainloader_subset, save_path='models/FGSM_train.pt', verbose=True)
###Output
- Save Progress: 100.00 % / Accuracy: 47.68 % / L2: 0.21692
- Save Complete!
CPU times: user 7.41 s, sys: 304 ms, total: 7.71 s
Wall time: 6.13 s
###Markdown
PGD
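For reference (not part of the original notebook text): PGD repeats the signed-gradient step and, after each step, projects the result back onto the $\varepsilon$-ball around the original image; the cell below uses `steps=3`, `alpha=1/255`, `eps=1/255`:

$$x^{(t+1)} = \Pi_{\lVert x' - x \rVert_\infty \le \varepsilon}\Big(x^{(t)} + \alpha \cdot \mathrm{sign}\big(\nabla_x \mathcal{L}(\theta, x^{(t)}, y)\big)\Big).$$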
###Code
%%time
attack = torchattacks.PGD(mobilenet_v2, eps=1/255, alpha=1/255, steps=3)
attack.set_return_type('float')
attack.save(trainloader_subset, save_path='models/PGD_train.pt', verbose=True)
###Output
- Save Progress: 100.00 % / Accuracy: 34.72 % / L2: 0.20776
- Save Complete!
CPU times: user 15.3 s, sys: 325 ms, total: 15.6 s
Wall time: 13.9 s
###Markdown
MIFGSM
###Code
%%time
attack = torchattacks.MIFGSM(mobilenet_v2, eps=1/255, decay=1.0, steps=3)
attack.set_return_type('float')
attack.save(trainloader_subset, save_path='models/MIFGSM_train.pt', verbose=True)
###Output
- Save Progress: 100.00 % / Accuracy: 39.72 % / L2: 0.20134
- Save Complete!
CPU times: user 15.1 s, sys: 417 ms, total: 15.5 s
Wall time: 13.4 s
###Markdown
OnePixel
###Code
%%time
attack = torchattacks.OnePixel(mobilenet_v2, pixels=1, steps=5, popsize=20)
attack.set_return_type('float')
attack.save(trainloader_subset, save_path='models/OnePixel_train.pt', verbose=True)
###Output
- Save Progress: 100.00 % / Accuracy: 81.80 % / L2: 0.85973
- Save Complete!
CPU times: user 3min 59s, sys: 470 ms, total: 4min
Wall time: 3min 58s
###Markdown
Trainset and trainloader
Now we build the training set that includes the adversarial examples.
###Code
images = torch.tensor(np.array([trainset.__getitem__(i)[0].numpy() for i in range(50000)]))
labels = torch.tensor(np.array([trainset.__getitem__(i)[1] for i in range(50000)]))
adv_images_FGSM, adv_labels_FGSM = torch.load('models/FGSM_train.pt')
adv_images_PGD, adv_labels_PGD = torch.load('models/PGD_train.pt')
adv_images_MIFGSM, adv_labels_MIFGSM = torch.load('models/MIFGSM_train.pt')
adv_images_OnePixel, adv_labels_OnePixel = torch.load('models/OnePixel_train.pt')
adversarial_trainset = TensorDataset(torch.cat([images, adv_images_FGSM, adv_images_PGD, adv_images_MIFGSM, adv_images_OnePixel], dim=0),
torch.cat([labels, adv_labels_FGSM, adv_labels_PGD, adv_labels_MIFGSM, adv_labels_OnePixel], dim=0))
adversarial_trainloader = torch.utils.data.DataLoader(adversarial_trainset, batch_size=batch_size, shuffle=True, num_workers=4)
###Output
_____no_output_____
###Markdown
Training
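The `train` helper imported from the local `utils.training` module is not shown in this notebook. A minimal sketch of a loop with the same signature (the cross-entropy loss, Adam optimizer, device handling, and accuracy bookkeeping are assumptions, not the actual implementation):

```python
import torch
import torch.nn as nn

def train_sketch(model, trainloader, testloader, lr=1e-3, epochs=20, device='cuda:0'):
    """Hypothetical stand-in for utils.training.train; returns (loss_hist, acc_hist)."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_hist, acc_hist = [], []
    for _ in range(epochs):
        model.train()
        running_loss = 0.0
        for images, labels in trainloader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        loss_hist.append(running_loss / len(trainloader))
        # Accuracy on the test set after each epoch
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in testloader:
                images, labels = images.to(device), labels.to(device)
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        acc_hist.append(correct / total)
    return loss_hist, acc_hist
```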
###Code
loss_hist, acc_hist = train(mobilenet_v2_adversarial, adversarial_trainloader, testloader, lr=1e-3, epochs=20)
###Output
5%|▌ | 1/20 [01:03<20:00, 63.18s/it]
###Markdown
Save the model.
###Code
torch.save(mobilenet_v2_adversarial.state_dict(), 'models/mobilenet_v2_adversarial.pt')
###Output
_____no_output_____ |
scripts/d21-en/tensorflow/chapter_appendix-mathematics-for-deep-learning/single-variable-calculus.ipynb | ###Markdown
Single Variable Calculus:label:`sec_single_variable_calculus`In :numref:`sec_calculus`, we saw the basic elements of differential calculus. This section takes a deeper dive into the fundamentals of calculus and how we can understand and apply it in the context of machine learning. Differential CalculusDifferential calculus is fundamentally the study of how functions behave under small changes. To see why this is so core to deep learning, let us consider an example.Suppose that we have a deep neural network where the weights are, for convenience, concatenated into a single vector $\mathbf{w} = (w_1, \ldots, w_n)$. Given a training dataset, we consider the loss of our neural network on this dataset, which we will write as $\mathcal{L}(\mathbf{w})$. This function is extraordinarily complex, encoding the performance of all possible models of the given architecture on this dataset, so it is nearly impossible to tell what set of weights $\mathbf{w}$ will minimize the loss. Thus, in practice, we often start by initializing our weights *randomly*, and then iteratively take small steps in the direction which makes the loss decrease as rapidly as possible.The question then becomes something that on the surface is no easier: how do we find the direction which makes the weights decrease as quickly as possible? To dig into this, let us first examine the case with only a single weight: $L(\mathbf{w}) = L(x)$ for a single real value $x$. Let us take $x$ and try to understand what happens when we change it by a small amount to $x + \epsilon$. If you wish to be concrete, think a number like $\epsilon = 0.0000001$. To help us visualize what happens, let us graph an example function, $f(x) = \sin(x^x)$, over the $[0, 3]$.
###Code
%matplotlib inline
import tensorflow as tf
from IPython import display
from d2l import tensorflow as d2l
tf.pi = tf.acos(tf.zeros(1)).numpy() * 2 # Define pi in TensorFlow
# Plot a function in a normal range
x_big = tf.range(0.01, 3.01, 0.01)
ys = tf.sin(x_big**x_big)
d2l.plot(x_big, ys, 'x', 'f(x)')
###Output
_____no_output_____
###Markdown
At this large scale, the function's behavior is not simple. However, if we reduce our range to something smaller like $[1.75,2.25]$, we see that the graph becomes much simpler.
###Code
# Plot a the same function in a tiny range
x_med = tf.range(1.75, 2.25, 0.001)
ys = tf.sin(x_med**x_med)
d2l.plot(x_med, ys, 'x', 'f(x)')
###Output
_____no_output_____
###Markdown
Taking this to an extreme, if we zoom into a tiny segment, the behavior becomes far simpler: it is just a straight line.
###Code
# Plot a the same function in a tiny range
x_small = tf.range(2.0, 2.01, 0.0001)
ys = tf.sin(x_small**x_small)
d2l.plot(x_small, ys, 'x', 'f(x)')
###Output
_____no_output_____
###Markdown
This is the key observation of single variable calculus: the behavior of familiar functions can be modeled by a line in a small enough range. This means that for most functions, it is reasonable to expect that as we shift the $x$ value of the function by a little bit, the output $f(x)$ will also be shifted by a little bit. The only question we need to answer is, "How large is the change in the output compared to the change in the input? Is it half as large? Twice as large?"Thus, we can consider the ratio of the change in the output of a function for a small change in the input of the function. We can write this formally as$$\frac{L(x+\epsilon) - L(x)}{(x+\epsilon) - x} = \frac{L(x+\epsilon) - L(x)}{\epsilon}.$$This is already enough to start to play around with in code. For instance, suppose that we know that $L(x) = x^{2} + 1701(x-4)^3$, then we can see how large this value is at the point $x = 4$ as follows.
###Code
# Define our function
def L(x):
return x**2 + 1701 * (x - 4)**3
# Print the difference divided by epsilon for several epsilon
for epsilon in [0.1, 0.001, 0.0001, 0.00001]:
print(f'epsilon = {epsilon:.5f} -> {(L(4+epsilon) - L(4)) / epsilon:.5f}')
###Output
epsilon = 0.10000 -> 25.11000
epsilon = 0.00100 -> 8.00270
epsilon = 0.00010 -> 8.00012
epsilon = 0.00001 -> 8.00001
###Markdown
Now, if we are observant, we will notice that the output of this number is suspiciously close to $8$. Indeed, if we decrease $\epsilon$, we will see value becomes progressively closer to $8$. Thus we may conclude, correctly, that the value we seek (the degree a change in the input changes the output) should be $8$ at the point $x=4$. The way that a mathematician encodes this fact is$$\lim_{\epsilon \rightarrow 0}\frac{L(4+\epsilon) - L(4)}{\epsilon} = 8.$$As a bit of a historical digression: in the first few decades of neural network research, scientists used this algorithm (the *method of finite differences*) to evaluate how a loss function changed under small perturbation: just change the weights and see how the loss changed. This is computationally inefficient, requiring two evaluations of the loss function to see how a single change of one variable influenced the loss. If we tried to do this with even a paltry few thousand parameters, it would require several thousand evaluations of the network over the entire dataset! It was not solved until 1986 that the *backpropagation algorithm* introduced in :cite:`Rumelhart.Hinton.Williams.ea.1988` provided a way to calculate how *any* change of the weights together would change the loss in the same computation time as a single prediction of the network over the dataset.Back in our example, this value $8$ is different for different values of $x$, so it makes sense to define it as a function of $x$. More formally, this value dependent rate of change is referred to as the *derivative* which is written as$$\frac{df}{dx}(x) = \lim_{\epsilon \rightarrow 0}\frac{f(x+\epsilon) - f(x)}{\epsilon}.$$:eqlabel:`eq_der_def`Different texts will use different notations for the derivative. For instance, all of the below notations indicate the same thing:$$\frac{df}{dx} = \frac{d}{dx}f = f' = \nabla_xf = D_xf = f_x.$$Most authors will pick a single notation and stick with it, however even that is not guaranteed. It is best to be familiar with all of these. We will use the notation $\frac{df}{dx}$ throughout this text, unless we want to take the derivative of a complex expression, in which case we will use $\frac{d}{dx}f$ to write expressions like$$\frac{d}{dx}\left[x^4+\cos\left(\frac{x^2+1}{2x-1}\right)\right].$$Oftentimes, it is intuitively useful to unravel the definition of derivative :eqref:`eq_der_def` again to see how a function changes when we make a small change of $x$:$$\begin{aligned} \frac{df}{dx}(x) = \lim_{\epsilon \rightarrow 0}\frac{f(x+\epsilon) - f(x)}{\epsilon} & \implies \frac{df}{dx}(x) \approx \frac{f(x+\epsilon) - f(x)}{\epsilon} \\ & \implies \epsilon \frac{df}{dx}(x) \approx f(x+\epsilon) - f(x) \\ & \implies f(x+\epsilon) \approx f(x) + \epsilon \frac{df}{dx}(x). \end{aligned}$$:eqlabel:`eq_small_change`The last equation is worth explicitly calling out. It tells us that if you take any function and change the input by a small amount, the output would change by that small amount scaled by the derivative.In this way, we can understand the derivative as the scaling factor that tells us how large of change we get in the output from a change in the input. Rules of Calculus:label:`sec_derivative_table`We now turn to the task of understanding how to compute the derivative of an explicit function. A full formal treatment of calculus would derive everything from first principles. We will not indulge in this temptation here, but rather provide an understanding of the common rules encountered. 
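As a quick aside (not part of the original text), the finite-difference estimate of $8$ computed above can be reproduced exactly with automatic differentiation, which is what the backpropagation algorithm mentioned above performs in practice. A minimal check with `tf.GradientTape`:

```python
import tensorflow as tf

# Automatic differentiation gives dL/dx at x = 4 exactly, with no epsilon needed
x = tf.Variable(4.0)
with tf.GradientTape() as tape:
    y = x**2 + 1701 * (x - 4)**3
print(tape.gradient(y, x))  # tf.Tensor(8.0, shape=(), dtype=float32)
```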
Common DerivativesAs was seen in :numref:`sec_calculus`, when computing derivatives one can oftentimes use a series of rules to reduce the computation to a few core functions. We repeat them here for ease of reference.* **Derivative of constants.** $\frac{d}{dx}c = 0$.* **Derivative of linear functions.** $\frac{d}{dx}(ax) = a$.* **Power rule.** $\frac{d}{dx}x^n = nx^{n-1}$.* **Derivative of exponentials.** $\frac{d}{dx}e^x = e^x$.* **Derivative of the logarithm.** $\frac{d}{dx}\log(x) = \frac{1}{x}$. Derivative RulesIf every derivative needed to be separately computed and stored in a table, differential calculus would be near impossible. It is a gift of mathematics that we can generalize the above derivatives and compute more complex derivatives like finding the derivative of $f(x) = \log\left(1+(x-1)^{10}\right)$. As was mentioned in :numref:`sec_calculus`, the key to doing so is to codify what happens when we take functions and combine them in various ways, most importantly: sums, products, and compositions.* **Sum rule.** $\frac{d}{dx}\left(g(x) + h(x)\right) = \frac{dg}{dx}(x) + \frac{dh}{dx}(x)$.* **Product rule.** $\frac{d}{dx}\left(g(x)\cdot h(x)\right) = g(x)\frac{dh}{dx}(x) + \frac{dg}{dx}(x)h(x)$.* **Chain rule.** $\frac{d}{dx}g(h(x)) = \frac{dg}{dh}(h(x))\cdot \frac{dh}{dx}(x)$.Let us see how we may use :eqref:`eq_small_change` to understand these rules. For the sum rule, consider following chain of reasoning:$$\begin{aligned}f(x+\epsilon) & = g(x+\epsilon) + h(x+\epsilon) \\& \approx g(x) + \epsilon \frac{dg}{dx}(x) + h(x) + \epsilon \frac{dh}{dx}(x) \\& = g(x) + h(x) + \epsilon\left(\frac{dg}{dx}(x) + \frac{dh}{dx}(x)\right) \\& = f(x) + \epsilon\left(\frac{dg}{dx}(x) + \frac{dh}{dx}(x)\right).\end{aligned}$$By comparing this result with the fact that $f(x+\epsilon) \approx f(x) + \epsilon \frac{df}{dx}(x)$, we see that $\frac{df}{dx}(x) = \frac{dg}{dx}(x) + \frac{dh}{dx}(x)$ as desired. The intuition here is: when we change the input $x$, $g$ and $h$ jointly contribute to the change of the output by $\frac{dg}{dx}(x)$ and $\frac{dh}{dx}(x)$.The product is more subtle, and will require a new observation about how to work with these expressions. We will begin as before using :eqref:`eq_small_change`:$$\begin{aligned}f(x+\epsilon) & = g(x+\epsilon)\cdot h(x+\epsilon) \\& \approx \left(g(x) + \epsilon \frac{dg}{dx}(x)\right)\cdot\left(h(x) + \epsilon \frac{dh}{dx}(x)\right) \\& = g(x)\cdot h(x) + \epsilon\left(g(x)\frac{dh}{dx}(x) + \frac{dg}{dx}(x)h(x)\right) + \epsilon^2\frac{dg}{dx}(x)\frac{dh}{dx}(x) \\& = f(x) + \epsilon\left(g(x)\frac{dh}{dx}(x) + \frac{dg}{dx}(x)h(x)\right) + \epsilon^2\frac{dg}{dx}(x)\frac{dh}{dx}(x). \\\end{aligned}$$This resembles the computation done above, and indeed we see our answer ($\frac{df}{dx}(x) = g(x)\frac{dh}{dx}(x) + \frac{dg}{dx}(x)h(x)$) sitting next to $\epsilon$, but there is the issue of that term of size $\epsilon^{2}$. We will refer to this as a *higher-order term*, since the power of $\epsilon^2$ is higher than the power of $\epsilon^1$. We will see in a later section that we will sometimes want to keep track of these, however for now observe that if $\epsilon = 0.0000001$, then $\epsilon^{2}= 0.0000000000001$, which is vastly smaller. As we send $\epsilon \rightarrow 0$, we may safely ignore the higher order terms. As a general convention in this appendix, we will use "$\approx$" to denote that the two terms are equal up to higher order terms. 
However, if we wish to be more formal we may examine the difference quotient$$\frac{f(x+\epsilon) - f(x)}{\epsilon} = g(x)\frac{dh}{dx}(x) + \frac{dg}{dx}(x)h(x) + \epsilon \frac{dg}{dx}(x)\frac{dh}{dx}(x),$$and see that as we send $\epsilon \rightarrow 0$, the right hand term goes to zero as well.Finally, with the chain rule, we can again progress as before using :eqref:`eq_small_change` and see that$$\begin{aligned}f(x+\epsilon) & = g(h(x+\epsilon)) \\& \approx g\left(h(x) + \epsilon \frac{dh}{dx}(x)\right) \\& \approx g(h(x)) + \epsilon \frac{dh}{dx}(x) \frac{dg}{dh}(h(x))\\& = f(x) + \epsilon \frac{dg}{dh}(h(x))\frac{dh}{dx}(x),\end{aligned}$$where in the second line we view the function $g$ as having its input ($h(x)$) shifted by the tiny quantity $\epsilon \frac{dh}{dx}(x)$.These rule provide us with a flexible set of tools to compute essentially any expression desired. For instance,$$\begin{aligned}\frac{d}{dx}\left[\log\left(1+(x-1)^{10}\right)\right] & = \left(1+(x-1)^{10}\right)^{-1}\frac{d}{dx}\left[1+(x-1)^{10}\right]\\& = \left(1+(x-1)^{10}\right)^{-1}\left(\frac{d}{dx}[1] + \frac{d}{dx}[(x-1)^{10}]\right) \\& = \left(1+(x-1)^{10}\right)^{-1}\left(0 + 10(x-1)^9\frac{d}{dx}[x-1]\right) \\& = 10\left(1+(x-1)^{10}\right)^{-1}(x-1)^9 \\& = \frac{10(x-1)^9}{1+(x-1)^{10}}.\end{aligned}$$Where each line has used the following rules:1. The chain rule and derivative of logarithm.2. The sum rule.3. The derivative of constants, chain rule, and power rule.4. The sum rule, derivative of linear functions, derivative of constants.Two things should be clear after doing this example:1. Any function we can write down using sums, products, constants, powers, exponentials, and logarithms can have its derivate computed mechanically by following these rules.2. Having a human follow these rules can be tedious and error prone!Thankfully, these two facts together hint towards a way forward: this is a perfect candidate for mechanization! Indeed backpropagation, which we will revisit later in this section, is exactly that. Linear ApproximationWhen working with derivatives, it is often useful to geometrically interpret the approximation used above. In particular, note that the equation $$f(x+\epsilon) \approx f(x) + \epsilon \frac{df}{dx}(x),$$approximates the value of $f$ by a line which passes through the point $(x, f(x))$ and has slope $\frac{df}{dx}(x)$. In this way we say that the derivative gives a linear approximation to the function $f$, as illustrated below:
###Code
# Compute sin
xs = tf.range(-tf.pi, tf.pi, 0.01)
plots = [tf.sin(xs)]
# Compute some linear approximations. Use d(sin(x))/dx = cos(x)
for x0 in [-1.5, 0.0, 2.0]:
plots.append(
tf.sin(tf.constant(x0)) + (xs - x0) * tf.cos(tf.constant(x0)))
d2l.plot(xs, plots, 'x', 'f(x)', ylim=[-1.5, 1.5])
###Output
_____no_output_____
###Markdown
Higher Order DerivativesLet us now do something that may on the surface seem strange. Take a function $f$ and compute the derivative $\frac{df}{dx}$. This gives us the rate of change of $f$ at any point.However, the derivative, $\frac{df}{dx}$, can be viewed as a function itself, so nothing stops us from computing the derivative of $\frac{df}{dx}$ to get $\frac{d^2f}{dx^2} = \frac{df}{dx}\left(\frac{df}{dx}\right)$. We will call this the second derivative of $f$. This function is the rate of change of the rate of change of $f$, or in other words, how the rate of change is changing. We may apply the derivative any number of times to obtain what is called the $n$-th derivative. To keep the notation clean, we will denote the $n$-th derivative as $$f^{(n)}(x) = \frac{d^{n}f}{dx^{n}} = \left(\frac{d}{dx}\right)^{n} f.$$Let us try to understand *why* this is a useful notion. Below, we visualize $f^{(2)}(x)$, $f^{(1)}(x)$, and $f(x)$. First, consider the case that the second derivative $f^{(2)}(x)$ is a positive constant. This means that the slope of the first derivative is positive. As a result, the first derivative $f^{(1)}(x)$ may start out negative, becomes zero at a point, and then becomes positive in the end. This tells us the slope of our original function $f$ and therefore, the function $f$ itself decreases, flattens out, then increases. In other words, the function $f$ curves up, and has a single minimum as is shown in :numref:`fig_positive-second`.:label:`fig_positive-second`Second, if the second derivative is a negative constant, that means that the first derivative is decreasing. This implies the first derivative may start out positive, becomes zero at a point, and then becomes negative. Hence, the function $f$ itself increases, flattens out, then decreases. In other words, the function $f$ curves down, and has a single maximum as is shown in :numref:`fig_negative-second`.:label:`fig_negative-second`Third, if the second derivative is a always zero, then the first derivative will never change---it is constant! This means that $f$ increases (or decreases) at a fixed rate, and $f$ is itself a straight line as is shown in :numref:`fig_zero-second`.:label:`fig_zero-second`To summarize, the second derivative can be interpreted as describing the way that the function $f$ curves. A positive second derivative leads to a upwards curve, while a negative second derivative means that $f$ curves downwards, and a zero second derivative means that $f$ does not curve at all.Let us take this one step further. Consider the function $g(x) = ax^{2}+ bx + c$. We can then compute that$$\begin{aligned}\frac{dg}{dx}(x) & = 2ax + b \\\frac{d^2g}{dx^2}(x) & = 2a.\end{aligned}$$If we have some original function $f(x)$ in mind, we may compute the first two derivatives and find the values for $a, b$, and $c$ that make them match this computation. Similarly to the previous section where we saw that the first derivative gave the best approximation with a straight line, this construction provides the best approximation by a quadratic. Let us visualize this for $f(x) = \sin(x)$.
###Code
# Compute sin
xs = tf.range(-tf.pi, tf.pi, 0.01)
plots = [tf.sin(xs)]
# Compute some quadratic approximations. Use d(sin(x)) / dx = cos(x)
for x0 in [-1.5, 0.0, 2.0]:
plots.append(
tf.sin(tf.constant(x0)) + (xs - x0) * tf.cos(tf.constant(x0)) -
(xs - x0)**2 * tf.sin(tf.constant(x0)) / 2)
d2l.plot(xs, plots, 'x', 'f(x)', ylim=[-1.5, 1.5])
###Output
_____no_output_____
###Markdown
We will extend this idea to the idea of a *Taylor series* in the next section. Taylor SeriesThe *Taylor series* provides a method to approximate the function $f(x)$ if we are given values for the first $n$ derivatives at a point $x_0$, i.e., $\left\{ f(x_0), f^{(1)}(x_0), f^{(2)}(x_0), \ldots, f^{(n)}(x_0) \right\}$. The idea will be to find a degree $n$ polynomial that matches all the given derivatives at $x_0$.We saw the case of $n=2$ in the previous section and a little algebra shows this is$$f(x) \approx \frac{1}{2}\frac{d^2f}{dx^2}(x_0)(x-x_0)^{2}+ \frac{df}{dx}(x_0)(x-x_0) + f(x_0).$$As we can see above, the denominator of $2$ is there to cancel out the $2$ we get when we take two derivatives of $x^2$, while the other terms are all zero. Same logic applies for the first derivative and the value itself.If we push the logic further to $n=3$, we will conclude that$$f(x) \approx \frac{\frac{d^3f}{dx^3}(x_0)}{6}(x-x_0)^3 + \frac{\frac{d^2f}{dx^2}(x_0)}{2}(x-x_0)^{2}+ \frac{df}{dx}(x_0)(x-x_0) + f(x_0).$$where the $6 = 3 \times 2 = 3!$ comes from the constant we get in front if we take three derivatives of $x^3$.Furthermore, we can get a degree $n$ polynomial by $$P_n(x) = \sum_{i = 0}^{n} \frac{f^{(i)}(x_0)}{i!}(x-x_0)^{i}.$$where the notation $$f^{(n)}(x) = \frac{d^{n}f}{dx^{n}} = \left(\frac{d}{dx}\right)^{n} f.$$Indeed, $P_n(x)$ can be viewed as the best $n$-th degree polynomial approximation to our function $f(x)$.While we are not going to dive all the way into the error of the above approximations, it is worth mentioning the infinite limit. In this case, for well behaved functions (known as real analytic functions) like $\cos(x)$ or $e^{x}$, we can write out the infinite number of terms and approximate the exactly same function$$f(x) = \sum_{n = 0}^\infty \frac{f^{(n)}(x_0)}{n!}(x-x_0)^{n}.$$Take $f(x) = e^{x}$ as am example. Since $e^{x}$ is its own derivative, we know that $f^{(n)}(x) = e^{x}$. Therefore, $e^{x}$ can be reconstructed by taking the Taylor series at $x_0 = 0$, i.e.,$$e^{x} = \sum_{n = 0}^\infty \frac{x^{n}}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots.$$Let us see how this works in code and observe how increasing the degree of the Taylor approximation brings us closer to the desired function $e^x$.
###Code
# Compute the exponential function
xs = tf.range(0, 3, 0.01)
ys = tf.exp(xs)
# Compute a few Taylor series approximations
P1 = 1 + xs
P2 = 1 + xs + xs**2 / 2
P5 = 1 + xs + xs**2 / 2 + xs**3 / 6 + xs**4 / 24 + xs**5 / 120
d2l.plot(
xs, [ys, P1, P2, P5], 'x', 'f(x)', legend=[
"Exponential", "Degree 1 Taylor Series", "Degree 2 Taylor Series",
"Degree 5 Taylor Series"])
###Output
_____no_output_____ |
Deep Learning/1. Keras Sequential Exercise Solution.ipynb | ###Markdown
Keras Sequential Exercise Solution
MNIST Handwritten Digits
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import keras
digits = keras.datasets.mnist
(X_train, y_train), (X_test, y_test) = digits.load_data()
X_train.shape
y_train.shape
X_train[0]
plt.matshow(X_train[0])
y_train[0] # our target value for this image
X_train=X_train/255 # Normalizing
X_test=X_test/255
from keras.models import Sequential
from keras.layers import Flatten, Dense, Activation
model = Sequential() # Creating sequential model
model.add(Flatten(input_shape=[28, 28])) # Adding Layer
# Flatten converts the 2D image array into a 1D array
model.add(Dense(100, activation="relu")) # Hidden layer; the width (100 units) is chosen by trial and error
model.add(Dense(10, activation="softmax")) # Output layer, which has 10 categories (digits 0-9) in this case
# Softmax converts the raw scores into probabilities over the available classes
model.summary()
# Compiling model
model.compile(loss="sparse_categorical_crossentropy", # Loss function
              optimizer="adam", # The optimizer adjusts the network weights during training
              metrics=["accuracy"]) # Metrics reported during training
model.fit(X_train, y_train, epochs=5) # epochs is the number of passes over the training data
plt.matshow(X_test[1])
yp=model.predict(X_test) # y prediction for our entire test dataset
yp[1] # this is showing probabilities of our 10 categories(0,1,2,3...9)
np.argmax(yp[1]) # This numpy function returns the index of the highest value
model.evaluate(X_test,y_test) # The first value is the loss, the second is the accuracy
# We can increase this accuracy by adjusting the number of neurons and layers
###Output
10000/10000 [==============================] - 0s 33us/step
|
chap04/chap.04.01.example4.1.gridworld.ipynb | ###Markdown
Example 4.1: $4 \times 4$ grid world
* nonterminal states: $\mathcal{S} = \{1, 2, \cdots, 14 \}$
* possible actions: $\mathcal{A} = \{ \textrm{up}, \textrm{down}, \textrm{right}, \textrm{left} \}$
* actions are deterministic
  * $p(6, -1 | 5, \textrm{right}) = 1$
  * $p(7, -1 | 7, \textrm{right}) = 1$
  * $p(10, r | 5, \textrm{right}) = 0$ for all $r \in \mathcal{R}$
* undiscounted ($\gamma = 1$)
* every transition yields a reward of `-1` until the terminal state is reached
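The code below implements iterative policy evaluation for the equiprobable random policy. For reference, the update applied during each sweep is the standard Bellman expectation backup (this restatement is not part of the original notebook text):

$$v_{k+1}(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[ r + \gamma\, v_k(s') \right],$$

applied in place until the largest change across states falls below a small threshold $\theta$.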
###Code
import numpy as np
np.set_printoptions(precision=1)
###Output
_____no_output_____
###Markdown
Grid world state index

|     |     |     |     |
|-----|-----|-----|-----|
| 0,0 | 0,1 | 0,2 | 0,3 |
| 1,0 | 1,1 | 1,2 | 1,3 |
| 2,0 | 2,1 | 2,2 | 2,3 |
| 3,0 | 3,1 | 3,2 | 3,3 |
###Code
class GridWorld():
def __init__(self, size=4, terminal_states=[(0, 0), (3, 3)]):
"""
Args:
size: int, Gridworld size
terminal_states: list of tuples
"""
self.actions = ['up', 'down', 'right', 'left']
self.terminal_states = terminal_states # special state (terminal state)
self.values = np.zeros((size, size))
# random initialization
#self.values = np.random.normal(scale=0.1, size=(size, size))
#self.values[0, 0] = 0.
#self.values[-1, -1] = 0.
self.gamma = 1.0
self.size = size
self.theta = 0.0001 # convergence precision
def Step(self, state, action):
"""
Args:
state: tuple (x, y) coordinate
action: string
Returns:
next_state: tuple (x, y) coordinate
reward: int
"""
if state in self.terminal_states:
            # If the state is a terminal state, every action keeps next_state=state and gives reward=0.
next_state = state
reward = 0
else:
if action == 'up':
if state[0] > 0:
next_state = (state[0]-1, state[1])
reward = -1
else:
next_state = state
reward = -1
elif action == 'down':
if state[0] < self.size-1:
next_state = (state[0]+1, state[1])
reward = -1
else:
next_state = state
reward = -1
elif action == 'right':
if state[1] < self.size-1:
next_state = (state[0], state[1]+1)
reward = -1
else:
next_state = state
reward = -1
elif action == 'left':
if state[1] > 0:
next_state = (state[0], state[1]-1)
reward = -1
else:
next_state = state
reward = -1
return next_state, reward
def IterativePolicyEvaluation(self, policy):
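        # In-place (Gauss-Seidel style) policy evaluation: repeatedly sweep over
        # every nonterminal state, replace its value with the expected one-step
        # return under `policy`, and stop once the largest change in a sweep
        # falls below the convergence threshold self.theta.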
#iteration = 0
while True:
delta = 0
for i in range(self.size):
for j in range(self.size):
if (i, j) in self.terminal_states:
continue
else:
v = self.values[i, j]
new_value = 0.
for key, value in policy.get_policy_at_state(state=(i, j)).items():
next_state, reward = self.Step(state=(i, j), action=key)
new_value += value * (reward + self.gamma * self.values[next_state[0], next_state[1]])
self.values[i, j] = new_value
delta = np.maximum(delta, np.abs(v - self.values[i, j]))
#iteration += 1
if delta < self.theta:
break
class Policy():
def __init__(self, size=4):
self.init_actions = {'up': 0.25,
'down': 0.25,
'right': 0.25,
'left': 0.25}
self.policy = np.asarray([self.init_actions] * size * size).reshape((size, size))
def get_policy_at_state(self, state):
"""
Args:
state: tuple (x, y) coordinate
"""
return self.policy[state[0], state[1]]
p = Policy()
g = GridWorld()
g.IterativePolicyEvaluation(p)
g.values
###Output
_____no_output_____ |
rnn.ipynb | ###Markdown
Permuted Pixel MNIST Demo
A lightweight demo of our DilatedRNN on pixel MNIST with permutation.
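The defining feature of the dilated RNN cells stacked below (a hedged summary of the DilatedRNN formulation; see the original paper for the exact statement) is that layer $l$ connects each time step to the hidden state $d^{(l)}$ steps back instead of the immediately preceding one:

$$h_t^{(l)} = f\!\left(x_t^{(l)},\; h_{t-d^{(l)}}^{(l)}\right),$$

where the dilations $d^{(l)} \in \{1, 2, 4, \ldots, 256\}$ are the values configured in the cell below.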
###Code
import sys
sys.path.append("./models")
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from classification_models import drnn_classification
# configurations
data_dir = "./MNIST_data"
n_steps = 28*28
input_dims = 1
n_classes = 10
# model config
cell_type = "RNN"
assert(cell_type in ["RNN", "LSTM", "GRU"])
hidden_structs = [20] * 9
dilations = [1, 2, 4, 8, 16, 32, 64, 128, 256]
assert(len(hidden_structs) == len(dilations))
# learning config
batch_size = 128
learning_rate = 1.0e-3
training_iters = batch_size * 300
testing_step = 5000
display_step = 100
# permutation seed
seed = 92916
mnist = input_data.read_data_sets(data_dir, one_hot=True)
if 'seed' in globals():
rng_permute = np.random.RandomState(seed)
idx_permute = rng_permute.permutation(n_steps)
else:
idx_permute = np.random.permutation(n_steps)
# build computation graph
tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, n_steps, input_dims])
y = tf.placeholder(tf.float32, [None, n_classes])
global_step = tf.Variable(0, name='global_step', trainable=False)
# build prediction graph
print ("==> Building a dRNN with %s cells" %cell_type)
pred = drnn_classification(x, hidden_structs, dilations, n_steps, n_classes, input_dims, cell_type)
# build loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.RMSPropOptimizer(learning_rate, 0.9).minimize(cost, global_step=global_step)
tf.summary.scalar('cost', cost)
# evaluation model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar('cost', accuracy)
merged_summary_op = tf.summary.merge_all()
sess = tf.Session()
init = tf.global_variables_initializer()
merged_summary_op = tf.summary.merge_all()
sess.run(init)
summary_writer = tf.summary.FileWriter('graphs', sess.graph)
step = 0
train_results = []
validation_results = []
test_results = []
while step * batch_size < training_iters:
batch_x, batch_y = mnist.train.next_batch(batch_size)
batch_x = batch_x[:, idx_permute]
batch_x = batch_x.reshape([batch_size, n_steps, input_dims])
feed_dict = {
x : batch_x,
y : batch_y
}
cost_, accuracy_, step_, _,summary_str = sess.run([cost, accuracy, global_step, optimizer,merged_summary_op], feed_dict=feed_dict)
train_results.append((step_, cost_, accuracy_))
summary_writer.add_summary(summary_str, step)
if (step + 1) % display_step == 0:
print ("Iter " + str(step + 1) + ", Minibatch Loss: " + "{:.6f}".format(cost_) \
+ ", Training Accuracy: " + "{:.6f}".format(accuracy_))
if (step + 1) % testing_step == 0:
# validation performance
batch_x = mnist.validation.images
batch_y = mnist.validation.labels
# permute the data
batch_x = batch_x[:, idx_permute]
batch_x = batch_x.reshape([-1, n_steps, input_dims])
feed_dict = {
x : batch_x,
y : batch_y
}
cost_, accuracy__, step_ = sess.run([cost, accuracy, global_step], feed_dict=feed_dict)
validation_results.append((step_, cost_, accuracy__))
# test performance
batch_x = mnist.test.images
batch_y = mnist.test.labels
batch_x = batch_x[:, idx_permute]
batch_x = batch_x.reshape([-1, n_steps, input_dims])
feed_dict = {
x : batch_x,
y : batch_y
}
cost_, accuracy_, step_ = sess.run([cost, accuracy, global_step], feed_dict=feed_dict)
test_results.append((step_, cost_, accuracy_))
print ("========> Validation Accuarcy: " + "{:.6f}".format(accuracy__) \
+ ", Testing Accuarcy: " + "{:.6f}".format(accuracy_))
step += 1
###Output
Iter 100, Minibatch Loss: 1.258374, Training Accuracy: 0.632812
Iter 200, Minibatch Loss: 0.660089, Training Accuracy: 0.851562
Iter 300, Minibatch Loss: 0.477038, Training Accuracy: 0.867188
###Markdown
###Code
#Install pybind11
!git clone https://github.com/pybind/pybind11.git
!cd pybind11 && mkdir build && cd build && cmake .. && make install
#Install Eigen
!apt install libeigen3-dev
!ln -sf /usr/include/eigen3/Eigen /usr/include/Eigen
# Install dependencies on colab
!git clone https://github.com/OttoJursch/DRL_robot_exploration.git
!#Build the C++/pybind stuff
!rm -rf DRL_robot_exploration/build
!cd DRL_robot_exploration && mkdir build && cd build && cmake .. && make
!cd DRL_robot_exploration && git pull
from copy import deepcopy
class PaperRewardFunction:
'''
Reward function from the paper
'''
def __init__(self):
pass
def get_reward(self, robot_position, old_op_map, op_map, coll_index):
'''
Takes in map before step and map after step. Measures effect of sensor
input from last step
'''
if not coll_index:
reward = float(
np.size(np.where(op_map == 255)) -
np.size(np.where(old_op_map == 255))) / 14000
if reward > 1:
reward = 1
else:
reward = -1
return reward
class FrontierRewardFunction:
def __init__(self, reward_scale):
self.reward_scale = reward_scale
self.paper_reward = PaperRewardFunction()
def frontiers(self, op_map, map_size, points):
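        # Frontier cells: free cells that border unexplored space. Build a 0/1
        # map of unknown cells (value 127), pad it, and sum the 8 neighbours of
        # each cell; a free cell (value 255) whose unknown-neighbour count is
        # strictly between 1 and 8 lies on the boundary between free and unknown.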
y_len = map_size[0]
x_len = map_size[1]
mapping = op_map.copy()
# 0-1 unknown area map
mapping = (mapping == 127) * 1
mapping = np.lib.pad(mapping, ((1, 1), (1, 1)),
'constant',
constant_values=0)
fro_map = mapping[2:][:, 1:x_len + 1] + mapping[:y_len][:, 1:x_len + 1] + mapping[1:y_len + 1][:, 2:] + \
mapping[1:y_len + 1][:, :x_len] + mapping[:y_len][:, 2:] + mapping[2:][:, :x_len] + mapping[2:][:,
2:] + \
mapping[:y_len][:, :x_len]
ind_free = np.where(op_map.ravel(order='F') == 255)[0]
ind_fron_1 = np.where(1 < fro_map.ravel(order='F'))[0]
ind_fron_2 = np.where(fro_map.ravel(order='F') < 8)[0]
ind_fron = np.intersect1d(ind_fron_1, ind_fron_2)
ind_to = np.intersect1d(ind_free, ind_fron)
f = points[ind_to]
f = f.astype(int)
return f
def map_points(self, map_glo):
map_x = map_glo.shape[1]
map_y = map_glo.shape[0]
x = np.linspace(0, map_x - 1, map_x)
y = np.linspace(0, map_y - 1, map_y)
t1, t2 = np.meshgrid(x, y)
points = np.vstack([t1.T.ravel(), t2.T.ravel()]).T
return points
def get_reward(self, robot_pos, old_op_map, op_map, coll_index):
paper_reward = self.paper_reward.get_reward(robot_pos, old_op_map,
op_map, coll_index)
#If there was a collision return the collision reward
if coll_index:
return paper_reward
frontiers = np.array(
self.frontiers(op_map, op_map.shape, self.map_points(op_map)))
min_frontier_dist = -np.min(np.linalg.norm(robot_pos - frontiers, axis=1))
return self.reward_scale * min_frontier_dist + paper_reward
class PolarActionSpace:
'''
    The action is a polar representation (angle fraction, distance fraction) of the
    vector the robot should travel from its current position.
    This class converts that action into a Cartesian displacement (dx, dy), which the
    caller adds to the current robot position to get the next position.
'''
def __init__(self, max_travel):
self.max_distance = max_travel
def get_action(self, action_polar_coords, robot_position):
angle = action_polar_coords[0] * (2 * np.pi)
dist = action_polar_coords[1] * self.max_distance
dx = dist * np.sin(angle)
dy = dist * np.cos(angle)
return np.array([dx, dy])
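# Aside (not in the original notebook): a quick sanity check of the mapping above.
# Both action components are assumed to lie in [0, 1]. With max_travel=30, the
# action [0.75, 0.5] used at the end of this notebook corresponds to an angle of
# 0.75 * 2*pi = 1.5*pi and a distance of 0.5 * 30 = 15, i.e. a displacement of
# roughly (dx, dy) = (-15, 0).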
from scipy import spatial
from skimage import io
import numpy as np
import numpy.ma as ma
import time
import sys
from scipy import ndimage
from copy import deepcopy
import matplotlib.pyplot as plt
sys.path.append('DRL_robot_exploration')
from DRL_robot_exploration.build.inverse_sensor_model import *
from DRL_robot_exploration.build.astar import *
from random import shuffle
import os
import random
class Robot:
def __init__(self,
index_map,
train,
plot,
root_dir,
action_space,
reward_function,
do_rescue,
shuffle=True):
self.mode = train
self.action_space = action_space
self.plot = plot
self.root_dir = root_dir
self.index_map = index_map
self.do_rescue = do_rescue
self.reward_function = reward_function
self.reset(index_map, shuffle)
def reset(self, index_map=None, do_shuffle=True):
if self.mode:
self.map_dir = os.path.join(self.root_dir, 'train')
else:
self.map_dir = os.path.join(self.root_dir, 'test')
self.map_list = os.listdir(self.map_dir)
self.map_number = np.size(self.map_list)
if self.mode and do_shuffle:
shuffle(self.map_list)
if index_map is None:
index_map = random.choice(range(len(self.map_list)))
self.li_map = index_map
self.global_map, self.robot_position = self.map_setup(
self.map_dir + '/' + self.map_list[self.li_map])
self.op_map = np.ones(self.global_map.shape) * 127
self.map_size = np.shape(self.global_map)
self.finish_percent = 0.985
self.resolution = 1
self.sensor_range = 80
self.old_position = np.zeros([2])
self.old_op_map = np.empty([0])
#current_dir = os.path.dirname(os.path.realpath(__file__))
self.t = self.map_points(self.global_map)
self.free_tree = spatial.KDTree(
self.free_points(self.global_map).tolist())
self.robot_size = 6
self.local_size = 40
if self.plot:
self.xPoint = np.array([self.robot_position[0]])
self.yPoint = np.array([self.robot_position[1]])
self.x2frontier = np.empty([0])
self.y2frontier = np.empty([0])
return self.begin(), self.robot_position
def begin(self):
self.op_map = self.inverse_sensor(self.robot_position,
self.sensor_range, self.op_map,
self.global_map)
step_map = self.robot_model(self.robot_position, self.robot_size,
self.t, self.op_map)
map_local = self.local_map(self.robot_position, step_map,
self.map_size,
self.sensor_range + self.local_size)
if self.plot:
self.plot_env()
return self.op_map
def step(self, action_index):
terminal = False
complete = False
new_location = False
all_map = False
self.old_position = self.robot_position.copy()
self.old_op_map = self.op_map.copy()
# take action
self.take_action(action_index, self.robot_position)
# collision check
collision_points, collision_index = self.collision_check(
self.old_position, self.robot_position, self.map_size,
self.global_map)
if collision_index:
self.robot_position = self.nearest_free(self.free_tree,
collision_points)
self.op_map = self.inverse_sensor(self.robot_position,
self.sensor_range, self.op_map,
self.global_map)
step_map = self.robot_model(self.robot_position, self.robot_size,
self.t, self.op_map)
else:
self.op_map = self.inverse_sensor(self.robot_position,
self.sensor_range, self.op_map,
self.global_map)
step_map = self.robot_model(self.robot_position, self.robot_size,
self.t, self.op_map)
map_local = self.local_map(self.robot_position, step_map,
self.map_size,
self.sensor_range + self.local_size)
reward = self.reward_function.get_reward(self.robot_position,
self.old_op_map, self.op_map,
collision_index)
if reward <= 0.02 and not collision_index:
reward = -0.8
new_location = True
#terminal = True
# during training, the robot is relocated if it has a collision
# during testing, the robot will use collision check to avoid the collision
if collision_index:
if not self.mode:
new_location = False
terminal = False
else:
new_location = True
terminal = True
if self.plot and self.mode:
self.xPoint = ma.append(self.xPoint, self.robot_position[0])
self.yPoint = ma.append(self.yPoint, self.robot_position[1])
self.plot_env()
self.robot_position = self.old_position.copy()
self.op_map = self.old_op_map.copy()
if self.plot and self.mode:
self.xPoint[self.xPoint.size - 1] = ma.masked
self.yPoint[self.yPoint.size - 1] = ma.masked
else:
if self.plot:
self.xPoint = ma.append(self.xPoint, self.robot_position[0])
self.yPoint = ma.append(self.yPoint, self.robot_position[1])
self.plot_env()
# check if exploration is finished
if np.size(np.where(self.op_map == 255)) / np.size(
np.where(self.global_map == 255)) > self.finish_percent:
self.li_map += 1
if self.li_map == self.map_number:
self.li_map = 0
all_map = True
            self.reset(self.li_map)  # load the next map and reset the exploration state
complete = True
new_location = False
terminal = True
return (
self.op_map, self.robot_position
), reward, terminal, complete, new_location, collision_index, all_map
def rescuer(self):
complete = False
all_map = False
pre_position = self.robot_position.copy()
self.robot_position = self.frontier(self.op_map, self.map_size, self.t)
self.op_map = self.inverse_sensor(self.robot_position,
self.sensor_range, self.op_map,
self.global_map)
step_map = self.robot_model(self.robot_position, self.robot_size,
self.t, self.op_map)
map_local = self.local_map(self.robot_position, step_map,
self.map_size,
self.sensor_range + self.local_size)
if self.plot:
path = self.astar_path(self.op_map, pre_position.tolist(),
self.robot_position.tolist())
self.x2frontier = ma.append(self.x2frontier, ma.masked)
self.y2frontier = ma.append(self.y2frontier, ma.masked)
self.x2frontier = ma.append(self.x2frontier, path[1, :])
self.y2frontier = ma.append(self.y2frontier, path[0, :])
self.xPoint = ma.append(self.xPoint, ma.masked)
self.yPoint = ma.append(self.yPoint, ma.masked)
self.xPoint = ma.append(self.xPoint, self.robot_position[0])
self.yPoint = ma.append(self.yPoint, self.robot_position[1])
self.plot_env()
if np.size(np.where(self.op_map == 255)) / np.size(
np.where(self.global_map == 255)) > self.finish_percent:
self.li_map += 1
if self.li_map == self.map_number:
self.li_map = 0
all_map = True
            self.reset(self.li_map)  # load the next map and reset the exploration state
complete = True
new_location = False
terminal = True
return map_local, complete, all_map
def take_action(self, action_index, robot_position):
move_action = self.action_space.get_action(action_index,
robot_position)
robot_position[0] = np.round(robot_position[0] + move_action[0])
robot_position[1] = np.round(robot_position[1] + move_action[1])
def map_setup(self, location):
global_map = (io.imread(location, 1) * 255).astype(int)
robot_location = np.nonzero(global_map == 208)
robot_location = np.array([
np.array(robot_location)[1, 127],
np.array(robot_location)[0, 127]
])
global_map = (global_map > 150)
global_map = global_map * 254 + 1
return global_map, robot_location
def map_points(self, map_glo):
map_x = map_glo.shape[1]
map_y = map_glo.shape[0]
x = np.linspace(0, map_x - 1, map_x)
y = np.linspace(0, map_y - 1, map_y)
t1, t2 = np.meshgrid(x, y)
points = np.vstack([t1.T.ravel(), t2.T.ravel()]).T
return points
def local_map(self, robot_location, map_glo, map_size, local_size):
minX = robot_location[0] - local_size
maxX = robot_location[0] + local_size
minY = robot_location[1] - local_size
maxY = robot_location[1] + local_size
if minX < 0:
maxX = abs(minX) + maxX
minX = 0
if maxX > map_size[1]:
minX = minX - (maxX - map_size[1])
maxX = map_size[1]
if minY < 0:
maxY = abs(minY) + maxY
minY = 0
if maxY > map_size[0]:
minY = minY - (maxY - map_size[0])
maxY = map_size[0]
map_loc = map_glo[minY:maxY][:, minX:maxX]
return map_loc
def free_points(self, op_map):
index = np.where(op_map == 255)
free = np.asarray([index[1], index[0]]).T
return free
def nearest_free(self, tree, point):
pts = np.atleast_2d(point)
index = tuple(tree.query(pts)[1])
nearest = tree.data[index]
return nearest
def robot_model(self, position, robot_size, points, map_glo):
map_copy = map_glo.copy()
robot_points = self.range_search(position, robot_size, points)
for i in range(0, robot_points.shape[0]):
rob_loc = np.int32(robot_points[i, :])
rob_loc = np.flipud(rob_loc)
map_copy[tuple(rob_loc)] = 76
map_with_robot = map_copy
return map_with_robot
def range_search(self, position, r, points):
nvar = position.shape[0]
r2 = r**2
s = 0
for d in range(0, nvar):
s += (points[:, d] - position[d])**2
idx = np.nonzero(s <= r2)
idx = np.asarray(idx).ravel()
inrange_points = points[idx, :]
return inrange_points
def collision_check(self, start_point, end_point, map_size, map_glo):
x0, y0 = start_point.round()
x1, y1 = end_point.round()
dx, dy = abs(x1 - x0), abs(y1 - y0)
x, y = x0, y0
error = dx - dy
x_inc = 1 if x1 > x0 else -1
y_inc = 1 if y1 > y0 else -1
dx *= 2
dy *= 2
coll_points = np.ones((1, 2), np.uint8) * -1
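        # Step along the line from start to end (Bresenham-style integer line
        # traversal); record and stop at the first obstacle cell (value 1),
        # or stop without recording anything once the end point is reached.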
while 0 <= x < map_size[1] and 0 <= y < map_size[0]:
k = map_glo.item(y, x)
if k == 1:
coll_points.itemset((0, 0), x)
coll_points.itemset((0, 1), y)
break
if x == end_point[0] and y == end_point[1]:
break
if error > 0:
x += x_inc
error -= dy
else:
y += y_inc
error += dx
if np.sum(coll_points) == -2:
coll_index = False
else:
coll_index = True
return coll_points, coll_index
def inverse_sensor(self, robot_position, sensor_range, op_map, map_glo):
op_map = inverse_sensor_model(robot_position[0], robot_position[1],
sensor_range, op_map, map_glo)
return op_map
def frontier(self, op_map, map_size, points):
y_len = map_size[0]
x_len = map_size[1]
mapping = op_map.copy()
# 0-1 unknown area map
mapping = (mapping == 127) * 1
mapping = np.lib.pad(mapping, ((1, 1), (1, 1)),
'constant',
constant_values=0)
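        # Each cell of fro_map counts how many of its 8 neighbours are unknown
        # (127). A free cell (255) whose count is greater than 1 and less than 8
        # borders the unexplored region, i.e. it is a frontier cell.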
        fro_map = mapping[2:][:, 1:x_len + 1] + mapping[:y_len][:, 1:x_len + 1] + \
                  mapping[1:y_len + 1][:, 2:] + mapping[1:y_len + 1][:, :x_len] + \
                  mapping[:y_len][:, 2:] + mapping[2:][:, :x_len] + \
                  mapping[2:][:, 2:] + mapping[:y_len][:, :x_len]
ind_free = np.where(op_map.ravel(order='F') == 255)[0]
ind_fron_1 = np.where(1 < fro_map.ravel(order='F'))[0]
ind_fron_2 = np.where(fro_map.ravel(order='F') < 8)[0]
ind_fron = np.intersect1d(ind_fron_1, ind_fron_2)
ind_to = np.intersect1d(ind_free, ind_fron)
f = points[ind_to]
f = f.astype(int)
return f[0]
def unique_rows(self, a):
a = np.ascontiguousarray(a)
unique_a = np.unique(a.view([('', a.dtype)] * a.shape[1]))
result = unique_a.view(a.dtype).reshape(
(unique_a.shape[0], a.shape[1]))
result = result[~np.isnan(result).any(axis=1)]
return result
def astar_path(self, weights, start, goal, allow_diagonal=True):
temp_start = [start[1], start[0]]
temp_goal = [goal[1], goal[0]]
temp_weight = (weights < 150) * 254 + 1
# For the heuristic to be valid, each move must cost at least 1.
if temp_weight.min(axis=None) < 1.:
raise ValueError("Minimum cost to move must be 1, but got %f" %
(temp_weight.min(axis=None)))
# Ensure start is within bounds.
if (temp_start[0] < 0 or temp_start[0] >= temp_weight.shape[0]
or temp_start[1] < 0 or temp_start[1] >= temp_weight.shape[1]):
raise ValueError("Start lies outside grid.")
# Ensure goal is within bounds.
if (temp_goal[0] < 0 or temp_goal[0] >= temp_weight.shape[0]
or temp_goal[1] < 0 or temp_goal[1] >= temp_weight.shape[1]):
raise ValueError("Goal of lies outside grid.")
height, width = temp_weight.shape
start_idx = np.ravel_multi_index(temp_start, (height, width))
goal_idx = np.ravel_multi_index(temp_goal, (height, width))
path = astar(
temp_weight.flatten(),
height,
width,
start_idx,
goal_idx,
allow_diagonal,
)
return path
def plot_env(self):
plt.cla()
plt.imshow(self.op_map, cmap='gray')
plt.axis((0, self.map_size[1], self.map_size[0], 0))
plt.plot(self.xPoint, self.yPoint, 'b', linewidth=2)
plt.plot(self.x2frontier, self.y2frontier, 'r', linewidth=2)
plt.plot(self.robot_position[0],
self.robot_position[1],
'mo',
markersize=8)
plt.plot(self.xPoint[0], self.yPoint[0], 'co', markersize=8)
plt.pause(0.05)
import numpy as np
import random
np.random.seed(1000)
random.seed(10)
reward_func = FrontierRewardFunction(1 / 80)
action_space = PolarActionSpace(30)
robot = Robot(0, True, False, 'DRL_robot_exploration/DungeonMaps',action_space,reward_func, False)
test_action = np.array([0.75, 0.5])
print('start')
print(robot.robot_position)
for i in range(10):
(map, loc), reward, terminal, complete, new_loc, collision, all_map = robot.step(test_action)
print('reward', reward)
print('robot loc', loc)
print(collision)
import torch
import torch.nn as nn
import torchsummary
import numpy as np
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence, pad_sequence
def build_conv_feature_extractor(conv_dims, act):
#Create Conv2D + MaxPool layers
conv_layers = [nn.Conv2d(*conv_dim) if len(conv_dim) == 3 else nn.MaxPool2d(conv_dim) for conv_dim in conv_dims]
total_layers = []
#Add ReLU activations after each conv layer
for layer in conv_layers:
total_layers.append(layer)
if type(layer) == nn.Conv2d:
total_layers.append(act())
return nn.Sequential(*total_layers)
def get_output_shape(model, image_dim):
return model(torch.rand(*(image_dim))).data.shape
class RNNActor(nn.Module):
#TODO Determine if the action space allows negative numbers
#Potentially replace tanh with sigmoid
def __init__(self, conv_dims, lstm_hidden, train_length, input_size=(1, 1,84,84), act=nn.ReLU, final_act=nn.Sigmoid):
super(RNNActor, self).__init__()
self.conv_mod = build_conv_feature_extractor(conv_dims, act)
#Silly way to determine the size going into the RNN
with torch.no_grad():
feature_size = get_output_shape(self.conv_mod, input_size)
print('LSTM Input Size', feature_size)
#Construct LSTM
self.lstm_hidden = lstm_hidden
self.lstm_input = np.prod(list(feature_size)) + 2
self.lstm = nn.LSTM(self.lstm_input, lstm_hidden)
self.linear = nn.Linear(lstm_hidden, 2)
self.train_length = train_length
self.final_act = final_act()
def forward(self, image, positions, lengths, hidden_state=None):
batch_size = image.size()[1]
seq_length = image.size()[0]
conv = self.conv_mod(image.view((seq_length * batch_size, 1, 84, 84)))
flat = conv.view(-1).view(seq_length, batch_size, self.lstm_input - 2)
state = torch.cat((flat, positions), 2)
packed = pack_padded_sequence(state, lengths, enforce_sorted=False)
if hidden_state is not None:
states, final_state = self.lstm(packed, hidden_state)
else:
states, final_state = self.lstm(packed)
unpacked, lengths = pad_packed_sequence(states)
final = self.linear(unpacked)
return self.final_act(final), final_state, lengths
conv_dims = [(1, 32, 8), (32, 64, 4), (2, 2), (64, 64, 3), (64, 512, 7), (2, 2), (512, 64, 1)]
lstm_hidden = 512
lstm_out = 2
train_length = 8
rnn = RNNActor(conv_dims, lstm_hidden, train_length).to(device='cuda')
test_batch = torch.rand((train_length, 10, 1, 84, 84)).to(device='cuda')
test_positions = torch.rand((train_length, 10, 2)).to(device='cuda')
hidden = (torch.zeros((1, 10, lstm_hidden)).to(device='cuda'), torch.zeros((1, 10, lstm_hidden)).to(device='cuda'))
print(hidden[0].size())
test, _, _ = rnn(test_batch, test_positions, [train_length] * 10)
test.size()
def build_dense_regression(linear_dims, act, final_act=None):
linear_layers = [nn.Linear(*linear_dim) for linear_dim in linear_dims]
activations = [act() for layer in range(len(linear_layers) - 1)]
if final_act is not None:
activations.append(final_act)
else:
activations.append(nn.Identity())
    return nn.Sequential(*[val for tup in zip(linear_layers, activations) for val in tup])
class CNNCritic(nn.Module):
def __init__(self, conv_dims, fc_dims, input_size=(1, 1,84,84), conv_act=nn.ReLU, fc_act=nn.ReLU):
super(CNNCritic, self).__init__()
self.conv_mod = build_conv_feature_extractor(conv_dims, conv_act)
#Silly way to determine the size going into the RNN
with torch.no_grad():
feature_size = get_output_shape(self.conv_mod, input_size)
#Add 4 for action + position
feature_size = np.prod(list(feature_size)) + 4
first_output = fc_dims[0][0]
fc_dims.insert(0, (feature_size, first_output))
self.fc = build_dense_regression(fc_dims, fc_act)
self.fc_dims = feature_size
def forward(self, map, positions, action):
batch_size = map.size()[1]
seq_length = map.size()[0]
conv = self.conv_mod(map.view((seq_length * batch_size, 1, 84, 84)))
flat = conv.view(-1).view(seq_length, batch_size, self.fc_dims - 4)
total_feats = torch.cat((flat, positions, action), 2)
return self.fc(total_feats)
linear_dims = [(256, 128), (128, 1)]
conv_dims = [(1, 32, 8), (32, 64, 4), (2, 2), (64, 64, 3), (64, 512, 7), (2, 2), (512, 64, 1)]
critic = CNNCritic(conv_dims, linear_dims)
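# Illustrative shape check (added sketch, not from the original run): with a
# sequence length of 8 and a batch of 4, the critic should return one Q-value
# per (timestep, batch) pair, i.e. a tensor of shape (8, 4, 1).
_test_maps = torch.rand((8, 4, 1, 84, 84))
_test_pos = torch.rand((8, 4, 2))
_test_act = torch.rand((8, 4, 2))
print(critic(_test_maps, _test_pos, _test_act).shape)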
""" Learn a policy using DDPG for the reach task"""
import numpy as np
import torch
import time
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import MultivariateNormal
from torch.nn import MSELoss
import random
from skimage.transform import resize
from io import BytesIO
import itertools
import lmdb
from itertools import zip_longest
import gym
import os
import matplotlib.pyplot as plt
import copy
# TODO: A function to soft update target networks
def weighSync(target_model, source_model, tau=0.001):
for (target, src) in zip(target_model.parameters(), source_model.parameters()):
target.data = (1-tau) * target.data + tau * src.data
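# weighSync applies Polyak averaging to the target network parameters:
#   theta_target <- (1 - tau) * theta_target + tau * theta_source
# e.g. weighSync(critic_target, critic) nudges the target critic slightly
# toward the online critic after every training step.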
def grouper(iterable, n, fillvalue=None):
'''Collect data into fixed-length chunks or blocks'''
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
# TODO: Write the ReplayBuffer
class Replay():
def __init__(self, buffer_size, init_episodes, max_episode_length, sequence_length, action_dim, env, env_width, env_height):
"""
A function to initialize the replay buffer.
param: init_length : Initial number of transitions to collect
param: state_dim : Size of the state space
param: action_dim : Size of the action space
param: env : gym environment object
"""
try:
os.remove('db.lmdb')
except OSError:
pass
self.db = lmdb.open('db.lmdb', map_size=30e9)
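        # lmdb memory-maps the sequence store on disk; map_size=30e9 only sets
        # the maximum size of the map (~30 GB), it is not allocated up front on
        # most platforms.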
self.buffer = [{}] * buffer_size
self.noise = MultivariateNormal(torch.zeros(2), torch.diag(torch.tensor([0.05, 0.05])))
self.sequence_length = sequence_length
self.max_episode_length = max_episode_length
self.env = env
state = self.env.reset()
self.env_width = env_width
self.env_height = env_height
self.buffer_idx = 0
self.total_steps = 0
last_state = env.reset()
init_policy = lambda map, pos, lengths: (torch.from_numpy(np.random.uniform(0, 1, (2,))).unsqueeze(0).unsqueeze(1), None, [1])
self.full_buffer = False
for episode in range(init_episodes):
episode = self.generate_episode(init_policy, False)
def generate_episode(self, policy, add_noise=True, store=True):
episode = []
map, position = self.env.reset()
position = position.astype(np.float64)
map = resize(map, (84, 84))
map = ((map - 127) / 255) * 2
position[0] = position[0]/ 640.0
position[1] = position[1] / 480.0
last_map = torch.from_numpy(map).float()
last_position = torch.from_numpy(position).float()
terminal = False
total_reward = 0
last_state = None
for i in range(self.max_episode_length):
if last_state is None:
action, last_state, lengths = policy(last_map.unsqueeze(0).unsqueeze(0).to(device='cuda'), last_position.unsqueeze(0).unsqueeze(0).to(device='cuda'), [1])
else:
action, last_state, lengths = policy(last_map.unsqueeze(0).unsqueeze(0).to(device='cuda'), last_position.unsqueeze(0).unsqueeze(0).to(device='cuda'), [1], last_state)
if add_noise:
action = action.cpu().squeeze(0).squeeze(1) + self.noise.sample()
else:
action = action.cpu().squeeze(0).squeeze(1)
action_np = action.detach().numpy().flatten()
action_np[0] = np.clip(action_np[0], 0, 1)
action_np[1] = np.clip(action_np[1], 0, 1)
(map, loc), reward, terminal, complete, new_loc, collision, all_map = self.env.step(action_np)
map = resize(map, (84, 84))
map = ((map - 127) / 255) * 2
loc = loc.astype(np.float64)
loc[0] = loc[0] / 640.0
loc[1] = loc[1] / 480.0
map_tensor = torch.from_numpy(map).float()
position_tensor = torch.from_numpy(loc).float()
reward_tensor = torch.tensor(reward).float()
episode.append({'map': map_tensor.detach(), 'position': position_tensor.detach(), 'reward': reward_tensor.detach(), 'action': action.detach()})
last_map = map_tensor
last_position = position_tensor
total_reward += reward
if terminal:
break
if store:
sequences = self.episode_to_sequences(episode)
for sequence in sequences:
self.write_sequence(sequence, self.buffer_idx)
self.buffer_idx = (self.buffer_idx + 1) % len(self.buffer)
if self.buffer_idx == 0:
self.full_buffer = True
        return total_reward
def write_sequence(self, sequence, idx):
actions = sequence['actions']
rewards = sequence['rewards']
maps = sequence['maps']
positions = sequence['positions']
        seq_len = sequence['len']
total_tensor = torch.cat((actions, rewards.unsqueeze(1).unsqueeze(1), positions.unsqueeze(1)), 2)
with self.db.begin(write=True) as txn:
total_bytes = BytesIO()
maps_bytes = BytesIO()
torch.save(total_tensor, total_bytes)
torch.save(maps, maps_bytes)
txn.put('{}_total'.format(idx).encode(), total_bytes.getvalue())
txn.put('{}_maps'.format(idx).encode(), maps_bytes.getvalue())
            txn.put('{}_len'.format(idx).encode(), str(seq_len).encode())
def read_sequences(self, sequences):
map_sequences = []
position_sequences = []
reward_sequences = []
action_sequences = []
seq_lens = []
for seq in sequences:
with self.db.begin() as txn:
total_data = BytesIO(txn.get('{}_total'.format(seq).encode()))
map_data = BytesIO(txn.get('{}_maps'.format(seq).encode()))
length = txn.get('{}_len'.format(seq).encode())
total_tensor = torch.load(total_data)
map_tensor = torch.load(map_data)
map_sequences.append(map_tensor)
position_sequences.append(total_tensor[:, :, 3:])
action_sequences.append(total_tensor[:, :, :2])
reward_sequences.append(total_tensor[:, :, 2])
seq_lens.append(int(length))
map_pad = pad_sequence(map_sequences).to(device='cuda').float()
pos_pad = pad_sequence(position_sequences).to(device='cuda').float()
reward_pad = pad_sequence(reward_sequences).to(device='cuda').float()
action_pad = pad_sequence(action_sequences).to(device='cuda').float()
seqs = seq_lens
return map_pad, pos_pad, reward_pad, action_pad, seqs
def episode_to_sequences(self, episode):
sequences = []
last_idx = 0
for i in np.arange(self.sequence_length, len(episode), self.sequence_length):
window = episode[last_idx:i]
map_tensor = torch.cat([torch.unsqueeze(data['map'], 0) for data in window], 0)
position_tensor = torch.cat([torch.unsqueeze(data['position'], 0) for data in window], 0)
reward_tensor = torch.cat([torch.unsqueeze(data['reward'], 0) for data in window], 0)
action_tensor = torch.cat([torch.unsqueeze(data['action'], 0) for data in window], 0)
sequences.append({'maps':map_tensor, 'positions': position_tensor, 'rewards': reward_tensor, 'actions':action_tensor, 'len':len(window)})
last_idx = i
window = episode[last_idx:]
map_tensor = torch.cat([torch.unsqueeze(data['map'], 0) for data in window], 0)
position_tensor = torch.cat([torch.unsqueeze(data['position'], 0) for data in window], 0)
reward_tensor = torch.cat([torch.unsqueeze(data['reward'], 0) for data in window], 0)
action_tensor = torch.cat([torch.unsqueeze(data['action'], 0) for data in window], 0)
sequences.append({'maps':map_tensor, 'positions': position_tensor, 'rewards': reward_tensor, 'actions':action_tensor, 'len':len(window)})
return sequences
#TODO: Complete the function
def buffer_sample(self, N):
"""
A function to sample N points from the buffer
param: N : Number of samples to obtain from the buffer
"""
if self.full_buffer:
            samples = np.random.permutation(range(len(self.buffer)))
else:
samples = np.random.permutation(range(self.buffer_idx))
samples = samples[:N]
return self.read_sequences(samples)
def batchify(self, samples):
map_sequences = []
position_sequences = []
reward_sequences = []
action_sequences = []
seq_lens = []
for sequence in samples:
map_sequences.append(sequence['maps'])
position_sequences.append(sequence['positions'])
reward_sequences.append(sequence['rewards'])
action_sequences.append(sequence['actions'])
seq_lens.append(sequence['len'])
map_pad = pad_sequence(map_sequences).to(device='cuda').float()
pos_pad = pad_sequence(position_sequences).to(device='cuda').float()
reward_pad = pad_sequence(reward_sequences).to(device='cuda').float()
action_pad = pad_sequence(action_sequences).to(device='cuda').float()
seqs = seq_lens
return map_pad, pos_pad, reward_pad, action_pad, seqs
# TODO: Implement a DDPG class
class DDPG():
def __init__(
self,
env,
conv_dims,
state_dim,
linear_dims,
sequence_length,
replay,
critic_lr=3e-4,
actor_lr=3e-4,
gamma=0.99,
batch_size=100,
seed=1000
):
"""
        param: env: A gym environment object
param: action_dim: Size of action space
param: state_dim: Size of state space
param: critic_lr: Learning rate of the critic
param: actor_lr: Learning rate of the actor
param: gamma: The discount factor
param: batch_size: The batch size for training
"""
np.random.seed(seed)
torch.manual_seed(seed)
action_dim = 2
self.gamma = gamma
self.batch_size = batch_size
self.sequence_length = sequence_length
self.env = env
self.state_dim = state_dim
self.actor = RNNActor(conv_dims, state_dim, sequence_length).to(device='cuda')
# TODO: Create a actor and actor_target
self.actor_target = copy.deepcopy(self.actor)
# TODO: Make sure that both networks have the same initial weights
# TODO: Create a critic and critic_target object
self.critic = CNNCritic(conv_dims, linear_dims).to(device='cuda')
self.critic_target = copy.deepcopy(self.critic)
# TODO: Make sure that both networks have the same initial weights
# TODO: Define the optimizer for the actor
self.optimizer_actor = optim.Adam(self.actor.parameters(), actor_lr)
# TODO: Define the optimizer for the critic
self.optimizer_critic = optim.Adam(self.critic.parameters(), critic_lr)
# TODO: define a replay buffer
#buffer_size, init_episodes, max_episode_length, state_dim, action_dim, env, env_width, env_height
self.replay = replay
# TODO: Complete the function
def update_target_networks(self):
"""
A function to update the target networks
"""
weighSync(self.actor_target, self.actor)
weighSync(self.critic_target, self.critic)
# TODO: Complete the function
def update_network(self, y_i, maps, positions, actions, lengths):
"""
        A function to perform a single critic and actor update
"""
qs = self.critic(maps, positions, actions)
        # both qs and y_i should have shape (seq_len, batch, 1)
critic_loss = ((y_i - qs)**2).sum() / (self.sequence_length * self.batch_size)
critic_loss.backward()
self.optimizer_critic.step()
# Freeze Q-network so you don't waste computational effort
# computing gradients for it during the policy learning step.
for p in self.critic.parameters():
p.requires_grad = False
new_act, _, _ = self.actor(maps, positions, lengths)
qs = self.critic(maps, positions, new_act)
actor_loss = qs.sum() / (self.sequence_length * self.batch_size)
(-actor_loss).backward()
self.optimizer_actor.step()
        # Unfreeze the Q-network parameters now that the policy update is done.
for p in self.critic.parameters():
p.requires_grad = True
# TODO: Complete the function
def train(self, num_steps):
"""
Train the policy for the given number of iterations
:param num_steps:The number of steps to train the policy for
"""
self.critic_criterion = MSELoss()
num_episodes = 0
i = 0
total_reward = 0
total_steps = 0
episode_reward = 0
steps_list = []
rewards = []
start = time.time()
while i < num_steps:
done = False
last_state = self.env.reset()
num_episodes += 1
i += 1
self.optimizer_critic.zero_grad()
self.optimizer_actor.zero_grad()
reward = self.replay.generate_episode(self.actor)
            # maps.size() -> (seq_len, batch_size, 84, 84)
            # positions.size() -> (seq_len, batch_size, 1, 2), squeezed below
            # rewards.size() -> (seq_len, batch_size, 1)
            # actions.size() -> (seq_len, batch_size, 1, 2), squeezed below
maps, positions, rewards, actions, lengths = self.replay.buffer_sample(self.batch_size)
positions = positions.squeeze(2)
actions = actions.squeeze(2)
with torch.no_grad():
target_action, _, _ = self.actor_target(maps, positions, lengths)
target_action = target_action.squeeze(2)
#Should be (seq_len, batch_size, 2)
#Should be (seq_len, batch_size, 1)
crit = self.critic_target(maps, positions, target_action)
print('rewards', rewards.size())
print('crit', crit.size())
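            # Bellman-style TD target for the critic: y = r + gamma * Q_target(s, mu_target(s))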
ys = rewards + self.gamma * crit
self.update_network(ys, maps, positions, actions, lengths)
self.update_target_networks()
if i % 100 == 0:
print('step {}'.format(i))
print('since start {}'.format(time.time() - start))
print('reward', reward)
# test_done = False
# episode_reward = 0
# the_steps = 0
# s = test_env.reset()
# while not test_done:
# total_steps += 1
# the_steps += 1
# action = self.actor(torch.from_numpy(s).float().to(device='cuda')).detach().squeeze().cpu().numpy()
# n_state, r, test_done, _ = test_env.step(action)
# s = n_state
# episode_reward += r
# rewards.append(episode_reward)
# steps_list.append(the_steps)
# print('Episode reward')
# print(episode_reward)
return rewards, steps_list
reward_func = FrontierRewardFunction(1 / 14000)
action_space = PolarActionSpace(15)
robot = Robot(0, True, False, 'DRL_robot_exploration/DungeonMaps',action_space,reward_func, False)
replay = Replay(10000, 10, 300, 8, 2, robot, 640, 480)
replay.buffer_sample(1)
linear_dims = [(256, 128), (128, 1)]
conv_dims = [(1, 32, 8), (32, 64, 4), (2, 2), (64, 64, 3), (64, 512, 7), (2, 2), (512, 64, 1)]
lstm_hidden = 512
lstm_out = 2
train_length = 8
ddpg = DDPG(robot, conv_dims, lstm_hidden, linear_dims, 8, replay, 3e-4,3e-4,0.99, 10)
print('init seqs', ddpg.replay.buffer_idx)
torch.autograd.set_detect_anomaly(True)
ddpg.train(50000)
###Output
_____no_output_____
###Markdown
LSTM - solving the `vanishing gradient` problem
###Code
import matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use("dark_background")
plt.rcParams.update({
"axes.grid" : True
})
train_df = pd.read_csv("./Google_Stock_Price_Train.csv")
test_df = pd.read_csv("./Google_Stock_Price_Test.csv")
train_df["Date"] = train_df["Date"].apply(pd.to_datetime)
test_df["Date"] = test_df["Date"].apply(pd.to_datetime)
train_df.head()
test_df.head()
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
ax.set(
title = "Values to Train",
xlabel = "Date",
ylabel = "Open"
)
ax.plot(train_df["Date"],train_df["Open"],label ="'Open' values over Date")
ax.legend(loc="best")
fig.show()
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
ax.set(
title = "Values to test",
xlabel = "Date",
ylabel = "Open",
)
ax.plot(test_df["Date"],test_df["Open"],label ="'Open' values over Date")
ax.legend(loc="best")
fig.show()
training_set = train_df["Open"].values.reshape(-1,1)
training_set
###Output
_____no_output_____
###Markdown
- Whenever an RNN is used, it is recommended to normalize the input data.
###Code
from sklearn.preprocessing import MinMaxScaler
normalizer = MinMaxScaler(feature_range=(0,1))
training_set_scaled = normalizer.fit_transform(training_set)
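# MinMaxScaler with feature_range=(0, 1) rescales each price as
#   x_scaled = (x - x_min) / (x_max - x_min)
# so all training values land in [0, 1].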
training_set_scaled
###Output
_____no_output_____
###Markdown
- Create training sequences with a window of 60 (60 days); for each sequence, the forecast target is the next day's value. Let's see how it goes.
###Code
x_train = []
y_train = []
window = 60
total_length = len(training_set_scaled)
for i in range(window,total_length):
x_train.append(training_set_scaled[i-window:i,0])
y_train.append(training_set_scaled[i,0])
x_train, y_train = np.array(x_train),np.array(y_train)
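# Worked mini-example of the windowing above: with window = 3 and a scaled
# series [p0, p1, p2, p3, p4], x_train = [[p0, p1, p2], [p1, p2, p3]] and
# y_train = [p3, p4] -- each window of past values predicts the next value.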
x_train.shape
###Output
_____no_output_____
###Markdown
- Here `x_train` is a 2D matrix, which only accounts for a single feature.
- Our goal is to reshape it into a 3D tensor of shape (samples, timesteps, features), so that additional features can be added later.
###Code
x_train = np.reshape(x_train,(x_train.shape[0],x_train.shape[1],1))
x_train.shape
y_train.shape
###Output
_____no_output_____
###Markdown
Building the model
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout
###Output
_____no_output_____
###Markdown
- initialize model
###Code
regressor = Sequential()
###Output
_____no_output_____
###Markdown
- adding first LSTM layer and Dropout Regularization
###Code
regressor.add(
LSTM(
units = 50,
return_sequences = True,
input_shape = (x_train.shape[1],1)
)
)
regressor.add(
Dropout(
rate = 0.2
)
)
###Output
_____no_output_____
###Markdown
- adding second LSTM layer and Dropout Regularization
###Code
regressor.add(
LSTM(
units = 50,
return_sequences = True
)
)
regressor.add(
Dropout(
rate = 0.2
)
)
###Output
_____no_output_____
###Markdown
- adding third LSTM layer and Dropout Regularization
###Code
regressor.add(
LSTM(
units = 50,
return_sequences = True
)
)
regressor.add(
Dropout(
rate = 0.2
)
)
###Output
_____no_output_____
###Markdown
- adding fourth LSTM layer and Dropout Regularization (do not return sequences)
###Code
regressor.add(
LSTM(
units = 50,
return_sequences = False
)
)
regressor.add(
Dropout(
rate = 0.2
)
)
###Output
_____no_output_____
###Markdown
- add output layer
###Code
regressor.add(
Dense(
units = 1
)
)
###Output
_____no_output_____
###Markdown
- Compile model
###Code
regressor.compile(
optimizer = 'adam',
loss = 'mean_squared_error',
)
regressor.fit(x_train, y_train, epochs = 200, batch_size = 32)
###Output
Epoch 1/200
38/38 [==============================] - 8s 222ms/step - loss: 0.0014
Epoch 2/200
38/38 [==============================] - 8s 214ms/step - loss: 0.0014
Epoch 3/200
38/38 [==============================] - 9s 228ms/step - loss: 0.0013
Epoch 4/200
38/38 [==============================] - 11s 290ms/step - loss: 0.0013
Epoch 5/200
38/38 [==============================] - 15s 397ms/step - loss: 0.0014
Epoch 6/200
38/38 [==============================] - 11s 291ms/step - loss: 0.0013
Epoch 7/200
38/38 [==============================] - 12s 309ms/step - loss: 0.0014
Epoch 8/200
38/38 [==============================] - 11s 277ms/step - loss: 0.0013
Epoch 9/200
38/38 [==============================] - 8s 211ms/step - loss: 0.0013
Epoch 10/200
38/38 [==============================] - 8s 212ms/step - loss: 0.0013
Epoch 11/200
38/38 [==============================] - 8s 220ms/step - loss: 0.0012
Epoch 12/200
38/38 [==============================] - 8s 215ms/step - loss: 0.0015
Epoch 13/200
38/38 [==============================] - 8s 222ms/step - loss: 0.0014
Epoch 14/200
38/38 [==============================] - 7s 175ms/step - loss: 0.0012
Epoch 15/200
38/38 [==============================] - 7s 187ms/step - loss: 0.0011
Epoch 16/200
38/38 [==============================] - 8s 201ms/step - loss: 0.0012
Epoch 17/200
38/38 [==============================] - 8s 217ms/step - loss: 0.0013
Epoch 18/200
38/38 [==============================] - 8s 208ms/step - loss: 0.0012
Epoch 19/200
38/38 [==============================] - 8s 207ms/step - loss: 0.0013
Epoch 20/200
38/38 [==============================] - 8s 208ms/step - loss: 0.0013
Epoch 21/200
38/38 [==============================] - 9s 243ms/step - loss: 0.0011
Epoch 22/200
38/38 [==============================] - 8s 213ms/step - loss: 0.0011
Epoch 23/200
38/38 [==============================] - 8s 213ms/step - loss: 0.0013
Epoch 24/200
38/38 [==============================] - 8s 214ms/step - loss: 0.0012
Epoch 25/200
38/38 [==============================] - 8s 214ms/step - loss: 0.0012
Epoch 26/200
38/38 [==============================] - 9s 226ms/step - loss: 0.0012
Epoch 27/200
38/38 [==============================] - 10s 255ms/step - loss: 0.0012
Epoch 28/200
38/38 [==============================] - 9s 230ms/step - loss: 0.0014
Epoch 29/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0014
Epoch 30/200
38/38 [==============================] - 6s 170ms/step - loss: 0.0012
Epoch 31/200
38/38 [==============================] - 8s 204ms/step - loss: 0.0011
Epoch 32/200
38/38 [==============================] - 9s 226ms/step - loss: 0.0012
Epoch 33/200
38/38 [==============================] - 8s 203ms/step - loss: 0.0012
Epoch 34/200
38/38 [==============================] - 8s 209ms/step - loss: 0.0011
Epoch 35/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0012
Epoch 36/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 37/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0010
Epoch 38/200
38/38 [==============================] - 6s 169ms/step - loss: 0.0012
Epoch 39/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0011
Epoch 40/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 41/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0012
Epoch 42/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0011
Epoch 43/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0012
Epoch 44/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0012
Epoch 45/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 46/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 47/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 48/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0010
Epoch 49/200
38/38 [==============================] - 6s 162ms/step - loss: 9.7844e-04
Epoch 50/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0012
Epoch 51/200
38/38 [==============================] - 6s 167ms/step - loss: 0.0011
Epoch 52/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0011
Epoch 53/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 54/200
38/38 [==============================] - 7s 177ms/step - loss: 0.0011
Epoch 55/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0011
Epoch 56/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0012
Epoch 57/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0010
Epoch 58/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0012
Epoch 59/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 60/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 61/200
38/38 [==============================] - 6s 159ms/step - loss: 0.0011
Epoch 62/200
38/38 [==============================] - 6s 165ms/step - loss: 0.0011
Epoch 63/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0010
Epoch 64/200
38/38 [==============================] - 6s 159ms/step - loss: 0.0010
Epoch 65/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0010
Epoch 66/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0010
Epoch 67/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 68/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 69/200
38/38 [==============================] - 6s 161ms/step - loss: 9.6550e-04
Epoch 70/200
38/38 [==============================] - 6s 161ms/step - loss: 9.5355e-04
Epoch 71/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0010
Epoch 72/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0012
Epoch 73/200
38/38 [==============================] - 6s 166ms/step - loss: 0.0011
Epoch 74/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 75/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 76/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 77/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0011
Epoch 78/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0010
Epoch 79/200
38/38 [==============================] - 6s 163ms/step - loss: 9.6147e-04
Epoch 80/200
38/38 [==============================] - 6s 162ms/step - loss: 9.7619e-04
Epoch 81/200
38/38 [==============================] - 6s 165ms/step - loss: 0.0011
Epoch 82/200
38/38 [==============================] - 6s 162ms/step - loss: 9.2216e-04
Epoch 83/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 84/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 85/200
38/38 [==============================] - 6s 161ms/step - loss: 9.8050e-04
Epoch 86/200
38/38 [==============================] - 6s 166ms/step - loss: 8.7016e-04
Epoch 87/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0011
Epoch 88/200
38/38 [==============================] - 6s 161ms/step - loss: 9.9031e-04
Epoch 89/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 90/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0011
Epoch 91/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0011
Epoch 92/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 93/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0011
Epoch 94/200
38/38 [==============================] - 6s 164ms/step - loss: 9.9037e-04
Epoch 95/200
38/38 [==============================] - 6s 162ms/step - loss: 9.7466e-04
Epoch 96/200
38/38 [==============================] - 6s 164ms/step - loss: 8.8390e-04
Epoch 97/200
38/38 [==============================] - 6s 167ms/step - loss: 0.0010
Epoch 98/200
38/38 [==============================] - 6s 169ms/step - loss: 0.0012
Epoch 99/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0010
Epoch 100/200
38/38 [==============================] - 6s 160ms/step - loss: 0.0011
Epoch 101/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0013
Epoch 102/200
38/38 [==============================] - 6s 162ms/step - loss: 9.1187e-04
Epoch 103/200
38/38 [==============================] - 6s 160ms/step - loss: 9.0584e-04
Epoch 104/200
38/38 [==============================] - 6s 165ms/step - loss: 9.1992e-04
Epoch 105/200
38/38 [==============================] - 6s 163ms/step - loss: 9.2383e-04
Epoch 106/200
38/38 [==============================] - 6s 166ms/step - loss: 0.0011
Epoch 107/200
38/38 [==============================] - 6s 161ms/step - loss: 9.9143e-04
Epoch 108/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0010
Epoch 109/200
38/38 [==============================] - 6s 162ms/step - loss: 9.4172e-04
Epoch 110/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0010
Epoch 111/200
38/38 [==============================] - 6s 161ms/step - loss: 9.4158e-04
Epoch 112/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 113/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0013
Epoch 114/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0011
Epoch 115/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 116/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0010
Epoch 117/200
38/38 [==============================] - 6s 164ms/step - loss: 9.2120e-04
Epoch 118/200
38/38 [==============================] - 6s 168ms/step - loss: 9.5325e-04
Epoch 119/200
38/38 [==============================] - 6s 159ms/step - loss: 0.0010
Epoch 120/200
38/38 [==============================] - 6s 160ms/step - loss: 9.2763e-04
Epoch 121/200
38/38 [==============================] - 6s 160ms/step - loss: 9.5181e-04
Epoch 122/200
38/38 [==============================] - 6s 161ms/step - loss: 8.8135e-04
Epoch 123/200
38/38 [==============================] - 6s 166ms/step - loss: 0.0010
Epoch 124/200
38/38 [==============================] - 6s 168ms/step - loss: 9.3785e-04
Epoch 125/200
38/38 [==============================] - 6s 166ms/step - loss: 0.0011
Epoch 126/200
38/38 [==============================] - 6s 165ms/step - loss: 9.3783e-04
Epoch 127/200
38/38 [==============================] - 6s 163ms/step - loss: 9.7178e-04
Epoch 128/200
38/38 [==============================] - 6s 164ms/step - loss: 0.0011
Epoch 129/200
38/38 [==============================] - 6s 167ms/step - loss: 0.0010
Epoch 130/200
38/38 [==============================] - 6s 163ms/step - loss: 9.0437e-04
Epoch 131/200
38/38 [==============================] - 6s 162ms/step - loss: 9.0067e-04
Epoch 132/200
38/38 [==============================] - 6s 165ms/step - loss: 9.0563e-04
Epoch 133/200
38/38 [==============================] - 6s 164ms/step - loss: 9.1632e-04
Epoch 134/200
38/38 [==============================] - 6s 163ms/step - loss: 0.0010
Epoch 135/200
38/38 [==============================] - 6s 167ms/step - loss: 0.0010
Epoch 136/200
38/38 [==============================] - 6s 162ms/step - loss: 9.8374e-04
Epoch 137/200
38/38 [==============================] - 6s 163ms/step - loss: 9.4184e-04
Epoch 138/200
38/38 [==============================] - 6s 170ms/step - loss: 9.7541e-04
Epoch 139/200
38/38 [==============================] - 6s 163ms/step - loss: 9.0338e-04
Epoch 140/200
38/38 [==============================] - 7s 171ms/step - loss: 9.3161e-04
Epoch 141/200
38/38 [==============================] - 6s 166ms/step - loss: 0.0011
Epoch 142/200
38/38 [==============================] - 6s 163ms/step - loss: 9.4194e-04
Epoch 143/200
38/38 [==============================] - 6s 163ms/step - loss: 8.2567e-04
Epoch 144/200
38/38 [==============================] - 6s 165ms/step - loss: 9.4239e-04
Epoch 145/200
38/38 [==============================] - 7s 171ms/step - loss: 9.2421e-04
Epoch 146/200
38/38 [==============================] - 6s 165ms/step - loss: 9.2829e-04
Epoch 147/200
38/38 [==============================] - 6s 166ms/step - loss: 9.1966e-04
Epoch 148/200
38/38 [==============================] - 6s 168ms/step - loss: 9.4155e-04
Epoch 149/200
38/38 [==============================] - 6s 163ms/step - loss: 8.7073e-04
Epoch 150/200
38/38 [==============================] - 6s 167ms/step - loss: 8.0530e-04
Epoch 151/200
38/38 [==============================] - 6s 161ms/step - loss: 8.7276e-04
Epoch 152/200
38/38 [==============================] - 6s 165ms/step - loss: 9.7859e-04
Epoch 153/200
38/38 [==============================] - 6s 162ms/step - loss: 0.0010
Epoch 154/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 155/200
38/38 [==============================] - 6s 165ms/step - loss: 8.7092e-04
Epoch 156/200
38/38 [==============================] - 6s 162ms/step - loss: 9.7503e-04
Epoch 157/200
38/38 [==============================] - 6s 165ms/step - loss: 8.1540e-04
Epoch 158/200
38/38 [==============================] - 6s 165ms/step - loss: 9.3185e-04
Epoch 159/200
38/38 [==============================] - 6s 162ms/step - loss: 9.1329e-04
Epoch 160/200
38/38 [==============================] - 6s 169ms/step - loss: 9.1328e-04
Epoch 161/200
38/38 [==============================] - 6s 161ms/step - loss: 8.6934e-04
Epoch 162/200
38/38 [==============================] - 6s 170ms/step - loss: 8.8985e-04
Epoch 163/200
38/38 [==============================] - 6s 165ms/step - loss: 0.0011
Epoch 164/200
38/38 [==============================] - 6s 162ms/step - loss: 8.8198e-04
Epoch 165/200
38/38 [==============================] - 6s 165ms/step - loss: 0.0011
Epoch 166/200
38/38 [==============================] - 6s 164ms/step - loss: 9.4810e-04
Epoch 167/200
38/38 [==============================] - 6s 162ms/step - loss: 8.9965e-04
Epoch 168/200
38/38 [==============================] - 6s 167ms/step - loss: 8.8273e-04
Epoch 169/200
38/38 [==============================] - 6s 163ms/step - loss: 8.8623e-04
Epoch 170/200
38/38 [==============================] - 6s 168ms/step - loss: 8.4542e-04
Epoch 171/200
38/38 [==============================] - 6s 163ms/step - loss: 9.0736e-04
Epoch 172/200
38/38 [==============================] - 6s 162ms/step - loss: 9.3580e-04
Epoch 173/200
38/38 [==============================] - 6s 165ms/step - loss: 0.0010
Epoch 174/200
38/38 [==============================] - 6s 165ms/step - loss: 9.5881e-04
Epoch 175/200
38/38 [==============================] - 6s 166ms/step - loss: 0.0011
Epoch 176/200
38/38 [==============================] - 6s 163ms/step - loss: 8.2628e-04
Epoch 177/200
38/38 [==============================] - 6s 163ms/step - loss: 8.9608e-04
Epoch 178/200
38/38 [==============================] - 6s 165ms/step - loss: 9.4903e-04
Epoch 179/200
38/38 [==============================] - 6s 163ms/step - loss: 9.0983e-04
Epoch 180/200
38/38 [==============================] - 6s 166ms/step - loss: 8.3170e-04
Epoch 181/200
38/38 [==============================] - 6s 159ms/step - loss: 9.1586e-04
Epoch 182/200
38/38 [==============================] - 6s 161ms/step - loss: 9.8892e-04
Epoch 183/200
38/38 [==============================] - 6s 168ms/step - loss: 9.2415e-04
Epoch 184/200
38/38 [==============================] - 6s 160ms/step - loss: 9.8347e-04
Epoch 185/200
38/38 [==============================] - 6s 165ms/step - loss: 9.5522e-04
Epoch 186/200
38/38 [==============================] - 6s 165ms/step - loss: 8.4980e-04
Epoch 187/200
38/38 [==============================] - 7s 171ms/step - loss: 8.3370e-04
Epoch 188/200
38/38 [==============================] - 6s 162ms/step - loss: 8.5308e-04
Epoch 189/200
38/38 [==============================] - 6s 160ms/step - loss: 8.6326e-04
Epoch 190/200
38/38 [==============================] - 6s 164ms/step - loss: 9.8035e-04
Epoch 191/200
38/38 [==============================] - 6s 162ms/step - loss: 8.3824e-04
Epoch 192/200
38/38 [==============================] - 7s 172ms/step - loss: 8.2253e-04
Epoch 193/200
38/38 [==============================] - 6s 164ms/step - loss: 9.2322e-04
Epoch 194/200
38/38 [==============================] - 6s 163ms/step - loss: 8.3790e-04
Epoch 195/200
38/38 [==============================] - 6s 165ms/step - loss: 9.7694e-04
Epoch 196/200
38/38 [==============================] - 6s 161ms/step - loss: 0.0010
Epoch 197/200
38/38 [==============================] - 6s 170ms/step - loss: 9.3667e-04
Epoch 198/200
38/38 [==============================] - 6s 163ms/step - loss: 8.0850e-04
Epoch 199/200
38/38 [==============================] - 6s 166ms/step - loss: 8.0726e-04
Epoch 200/200
38/38 [==============================] - 6s 162ms/step - loss: 8.2880e-04
###Markdown
- prepare testing set
###Code
testing_set = test_df["Open"].values.reshape(-1,1)
testing_set
###Output
_____no_output_____
###Markdown
- To predict a month of prices (20 working days, the same size as the testing set), each prediction needs the previous 60 days' values as input features: 60 inputs produce the 61st value (60+1).
- So we join the train and test dataframes to create the full dataset.
###Code
total_dataset = pd.concat([train_df["Open"],test_df["Open"]],axis=0)
inputs_to_model = total_dataset[len(total_dataset) - len(testing_set) - window:].values
inputs_to_model = inputs_to_model.reshape(-1,1)
inputs_to_model = normalizer.transform(inputs_to_model)
x_test = []
upper_limit = window + len(testing_set)
for i in range(window,upper_limit):
    x_test.append(inputs_to_model[i-window:i,0])
x_test = np.array(x_test)
x_test = np.reshape(x_test, (x_test.shape[0],x_test.shape[1],1))
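# x_test now has shape (len(testing_set), window, 1), matching the
# (timesteps, features) input shape the LSTM was trained with.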
predicted_values = regressor.predict(x_test)
predicted_values = normalizer.inverse_transform(predicted_values)
fig = plt.figure(figsize=(12,8))
plt.plot(test_df["Date"],testing_set,label="real stock price")
plt.plot(test_df["Date"],predicted_values,label="predicted stock price")
plt.legend(loc="best")
plt.title("Comparison in prices")
plt.show()
###Output
_____no_output_____
###Markdown
###Code
# Loading data from pycaret
from pycaret.datasets import get_data
df = get_data('diabetes')
df.head()
df.dtypes
target = df.pop('Class variable')
import pandas as pd
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from math import floor, ceil
from pylab import rcParams
from tensorflow import keras
%matplotlib inline
dataset = tf.data.Dataset.from_tensor_slices((df.values, target.values))
for feat, targ in dataset.take(5):
print ('Features: {}, Target: {}'.format(feat, targ))
train_dataset = dataset.shuffle(len(df)).batch(1)
#simple neural network
def get_compiled_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
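# Note: the final Dense(1) layer has no activation, so the model outputs raw
# logits; BinaryCrossentropy(from_logits=True) applies the sigmoid internally.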
model = get_compiled_model()
model.fit(train_dataset, epochs=15)
import collections
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
#LSTM RNN
model = tf.keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
model = get_compiled_model()
model.fit(train_dataset, epochs=200)
# GRU RNN
model = tf.keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
model = get_compiled_model()
model.fit(train_dataset, epochs=200)
model = tf.keras.Sequential()
model.add(layers.Bidirectional(layers.LSTM(64, return_sequences=True),
input_shape=(5, 10)))
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
model = get_compiled_model()
model.fit(train_dataset, epochs=200)
###Output
Epoch 1/200
WARNING:tensorflow:Layer dense_12 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
768/768 [==============================] - 1s 1ms/step - loss: 9.2621 - accuracy: 0.5312
Epoch 2/200
768/768 [==============================] - 1s 1ms/step - loss: 0.7846 - accuracy: 0.6419
Epoch 3/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6848 - accuracy: 0.6641
Epoch 4/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6855 - accuracy: 0.6693
Epoch 5/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6830 - accuracy: 0.6484
Epoch 6/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6533 - accuracy: 0.6576
Epoch 7/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6827 - accuracy: 0.6641
Epoch 8/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6517 - accuracy: 0.6602
Epoch 9/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6313 - accuracy: 0.6693
Epoch 10/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6356 - accuracy: 0.6849
Epoch 11/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6390 - accuracy: 0.6732
Epoch 12/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6198 - accuracy: 0.6888
Epoch 13/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6285 - accuracy: 0.6888
Epoch 14/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6141 - accuracy: 0.6888
Epoch 15/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6246 - accuracy: 0.6901
Epoch 16/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6182 - accuracy: 0.6927
Epoch 17/200
768/768 [==============================] - 1s 1ms/step - loss: 0.6003 - accuracy: 0.6849
Epoch 18/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5819 - accuracy: 0.6979
Epoch 19/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5874 - accuracy: 0.6914
Epoch 20/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5722 - accuracy: 0.7057
Epoch 21/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5828 - accuracy: 0.7057
Epoch 22/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5804 - accuracy: 0.6992
Epoch 23/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5639 - accuracy: 0.7122
Epoch 24/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5644 - accuracy: 0.7214
Epoch 25/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5763 - accuracy: 0.7188
Epoch 26/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5797 - accuracy: 0.7214
Epoch 27/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5604 - accuracy: 0.7109
Epoch 28/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5541 - accuracy: 0.7174
Epoch 29/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5558 - accuracy: 0.7161
Epoch 30/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5437 - accuracy: 0.7279
Epoch 31/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5509 - accuracy: 0.7279
Epoch 32/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5484 - accuracy: 0.7070
Epoch 33/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5428 - accuracy: 0.7240
Epoch 34/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5528 - accuracy: 0.7188
Epoch 35/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5447 - accuracy: 0.7292
Epoch 36/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5341 - accuracy: 0.7253
Epoch 37/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5310 - accuracy: 0.7305
Epoch 38/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5375 - accuracy: 0.7044
Epoch 39/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5379 - accuracy: 0.7214
Epoch 40/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5359 - accuracy: 0.7370
Epoch 41/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5232 - accuracy: 0.7161
Epoch 42/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5269 - accuracy: 0.7409
Epoch 43/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5272 - accuracy: 0.7318
Epoch 44/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5331 - accuracy: 0.7357
Epoch 45/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5265 - accuracy: 0.7370
Epoch 46/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5206 - accuracy: 0.7188
Epoch 47/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5185 - accuracy: 0.7357
Epoch 48/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5215 - accuracy: 0.7331
Epoch 49/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5215 - accuracy: 0.7240
Epoch 50/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5114 - accuracy: 0.7279
Epoch 51/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5160 - accuracy: 0.7435
Epoch 52/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5193 - accuracy: 0.7370
Epoch 53/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5147 - accuracy: 0.7318
Epoch 54/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5097 - accuracy: 0.7487
Epoch 55/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5183 - accuracy: 0.7318
Epoch 56/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5142 - accuracy: 0.7448
Epoch 57/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5060 - accuracy: 0.7513
Epoch 58/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5161 - accuracy: 0.7279
Epoch 59/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5107 - accuracy: 0.7448
Epoch 60/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5063 - accuracy: 0.7409
Epoch 61/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5087 - accuracy: 0.7461
Epoch 62/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5002 - accuracy: 0.7474
Epoch 63/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5005 - accuracy: 0.7500
Epoch 64/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5094 - accuracy: 0.7422
Epoch 65/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5053 - accuracy: 0.7396
Epoch 66/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4919 - accuracy: 0.7539
Epoch 67/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4969 - accuracy: 0.7526
Epoch 68/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4979 - accuracy: 0.7422
Epoch 69/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5044 - accuracy: 0.7448
Epoch 70/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5019 - accuracy: 0.7461
Epoch 71/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4969 - accuracy: 0.7461
Epoch 72/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4980 - accuracy: 0.7422
Epoch 73/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5038 - accuracy: 0.7500
Epoch 74/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4960 - accuracy: 0.7448
Epoch 75/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4936 - accuracy: 0.7578
Epoch 76/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4854 - accuracy: 0.7422
Epoch 77/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5027 - accuracy: 0.7487
Epoch 78/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4976 - accuracy: 0.7526
Epoch 79/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4852 - accuracy: 0.7552
Epoch 80/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4924 - accuracy: 0.7552
Epoch 81/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4903 - accuracy: 0.7396
Epoch 82/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4865 - accuracy: 0.7552
Epoch 83/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4943 - accuracy: 0.7487
Epoch 84/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4868 - accuracy: 0.7578
Epoch 85/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4877 - accuracy: 0.7591
Epoch 86/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4865 - accuracy: 0.7565
Epoch 87/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4863 - accuracy: 0.7552
Epoch 88/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4876 - accuracy: 0.7539
Epoch 89/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4814 - accuracy: 0.7617
Epoch 90/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4806 - accuracy: 0.7552
Epoch 91/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4838 - accuracy: 0.7591
Epoch 92/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4850 - accuracy: 0.7552
Epoch 93/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4762 - accuracy: 0.7786
Epoch 94/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4794 - accuracy: 0.7591
Epoch 95/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4903 - accuracy: 0.7708
Epoch 96/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4929 - accuracy: 0.7552
Epoch 97/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4856 - accuracy: 0.7630
Epoch 98/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4827 - accuracy: 0.7604
Epoch 99/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4860 - accuracy: 0.7539
Epoch 100/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4792 - accuracy: 0.7565
Epoch 101/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4848 - accuracy: 0.7643
Epoch 102/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4734 - accuracy: 0.7656
Epoch 103/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4753 - accuracy: 0.7591
Epoch 104/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4923 - accuracy: 0.7500
Epoch 105/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4817 - accuracy: 0.7617
Epoch 106/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4845 - accuracy: 0.7578
Epoch 107/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4711 - accuracy: 0.7708
Epoch 108/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4732 - accuracy: 0.7682
Epoch 109/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4818 - accuracy: 0.7643
Epoch 110/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4731 - accuracy: 0.7682
Epoch 111/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5103 - accuracy: 0.7604
Epoch 112/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4768 - accuracy: 0.7682
Epoch 113/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4759 - accuracy: 0.7656
Epoch 114/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4714 - accuracy: 0.7682
Epoch 115/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4715 - accuracy: 0.7721
Epoch 116/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4780 - accuracy: 0.7578
Epoch 117/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4817 - accuracy: 0.7513
Epoch 118/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4855 - accuracy: 0.7604
Epoch 119/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4815 - accuracy: 0.7565
Epoch 120/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4730 - accuracy: 0.7747
Epoch 121/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4752 - accuracy: 0.7578
Epoch 122/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4779 - accuracy: 0.7578
Epoch 123/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4760 - accuracy: 0.7539
Epoch 124/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4741 - accuracy: 0.7643
Epoch 125/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4673 - accuracy: 0.7643
Epoch 126/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4755 - accuracy: 0.7604
Epoch 127/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4830 - accuracy: 0.7591
Epoch 128/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4779 - accuracy: 0.7578
Epoch 129/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4764 - accuracy: 0.7656
Epoch 130/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4757 - accuracy: 0.7656
Epoch 131/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4750 - accuracy: 0.7643
Epoch 132/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4706 - accuracy: 0.7695
Epoch 133/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4707 - accuracy: 0.7708
Epoch 134/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4598 - accuracy: 0.7643
Epoch 135/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4681 - accuracy: 0.7760
Epoch 136/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4671 - accuracy: 0.7721
Epoch 137/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4610 - accuracy: 0.7669
Epoch 138/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4619 - accuracy: 0.7708
Epoch 139/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4667 - accuracy: 0.7695
Epoch 140/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4544 - accuracy: 0.7708
Epoch 141/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5003 - accuracy: 0.7604
Epoch 142/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4715 - accuracy: 0.7682
Epoch 143/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4595 - accuracy: 0.7565
Epoch 144/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4628 - accuracy: 0.7604
Epoch 145/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4563 - accuracy: 0.7643
Epoch 146/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4633 - accuracy: 0.7747
Epoch 147/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4608 - accuracy: 0.7604
Epoch 148/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4842 - accuracy: 0.7721
Epoch 149/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4540 - accuracy: 0.7760
Epoch 150/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4583 - accuracy: 0.7747
Epoch 151/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4599 - accuracy: 0.7747
Epoch 152/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4597 - accuracy: 0.7747
Epoch 153/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4573 - accuracy: 0.7747
Epoch 154/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4497 - accuracy: 0.7812
Epoch 155/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4623 - accuracy: 0.7721
Epoch 156/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4516 - accuracy: 0.7839
Epoch 157/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5044 - accuracy: 0.7448
Epoch 158/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4657 - accuracy: 0.7578
Epoch 159/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4750 - accuracy: 0.7604
Epoch 160/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4649 - accuracy: 0.7656
Epoch 161/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4521 - accuracy: 0.7812
Epoch 162/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4556 - accuracy: 0.7734
Epoch 163/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4599 - accuracy: 0.7630
Epoch 164/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4632 - accuracy: 0.7617
Epoch 165/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4605 - accuracy: 0.7669
Epoch 166/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4605 - accuracy: 0.7734
Epoch 167/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4628 - accuracy: 0.7695
Epoch 168/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4678 - accuracy: 0.7669
Epoch 169/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4781 - accuracy: 0.7643
Epoch 170/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4649 - accuracy: 0.7773
Epoch 171/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4490 - accuracy: 0.7799
Epoch 172/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4626 - accuracy: 0.7799
Epoch 173/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4564 - accuracy: 0.7760
Epoch 174/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4502 - accuracy: 0.7682
Epoch 175/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4780 - accuracy: 0.7695
Epoch 176/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4522 - accuracy: 0.7799
Epoch 177/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4537 - accuracy: 0.7695
Epoch 178/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4433 - accuracy: 0.7891
Epoch 179/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4565 - accuracy: 0.7747
Epoch 180/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4599 - accuracy: 0.7721
Epoch 181/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4515 - accuracy: 0.7826
Epoch 182/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4562 - accuracy: 0.7786
Epoch 183/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4451 - accuracy: 0.7904
Epoch 184/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4606 - accuracy: 0.7682
Epoch 185/200
768/768 [==============================] - 1s 1ms/step - loss: 0.5112 - accuracy: 0.7682
Epoch 186/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4572 - accuracy: 0.7591
Epoch 187/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4562 - accuracy: 0.7682
Epoch 188/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4577 - accuracy: 0.7760
Epoch 189/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4471 - accuracy: 0.7721
Epoch 190/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4483 - accuracy: 0.7747
Epoch 191/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4550 - accuracy: 0.7747
Epoch 192/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4468 - accuracy: 0.7826
Epoch 193/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4612 - accuracy: 0.7734
Epoch 194/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4502 - accuracy: 0.7708
Epoch 195/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4643 - accuracy: 0.7721
Epoch 196/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4556 - accuracy: 0.7760
Epoch 197/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4508 - accuracy: 0.7760
Epoch 198/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4683 - accuracy: 0.7656
Epoch 199/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4378 - accuracy: 0.7839
Epoch 200/200
768/768 [==============================] - 1s 1ms/step - loss: 0.4586 - accuracy: 0.7708
###Markdown
Embedding

A fully connected layer that takes a one-hot vector as input. Given the index $\rm idx$ at which the one-hot vector equals $1$, it extracts the corresponding row of the weight matrix:

$${\bf h} = {\bf W}_{{\rm idx}, *}$$

Sizes

| Variable | Name | Size |
|:---:|:---:|:---:|
| - | one-hot vector | `(n_alphabet)` |
| ${\rm idx}$ | index | `(1)` |
| ${\bf h}$ | embedding vector | `(embed_dim)` |
| ${\bf W}$ | weights | `(n_alphabet, embed_dim)` |
###Code
n_alphabet = 10; embed_dim = 3
embedding = nn.Embedding(n_alphabet, embed_dim)
idxs = torch.LongTensor([[0,2,4,5],[4,0,2,9]])
print(idxs)
embed = embedding(idxs)
print(embed)
# zero_padding
PAD = 0
embedding = nn.Embedding(n_alphabet, embed_dim, padding_idx=PAD)
embed = embedding(idxs)
print(embed)
###Output
tensor([[0, 2, 4, 5],
[4, 0, 2, 9]])
tensor([[[ 1.6995, 0.9900, -0.4197],
[-0.6903, -1.0622, 0.0646],
[-0.7134, 0.3108, 0.2643],
[-0.6781, -0.6527, 0.7753]],
[[-0.7134, 0.3108, 0.2643],
[ 1.6995, 0.9900, -0.4197],
[-0.6903, -1.0622, 0.0646],
[ 0.7821, 0.2614, -3.2782]]], grad_fn=<EmbeddingBackward>)
tensor([[[ 0.0000, 0.0000, 0.0000],
[ 0.0819, 0.0736, 0.9592],
[-0.2574, 0.0104, 1.1147],
[-0.8644, 0.2818, 1.0298]],
[[-0.2574, 0.0104, 1.1147],
[ 0.0000, 0.0000, 0.0000],
[ 0.0819, 0.0736, 0.9592],
[-0.2122, 1.0935, -0.4656]]], grad_fn=<EmbeddingBackward>)
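###Markdown
To make the rule ${\bf h} = {\bf W}_{{\rm idx}, *}$ concrete, here is a minimal sketch (sizes chosen only for illustration) checking that an `nn.Embedding` lookup is nothing more than selecting rows of its weight matrix.
###Code
import torch
import torch.nn as nn

# An embedding lookup returns the rows of the weight matrix W at the given indices.
n_alphabet, embed_dim = 10, 3
emb = nn.Embedding(n_alphabet, embed_dim)        # W: (n_alphabet, embed_dim)
idxs = torch.LongTensor([0, 2, 4])
print(torch.equal(emb(idxs), emb.weight[idxs]))  # True: h = W[idx, :]
###Output
_____no_output_____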
###Markdown
LSTM

Using the input ${\bf x}_t$ `(in_dim)` and the previous hidden state ${\bf h}_{t-1}$ `(hid_dim)`, the LSTM updates the memory cell ${\bf c}_{t-1}$ `(hid_dim)` and the hidden state ${\bf h}_{t}$ `(hid_dim)`:

$${\bf c}_t = {\bf f}_t{\bf c}_{t-1} + {\bf i}_t{\bf g}_t\\{\bf h}_t = {\bf o}_t \tanh\left({\bf c}_t\right)$$

The forget gate ${\bf f}_t$ decides how much of the previous memory cell ${\bf c}_{t-1}$ is kept:

$${\bf f}_t = \sigma\left( {\bf W}_{if}{\bf x}_t + {\bf W}_{hf}{\bf h}_{t-1} \right)$$

The candidate ${\bf g}_t$ to be added to the memory cell, and the input gate ${\bf i}_t$ that decides how much of it is taken in:

$${\bf i}_t = \sigma\left( {\bf W}_{ii}{\bf x}_t + {\bf W}_{hi}{\bf h}_{t-1} \right)\\{\bf g}_t = \tanh\left( {\bf W}_{ig}{\bf x}_t + {\bf W}_{hg}{\bf h}_{t-1} \right)$$

The output gate ${\bf o}_t$ decides how much of the memory cell content $\tanh\left({\bf c}_t\right)$ is exposed in the hidden state ${\bf h}_{t}$:

$${\bf o}_t = \sigma\left( {\bf W}_{io}{\bf x}_t + {\bf W}_{ho}{\bf h}_{t-1} \right)$$

Sizes

| Variable | Name | Size |
|:---:|:---:|:---:|
| ${\bf x}$ | input | `(in_dim)` per step, `(seq, batch_size, in_dim)` as a sequence |
| ${\bf h}$ | hidden state | `(hid_dim)` |
| ${\bf c}$ | memory cell | `(hid_dim)` |
| ${\bf W}_{i*}$ | weights applied to the input | `(hid_dim, in_dim)` |
| ${\bf W}_{h*}$ | weights applied to the hidden state | `(hid_dim, hid_dim)` |
| ${\bf i, g, f, o}$ | gates | `(hid_dim)` |
###Code
in_dim = 3; hid_dim = 3; batch_size = 5;
lstm = nn.LSTM(in_dim, hid_dim)
inputs = [torch.randn(1, in_dim) for _ in range(batch_size)]
inputs = torch.cat(inputs).view(len(inputs), -1, in_dim) # seq, batch, feature(default: batch_first=False)
print(inputs)
# initialize the cell hidden state.
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3))
out, hidden = lstm(inputs, hidden)
print(out) # each hidden
print(hidden) # (last cell, last hidden)
###Output
tensor([[[-0.3707, 1.6203, 0.9399]],
[[ 1.1986, -0.2543, -0.2785]],
[[-0.1050, -2.2704, -1.5767]],
[[ 0.4714, 0.3945, 1.2863]],
[[-1.3864, 0.2574, -0.0920]]])
tensor([[[-0.4008, -0.0932, 0.2109]],
[[-0.3955, 0.1253, 0.1145]],
[[-0.1316, 0.1825, -0.0205]],
[[-0.5473, 0.1051, 0.1221]],
[[-0.0443, 0.1136, 0.1150]]], grad_fn=<CatBackward>)
(tensor([[[-0.0443, 0.1136, 0.1150]]], grad_fn=<ViewBackward>), tensor([[[-0.1209, 0.2656, 0.2066]]], grad_fn=<ViewBackward>))
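###Markdown
As a complement to the `nn.LSTM` demo above, here is a minimal sketch of a single LSTM step written directly from the gate equations (random illustrative weights; biases omitted, matching the equations as written).
###Code
import torch

in_dim, hid_dim = 3, 3
torch.manual_seed(0)
# One weight pair (W_{i*}, W_{h*}) per gate: input i, candidate g, forget f, output o.
W_x = {g: torch.randn(hid_dim, in_dim) for g in "igfo"}
W_h = {g: torch.randn(hid_dim, hid_dim) for g in "igfo"}

x_t = torch.randn(in_dim)      # input at time t
h_prev = torch.zeros(hid_dim)  # h_{t-1}
c_prev = torch.zeros(hid_dim)  # c_{t-1}

f_t = torch.sigmoid(W_x["f"] @ x_t + W_h["f"] @ h_prev)  # forget gate
i_t = torch.sigmoid(W_x["i"] @ x_t + W_h["i"] @ h_prev)  # input gate
g_t = torch.tanh(W_x["g"] @ x_t + W_h["g"] @ h_prev)     # candidate cell content
o_t = torch.sigmoid(W_x["o"] @ x_t + W_h["o"] @ h_prev)  # output gate

c_t = f_t * c_prev + i_t * g_t  # c_t = f_t c_{t-1} + i_t g_t
h_t = o_t * torch.tanh(c_t)     # h_t = o_t tanh(c_t)
print(c_t, h_t)
###Output
_____no_output_____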
###Markdown
When using Google Colab, upload the compressed datasets and run the cell below to decompress them.

---
###Code
# import zipfile
# with zipfile.ZipFile("processed-data-labeled.zip","r") as zip_ref:
# zip_ref.extractall("./")
# with zipfile.ZipFile("processed-data-unlabeled.zip","r") as zip_ref:
# zip_ref.extractall("./")
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import csv
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense, Activation, Flatten, Input, Concatenate
from tensorflow.keras.losses import BinaryCrossentropy
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Global hyperparameters and constants
###Code
threshold = 0.01
hidden_size = 100
word_embedding_dim = 300
epochs = 10
use_unlabeled_dataset = True
labeled_dataset_size = 1830
train_dataset_size = 900
validation_dataset_size = 100
test_dataset_size = 830
unlabeled_dataset_size = 4000
pos_list = np.char.lower(["ADJ","ADP","ADV","AUX","CONJ","DET","INTJ","NOUN","NUM","PART","PRON","PROPN","PUNCT","SCONJ","SYM","VERB","X"])
dep_list = np.char.lower(["ROOT", "acl", "acomp", "advcl", "advmod", "agent", "amod", "appos", "attr", "aux", "auxpass", "case", "cc", "ccomp", "compound", "conj", "csubj", "csubjpass", "dative", "dep", "det", "dobj", "expl", "intj", "mark", "meta", "neg", "nmod", "npadvmod", "nsubj", "nsubjpass", "nummod", "oprd", "parataxis", "pcomp", "pobj", "poss", "preconj", "predet", "prep", "prt", "punct", "quantmod", "relcl", "xcomp"])
pos_dim = len(pos_list)
dep_dim = len(dep_list)
###Output
_____no_output_____
###Markdown
Load Data
###Code
# Helper function to one-hot encode the labels
def one_hot(vec, dic):
vec = np.char.lower(vec)
return np.array([dic == row for row in vec], dtype='i1')
labeled_dataset = []
unlabeled_dataset = []
train_dataset = []
validation_dataset = []
test_dataset = []
ul_dataset = []
# Read labeled dataset
for i in range(1, labeled_dataset_size + 1):
filename = "processed-data-labeled/processed-labeled-tweet-{}.csv".format(i)
if os.path.exists(filename):
with open(filename, newline='') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')
data = [tuple(x) for x in spamreader]
data = np.array(data, dtype=([("text", 'U20'),("simplified_text", 'U20'), ("best_match", 'U20'), ("index", int), ("pos", 'U20'), ("dep", 'U20'), ("stop", 'U5'), ("label", 'i1')]))
if len(data):
labeled_dataset.append(data)
# Reshape labeled dataset
for i in range(len(labeled_dataset)):
tweet = labeled_dataset[i]
text = tf.reshape(tweet["index"], (1, -1, 1))
pos = tf.reshape(one_hot(tweet["pos"], pos_list), (1, -1, pos_dim))
dep = tf.reshape(one_hot(tweet["dep"], dep_list), (1, -1, dep_dim))
label = tf.reshape(tf.one_hot(tweet["label"], 2), (1, -1, 2))
train_dataset.append((np.concatenate((text, pos, dep), axis=-1), label))
# Read unlabeled dataset
for i in range(1, unlabeled_dataset_size + 1):
filename = "processed-data-unlabeled/processed-unlabeled-tweet-{}.csv".format(i)
if os.path.exists(filename):
with open(filename, newline='') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')
data = [tuple(x) for x in spamreader]
data = np.array(data, dtype=([("text", 'U20'),("simplified_text", 'U20'), ("best_match", 'U20'), ("index", int), ("pos", 'U20'), ("dep", 'U20'), ("stop", 'U5')]))
if len(data):
unlabeled_dataset.append(data)
# Reshape unlabeled dataset
for i in range(len(unlabeled_dataset)):
tweet = unlabeled_dataset[i]
text = tf.reshape(tweet["index"], (1, -1, 1))
pos = tf.reshape(one_hot(tweet["pos"], pos_list), (1, -1, pos_dim))
dep = tf.reshape(one_hot(tweet["dep"], dep_list), (1, -1, dep_dim))
ul_dataset.append(np.concatenate((text, pos, dep), axis=-1))
# Split labeled dataset
validation_dataset = train_dataset[901:1001]
test_dataset = train_dataset[1001:1101]
train_dataset = train_dataset[:901]
used = np.zeros(len(ul_dataset))
###Output
_____no_output_____
###Markdown
Define RNN Model
###Code
inputs = Input(shape=(None, pos_dim+dep_dim+1))
x = Embedding(380000, word_embedding_dim)(inputs[:,:,0])
x = Concatenate(axis=-1)([inputs[:,:,1:], x])
x = Bidirectional(LSTM(100, return_sequences=True))(x)
outputs = Dense(2, activation=tf.nn.sigmoid)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()
opt = tf.keras.optimizers.Adam(
learning_rate=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False
)
model.compile(loss=BinaryCrossentropy(), optimizer=opt)
# Define precision, recall and F-1 metrics
def metrics(prediction, label):
prediction = (prediction > 0.5)[0,:,1]
label = label[0,:,1]
true_positive = np.sum(np.logical_and((prediction == label), (prediction == True)))
false_positive = np.sum(np.logical_and((prediction != label), (prediction == True)))
false_negative = np.sum(np.logical_and((prediction != label), (prediction == False)))
precision = 0 if true_positive == 0 else true_positive / (true_positive + false_positive)
recall = 0 if true_positive == 0 else true_positive / (true_positive + false_negative)
f1 = 0 if true_positive == 0 else 2 / (1 / precision + 1 / recall)
return precision, recall, f1
###Output
_____no_output_____
###Markdown
Training
###Code
def train_step(tweet):
x,y = tweet[0], tweet[1]
model.fit(x, y, verbose=0)
prediction = predict_step(x)
return metrics(prediction, y)
def eval_step(tweet):
x,y = tweet[0], tweet[1]
prediction = predict_step(x)
return metrics(prediction, y)
def predict_step(tweet):
return model.predict(tweet)
def neg_log(prediction):
return -np.mean(np.log(np.amax(prediction, axis=-1)))
train_p, train_r, train_f = [], [], []
val_p, val_r, val_f = [], [], []
def semi_supervised():
for epoch in range(epochs):
print("Training epoch {}".format(epoch+1))
count = 0
total = 0
precision, recall, f1 = 0, 0, 0
for tweet in train_dataset:
if count % 100 == 0:
print("Training iter {}".format(count))
p, r, f = train_step(tweet)
total += tweet[0].shape[1]
precision += p * tweet[0].shape[1]
recall += r * tweet[0].shape[1]
f1 += f * tweet[0].shape[1]
count += 1
train_p.append(precision / total)
train_r.append(recall / total)
train_f.append(f1 / total)
print("Validation")
total = 0
precision, recall, f1 = 0, 0, 0
for tweet in validation_dataset:
p, r, f = eval_step(tweet)
total += tweet[0].shape[1]
precision += p * tweet[0].shape[1]
recall += r * tweet[0].shape[1]
f1 += f * tweet[0].shape[1]
if len(val_f) == 0 or f1 / total > np.amax(val_f):
model.save_weights('./checkpoint/checkpoint')
val_p.append(precision / total)
val_r.append(recall / total)
val_f.append(f1 / total)
if use_unlabeled_dataset == True:
print("Enlarging training set")
added = 0
for i in range(len(ul_dataset)):
tweet = ul_dataset[i]
if used[i] == False:
prediction = predict_step(tweet)
if neg_log(prediction) < threshold:
label = prediction > 0.5
train_dataset.append((tweet, label))
used[i] = True
added += 1
print("Added {} data points to training set".format(added))
print(train_p, train_r, train_f)
print(val_p, val_r, val_f)
model.load_weights('./checkpoint/checkpoint')
for tweet in test_dataset:
p, r, f = eval_step(tweet)
total += tweet[0].shape[1]
precision += p * tweet[0].shape[1]
recall += r * tweet[0].shape[1]
f1 += f * tweet[0].shape[1]
print("Test Dataset precision = {}, recall = {}, f1 = {}".format(precision / total, recall / total, f1 / total))
semi_supervised()
###Output
Training epoch 1
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Validation
Enlarging training set
Added 0 data points to training set
[0.15264780558578164] [0.11380123322451942] [0.12452614125084198]
[0.03303303303303303] [0.026526526526526525] [0.02869536202869536]
Training epoch 2
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Validation
Enlarging training set
Added 2 data points to training set
[0.15264780558578164, 0.772076532462822] [0.11380123322451942, 0.7490660137830976] [0.12452614125084198, 0.7416506743166156]
[0.03303303303303303, 0.34634634634634637] [0.026526526526526525, 0.47122122122122123] [0.02869536202869536, 0.3703274703274703]
Training epoch 3
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Validation
Enlarging training set
Added 9 data points to training set
[0.15264780558578164, 0.772076532462822, 0.7944963937515857] [0.11380123322451942, 0.7490660137830976, 0.7730872385922948] [0.12452614125084198, 0.7416506743166156, 0.7658201173260435]
[0.03303303303303303, 0.34634634634634637, 0.26359693026359693] [0.026526526526526525, 0.47122122122122123, 0.2509175842509176] [0.02869536202869536, 0.3703274703274703, 0.2331331331331331]
Training epoch 4
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Validation
Enlarging training set
Added 25 data points to training set
[0.15264780558578164, 0.772076532462822, 0.7944963937515857, 0.8175222893298822] [0.11380123322451942, 0.7490660137830976, 0.7730872385922948, 0.7874298964624677] [0.12452614125084198, 0.7416506743166156, 0.7658201173260435, 0.787095042935207]
[0.03303303303303303, 0.34634634634634637, 0.26359693026359693, 0.37460794127460795] [0.026526526526526525, 0.47122122122122123, 0.2509175842509176, 0.3432599265932599] [0.02869536202869536, 0.3703274703274703, 0.2331331331331331, 0.33250869917536585]
Training epoch 5
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Validation
Enlarging training set
Added 158 data points to training set
[0.15264780558578164, 0.772076532462822, 0.7944963937515857, 0.8175222893298822, 0.8125852878464817] [0.11380123322451942, 0.7490660137830976, 0.7730872385922948, 0.7874298964624677, 0.7849324804548685] [0.12452614125084198, 0.7416506743166156, 0.7658201173260435, 0.787095042935207, 0.7829383693776021]
[0.03303303303303303, 0.34634634634634637, 0.26359693026359693, 0.37460794127460795, 0.298631965298632] [0.026526526526526525, 0.47122122122122123, 0.2509175842509176, 0.3432599265932599, 0.2246413079746413] [0.02869536202869536, 0.3703274703274703, 0.2331331331331331, 0.33250869917536585, 0.23810477143810477]
Training epoch 6
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Training iter 1000
Validation
Enlarging training set
Added 1100 data points to training set
[0.15264780558578164, 0.772076532462822, 0.7944963937515857, 0.8175222893298822, 0.8125852878464817, 0.7328975528054341] [0.11380123322451942, 0.7490660137830976, 0.7730872385922948, 0.7874298964624677, 0.7849324804548685, 0.7046121398219659] [0.12452614125084198, 0.7416506743166156, 0.7658201173260435, 0.787095042935207, 0.7829383693776021, 0.70591261348682]
[0.03303303303303303, 0.34634634634634637, 0.26359693026359693, 0.37460794127460795, 0.298631965298632, 0.07557557557557558] [0.026526526526526525, 0.47122122122122123, 0.2509175842509176, 0.3432599265932599, 0.2246413079746413, 0.06856856856856856] [0.02869536202869536, 0.3703274703274703, 0.2331331331331331, 0.33250869917536585, 0.23810477143810477, 0.06606606606606606]
Training epoch 7
Training iter 0
Training iter 100
Training iter 200
Training iter 300
Training iter 400
Training iter 500
Training iter 600
Training iter 700
Training iter 800
Training iter 900
Training iter 1000
Training iter 1100
Training iter 1200
Training iter 1300
Training iter 1400
Training iter 1500
Training iter 1600
Training iter 1700
Training iter 1800
Training iter 1900
Training iter 2000
Training iter 2100
Validation
Enlarging training set
Added 2059 data points to training set
[0.15264780558578164, 0.772076532462822, 0.7944963937515857, 0.8175222893298822, 0.8125852878464817, 0.7328975528054341, 0.322398406374502] [0.11380123322451942, 0.7490660137830976, 0.7730872385922948, 0.7874298964624677, 0.7849324804548685, 0.7046121398219659, 0.30773373173970786] [0.12452614125084198, 0.7416506743166156, 0.7658201173260435, 0.787095042935207, 0.7829383693776021, 0.70591261348682, 0.30910244735344333]
[0.03303303303303303, 0.34634634634634637, 0.26359693026359693, 0.37460794127460795, 0.298631965298632, 0.07557557557557558, 0.016016016016016016] [0.026526526526526525, 0.47122122122122123, 0.2509175842509176, 0.3432599265932599, 0.2246413079746413, 0.06856856856856856, 0.016016016016016016] [0.02869536202869536, 0.3703274703274703, 0.2331331331331331, 0.33250869917536585, 0.23810477143810477, 0.06606606606606606, 0.016016016016016016]
Test Dataset precision = 0.15251847599913898, recall = 0.22375690607734808, f1 = 0.17018241538583073
###Markdown
Plot Results
###Code
# Plot training scores
plt.plot(train_p)
plt.plot(train_r)
plt.plot(train_f)
plt.title('training')
plt.ylabel('score')
plt.xlabel('epoch')
plt.legend(["precision", "recall", "f1"], loc='upper left')
plt.show()
# Plot validation scores
plt.plot(val_p)
plt.plot(val_r)
plt.plot(val_f)
plt.title('validation')
plt.ylabel('score')
plt.xlabel('epoch')
plt.legend(["precision", "recall", "f1"], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Deep Learning on IBM Stocks

The Data

We chose to analyse IBM's historical stock data, which includes about 13K daily records from the last 54 years [from the year 1962 to this day].

Each record contains:

- Open price: the price at which the stock opened that day.
- Close price: the price at which the stock closed that day.
- High price: the maximum price the stock reached within the day.
- Low price: the minimum price the stock reached within the day.
- Volume: the number of shares traded within the day.
- [Adjusted close price](https://marubozu.blogspot.co.il/2006/09/how-yahoo-calculates-adjusted-closing.html).
- Date: day, month and year.

The main challenges of this project are:

- The limited amount of data in a market that is moved by a wide variety of things, in particular things we don't see in the raw data, like special announcements on new technology.
- The historic behaviour of a stock in a particular situation doesn't necessarily lead to the same outcome in the exact same situation a few years later.
- We wondered whether it is possible to actually find some features that will give us better accuracy than random.

This project is interesting because, as everybody knows, deep learning solved tasks that were considered difficult even with pretty basic deep learning features. And of course, if we find something useful when it comes to stocks, then good prediction = profit.
###Code
from pandas_datareader.data import DataReader
from datetime import datetime
import os
import pandas as pd
import random
import numpy as np
from keras.models import Sequential
from keras.layers.recurrent import LSTM,GRU,SimpleRNN
from keras.layers.core import Dense, Activation, Dropout
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
import warnings
warnings.filterwarnings('ignore')
from keras.utils.np_utils import to_categorical
###Output
Using Theano backend.
###Markdown
Load or Download the data
###Code
def get_data_if_not_exists(force=False):
if os.path.exists("./data/ibm.csv") and not force:
return pd.read_csv("./data/ibm.csv")
else:
if not os.path.exists("./data"):
os.mkdir("data")
ibm_data = DataReader('IBM', 'yahoo', datetime(1950, 1, 1), datetime.today())
pd.DataFrame(ibm_data).to_csv("./data/ibm.csv")
return pd.DataFrame(ibm_data).reset_index()
###Output
_____no_output_____
###Markdown
Data Exploration
###Code
print "loading the data"
data = get_data_if_not_exists(force=True)
print "done loading the data"
print "data columns names: %s"%data.columns.values
print data.shape
data.head()
###Output
(13744, 7)
###Markdown
Data exploration highlights:

- The data contains 13,744 records.
- Each record represents one specific day.
- Each record contains: Date, Open, High, Low, Close, Volume and Adj Close.

Creating a sequence of close prices from the stock data

Our motivation was trying to imitate a stock similar to the IBM stock.

Feature extraction:

We'll use only the closing price of the stock as our feature, and the generated sequence will include only the closing price as well.
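With `MAX_WINDOW = 5`, each training example built below pairs the closing prices of five consecutive days with the closing price of the day that follows: the input is the window $[c_{t-5},\dots,c_{t-1}]$ and the target is $c_t$.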
###Code
def extract_features(items):
return [[item[4]] for item in items]
def extract_expected_result(item):
return [item[4]]
MAX_WINDOW = 5
def train_test_split(data, test_size=0.1):
"""
This just splits data to training and testing parts
"""
ntrn = int(round(len(data) * (1 - test_size)))
X, y = generate_input_and_outputs(data,extract_features,extract_expected_result)
X_train,y_train,X_test, y_test = X[:ntrn],y[:ntrn],X[ntrn:],y[ntrn:]
return X_train, y_train, X_test, y_test
def generate_input_and_outputs(data,extractFeaturesFunc=extract_features,expectedResultFunc=extract_expected_result):
step = 1
inputs = []
outputs = []
for i in range(0, len(data) - MAX_WINDOW, step):
inputs.append(extractFeaturesFunc(data.iloc[i:i + MAX_WINDOW].as_matrix()))
outputs.append(expectedResultFunc(data.iloc[i + MAX_WINDOW].as_matrix()))
return inputs, outputs
X_train,y_train, X_test, y_test = train_test_split(data,test_size=0.15)
###Output
_____no_output_____
###Markdown
Distance metrics:

For our evaluation of the quality we used several distance metrics, defined below:

* Euclidean distance.
* Squared Euclidean distance.
* Chebyshev distance.
* Cosine distance.
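As implemented in `scipy.spatial.distance`, for the original sequence $u$ and the generated sequence $v$ these are:

$$d_{\mathrm{euclidean}}(u,v)=\sqrt{\sum_i (u_i-v_i)^2},\qquad d_{\mathrm{sqeuclidean}}(u,v)=\sum_i (u_i-v_i)^2$$

$$d_{\mathrm{chebyshev}}(u,v)=\max_i |u_i-v_i|,\qquad d_{\mathrm{cosine}}(u,v)=1-\frac{u\cdot v}{\lVert u\rVert\,\lVert v\rVert}$$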
###Code
import scipy.spatial.distance as dist
def distance_functions(generated_seq):
generated_sequence = np.asarray(generated_seq)
original_sequence = np.asarray(y_test)
print 'Euclidean distance: ', dist.euclidean(original_sequence, generated_sequence)
print 'Squared Euclidean distance: ', dist.sqeuclidean(original_sequence, generated_sequence)
print 'Chebyshev distance: ', dist.chebyshev(original_sequence, generated_sequence)
print 'Cosine distance: ', dist.cosine(original_sequence, generated_sequence)
return generated_sequence
def train_and_evaluate(model, model_name):
print 'Done building'
print 'Training...'
model.fit(X_train, y_train, batch_size=500, nb_epoch=500, validation_split=0.15,verbose=0)
print 'Generating sequence...'
generated_sequence = model.predict(X_test)
return distance_functions(generated_sequence)
###Output
_____no_output_____
###Markdown
Training and Evaluation

We tried 3 different deep-learning algorithms:

* LSTM.
* GRU.
* SimpleRNN.

For each algorithm we generated a sequence, measured its distance from the original sequence, and plotted the result against the original.
###Code
layer_output_size1 = 128
print 'Building LSTM Model'
model = Sequential()
model.add(LSTM(layer_output_size1, return_sequences=False, input_shape=(MAX_WINDOW, len(X_train[0][0]))))
model.add(Dense(len(y_train[0]), input_dim=layer_output_size1))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
LSTM_seq = train_and_evaluate(model, 'LSTM')
print '----------------------'
print 'Building SimpleRNN Model'
model = Sequential()
model.add(SimpleRNN(layer_output_size1, return_sequences=False, input_shape=(MAX_WINDOW, len(X_train[0][0]))))
model.add(Dense(len(y_train[0]), input_dim=layer_output_size1))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
SimpleRNN_seq = train_and_evaluate(model, 'SimpleRNN')
print '----------------------'
print 'Building GRU Model'
model = Sequential()
model.add(GRU(layer_output_size1, return_sequences=False, input_shape=(MAX_WINDOW, len(X_train[0][0]))))
model.add(Dense(len(y_train[0]), input_dim=layer_output_size1))
model.add(Activation("linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
GRU_seq = train_and_evaluate(model, 'GRU')
###Output
Building LSTM Model
Done building
Training...
Generating sequence...
Euclidean distance: 146.648831224
Squared Euclidean distance: 21505.8796994
Chebyshev distance: 22.0612487793
Cosine distance: 9.0914347589e-05
----------------------
Building SimpleRNN Model
Done building
Training...
Generating sequence...
Euclidean distance: 110.185439683
Squared Euclidean distance: 12140.8311182
Chebyshev distance: 17.1705474854
Cosine distance: 0.000102971857196
----------------------
Building GRU Model
Done building
Training...
Generating sequence...
Euclidean distance: 142.671323629
Squared Euclidean distance: 20355.1065861
Chebyshev distance: 20.6371765137
Cosine distance: 9.01642322843e-05
###Markdown
Graphs showing the difference between the generated sequence and the original

LSTM Sequence vs Original Sequence.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
pylab.rcParams['figure.figsize'] = (32, 6)
pylab.xlim([0,len(y_test)])
plt.plot(y_test, linewidth=1)
plt.plot(LSTM_seq, marker='o', markersize=4, linewidth=0)
plt.legend(['Original = Blue', 'LSTM = Green '], loc='best', prop={'size':20})
plt.show()
###Output
_____no_output_____
###Markdown
GRU Sequence vs Original Sequence
###Code
plt.plot(y_test, linewidth=1)
plt.plot(GRU_seq, marker='o', markersize=4, linewidth=0, c='r')
plt.legend(['Original = Blue','GRU = Red'], loc='best', prop={'size':20})
plt.show()
###Output
_____no_output_____
###Markdown
SimpleRNN Sequence vs Original Sequence.
###Code
plt.plot(y_test, linewidth=1)
plt.plot(SimpleRNN_seq, marker='o', markersize=4, linewidth=0, c='black')
plt.legend(['Original = Blue', 'SimpleRNN = Black'], loc='best', prop={'size':20})
plt.show()
###Output
_____no_output_____
###Markdown
Up / Down sequences

After generating a new sequence we wanted to try another thing: predicting up / down sequences.

Feature Extraction and Data Pre-processing

The features are:

1. Open price within the day.
1. Highest price within the day.
1. Lowest price within the day.
1. Close price within the day.
1. Adj Close.
1. Raise percentage.
1. Spread.
1. Up Spread.
1. Down Spread.
1. Absolute difference between Close and previous day close.
1. Absolute difference between Open and previous day open.
1. Absolute difference between High and previous day high.
1. Absolute difference between Low and previous day low.
1. For each day we've also added a sliding window of the `MAX_WINDOW` (5) previous days containing all of the above.
1. 1 when the stock price rose that day, 0 when it didn't.
###Code
data = get_data_if_not_exists(force=True)
for i in range(1,len(data)):
prev = data.iloc[i-1]
data.set_value(i,"prev_close",prev["Close"])
data["up/down"] = (data["Close"] - data["prev_close"]) > 0
data["raise_percentage"] = (data["Close"] - data["prev_close"])/data["prev_close"]
data["spread"] = abs(data["High"]-data["Low"])
data["up_spread"] = abs(data["High"]-data["Open"])
data["down_spread"] = abs(data["Open"]-data["Low"])
# import re
for i in range(1,len(data)):
prev = data.iloc[i-1]
data.set_value(i,"prev_open",prev["Open"])
data.set_value(i,"prev_high",prev["High"])
data.set_value(i,"prev_low",prev["Low"])
# data.set_value(i,"month",re.findall("[1-9]+", str(data.Date[i]))[2])
# data.set_value(i,"year",re.findall("[1-9]+", str(data.Date[i]))[0])
# prev = data.iloc[i-2]
# data.set_value(i,"prev_prev_open",prev["Open"])
# data.set_value(i,"prev_prev_high",prev["High"])
# data.set_value(i,"prev_prev_low",prev["Low"])
# data.set_value(i,"prev_prev_close",prev["Close"])
data["close_diff"] = abs(data["Close"] - data["prev_close"])
# data["close_diff"] = data["Close"] - data["prev_close"]
# data["close_diff"] = abs(data["Close"] / data["prev_close"])
data["open_diff"] = abs(data["Open"] - data["prev_open"])
# data["open_diff"] = data["Open"] - data["prev_open"]
# data["open_diff"] = abs(data["Open"] / data["prev_open"])
data["high_diff"] = abs(data["High"] - data["prev_high"])
# data["high_diff"] = data["High"] - data["prev_high"]
# data["high_diff"] = abs(data["High"] / data["prev_high"])
data["low_diff"] = abs(data["Low"] - data["prev_low"])
# data["low_diff"] = data["Low"] - data["prev_low"]
# data["low_diff"] = abs(data["Low"] / data["prev_low"])
# data["prev_prev_close_diff"] = (data["Close"] - data["prev_prev_close"])
# data["prev_prev_raise_percentage"] = (data["Close"] - data["prev_prev_close"])/data["prev_prev_close"]
# data["prev_prev_open_diff"] = (data["Open"] - data["prev_prev_open"])
# data["prev_prev_high_diff"] = (data["High"] - data["prev_prev_high"])
# data["prev_prev_low_diff"] = (data["Low"] - data["prev_prev_low"])
# data["open_close_mean"] = (data["Open"] + data["Close"])/2
# remove the first record because it has no previous record, so we can't tell whether it went up or down
data = data[1:]
data.describe()
MAX_WINDOW = 5
def extract_features(items):
return [[item[1], item[2], item[3], item[4],
item[5], item[6], item[9], item[10],
item[11], item[12], item[16], item[17],
item[18], item[19], 1]
if item[8]
else
[item[1], item[2], item[3], item[4],
item[5], item[6], item[9], item[10],
item[11], item[12], item[16], item[17],
item[18], item[19], 0]
for item in items]
def extract_expected_result(item):
return 1 if item[8] else 0
def generate_input_and_outputs(data):
step = 1
inputs = []
outputs = []
for i in range(0, len(data) - MAX_WINDOW, step):
inputs.append(extract_features(data.iloc[i:i + MAX_WINDOW].as_matrix()))
outputs.append(extract_expected_result(data.iloc[i + MAX_WINDOW].as_matrix()))
return inputs, outputs
print "generating model input and outputs"
X, y = generate_input_and_outputs(data)
print "done generating input and outputs"
y = to_categorical(y)
###Output
_____no_output_____
###Markdown
Splitting the data to train and test
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15)
X_train,X_validation,y_train,y_validation = train_test_split(X_train,y_train,test_size=0.15)
###Output
_____no_output_____
###Markdown
Configuration of the deep learning models
###Code
models = []
layer_output_size1 = 128
layer_output_size2 = 128
output_classes = len(y[0])
percentage_of_neurons_to_ignore = 0.2
model = Sequential()
model.add(LSTM(layer_output_size1, return_sequences=True, input_shape=(MAX_WINDOW, len(X[0][0]))))
model.add(Dropout(percentage_of_neurons_to_ignore))
model.add(LSTM(layer_output_size2, return_sequences=False))
model.add(Dropout(percentage_of_neurons_to_ignore))
model.add(Dense(output_classes))
model.add(Activation('softmax'))
model.alg_name = "LSTM"
model.compile(loss='categorical_crossentropy',metrics=['accuracy'], optimizer='rmsprop')
models.append(model)
model = Sequential()
model.add(SimpleRNN(layer_output_size1, return_sequences=True, input_shape=(MAX_WINDOW, len(X[0][0]))))
model.add(Dropout(percentage_of_neurons_to_ignore))
model.add(SimpleRNN(layer_output_size2, return_sequences=False))
model.add(Dropout(percentage_of_neurons_to_ignore))
model.add(Dense(output_classes))
model.add(Activation('softmax'))
model.alg_name = "SimpleRNN"
model.compile(loss='categorical_crossentropy',metrics=['accuracy'], optimizer='rmsprop')
models.append(model)
model = Sequential()
model.add(GRU(layer_output_size1, return_sequences=True, input_shape=(MAX_WINDOW, len(X[0][0]))))
model.add(Dropout(percentage_of_neurons_to_ignore))
model.add(GRU(layer_output_size2, return_sequences=False))
model.add(Dropout(percentage_of_neurons_to_ignore))
model.add(Dense(output_classes))
model.add(Activation('softmax'))
model.alg_name = "GRU"
model.compile(loss='categorical_crossentropy',metrics=['accuracy'], optimizer='rmsprop')
models.append(model)
###Output
_____no_output_____
###Markdown
Training
###Code
def trainModel(model):
epochs = 5
print "Training model %s"%(model.alg_name)
model.fit(X_train, y_train, batch_size=128, nb_epoch=epochs,validation_data=(X_validation,y_validation), verbose=0)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
def createSplit(model):
print 'Adding layer of DecisionTreeClassifier'
# split_model = RandomForestClassifier()
# split_model.fit(model.predict(X_validation), y_validation)
# split_model = ExtraTreesClassifier(n_estimators=15, max_depth=None, min_samples_split=2, random_state=0)
# split_model.fit(model.predict(X_validation), y_validation)
# split_model = DecisionTreeClassifier(max_depth=None, min_samples_split=1, random_state=0)
# split_model.fit(model.predict(X_validation), y_validation)
split_model = DecisionTreeClassifier()
split_model.fit(model.predict(X_validation), y_validation)
return split_model
def probabilities_to_prediction(record):
return [1,0] if record[0]>record[1] else [0,1]
def evaluateModel(model):
success, success2 = 0,0
predicts = model.predict(X_test)
split_model = createSplit(model)
for index, record in enumerate(predicts):
predicted = list(split_model.predict([np.array(record)])[0])
predicted2 = probabilities_to_prediction(record)
expected = y_test[index]
if predicted[0] == expected[0]:
success += 1
if predicted2[0] == expected[0]:
success2 += 1
accuracy = float(success) / len(predicts)
accuracy2 = float(success2) / len(predicts)
print "The Accuracy for %s is: %s" % (model.alg_name, max(accuracy2, accuracy, 1-accuracy, 1-accuracy2))
return accuracy
def train_and_evaluate():
accuracies = {}
for model in models:
trainModel(model)
acc = evaluateModel(model)
if model.alg_name not in accuracies:
accuracies[model.alg_name] = []
accuracies[model.alg_name].append(acc)
return accuracies
acc = train_and_evaluate()
###Output
Training model LSTM
Adding layer of DecisionTreeClassifier
The Accuracy for LSTM is: 0.531780688986
Training model SimpleRNN
Adding layer of DecisionTreeClassifier
The Accuracy for SimpleRNN is: 0.531780688986
Training model GRU
Adding layer of DecisionTreeClassifier
The Accuracy for GRU is: 0.531780688986
###Markdown
Naive algorithm:

We'll choose the most frequent up / down of the stock.
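In other words, the baseline accuracy is simply the share of days carrying the most frequent label:

$$\mathrm{accuracy}_{\mathrm{naive}} = \frac{\#\{\text{days with the most frequent label}\}}{\#\{\text{days}\}}$$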
###Code
all_data = data["up/down"].count()
most_frequent = data["up/down"].describe().top
frequency = data["up/down"].describe().freq
acc = float(frequency) / all_data
print 'The most frequent is: %s' % (most_frequent)
print 'The accuracy of naive algorithm is: ', acc
###Output
The most frequent is: False
The accuracy of naive algorithm is: 0.512988430474
###Markdown
Recurrent Neural Network to Train a Language Model on Borges' Work

In this notebook I will walk through the steps to train a character-level language model (similar to [Andrej Karpathy's blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)) on the work of the Argentinian author Jorge Luis Borges.
###Code
import numpy as np
import os
import sys
import theano
import theano.tensor as T
###Output
_____no_output_____
###Markdown
Prologue: Getting Borges' Work

Before starting with the recurrent neural network, we should go ahead and get the work of Borges. Naturally, the more data you have, the more precise the network will be. For this, we will start by getting some of the work of J.L.B. in plain text format.

Once we have the whole corpus, we need a way to encode each of the characters. We'll use one-hot encoding, where each character is encoded as a vector with a `1` in its corresponding position. To do so we need to collect all the distinct characters in the corpus.

Finally, we just need a method to encode every character into a one-hot vector.
###Code
class Corpus(object):
def __init__(self, corpus_path):
self.corpus = {}
self.length = 0
for fname in os.listdir(corpus_path):
fpath = os.path.join(corpus_path, fname)
with open(fpath, "r") as f:
self.corpus[fname.replace(".txt", "")] = f.read().decode("utf-8")
characters = set()
for work_name, work in self.corpus.iteritems():
for c in work:
characters.add(c)
self.length += 1
self.characters = sorted(characters)
def character_encoder(self, char):
vector = np.zeros((len(self.characters),), dtype='int64')
vector[self.characters.index(char)] = 1
return vector
def __iter__(self):
for work_name, work in self.corpus.iteritems():
for char in work:
yield self.character_encoder(char)
yield self.character_encoder(u"\n")
def __len__(self):
return self.length
corpus = Corpus("corpus/borges")
###Output
_____no_output_____
###Markdown
First Approach: Simple RNN

For our first approach, we will build a simple recurrent neural network, i.e. an RNN with a non-gated unit.

Here, we begin by setting up some parameters.
###Code
NT = len(corpus) # Number of examples (timesteps)
n_in = len(corpus.characters) # Size of the input data (one-hot vector of a character)
n_out = len(corpus.characters) # Size of the output data (one-hot vector of a character)
n_h = 50 # Size of the hidden layer
###Output
_____no_output_____
###Markdown
We continue by setting up the Theano graph for the simple recurrent neural network, starting with the shared variables that hold the model parameters.
###Code
# Stateless variables to handle the input
X = T.matrix('X')
y = T.lvector('y')
W_hx = theano.shared(
value=np.random.uniform(
low=-1.0,
high=1.0,
size=(n_in, n_h)
).astype(theano.config.floatX),
name='W_hx',
borrow=True
)
b_h = theano.shared(
value=np.zeros(n_h, dtype=theano.config.floatX),
name='b_h',
borrow=True
)
W_hh = theano.shared(
value=np.random.uniform(
low=-1.0,
high=1.0,
size=(n_h, n_h)
).astype(theano.config.floatX),
name='W_hh',
borrow=True
)
W_S = theano.shared(
value=np.random.uniform(
low=-1.0,
high=1.0,
size=(n_h, n_out)
).astype(theano.config.floatX),
name='W_S',
borrow=True
)
b_S = theano.shared(
value=np.zeros(n_out, dtype=theano.config.floatX),
name='b_S',
borrow=True
)
h0 = theano.shared(
value=np.zeros(n_h, dtype=theano.config.floatX),
name='h0',
borrow=True
)
###Output
_____no_output_____
###Markdown
Next, we define the forward propagation step. We keep all the hidden states, since backpropagation through time needs them later.
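The step below implements

$${\bf h}_t = \tanh\left({\bf x}_t {\bf W}_{hx} + {\bf h}_{t-1} {\bf W}_{hh} + {\bf b}_h\right),\qquad \hat{\bf y}_t = \mathrm{softmax}\left({\bf h}_t {\bf W}_S + {\bf b}_S\right)$$

and the model is trained with the categorical cross-entropy between $\hat{\bf y}_t$ and the next character in the corpus.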
###Code
def forward_propagation_step(x_t, h_t_prev, W_hx, W_hh, b_h, W_S, b_S):
h_t = T.tanh(T.dot(x_t, W_hx) + T.dot(h_t_prev, W_hh) + b_h)
y_t = T.nnet.softmax(T.dot(h_t, W_S) + b_S)
return [h_t, y_t]
[h, y_out], _ = theano.scan(
forward_propagation_step,
sequences=X,
outputs_info=[h0, None],
non_sequences=[W_hx, W_hh, b_h, W_S, b_S],
truncate_gradient=100,
n_steps=X.shape[0]
)
p_y_given_x = y_out[:, 0, :]
y_pred = T.argmax(p_y_given_x, axis=1)
loss = T.nnet.categorical_crossentropy(p_y_given_x, y).mean()
dWhx = T.grad(loss, wrt=W_hx)
dWhh = T.grad(loss, wrt=W_hh)
dbh = T.grad(loss, wrt=b_h)
dWS = T.grad(loss, wrt=W_S)
dbS = T.grad(loss, wrt=b_S)
forward_propagation = theano.function([X], y_out)
loss_calculation = theano.function([X, y], loss)
predict = theano.function([X], y_pred)
# bbtt = theano.function([X, y], [dWhx, dWhh, dbh, dWS, dbS])
alpha = T.scalar('alpha')
updates = [
(W_hx, W_hx - alpha * dWhx),
(W_hh, W_hh - alpha * dWhh),
(b_h, b_h - alpha * dbh),
(W_S, W_S - alpha * dWS),
(b_S, b_S - alpha * dbS)
]
gradient_step = theano.function(
inputs=[X, y, alpha],
outputs=loss,
updates=updates
)
X_train = []
y_train = []
for char in corpus:
X_train.append(char)
y_train.append(np.where(char == 1)[0][0])
X_train = np.vstack(X_train[:-1])
y_train = np.array(y_train[1:])
for i in xrange(1, 1001): # We train for 1000 epochs
    # Step through the training data in windows of 10 timesteps
    for j in xrange(0, y_train.shape[0], 10):
        gradient_step(X_train[j:j+10], y_train[j:j+10], 0.001)
if i % 50 == 0:
print >> sys.stderr, "Loss for iteration {}: {}".format(
i, loss_calculation(X_train, y_train)
)
# Generate a 1000 characters text
random_char = corpus.characters[np.random.randint(28, 82)]
characters = [(
random_char,
corpus.character_encoder(random_char)
)]
# The first character is alphabetic random
for j in xrange(1000):
    char_vectors = np.vstack([vector for char, vector in characters])
    next_char_index = predict(char_vectors)[-1]
    # Append the predicted character and feed it back in for the next step
    next_char = corpus.characters[next_char_index]
    characters.append((next_char, corpus.character_encoder(next_char)))
print u"".join(char for char, vector in characters)
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/chokkan/deeplearningclass/blob/master/rnn.ipynb)

Deep Neural Networks for structural input

Download the dataset
###Code
!wget https://download.pytorch.org/tutorial/data.zip
!unzip data.zip
###Output
Archive: data.zip
creating: data/
inflating: data/eng-fra.txt
creating: data/names/
inflating: data/names/Arabic.txt
inflating: data/names/Chinese.txt
inflating: data/names/Czech.txt
inflating: data/names/Dutch.txt
inflating: data/names/English.txt
inflating: data/names/French.txt
inflating: data/names/German.txt
inflating: data/names/Greek.txt
inflating: data/names/Irish.txt
inflating: data/names/Italian.txt
inflating: data/names/Japanese.txt
inflating: data/names/Korean.txt
inflating: data/names/Polish.txt
inflating: data/names/Portuguese.txt
inflating: data/names/Russian.txt
inflating: data/names/Scottish.txt
inflating: data/names/Spanish.txt
inflating: data/names/Vietnamese.txt
###Markdown
Normalize name spellings in the dataset
###Code
import string
import unicodedata
# Alphabet [a-zA-Z .,;']
alphabet = set(string.ascii_letters + " .,;'")
def normalize(s):
# Apply canonical decomposition, and ignore non-alphabet symbols.
return ''.join(
c for c in unicodedata.normalize('NFD', s) if c in alphabet
)
normalize('Ślusàrski')
import glob
import json
import os
data = []
srcs = glob.glob('data/names/*.txt')
for src in srcs:
lang = os.path.basename(src)[:-4]
for line in open(src):
line = line.strip('\n')
data.append((normalize(line), lang))
with open('names.json', 'w') as fo:
json.dump(data, fo)
###Output
_____no_output_____
###Markdown
Convert the string data into numerical data
###Code
def find_vocabulary(data):
X, Y = set(), set()
for (x, y) in data:
X.update(c for c in x)
Y.add(y)
return sorted(X), sorted(Y)
def build_mapping(items):
M = {}
for item in items:
M.setdefault(item, len(M))
return M
def convert_to_numeric_data(data, Xmap, Ymap):
D = []
for (x, y) in data:
D.append(([Xmap[c] for c in x], Ymap[y]))
return D
import json
data = json.load(open('names.json'))
X, Y = find_vocabulary(data)
Xmap = build_mapping(X)
Ymap = build_mapping(Y)
with open('names.data.json', 'w') as fo:
json.dump(dict(
data = convert_to_numeric_data(data, Xmap, Ymap),
X = X,
Y = Y,
), fo)
###Output
_____no_output_____
###Markdown
Install necessary modules
###Code
!pip install livelossplot
!pip install torch torchvision
###Output
Collecting torch
  Downloading https://files.pythonhosted.org/packages/69/43/380514bd9663f1bf708abeb359b8b48d3fabb1c8e95bb3427a980a064c57/torch-0.4.0-cp36-cp36m-manylinux1_x86_64.whl (484.0MB)
    100% |████████████████████████████████| 484.0MB 25kB/s
tcmalloc: large alloc 1073750016 bytes == 0x5b39e000 @  0x7fafb3ae41c4 0x46d6a4 0x5fcbcc 0x4c494d 0x54f3c4 0x553aaf 0x54e4c8 0x54f4f6 0x553aaf 0x54efc1 0x54f24d 0x553aaf 0x54efc1 0x54f24d 0x553aaf 0x54efc1 0x54f24d 0x551ee0 0x54e4c8 0x54f4f6 0x553aaf 0x54efc1 0x54f24d 0x551ee0 0x54efc1 0x54f24d 0x551ee0 0x54e4c8 0x54f4f6 0x553aaf 0x54e4c8
Collecting torchvision
  Downloading https://files.pythonhosted.org/packages/ca/0d/f00b2885711e08bd71242ebe7b96561e6f6d01fdb4b9dcf4d37e2e13c5e1/torchvision-0.2.1-py2.py3-none-any.whl (54kB)
    100% |████████████████████████████████| 61kB 1.6MB/s
Collecting pillow>=4.1.1 (from torchvision)
  Downloading https://files.pythonhosted.org/packages/d1/24/f53ff6b61b3d728b90934bddb4f03f8ab584a7f49299bf3bde56e2952612/Pillow-5.2.0-cp36-cp36m-manylinux1_x86_64.whl (2.0MB)
    100% |████████████████████████████████| 2.0MB 2.5MB/s
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.11.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.14.5)
Installing collected packages: torch, pillow, torchvision
  Found existing installation: Pillow 4.0.0
    Uninstalling Pillow-4.0.0:
      Successfully uninstalled Pillow-4.0.0
Successfully installed pillow-5.2.0 torch-0.4.0 torchvision-0.2.1
###Markdown
Implementing RNN cells (states)
###Code
import json
import random
import torch
import torch.nn as nn
import torch.optim as optim
from livelossplot import PlotLosses
class RNNCell(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNNCell, self).__init__()
self.hidden_size = hidden_size
self.f = nn.Tanh()
self.hi = nn.Linear(input_size + hidden_size, hidden_size)
self.oh = nn.Linear(hidden_size, output_size)
def forward(self, input, hidden):
new_hidden = self.f(self.hi(torch.cat((input, hidden), 0)))
new_output = self.oh(new_hidden)
return new_output, new_hidden
def initHidden(self):
return torch.zeros(self.hidden_size)
def x_to_tensor(x, input_size):
tensor = torch.zeros(len(x), input_size, dtype=torch.float)
for i, j in enumerate(x):
tensor[i][j] = 1
return tensor
def y_to_tensor(y):
tensor = torch.zeros(1, dtype=torch.long)
tensor[0] = y
return tensor
data = json.load(open('names.data.json'))
dataset = data['data']
input_size = len(data['X'])
output_size = len(data['Y'])
model = RNNCell(input_size, 128, output_size)
loss_fn = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.SGD(model.parameters(), lr=0.001)
liveloss = PlotLosses()
for t in range(10):
train_loss = 0.
num_train_correct = 0
random.shuffle(dataset)
# Training loop for every instance.
for (x, y) in dataset:
# Convert a training instance into tensors in place.
x = x_to_tensor(x, input_size)
y = y_to_tensor(y)
# Recurrent Neural Network
hidden = model.initHidden()
for xt in x:
output, hidden = model(xt, hidden)
# Make predictions with the current parameters.
y_pred = output.view(1, -1) # Reshape the output: (18) -> (1, 18)
_, predicted = torch.max(y_pred.data, 1)
num_train_correct += (predicted == y).sum().item()
# Compute the loss value.
loss = loss_fn(y_pred, y)
train_loss += loss.item()
# Update the parameters.
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Visualize accuracy values on the training set.
num_train_correct /= float(len(dataset))
liveloss.update({
'log loss': train_loss,
'accuracy': num_train_correct,
})
liveloss.draw()
print('Accuracy: {:.4f} (train)'.format(num_train_correct))
###Output
_____no_output_____
###Markdown
Using `nn.RNN` module
###Code
import json
import random
import torch
import torch.nn as nn
import torch.optim as optim
from livelossplot import PlotLosses
class SequenceRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(SequenceRNN, self).__init__()
self.hidden_size = hidden_size
self.rnn = nn.RNN(input_size, hidden_size, num_layers=1)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, input, hidden):
output, hidden = self.rnn(input, hidden)
output = self.fc(output[-1])
return output
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size)
def x_to_tensor(x, input_size):
tensor = torch.zeros(len(x), 1, input_size, dtype=torch.float)
for i, j in enumerate(x):
tensor[i][0][j] = 1 # (T, batch, input_dim)
return tensor
def y_to_tensor(y):
tensor = torch.zeros(1, dtype=torch.long)
tensor[0] = y
return tensor
data = json.load(open('names.data.json'))
dataset = data['data']
input_size = len(data['X'])
output_size = len(data['Y'])
model = SequenceRNN(input_size, 128, output_size)
loss_fn = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.SGD(model.parameters(), lr=0.001)
liveloss = PlotLosses()
for t in range(10):
train_loss = 0.
num_train_correct = 0
random.shuffle(dataset)
# Training loop for every instance.
for (x, y) in dataset:
# Convert a training instance into tensors in place.
x = x_to_tensor(x, input_size)
y = y_to_tensor(y)
# Make predictions with the current parameters.
y_pred = model(x, model.initHidden())
_, predicted = torch.max(y_pred.data, 1)
num_train_correct += (predicted == y).sum().item()
# Compute the loss value.
loss = loss_fn(y_pred, y)
train_loss += loss.item()
# Update the parameters.
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Visualize accuracy values on the training set.
num_train_correct /= float(len(dataset))
liveloss.update({
'log loss': train_loss,
'accuracy': num_train_correct,
})
liveloss.draw()
print('Accuracy: {:.4f} (train)'.format(num_train_correct))
###Output
_____no_output_____
###Markdown
Predict the nationality of a name using the trained model
###Code
def predict(name):
x = []
for c in name:
x.append(data['X'].index(c))
x = x_to_tensor(x, len(data['X']))
hidden = model.initHidden()
y_pred = nn.Softmax(dim=-1)(model(x, hidden))
scores = []
for index, lang in enumerate(data['Y']):
scores.append((lang, float(y_pred[0][index])))
return sorted(scores, key=lambda x: x[1], reverse=True)
predict('Okazaki')
###Output
_____no_output_____
###Markdown
Mini-batch RNN
###Code
import json
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
from livelossplot import PlotLosses
class MinibatchSequenceRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(MinibatchSequenceRNN, self).__init__()
self.hidden_size = hidden_size
self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
def last_timestep(self, unpacked, lengths):
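# Gather, for each sequence in the (padded) batch, the RNN output at its last real timestep.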
idx = (lengths-1).view(-1, 1).expand(
unpacked.size(0), unpacked.size(2)).unsqueeze(1)
return unpacked.gather(1, idx).squeeze()
def forward(self, input, hidden, l):
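# Pack the padded batch so the RNN skips the padding; requires sequences sorted by length (longest first).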
input = nn.utils.rnn.pack_padded_sequence(input, l, batch_first=True)
output, hidden = self.rnn(input, hidden)
output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
output = self.last_timestep(output, l)
output = self.fc(output)
return output
def initHidden(self, batch_size=1):
return torch.zeros(1, batch_size, self.hidden_size)
def create_dataset(data, X, Y):
# Sort the data by sequence length (long to short)
data.sort(key=lambda instance: len(instance[0]), reverse=True)
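# Sorting here means every mini-batch drawn in order is already length-sorted, as pack_padded_sequence expects.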
input_size = len(X)
output_size = len(Y)
max_length = len(data[0][0])
num_instances = len(data)
xt = torch.zeros(num_instances, max_length, input_size, dtype=torch.float)
yt = torch.zeros(num_instances, dtype=torch.long)
lt = torch.zeros(num_instances, dtype=torch.long)
for i, (x, y) in enumerate(data):
for t, v in enumerate(x):
xt[i][t][v] = 1
yt[i] = y
lt[i] = len(x)
return TensorDataset(xt, yt, lt)
batch_size = 32
data = json.load(open('names.data.json'))
train_set = create_dataset(data['data'], data['X'], data['Y'])
train_loader = DataLoader(train_set, batch_size=batch_size)
input_size = len(data['X'])
output_size = len(data['Y'])
model = MinibatchSequenceRNN(input_size, 128, output_size)
loss_fn = nn.CrossEntropyLoss(reduction='sum')
optimizer = optim.SGD(model.parameters(), lr=1e-3)
liveloss = PlotLosses()
for t in range(200):
train_loss = 0.
num_train_correct = 0
# Training loop for mini-batches
for batch_idx, (x, y, l) in enumerate(train_loader):
this_batch_size = len(l)
# Make predictions with the current parameters.
hidden = model.initHidden(this_batch_size)
y_pred = model(x, hidden, l)[:this_batch_size]
_, predicted = torch.max(y_pred.data, 1)
num_train_correct += (predicted == y).sum().item()
# Compute the loss value.
loss = loss_fn(y_pred, y)
train_loss += loss.item()
# Update the parameters.
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Visualize accuracy values on the training set.
num_train_correct /= float(len(train_set))
liveloss.update({
'log loss': train_loss,
'accuracy': num_train_correct,
})
liveloss.draw()
print('Accuracy: {:.4f} (train)'.format(num_train_correct))
###Output
_____no_output_____
###Markdown
Mini-batch LSTM
###Code
import json
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
from livelossplot import PlotLosses
class MinibatchSequenceRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(MinibatchSequenceRNN, self).__init__()
self.hidden_size = hidden_size
self.rnn = nn.LSTM(input_size, hidden_size, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
def last_timestep(self, unpacked, lengths):
idx = (lengths-1).view(-1, 1).expand(
unpacked.size(0), unpacked.size(2)).unsqueeze(1)
return unpacked.gather(1, idx).squeeze()
def forward(self, input, hidden, l):
input = nn.utils.rnn.pack_padded_sequence(input, l, batch_first=True)
output, hidden = self.rnn(input, hidden)
output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
output = self.last_timestep(output, l)
output = self.fc(output)
return output
def initHidden(self, batch_size=1):
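# Unlike the plain RNN/GRU, an LSTM carries two states: the hidden state h and the cell state c.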
return (
torch.zeros(1, batch_size, self.hidden_size),
torch.zeros(1, batch_size, self.hidden_size)
)
def create_dataset(data, X, Y):
# Sort the data by sequence length (long to short)
data.sort(key=lambda instance: len(instance[0]), reverse=True)
input_size = len(X)
output_size = len(Y)
max_length = len(data[0][0])
num_instances = len(data)
xt = torch.zeros(num_instances, max_length, input_size, dtype=torch.float)
yt = torch.zeros(num_instances, dtype=torch.long)
lt = torch.zeros(num_instances, dtype=torch.long)
for i, (x, y) in enumerate(data):
for t, v in enumerate(x):
xt[i][t][v] = 1
yt[i] = y
lt[i] = len(x)
return TensorDataset(xt, yt, lt)
batch_size = 32
data = json.load(open('names.data.json'))
train_set = create_dataset(data['data'], data['X'], data['Y'])
train_loader = DataLoader(train_set, batch_size=batch_size)
input_size = len(data['X'])
output_size = len(data['Y'])
model = MinibatchSequenceRNN(input_size, 128, output_size)
loss_fn = nn.CrossEntropyLoss(reduction='sum')
optimizer = optim.SGD(model.parameters(), lr=1e-3)
liveloss = PlotLosses()
for t in range(200):
train_loss = 0.
num_train_correct = 0
# Training loop for mini-batches
for batch_idx, (x, y, l) in enumerate(train_loader):
this_batch_size = len(l)
# Make predictions with the current parameters.
hidden = model.initHidden(this_batch_size)
y_pred = model(x, hidden, l)[:this_batch_size]
_, predicted = torch.max(y_pred.data, 1)
num_train_correct += (predicted == y).sum().item()
# Compute the loss value.
loss = loss_fn(y_pred, y)
train_loss += loss.item()
# Update the parameters.
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Visualize accuracy values on the training set.
num_train_correct /= float(len(train_set))
liveloss.update({
'log loss': train_loss,
'accuracy': num_train_correct,
})
liveloss.draw()
print('Accuracy: {:.4f} (train)'.format(num_train_correct))
###Output
_____no_output_____
###Markdown
Mini-batch GRU
###Code
import json
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
from livelossplot import PlotLosses
class MinibatchSequenceRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(MinibatchSequenceRNN, self).__init__()
self.hidden_size = hidden_size
self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
def last_timestep(self, unpacked, lengths):
idx = (lengths-1).view(-1, 1).expand(
unpacked.size(0), unpacked.size(2)).unsqueeze(1)
return unpacked.gather(1, idx).squeeze()
def forward(self, input, hidden, l):
input = nn.utils.rnn.pack_padded_sequence(input, l, batch_first=True)
output, hidden = self.rnn(input, hidden)
output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
output = self.last_timestep(output, l)
output = self.fc(output)
return output
def initHidden(self, batch_size=1):
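# A GRU keeps a single hidden state, so (unlike the LSTM above) initHidden returns one tensor.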
return torch.zeros(1, batch_size, self.hidden_size)
def create_dataset(data, X, Y):
# Sort the data by sequence length (long to short)
data.sort(key=lambda instance: len(instance[0]), reverse=True)
input_size = len(X)
output_size = len(Y)
max_length = len(data[0][0])
num_instances = len(data)
xt = torch.zeros(num_instances, max_length, input_size, dtype=torch.float)
yt = torch.zeros(num_instances, dtype=torch.long)
lt = torch.zeros(num_instances, dtype=torch.long)
for i, (x, y) in enumerate(data):
for t, v in enumerate(x):
xt[i][t][v] = 1
yt[i] = y
lt[i] = len(x)
return TensorDataset(xt, yt, lt)
batch_size = 32
data = json.load(open('names.data.json'))
train_set = create_dataset(data['data'], data['X'], data['Y'])
train_loader = DataLoader(train_set, batch_size=batch_size)
input_size = len(data['X'])
output_size = len(data['Y'])
model = MinibatchSequenceRNN(input_size, 128, output_size)
loss_fn = nn.CrossEntropyLoss(reduction='sum')
optimizer = optim.SGD(model.parameters(), lr=1e-3)
liveloss = PlotLosses()
for t in range(200):
train_loss = 0.
num_train_correct = 0
# Training loop for mini-batches
for batch_idx, (x, y, l) in enumerate(train_loader):
this_batch_size = len(l)
# Make predictions with the current parameters.
hidden = model.initHidden(this_batch_size)
y_pred = model(x, hidden, l)[:this_batch_size]
_, predicted = torch.max(y_pred.data, 1)
num_train_correct += (predicted == y).sum().item()
# Compute the loss value.
loss = loss_fn(y_pred, y)
train_loss += loss.item()
# Update the parameters.
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Visualize accuracy values on the training set.
num_train_correct /= float(len(train_set))
liveloss.update({
'log loss': train_loss,
'accuracy': num_train_correct,
})
liveloss.draw()
print('Accuracy: {:.4f} (train)'.format(num_train_correct))
###Output
_____no_output_____
###Markdown
Vanilla RNN (author: Graham Taylor)
###Code
import numpy as np
import theano
import theano.tensor as T
from sklearn.base import BaseEstimator
import os
import datetime
import pickle as pickle
import matplotlib.pyplot as plt
from meta_rnn import MetaRNN, RNN
def test_real():
''' Test RNN with real-valued outputs. '''
n_hidden = 10
n_in = 5
n_out = 3
n_steps = 10
n_seq = 100
np.random.seed(0)
# simple lag test
seq = np.random.randn(n_seq, n_steps, n_in).astype(theano.config.floatX)
targets = np.zeros((n_seq, n_steps, n_out))
# delay targets
delay = [1,1,2]
targets[:, delay[0]:, 0] = seq[:, :-delay[0], 3] # delayed 1
targets[:, delay[1]:, 1] = seq[:, :-delay[1], 2] # delayed 1
targets[:, delay[2]:, 2] = seq[:, :-delay[2], 0] # delayed 2
targets += 0.01 * np.random.standard_normal(targets.shape)
model = MetaRNN(n_in=n_in, n_hidden=n_hidden, n_out=n_out,
learning_rate=0.001, learning_rate_decay=0.999,
n_epochs=400, activation='tanh')
model.fit(seq, targets, validation_frequency=1000)
fig = plt.figure(figsize=(12,6))
ax1 = plt.subplot(2, 1, 1)
plt.grid(True)
plt.plot(seq[0])
ax1.set_title('input')
ax2 = plt.subplot(2, 1, 2)
true_targets = plt.plot(targets[0])
guess = model.predict(seq[0])
guessed_targets = plt.plot(guess, linestyle='--')
for i, x in enumerate(guessed_targets):
x.set_color(true_targets[i].get_color())
x.set_label('delayed %d' % delay[i])
ax2.set_title('solid: true output, dashed: model output')
ax2.grid(True)
ax2.legend(fontsize=10, framealpha=0.5)
plt.tight_layout()
#plt.savefig('doc/rnn.png')
plt.show()
def test_binary(multiple_out=False, n_epochs=250):
''' Test RNN with binary outputs. '''
n_hidden = 10
n_in = 5
if multiple_out:
n_out = 2
else:
n_out = 1
n_steps = 10
n_seq = 100
np.random.seed(0)
# simple lag test
seq = np.random.randn(n_seq, n_steps, n_in).astype(theano.config.floatX)
targets = np.zeros((n_seq, n_steps, n_out))
# whether lag 1 (dim 3) is greater than lag 2 (dim 0)
targets[:, 2:, 0] = (seq[:, 1:-1, 3] > seq[:, :-2, 0]).astype(int)
if multiple_out:
# whether product of lag 1 (dim 4) and lag 1 (dim 2)
# is less than lag 2 (dim 0)
targets[:, 2:, 1] = (
(seq[:, 1:-1, 4] * seq[:, 1:-1, 2]) > seq[:, :-2, 0]).astype(int)
model = MetaRNN(n_in=n_in, n_hidden=n_hidden, n_out=n_out,
learning_rate=0.001, learning_rate_decay=0.999,
n_epochs=n_epochs, activation='tanh', output_type='binary')
model.fit(seq, targets, validation_frequency=1000)
seqs = range(10)
for seq_num in seqs:
fig = plt.figure(figsize=(12,6))
ax1 = plt.subplot(211)
plt.plot(seq[seq_num])
ax1.set_title('input')
ax2 = plt.subplot(212)
true_targets = plt.step(range(n_steps), targets[seq_num], marker='o')
guess = model.predict_proba(seq[seq_num])
guessed_targets = plt.step(range(n_steps), guess)
plt.setp(guessed_targets, linestyle='--', marker='d')
for i, x in enumerate(guessed_targets):
x.set_color(true_targets[i].get_color())
ax2.set_ylim((-0.1, 1.1))
ax2.set_title('solid: true output, dashed: model output (prob)')
#plt.savefig('result%02d.png' % seq_num)
plt.show()
def test_softmax(n_epochs=250):
''' Test RNN with softmax outputs. '''
n_hidden = 10
n_in = 5
n_steps = 10
n_seq = 100
n_classes = 3
n_out = n_classes # restricted to single softmax per time step
np.random.seed(0)
# simple lag test
seq = np.random.randn(n_seq, n_steps, n_in)
targets = np.zeros((n_seq, n_steps), dtype=int)
thresh = 0.5
# if lag 1 (dim 3) is greater than lag 2 (dim 0) + thresh
# class 1
# if lag 1 (dim 3) is less than lag 2 (dim 0) - thresh
# class 2
# if lag 2(dim0) - thresh <= lag 1 (dim 3) <= lag2(dim0) + thresh
# class 0
targets[:, 2:][seq[:, 1:-1, 3] > seq[:, :-2, 0] + thresh] = 1
targets[:, 2:][seq[:, 1:-1, 3] < seq[:, :-2, 0] - thresh] = 2
#targets[:, 2:, 0] = np.cast[np.int](seq[:, 1:-1, 3] > seq[:, :-2, 0])
model = MetaRNN(n_in=n_in, n_hidden=n_hidden, n_out=n_out,
learning_rate=0.001, learning_rate_decay=0.999,
n_epochs=n_epochs, activation='tanh',
output_type='softmax', use_symbolic_softmax=False)
model.fit(seq, targets, validation_frequency=1000)
seqs = range(10)
for seq_num in seqs:
fig = plt.figure()
ax1 = plt.subplot(2, 1, 1)
plt.plot(seq[seq_num])
ax1.set_title('input')
ax2 = plt.subplot(2, 1, 2)
# blue line will represent true classes
true_targets = plt.step(range(n_steps), targets[seq_num], marker='o')
# show probabilities (in b/w) output by model
guess = model.predict_proba(seq[seq_num])
guessed_probs = plt.imshow(guess.T, interpolation='nearest',
cmap='gray')
ax2.set_title('blue: true class, grayscale: probs assigned by model')
if __name__ == '__main__':
test_real()
# problem takes more epochs to solve
#test_binary(multiple_out=True, n_epochs=2400)
#test_softmax(n_epochs=250)
###Output
... building the model
... training
epoch 010, train loss 0.865682 lr: 0.000991, 0.081 sec
epoch 020, train loss 0.865654 lr: 0.000981, 0.083 sec
epoch 030, train loss 0.865589 lr: 0.000971, 0.129 sec
epoch 040, train loss 0.865428 lr: 0.000962, 0.122 sec
epoch 050, train loss 0.864999 lr: 0.000952, 0.103 sec
epoch 060, train loss 0.863422 lr: 0.000943, 0.120 sec
epoch 070, train loss 0.831402 lr: 0.000933, 0.099 sec
epoch 080, train loss 0.569127 lr: 0.000924, 0.082 sec
epoch 090, train loss 0.567128 lr: 0.000915, 0.078 sec
epoch 100, train loss 0.565494 lr: 0.000906, 0.075 sec
epoch 110, train loss 0.562675 lr: 0.000897, 0.074 sec
epoch 120, train loss 0.520082 lr: 0.000888, 0.073 sec
epoch 130, train loss 0.279635 lr: 0.000879, 0.076 sec
epoch 140, train loss 0.274908 lr: 0.000870, 0.074 sec
epoch 150, train loss 0.272029 lr: 0.000862, 0.075 sec
epoch 160, train loss 0.270034 lr: 0.000853, 0.079 sec
epoch 170, train loss 0.268564 lr: 0.000844, 0.078 sec
epoch 180, train loss 0.267410 lr: 0.000836, 0.139 sec
epoch 190, train loss 0.266435 lr: 0.000828, 0.114 sec
epoch 200, train loss 0.265549 lr: 0.000819, 0.105 sec
epoch 210, train loss 0.264681 lr: 0.000811, 0.123 sec
epoch 220, train loss 0.263752 lr: 0.000803, 0.102 sec
epoch 230, train loss 0.262603 lr: 0.000795, 0.096 sec
epoch 240, train loss 0.260679 lr: 0.000787, 0.074 sec
epoch 250, train loss 0.254477 lr: 0.000779, 0.075 sec
epoch 260, train loss 0.203686 lr: 0.000772, 0.076 sec
epoch 270, train loss 0.130574 lr: 0.000764, 0.073 sec
epoch 280, train loss 0.033861 lr: 0.000756, 0.076 sec
epoch 290, train loss 0.018814 lr: 0.000749, 0.076 sec
epoch 300, train loss 0.014290 lr: 0.000741, 0.092 sec
epoch 310, train loss 0.011918 lr: 0.000734, 0.137 sec
epoch 320, train loss 0.010433 lr: 0.000727, 0.113 sec
epoch 330, train loss 0.009397 lr: 0.000720, 0.114 sec
epoch 340, train loss 0.008622 lr: 0.000712, 0.124 sec
epoch 350, train loss 0.008012 lr: 0.000705, 0.112 sec
epoch 360, train loss 0.007515 lr: 0.000698, 0.075 sec
epoch 370, train loss 0.007100 lr: 0.000691, 0.074 sec
epoch 380, train loss 0.006746 lr: 0.000684, 0.074 sec
epoch 390, train loss 0.006440 lr: 0.000678, 0.076 sec
epoch 400, train loss 0.006171 lr: 0.000671, 0.077 sec
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Recurrent Neural Networks (RNN) with Keras View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.Schematically, a RNN layer uses a `for` loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.The Keras RNN API is designed with a focus on:- **Ease of use**: the built-in `tf.keras.layers.RNN`, `tf.keras.layers.LSTM`, `tf.keras.layers.GRU` layers enable you to quickly build recurrent models without having to make difficult configuration choices. - **Ease of customization**: You can also define your own RNN cell layer (the inner part of the `for` loop) with custom behavior, and use it with the generic `tf.keras.layers.RNN` layer (the `for` loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code. Setup
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import collections
import matplotlib.pyplot as plt
import numpy as np
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
Build a simple model There are three built-in RNN layers in Keras:1. `tf.keras.layers.SimpleRNN`, a fully-connected RNN where the output from the previous timestep is fed to the next timestep.2. `tf.keras.layers.GRU`, first proposed in [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078).3. `tf.keras.layers.LSTM`, first proposed in [Long Short-Term Memory](https://www.bioinf.jku.at/publications/older/2604.pdf).In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.Here is a simple example of a `Sequential` model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using an `LSTM` layer.
###Code
model = tf.keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units and softmax activation.
model.add(layers.Dense(10, activation='softmax'))
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, None, 64) 64000
_________________________________________________________________
lstm (LSTM) (None, 128) 98816
_________________________________________________________________
dense (Dense) (None, 10) 1290
=================================================================
Total params: 164,106
Trainable params: 164,106
Non-trainable params: 0
_________________________________________________________________
###Markdown
Outputs and states By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is `(batch_size, units)` where `units` corresponds to the `units` argument passed to the layer's constructor. A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample), if you set `return_sequences=True`. The shape of this output is `(batch_size, timesteps, units)`.
###Code
model = tf.keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, None, 64) 64000
_________________________________________________________________
gru (GRU) (None, None, 256) 247296
_________________________________________________________________
simple_rnn (SimpleRNN) (None, 128) 49280
_________________________________________________________________
dense_1 (Dense) (None, 10) 1290
=================================================================
Total params: 361,866
Trainable params: 361,866
Non-trainable params: 0
_________________________________________________________________
###Markdown
In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or [to initialize another RNN](https://arxiv.org/abs/1409.3215). This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder final state is used as the initial state of the decoder.To configure a RNN layer to return its internal state, set the `return_state` parameter to `True` when creating the layer. Note that `LSTM` has 2 state tensors, but `GRU` only has one.To configure the initial state of the layer, just call the layer with additional keyword argument `initial_state`.Note that the shape of the state needs to match the unit size of the layer, like in the example below.
###Code
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None, ))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(encoder_input)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(
64, return_state=True, name='encoder')(encoder_embedded)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None, ))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(decoder_input)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(
64, name='decoder')(decoder_embedded, initial_state=encoder_state)
output = layers.Dense(10, activation='softmax')(decoder_output)
model = tf.keras.Model([encoder_input, decoder_input], output)
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, None)] 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, None)] 0
__________________________________________________________________________________________________
embedding_2 (Embedding) (None, None, 64) 64000 input_1[0][0]
__________________________________________________________________________________________________
embedding_3 (Embedding) (None, None, 64) 128000 input_2[0][0]
__________________________________________________________________________________________________
encoder (LSTM) [(None, 64), (None, 33024 embedding_2[0][0]
__________________________________________________________________________________________________
decoder (LSTM) (None, 64) 33024 embedding_3[0][0]
encoder[0][1]
encoder[0][2]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 10) 650 decoder[0][0]
==================================================================================================
Total params: 258,698
Trainable params: 258,698
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
RNN layers and RNN cells In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, the RNN cell only processes a single timestep.The cell is the inside of the `for` loop of a RNN layer. Wrapping a cell inside a `tf.keras.layers.RNN` layer gives you a layer capable of processing batches of sequences, e.g. `RNN(LSTMCell(10))`.Mathematically, `RNN(LSTMCell(10))` produces the same result as `LSTM(10)`. In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However using the built-in `GRU` and `LSTM` layers enables the use of CuDNN and you may see better performance.There are three built-in RNN cells, each of them corresponding to the matching RNN layer.- `tf.keras.layers.SimpleRNNCell` corresponds to the `SimpleRNN` layer.- `tf.keras.layers.GRUCell` corresponds to the `GRU` layer.- `tf.keras.layers.LSTMCell` corresponds to the `LSTM` layer.The cell abstraction, together with the generic `tf.keras.layers.RNN` class, makes it very easy to implement custom RNN architectures for your research. Cross-batch statefulness When processing very long sequences (possibly infinite), you may want to use the pattern of **cross-batch statefulness**.Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time.You can do this by setting `stateful=True` in the constructor.If you have a sequence `s = [t0, t1, ... t1546, t1547]`, you would split it into e.g.```s1 = [t0, t1, ... t100]s2 = [t101, ... t201]...s16 = [t1501, ... t1547]```Then you would process it via:```pythonlstm_layer = layers.LSTM(64, stateful=True)for s in sub_sequences: output = lstm_layer(s)```When you want to clear the state, you can use `layer.reset_states()`.> Note: In this setup, sample `i` in a given batch is assumed to be the continuation of sample `i` in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains `[sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100]`, the next batch should contain `[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]`.Here is a complete example:
###Code
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
###Output
_____no_output_____
###Markdown
Bidirectional RNNs For sequences other than time series (e.g. text), it is often the case that a RNN model can perform better if it not only processes sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only just the words that come before it.Keras provides an easy API for you to build such bidirectional RNNs: the `tf.keras.layers.Bidirectional` wrapper.
###Code
model = tf.keras.Sequential()
model.add(layers.Bidirectional(layers.LSTM(64, return_sequences=True),
input_shape=(5, 10)))
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bidirectional (Bidirectional (None, 5, 128) 38400
_________________________________________________________________
bidirectional_1 (Bidirection (None, 64) 41216
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 80,266
Trainable params: 80,266
Non-trainable params: 0
_________________________________________________________________
###Markdown
Under the hood, `Bidirectional` will copy the RNN layer passed in, and flip the `go_backwards` field of the newly copied layer, so that it will process the inputs in reverse order.The output of the `Bidirectional` RNN will be, by default, the concatenation of the forward layer output and the backward layer output (which is why the first bidirectional layer above reports 128 units for two 64-unit LSTMs). If you need a different merging behavior, e.g. summation, change the `merge_mode` parameter in the `Bidirectional` wrapper constructor. For more details about `Bidirectional`, please check [the API docs](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Bidirectional). Performance optimization and CuDNN kernels in TensorFlow 2.0 In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior `keras.layers.CuDNNLSTM/CuDNNGRU` layers have been deprecated, and you can build your model without worrying about the hardware it will run on.Since the CuDNN kernel is built with certain assumptions, this means the layer **will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers**. E.g.:- Changing the `activation` function from `tanh` to something else.- Changing the `recurrent_activation` function from `sigmoid` to something else.- Using `recurrent_dropout` > 0.- Setting `unroll` to True, which forces LSTM/GRU to decompose the inner `tf.while_loop` into an unrolled `for` loop.- Setting `use_bias` to False.- Using masking when the input data is not strictly right padded (if the mask corresponds to strictly right padded data, CuDNN can still be used. This is the most common case).For the detailed list of constraints, please see the documentation for the [LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM) and [GRU](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/GRU) layers. Using CuDNN kernels when available Let's build a simple LSTM model to demonstrate the performance difference.We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label.
###Code
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = tf.keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = tf.keras.layers.RNN(
tf.keras.layers.LSTMCell(units),
input_shape=(None, input_dim))
model = tf.keras.models.Sequential([
lstm_layer,
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(output_size, activation='softmax')]
)
return model
###Output
_____no_output_____
###Markdown
Load MNIST dataset
###Code
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
###Output
_____no_output_____
###Markdown
Create a model instance and compile it We choose `sparse_categorical_crossentropy` as the loss function for the model. The output of the model has shape `[batch_size, 10]`. The target for the model is an integer vector, where each integer is in the range 0 to 9.
###Code
model = build_model(allow_cudnn_kernel=True)
model.compile(loss='sparse_categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
model.fit(x_train, y_train,
validation_data=(x_test, y_test),
batch_size=batch_size,
epochs=5)
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 9s 143us/sample - loss: 0.9524 - accuracy: 0.7002 - val_loss: 0.5327 - val_accuracy: 0.8321
Epoch 2/5
60000/60000 [==============================] - 6s 94us/sample - loss: 0.3983 - accuracy: 0.8785 - val_loss: 0.2542 - val_accuracy: 0.9241
Epoch 3/5
60000/60000 [==============================] - 6s 95us/sample - loss: 0.2524 - accuracy: 0.9239 - val_loss: 0.2166 - val_accuracy: 0.9300
Epoch 4/5
60000/60000 [==============================] - 6s 95us/sample - loss: 0.1928 - accuracy: 0.9417 - val_loss: 0.1956 - val_accuracy: 0.9377
Epoch 5/5
60000/60000 [==============================] - 6s 95us/sample - loss: 0.1612 - accuracy: 0.9515 - val_loss: 0.2773 - val_accuracy: 0.9020
###Markdown
Build a new model without CuDNN kernel
###Code
slow_model = build_model(allow_cudnn_kernel=False)
slow_model.set_weights(model.get_weights())
slow_model.compile(loss='sparse_categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
slow_model.fit(x_train, y_train,
validation_data=(x_test, y_test),
batch_size=batch_size,
epochs=1) # We only train for one epoch because it's slower.
###Output
Train on 60000 samples, validate on 10000 samples
60000/60000 [==============================] - 24s 397us/sample - loss: 0.1399 - accuracy: 0.9580 - val_loss: 0.2771 - val_accuracy: 0.9058
###Markdown
As you can see, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The `tf.device` annotation below is just forcing the device placement. The model will run on CPU by default if no GPU is available.You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
###Code
with tf.device('CPU:0'):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print('Predicted result is: %s, target result is: %s' % (result.numpy(), sample_label))
plt.imshow(sample, cmap=plt.get_cmap('gray'))
###Output
Predicted result is: [5], target result is: 5
###Markdown
RNNs with list/dict inputs, or nested inputs Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:`[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]`In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:`[batch, timestep, {"location": [x, y], "pressure": [force]}]`The following code provides an example of how to build a custom RNN cell that accepts such structured inputs. Define a custom cell that supports nested input/output
###Code
NestedInput = collections.namedtuple('NestedInput', ['feature1', 'feature2'])
NestedState = collections.namedtuple('NestedState', ['state1', 'state2'])
class NestedCell(tf.keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = NestedState(state1=unit_1,
state2=tf.TensorShape([unit_2, unit_3]))
self.output_size = (unit_1, tf.TensorShape([unit_2, unit_3]))
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
input_1 = input_shapes.feature1[1]
input_2, input_3 = input_shapes.feature2[1:]
self.kernel_1 = self.add_weight(
shape=(input_1, self.unit_1), initializer='uniform', name='kernel_1')
self.kernel_2_3 = self.add_weight(
shape=(input_2, input_3, self.unit_2, self.unit_3),
initializer='uniform',
name='kernel_2_3')
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum('bij,ijkl->bkl', input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = [output_1, output_2_3]
new_states = NestedState(state1=state_1, state2=state_2_3)
return output, new_states
###Output
_____no_output_____
###Markdown
Build a RNN model with nested input/output Let's build a Keras model that uses a `tf.keras.layers.RNN` layer and the custom cell we just defined.
###Code
unit_1 = 10
unit_2 = 20
unit_3 = 30
input_1 = 32
input_2 = 64
input_3 = 32
batch_size = 64
num_batch = 100
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = tf.keras.layers.RNN(cell)
inp_1 = tf.keras.Input((None, input_1))
inp_2 = tf.keras.Input((None, input_2, input_3))
outputs = rnn(NestedInput(feature1=inp_1, feature2=inp_2))
model = tf.keras.models.Model([inp_1, inp_2], outputs)
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model with randomly generated data Since there isn't a good candidate dataset for this model, we use random NumPy data for demonstration.
###Code
input_1_data = np.random.random((batch_size * num_batch, timestep, input_1))
input_2_data = np.random.random((batch_size * num_batch, timestep, input_2, input_3))
target_1_data = np.random.random((batch_size * num_batch, unit_1))
target_2_data = np.random.random((batch_size * num_batch, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
###Output
Train on 6400 samples
6400/6400 [==============================] - 5s 836us/sample - loss: 0.3810 - rnn_1_loss: 0.1210 - rnn_1_1_loss: 0.2601 - rnn_1_accuracy: 0.0972 - rnn_1_1_accuracy: 0.0340
|
devel/test_covariance.ipynb | ###Markdown
Test Numerical Covariance Procedure
###Code
from py21cmmc_wv.likelihood import LikelihoodWaveletsMorlet
from py21cmmc.mcmc.mcmc import build_computation_chain
from py21cmmc.mcmc.core import CoreLightConeModule
import numpy as np
from numpy.linalg import slogdet
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.colors import SymLogNorm, LogNorm
from py21cmmc_wv import util
lk = LikelihoodWaveletsMorlet(nrealisations=10)
core = CoreLightConeModule(redshift=7.0, max_redshift=7.5)
chain = build_computation_chain(core, lk)
lk.covariance[0,10]
lk.covariance[:lk.n_frequency,:lk.n_frequency]
lk.covariance
6384**2 * 8 / 1024/1024/1024
plt.figure(figsize=(10,10))
plt.imshow(lk.covariance.toarray()[:1000,:1000], norm=SymLogNorm(linthresh=1e5))
plt.colorbar()
lk.computeLikelihood(lk.default_ctx, {})
###Output
_____no_output_____
###Markdown
Test Block-Diag Solvers
###Code
x = np.linspace(0,1,10)
y = np.cov(np.random.normal(size=(10, 100)))
X = np.repeat(x,10)
Y = np.zeros((100, 100))
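# Build a block-diagonal matrix: 10 copies of the 10x10 covariance block y along the diagonal of Y.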
for i in range(10):
Y[i*10:(i+1)*10, i*10:(i+1)*10] = y
np.linalg.slogdet(Y)
util.logdet_block_matrix(Y, 10)
np.linalg.solve(Y,X).T.dot(X)
util.solve_block_matrix(Y,X,10).T.dot(X)
np.sum([slogdet(lk.covariance[i*lk.n_frequency:(i+1)*lk.n_frequency, i*lk.n_frequency:(i+1)*lk.n_frequency].toarray())[1] for i in range(int(lk.covariance.shape[0]/lk.n_frequency))])
logdet_chol(lk.covariance)
cov = lk.covariance.toarray()
cov.max()
cov.min()
np.sum(cov==0)/cov.size
del cov
wvlt = []
for i in range(10):
wvlt.append(lk.simulate(lk.default_ctx)['wavelets'])
wvlt = np.array(wvlt)
wvlt.shape
covs = [np.cov(x) for x in wvlt.transpose((1,3,2,0)).reshape((-1, lk.n_frequency, lk.nrealisations))]
covs
np.all(wvlt[0] == wvlt[1])
###Output
_____no_output_____
###Markdown
Test Model Covariance
###Code
from py21cmmc import run_lightcone
from py21cmmc_wv.morlet import morlet_transform_c
from powerbox.dft import fft
import numpy as np
lc = run_lightcone(redshift=7.0, max_redshift=10.0, user_params={"HII_DIM":100})
mean_power = 0
nu = np.linspace(0,10,100)
dnu = nu[1] - nu[0]
var = 100
mean_nonfourier = 0
for i in range(1000):
fake_visibility = np.random.normal(0, np.sqrt(var/2), size=100) + 1j * np.random.normal(0, np.sqrt(var/2), size=100)
fvis, eta, nu = morlet_transform_c(fake_visibility, nu)
mean_power += np.outer(fvis[:,-10], np.conj(fvis[:,-10]))/1000
mean_nonfourier += np.mean(np.abs(fake_visibility)**2) / 1000
mean_nonfourier # should just be the var.
fvis.shape
expected_mean_power = var * dnu**2 * np.sum()
expected_mean_power = expected_mean_power.reshape((100, 100, -1))
expected_mean_power_var = var * dnu * np.sqrt(np.pi) / eta
expected_mean_power_cov = (expected_mean_power_var * np.exp(-np.outer(eta**2, np.add.outer(nu,-nu)**2) / 4).T).T.reshape((len(eta), len(nu), len(nu)))
expected_mean_power_var[-10], expected_mean_power_cov[-10, 49, 49]
mean_power[49,49]
expected_mean_power_cov[-10, 52, 44], np.abs(mean_power[52, 44])
plt.imshow(np.log10(expected_mean_power_cov[-10]/np.abs(mean_power)), vmin=-5)
plt.colorbar()
expected_mean_power = var * 2*np.pi / np.e / eta #np.exp(-np.outer(eta**2,np.add.outer(nu,-nu)**2)/4).T
np.abs(mean_power).max()
expected_mean_power[-10]
mean_power
###Output
_____no_output_____
###Markdown
Test whether the covariance is diagonal and Gaussian.
###Code
%load_ext autoreload
%autoreload 2
from py21cmmc_fg.core import CoreForegrounds, CoreInstrumental
from py21cmmc_fg.likelihood import LikelihoodForeground2D
import numpy as np
from cosmoHammer.ChainContext import ChainContext
%matplotlib inline
import matplotlib.pyplot as plt
fg_core = CoreForegrounds(
pt_source_params=dict(
S_min=1e-1,
S_max=1.0
),
diffuse_params=dict(
u0=10.0,
eta = 0.01,
rho = -2.7,
mean_temp=253e3,
kappa=-2.55
),
add_point_sources=True,
add_diffuse=False,
redshifts = 1420./np.linspace(150, 160, 30) - 1,
boxsize=300.0,
sky_cells = 150
)
instr_core = CoreInstrumental(
antenna_posfile="grid_centres",
freq_min=150.0, freq_max=160.0, nfreq=30,
tile_diameter=4.0,
max_bl_length=150.0,
Tsys=0
)
lk = LikelihoodForeground2D(datafile=None, n_psbins=50)
fg_core.setup()
instr_core.setup()
ctx = ChainContext('derp', {"a":1})
fg_core(ctx)
instr_core(ctx)
p, k = lk.computePower(ctx)
plt.imshow(np.log10(p), origin='lower', aspect='auto', extent=(k[0][0], k[0][-1], k[1][0], k[1][-1]))
plt.xscale('log')
plt.yscale('log')
plt.colorbar()
plt.imshow(ctx.get("output").lightcone_box[:,:,0])
plt.colorbar()
p = [0]*30
for i in range(30):
fg_core(ctx)
instr_core(ctx)
p[i] = lk.computePower(ctx)[0].flatten()
mean = np.mean(p,axis=0)
cov = np.cov(p)
plt.imshow(np.log10(cov.T))
cov.shape
cov = np.cov(np.array(p).T)
###Output
_____no_output_____
###Markdown
Test Numerical Covariance Procedure
###Code
from py21cmmc_wv.likelihood import LikelihoodWaveletsMorlet
from py21cmmc.mcmc.mcmc import build_computation_chain
from py21cmmc.mcmc.core import CoreLightConeModule
import numpy as np
from numpy.linalg import slogdet
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.colors import SymLogNorm, LogNorm
from py21cmmc_wv import util
lk = LikelihoodWaveletsMorlet(nrealisations=10)
core = CoreLightConeModule(redshift=7.0, max_redshift=7.5)
chain = build_computation_chain(core, lk)
lk.covariance[0,10]
lk.covariance[:lk.n_frequency,:lk.n_frequency]
lk.covariance
6384**2 * 8 / 1024/1024/1024
plt.figure(figsize=(10,10))
plt.imshow(lk.covariance.toarray()[:1000,:1000], norm=SymLogNorm(linthresh=1e5))
plt.colorbar()
lk.computeLikelihood(lk.default_ctx, {})
###Output
_____no_output_____
###Markdown
Test Block-Diag Solvers
###Code
x = np.linspace(0,1,10)
y = np.cov(np.random.normal(size=(10, 100)))
X = np.repeat(x,10)
Y = np.zeros((100, 100))
for i in range(10):
Y[i*10:(i+1)*10, i*10:(i+1)*10] = y
np.linalg.slogdet(Y)
util.logdet_block_matrix(Y, 10)
np.linalg.solve(Y,X).T.dot(X)
util.solve_block_matrix(Y,X,10).T.dot(X)
np.sum([slogdet(lk.covariance[i*lk.n_frequency:(i+1)*lk.n_frequency, i*lk.n_frequency:(i+1)*lk.n_frequency].toarray())[1] for i in range(int(lk.covariance.shape[0]/lk.n_frequency))])
logdet_chol(lk.covariance)
cov = lk.covariance.toarray()
cov.max()
cov.min()
np.sum(cov==0)/cov.size
del cov
wvlt = []
for i in range(10):
wvlt.append(lk.simulate(lk.default_ctx)['wavelets'])
wvlt = np.array(wvlt)
wvlt.shape
covs = [np.cov(x) for x in wvlt.transpose((1,3,2,0)).reshape((-1, lk.n_frequency, lk.nrealisations))]
covs
np.all(wvlt[0] == wvlt[1])
###Output
_____no_output_____
###Markdown
Test Model Covariance
###Code
from py21cmmc import run_lightcone
from py21cmmc_wv.morlet import morlet_transform_c
from powerbox.dft import fft
import numpy as np
lc = run_lightcone(redshift=7.0, max_redshift=10.0, user_params={"HII_DIM":100})
mean_power = 0
nu = np.linspace(0,10,100)
dnu = nu[1] - nu[0]
var = 100
mean_nonfourier = 0
for i in range(1000):
fake_visibility = np.random.normal(0, np.sqrt(var/2), size=100) + 1j * np.random.normal(0, np.sqrt(var/2), size=100)
fvis, eta, nu = morlet_transform_c(fake_visibility, nu)
mean_power += np.outer(fvis[:,-10], np.conj(fvis[:,-10]))/1000
mean_nonfourier += np.mean(np.abs(fake_visibility)**2) / 1000
mean_nonfourier # should just be the var.
fvis.shape
expected_mean_power = var * dnu**2 * np.sum()
expected_mean_power = expected_mean_power.reshape((100, 100, -1))
expected_mean_power_var = var * dnu * np.sqrt(np.pi) / eta
expected_mean_power_cov = (expected_mean_power_var * np.exp(-np.outer(eta**2, np.add.outer(nu,-nu)**2) / 4).T).T.reshape((len(eta), len(nu), len(nu)))
expected_mean_power_var[-10], expected_mean_power_cov[-10, 49, 49]
mean_power[49,49]
expected_mean_power_cov[-10, 52, 44], np.abs(mean_power[52, 44])
plt.imshow(np.log10(expected_mean_power_cov[-10]/np.abs(mean_power)), vmin=-5)
plt.colorbar()
expected_mean_power = var * 2*np.pi / np.e / eta #np.exp(-np.outer(eta**2,np.add.outer(nu,-nu)**2)/4).T
np.abs(mean_power).max()
expected_mean_power[-10]
mean_power
###Output
_____no_output_____
###Markdown
Test whether the covariance is diagonal and Gaussian.
###Code
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("/home/bella/Documents/Projects/Parameter_estimation_21cmmc/Codes/py21cmmc_fg")
from py21cmmc_fg.core import CoreForegrounds, CoreInstrumental
from py21cmmc_fg.likelihood import LikelihoodForeground2D
import numpy as np
from cosmoHammer.ChainContext import ChainContext
%matplotlib inline
import matplotlib.pyplot as plt
fg_core = CoreForegrounds(
pt_source_params=dict(
S_min=1e-1,
S_max=1.0
),
diffuse_params=dict(
u0=10.0,
eta = 0.01,
rho = -2.7,
mean_temp=253e3,
kappa=-2.55
),
add_point_sources=True,
add_diffuse=False,
redshifts = 1420./np.linspace(150, 160, 30) - 1,
boxsize=300.0,
sky_cells = 150
)
instr_core = CoreInstrumental(
antenna_posfile="grid_centres",
freq_min=150.0, freq_max=160.0, nfreq=30,
tile_diameter=4.0,
max_bl_length=150.0,
Tsys=0
)
lk = LikelihoodForeground2D(datafile=None, n_psbins=50)
fg_core.setup()
instr_core.setup()
ctx = ChainContext('derp', {"a":1})
fg_core(ctx)
instr_core(ctx)
p, k = lk.computePower(ctx)
plt.imshow(np.log10(p), origin='lower', aspect='auto', extent=(k[0][0], k[0][-1], k[1][0], k[1][-1]))
plt.xscale('log')
plt.yscale('log')
plt.colorbar()
plt.imshow(ctx.get("output").lightcone_box[:,:,0])
plt.colorbar()
p = [0]*30
for i in range(30):
fg_core(ctx)
instr_core(ctx)
p[i] = lk.computePower(ctx)[0].flatten()
mean = np.mean(p,axis=0)
cov = np.cov(p)
plt.imshow(np.log10(cov.T))
cov.shape
cov = np.cov(np.array(p).T)
###Output
_____no_output_____ |
nbs/21 - clip-moco.ipynb | ###Markdown
CLIP-MoCo> **CLIP**: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)> [Official Github Repo](https://github.com/openai/CLIP)> **MoCo**: [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/pdf/1911.05722.pdf) > **MoCo V2**: [Improved Baselines with Momentum Contrastive Learning](https://arxiv.org/pdf/2003.04297.pdf) This module combines CLIP with MoCo to increase the number of negative samples seen by the contrastive loss. This is useful when the available compute cannot support large batch sizes (GPUs with large memory) or a distributed InfoNCE loss implementation (multi-GPU machines).
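To make the MoCo side of this combination concrete before the implementation below, here is a minimal, self-contained sketch of the two moving parts MoCo adds: a momentum ("key") encoder that is an exponential moving average of the trained encoder, and a fixed-size queue of past key features that serves as the pool of negatives. This is an illustration only — the toy `query_enc`, `key_enc` and `enqueue` names are assumptions made for the sketch, not this library's API (the real encoders here are the CLIP image/text towers defined further down).

```python
import torch, torch.nn as nn
from copy import deepcopy

# Toy stand-ins: in this notebook the encoders are the CLIP image and text towers.
query_enc = nn.Linear(16, 8)
key_enc = deepcopy(query_enc)
for p in key_enc.parameters():
    p.requires_grad = False            # the key encoder is never touched by the optimizer

m, K = 0.999, 4096                     # momentum and queue size (the defaults used below)
queue = torch.randn(K, 8)              # FIFO queue of past key features = pool of negatives

@torch.no_grad()
def momentum_update():
    # EMA update: key <- m * key + (1 - m) * query
    for pq, pk in zip(query_enc.parameters(), key_enc.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1 - m)

@torch.no_grad()
def enqueue(keys):
    # Drop the oldest entries and append the newest batch of keys.
    global queue
    queue = torch.cat([queue[keys.shape[0]:], keys], dim=0)

x = torch.randn(32, 16)                # a fake batch of features
q = query_enc(x)                       # queries (gradients flow through these)
with torch.no_grad():
    k = key_enc(x)                     # keys (no gradients)
momentum_update()
enqueue(k)
print(q.shape, queue.shape)            # torch.Size([32, 8]) torch.Size([4096, 8])
```

The queue lets every training step compare its queries against thousands of cached negatives without ever holding a batch of that size in memory.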
###Code
#export
from fastai.vision.all import *
from self_supervised.augmentations import *
from self_supervised.layers import *
#export
try:
from clip.simple_tokenizer import SimpleTokenizer
except:
raise ImportError("""
CLIP package is not installed/importable, please visit https://github.com/openai/CLIP or install following:
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git
""")
###Output
_____no_output_____
###Markdown
Algorithm CLIP  MoCo  Tokenizer
###Code
#export
class ClipTokenizer(DisplayedTransform):
"Tokenizer from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py"
def __init__(self, context_length=77):
self._tokenizer = SimpleTokenizer()
self.context_length = context_length
self.vocab_size = len(self._tokenizer.encoder)
def encodes(self, text:str):
sot_token = self._tokenizer.encoder["<|startoftext|>"]
eot_token = self._tokenizer.encoder["<|endoftext|>"]
tokens = [sot_token] + self._tokenizer.encode(text) + [eot_token]
result = torch.zeros(self.context_length, dtype=torch.long)
if len(tokens) > self.context_length: raise Exception(f"Token length exceeds {self.context_length} for {text}")
result[:len(tokens)] = torch.tensor(tokens)
return TensorBase(result)
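# Usage sketch (illustrative only): the transform maps a caption to a fixed-length tensor of token ids,
# e.g. ClipTokenizer(context_length=77)("a photo of a cat") -> LongTensor of shape (77,),
# zero-padded after the <|endoftext|> token.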
###Output
_____no_output_____
###Markdown
Model
###Code
#export
def vitb32_config(input_res, context_length, vocab_size):
"ViT-B/32 configuration, uses 32x32 patches"
return dict(embed_dim=512,
image_resolution=input_res,
vision_layers=12,
vision_width=768,
vision_patch_size=32,
context_length=context_length,
vocab_size=vocab_size,
transformer_width=512,
transformer_heads=8,
transformer_layers=12)
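# Usage sketch (values illustrative): vitb32_config(input_res=224, context_length=77, vocab_size=ClipTokenizer().vocab_size)
# builds the kwargs dict consumed by CLIPMOCO(**config) below.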
#export
def vitl14_config(input_res, context_length, vocab_size):
"ViT-L/14 configuration, uses 14x14 patches"
return dict(embed_dim=512,
image_resolution=input_res,
vision_layers=24,
vision_width=1024,
vision_patch_size=14,
context_length=context_length,
vocab_size=vocab_size,
transformer_width=512,
transformer_heads=8,
transformer_layers=12)
#export
from collections import OrderedDict
from typing import Tuple, Union
from copy import deepcopy
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1):
super().__init__()
# all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = None
self.stride = stride
if stride > 1 or inplanes != planes * Bottleneck.expansion:
# downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
self.downsample = nn.Sequential(OrderedDict([
("-1", nn.AvgPool2d(stride)),
("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
("1", nn.BatchNorm2d(planes * self.expansion))
]))
def forward(self, x: torch.Tensor):
identity = x
out = self.relu(self.bn1(self.conv1(x)))
out = self.relu(self.bn2(self.conv2(out)))
out = self.avgpool(out)
out = self.bn3(self.conv3(out))
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class AttentionPool2d(nn.Module):
def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
super().__init__()
self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
self.k_proj = nn.Linear(embed_dim, embed_dim)
self.q_proj = nn.Linear(embed_dim, embed_dim)
self.v_proj = nn.Linear(embed_dim, embed_dim)
self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
self.num_heads = num_heads
def forward(self, x):
x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
x, _ = F.multi_head_attention_forward(
query=x, key=x, value=x,
embed_dim_to_check=x.shape[-1],
num_heads=self.num_heads,
q_proj_weight=self.q_proj.weight,
k_proj_weight=self.k_proj.weight,
v_proj_weight=self.v_proj.weight,
in_proj_weight=None,
in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
bias_k=None,
bias_v=None,
add_zero_attn=False,
dropout_p=0,
out_proj_weight=self.c_proj.weight,
out_proj_bias=self.c_proj.bias,
use_separate_proj_weight=True,
training=self.training,
need_weights=False
)
return x[0]
class ModifiedResNet(nn.Module):
"""
A ResNet class that is similar to torchvision's but contains the following changes:
- There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
- Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
- The final pooling layer is a QKV attention instead of an average pool
"""
def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
super().__init__()
self.output_dim = output_dim
self.input_resolution = input_resolution
# the 3-layer stem
self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(width // 2)
self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(width // 2)
self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(width)
self.avgpool = nn.AvgPool2d(2)
self.relu = nn.ReLU(inplace=True)
# residual layers
self._inplanes = width # this is a *mutable* variable used during construction
self.layer1 = self._make_layer(width, layers[0])
self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
embed_dim = width * 32 # the ResNet feature dimension
self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
def _make_layer(self, planes, blocks, stride=1):
layers = [Bottleneck(self._inplanes, planes, stride)]
self._inplanes = planes * Bottleneck.expansion
for _ in range(1, blocks):
layers.append(Bottleneck(self._inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
def stem(x):
for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
x = self.relu(bn(conv(x)))
x = self.avgpool(x)
return x
x = x.type(self.conv1.weight.dtype)
x = stem(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.attnpool(x)
return x
class LayerNorm(nn.LayerNorm):
"""Subclass torch's LayerNorm to handle fp16."""
def forward(self, x: torch.Tensor):
orig_type = x.dtype
ret = super().forward(x.type(torch.float32))
return ret.type(orig_type)
class QuickGELU(nn.Module):
def forward(self, x: torch.Tensor):
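# Sigmoid-based approximation of GELU (x * sigmoid(1.702 * x)), as used in the original CLIP code.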
return x * torch.sigmoid(1.702 * x)
class ResidualAttentionBlock(nn.Module):
def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
super().__init__()
self.attn = nn.MultiheadAttention(d_model, n_head)
self.ln_1 = LayerNorm(d_model)
self.mlp = nn.Sequential(OrderedDict([
("c_fc", nn.Linear(d_model, d_model * 4)),
("gelu", QuickGELU()),
("c_proj", nn.Linear(d_model * 4, d_model))
]))
self.ln_2 = LayerNorm(d_model)
self.attn_mask = attn_mask
def attention(self, x: torch.Tensor):
self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
def forward(self, x: torch.Tensor):
x = x + self.attention(self.ln_1(x))
x = x + self.mlp(self.ln_2(x))
return x
class Transformer(nn.Module):
def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, checkpoint=False, checkpoint_nchunks=2):
super().__init__()
self.width = width
self.layers = layers
self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
self.checkpoint = checkpoint
self.checkpoint_nchunks = checkpoint_nchunks
def forward(self, x: torch.Tensor):
if self.checkpoint: return torch.utils.checkpoint.checkpoint_sequential(self.resblocks, self.checkpoint_nchunks, x)
else: return self.resblocks(x)
class VisualTransformer(nn.Module):
def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int, **kwargs):
super().__init__()
self.input_resolution = input_resolution
self.output_dim = output_dim
self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
scale = width ** -0.5
self.class_embedding = nn.Parameter(scale * torch.randn(width))
self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
self.ln_pre = LayerNorm(width)
self.transformer = Transformer(width, layers, heads, **kwargs)
self.ln_post = LayerNorm(width)
self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
def forward(self, x: torch.Tensor):
x = self.conv1(x) # shape = [*, width, grid, grid]
x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
x = x + self.positional_embedding.to(x.dtype)
x = self.ln_pre(x)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.ln_post(x[:, 0, :])
if self.proj is not None:
x = x @ self.proj
return x
class CLIPMOCO(nn.Module):
def __init__(self,
embed_dim: int,
# vision
image_resolution: int,
vision_layers: Union[Tuple[int, int, int, int], int],
vision_width: int,
vision_patch_size: int,
# text
context_length: int,
vocab_size: int,
transformer_width: int,
transformer_heads: int,
transformer_layers: int,
K=4096,
m=0.999,
**kwargs
):
super().__init__()
self.context_length = context_length
if isinstance(vision_layers, (tuple, list)):
vision_heads = vision_width * 32 // 64
self.visual = ModifiedResNet(
layers=vision_layers,
output_dim=embed_dim,
heads=vision_heads,
input_resolution=image_resolution,
width=vision_width
)
else:
vision_heads = vision_width // 64
self.visual = VisualTransformer(
input_resolution=image_resolution,
patch_size=vision_patch_size,
width=vision_width,
layers=vision_layers,
heads=vision_heads,
output_dim=embed_dim,
**kwargs
)
self.transformer = Transformer(
width=transformer_width,
layers=transformer_layers,
heads=transformer_heads,
attn_mask=self.build_attention_mask(),
**kwargs
)
self.vocab_size = vocab_size
self.token_embedding = nn.Embedding(vocab_size, transformer_width)
self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
self.ln_final = LayerNorm(transformer_width)
self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
self.logit_scale = nn.Parameter(torch.log(torch.tensor(1/0.07))) # Same initialization as paper
self.initialize_parameters()
# MOCO params
self.K = K
self.m = m
# init key encoders
self.visual_key_encoder = deepcopy(self.visual)
for param_k in self.visual_key_encoder.parameters(): param_k.requires_grad = False
self.transformer_key_encoder = deepcopy(self.transformer)
for param_k in self.transformer_key_encoder.parameters(): param_k.requires_grad = False
self.text_projection_key_encoder = deepcopy(self.text_projection)
self.text_projection_key_encoder.requires_grad = False
# init queues
self.image_queue = torch.randn(self.K, embed_dim)
self.text_queue = torch.randn(self.K, embed_dim)
self.queue_ptr = 0
def initialize_parameters(self):
nn.init.normal_(self.token_embedding.weight, std=0.02)
nn.init.normal_(self.positional_embedding, std=0.01)
# visual model
if isinstance(self.visual, ModifiedResNet):
if self.visual.attnpool is not None:
std = self.visual.attnpool.c_proj.in_features ** -0.5
nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)
for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
for name, param in resnet_block.named_parameters():
if name.endswith("bn3.weight"):
nn.init.zeros_(param)
# text model
proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
attn_std = self.transformer.width ** -0.5
fc_std = (2 * self.transformer.width) ** -0.5
for block in self.transformer.resblocks:
nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
if self.text_projection is not None:
nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
def build_attention_mask(self):
# lazily create causal attention mask, with full attention between the vision tokens
# pytorch uses additive attention mask; fill with -inf
mask = torch.empty(self.context_length, self.context_length)
mask.fill_(float("-inf"))
mask.triu_(1) # zero out the lower diagonal
return mask
@property
def dtype(self):
return self.visual.conv1.weight.dtype
def encode_image(self, image):
return self.visual(image.type(self.dtype))
def encode_text(self, text):
x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
x = x + self.positional_embedding.type(self.dtype)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.ln_final(x).type(self.dtype)
# x.shape = [batch_size, n_ctx, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
return x
@torch.no_grad()
def _momentum_update_key_encoders(self):
for param_q, param_k in zip(self.visual.parameters(), self.visual_key_encoder.parameters()):
param_k.data = param_k.data * self.m + param_q.data * (1. - self.m)
for param_q, param_k in zip(self.transformer.parameters(), self.transformer_key_encoder.parameters()):
param_k.data = param_k.data * self.m + param_q.data * (1. - self.m)
self.text_projection_key_encoder.data = self.text_projection_key_encoder.data * self.m + self.text_projection.data * (1. - self.m)
@torch.no_grad()
def _dequeue_and_enqueue(self, image_k, text_k):
bs = image_k.size(0)
assert self.K % bs == 0 # for simplicity
self.image_queue[self.queue_ptr:self.queue_ptr+bs, :] = image_k
self.text_queue[self.queue_ptr:self.queue_ptr+bs, :] = text_k
self.queue_ptr = (self.queue_ptr + bs) % self.K # move pointer
@torch.no_grad()
def key_encode_image(self, image):
return self.visual_key_encoder(image.type(self.dtype))
@torch.no_grad()
def key_encode_text(self, text):
x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
x = x + self.positional_embedding.type(self.dtype)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer_key_encoder(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.ln_final(x).type(self.dtype)
# x.shape = [batch_size, n_ctx, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection_key_encoder
return x
def forward(self, image, text):
image_features = self.encode_image(image)
text_features = self.encode_text(text)
# normalized features
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
return image_features, text_features
###Output
_____no_output_____
###Markdown
Metric. A useful proxy metric for tracking training performance and convergence.
###Code
#export
class RetrievalAtK(AccumMetric):
def __init__(self, k=20, **kwargs):
super().__init__(func=None, flatten=False, **kwargs)
self.k = k
@property
def value(self):
"For monitoring retrieval at k during training for sanity checking, should be used on < ~10000 samples"
if len(self.preds) == 0: return
image_features = torch.cat(list(L(self.preds).itemgot(0)))
text_features = torch.cat(list(L(self.preds).itemgot(1)))
ranking = torch.argsort(to_detach(image_features.to(default_device()) @ text_features.T.to(default_device()), gather=False),
descending=True)
preds = array(torch.where(ranking == torch.arange(len(image_features)).view(-1,1))[1])
if self.k == "mean": return preds.mean() + 1
elif self.k == "median": return np.floor(np.median(preds)) + 1
else: return np.mean(preds < self.k)
@property
def name(self):
if self.k == "mean": return "mean_retrieval_ranking"
elif self.k == "median": return "median_retrieval_ranking"
else: return f"retrieval_at_{self.k}"
###Output
_____no_output_____
###Markdown
CLIP-MoCo Callback
###Code
#export
class CLIPMOCOTrainer(Callback):
"MoCo Loss for CLIP. Can be used with or without DistributedDataParallel"
order,run_valid = 9,True
def before_fit(self):
self.learn.loss_func = self.lf
def before_batch(self):
"Generate image and text key for the current batch"
with torch.no_grad():
img_b, text_b = self.learn.xb
key_image_features = self.learn.model.key_encode_image(img_b)
key_text_features = self.learn.model.key_encode_text(text_b)
key_image_features = key_image_features / key_image_features.norm(dim=-1, keepdim=True)
key_text_features = key_text_features / key_text_features.norm(dim=-1, keepdim=True)
self.learn.yb = (key_image_features, key_text_features)
def lf(self, pred, *yb):
key_image_features, key_text_features = yb
image_features, text_features = pred
logit_scale = self.model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ key_text_features.t()
logits_per_text = logit_scale * text_features @ key_image_features.t()
labels = torch.arange(len(logits_per_image)).to(logits_per_image.device)
image_loss = F.cross_entropy(logits_per_image, labels)
text_loss = F.cross_entropy(logits_per_text, labels)
return (image_loss+text_loss)/2
def after_step(self):
# logit scaling set as max 100
if num_distrib()==0: self.model.logit_scale.data = torch.clamp(self.model.logit_scale.data, 0, 4.6052)
else: self.model.module.logit_scale.data = torch.clamp(self.model.module.logit_scale.data, 0, 4.6052)
# queues update
key_image_features, key_text_features = self.learn.yb
self.learn.model._dequeue_and_enqueue(key_image_features, key_text_features)
# momentum update
self.learn.model._momentum_update_key_encoders()
###Output
_____no_output_____
###Markdown
Example Usage
###Code
num2txt = {'3': 'three', '7': 'seven'}
def num_to_txt(o): return num2txt[o]
def dummy_targ(o): return 0 # loss func is not called without it
path = untar_data(URLs.MNIST_TINY)
items = get_image_files(path)
clip_tokenizer = ClipTokenizer()
tds = Datasets(items, [PILImage.create, [parent_label, num_to_txt], dummy_targ], n_inp=2, splits=GrandparentSplitter()(items))
dls = tds.dataloaders(bs=2, after_item=[Resize(224), clip_tokenizer, ToTensor()], after_batch=[IntToFloatTensor()], device='cpu')
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIPMOCO(K=4096,m=0.999, **vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learner = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPMOCOTrainer(), ShortEpochCallback(0.001)],
metrics=[RetrievalAtK(k=5),
RetrievalAtK(k=20),
RetrievalAtK(k="mean"),
RetrievalAtK(k="median")])
#hide
# Causes kernel died error in CI - github actions
# learner.fit(1)
# learner.recorder.losses
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 01 - augmentations.ipynb.
Converted 02 - layers.ipynb.
Converted 03 - distributed.ipynb.
Converted 10 - simclr.ipynb.
Converted 11 - moco.ipynb.
Converted 12 - byol.ipynb.
Converted 13 - swav.ipynb.
Converted 14 - barlow_twins.ipynb.
Converted 20 - clip.ipynb.
Converted 21 - clip-moco.ipynb.
Converted index.ipynb.
###Markdown
CLIP-MoCo> CLIP: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)> [Official Github Repo](https://github.com/openai/CLIP)> MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/pdf/1911.05722.pdf) > MoCo V2: [Improved Baselines with Momentum Contrastive Learning](https://arxiv.org/pdf/2003.04297.pdf) This module combines CLIP and MoCo to increase the number of negative samples. This is useful when large-batch training is out of reach, for example when no large-memory GPUs are available to support big batch sizes and no multi-GPU machines are available to leverage a distributed InfoNCE loss implementation. A toy illustration of the queue-as-negatives idea is sketched at the end of the model code below.
###Code
#export
from fastai.vision.all import *
from self_supervised.augmentations import *
from self_supervised.layers import *
#export
try:
    from clip.simple_tokenizer import SimpleTokenizer
except ImportError:
    raise ImportError("""
    The CLIP package is not installed or importable. Please visit https://github.com/openai/CLIP or install it as follows:
    $ pip install ftfy regex tqdm
    $ pip install git+https://github.com/openai/CLIP.git
    """)
###Output
_____no_output_____
###Markdown
Algorithm CLIP  MoCo  Tokenizer
###Code
#export
class ClipTokenizer(DisplayedTransform):
"Tokenizer from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py"
def __init__(self, context_length=77):
self._tokenizer = SimpleTokenizer()
self.context_length = context_length
self.vocab_size = len(self._tokenizer.encoder)
    def encodes(self, text: str):
sot_token = self._tokenizer.encoder["<|startoftext|>"]
eot_token = self._tokenizer.encoder["<|endoftext|>"]
tokens = [sot_token] + self._tokenizer.encode(text) + [eot_token]
result = torch.zeros(self.context_length, dtype=torch.long)
if len(tokens) > self.context_length: raise Exception(f"Token length exceeds {self.context_length} for {text}")
result[:len(tokens)] = torch.tensor(tokens)
return TensorBase(result)
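# Added illustrative check (hypothetical, not part of the original notebook): the tokenizer
# maps a caption to a fixed-length LongTensor, zero-padded after the <|endoftext|> token.
_demo_tok = ClipTokenizer(context_length=77)
_demo_ids = _demo_tok("a photo of a dog")   # Transform call dispatches to encodes
print(_demo_ids.shape, _demo_ids.dtype)     # torch.Size([77]) torch.int64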
###Output
_____no_output_____
###Markdown
Model
###Code
#export
def vitb32_config(input_res, context_length, vocab_size):
"ViT-B/32 configuration, uses 32x32 patches"
return dict(embed_dim=512,
image_resolution=input_res,
vision_layers=12,
vision_width=768,
vision_patch_size=32,
context_length=context_length,
vocab_size=vocab_size,
transformer_width=512,
transformer_heads=8,
transformer_layers=12)
#export
def vitl14_config(input_res, context_length, vocab_size):
"ViT-L/14 configuration, uses 14x14 patches"
return dict(embed_dim=512,
image_resolution=input_res,
vision_layers=24,
vision_width=1024,
vision_patch_size=14,
context_length=context_length,
vocab_size=vocab_size,
transformer_width=512,
transformer_heads=8,
transformer_layers=12)
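# Note (added for clarity): with a 224x224 input, ViT-B/32 sees (224/32)**2 = 49 patch
# tokens plus one class token, while ViT-L/14 sees (224/14)**2 = 256 plus one; in both
# config dicts above the text tower is a 12-layer transformer of width 512.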
#export
from collections import OrderedDict
from typing import Tuple, Union
from copy import deepcopy
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1):
super().__init__()
# all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = None
self.stride = stride
if stride > 1 or inplanes != planes * Bottleneck.expansion:
# downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
self.downsample = nn.Sequential(OrderedDict([
("-1", nn.AvgPool2d(stride)),
("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
("1", nn.BatchNorm2d(planes * self.expansion))
]))
def forward(self, x: torch.Tensor):
identity = x
out = self.relu(self.bn1(self.conv1(x)))
out = self.relu(self.bn2(self.conv2(out)))
out = self.avgpool(out)
out = self.bn3(self.conv3(out))
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class AttentionPool2d(nn.Module):
def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
super().__init__()
self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
self.k_proj = nn.Linear(embed_dim, embed_dim)
self.q_proj = nn.Linear(embed_dim, embed_dim)
self.v_proj = nn.Linear(embed_dim, embed_dim)
self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
self.num_heads = num_heads
def forward(self, x):
x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
x, _ = F.multi_head_attention_forward(
query=x, key=x, value=x,
embed_dim_to_check=x.shape[-1],
num_heads=self.num_heads,
q_proj_weight=self.q_proj.weight,
k_proj_weight=self.k_proj.weight,
v_proj_weight=self.v_proj.weight,
in_proj_weight=None,
in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
bias_k=None,
bias_v=None,
add_zero_attn=False,
dropout_p=0,
out_proj_weight=self.c_proj.weight,
out_proj_bias=self.c_proj.bias,
use_separate_proj_weight=True,
training=self.training,
need_weights=False
)
return x[0]
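# Note (added for clarity): AttentionPool2d flattens the feature map to (HW+1, N, C) with a
# mean-pooled token prepended at position 0; after the single multi-head attention pass,
# x[0] (the attended mean token, projected by c_proj) is returned as the pooled image embedding.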
class ModifiedResNet(nn.Module):
"""
A ResNet class that is similar to torchvision's but contains the following changes:
- There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
- Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
- The final pooling layer is a QKV attention instead of an average pool
"""
def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
super().__init__()
self.output_dim = output_dim
self.input_resolution = input_resolution
# the 3-layer stem
self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(width // 2)
self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(width // 2)
self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(width)
self.avgpool = nn.AvgPool2d(2)
self.relu = nn.ReLU(inplace=True)
# residual layers
self._inplanes = width # this is a *mutable* variable used during construction
self.layer1 = self._make_layer(width, layers[0])
self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
embed_dim = width * 32 # the ResNet feature dimension
self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
def _make_layer(self, planes, blocks, stride=1):
layers = [Bottleneck(self._inplanes, planes, stride)]
self._inplanes = planes * Bottleneck.expansion
for _ in range(1, blocks):
layers.append(Bottleneck(self._inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
def stem(x):
for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
x = self.relu(bn(conv(x)))
x = self.avgpool(x)
return x
x = x.type(self.conv1.weight.dtype)
x = stem(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.attnpool(x)
return x
class LayerNorm(nn.LayerNorm):
"""Subclass torch's LayerNorm to handle fp16."""
def forward(self, x: torch.Tensor):
orig_type = x.dtype
ret = super().forward(x.type(torch.float32))
return ret.type(orig_type)
class QuickGELU(nn.Module):
def forward(self, x: torch.Tensor):
return x * torch.sigmoid(1.702 * x)
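# Note (added for clarity): x * sigmoid(1.702 * x) is the fast sigmoid approximation of
# GELU used by the original CLIP implementation.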
class ResidualAttentionBlock(nn.Module):
def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
super().__init__()
self.attn = nn.MultiheadAttention(d_model, n_head)
self.ln_1 = LayerNorm(d_model)
self.mlp = nn.Sequential(OrderedDict([
("c_fc", nn.Linear(d_model, d_model * 4)),
("gelu", QuickGELU()),
("c_proj", nn.Linear(d_model * 4, d_model))
]))
self.ln_2 = LayerNorm(d_model)
self.attn_mask = attn_mask
def attention(self, x: torch.Tensor):
self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
def forward(self, x: torch.Tensor):
x = x + self.attention(self.ln_1(x))
x = x + self.mlp(self.ln_2(x))
return x
class Transformer(nn.Module):
def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None, checkpoint=False, checkpoint_nchunks=2):
super().__init__()
self.width = width
self.layers = layers
self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
self.checkpoint = checkpoint
self.checkpoint_nchunks = checkpoint_nchunks
def forward(self, x: torch.Tensor):
if self.checkpoint: return torch.utils.checkpoint.checkpoint_sequential(self.resblocks, self.checkpoint_nchunks, x)
else: return self.resblocks(x)
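# Note (added for clarity): with checkpoint=True the residual blocks run under
# torch.utils.checkpoint.checkpoint_sequential, trading extra forward compute during the
# backward pass for lower activation memory; checkpoint_nchunks sets the number of segments.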
class VisualTransformer(nn.Module):
def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int, **kwargs):
super().__init__()
self.input_resolution = input_resolution
self.output_dim = output_dim
self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
scale = width ** -0.5
self.class_embedding = nn.Parameter(scale * torch.randn(width))
self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
self.ln_pre = LayerNorm(width)
self.transformer = Transformer(width, layers, heads, **kwargs)
self.ln_post = LayerNorm(width)
self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
def forward(self, x: torch.Tensor):
x = self.conv1(x) # shape = [*, width, grid, grid]
x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
x = x + self.positional_embedding.to(x.dtype)
x = self.ln_pre(x)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.ln_post(x[:, 0, :])
if self.proj is not None:
x = x @ self.proj
return x
class CLIPMOCO(nn.Module):
def __init__(self,
embed_dim: int,
# vision
image_resolution: int,
vision_layers: Union[Tuple[int, int, int, int], int],
vision_width: int,
vision_patch_size: int,
# text
context_length: int,
vocab_size: int,
transformer_width: int,
transformer_heads: int,
transformer_layers: int,
K=4096,
m=0.999,
**kwargs
):
super().__init__()
self.context_length = context_length
if isinstance(vision_layers, (tuple, list)):
vision_heads = vision_width * 32 // 64
self.visual = ModifiedResNet(
layers=vision_layers,
output_dim=embed_dim,
heads=vision_heads,
input_resolution=image_resolution,
width=vision_width
)
else:
vision_heads = vision_width // 64
self.visual = VisualTransformer(
input_resolution=image_resolution,
patch_size=vision_patch_size,
width=vision_width,
layers=vision_layers,
heads=vision_heads,
output_dim=embed_dim,
**kwargs
)
self.transformer = Transformer(
width=transformer_width,
layers=transformer_layers,
heads=transformer_heads,
attn_mask=self.build_attention_mask(),
**kwargs
)
self.vocab_size = vocab_size
self.token_embedding = nn.Embedding(vocab_size, transformer_width)
self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
self.ln_final = LayerNorm(transformer_width)
self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
self.logit_scale = nn.Parameter(torch.log(torch.tensor(1/0.07))) # Same initialization as paper
self.initialize_parameters()
# MOCO params
self.K = K
self.m = m
# init key encoders
self.visual_key_encoder = deepcopy(self.visual)
for param_k in self.visual_key_encoder.parameters(): param_k.requires_grad = False
self.transformer_key_encoder = deepcopy(self.transformer)
for param_k in self.transformer_key_encoder.parameters(): param_k.requires_grad = False
self.text_projection_key_encoder = deepcopy(self.text_projection)
self.text_projection_key_encoder.requires_grad = False
# init queues
self.image_queue = torch.randn(self.K, embed_dim)
self.text_queue = torch.randn(self.K, embed_dim)
self.queue_ptr = 0
def initialize_parameters(self):
nn.init.normal_(self.token_embedding.weight, std=0.02)
nn.init.normal_(self.positional_embedding, std=0.01)
# visual model
if isinstance(self.visual, ModifiedResNet):
if self.visual.attnpool is not None:
std = self.visual.attnpool.c_proj.in_features ** -0.5
nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)
for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
for name, param in resnet_block.named_parameters():
if name.endswith("bn3.weight"):
nn.init.zeros_(param)
# text model
proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
attn_std = self.transformer.width ** -0.5
fc_std = (2 * self.transformer.width) ** -0.5
for block in self.transformer.resblocks:
nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
if self.text_projection is not None:
nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
def build_attention_mask(self):
# lazily create causal attention mask, with full attention between the vision tokens
# pytorch uses additive attention mask; fill with -inf
mask = torch.empty(self.context_length, self.context_length)
mask.fill_(float("-inf"))
mask.triu_(1) # zero out the lower diagonal
return mask
@property
def dtype(self):
return self.visual.conv1.weight.dtype
def encode_image(self, image):
return self.visual(image.type(self.dtype))
def encode_text(self, text):
x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
x = x + self.positional_embedding.type(self.dtype)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.ln_final(x).type(self.dtype)
# x.shape = [batch_size, n_ctx, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
return x
@torch.no_grad()
def _momentum_update_key_encoders(self):
for param_q, param_k in zip(self.visual.parameters(), self.visual_key_encoder.parameters()):
param_k.data = param_k.data * self.m + param_q.data * (1. - self.m)
for param_q, param_k in zip(self.transformer.parameters(), self.transformer_key_encoder.parameters()):
param_k.data = param_k.data * self.m + param_q.data * (1. - self.m)
self.text_projection_key_encoder.data = self.text_projection_key_encoder.data * self.m + self.text_projection.data * (1. - self.m)
@torch.no_grad()
def _dequeue_and_enqueue(self, image_k, text_k):
bs = image_k.size(0)
assert self.K % bs == 0 # for simplicity
self.image_queue[self.queue_ptr:self.queue_ptr+bs, :] = image_k
self.text_queue[self.queue_ptr:self.queue_ptr+bs, :] = text_k
self.queue_ptr = (self.queue_ptr + bs) % self.K # move pointer
@torch.no_grad()
def key_encode_image(self, image):
return self.visual_key_encoder(image.type(self.dtype))
@torch.no_grad()
def key_encode_text(self, text):
x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
x = x + self.positional_embedding.type(self.dtype)
x = x.permute(1, 0, 2) # NLD -> LND
x = self.transformer_key_encoder(x)
x = x.permute(1, 0, 2) # LND -> NLD
x = self.ln_final(x).type(self.dtype)
# x.shape = [batch_size, n_ctx, transformer.width]
# take features from the eot embedding (eot_token is the highest number in each sequence)
x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection_key_encoder
return x
def forward(self, image, text):
image_features = self.encode_image(image)
text_features = self.encode_text(text)
# normalized features
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
return image_features, text_features
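# Added toy illustration (hypothetical, not part of the library): how a MoCo-style queue of
# key features can act as extra negatives in a CLIP-style InfoNCE loss. Random tensors stand
# in for encoder outputs; the trainer callback below computes its loss on in-batch keys and
# maintains the queues and momentum encoders defined above.
_q_img  = F.normalize(torch.randn(8, 512), dim=-1)        # image query features
_k_txt  = F.normalize(torch.randn(8, 512), dim=-1)        # text key features (momentum encoder)
_queue  = F.normalize(torch.randn(4096, 512), dim=-1)     # queued text keys, i.e. extra negatives
_logits = 100.0 * _q_img @ torch.cat([_k_txt, _queue]).t()  # matching pairs sit on the diagonal
_loss   = F.cross_entropy(_logits, torch.arange(8))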
###Output
_____no_output_____
###Markdown
Metric. A useful proxy metric for tracking training performance and convergence.
###Code
#export
class RetrievalAtK(AccumMetric):
def __init__(self, k=20, **kwargs):
super().__init__(func=None, flatten=False, **kwargs)
self.k = k
@property
def value(self):
"For monitoring retrieval at k during training for sanity checking, should be used on < ~10000 samples"
if len(self.preds) == 0: return
image_features = torch.cat(list(L(self.preds).itemgot(0)))
text_features = torch.cat(list(L(self.preds).itemgot(1)))
ranking = torch.argsort(to_detach(image_features.to(default_device()) @ text_features.T.to(default_device()), gather=False),
descending=True)
preds = array(torch.where(ranking == torch.arange(len(image_features)).view(-1,1))[1])
if self.k == "mean": return preds.mean() + 1
elif self.k == "median": return np.floor(np.median(preds)) + 1
else: return np.mean(preds < self.k)
@property
def name(self):
if self.k == "mean": return "mean_retrieval_ranking"
elif self.k == "median": return "median_retrieval_ranking"
else: return f"retrieval_at_{self.k}"
###Output
_____no_output_____
###Markdown
CLIP-MoCo Callback
###Code
#export
class CLIPMOCOTrainer(Callback):
"MoCo Loss for CLIP. Can be used with or without DistributedDataParallel"
order,run_valid = 9,True
def before_fit(self):
self.learn.loss_func = self.lf
def before_batch(self):
"Generate image and text key for the current batch"
with torch.no_grad():
img_b, text_b = self.learn.xb
key_image_features = self.learn.model.key_encode_image(img_b)
key_text_features = self.learn.model.key_encode_text(text_b)
key_image_features = key_image_features / key_image_features.norm(dim=-1, keepdim=True)
key_text_features = key_text_features / key_text_features.norm(dim=-1, keepdim=True)
self.learn.yb = (key_image_features, key_text_features)
def lf(self, pred, *yb):
key_image_features, key_text_features = yb
image_features, text_features = pred
logit_scale = self.model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ key_text_features.t()
logits_per_text = logit_scale * text_features @ key_image_features.t()
labels = torch.arange(len(logits_per_image)).to(logits_per_image.device)
image_loss = F.cross_entropy(logits_per_image, labels)
text_loss = F.cross_entropy(logits_per_text, labels)
return (image_loss+text_loss)/2
def after_step(self):
# logit scaling set as max 100
if num_distrib()==0: self.model.logit_scale.data = torch.clamp(self.model.logit_scale.data, 0, 4.6052)
else: self.model.module.logit_scale.data = torch.clamp(self.model.module.logit_scale.data, 0, 4.6052)
# queues update
key_image_features, key_text_features = self.learn.yb
self.learn.model._dequeue_and_enqueue(key_image_features, key_text_features)
# momentum update
self.learn.model._momentum_update_key_encoders()
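# Note (added for clarity): per training step the callback (1) encodes the batch with the
# frozen momentum ("key") encoders in before_batch, (2) computes a symmetric cross-entropy
# between query and key features in lf, and (3) in after_step clamps logit_scale to
# ln(100) ~= 4.6052, pushes the new keys into the queues and applies the momentum update
# to the key encoders.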
###Output
_____no_output_____
###Markdown
Example Usage
###Code
num2txt = {'3': 'three', '7': 'seven'}
def num_to_txt(o): return num2txt[o]
def dummy_targ(o): return 0 # loss func is not called without it
path = untar_data(URLs.MNIST_TINY)
items = get_image_files(path)
clip_tokenizer = ClipTokenizer()
tds = Datasets(items, [PILImage.create, [parent_label, num_to_txt], dummy_targ], n_inp=2, splits=GrandparentSplitter()(items))
dls = tds.dataloaders(bs=2, after_item=[Resize(224), clip_tokenizer, ToTensor()], after_batch=[IntToFloatTensor()], device='cpu')
vitb32_config_dict = vitb32_config(224, clip_tokenizer.context_length, clip_tokenizer.vocab_size)
clip_model = CLIPMOCO(K=4096,m=0.999, **vitb32_config_dict, checkpoint=False, checkpoint_nchunks=0)
learner = Learner(dls, clip_model, loss_func=noop, cbs=[CLIPMOCOTrainer(), ShortEpochCallback(0.001)],
metrics=[RetrievalAtK(k=5),
RetrievalAtK(k=20),
RetrievalAtK(k="mean"),
RetrievalAtK(k="median")])
learner.summary()
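# Note (added for clarity): loss_func=noop is only a placeholder, since CLIPMOCOTrainer swaps
# in its own loss during before_fit; dummy_targ exists only so that fastai calls a loss at all,
# and the real targets (normalized key features) are set in before_batch.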
#hide
# Causes kernel died error in CI - github actions
# learner.fit(1)
# learner.recorder.losses
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
_____no_output_____ |
_downloads/srtomo_regularized.ipynb | ###Markdown
Straight-ray toy tomography with different regularization. A very simplified way of playing around with tomography is through a straight-ray approximation. If we assume that the seismic rays don't bend when they encounter a change in velocity (i.e., no refraction), then the inversion becomes linear and much simpler to solve. It is a good example to illustrate how different forms of regularization impact the estimated velocity model. This simple tomography is implemented in the :class:`~fatiando.seismic.srtomo.SRTomo` class. The example below uses 3 forms of regularization to invert a synthetic data-set.
###Code
import numpy as np
import matplotlib.pyplot as plt
from fatiando.mesher import SquareMesh
from fatiando.seismic import ttime2d, srtomo
from fatiando.inversion import Smoothness2D, Damping, TotalVariation2D
from fatiando import utils, gridder
# First, we'll create a simple model with a high velocity square in the middle
area = (0, 500000, 0, 500000)
shape = (30, 30)
model = SquareMesh(area, shape)
vel = 4000 * np.ones(shape)
vel[5:25, 5:25] = 10000
model.addprop('vp', vel.ravel())
# Make some noisy travel time data using straight-rays
# Set the random seed so that points are the same every time we run this script
seed = 0
src_loc_x, src_loc_y = gridder.scatter(area, 80, seed=seed)
src_loc = np.transpose([src_loc_x, src_loc_y])
rec_loc_x, rec_loc_y = gridder.circular_scatter(area, 30,
random=True, seed=seed)
rec_loc = np.transpose([rec_loc_x, rec_loc_y])
srcs = [src for src in src_loc for _ in rec_loc]
recs = [rec for _ in src_loc for rec in rec_loc]
tts = ttime2d.straight(model, 'vp', srcs, recs)
# Use 2% random noise to corrupt the data
tts = utils.contaminate(tts, 0.02, percent=True, seed=seed)
# Make a mesh for the inversion. The inversion will estimate the velocity in
# each square of the mesh. To make things simpler, we'll use a mesh that is the
# same as our original model.
mesh = SquareMesh(area, shape)
# Create solvers for each type of regularization and fit the synthetic data to
# obtain an estimated velocity model
solver = srtomo.SRTomo(tts, srcs, recs, mesh)
smooth = solver + 1e8*Smoothness2D(mesh.shape)
smooth.fit()
damped = solver + 1e8*Damping(mesh.size)
damped.fit()
sharp = solver + 30*TotalVariation2D(1e-10, mesh.shape)
# Since Total Variation is a non-linear regularizing function, the tomography
# becomes non-linear as well. We need to configure the inversion to use the
# Levenberg-Marquardt algorithm, a gradient-descent method that requires an
# initial estimate
sharp.config('levmarq', initial=0.00001*np.ones(mesh.size)).fit()
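# Note (added for clarity): Damping penalizes the norm of the parameter vector, Smoothness2D
# penalizes differences between neighboring cells (favoring smooth, blurred edges), and
# TotalVariation2D penalizes the absolute value of those differences (favoring sharp,
# piecewise-constant models); the multipliers (1e8, 1e8, 30) set the trade-off between
# regularization and data fit.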
# Plot the original model and the 3 estimates using the same color bar
fig, axes = plt.subplots(2, 2, figsize=(8, 7), sharex='all', sharey='all')
x = model.get_xs()/1000
y = model.get_ys()/1000
vmin, vmax = vel.min(), vel.max()
ax = axes[0, 0]
ax.set_title('True model')
ax.pcolormesh(x, y, vel, cmap='Greens', vmin=vmin, vmax=vmax)
ax.plot(src_loc[:, 0]/1000, src_loc[:, 1]/1000, '+k', label='Earthquakes')
ax.plot(rec_loc[:, 0]/1000, rec_loc[:, 1]/1000, '^k', label='Receivers')
ax.legend(loc='upper right', numpoints=1)
ax = axes[0, 1]
ax.set_title('Damped solution')
ax.pcolormesh(x, y, damped.estimate_.reshape(shape), cmap='Greens', vmin=vmin,
vmax=vmax)
ax = axes[1, 0]
ax.set_title('Smooth solution')
ax.pcolormesh(x, y, smooth.estimate_.reshape(shape), cmap='Greens', vmin=vmin,
vmax=vmax)
ax = axes[1, 1]
ax.set_title('Sharp solution')
ax.pcolormesh(x, y, sharp.estimate_.reshape(shape), cmap='Greens', vmin=vmin,
vmax=vmax)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Straight-ray toy tomography with different regularization-------------------------------------------------------------A very simplified way of playing around with tomography is through astraight-ray approximation. If we assume that the seismic rays don't bend whenthey encounter a change in velocity (i.e., no refraction), then the inversionbecomes linear and much simpler to solve. It is a good example to illustratehow different forms of regularization impact the estimated velocity model.This simple tomography is implemented in the:class:`~fatiando.seismic.srtomo.SRTomo` class. The example below uses 3 formsof regularization to invert a synthetic data-set.WarningThe SRTomo class is meant as a teaching tool and not a **real tomography code**. It approximates the seismic rays with straight lines, thus ignoring refraction (Snell's Law). Results can be significantly distorted, particularly on highly heterogeneous media.
###Code
import numpy as np
import matplotlib.pyplot as plt
from fatiando.mesher import SquareMesh
from fatiando.seismic import ttime2d, srtomo
from fatiando.inversion import Smoothness2D, Damping, TotalVariation2D
from fatiando import utils, gridder
# First, we'll create a simple model with a high velocity square in the middle
area = (0, 500000, 0, 500000)
shape = (30, 30)
model = SquareMesh(area, shape)
vel = 4000 * np.ones(shape)
vel[5:25, 5:25] = 10000
model.addprop('vp', vel.ravel())
# Make some noisy travel time data using straight-rays
# Set the random seed so that points are the same every time we run this script
seed = 0
src_loc_x, src_loc_y = gridder.scatter(area, 80, seed=seed)
src_loc = np.transpose([src_loc_x, src_loc_y])
rec_loc_x, rec_loc_y = gridder.circular_scatter(area, 30,
random=True, seed=seed)
rec_loc = np.transpose([rec_loc_x, rec_loc_y])
srcs = [src for src in src_loc for _ in rec_loc]
recs = [rec for _ in src_loc for rec in rec_loc]
tts = ttime2d.straight(model, 'vp', srcs, recs)
# Use 2% random noise to corrupt the data
tts = utils.contaminate(tts, 0.02, percent=True, seed=seed)
# Make a mesh for the inversion. The inversion will estimate the velocity in
# each square of the mesh. To make things simpler, we'll use a mesh that is the
# same as our original model.
mesh = SquareMesh(area, shape)
# Create solvers for each type of regularization and fit the synthetic data to
# obtain an estimated velocity model
solver = srtomo.SRTomo(tts, srcs, recs, mesh)
smooth = solver + 1e8*Smoothness2D(mesh.shape)
smooth.fit()
damped = solver + 1e8*Damping(mesh.size)
damped.fit()
sharp = solver + 30*TotalVariation2D(1e-10, mesh.shape)
# Since Total Variation is a non-linear regularizing function, then the
# tomography becomes non-linear as well. We need to configure the inversion to
# use the Levenberg-Marquardt algorithm, a gradient descent method, that
# requires an initial estimate
sharp.config('levmarq', initial=0.00001*np.ones(mesh.size)).fit()
# Plot the original model and the 3 estimates using the same color bar
fig, axes = plt.subplots(2, 2, figsize=(8, 7), sharex='all', sharey='all')
x = model.get_xs()/1000
y = model.get_ys()/1000
vmin, vmax = vel.min(), vel.max()
ax = axes[0, 0]
ax.set_title('True model')
ax.pcolormesh(x, y, vel, cmap='Greens', vmin=vmin, vmax=vmax)
ax.plot(src_loc[:, 0]/1000, src_loc[:, 1]/1000, '+k', label='Earthquakes')
ax.plot(rec_loc[:, 0]/1000, rec_loc[:, 1]/1000, '^k', label='Receivers')
ax.legend(loc='upper right', numpoints=1)
ax = axes[0, 1]
ax.set_title('Damped solution')
ax.pcolormesh(x, y, damped.estimate_.reshape(shape), cmap='Greens', vmin=vmin,
vmax=vmax)
ax = axes[1, 0]
ax.set_title('Smooth solution')
ax.pcolormesh(x, y, smooth.estimate_.reshape(shape), cmap='Greens', vmin=vmin,
vmax=vmax)
ax = axes[1, 1]
ax.set_title('Sharp solution')
ax.pcolormesh(x, y, sharp.estimate_.reshape(shape), cmap='Greens', vmin=vmin,
vmax=vmax)
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
xarray/xarray-2.ipynb | ###Markdown
`Xarray` Data Processing (2)--------------------------Lecturer: Li Xianxiang, School of Atmospheric Sciences
###Code
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#xr.set_options(display_style="html")
%matplotlib inline
air_temp = xr.tutorial.load_dataset('air_temperature')
###Output
_____no_output_____
###Markdown
7. Broadcasting and alignment * Arithmetic operations can be applied directly to `Dataset` and `DataArray` objects.* Labels are preserved, and DataArray dimensions are automatically aligned during the operation. Broadcasting
###Code
import numpy as np
import xarray as xr
a = xr.DataArray(np.arange(3), dims='time',
coords={'time':np.arange(3)})
b = xr.DataArray(np.arange(4), dims='space',
coords={'space':np.arange(4)})
a + b
###Output
_____no_output_____
###Markdown
Question 1: b + a = ? Question 2:```pythonanomaly = air_temp.air - air_temp.air.mean(dim='time')```Is broadcasting used here? Alignment
###Code
atime = np.arange(3)
btime = np.arange(5) + 1
atime, btime
a = xr.DataArray(np.arange(3), dims='time',
coords={'time':atime})
b = xr.DataArray(np.arange(5), dims='time',
coords={'time':btime})
a + b
###Output
_____no_output_____
###Markdown
What if we want to keep all the coordinates?
###Code
# We can also keep all the coordinates
with xr.set_options(arithmetic_join="outer"):
print(a + b)
###Output
<xarray.DataArray (time: 6)>
array([nan, 1., 3., nan, nan, nan])
Coordinates:
* time (time) int64 0 1 2 3 4 5
###Markdown
Note: points without a matching coordinate are set to `nan` Masking with `.where()`
###Code
means = air_temp.air.mean(dim=['time'])
means.where(means > 273.15).plot()
###Output
_____no_output_____
###Markdown
8. Grouping (Groupby) xarray supports pandas-style "group by" operations that implement split-apply-combine: - split the data into independent groups according to some rule- apply a function to each group- combine these groups into a single data object. Groupby operations work on both Dataset and DataArray objects.
###Code
air_temp.air.mean(dim=['lat','lon']).groupby('time.season').mean()
# air_temp.air.groupby('time.season').mean().mean(dim=['lat','lon'])
clim = air_temp.air.groupby('time.month').mean('time')
clim
###Output
_____no_output_____
###Markdown
We can also apply arithmetic operations to the split groups
###Code
anomalies = air_temp.air.groupby('time.month') - clim
anomalies
###Output
_____no_output_____
###Markdown
Note that xarray automatically adjusted the month dimension of clim to match the time dimension of air. This is one of the advantages of xarray's "broadcasting": dimensions are matched automatically by name, so unlike numpy we do not need reshape to change the array shape or to insert a length-1 dimension.
###Code
anomalies.plot()
anomalies.sel(time='2013-12').plot()
anomalies.sel(time='2014-12-01T00:00:00').plot(center=0)
###Output
_____no_output_____
###Markdown
Note how `anomalies` here differs from the `anomalies` of the previous lecture. Resample Resampling converts a time series to a new time interval
###Code
tmin = air_temp.air.resample(time='1D').min() # Resample to one day '1D
tmax = air_temp.air.resample(time='1D').max()
(tmin.sel(time='2013-08-01')-273.15).plot()
ds_extremes = xr.Dataset({'tmin': tmin, 'tmax': tmax})
ds_extremes
ds_extremes.to_dataframe().head()
###Output
_____no_output_____
###Markdown
What if we want to compute seasonal mean temperatures, but with seasons counted from January instead of December?
###Code
tmean = air_temp.air.resample(time='QS-JAN').mean()
tmean
###Output
_____no_output_____
###Markdown
We can go one step further and compute the long-term seasonal mean (i.e. the seasonal climatology):
###Code
tmean.groupby('time.month').mean()
###Output
_____no_output_____
###Markdown
9. Rolling xarray supports rolling-window operations
###Code
air_roll = air_temp.air.rolling(time=3,center=True)
air_roll
air_roll.mean()
###Output
_____no_output_____
###Markdown
`Xarray` Data Processing (2)--------------------------Lecturer: Li Xianxiang, School of Atmospheric Sciences
###Code
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#xr.set_options(display_style="html")
%matplotlib inline
#air_temp = xr.tutorial.load_dataset('air_temperature')
air_temp = xr.open_dataset('air_temperature.nc')
###Output
_____no_output_____
###Markdown
7. Broadcasting and alignment * Arithmetic operations can be applied directly to `Dataset` and `DataArray` objects.* Labels are preserved, and DataArray dimensions are automatically aligned during the operation. Broadcasting
###Code
import numpy as np
import xarray as xr
a = xr.DataArray(np.arange(3), dims='time',
coords={'time':np.arange(3)})
b = xr.DataArray(np.arange(4), dims='space',
coords={'space':np.arange(4)})
a + b
###Output
_____no_output_____
###Markdown
Question 1: b + a = ? Question 2:```pythonanomaly = air_temp.air - air_temp.air.mean(dim='time')```Is broadcasting used here? Alignment
###Code
atime = np.arange(3)
btime = np.arange(5) + 1
atime, btime
a = xr.DataArray(np.arange(3), dims='time',
coords={'time':atime})
b = xr.DataArray(np.arange(5), dims='time',
coords={'time':btime})
a + b
###Output
_____no_output_____
###Markdown
What if we want to keep all the coordinates?
###Code
# We can also keep all the coordinates
with xr.set_options(arithmetic_join="outer"):
print(a + b)
###Output
<xarray.DataArray (time: 6)>
array([nan, 1., 3., nan, nan, nan])
Coordinates:
* time (time) int64 0 1 2 3 4 5
###Markdown
Note: points without a matching coordinate are set to `nan` Masking with `.where()`
###Code
means = air_temp.air.mean(dim=['time'])
means.where(means > 273.15).plot()
###Output
_____no_output_____
###Markdown
8. Grouping (Groupby) xarray supports pandas-style "group by" operations that implement split-apply-combine: - split the data into independent groups according to some rule- apply a function to each group- combine these groups into a single data object. Groupby operations work on both Dataset and DataArray objects.
###Code
air_temp.air.mean(dim=['lat','lon']).groupby('time.season').mean()
# air_temp.air.groupby('time.season').mean().mean(dim=['lat','lon'])
clim = air_temp.air.groupby('time.month').mean('time')
clim
###Output
_____no_output_____
###Markdown
We can also apply arithmetic operations to the split groups
###Code
anomalies = air_temp.air.groupby('time.month') - clim
anomalies
###Output
_____no_output_____
###Markdown
Note that xarray automatically adjusted the month dimension of clim to match the time dimension of air. This is one of the advantages of xarray's "broadcasting": dimensions are matched automatically by name, so unlike numpy we do not need reshape to change the array shape or to insert a length-1 dimension.
###Code
anomalies.plot()
anomalies.sel(time='2013-12').plot()
anomalies.sel(time='2014-12-01T00:00:00').plot(center=0)
###Output
_____no_output_____
###Markdown
Note how `anomalies` here differs from the `anomalies` of the previous lecture. Resample Resampling converts a time series to a new time interval
###Code
tmin = air_temp.air.resample(time='1D').min() # Resample to one day '1D
tmax = air_temp.air.resample(time='1D').max()
(tmin.sel(time='2013-08-01')-273.15).plot()
ds_extremes = xr.Dataset({'tmin': tmin, 'tmax': tmax})
ds_extremes
ds_extremes.to_dataframe().head()
###Output
_____no_output_____
###Markdown
What if we want to compute seasonal mean temperatures, but with seasons counted from January instead of December?
###Code
tmean = air_temp.air.resample(time='QS-JAN').mean()
tmean
###Output
_____no_output_____
###Markdown
We can go one step further and compute the long-term seasonal mean (i.e. the seasonal climatology):
###Code
tmean.groupby('time.month').mean()
###Output
_____no_output_____
###Markdown
9. Rolling xarray supports rolling-window operations
###Code
ctime = np.arange(10) + 1
c = xr.DataArray(np.arange(10), dims='time',
coords={'time':ctime})
rol = c.rolling(time=3)
#rol
for e in rol:
print(e)
air_roll = air_temp.air.rolling(time=3) #,center=True)
air_roll
air_roll.mean()
###Output
_____no_output_____ |
ipython/combustion_model_and_ignition_delay_demo.ipynb | ###Markdown
Combustion Model and Ignition Delay DemoWritten by A. Mark Payne and Alon Grinberg Dana for presentation at ICCK 2017 User input:
###Code
fuel = 'OC'
equivalence_ratio = 1.0
temperature = 1500.0 # (K)
pressure = 1.0 # (atm)
sim_time = 2 # (ms)
top_sens = 10 # number of top sensitive reactions and thermo to display
rmgpy_path = '../rmg.py' # Change to your rmg.py path
from IPython.display import display, Image
from rmgpy.molecule import Molecule
fuel_molecule = Molecule(smiles=fuel)
print("The fuel molecule is:")
display(fuel_molecule)
###Output
_____no_output_____
###Markdown
RMG's input file:
###Code
fuel_molecule = Molecule(smiles=fuel)
nC = int(fuel_molecule.get_num_atoms('C'))
nH = int(fuel_molecule.get_num_atoms('H'))
nO = int(fuel_molecule.get_num_atoms('O'))
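# Fuel-to-O2 molar ratio at equivalence ratio phi: complete oxidation of CxHyOz consumes (x + y/4 - z/2) mol O2 per mol fuel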
fuel_stoich = equivalence_ratio/(nC+(nH/4.0)-(nO/2.0))
input_file = f'''
# Data sources
database(
thermoLibraries = ['BurkeH2O2','primaryThermoLibrary','thermo_DFT_CCSDTF12_BAC','DFT_QCI_thermo','FFCM1(-)'],
reactionLibraries = ['BurkeH2O2inN2','FFCM1(-)'],
seedMechanisms = [],
kineticsDepositories = ['training'],
kineticsFamilies = 'default',
kineticsEstimator = 'rate rules',
)
# List of species
species(
label='fuel',
reactive=True,
structure=SMILES('{fuel}'),
)
species(
label='O2',
reactive=True,
structure=SMILES('[O][O]'),
)
species(
label='N2',
reactive=False,
structure=SMILES('N#N'),
)
species(
label='OH',
reactive=True,
structure=SMILES('[OH]'),
)
# Reaction system
simpleReactor(
temperature=({temperature!s},'K'),
pressure=({pressure!s},'atm'),
initialMoleFractions={{
'fuel': {fuel_stoich!s},
'O2': 1,
'N2': 3.76,
}},
terminationTime=({sim_time/1000.0},'s'),
sensitivity=['OH'],
sensitivityThreshold=0.01,
)
simulator(
atol=1e-16,
rtol=1e-8,
sens_atol=1e-6,
sens_rtol=1e-4,
)
model(
toleranceKeepInEdge=0,
toleranceMoveToCore=0.1,
toleranceInterruptSimulation=0.1,
maximumEdgeSpecies=100000,
filterReactions=True,
maxNumObjsPerIter=2,
terminateAtMaxObjects=True,
maxNumSpecies=50,
)
#pressureDependence(
# method='modified strong collision',
# maximumGrainSize=(0.5,'kcal/mol'),
# minimumNumberOfGrains=250,
# temperatures=(298,2500,'K',10),
# pressures=(0.5,3,'bar',5),
# interpolation=('Chebyshev', 6, 4),
# maximumAtoms=16,
#)
options(
units='si',
generateOutputHTML=True,
generatePlots=False,
saveEdgeSpecies=False,
saveSimulationProfiles=True,
)
generatedSpeciesConstraints(
allowed=['input species','seed mechanisms','reaction libraries'],
maximumCarbonAtoms=5,
maximumOxygenAtoms=2,
maximumNitrogenAtoms=0,
maximumSiliconAtoms=0,
maximumSulfurAtoms=0,
maximumHeavyAtoms=6,
maximumRadicalElectrons=2,
allowSingletO2=False,
)
'''
import os
import shutil
directory = './rmg_demo'
if os.path.exists(directory):
shutil.rmtree(directory)
os.mkdir(directory)
input_path = os.path.join(directory, 'input.py')
with open(input_path,'w') as f:
f.write(input_file)
print('Created RMG input file at ' + os.path.abspath(input_path))
###Output
_____no_output_____
###Markdown
Run RMG:
###Code
import time
import datetime
import subprocess
start = time.time()
# Execute RMG job
subprocess.check_call(['python', rmgpy_path, input_path])
end = time.time()
print('Total simulation time: ' + str(datetime.timedelta(seconds=round(end-start))))
with open(os.path.join(directory, 'RMG.log'),'r') as f:
begin = False
for line in f:
if 'MODEL GENERATION COMPLETED' in line:
begin = True
if begin:
print(line.strip())
###Output
_____no_output_____
###Markdown
Run the generated model (using RMG's Cantera functions):
###Code
from rmgpy.chemkin import load_chemkin_file
from rmgpy.tools.canteraModel import Cantera, get_rmg_species_from_user_species
from rmgpy.species import Species
import time
chem_path = os.path.join(directory, 'chemkin')
species_list, reaction_list = load_chemkin_file(os.path.join(chem_path, 'chem_annotated.inp'),
os.path.join(chem_path, 'species_dictionary.txt'),
os.path.join(chem_path, 'tran.dat'))
fuel_species = Species(smiles=fuel)
O2_species = Species(smiles='[O][O]')
N2_species = Species(smiles='N#N')
OH_species = Species(smiles='[OH]')
species_dict = get_rmg_species_from_user_species([fuel_species, O2_species, N2_species, OH_species], species_list)
reactor_type_list = ['IdealGasReactor']
reaction_time_list = ([sim_time], 'ms')
mol_frac_list=[{species_dict[fuel_species]: fuel_stoich,
species_dict[O2_species]: 1,
species_dict[N2_species]: 3.76}]
T_list = ([temperature],'K')
P_list = ([pressure],'atm')
job = Cantera(species_list=species_list, reaction_list=reaction_list, output_directory=directory)
job.load_chemkin_model(os.path.join(chem_path, 'chem_annotated.inp'), os.path.join(chem_path, 'tran.dat'))
job.generate_conditions(reactor_type_list, reaction_time_list, mol_frac_list, T_list, P_list)
alldata = job.simulate()
print("Simulation Completed")
###Output
_____no_output_____
###Markdown
Plot:
###Code
############### Settings ###############
fsize = (8,4) # Change to make the figure fit on your screen
########################################
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from rmgpy.tools import plot as rmg_plot
from operator import itemgetter
%matplotlib notebook
times = alldata[0][0].data
temperatures = alldata[0][1][0].data
pressures = alldata[0][1][1].data
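# Ignition delay is estimated as the time of maximum pressure rise rate (max dP/dt)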
dpdt = (pressures[1:] - pressures[:-1]) / (times[1:] - times[:-1])
idi = next(i for i,d in enumerate(dpdt) if d==max(dpdt))
ign_delay_time = times[idi]
for spc in range(len(alldata[0][1][:])):
if alldata[0][1][spc].label == str(species_dict[fuel_species]):
Fuel_idx = spc
if alldata[0][1][spc].label == str(species_dict[OH_species]):
OH_idx = spc
for i in range(len(alldata[0][1][Fuel_idx].data)):
if alldata[0][1][Fuel_idx].data[i]<0.001:
Fuel_Depletion_Time = times[i]
break
files = os.listdir(os.path.join(directory, 'solver'))
sensitivity_file = [f for f in files if ('sensitivity' in f) and ('.csv' in f)][0]
SA_time, SA_data = rmg_plot.parse_csv_data(os.path.join(directory, 'solver', sensitivity_file))
time_error = 1
for i in range(len(SA_time.data)):
if abs(SA_time.data[i]-ign_delay_time)<time_error:
ign_delay_idx = i
time_error = abs(SA_time.data[i]-ign_delay_time)
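# Split the sensitivity columns into kinetics (rate coefficient) and thermo (G) entries;
# Gidx marks the index of the first thermo-sensitivity column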
Gidx = 0
for i in range(len(SA_data[:])):
if "G" in SA_data[i].label:
if not Gidx:
Gidx = i
SA_data[i].label = SA_data[i].label.split('G')[1][1:-1]
else:
SA_data[i].label = SA_data[i].label.split(' ')[1]
rank1 = []
for n in range(Gidx):
rank1.append(abs(SA_data[n].data[ign_delay_idx])) # list of max SA range for each rxn
num1 = np.linspace(0,len(rank1)-1,len(rank1)) # will be used to get the order of reactions by rank1
num1 = zip(rank1,num1)
num1 = sorted(num1, key=itemgetter(0),reverse=True)
SA_k_data = []
SA_k_label = []
for i in range(min(top_sens, Gidx)):
SA_k_data.append(SA_data[int(num1[i][1])].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_k_label.append(SA_data[int(num1[i][1])].label)
rank2 = []
for n in range(len(SA_data)-Gidx):
rank2.append(abs(SA_data[n+Gidx].data[ign_delay_idx])) # list of max SA range for each rxn
num2 = np.linspace(0,len(rank2)-1,len(rank2)) # will be used to get the order of reactions by rank1
num2 = zip(rank2,num2)
num2 = sorted(num2, key=itemgetter(0),reverse=True)
SA_G_data = []
SA_G_label = []
for i in range(min(top_sens, len(SA_data)-Gidx)):
SA_G_data.append(SA_data[int(num2[i][1])+Gidx].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_G_label.append(SA_data[int(num2[i][1])+Gidx].label)
print("Ignition delay time is {0:.4f} ms".format(ign_delay_time * 1000))
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
plt.rcParams['figure.autolayout'] = True
plt.style.use('ggplot')
plt.style.use('seaborn-pastel')
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, alldata[0][1][Fuel_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{Fuel}$')
plt.title('Fuel profile')
plt.xlim([0,2000*ign_delay_time])
max_oh = max(alldata[0][1][OH_idx].data)
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, alldata[0][1][OH_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{OH}$')
plt.title('OH profile')
plt.xlim([0,2000*ign_delay_time])
plt.arrow(0, alldata[0][1][OH_idx].data[idi], ign_delay_time*1000, 0, width=max_oh*0.01, head_width=max_oh*0.05, head_length=ign_delay_time*120, length_includes_head=True, color='r', shape='full')
plt.annotate(r'$Ignition Delay: \tau_{ign}$', xy=(0,0), xytext=(0, alldata[0][1][OH_idx].data[idi]+0.0005), fontsize=10);
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, temperatures,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('Temperature (K)')
plt.title('Temperature')
plt.xlim([0,2000*ign_delay_time])
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, pressures,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('Pressure (Pa)')
plt.title('Pressure')
plt.xlim([0,2000*ign_delay_time])
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(top_sens, Gidx)), SA_k_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:\ln{k}}$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(top_sens, Gidx)),SA_k_label)
plt.title("[OH] sensitivity to kinetics")
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(top_sens, len(SA_data)-Gidx)), SA_G_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:G_i}$ $[mol/kcal]$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(top_sens, len(SA_data)-Gidx)),SA_G_label)
plt.title("[OH] sensitivity to thermo")
plt.show()
import cantera as ct
gas = ct.Solution(os.path.join(directory, 'cantera', 'chem.cti'))
comp = str(species_dict[fuel_species])+":"+str(fuel_stoich)+","+str(species_dict[O2_species])+":1,"+str(species_dict[N2_species])+":3.76"
gas.TPX = temperature, pressure, comp
reactor = ct.IdealGasConstPressureReactor(gas)
network = ct.ReactorNet([reactor])
network.advance(ign_delay_time)
ROP_C = ct.ReactionPathDiagram(gas, 'C')
from PIL import Image as PILimg
ROP1 = plt.subplot(1,1,1)
dot_file = os.path.join(directory, 'cantera', 'rxnpathC.dot')
img_file = os.path.join(directory, 'cantera', 'rxnpathC.png')
ROP_C.title = 'Reaction path diagram following C'
ROP_C.threshold = 0.01
ROP_C.label_threshold = 0.01
ROP_C.show_details = True
ROP_C.write_dot(dot_file) # write dot file
os.system('dot {0} -Tpng -o{1} -Gdpi=300'.format(dot_file, img_file)) # write png file
fullpath = os.getcwd() + '/' + img_file
display(Image(fullpath))
###Output
_____no_output_____
###Markdown
Combustion Model and Ignition Delay DemoWritten by A. Mark Payne and Alon Grinberg Dana for presentation at ICCK 2017 User input:
###Code
fuel = 'OC'
equivalence_ratio = 1.0
temperature = 1500.0 # (K)
pressure = 1.0 # (atm)
sim_time = 2 # (ms)
top_sens = 10 # number of top sensitive reactions and thermo to display
rmgpy_path = '../rmg.py' # Change to your rmg.py path
from IPython.display import display, Image
from rmgpy.molecule import Molecule
fuel_molecule = Molecule(SMILES=fuel)
print("The fuel molecule is:")
display(fuel_molecule)
###Output
_____no_output_____
###Markdown
RMG's input file:
###Code
fuel_molecule = Molecule(SMILES=fuel)
nC = int(fuel_molecule.getNumAtoms('C'))
nH = int(fuel_molecule.getNumAtoms('H'))
nO = int(fuel_molecule.getNumAtoms('O'))
fuel_stoich = equivalence_ratio/(nC+(nH/4.0)-(nO/2.0))
input_file = '''
# Data sources
database(
thermoLibraries = ['BurkeH2O2','primaryThermoLibrary','thermo_DFT_CCSDTF12_BAC','DFT_QCI_thermo','FFCM1(-)'],
reactionLibraries = ['BurkeH2O2inN2','FFCM1(-)'],
seedMechanisms = [],
kineticsDepositories = ['training'],
kineticsFamilies = 'default',
kineticsEstimator = 'rate rules',
)
# List of species
species(
label='fuel',
reactive=True,
structure=SMILES('''+"'"+fuel+"'"+'''),
)
species(
label='O2',
reactive=True,
structure=SMILES('[O][O]'),
)
species(
label='N2',
reactive=False,
structure=SMILES('N#N'),
)
species(
label='OH',
reactive=True,
structure=SMILES('[OH]'),
)
# Reaction system
simpleReactor(
temperature=('''+str(temperature)+''','K'),
pressure=('''+str(pressure)+''','atm'),
initialMoleFractions={
'fuel': '''+str(fuel_stoich)+''',
'O2': 1,
'N2': 3.76,
},
terminationTime=('''+str(sim_time/1000.0)+''','s'),
sensitivity=['OH'],
sensitivityThreshold=0.01,
)
simulator(
atol=1e-16,
rtol=1e-8,
sens_atol=1e-6,
sens_rtol=1e-4,
)
model(
toleranceKeepInEdge=0,
toleranceMoveToCore=0.1,
toleranceInterruptSimulation=0.1,
maximumEdgeSpecies=100000,
filterReactions=True,
maxNumObjsPerIter=2,
terminateAtMaxObjects=True,
maxNumSpecies=50,
)
#pressureDependence(
# method='modified strong collision',
# maximumGrainSize=(0.5,'kcal/mol'),
# minimumNumberOfGrains=250,
# temperatures=(298,2500,'K',10),
# pressures=(0.5,3,'bar',5),
# interpolation=('Chebyshev', 6, 4),
# maximumAtoms=16,
#)
options(
units='si',
generateOutputHTML=True,
generatePlots=False,
saveEdgeSpecies=False,
saveSimulationProfiles=True,
)
generatedSpeciesConstraints(
allowed=['input species','seed mechanisms','reaction libraries'],
maximumCarbonAtoms=5,
maximumOxygenAtoms=2,
maximumNitrogenAtoms=0,
maximumSiliconAtoms=0,
maximumSulfurAtoms=0,
maximumHeavyAtoms=6,
maximumRadicalElectrons=2,
allowSingletO2=False,
)
'''
import os
import shutil
directory = './rmg_demo'
if os.path.exists(directory):
shutil.rmtree(directory)
os.mkdir(directory)
input_path = os.path.join(directory, 'input.py')
with open(input_path,'w') as f:
f.write(input_file)
print('Created RMG input file at ' + os.path.abspath(input_path))
###Output
_____no_output_____
###Markdown
Run RMG:
###Code
import time
import datetime
import subprocess
start = time.time()
# Execute RMG job
subprocess.check_call(['python', rmgpy_path, input_path])
end = time.time()
print 'Total simulation time: ' + str(datetime.timedelta(seconds=round(end-start)))
with open(os.path.join(directory, 'RMG.log'),'r') as f:
begin = False
for line in f:
if 'MODEL GENERATION COMPLETED' in line:
begin = True
if begin:
print line.strip()
###Output
_____no_output_____
###Markdown
Run the generated model (using RMG's Cantera functions):
###Code
from rmgpy.chemkin import loadChemkinFile
from rmgpy.tools.canteraModel import Cantera, getRMGSpeciesFromUserSpecies
from rmgpy.species import Species
import time
chem_path = os.path.join(directory, 'chemkin')
species_list, reaction_list = loadChemkinFile(os.path.join(chem_path, 'chem_annotated.inp'),
os.path.join(chem_path, 'species_dictionary.txt'),
os.path.join(chem_path, 'tran.dat'))
fuel_species=Species(SMILES=fuel)
O2_species=Species(SMILES='[O][O]')
N2_species=Species(SMILES='N#N')
OH_species=Species(SMILES='[OH]')
species_dict = getRMGSpeciesFromUserSpecies([fuel_species, O2_species, N2_species, OH_species], species_list)
reactor_type_list = ['IdealGasReactor']
reaction_time_list = ([sim_time], 'ms')
mol_frac_list=[{species_dict[fuel_species]: fuel_stoich,
species_dict[O2_species]: 1,
species_dict[N2_species]: 3.76}]
T_list = ([temperature],'K')
P_list = ([pressure],'atm')
job = Cantera(speciesList=species_list, reactionList=reaction_list, outputDirectory=directory)
job.loadChemkinModel(os.path.join(chem_path, 'chem_annotated.inp'), os.path.join(chem_path, 'tran.dat'))
job.generateConditions(reactor_type_list, reaction_time_list, mol_frac_list, T_list, P_list)
alldata = job.simulate()
print("Simulation Completed")
###Output
_____no_output_____
###Markdown
Plot:
###Code
############### Settings ###############
fsize = (8,4) # Change to make the figure fit on your screen
########################################
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from rmgpy.tools import plot as rmg_plot
from operator import itemgetter
%matplotlib notebook
times = alldata[0][0].data
temperatures = alldata[0][1][0].data
pressures = alldata[0][1][1].data
dpdt = (pressures[1:] - pressures[:-1]) / (times[1:] - times[:-1])
idi = next(i for i,d in enumerate(dpdt) if d==max(dpdt))
ign_delay_time = times[idi]
for spc in xrange(len(alldata[0][1][:])):
if alldata[0][1][spc].label == str(species_dict[fuel_species]):
Fuel_idx = spc
if alldata[0][1][spc].label == str(species_dict[OH_species]):
OH_idx = spc
for i in range(len(alldata[0][1][Fuel_idx].data)):
if alldata[0][1][Fuel_idx].data[i]<0.001:
Fuel_Depletion_Time = times[i]
break
files = os.listdir(os.path.join(directory, 'solver'))
sensitivity_file = str(filter(lambda x: ('sensitivity' in x) and ('.csv' in x),files)[0])
SA_time, SA_data = rmg_plot.parseCSVData(os.path.join(directory, 'solver', sensitivity_file))
time_error = 1
for i in range(len(SA_time.data)):
if abs(SA_time.data[i]-ign_delay_time)<time_error:
ign_delay_idx = i
time_error = abs(SA_time.data[i]-ign_delay_time)
Gidx = 0
for i in xrange(len(SA_data[:])):
if "G" in SA_data[i].label:
if not Gidx:
Gidx = i
SA_data[i].label = SA_data[i].label.split('G')[1][1:-1]
else:
SA_data[i].label = SA_data[i].label.split(' ')[1]
rank1 = []
for n in xrange(Gidx):
rank1.append(abs(SA_data[n].data[ign_delay_idx])) # list of max SA range for each rxn
num1 = np.linspace(0,len(rank1)-1,len(rank1)) # will be used to get the order of reactions by rank1
num1 = zip(rank1,num1)
num1 = sorted(num1, key=itemgetter(0),reverse=True)
SA_k_data = []
SA_k_label = []
for i in xrange(min(top_sens, Gidx)):
SA_k_data.append(SA_data[int(num1[i][1])].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_k_label.append(SA_data[int(num1[i][1])].label)
rank2 = []
for n in xrange(len(SA_data)-Gidx):
rank2.append(abs(SA_data[n+Gidx].data[ign_delay_idx])) # list of max SA range for each rxn
num2 = np.linspace(0,len(rank2)-1,len(rank2)) # will be used to get the order of reactions by rank1
num2 = zip(rank2,num2)
num2 = sorted(num2, key=itemgetter(0),reverse=True)
SA_G_data = []
SA_G_label = []
for i in xrange(min(top_sens, len(SA_data)-Gidx)):
SA_G_data.append(SA_data[int(num2[i][1])+Gidx].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_G_label.append(SA_data[int(num2[i][1])+Gidx].label)
print "Ignition delay time is {0:.4f} ms".format(ign_delay_time * 1000)
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
plt.rcParams['figure.autolayout'] = True
plt.style.use('ggplot')
plt.style.use('seaborn-pastel')
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, alldata[0][1][Fuel_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{Fuel}$')
plt.title('Fuel profile')
plt.xlim([0,2000*ign_delay_time])
max_oh = max(alldata[0][1][OH_idx].data)
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, alldata[0][1][OH_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{OH}$')
plt.title('OH profile')
plt.xlim([0,2000*ign_delay_time])
plt.arrow(0, alldata[0][1][OH_idx].data[idi], ign_delay_time*1000, 0, width=max_oh*0.01, head_width=max_oh*0.05, head_length=ign_delay_time*120, length_includes_head=True, color='r', shape='full')
plt.annotate(r'$Ignition Delay: \tau_{ign}$', xy=(0,0), xytext=(0, alldata[0][1][OH_idx].data[idi]+0.0005), fontsize=10);
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, temperatures,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('Temperature (K)')
plt.title('Temperature')
plt.xlim([0,2000*ign_delay_time])
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, pressures,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('Pressure (Pa)')
plt.title('Pressure')
plt.xlim([0,2000*ign_delay_time])
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(top_sens, Gidx)), SA_k_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:\ln{k}}$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(top_sens, Gidx)),SA_k_label)
plt.title("[OH] sensitivity to kinetics")
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(top_sens, len(SA_data)-Gidx)), SA_G_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:G_i}$ $[mol/kcal]$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(top_sens, len(SA_data)-Gidx)),SA_G_label)
plt.title("[OH] sensitivity to thermo")
plt.show()
import cantera as ct
gas = ct.Solution(os.path.join(directory, 'cantera', 'chem.cti'))
comp = str(species_dict[fuel_species])+":"+str(fuel_stoich)+","+str(species_dict[O2_species])+":1,"+str(species_dict[N2_species])+":3.76"
gas.TPX = temperature, pressure, comp
reactor = ct.IdealGasConstPressureReactor(gas)
network = ct.ReactorNet([reactor])
network.advance(ign_delay_time)
ROP_C = ct.ReactionPathDiagram(gas, 'C')
from PIL import Image as PILimg
ROP1 = plt.subplot(1,1,1)
dot_file = os.path.join(directory, 'cantera', 'rxnpathC.dot')
img_file = os.path.join(directory, 'cantera', 'rxnpathC.png')
ROP_C.title = 'Reaction path diagram following C'
ROP_C.threshold = 0.01
ROP_C.label_threshold = 0.01
ROP_C.show_details = True
ROP_C.write_dot(dot_file) # write dot file
os.system('dot {0} -Tpng -o{1} -Gdpi=300'.format(dot_file, img_file)) # write png file
fullpath = os.getcwd() + '/' + img_file
display(Image(fullpath))
###Output
_____no_output_____
###Markdown
Combustion Model and Ignition Delay DemoWritten by A. Mark Payne and Alon Grinberg Dana for presentation at ICCK 2017 User input:
###Code
fuel = 'OC'
equivalence_ratio = 1.0
temperature = 1500.0 # (K)
pressure = 1.0 # (atm)
sim_time = 2 # (ms)
top_sens = 10 # number of top sensitive reactions and thermo to display
rmgpy_path = '../rmg.py' # Change to your rmg.py path
from IPython.display import display, Image
from rmgpy.molecule import Molecule
fuel_molecule = Molecule(smiles=fuel)
print("The fuel molecule is:")
display(fuel_molecule)
###Output
_____no_output_____
###Markdown
RMG's input file:
###Code
fuel_molecule = Molecule(smiles=fuel)
nC = int(fuel_molecule.get_num_atoms('C'))
nH = int(fuel_molecule.get_num_atoms('H'))
nO = int(fuel_molecule.get_num_atoms('O'))
fuel_stoich = equivalence_ratio/(nC+(nH/4.0)-(nO/2.0))
input_file = f'''
# Data sources
database(
thermoLibraries = ['BurkeH2O2','primaryThermoLibrary','thermo_DFT_CCSDTF12_BAC','DFT_QCI_thermo','FFCM1(-)'],
reactionLibraries = ['BurkeH2O2inN2','FFCM1(-)'],
seedMechanisms = [],
kineticsDepositories = ['training'],
kineticsFamilies = 'default',
kineticsEstimator = 'rate rules',
)
# List of species
species(
label='fuel',
reactive=True,
structure=SMILES('{fuel}'),
)
species(
label='O2',
reactive=True,
structure=SMILES('[O][O]'),
)
species(
label='N2',
reactive=False,
structure=SMILES('N#N'),
)
species(
label='OH',
reactive=True,
structure=SMILES('[OH]'),
)
# Reaction system
simpleReactor(
temperature=({temperature!s},'K'),
pressure=({pressure!s},'atm'),
initialMoleFractions={{
'fuel': {fuel_stoich!s},
'O2': 1,
'N2': 3.76,
}},
terminationTime=({sim_time/1000.0},'s'),
sensitivity=['OH'],
sensitivityThreshold=0.01,
)
simulator(
atol=1e-16,
rtol=1e-8,
sens_atol=1e-6,
sens_rtol=1e-4,
)
model(
toleranceKeepInEdge=0,
toleranceMoveToCore=0.1,
toleranceInterruptSimulation=0.1,
maximumEdgeSpecies=100000,
filterReactions=True,
maxNumObjsPerIter=2,
terminateAtMaxObjects=True,
maxNumSpecies=50,
)
#pressureDependence(
# method='modified strong collision',
# maximumGrainSize=(0.5,'kcal/mol'),
# minimumNumberOfGrains=250,
# temperatures=(298,2500,'K',10),
# pressures=(0.5,3,'bar',5),
# interpolation=('Chebyshev', 6, 4),
# maximumAtoms=16,
#)
options(
units='si',
generateOutputHTML=True,
generatePlots=False,
saveEdgeSpecies=False,
saveSimulationProfiles=True,
)
generatedSpeciesConstraints(
allowed=['input species','seed mechanisms','reaction libraries'],
maximumCarbonAtoms=5,
maximumOxygenAtoms=2,
maximumNitrogenAtoms=0,
maximumSiliconAtoms=0,
maximumSulfurAtoms=0,
maximumHeavyAtoms=6,
maximumRadicalElectrons=2,
allowSingletO2=False,
)
'''
import os
import shutil
directory = './rmg_demo'
if os.path.exists(directory):
shutil.rmtree(directory)
os.mkdir(directory)
input_path = os.path.join(directory, 'input.py')
with open(input_path,'w') as f:
f.write(input_file)
print('Created RMG input file at ' + os.path.abspath(input_path))
###Output
_____no_output_____
###Markdown
Run RMG:
###Code
import time
import datetime
import subprocess
start = time.time()
# Execute RMG job
subprocess.check_call(['python', rmgpy_path, input_path])
end = time.time()
print('Total simulation time: ' + str(datetime.timedelta(seconds=round(end-start))))
with open(os.path.join(directory, 'RMG.log'),'r') as f:
begin = False
for line in f:
if 'MODEL GENERATION COMPLETED' in line:
begin = True
if begin:
print(line.strip())
###Output
_____no_output_____
###Markdown
Run the generated model (using RMG's Cantera functions):
###Code
from rmgpy.chemkin import load_chemkin_file
from rmgpy.tools.canteramodel import Cantera, get_rmg_species_from_user_species
from rmgpy.species import Species
import time
chem_path = os.path.join(directory, 'chemkin')
species_list, reaction_list = load_chemkin_file(os.path.join(chem_path, 'chem_annotated.inp'),
os.path.join(chem_path, 'species_dictionary.txt'),
os.path.join(chem_path, 'tran.dat'))
fuel_species = Species(smiles=fuel)
O2_species = Species(smiles='[O][O]')
N2_species = Species(smiles='N#N')
OH_species = Species(smiles='[OH]')
species_dict = get_rmg_species_from_user_species([fuel_species, O2_species, N2_species, OH_species], species_list)
reactor_type_list = ['IdealGasReactor']
reaction_time_list = ([sim_time], 'ms')
mol_frac_list=[{species_dict[fuel_species]: fuel_stoich,
species_dict[O2_species]: 1,
species_dict[N2_species]: 3.76}]
T_list = ([temperature],'K')
P_list = ([pressure],'atm')
job = Cantera(species_list=species_list, reaction_list=reaction_list, output_directory=directory)
job.load_chemkin_model(os.path.join(chem_path, 'chem_annotated.inp'), os.path.join(chem_path, 'tran.dat'))
job.generate_conditions(reactor_type_list, reaction_time_list, mol_frac_list, T_list, P_list)
alldata = job.simulate()
print("Simulation Completed")
###Output
_____no_output_____
###Markdown
Plot:
###Code
############### Settings ###############
fsize = (8,4) # Change to make the figure fit on your screen
########################################
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from rmgpy.tools import plot as rmg_plot
from operator import itemgetter
%matplotlib notebook
times = alldata[0][0].data
temperatures = alldata[0][1][0].data
pressures = alldata[0][1][1].data
dpdt = (pressures[1:] - pressures[:-1]) / (times[1:] - times[:-1])
idi = next(i for i,d in enumerate(dpdt) if d==max(dpdt))
ign_delay_time = times[idi]
for spc in range(len(alldata[0][1][:])):
if alldata[0][1][spc].label == str(species_dict[fuel_species]):
Fuel_idx = spc
if alldata[0][1][spc].label == str(species_dict[OH_species]):
OH_idx = spc
for i in range(len(alldata[0][1][Fuel_idx].data)):
if alldata[0][1][Fuel_idx].data[i]<0.001:
Fuel_Depletion_Time = times[i]
break
files = os.listdir(os.path.join(directory, 'solver'))
sensitivity_file = [f for f in files if ('sensitivity' in f) and ('.csv' in f)][0]
SA_time, SA_data = rmg_plot.parse_csv_data(os.path.join(directory, 'solver', sensitivity_file))
time_error = 1
for i in range(len(SA_time.data)):
if abs(SA_time.data[i]-ign_delay_time)<time_error:
ign_delay_idx = i
time_error = abs(SA_time.data[i]-ign_delay_time)
Gidx = 0
for i in range(len(SA_data[:])):
if "G" in SA_data[i].label:
if not Gidx:
Gidx = i
SA_data[i].label = SA_data[i].label.split('G')[1][1:-1]
else:
SA_data[i].label = SA_data[i].label.split(' ')[1]
rank1 = []
for n in range(Gidx):
rank1.append(abs(SA_data[n].data[ign_delay_idx])) # list of max SA range for each rxn
num1 = np.linspace(0,len(rank1)-1,len(rank1)) # will be used to get the order of reactions by rank1
num1 = zip(rank1,num1)
num1 = sorted(num1, key=itemgetter(0),reverse=True)
SA_k_data = []
SA_k_label = []
for i in range(min(top_sens, Gidx)):
SA_k_data.append(SA_data[int(num1[i][1])].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_k_label.append(SA_data[int(num1[i][1])].label)
rank2 = []
for n in range(len(SA_data)-Gidx):
rank2.append(abs(SA_data[n+Gidx].data[ign_delay_idx])) # list of max SA range for each rxn
num2 = np.linspace(0,len(rank2)-1,len(rank2)) # will be used to get the order of reactions by rank1
num2 = zip(rank2,num2)
num2 = sorted(num2, key=itemgetter(0),reverse=True)
SA_G_data = []
SA_G_label = []
for i in range(min(top_sens, len(SA_data)-Gidx)):
SA_G_data.append(SA_data[int(num2[i][1])+Gidx].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_G_label.append(SA_data[int(num2[i][1])+Gidx].label)
print("Ignition delay time is {0:.4f} ms".format(ign_delay_time * 1000))
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
plt.rcParams['figure.autolayout'] = True
plt.style.use('ggplot')
plt.style.use('seaborn-pastel')
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, alldata[0][1][Fuel_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{Fuel}$')
plt.title('Fuel profile')
plt.xlim([0,2000*ign_delay_time])
max_oh = max(alldata[0][1][OH_idx].data)
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, alldata[0][1][OH_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{OH}$')
plt.title('OH profile')
plt.xlim([0,2000*ign_delay_time])
plt.arrow(0, alldata[0][1][OH_idx].data[idi], ign_delay_time*1000, 0, width=max_oh*0.01, head_width=max_oh*0.05, head_length=ign_delay_time*120, length_includes_head=True, color='r', shape='full')
plt.annotate(r'$Ignition Delay: \tau_{ign}$', xy=(0,0), xytext=(0, alldata[0][1][OH_idx].data[idi]+0.0005), fontsize=10);
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, temperatures,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('Temperature (K)')
plt.title('Temperature')
plt.xlim([0,2000*ign_delay_time])
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, pressures,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('Pressure (Pa)')
plt.title('Pressure')
plt.xlim([0,2000*ign_delay_time])
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(top_sens, Gidx)), SA_k_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:\ln{k}}$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(top_sens, Gidx)),SA_k_label)
plt.title("[OH] sensitivity to kinetics")
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(top_sens, len(SA_data)-Gidx)), SA_G_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:G_i}$ $[mol/kcal]$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(top_sens, len(SA_data)-Gidx)),SA_G_label)
plt.title("[OH] sensitivity to thermo")
plt.show()
import cantera as ct
gas = ct.Solution(os.path.join(directory, 'cantera', 'chem.cti'))
comp = str(species_dict[fuel_species])+":"+str(fuel_stoich)+","+str(species_dict[O2_species])+":1,"+str(species_dict[N2_species])+":3.76"
gas.TPX = temperature, pressure, comp
reactor = ct.IdealGasConstPressureReactor(gas)
network = ct.ReactorNet([reactor])
network.advance(ign_delay_time)
ROP_C = ct.ReactionPathDiagram(gas, 'C')
from PIL import Image as PILimg
ROP1 = plt.subplot(1,1,1)
dot_file = os.path.join(directory, 'cantera', 'rxnpathC.dot')
img_file = os.path.join(directory, 'cantera', 'rxnpathC.png')
ROP_C.title = 'Reaction path diagram following C'
ROP_C.threshold = 0.01
ROP_C.label_threshold = 0.01
ROP_C.show_details = True
ROP_C.write_dot(dot_file) # write dot file
os.system('dot {0} -Tpng -o{1} -Gdpi=300'.format(dot_file, img_file)) # write png file
fullpath = os.getcwd() + '/' + img_file
display(Image(fullpath))
###Output
_____no_output_____
###Markdown
Combustion Model and Ignition Delay DemoWritten by A. Mark Payne and Alon Grinberg Dana for presentation at ICCK 2017 User input:
###Code
Fuel = 'CCO'
Equivalence_Ratio = 1.0
Temperature = 1500.0 # (K)
Pressure = 1.0 # (atm)
simtime = 2 # (ms)
topSA = 10 # number of top sensitive reactions and thermo to display
rmgpy_path = '../rmg.py' # Change to your rmg.py path
from IPython.display import display, Image
from rmgpy.molecule import Molecule
Fuel_Molecule = Molecule().fromSMILES(Fuel)
print("The fuel molecule is:");display(Fuel_Molecule)
###Output
_____no_output_____
###Markdown
RMG's input file:
###Code
Fuel_Molecule = Molecule().fromSMILES(Fuel)
nC = int(Fuel_Molecule.getNumAtoms('C'))
nH = int(Fuel_Molecule.getNumAtoms('H'))
nO = int(Fuel_Molecule.getNumAtoms('O'))
A = str(Equivalence_Ratio/(nC+(nH/4.0)-(nO/2.0)))
Input_file = '''
# Data sources
database(
thermoLibraries = ['BurkeH2O2','primaryThermoLibrary','thermo_DFT_CCSDTF12_BAC','DFT_QCI_thermo','FFCM1(-)','JetSurF2.0'],
reactionLibraries = ['BurkeH2O2inN2','FFCM1(-)','JetSurF2.0'],
seedMechanisms = [],
kineticsDepositories = ['training'],
kineticsFamilies = 'default',
kineticsEstimator = 'rate rules',
)
# List of species
species(
label='fuel',
reactive=True,
structure=SMILES('''+"'"+Fuel+"'"+'''),
)
species(
label='O2',
reactive=True,
structure=SMILES('[O][O]'),
)
species(
label='N2',
reactive=False,
structure=SMILES('N#N'),
)
species(
label='OH',
reactive=True,
structure=SMILES('[OH]'),
)
# Reaction system
simpleReactor(
temperature=('''+str(Temperature)+''','K'),
pressure=('''+str(Pressure)+''','atm'),
initialMoleFractions={
'fuel': '''+A+''',
'O2': 1,
'N2': 3.76,
},
terminationTime=(0.001,'s'),
sensitivity=['OH'],
sensitivityThreshold=0.01,
)
simulator(
atol=1e-16,
rtol=1e-8,
sens_atol=1e-6,
sens_rtol=1e-4,
)
model(
toleranceKeepInEdge=0,
toleranceMoveToCore=0.05,
toleranceInterruptSimulation=0.05,
maximumEdgeSpecies=300000
)
#pressureDependence(
# method='modified strong collision',
# maximumGrainSize=(0.5,'kcal/mol'),
# minimumNumberOfGrains=250,
# temperatures=(298,2500,'K',10),
# pressures=(0.5,3,'bar',5),
# interpolation=('Chebyshev', 6, 4),
# maximumAtoms=16,
#)
options(
units='si',
generateOutputHTML=True,
generatePlots=False,
saveEdgeSpecies=False,
saveSimulationProfiles=True,
saveRestartPeriod=None,
)
generatedSpeciesConstraints(
allowed=['input species','seed mechanisms','reaction libraries'],
maximumCarbonAtoms=5,
maximumOxygenAtoms=2,
maximumNitrogenAtoms=0,
maximumSiliconAtoms=0,
maximumSulfurAtoms=0,
maximumHeavyAtoms=6,
maximumRadicalElectrons=2,
allowSingletO2=False,
)
'''
import os
if not os.path.exists('RMG'):
os.mkdir('RMG')
os.system('rm -r RMG/*')
with open('RMG/Demo.py','w') as RMG_Input_File:
RMG_Input_File.write(Input_file)
print("Created RMG input file")
###Output
_____no_output_____
###Markdown
Run RMG:
###Code
os.system('python {0} RMG/Demo.py'.format(rmgpy_path))
print("RMG Simulation Completed. Summary of log file:\n")
RMG_log = open('RMG/RMG.log','r').readlines()
lines = [x for x in RMG_log[-13:-1] if x != "\n"]
for line in lines: print(line)
###Output
_____no_output_____
###Markdown
Run the generated model (using RMG's Cantera functions):
###Code
from rmgpy.chemkin import *
from rmgpy.tools.canteraModel import *
from rmgpy.species import Species
import time
path = "RMG/chemkin/"
speciesList, reactionList = loadChemkinFile(path+'chem_annotated.inp',
path+'species_dictionary.txt',
path+'tran.dat')
nC = int(Fuel_Molecule.getNumAtoms('C'))
nH = int(Fuel_Molecule.getNumAtoms('H'))
nO = int(Fuel_Molecule.getNumAtoms('O'))
phi = Equivalence_Ratio
FuelStoich = phi/(nC+(nH/4.0)-(nO/2.0))
Fuel_Species=Species().fromSMILES(Fuel)
O2_Species=Species().fromSMILES('[O][O]')
N2_Species=Species().fromSMILES('N#N')
OH_Species=Species().fromSMILES('[OH]')
species_dict = getRMGSpeciesFromUserSpecies([Fuel_Species,O2_Species,N2_Species,OH_Species], speciesList)
reactorTypeList = ['IdealGasReactor']
reactionTimeList = ([simtime], 'ms')
molFracList=[{species_dict[Fuel_Species]: FuelStoich,
species_dict[O2_Species]: 1,
species_dict[N2_Species]: 3.76}]
Tlist = ([Temperature],'K')
Plist = ([Pressure],'atm')
job = Cantera(speciesList=speciesList, reactionList=reactionList, outputDirectory='')
job.loadChemkinModel(path+'chem_annotated.inp',transportFile=path+'tran.dat')
job.generateConditions(reactorTypeList, reactionTimeList, molFracList, Tlist, Plist)
alldata = job.simulate()
print("Done.")
###Output
_____no_output_____
###Markdown
Plot:
###Code
############### Settings ###############
fsize = (8,4) # Change to make the figure fit on your screen
########################################
import matplotlib.pyplot as plt
import pandas as pd
from rmgpy.tools import plot as rmg_plot
from operator import itemgetter
from rmgpy.tools.sensitivity import runSensitivity
%matplotlib notebook
times = alldata[0][0].data
pressures = alldata[0][1][1].data
dpdt = (pressures[1:] - pressures[:-1]) / (times[1:] - times[:-1])
idi = next(i for i,d in enumerate(dpdt) if d==max(dpdt))
ign_delay_time = times[idi]
for spc in xrange(len(alldata[0][1][:])):
if alldata[0][1][spc].label == str(species_dict[Fuel_Species]):
Fuel_idx = spc
if alldata[0][1][spc].label == str(species_dict[OH_Species]):
OH_idx = spc
for i in range(len(alldata[0][1][Fuel_idx].data)):
if alldata[0][1][Fuel_idx].data[i]<0.001:
Fuel_Depletion_Time = times[i]
break
files = os.listdir('RMG/solver')
sensitivity_file = str(filter(lambda x: ('sensitivity' in x) and ('.csv' in x),files)[0])
SA_time, SA_data = rmg_plot.parseCSVData('RMG/solver/'+sensitivity_file)
time_error = 1
for i in range(len(SA_time.data)):
if abs(SA_time.data[i]-ign_delay_time)<time_error:
ign_delay_idx = i
time_error = abs(SA_time.data[i]-ign_delay_time)
Gidx = 0
for i in xrange(len(SA_data[:])):
if "G" in SA_data[i].label:
if not Gidx:
Gidx = i
SA_data[i].label = SA_data[i].label.split('G')[1][1:-1]
else:
SA_data[i].label = SA_data[i].label.split(' ')[1]
rank1 = []
for n in xrange(Gidx):
rank1.append(abs(SA_data[n].data[ign_delay_idx])) # list of max SA range for each rxn
num1 = np.linspace(0,len(rank1)-1,len(rank1)) # will be used to get the order of reactions by rank1
num1 = zip(rank1,num1)
num1 = sorted(num1, key=itemgetter(0),reverse=True)
SA_k_data = []
SA_k_label = []
for i in xrange(min(topSA, Gidx)):
SA_k_data.append(SA_data[int(num1[i][1])].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_k_label.append(SA_data[int(num1[i][1])].label)
rank2 = []
for n in xrange(len(SA_data)-Gidx):
rank2.append(abs(SA_data[n+Gidx].data[ign_delay_idx])) # list of max SA range for each rxn
num2 = np.linspace(0,len(rank2)-1,len(rank2)) # will be used to get the order of reactions by rank1
num2 = zip(rank2,num2)
num2 = sorted(num2, key=itemgetter(0),reverse=True)
SA_G_data = []
SA_G_label = []
for i in xrange(min(topSA, len(SA_data)-Gidx)):
SA_G_data.append(SA_data[int(num2[i][1])+Gidx].data[ign_delay_idx]) # make sorted lists size topSA of SA values and rxns labels
SA_G_label.append(SA_data[int(num2[i][1])+Gidx].label)
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
plt.rcParams['figure.autolayout'] = True
plt.style.use('ggplot')
plt.style.use('seaborn-pastel')
fig = plt.figure(figsize=fsize)
plt.subplot(1,2,1)
plt.plot(alldata[0][0].data*1000, alldata[0][1][Fuel_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{Fuel}$')
plt.title('Fuel profile')
plt.xlim([0,2000*ign_delay_time])
plt.subplot(1,2,2)
plt.plot(alldata[0][0].data*1000, alldata[0][1][OH_idx].data,'-o')
plt.xlabel('Time (ms)')
plt.ylabel('$Y_{OH}$')
plt.title('OH profile')
plt.xlim([0,2000*ign_delay_time])
plt.arrow(0, alldata[0][1][OH_idx].data[idi], ign_delay_time*1000, 0, width=0.0001, head_width=0.0005, head_length=0.001, length_includes_head=True, color='r', shape='full')
plt.annotate(r'$Ignition Delay: \tau_{ign}$', xy=(0,0), xytext=(0, alldata[0][1][OH_idx].data[idi]+0.0005), fontsize=10);
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(topSA, Gidx)), SA_k_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:\ln{k}}$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(topSA, Gidx)),SA_k_label)
plt.title("[OH] sensitivity to kinetics")
fig = plt.figure(figsize=fsize)
plt.barh(np.arange(min(topSA, len(SA_data)-Gidx)), SA_G_data, 1/1.5, color="blue")
plt.gca().invert_yaxis()
plt.xlabel(r'Sensitivity: $\frac{\partial\:\ln{[OH]}}{\partial\:G_i}$ $[mol/kcal]$');
plt.rcParams.update({'axes.labelsize': 20})
plt.yticks(np.arange(min(topSA, len(SA_data)-Gidx)),SA_G_label)
plt.title("[OH] sensitivity to thermo")
plt.show()
gas = ct.Solution('RMG/cantera/chem.cti')
comp = str(species_dict[Fuel_Species])+":"+str(FuelStoich)+","+str(species_dict[O2_Species])+":1,"+str(species_dict[N2_Species])+":3.76"
gas.TPX = Temperature, Pressure, comp
reactor = ct.IdealGasConstPressureReactor(gas)
network = ct.ReactorNet([reactor])
network.advance(ign_delay_time)
ROP_C = ct.ReactionPathDiagram(gas, 'C')
from PIL import Image as PILimg
ROP1 = plt.subplot(1,1,1)
dot_file = 'RMG/cantera/rxnpathC.dot'
img_file = 'RMG/cantera/rxnpathC.png'
ROP_C.title = 'Reaction path diagram following C'
ROP_C.threshold = 0.01
ROP_C.label_threshold = 0.01
ROP_C.show_details = True
ROP_C.write_dot(dot_file) # write dot file
os.system('dot {0} -Tpng -o{1} -Gdpi=300'.format(dot_file, img_file)) # write png file
fullpath = os.getcwd() + '/' + img_file
Image(fullpath)
###Output
_____no_output_____ |
teaching_material/session_10/module_10_solution.ipynb | ###Markdown
Session 10: Introduction to modeling and machine learningIn this combined teaching module and exercise set you will get an introduction to modeling using data. We proceed with introducing machine learning, and you will get your first taste of how machine learning algorithms are constructed. You will implement a [_perceptron_](https://en.wikipedia.org/wiki/Perceptron) from scratch using the matrix-algebra library NumPy. We will train this model on the iris data to predict flower types. Many of the concepts, both programming-wise and related to machine learning, are probably new to most of you - don't be afraid to ask questions about either, as much of this lecture/exercise set lays the foundation for the upcoming sessions. The structure of the notebook is that the beginning will contain a lot of lecturing material. However, towards the end you will find a few exercises. To a few of you, there may be some new mathematical terms - I have tried to provide some references where you can study these more. However, the focus should be on understanding the high-level concepts rather than the mathematical details. Raschka's chapter 2 is also an excellent companion for this module. Modeling and machine learning.In the video below we introduce modeling. We focus on problems where for some given input we want to make a model of some output/target data (which can be anything). See the video below where the concepts are introduced.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('BTsgia9goJA', width=640, height=360)
###Output
_____no_output_____
###Markdown
Trade-offs in modelingWe proceed with showcasing some of the inherent trade-offs in modeling. When estimating models, at some point we face a dilemma between two disadvantages in prediction - underfitting and overfitting. The video below demonstrates using real data how these two problems can arise. In the video I talk about some mathematical concepts; here are some links if you want a review: [derivative](https://en.wikipedia.org/wiki/Derivative), [polynomial](https://en.wikipedia.org/wiki/Polynomial), [Taylor expansion/series](https://en.wikipedia.org/wiki/Taylor_series).
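A minimal synthetic sketch of the same trade-off (my own illustration, not the data or code from the video), assuming only `numpy`: fit polynomials of increasing degree and compare the error on points held out of the fit.

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(-3, 3, 40)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)       # noisy target

x_tr, y_tr, x_te, y_te = x[:30], y[:30], x[30:], y[30:]  # 30 points to fit, 10 held out

for degree in (1, 3, 9):
    coefs = np.polyfit(x_tr, y_tr, degree)                # least-squares polynomial fit
    mse_tr = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
    print(f'degree {degree}: train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}')
```

A very flexible polynomial can drive the training error down while the held-out error grows, which is exactly the overfitting problem discussed in the video.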
###Code
YouTubeVideo('6WdAfFadgkY', width=640, height=360)
###Output
_____no_output_____
###Markdown
Machine learning - essential conceptsWe are now ready to introduce the main terminology of machine learning. Basically there are two big problems that machine learning attempts to solve - supervised and unsupervised learning. Watch the video below for an introduction to these concepts and an overview of the machine learning methods we will work with in this course.
###Code
YouTubeVideo('c6wUs7QYea4', width=640, height=360)
###Output
_____no_output_____
###Markdown
Supervised learning conceptsIn the rest of this course and this notebook we dive deeper into supervised machine learning models. In the next video we hear more about the main problems and establish the terminology.
###Code
YouTubeVideo('6cdy9txTQIM', width=640, height=360)
###Output
_____no_output_____
###Markdown
The perceptron modelThe first supervised learning model we will introduce is an old model. We will learn about it because it is simple enough to grasp how it works, and we will use it to build intuition for more advanced models. The video below introduces the model theoretically with mathematics. Parts of the talk will use matrices to make computations, thus you may want to re-familiarize yourself with [matrix multiplication](https://en.wikipedia.org/wiki/Matrix_multiplication) before starting.
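For reference, the model in compact notation (the standard perceptron formulation, matching Raschka, 2017 and the exercises below), with learning rate $\eta$ and one update per training example $(x^{(i)}, y^{(i)})$:

$$z = w_0 + \sum_k w_k x_k, \qquad \hat{y} = \phi(z) = \begin{cases} 1 & \text{if } z \geq 0 \\ -1 & \text{otherwise} \end{cases}$$

$$w_k \leftarrow w_k + \eta\,\big(y^{(i)} - \hat{y}^{(i)}\big)\, x_k^{(i)}, \qquad w_0 \leftarrow w_0 + \eta\,\big(y^{(i)} - \hat{y}^{(i)}\big)$$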
###Code
YouTubeVideo('p4_MxERHo_8', width=640, height=360)
###Output
_____no_output_____
###Markdown
Implementing and using the model in PythonWe now implement our model in Python. We make the implementation using vector notation, as you will implement in a simpler version below :). The video also shows how we can use others' code, in this case Raschka's implementation of the Perceptron. You can see where Raschka's code is loaded by checking out the slides.
###Code
YouTubeVideo('QvY_KTZXfh0', width=640, height=360)
###Output
_____no_output_____
###Markdown
Validation of modelWe want to have a credible measure of model performance. In this video I talk about a simple approach to getting such a measure for cross-section/static data (i.e. not time series).
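As a concrete sketch of the holdout idea (my own illustration, relying on the `X`, `y`, `predict` and `accuracy` objects defined in the exercises below): shuffle the rows once, fit on one part, and report accuracy only on the part the model never saw.

```python
import numpy as np

rng = np.random.RandomState(0)
idx = rng.permutation(len(X))              # one random shuffle of the row indices
train_idx, test_idx = idx[:70], idx[70:]   # e.g. a 70/30 split

X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

# ...train the weights w on (X_train, y_train) only...
# test_accuracy = accuracy(y_test, predict(X_test, w))
```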
###Code
YouTubeVideo('9KNJZbFGmMc', width=640, height=360)
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.1:** The mathematics and biological reasoning which justify the perceptron model are presented in Raschka, 2017 on pages 18 to 24. If you haven't read it already, quickly do so. >> Begin by importing `numpy`, `pandas` and `seaborn`
###Code
# [Answer to Ex. 10.1.1]
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.2:** Use the following code snippet to load the iris data. The code will create two new variables **X** and **y**, each of which is a numpy array. Split the data as follows. The first dataset should contain the first 70 rows; we call this sample our *training dataset*, or simply *train data*. We use the training data to estimate the model. We use the remaining rows as data for testing our model, thus we call it *test data*. >>```python iris = sns.load_dataset('iris')iris = iris.query("species == 'virginica' | species == 'versicolor'").sample(frac=1, random_state = 3)X = np.array(iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']])y = np.array(iris['species'].map({'virginica': 1, 'versicolor': -1}))sns.pairplot(iris, hue="species", palette="husl", diag_kws = {'shade': False})plt.show()```
###Code
# [Answer to Ex. 10.1.2]
iris = sns.load_dataset('iris')
iris = iris.query("species == 'virginica' | species == 'versicolor'").sample(frac = 1, random_state = 3)
X = np.array(iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']])
y = np.array(iris['species'].map({'virginica': 1, 'versicolor': -1}))
sns.pairplot(iris, hue="species", palette="husl", diag_kws = {'shade': False})
plt.show()
# A very simple deterministic test-train split
Xtrain = X[:70]
ytrain = y[:70]
Xtest = X[70:]
ytest = y[70:]
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.3:** Write a function which initiates a set of weights `w` with length 1 larger than the number of features in your data. Ensure that your initial weights are not exactly 0, but close to it. >>> _Hint 1:_ Use [np.random.RandomState](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.RandomState.html) to set up a random number generator from which you can draw from a normal with mean 0 and scale 0.01. >>> _Hint 2:_ Say you have stored the random number generator in an object called `rgen`. You can then call `rgen.normal(size = 1 + columns_in_X)` to get the weights you want. You might want to tweak the `scale` parameter.
###Code
# [Answer to Ex. 10.1.3]
def random_weights(location = 0.0, scale = 0.01, seed = 1):
# Init random number generator
rgen = np.random.RandomState(seed)
w = rgen.normal(loc=location, scale=scale, size= 1 + X.shape[1])
return w
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.4:** In this problem you need to write two functions:> * `net_input(X, W)`: calculates _and returns_ the net-input, i.e the linear combination of features and weights, $z=w_0 + \sum_k x_{k} w_{k}$> * `predict(X, W)`: a step function which returns 1 if the net activation is $\geq$ 0, and returns -1 otherwise. >>*Bonus:* Create a function which calculates the _accuracy_ (the share of cases that are correctly classified). The function should take a vector of y-values and a vector of predicted y-values as input. What is the accuracy of your untrained model on the training data?>> _Hint 1:_ you can compute the above using an array product. Here numpy's array product named `dot` may be useful>> _Hint 2:_ remember to include the bias, $w_0$, in the computation!
###Code
# [Answer to Ex. 10.1.4]
def net_input(X, W):
return np.dot(X, W[1:]) + W[0] # Linear product X'W + bias
def predict(X, W):
linProd = net_input(X, W)
return np.where(linProd >= 0.0, 1, -1) # 1(linProd > 0)
# Bonus
def accuracy(y, prediction):
return np.mean(y == prediction)
accuracy(ytrain, predict(Xtrain, random_weights()))
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.5:** Write a function which loops over the training data (both X and y) using `zip`. For each row in the data, update the weights according to the perceptron rule (remember to update the bias in `w[0]`!). Set $\eta = 0.1$.>> Make sure the loop stores the total number of prediction errors encountered along the way by creating an `int` which is incremented whenever you update the weights. >>> _Hint:_ your function should return the updated weights, as well as the number of errors made by the perceptron.>>> _Hint:_ The following code block implements the function in pseudocode (it won't run, but serves to communicate the functionality).>> ```>> function f(X, y, W, eta):>> set errors = 0>>>> for each pair xi, yi in zip(X,y) do:>> set update = eta * (yi - predict(xi, W))>> set W[1:] = W[1:] + update * xi>> set W[0] = W[0] + update>> set errors = errors + int(update != 0) >>>> return W, errors>> ```>> *Bonus:* If you completed the previous bonus exercise (for 10.1.4), calculate the accuracy on training data using the updated weights as input in the predict function. Any progress yet?
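For reference, a direct Python transcription of the pseudocode could look like the sketch below (it reuses `predict` from 10.1.4; the function name is just a placeholder, and the graded solution itself is left for assignment 2):

```python
def perceptron_epoch(X, y, W, eta=0.1):
    errors = 0
    for xi, yi in zip(X, y):
        update = eta * (yi - predict(xi, W))  # 0 whenever the prediction is correct
        W[1:] = W[1:] + update * xi           # update the feature weights
        W[0] = W[0] + update                  # update the bias
        errors += int(update != 0.0)          # count the misclassifications
    return W, errors
```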
###Code
# [Answer to Ex. 10.1.5]
# This will be in assignment 2
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.6:** Write a function which repeats the updating procedure (i.e. calls the function you constructed in 10.1.5) `n_iter` times by packing the whole thing in a loop. Make sure you store the number of errors from each iteration in a list. >> Plot the total errors after each iteration in a graph.>>> _Hint:_ Make sure you don't reset the weights after each iteration.>>> _Hint:_ Once again some pseudocode:>> ```>> function g(X, y, n_iter):>> set eta = 0.1>> set weights = random_weights()>> set errorseq = list()>>>> for each _ in range(n_iter):>> weights, e = f(X, y, weights, eta) >> errorseq.append(e)>>>> return weights, errorseq>> ```
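Again for reference, a sketch of the outer loop (it assumes the single-epoch function from 10.1.5, here called `perceptron_epoch`; weights returned by such a loop are what `trained_w` refers to in 10.1.7):

```python
def train_perceptron(X, y, n_iter=50, eta=0.1):
    W = random_weights()
    errorseq = []
    for _ in range(n_iter):
        W, e = perceptron_epoch(X, y, W, eta)  # weights carry over between iterations
        errorseq.append(e)
    return W, errorseq

# trained_w, errorseq = train_perceptron(Xtrain, ytrain)
# plt.plot(errorseq)
```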
###Code
# [Answer to Ex. 10.1.6]
# This will be in assignment 2
###Output
_____no_output_____
###Markdown
> **Ex. 10.1.7 (BONUS):** Use the updated weights when predicting and calculate the accuracy of your perceptron on the test data.
###Code
# [Answer to Ex. 10.1.7 BONUS]
pred = predict(Xtest, trained_w)  # trained_w: the trained weights returned by your training loop from Ex. 10.1.6
accuracy(ytest, pred)
###Output
_____no_output_____
###Markdown
> **Ex.10.1.8 (BONUS):** Restructure your code as a class called `Perceptron` with `.fit()` and `.predict()` methods (you will probably need more helper methods). Store hyperparameters such as eta and the number of iterations as class attributes.
###Code
# [Answer to Ex. 10.1.8 BONUS]
class Perceptron:
""" Implements the simple perceptron algo
"""
def __init__(self, X, y, eta = 0.1, n_iter = 50, seed = 1):
""" Populate instance with relevant parameters and data
"""
self.n_iter = n_iter
self.eta = eta
self.seed = seed
self._errseq = []
self._shape = X.shape[1]
self._w = self._random_weights()
self.X = X
self.y = y
def _random_weights(self, loc = 0.0, scale = 0.01):
""" Initiates weights as random and close to 0
"""
# Init random number generator
rgen = np.random.RandomState(self.seed)
w = rgen.normal(loc=loc, scale=scale, size= 1 + self._shape)
return w
def _net_activation(self, X):
""" Calculate X'w
"""
return np.dot(X, self._w[1:]) + self._w[0] # Linear product W'X
def accuracy(self, prediction):
""" Assess accuracy
"""
return np.mean(self.y == prediction)
def predict(self, X = None):
""" Create predictions from trained (/untrained) classifier
"""
if X is None:
X = self.X
linProd = self._net_activation(X)
return np.where(linProd >= 0.0, 1, -1) # 1(linProd > 0)
def _perceptronEpoch(self):
""" One epoch of the perceptron algo
"""
errors = 0
# For each pair (x-row, y-row) in the data
for xi, yi in zip(self.X, self.y):
# Do the updating process described in Raschka
update = self.eta * (yi - self.predict(xi)) # Notice this is 0 if target == predicted
self._w[1:] = self._w[1:] + update * xi # Update weights
self._w[0] = self._w[0] + update # Update bias
errors += int(update != 0.0) # keep count of the errors in this iteration
self._errseq.append(errors)
return self
def fit(self):
""" Fit the perceptron
"""
for _ in range(self.n_iter):
self._perceptronEpoch()
return self
p = Perceptron(X = Xtrain, y= ytrain).fit()
plt.plot(p._errseq, 'b-o')
###Output
_____no_output_____
###Markdown
Beyond the perceptron modelHaving seen and worked with the perceptron I want to provide you with some ideas on how we can change parts of the perceptron to obtain another model. Again, you may want to familiarize yourself with background concepts: [gradient](https://en.wikipedia.org/wiki/Gradient), [sum of squared errors](https://en.wikipedia.org/wiki/Residual_sum_of_squares) and the [sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function).
###Code
YouTubeVideo('q4_NGRPHOPU', width=640, height=360)
###Output
_____no_output_____
###Markdown
Logistic regression Logistic regression is another simple linear machine-learning algorithm; you can read about it [here](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression). > **Ex. 10.2.1:** Import the LogisticRegression classifier from `sklearn.linear_model`. Create a new object called `clf` like:```clf = LogisticRegression()```All scikit-learn models have two fundamental methods `.fit()` and `.predict()`. Fit your model to the training data, and store the fitted model in a new object. Import _accuracy_score_ from `sklearn.metrics` and assess the accuracy of the LogisticRegression on both your training data and your test data.
###Code
# [Answer to Ex. 10.2.1]
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
clf = LogisticRegression(solver='lbfgs')
fitted_model = clf.fit(Xtrain, ytrain)
train_score = accuracy_score(ytrain, fitted_model.predict(Xtrain))
test_score = accuracy_score(ytest, fitted_model.predict(Xtest))
print(f"On the training data we get a score of {round(train_score, 2)}, while the score on the test data is {round(test_score, 2)}")
###Output
_____no_output_____
###Markdown
AdaLine (BONUS)AdaLine is a modified version of the perceptron. The only difference lies in the way the two models learn from their training data, i.e. the optimization method used. The perceptron uses the binary classifications for learning, while AdaLine only applies the binary threshold after training, and thus uses real-valued numbers when learning. >> _Hint:_ Most of the code for this exercise can be written by copying and modifying code from exercise 10.1. > **Ex. 10.3.1 (BONUS):** Implement the two functions described below. You should reuse your `net_input` from Ex. 10.1.4.:* `ada_activation_function`: the identity function $ada\_activation(z) = z$* `ada_predict`: a step function $ada\_predict(z) = 1 \ if \ z \geq 0 \ else \ -1$ where z is the output of _the activation function_.> The following figure might help you understand how each of these functions relates to the algorithm, and how the perceptron and adaline differ:
###Code
# [Answer to Ex. 10.3.1 BONUS]
def ada_activation(Z):
return Z
def ada_predict(X, W):
linProd = net_input(X, W)
    act = ada_activation(linProd)
return np.where(act >= 0.0, 1, -1) # 1(linProd > 0)
###Output
_____no_output_____
###Markdown
> **Ex. 10.3.2 (BONUS):** AdaLine uses a _cost function_ to quantify the accuracy of the classifier; this is given by >$$ cost(X,y,W) = \frac{1}{2} \sum_{i=1}^N (y_i - activation(z_i))^2 , \qquad z_i = net\_input(x_i, W)$$> If you've followed any normal undergraduate courses in statistics you should recognize this function. Begin by implementing the cost function. Unlike in undergraduate statistics we will optimize our estimator using gradient descent, therefore **code up the negative of the derivative of the cost function as well**. > $$ -cost'_j(X,y, W) = \sum_{i=1}^N (y_i - activation(z_i)) x_i^j, \qquad z_i = net\_input(x_i, W)$$>>> _Hint:_ Don't compute the sum for each weight $w_j$; instead use numpy's matrix algebra to compute all of the derivatives at once.>>> _Hint:_ The derivative should return a list of the same length as the number of weights, since there is one derivative for each one.
###Code
# [Answer to Ex. 10.3.2 BONUS]
def ada_cost(X, y, W):
linProd = net_input(X, W)
errors_sq = (y - ada_activation(linProd))**2
return errors_sq.sum() / 2.0
def ada_cost_derivative(X, y, W):
linProd = net_input(X, W)
errors = y - ada_activation(linProd)
return np.array( [errors.sum()] + list(X.T.dot(errors)))
ada_cost_derivative(Xtrain, ytrain, random_weights())
###Output
_____no_output_____
###Markdown
> **Ex. 10.3.3 BONUS:** Implement the adaline fitting algorithm using *batch gradient descent*. This is similar to what you did with the perceptron, but while the perceptron did its optimization after evaluating each row in the dataset, adaline treats the entire dataset as a batch, adjusts its weights and then does it all again. Thus you only need to loop over `n_iter`, _not_ the data rows. Use the cost function to track the progress of your algorithm.>>> _Hint:_ gradient descent will be extremely sensitive to the learning rate $\eta$ in this situation - try setting it to 0.0001 and running the algorithm for 5000 iterations to get some kind of convergence.
###Code
# [Answer to ex. 10.3.3 BONUS]
def AdaLine(X, y, n_iter = 10000, eta = 0.00001):
costseq = []
W = random_weights()
for i in range(n_iter):
nip = net_input(X, W)
output = ada_activation(nip)
W = W + eta * ada_cost_derivative(X, y, W)
costseq.append(ada_cost(X,y, W))
return W, costseq
w_trained, costs = AdaLine(Xtrain, ytrain)
plt.plot(costs)
###Output
_____no_output_____
###Markdown
> **Ex. 10.3.4 (BONUS):** Write a function that scales each of the variables in the dataset (including **y**) using the formula $$x_j^{new} = \frac{x_j^{old} - \mu_j}{\sigma_j}$$> rerun the adaline function on the scaled variables.
###Code
# [Answer to Ex. 10.3.4 BONUS]
def standardScaler(X, y):
""" Scales the input. (Horrible code)
"""
X_new = X.copy()
for i in range(X.shape[1]):
xj = X[:,i]
stdev = np.std(xj)
mean = np.mean(xj)
X_new[:,i] = (xj - mean)/stdev
y_stdev = np.std(y)
y_mean = np.mean(y)
y_new = (y.copy() - y_mean)/y_stdev
return X_new, y_new
X_scaled, y_scaled = standardScaler(Xtrain,ytrain)
w_trained, costs = AdaLine(X_scaled, y_scaled)
plt.plot(costs)
###Output
_____no_output_____ |
Exercise notebook - BLU3.ipynb | ###Markdown
We already know this dataset!
###Code
airlines = load_airline_data()
airlines.head()
airlines.plot();
###Output
_____no_output_____
###Markdown
Split the data set into train and test (use the years after 1957 as the test set) Exercise
###Code
airlines = load_airline_data()[:'1957'] # train
airlines_test = load_airline_data()['1958':] # test
###Output
_____no_output_____
###Markdown
Q1. Fit your SARIMAX model and get in sample predictions, starting from the first period of the training dataset (and not the test dataset)* Use (p,d,q) = (0,1,1)* Use seasonal_order = (1,1,1,12)* enforce_stationarity=False * enforce_invertibility=True Exercise
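A hedged sketch of what this cell is asking for (variable names follow the commented scaffolding; `sm` is assumed to be `statsmodels.api`, the same alias used further down in this notebook):

```python
import statsmodels.api as sm

order = (0, 1, 1)
seasonal_order = (1, 1, 1, 12)
model = sm.tsa.statespace.SARIMAX(airlines,
                                  order=order,
                                  seasonal_order=seasonal_order,
                                  enforce_stationarity=False,
                                  enforce_invertibility=True)
results = model.fit()

# in-sample, one-step-ahead predictions from the first period of the training data
pred = results.get_prediction(start=airlines.index[0], dynamic=False)
mean_predictions = pred.predicted_mean
```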
###Code
# order = (p,d,q)
# seasonal_order = (P,D,Q,s)
# model = # call your SARIMAX
# results = # fit your model
# YOUR CODE HERE
raise NotImplementedError()
# get in sample predictions
# pred = #
# mean_predictions = #
# YOUR CODE HERE
raise NotImplementedError()
assert order == (0, 1, 1)
assert seasonal_order == (1, 1, 1, 12)
assert math.isclose(mean_predictions.sum(), 24838.36, abs_tol=0.5)
# plot this
airlines.plot(label='observed', figsize=(16, 4))
mean_predictions.plot(label='One-step ahead Forecast with dynamic=False', alpha=.7)
plt.legend()
###Output
_____no_output_____
###Markdown
Q1.1: Get confidence intervals and plot it
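The confidence interval comes straight off the prediction object from Q1, roughly like this:

```python
pred_ci = pred.conf_int()  # DataFrame with lower/upper bound columns for each period
```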
###Code
# pred_ci = # get the confidence interval for the predictions
# YOUR CODE HERE
raise NotImplementedError()
airlines.plot(label='observed')
mean_predictions.plot(label='One-step ahead Forecast with dynamic=False', alpha=.7)
plt.fill_between(pred_ci.index,
pred_ci['lower passengers_thousands'],
pred_ci['upper passengers_thousands'],
color='k',
alpha=.2)
plt.ylim([0, 700])
plt.legend()
plt.show()
assert math.isclose(pred_ci.mean()[0], -24.61, abs_tol=0.5)
assert math.isclose(pred_ci.mean()[1], 484.58, abs_tol=0.5)
###Output
_____no_output_____
###Markdown
Q2: Predict the future! Forecast 36 months ahead and plot it against the test set Exercise
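One possible way to build the objects the scaffolding asks for (a sketch; `results` is the fitted model from Q1, and 36 steps covers the three test years of monthly data):

```python
forecast = results.get_forecast(steps=36)   # forecast object
forecast_pred = forecast.predicted_mean     # point forecasts
forecast_ci = forecast.conf_int()           # confidence intervals for the forecast
```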
###Code
# forecast = # get your forecast object
# forecast_pred = # get your predictions
# forecast_ci = # get your confidence interval for the forecast
# YOUR CODE HERE
raise NotImplementedError()
airlines.plot(label='train')
forecast_pred.plot(label='predicted')
airlines_test.plot(label='test')
plt.legend()
plt.show()
forecast_ci.mean()[1]
assert math.isclose(forecast_pred.sum(), 15445.9, abs_tol=0.5)
assert math.isclose(forecast_ci.mean()[0], 338.2, abs_tol=0.5)
assert math.isclose(forecast_ci.mean()[1], 519.8, abs_tol=0.5)
# plot this
plot_predictions(series_=airlines, pred_=forecast)
###Output
_____no_output_____
###Markdown
Q3: Calculate the $R^{2}$ for your forecast and the `airline_test` Exercise
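A sketch using scikit-learn's `r2_score` (assuming the forecast objects from Q2; the forecast horizon and the test set cover the same 36 months):

```python
from sklearn.metrics import r2_score

y_pred = forecast_pred
y_true = airlines_test
r2 = r2_score(y_true, y_pred)
```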
###Code
# y_pred =
# y_true =
# r2 = # use sklearn r2_score
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(y_pred.sum(), 15445.91, abs_tol=0.5)
assert math.isclose(r2, 0.9232, abs_tol=0.5)
###Output
_____no_output_____
###Markdown
Ok all good for now but let's see what we can do with timeseries without using timeseries tools. Workflow Q4
###Code
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
###Output
_____no_output_____
###Markdown
Q4.1: Get a quick benchmark with the last day's sales of each store
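A hedged sketch of the indexing steps described in the scaffolding below (plain pandas: parse the dates, build a `(Date, Store)` MultiIndex, sort it, and keep an `IndexSlice` around for slicing):

```python
import pandas as pd

train['Date'] = pd.to_datetime(train['Date'])             # get your datetime
train = train.set_index(['Date', 'Store']).sort_index()   # MultiIndex, sorted
idx = pd.IndexSlice                                        # slicer, e.g. train.loc[idx['2015-07-24', :], 'Sales']
```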
###Code
# get your multi-index and store
# train['Date'] = # get your datetime
# train = # set the index, first the Date then Store
# train = # sort it!
# idx = # create your index slicer
# YOUR CODE HERE
raise NotImplementedError()
assert train.iloc[-1][0] == 11354
###Output
_____no_output_____
###Markdown
split train test: use the last 4 days as test
###Code
# train test split
# new_train = # train without the last 4 days
# new_test = # the last 4 days
# YOUR CODE HERE
raise NotImplementedError()
assert new_train.shape[0] == 20400
assert new_test.shape[0] == 400
###Output
_____no_output_____
###Markdown
make a quick benchmark with the sales of the last day of each store
###Code
# get a quick benchmark
# last_day = # gets the sale of each store on the last day of our training dataset
# new_test['predictions'] = # set it to zero
# days_in_test_set = # list of the unique values of the dates to predict
# for day in days_in_test_set:
# new_test.loc[idx[day, :], 'predictions'] = # assign it to the Sales' last day
# y_true = # get the true Sales values from the test
# y_pred = # get your predictions
# mean_absolute_error = # get the mean absolute error betwee y_true and y_pred
# YOUR CODE HERE
raise NotImplementedError()
assert y_pred[idx['2015-07-24', 1]][0] == 3769
assert y_true[idx['2015-07-24', 1]][0] == 3706
###Output
_____no_output_____
###Markdown
Q4.2: Use SARIMAX with grid search to predict the sales of the store for the same days you were predicting in 4.1
###Code
# store_4 = filter your store with the number 4
# store_4.index = drop the level "Store"
# YOUR CODE HERE
raise NotImplementedError()
assert store_4.sum() == 1657224
###Output
_____no_output_____
###Markdown
for the SARIMAX model, start with the following arguments: * order=(0, 1, 0)* seasonal_order=(1, 1, 1, 7) * enforce_stationarity=False * enforce_invertibility=False
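A sketch of that call (the same pattern as the `predict_with_sarimax` helper defined further down; it assumes `store_4` has been built in the previous cell, and `results.aic` exposes the information criterion):

```python
model = sm.tsa.statespace.SARIMAX(store_4,
                                  order=(0, 1, 0),
                                  seasonal_order=(1, 1, 1, 7),
                                  enforce_stationarity=False,
                                  enforce_invertibility=False)
results = model.fit()
aic = results.aic
```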
###Code
# model = create your SARIMAX model
# results = fit your model
# aic = # get the aic
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(results.fittedvalues.sum(), 1661410.194, abs_tol=0.5)
assert math.isclose(aic, 3618.186, abs_tol=0.5)
###Output
_____no_output_____
###Markdown
For the grid search, use the functions `get_best_params` and `get_inputs` that are defined at the beginning of the notebook
###Code
# grid search
# inputs = # get the inputs
# best_params = # get the best_params for the SARIMAX
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(best_params.aic, 3471.621, abs_tol=5)
# fit the new model
# model = # sarimax with the new parameters
# results = # fit the model
# aic = get the aic. This should be the same as before
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(best_params.aic, 3471.621, abs_tol=0.5)
assert math.isclose(results.fittedvalues.sum(), 1581705.679, abs_tol=0.5)
# store_4_preds = # get SARIMAX predictions for store 4
# store_4_forecast = # get the forecast for the 4 days we are testing
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(store_4_preds['2015-01-03'], 1106.3, abs_tol=0.5)
assert math.isclose(store_4_forecast['2015-07-25'], 10017.3, abs_tol=0.5)
###Output
_____no_output_____
###Markdown
Q4.3: Get a prediction for the first `10 stores` for the new test (the final 4 days)
###Code
def predict_with_sarimax(df_, store_nr, steps=4):
store_ = df_.loc[idx[:, store_nr], 'Sales']
store_.index = store_.index.droplevel('Store')
model = sm.tsa.statespace.SARIMAX(store_,
order=(1, 0, 1),
seasonal_order=(1, 1, 1, 7),
enforce_stationarity=False,
enforce_invertibility=False)
results = model.fit()
return results.get_forecast(steps=steps).predicted_mean
###Output
_____no_output_____
###Markdown
This part can be tricky! Have a look at the learning notebook if you need to! * We wrote some of the parts for you; you just have to uncomment them. For the others you will have to write some code, but, then again, in case of trouble, check the notebooks!
###Code
# just uncomment
# stores = train.index.get_level_values('Store').unique()[:10]
# just uncomment
# new_test = new_test.loc[idx[:, stores], :]
# res = {}
# just uncomment
# i = 0
# for store_nr in stores:
# i += 1
# print('%0.0f%%'% (i/len(stores)*100), end=',')
# need to do
# res[store_nr] = # use predict sarimax to get predictions for each store
# just uncomment
# results = pd.DataFrame(res).unstack().reset_index()
# results.columns = ['Store', 'Date', 'Sales']
# results = results.set_index(['Date', 'Store']).sort_index()
# just uncomment
# days_in_test_set = new_test.index.get_level_values('Date').unique()
# just uncomment
# for day in days_in_test_set:
# new_test.loc[idx[day, :], 'predictions'] = results.loc[idx[day, :], 'Sales'].values
# need to do
# mean_absolute_error_final = # get the mean absolute error
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(mean_absolute_error_final, 569.4, abs_tol=0.5)
###Output
_____no_output_____
###Markdown
OPTIONAL Prepare a dataframe with the following:- Target: should be the value `n` months after each month - Where `n` will be the number of periods ahead we are trying to predict- Features: diff1 and diff2
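Hedged sketches of the two helpers described above (`shift(-periods)` looks `periods` months ahead; `diff1`/`diff2` are read here as simple 1- and 2-period differences, which is one reasonable interpretation of the scaffolding; the `_sketch` names are placeholders so they do not collide with the stubs below):

```python
import pandas as pd

def build_target_sketch(_series, periods):
    _df = pd.DataFrame(_series.copy())
    _df['target'] = _series.shift(-periods)   # value `periods` months after each month
    return _df

def build_features_sketch(_df, feature):
    _df = _df.copy()
    _df['diff1'] = _df[feature].diff(1)       # change versus the previous month
    _df['diff2'] = _df[feature].diff(2)       # change versus two months back
    return _df.dropna()
```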
###Code
passenger = load_airline_data() # fresh start
# build useful functions
# build_target should give you a target variable
# build_features should give features with the diff1 and diff2
# prepare_preds gets them together and splits in X_train and y_train
def build_target(_series, periods):
_series = _series.copy()
_df = pd.DataFrame(_series)
# _df['target'] = # create your target variable
return _df
def build_features(_df, feature):
_df = _df.copy()
# _df['diff1'] = # create the feature with the difference of 1 period
# _df['diff2'] = # create the feature with the difference of 2 period
return _df.dropna()
def prepare_preds(_series, periods):
col_name = _series.name
# _df = # build your dataframe with target and features
# features = # get a list of features
# _df = # Get a _df without NaN
# X_train = # get your X_train
# y_train = # get your y_train
return X_train, y_train
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Q5: Split your dataframe into X_train and y_train, fit a linear regression, predict 2 months ahead and check the r2 score
###Code
# X_train, y_train = use your prepare_preds
# lr = # Linear regression
# fit your Linear Regression
# r2_score = calculate the score of r2
# YOUR CODE HERE
raise NotImplementedError()
assert math.isclose(r2_score, 0.81, abs_tol=0.1)
###Output
_____no_output_____ |
even-more-python-for-beginners-data-tools/09 - Handling duplicates and rows with missing values/09 - Removing rows.ipynb | ###Markdown
TAIL_NUM, DEP_TIME, DEP_DELAY, ARR_TIME, ARR_DELAY, ACTUAL_ELAPSED_TIME, and AIR_TIME all have rows with missing values. There are many techniques for dealing with missing values, but the simplest is to delete the rows with missing values.**dropna** deletes rows containing null/missing values.
###Code
delay_no_nulls_df = delays_df.dropna() # Delete the rows with missing values
delay_no_nulls_df.info() # Check the number of rows and number of rows with non-null values to confirm
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 295832 non-null object
OP_UNIQUE_CARRIER 295832 non-null object
TAIL_NUM 295832 non-null object
OP_CARRIER_FL_NUM 295832 non-null int64
ORIGIN 295832 non-null object
DEST 295832 non-null object
CRS_DEP_TIME 295832 non-null int64
DEP_TIME 295832 non-null float64
DEP_DELAY 295832 non-null float64
CRS_ARR_TIME 295832 non-null int64
ARR_TIME 295832 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 295832 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 32.7+ MB
###Markdown
If you don't need to keep the original DataFrame, you can simply delete the rows within the existing DataFrame instead of creating a new one.**inplace=*True*** indicates that you want to drop the rows in the specified DataFrame.
###Code
delays_df.dropna(inplace=True)
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 295832 non-null object
OP_UNIQUE_CARRIER 295832 non-null object
TAIL_NUM 295832 non-null object
OP_CARRIER_FL_NUM 295832 non-null int64
ORIGIN 295832 non-null object
DEST 295832 non-null object
CRS_DEP_TIME 295832 non-null int64
DEP_TIME 295832 non-null float64
DEP_DELAY 295832 non-null float64
CRS_ARR_TIME 295832 non-null int64
ARR_TIME 295832 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 295832 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 32.7+ MB
###Markdown
When data is loaded from multiple data sources, you sometimes end up with duplicate records.
###Code
airports_df = pd.read_csv('Data/airportsDuplicateRows.csv')
airports_df.head()
###Output
_____no_output_____
###Markdown
Use **duplicated** to find the duplicate rows.If a row is a duplicate of a previous row, it returns **True**.
###Code
airports_df.duplicated()
###Output
_____no_output_____
###Markdown
**drop_duplicates** will delete the duplicate rows
###Code
airports_df.drop_duplicates(inplace=True)
airports_df
###Output
_____no_output_____
###Markdown
Handling duplicate rows and rows with missing valuesMost machine learning algorithms will return an error if they encounter a missing value, so you often have to remove rows with missing values from your DataFrame.To learn how, we need to create a pandas DataFrame and load it with data.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The flight delays data set contains information about flights and flight delays
###Code
delays_df = pd.read_csv('Data/Lots_of_flight_data.csv')
delays_df.head()
###Output
_____no_output_____
###Markdown
**info** tells us how many rows are in the DataFrame and, for each column, how many of those rows contain non-null values. From this we can identify which columns contain null/missing values.
###Code
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 300000 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 300000 non-null object
OP_UNIQUE_CARRIER 300000 non-null object
TAIL_NUM 299660 non-null object
OP_CARRIER_FL_NUM 300000 non-null int64
ORIGIN 300000 non-null object
DEST 300000 non-null object
CRS_DEP_TIME 300000 non-null int64
DEP_TIME 296825 non-null float64
DEP_DELAY 296825 non-null float64
CRS_ARR_TIME 300000 non-null int64
ARR_TIME 296574 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 300000 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 300000 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 30.9+ MB
###Markdown
Handling duplicate rows and rows with missing valuesMost machine learning algorithms will return an error if they encounter a missing value. So, you often have to remove rows with missing values from your DataFrame.To learn how, we need to create a pandas DataFrame and load it with data.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The flight delays data set contains information about flights and flight delays
###Code
delays_df = pd.read_csv('Lots_of_flight_data.csv')
delays_df.head()
###Output
_____no_output_____
###Markdown
**info** will tell us how many rows are in the DataFrame and for each column how many of those rows contain non-null values. From this we can determine which columns (if any) contain null/missing values
###Code
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 300000 entries, 0 to 299999
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FL_DATE 300000 non-null object
1 OP_UNIQUE_CARRIER 300000 non-null object
2 TAIL_NUM 299660 non-null object
3 OP_CARRIER_FL_NUM 300000 non-null int64
4 ORIGIN 300000 non-null object
5 DEST 300000 non-null object
6 CRS_DEP_TIME 300000 non-null int64
7 DEP_TIME 296825 non-null float64
8 DEP_DELAY 296825 non-null float64
9 CRS_ARR_TIME 300000 non-null int64
10 ARR_TIME 296574 non-null float64
11 ARR_DELAY 295832 non-null float64
12 CRS_ELAPSED_TIME 300000 non-null int64
13 ACTUAL_ELAPSED_TIME 295832 non-null float64
14 AIR_TIME 295832 non-null float64
15 DISTANCE 300000 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 36.6+ MB
###Markdown
TAIL_NUM, DEP_TIME, DEP_DELAY, ARR_TIME, ARR_DELAY, ACTUAL_ELAPSED_TIME, and AIR_TIME all have rows with missing values. There are many techniques to deal with missing values, the simplest is to delete the rows with missing values.**dropna** will delete rows containing null/missing values
###Code
delay_no_nulls_df = delays_df.dropna() # Delete the rows with missing values
delay_no_nulls_df.info() # Check the number of rows and number of rows with non-null values to confirm
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FL_DATE 295832 non-null object
1 OP_UNIQUE_CARRIER 295832 non-null object
2 TAIL_NUM 295832 non-null object
3 OP_CARRIER_FL_NUM 295832 non-null int64
4 ORIGIN 295832 non-null object
5 DEST 295832 non-null object
6 CRS_DEP_TIME 295832 non-null int64
7 DEP_TIME 295832 non-null float64
8 DEP_DELAY 295832 non-null float64
9 CRS_ARR_TIME 295832 non-null int64
10 ARR_TIME 295832 non-null float64
11 ARR_DELAY 295832 non-null float64
12 CRS_ELAPSED_TIME 295832 non-null int64
13 ACTUAL_ELAPSED_TIME 295832 non-null float64
14 AIR_TIME 295832 non-null float64
15 DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 38.4+ MB
###Markdown
If you don't need to keep the original DataFrame, you can just delete the rows within the existing DataFrame instead of creating a new one**inplace=*True*** indicates you want to drop the rows in the specified DataFrame
###Code
delays_df.dropna(inplace=True)
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FL_DATE 295832 non-null object
1 OP_UNIQUE_CARRIER 295832 non-null object
2 TAIL_NUM 295832 non-null object
3 OP_CARRIER_FL_NUM 295832 non-null int64
4 ORIGIN 295832 non-null object
5 DEST 295832 non-null object
6 CRS_DEP_TIME 295832 non-null int64
7 DEP_TIME 295832 non-null float64
8 DEP_DELAY 295832 non-null float64
9 CRS_ARR_TIME 295832 non-null int64
10 ARR_TIME 295832 non-null float64
11 ARR_DELAY 295832 non-null float64
12 CRS_ELAPSED_TIME 295832 non-null int64
13 ACTUAL_ELAPSED_TIME 295832 non-null float64
14 AIR_TIME 295832 non-null float64
15 DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 38.4+ MB
###Markdown
When data is loaded from multiple data sources you sometimes end up with duplicate records.
###Code
airports_df = pd.read_csv('airportsDuplicateRows.csv')
airports_df.head()
###Output
_____no_output_____
###Markdown
use **duplicates** to find the duplicate rows.If a row is a duplicate of a previous row it returns **True**
###Code
airports_df.duplicated()
###Output
_____no_output_____
###Markdown
**drop_duplicates** will delete the duplicate rows
###Code
airports_df.drop_duplicates(inplace=True)
airports_df
###Output
_____no_output_____
###Markdown
Handling duplicate rows and rows with missing valuesMost machine learning algorithms will return an error if they encounter a missing value. So, you often have to remove rows with missing values from your DataFrame.To learn how, we need to create a pandas DataFrame and load it with data.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The flight delays data set contains information about flights and flight delays
###Code
delays_df = pd.read_csv('Lots_of_flight_data.csv')
delays_df.head()
###Output
_____no_output_____
###Markdown
**info** will tell us how many rows are in the DataFrame and for each column how many of those rows contain non-null values. From this we can determine which columns (if any) contain null/missing values
###Code
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 300000 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 300000 non-null object
OP_UNIQUE_CARRIER 300000 non-null object
TAIL_NUM 299660 non-null object
OP_CARRIER_FL_NUM 300000 non-null int64
ORIGIN 300000 non-null object
DEST 300000 non-null object
CRS_DEP_TIME 300000 non-null int64
DEP_TIME 296825 non-null float64
DEP_DELAY 296825 non-null float64
CRS_ARR_TIME 300000 non-null int64
ARR_TIME 296574 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 300000 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 300000 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 30.9+ MB
###Markdown
TAIL_NUM, DEP_TIME, DEP_DELAY, ARR_TIME, ARR_DELAY, ACTUAL_ELAPSED_TIME, and AIR_TIME all have rows with missing values. There are many techniques to deal with missing values, the simplest is to delete the rows with missing values.**dropna** will delete rows containing null/missing values
###Code
delay_no_nulls_df = delays_df.dropna() # Delete the rows with missing values
delay_no_nulls_df.info() # Check the number of rows and number of rows with non-null values to confirm
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 295832 non-null object
OP_UNIQUE_CARRIER 295832 non-null object
TAIL_NUM 295832 non-null object
OP_CARRIER_FL_NUM 295832 non-null int64
ORIGIN 295832 non-null object
DEST 295832 non-null object
CRS_DEP_TIME 295832 non-null int64
DEP_TIME 295832 non-null float64
DEP_DELAY 295832 non-null float64
CRS_ARR_TIME 295832 non-null int64
ARR_TIME 295832 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 295832 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 32.7+ MB
###Markdown
If you don't need to keep the original DataFrame, you can just delete the rows within the existing DataFrame instead of creating a new one**inplace=*True*** indicates you want to drop the rows in the specified DataFrame
###Code
delays_df.dropna(inplace=True)
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 295832 non-null object
OP_UNIQUE_CARRIER 295832 non-null object
TAIL_NUM 295832 non-null object
OP_CARRIER_FL_NUM 295832 non-null int64
ORIGIN 295832 non-null object
DEST 295832 non-null object
CRS_DEP_TIME 295832 non-null int64
DEP_TIME 295832 non-null float64
DEP_DELAY 295832 non-null float64
CRS_ARR_TIME 295832 non-null int64
ARR_TIME 295832 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 295832 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 32.7+ MB
###Markdown
When data is loaded from multiple data sources you sometimes end up with duplicate records.
###Code
airports_df = pd.read_csv('Data/airportsDuplicateRows.csv')
airports_df.head()
###Output
_____no_output_____
###Markdown
use **duplicates** to find the duplicate rows.If a row is a duplicate of a previous row it returns **True**
###Code
airports_df.duplicated()
###Output
_____no_output_____
###Markdown
**drop_duplicates** will delete the duplicate rows
###Code
airports_df.drop_duplicates(inplace=True)
airports_df
###Output
_____no_output_____
###Markdown
Handling duplicate rows and rows with missing valuesMost machine learning algorithms will return an error if they encounter a missing value. So, you often have to remove rows with missing values from your DataFrame.To learn how, we need to create a pandas DataFrame and load it with data.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The flight delays data set contains information about flights and flight delays
###Code
delays_df = pd.read_csv('Data/Lots_of_flight_data.csv')
delays_df.head()
###Output
_____no_output_____
###Markdown
**info** will tell us how many rows are in the DataFrame and for each column how many of those rows contain non-null values. From this we can determine which columns (if any) contain null/missing values
###Code
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 300000 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 300000 non-null object
OP_UNIQUE_CARRIER 300000 non-null object
TAIL_NUM 299660 non-null object
OP_CARRIER_FL_NUM 300000 non-null int64
ORIGIN 300000 non-null object
DEST 300000 non-null object
CRS_DEP_TIME 300000 non-null int64
DEP_TIME 296825 non-null float64
DEP_DELAY 296825 non-null float64
CRS_ARR_TIME 300000 non-null int64
ARR_TIME 296574 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 300000 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 300000 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 30.9+ MB
###Markdown
TAIL_NUM, DEP_TIME, DEP_DELAY, ARR_TIME, ARR_DELAY, ACTUAL_ELAPSED_TIME, and AIR_TIME all have rows with missing values. There are many techniques to deal with missing values, the simplest is to delete the rows with missing values.**dropna** will delete rows containing null/missing values
###Code
delay_no_nulls_df = delays_df.dropna() # Delete the rows with missing values
delay_no_nulls_df.info() # Check the number of rows and number of rows with non-null values to confirm
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 295832 non-null object
OP_UNIQUE_CARRIER 295832 non-null object
TAIL_NUM 295832 non-null object
OP_CARRIER_FL_NUM 295832 non-null int64
ORIGIN 295832 non-null object
DEST 295832 non-null object
CRS_DEP_TIME 295832 non-null int64
DEP_TIME 295832 non-null float64
DEP_DELAY 295832 non-null float64
CRS_ARR_TIME 295832 non-null int64
ARR_TIME 295832 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 295832 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 32.7+ MB
###Markdown
If you don't need to keep the original DataFrame, you can just delete the rows within the existing DataFrame instead of creating a new one**inplace=*True*** indicates you want to drop the rows in the specified DataFrame
###Code
delays_df.dropna(inplace=True)
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
FL_DATE 295832 non-null object
OP_UNIQUE_CARRIER 295832 non-null object
TAIL_NUM 295832 non-null object
OP_CARRIER_FL_NUM 295832 non-null int64
ORIGIN 295832 non-null object
DEST 295832 non-null object
CRS_DEP_TIME 295832 non-null int64
DEP_TIME 295832 non-null float64
DEP_DELAY 295832 non-null float64
CRS_ARR_TIME 295832 non-null int64
ARR_TIME 295832 non-null float64
ARR_DELAY 295832 non-null float64
CRS_ELAPSED_TIME 295832 non-null int64
ACTUAL_ELAPSED_TIME 295832 non-null float64
AIR_TIME 295832 non-null float64
DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 32.7+ MB
###Markdown
When data is loaded from multiple data sources you sometimes end up with duplicate records.
###Code
airports_df = pd.read_csv('Data/airportsDuplicateRows.csv')
airports_df.head()
###Output
_____no_output_____
###Markdown
use **duplicates** to find the duplicate rows.If a row is a duplicate of a previous row it returns **True**
###Code
airports_df.duplicated()
###Output
_____no_output_____
###Markdown
**drop_duplicates** will delete the duplicate rows
###Code
airports_df.drop_duplicates(inplace=True)
airports_df
###Output
_____no_output_____
###Markdown
Handling duplicate data and missing valuesMost machine learning algorithms return an error when they encounter a missing value, so rows with missing values need to be removed from the DataFrame.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The flight delay dataset contains information about flights and flight delays.
###Code
delays_df = pd.read_csv('Lots_of_flight_data.csv')
delays_df.head()
###Output
_____no_output_____
###Markdown
**info** shows the number of rows in the DataFrame and the number of non-null values in each column. From this we can check which columns contain null (missing) values.
###Code
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 300000 entries, 0 to 299999
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FL_DATE 300000 non-null object
1 OP_UNIQUE_CARRIER 300000 non-null object
2 TAIL_NUM 299660 non-null object
3 OP_CARRIER_FL_NUM 300000 non-null int64
4 ORIGIN 300000 non-null object
5 DEST 300000 non-null object
6 CRS_DEP_TIME 300000 non-null int64
7 DEP_TIME 296825 non-null float64
8 DEP_DELAY 296825 non-null float64
9 CRS_ARR_TIME 300000 non-null int64
10 ARR_TIME 296574 non-null float64
11 ARR_DELAY 295832 non-null float64
12 CRS_ELAPSED_TIME 300000 non-null int64
13 ACTUAL_ELAPSED_TIME 295832 non-null float64
14 AIR_TIME 295832 non-null float64
15 DISTANCE 300000 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 36.6+ MB
###Markdown
TAIL_NUM, DEP_TIME, DEP_DELAY, ARR_TIME, ARR_DELAY, ACTUAL_ELAPSED_TIME, and AIR_TIME contain missing values. There are many ways to handle missing values; the simplest is to delete the rows that contain them.**dropna** deletes rows containing null (missing) values.
###Code
delay_no_nulls_df = delays_df.dropna() # Delete the rows with missing values
delay_no_nulls_df.info() # Check the number of rows and the number of non-null values to confirm
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FL_DATE 295832 non-null object
1 OP_UNIQUE_CARRIER 295832 non-null object
2 TAIL_NUM 295832 non-null object
3 OP_CARRIER_FL_NUM 295832 non-null int64
4 ORIGIN 295832 non-null object
5 DEST 295832 non-null object
6 CRS_DEP_TIME 295832 non-null int64
7 DEP_TIME 295832 non-null float64
8 DEP_DELAY 295832 non-null float64
9 CRS_ARR_TIME 295832 non-null int64
10 ARR_TIME 295832 non-null float64
11 ARR_DELAY 295832 non-null float64
12 CRS_ELAPSED_TIME 295832 non-null int64
13 ACTUAL_ELAPSED_TIME 295832 non-null float64
14 AIR_TIME 295832 non-null float64
15 DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 38.4+ MB
###Markdown
If you don't need to keep the original DataFrame, you can just delete the rows in the original DataFrame instead of creating a new one.**inplace=*True*** drops the rows in the specified DataFrame.
###Code
delays_df.dropna(inplace=True)
delays_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 295832 entries, 0 to 299999
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 FL_DATE 295832 non-null object
1 OP_UNIQUE_CARRIER 295832 non-null object
2 TAIL_NUM 295832 non-null object
3 OP_CARRIER_FL_NUM 295832 non-null int64
4 ORIGIN 295832 non-null object
5 DEST 295832 non-null object
6 CRS_DEP_TIME 295832 non-null int64
7 DEP_TIME 295832 non-null float64
8 DEP_DELAY 295832 non-null float64
9 CRS_ARR_TIME 295832 non-null int64
10 ARR_TIME 295832 non-null float64
11 ARR_DELAY 295832 non-null float64
12 CRS_ELAPSED_TIME 295832 non-null int64
13 ACTUAL_ELAPSED_TIME 295832 non-null float64
14 AIR_TIME 295832 non-null float64
15 DISTANCE 295832 non-null int64
dtypes: float64(6), int64(5), object(5)
memory usage: 38.4+ MB
###Markdown
When data is loaded from multiple data sources, duplicate rows can occur.
###Code
airports_df = pd.read_csv('airportsDuplicateRows.csv')
airports_df.head()
###Output
_____no_output_____
###Markdown
**duplicated** finds the duplicate rows.If a row is a duplicate of a previous row, it returns **True**.
###Code
airports_df.duplicated()
###Output
_____no_output_____
###Markdown
**drop_duplicates** deletes the duplicate rows.
###Code
airports_df.drop_duplicates(inplace=True)
airports_df
###Output
_____no_output_____ |
10_pydecorator_adv.ipynb | ###Markdown
Python advanced decorators> Some more examples of advanced decorators in python
###Code
#hide
from nbdev.showdoc import *
###Output
_____no_output_____
###Markdown
Keeping state
###Code
#export
import functools
def count_calls(func):
"""Count the number of calls to a function"""
@functools.wraps(func)
def _wrapper(*args, **kwargs):
value = func(*args, **kwargs)
_wrapper.num_calls += 1
return value
_wrapper.num_calls = 0
return _wrapper
@count_calls
def fibonacci(number):
"""Calculate Fibonacci numbers fib_n
    The Fibonacci numbers are 1, 2, 3, 5, 8, 13, 21, ...
fib_n = fib_n-1 + fib_n-2
"""
if number < 2: return 1
return fibonacci(number-1) + fibonacci(number-2)
fibonacci(5), fibonacci.num_calls
###Output
_____no_output_____
###Markdown
Decorating classes
###Code
#export
# A class can be a decorator if it is callable (or it contains a __call__ function)
class CountCalls:
"""Count a number of calls for a function"""
def __init__(self, func):
self.func = func
self.num_calls = 0
functools.update_wrapper(self, func)
def __call__(self, *args, **kwargs):
...
self.num_calls += 1
return self.func(*args, **kwargs)
@CountCalls
def fibonacci(number):
"""Calculate Fibonacci numbers fib_n
    The Fibonacci numbers are 1, 2, 3, 5, 8, 13, 21, ...
fib_n = fib_n-1 + fib_n-2
"""
if number < 2: return 1
return fibonacci(number-1) + fibonacci(number-2)
fibonacci(1), fibonacci.num_calls
###Output
_____no_output_____ |
content/_build/jupyter_execute/08_test_driven_development/exercise/solutions.ipynb | ###Markdown
<img src="https://colab.research.google.com/assets/colab-badge.svg" title="Open this file in Google Colab" alt="Colab"/> Unit testing a contact listThe code sample below has a `Contact` class that contains both a `Person` and an `Address` class, and finally, a `Notebook` class that contains multiple contacts.Can you use the `pytest` and `unittest.mock` modules to write tests for these classes and fix the bugs in this code?
###Code
### useful: This is the code you should test
class Address:
def __init__(self, street, city):
self.street = str(street)
self.city = str(city)
def __repr__(self):
return f"Address({self.city!r}, {self.street!r})"
class Person:
def __init__(self, name, email):
self.name = name
self.email= email
def __repr__(self):
return f"Person({self.name!r}, {self.email!r})"
class Contact:
def __init__(self, street, city, name, email, **kwargs):
self.person = Person(name, email)
self.address = Address(street, city)
def __str__(self):
return f"""\
{self.person.name}:
{self.person.email}
address:
{self.address.city}
{self.address.street}
"""
class Notebook:
def __init__(self):
self.contacts = dict()
def add(self, street, city, name, email):
self.contacts[name] = Contact(name, email, city, street)
def remove(name):
self.contacts.remove(name)
def __str__(self):
results = []
for name, contact in self.contacts.items():
results.append(str(contact))
results.append("")
return '\n'.join(results)
# write your tests here
import pytest
import unittest.mock as mocking
@pytest.fixture
def city():
return 'city'
@pytest.fixture
def street():
return 'street'
def test_address(city, street):
address = Address(street=street, city=city)
assert address.street == street
assert address.city == city
@pytest.fixture
def name():
return 'name'
@pytest.fixture
def email():
return 'email'
def test_person(name, email):
person = Person(name=name, email=email)
assert person.name == name
assert person.email == email
def test_contact(name, email, city, street):
contact = Contact(name=name, email=email, city=city, street=street)
assert contact.person.name == name
assert contact.person.email == email
assert contact.address.city == city
assert contact.address.street == street
@pytest.fixture
def empty_notebook():
return Notebook()
def test_empty_notebook(empty_notebook):
assert len(empty_notebook.contacts) == 0
def test_notebook_add(empty_notebook, name, email, city, street):
empty_notebook.add(name=name, email=email, city=city, street=street)
assert len(empty_notebook.contacts) == 1
assert empty_notebook.contacts[name].person.name == name
assert empty_notebook.contacts[name].person.email == email
assert empty_notebook.contacts[name].address.city == city
assert empty_notebook.contacts[name].address.street == street
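# Illustrative addition (not part of the original solution): one possible way to bring
# unittest.mock into play, by patching Notebook.add and checking how it is called.
def test_notebook_add_is_called_with_expected_arguments(empty_notebook, name, email, city, street):
    with mocking.patch.object(Notebook, 'add') as mocked_add:
        empty_notebook.add(name=name, email=email, city=city, street=street)
    mocked_add.assert_called_once_with(name=name, email=email, city=city, street=street)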
### useful: run the tests you wrote
import ipytest
# enable pytest's assertions and ipytest's magics
ipytest.config(rewrite_asserts=True, magics=True)
# set the filename
__file__ = 'ex 08 - solutions.ipynb'
# execute the tests via pytest, arguments are passed to pytest
ipytest.run('-qq')
###Output
_____no_output_____ |
doc/PHP_BB_FORUM_READER_EXAMPLE_PUBLIC.ipynb | ###Markdown
PHPBB Scraping Data API - Sample Use ConfigurationHere's a possible use for the phpbb Scraper API, that will read latest topics from a forum. Setup to access the forum is done in a config.json, that has the following content:```{ "user": "", "password": "", "base":"https://", "target_dir":"<target dir where to save / read html files", "meta_file":"" } mind the double backslash and encoding: eg meta_file = "C:\\Users\\JohnDoe\\Desktop\\TEST\\metafile.json"```The ScraperExecutor encapsulates generation of URL, log in, reading and processing of scraped forum pages Code is Documented :-)
###Code
import phpbb_scraper
from phpbb_scraper.scraper import ScraperExecutor
# code is documented, use help(<module>) to find out more about implemented code
# for inner structure, use dir(module)
help(ScraperExecutor)
###Output
Help on class ScraperExecutor in module phpbb_scraper.scraper:
class ScraperExecutor(builtins.object)
| Implementation of some frequently used scraping queries
|
| Class methods defined here:
|
| __init__(base=None, debug=False, wait_time=5, user=None, password=None, config_file=None, target_dir=None, meta_file=None) from builtins.type
| constructor
|
| close_session() from builtins.type
| close session
|
| get_session() from builtins.type
| gets/creates class session
|
| get_soup(url) from builtins.type
| retrieves soup for given url, configuration needed to be setup
| if url is a list, a list of soup will be returned
|
| get_soups(urls) from builtins.type
| reads multiple urls in case soup contains number of entries tags and a "start" property,
| url will not be read. returns a list of dictionary with entry
| {hash(soup_id):{'url':<url>,'url_hash':<url_hash>,'soup':<soup>,'soup_id':<soup_id>,'date':<date>}}
| soup_id is concatenation of Date and url hash <JJJJMMDD_HHMMSS>_<url_hash>
|
| read_topics_from_meta(meta_file=None, target_dir=None) from builtins.type
| reads all soup files referenced in meta file and trasnforms soups in list of metadata attributes, alongside with data from file
| attributes from post gets the prefix 'post' , file metadata gets the prefix 'source'. Usually, target_dir (source of soup files)
| and meta_file (file containing metainformation of soup files) is set upon instanciation of this class, but can be overwritten
| by the parameters
|
| retrieve_last_topics(past_days=14, start_num=0, steps_num=2, increment_num=70, file_extension='html', target_dir=None, meta_file=None) from builtins.type
| gets the last topics from forum (across multiple pages) and saves each to an html
| file locally.
| Parameters (default)
| past_days (14) - retrieve topics from last past_days
| start_num (0) / steps (3) start index of post, number of subsequent pages to be scraped
| increment_num (70) increase number of post index between pages, for default values we will
| be reading posts on three pages (0...69),(70...139),(140...209)
| file_extension ("html")
| target_dir (save path) directory to save (can also be configured in config file)
| returns list of metadata dictionary of each soup
|
| save_topics_as_html_table(html_file=None, path=None, append_timestamp=True) from builtins.type
| reads topics from loval html files and transforms posts into an html table
| for given path and file name. If append_timestamp is set to true, a timestamp will be added to
| the filename
|
| set_config_from_file(config_file) from builtins.type
| sets configuration from json file, having the following content:
| {
| "user": <user>,
| "password": <password>,
| "base":<url base of phpbb forum>
| "target_dir":<save directory>
| "meta_dir":<directory containing list of soup files>
| }
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| DEBUG = False
|
| QUERIES = None
|
| base_url = 'https://<base_url>'
|
| meta_file = None
|
| password = '<password>'
|
| target_dir = None
|
| user = '<user>'
|
| wait_time = 5
###Markdown
Sample ScrapeThe following sample shows application of the ScraperExecutor class that reads latest topics and saves them as HTML files and updates the meta file (list of files downloaded)
###Code
from phpbb_scraper.scraper import ScraperExecutor
# config file path
config_file = r"C:\<path>\config.json"
debug = False # run in debug mode
steps_num = 2 # num of max web pages to be scraped
# set configuration and instanciate web scraper
executor = ScraperExecutor(config_file=config_file,debug=debug)
# scrapes data from forum , saves them to files and returns metadata of each scraped page
metadata = executor.retrieve_last_topics(steps_num=steps_num)
###Output
_____no_output_____
###Markdown
Display of metadata for the scraping of each page: To make it unique, each scraped page (and forum posts later on, as well) gets a hash id alongside the file name, so as to make it ready for analysis in subsequent steps
###Code
# in case everything went fine, you can see the file metadata here (=what is appended to the metadata file)
metadata
###Output
_____no_output_____
###Markdown
Reading Of Scraped DataReading the scraped html data can be done with the read_topics_from_meta() function: it reads the metafile, accesses the referenced files there, and imports each post as a dictionary.
###Code
from phpbb_scraper.scraper import ScraperExecutor
# read the urls from metafile and get post data as dictionary from stored html files
config_file = r"C:\<path>\config.json"
# read the urls from metafile and get metadata from stored html files
debug = False # run in debug mode
# set configuration
executor = ScraperExecutor(config_file=config_file,debug=debug)
# read metafile and access locally stored html files
topics = executor.read_topics_from_meta() #dictionary containing topics metadata
print(f"Number of topics {len(topics)}, type of topics: {type(topics)}")
print(f"Metadata Keys per Post: {list(topics[list(topics.keys())[0]].keys())}")
###Output
_____no_output_____
###Markdown
Having transformed the posts into a dictionary, everything is set for further analysis :-) Scraped Data as HTML TableThe ScraperExecutor method save_topics_as_html_table reads the scraped data and transforms the posts into tabular HTML data
###Code
from phpbb_scraper.scraper import ScraperExecutor
config_file = r"C:\<path>\config.json"
# read the urls from metafile and get metadata from stored html files
debug = False # run in debug mode
# set configuration
executor = ScraperExecutor(config_file=config_file,debug=debug)
# html file name and path
html_file = r"posts_as_html_table"
path = r"C:\<path>\TEST"
add_timestamp = False
# create html table from dictionary and save file locally
executor.save_topics_as_html_table(html_file=html_file,path=path,append_timestamp=add_timestamp)
###Output
_____no_output_____ |
Intro_to_recommender_systems/.ipynb_checkpoints/Song_Recommender-checkpoint.ipynb | ###Markdown
Building a song recommender
###Code
-------------
Dataset used:
-------------
Million Songs Dataset
Source: http://labrosa.ee.columbia.edu/millionsong/
Paper: http://ismir2011.ismir.net/papers/OS6-1.pdf
The current notebook uses a subset of the above data containing 10,000 songs obtained from:
https://github.com/turi-code/tutorials/blob/master/notebooks/recsys_rank_10K_song.ipynb
###Output
_____no_output_____
###Markdown
import pandasfrom sklearn.cross_validation import train_test_splitimport numpy as npimport timefrom sklearn.externals import joblibimport Recommenders as Recommendersimport Evaluation as Evaluation%matplotlib inlineprint('Libraries loaded')
###Code
# Load music data
###Output
_____no_output_____
###Markdown
Read userid-songid-listen_count tripletsThis step might take time to download data from external sourcestriplets_file = 'https://static.turi.com/datasets/millionsong/10000.txt'songs_metadata_file = 'https://static.turi.com/datasets/millionsong/song_data.csv'song_df_1 = pandas.read_table(triplets_file,header=None)song_df_1.columns = ['user_id', 'song_id', 'listen_count']Read song metadatasong_df_2 = pandas.read_csv(songs_metadata_file)Merge the two dataframes above to create input dataframe for recommender systemssong_df = pandas.merge(song_df_1, song_df_2.drop_duplicates(['song_id']), on="song_id", how="left") song_df_1.head()
###Code
# Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
###Output
_____no_output_____
###Markdown
song_df.head()
###Code
## Length of the dataset
###Output
_____no_output_____
###Markdown
len(song_df)
###Code
## Create a subset of the dataset
###Output
_____no_output_____
###Markdown
song_df = song_df.head(10000)
# Merge song title and artist_name columns to make a merged column
song_df['song'] = song_df['title'].map(str) + " - " + song_df['artist_name']
###Code
## Showing the most popular songs in the dataset
###Output
_____no_output_____
###Markdown
song_grouped = song_df.groupby(['song']).agg({'listen_count': 'count'}).reset_index()
grouped_sum = song_grouped['listen_count'].sum()
song_grouped['percentage'] = song_grouped['listen_count'].div(grouped_sum)*100
song_grouped.sort_values(['listen_count', 'song'], ascending=[0, 1])
###Code
## Count number of unique users in the dataset
###Output
_____no_output_____
###Markdown
users = song_df['user_id'].unique()
len(users)
###Code
## Quiz 1. Count the number of unique songs in the dataset
###Output
_____no_output_____
###Markdown
# Fill in the code here
songs = song_df['song'].unique()
len(songs)
###Code
# Create a song recommender
###Output
_____no_output_____
###Markdown
train_data, test_data = train_test_split(song_df, test_size=0.20, random_state=0)
print(train_data.head(5))
###Code
## Simple popularity-based recommender class (Can be used as a black box)
###Output
_____no_output_____
###Markdown
Recommenders.popularity_recommender_py
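The class is treated as a black box here; conceptually, a popularity recommender simply ranks songs by how many distinct users interacted with them and returns the same top-N list to every user. A minimal sketch of that idea (this is not the Recommenders implementation; it only assumes the user_id/song columns used in this notebook):
###Code
# Minimal illustration of the popularity idea (not the Recommenders class itself).
def popularity_top_n(train, user_col='user_id', item_col='song', n=10):
    # Rank items by the number of distinct users who interacted with them.
    counts = train.groupby(item_col)[user_col].nunique().sort_values(ascending=False)
    return counts.head(n).reset_index(name='score')

# popularity_top_n(train_data)  # would return the same ranking regardless of the user
###Output
_____no_output_____
###Markdown
The notebook's own implementation of this idea is instantiated below: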
###Code
### Create an instance of popularity based recommender class
###Output
_____no_output_____
###Markdown
pm = Recommenders.popularity_recommender_py()
pm.create(train_data, 'user_id', 'song')
###Code
### Use the popularity model to make some predictions
###Output
_____no_output_____
###Markdown
user_id = users[5]
pm.recommend(user_id)
###Code
### Quiz 2: Use the popularity based model to make predictions for the following user id (Note the difference in recommendations from the first user id).
###Output
_____no_output_____
###Markdown
# Fill in the code here
user_id = users[8]
pm.recommend(user_id)
###Code
## Build a song recommender with personalization
We now create an item similarity based collaborative filtering model that allows us to make personalized recommendations to each user.
## Class for an item similarity based personalized recommender system (Can be used as a black box)
###Output
_____no_output_____
###Markdown
Recommenders.item_similarity_recommender_py
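Again the class is used as a black box. At its core, item-based collaborative filtering scores a candidate song by how strongly it co-occurs with the songs a user has already listened to, for example via the Jaccard similarity of the two songs' user sets. A rough sketch of that co-occurrence measure (illustration only, not the Recommenders implementation; it assumes the user_id/song columns used in this notebook):
###Code
# Rough sketch of item-item Jaccard similarity on the interaction data
# (illustration only; the Recommenders class implements its own version).
def jaccard_similarity(train, song_a, song_b, user_col='user_id', item_col='song'):
    users_a = set(train.loc[train[item_col] == song_a, user_col])
    users_b = set(train.loc[train[item_col] == song_b, user_col])
    union = users_a | users_b
    return len(users_a & users_b) / len(union) if union else 0.0

# jaccard_similarity(train_data, 'U Smile - Justin Bieber', 'Yellow - Coldplay')
###Output
_____no_output_____
###Markdown
The full personalized recommender from the Recommenders module is created below: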
###Code
### Create an instance of item similarity based recommender class
###Output
_____no_output_____
###Markdown
is_model = Recommenders.item_similarity_recommender_py()
is_model.create(train_data, 'user_id', 'song')
###Code
### Use the personalized model to make some song recommendations
###Output
_____no_output_____
###Markdown
# Print the songs for the user in training data
user_id = users[5]
user_items = is_model.get_user_items(user_id)
print("------------------------------------------------------------------------------------")
print("Training data songs for the user userid: %s:" % user_id)
print("------------------------------------------------------------------------------------")
for user_item in user_items:
    print(user_item)
print("----------------------------------------------------------------------")
print("Recommendation process going on:")
print("----------------------------------------------------------------------")
# Recommend songs for the user using personalized model
is_model.recommend(user_id)
###Code
### Quiz 3. Use the personalized model to make recommendations for the following user id. (Note the difference in recommendations from the first user id.)
###Output
_____no_output_____
###Markdown
user_id = users[7]
# Fill in the code here
user_items = is_model.get_user_items(user_id)
print("------------------------------------------------------------------------------------")
print("Training data songs for the user userid: %s:" % user_id)
print("------------------------------------------------------------------------------------")
for user_item in user_items:
    print(user_item)
print("----------------------------------------------------------------------")
print("Recommendation process going on:")
print("----------------------------------------------------------------------")
# Recommend songs for the user using personalized model
is_model.recommend(user_id)
###Code
### We can also apply the model to find similar songs to any song in the dataset
###Output
_____no_output_____
###Markdown
is_model.get_similar_items(['U Smile - Justin Bieber'])
###Code
### Quiz 4. Use the personalized recommender model to get similar songs for the following song.
###Output
_____no_output_____
###Markdown
song = 'Yellow - Coldplay'
# Fill in the code here
is_model.get_similar_items([song])
###Code
# Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves.
## Class to calculate precision and recall (This can be used as a black box)
###Output
_____no_output_____
###Markdown
Evaluation.precision_recall_calculator
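The evaluation class is also used as a black box. The underlying idea is standard: for each sampled user, compare the top-k recommended songs against the songs that user actually has in the test split, then average precision@k and recall@k over users for a range of k values. A small sketch of that computation for a single user (illustration only, not the Evaluation implementation):
###Code
# Illustration of precision@k and recall@k for one user (not the Evaluation class).
def precision_recall_at_k(recommended, relevant, k):
    top_k = list(recommended)[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# precision_recall_at_k(['song A', 'song B', 'song C'], {'song B', 'song D'}, k=3)
###Output
_____no_output_____
###Markdown
The class below performs this calculation over a sample of users for both models: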
###Code
## Use the above precision recall calculator class to calculate the evaluation measures
###Output
_____no_output_____
###Markdown
start = time.time()
# Define what percentage of users to use for precision recall calculation
user_sample = 0.05
# Instantiate the precision_recall_calculator class
pr = Evaluation.precision_recall_calculator(test_data, train_data, pm, is_model)
# Call method to calculate precision and recall values
(pm_avg_precision_list, pm_avg_recall_list, ism_avg_precision_list, ism_avg_recall_list) = pr.calculate_measures(user_sample)
end = time.time()
print(end - start)
###Code
## Code to plot precision recall curve
###Output
_____no_output_____
###Markdown
import pylab as pl

# Method to generate precision and recall curve
def plot_precision_recall(m1_precision_list, m1_recall_list, m1_label, m2_precision_list, m2_recall_list, m2_label):
    pl.clf()
    pl.plot(m1_recall_list, m1_precision_list, label=m1_label)
    pl.plot(m2_recall_list, m2_precision_list, label=m2_label)
    pl.xlabel('Recall')
    pl.ylabel('Precision')
    pl.ylim([0.0, 0.20])
    pl.xlim([0.0, 0.20])
    pl.title('Precision-Recall curve')
    pl.legend(loc="upper right")
    pl.legend(loc=9, bbox_to_anchor=(0.5, -0.2))
    pl.show()

print("Plotting precision recall curves.")
plot_precision_recall(pm_avg_precision_list, pm_avg_recall_list, "popularity_model", ism_avg_precision_list, ism_avg_recall_list, "item_similarity_model")
###Code
### Generate Precision Recall curve using pickled results on a larger data subset(Python 3)
###Output
_____no_output_____
###Markdown
print("Plotting precision recall curves for a larger subset of data (100,000 rows) (user sample = 0.005).")Read the persisted files pm_avg_precision_list = joblib.load('pm_avg_precision_list_3.pkl')pm_avg_recall_list = joblib.load('pm_avg_recall_list_3.pkl')ism_avg_precision_list = joblib.load('ism_avg_precision_list_3.pkl')ism_avg_recall_list = joblib.load('ism_avg_recall_list_3.pkl')print("Plotting precision recall curves.")plot_precision_recall(pm_avg_precision_list, pm_avg_recall_list, "popularity_model", ism_avg_precision_list, ism_avg_recall_list, "item_similarity_model")
###Code
### Generate Precision Recall curve using pickled results on a larger data subset(Python 2.7)
###Output
_____no_output_____
###Markdown
print("Plotting precision recall curves for a larger subset of data (100,000 rows) (user sample = 0.005).")pm_avg_precision_list = joblib.load('pm_avg_precision_list_2.pkl')pm_avg_recall_list = joblib.load('pm_avg_recall_list_2.pkl')ism_avg_precision_list = joblib.load('ism_avg_precision_list_2.pkl')ism_avg_recall_list = joblib.load('ism_avg_recall_list_2.pkl')print("Plotting precision recall curves.")plot_precision_recall(pm_avg_precision_list, pm_avg_recall_list, "popularity_model", ism_avg_precision_list, ism_avg_recall_list, "item_similarity_model")
###Code
The curve shows that the personalized model provides much better performance than the popularity model.
# Matrix Factorization based Recommender System
###Output
_____no_output_____
###Markdown
Using SVD matrix factorization based collaborative filtering recommender system
--------------------------------------------------------------------------------
The following code implements a Singular Value Decomposition (SVD) based matrix factorization collaborative filtering recommender system. The user ratings matrix used is a small matrix as follows:

        Item0  Item1  Item2  Item3
User0     3      1      2      3
User1     4      3      4      3
User2     3      2      1      5
User3     1      6      5      2
User4     0      0      5      0

As we can see in the above matrix, all users except user 4 rate all items. The code calculates predicted recommendations for user 4.
###Code
### Import the required libraries
###Output
_____no_output_____
###Markdown
# Code source written with help from:
# http://antoinevastel.github.io/machine%20learning/python/2016/02/14/svd-recommender-system.html
import math as mt
import csv
from sparsesvd import sparsesvd   # used for matrix factorization
import numpy as np
from scipy.sparse import csc_matrix   # used for sparse matrix
from scipy.sparse.linalg import *   # used for matrix multiplication

# Note: You may need to install the library sparsesvd. Documentation for sparsesvd method can be found here:
# https://pypi.python.org/pypi/sparsesvd/
###Code
### Methods to compute SVD and recommendations
###Output
_____no_output_____
###Markdown
# constants defining the dimensions of our User Rating Matrix (URM)
MAX_PID = 4
MAX_UID = 5

# Compute SVD of the user ratings matrix
def computeSVD(urm, K):
    U, s, Vt = sparsesvd(urm, K)

    dim = (len(s), len(s))
    S = np.zeros(dim, dtype=np.float32)
    for i in range(0, len(s)):
        S[i,i] = mt.sqrt(s[i])

    U = csc_matrix(np.transpose(U), dtype=np.float32)
    S = csc_matrix(S, dtype=np.float32)
    Vt = csc_matrix(Vt, dtype=np.float32)
    return U, S, Vt

# Compute estimated rating for the test user
def computeEstimatedRatings(urm, U, S, Vt, uTest, K, test):
    rightTerm = S*Vt
    estimatedRatings = np.zeros(shape=(MAX_UID, MAX_PID), dtype=np.float16)
    for userTest in uTest:
        prod = U[userTest, :]*rightTerm
        # we convert the vector to dense format in order to get the indices of the movies with the best estimated ratings
        estimatedRatings[userTest, :] = prod.todense()
        recom = (-estimatedRatings[userTest, :]).argsort()[:250]
    return recom
###Code
### Use SVD to make predictions for a test user id, say 4
###Output
_____no_output_____
###Markdown
# Used in SVD calculation (number of latent factors)
K = 2

# Initialize a sample user rating matrix
urm = np.array([[3, 1, 2, 3], [4, 3, 4, 3], [3, 2, 1, 5], [1, 6, 5, 2], [5, 0, 0, 0]])
urm = csc_matrix(urm, dtype=np.float32)

# Compute SVD of the input user ratings matrix
U, S, Vt = computeSVD(urm, K)

# Test user set as user_id 4 with ratings [0, 0, 5, 0]
uTest = [4]
print("User id for whom recommendations are needed: %d" % uTest[0])

# Get estimated rating for test user
print("Predicted ratings:")
uTest_recommended_items = computeEstimatedRatings(urm, U, S, Vt, uTest, K, True)
print(uTest_recommended_items)
###Code
### Quiz 4
###Output
_____no_output_____
###Markdown
a.) Change the input matrix row for test userid 4 in the user ratings matrix to the following value, and note the difference in predicted recommendations in this case:

i.) [5 0 0 0]

(Note*: The predicted ratings produced by the code also include the items already rated by the test user. This has been left this way on purpose for a better understanding of SVD.)

SVD tutorial: http://web.mit.edu/be.400/www/SVD/Singular_Value_Decomposition.htm
###Code
## Understanding Intuition behind SVD
###Output
_____no_output_____
###Markdown
SVD result gives three matrices as output: U, S and Vt (T in Vt means transpose). Matrix U represents user vectors and Matrix Vt represents item vectors. In simple terms, U represents users as 2 dimensional points in the latent vector space, and Vt represents items as 2 dimensional points in the same space. Next, we print the matrices U, S and Vt and try to interpret them. Think about how the points for users and items will look in a 2 dimensional space. For example, the following code plots all user vectors from the matrix U in the 2 dimensional space. Similarly, we plot all the item vectors in the same plot from the matrix Vt.
###Code
%matplotlib inline
from pylab import *
#Plot all the users
print("Matrix Dimensions for U")
print(U.shape)
for i in range(0, U.shape[0]):
plot(U[i,0], U[i,1], marker = "*", label="user"+str(i))
for j in range(0, Vt.T.shape[0]):
plot(Vt.T[j,0], Vt.T[j,1], marker = 'd', label="item"+str(j))
legend(loc="upper right")
title('User vectors in the Latent semantic space')
ylim([-0.7, 0.7])
xlim([-0.7, 0])
show()
###Output
Matrix Dimensions for U
(5, 2)
|
classification-problems/logistic-regression-classifier-with-L2-regularization/breast-cancer-classifier-from-scratch.ipynb | ###Markdown
Breast Cancer Classifier from scratch Load the required libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
Load and Read the dataset Change the column names as well
###Code
df = pd.read_csv("datasets/breast-cancer-wisconsin.data", header = None)
df.rename(columns = {0:"id",
1:"clump-thickness",
2:"cell-size",
3:"cell-shape",
4:"marginal-adhesion",
5:"epithelial-cell-size",
6:"bare-nuclei",
7:"bland-chromatin",
8:"normal-nucleoli",
9:"mitoses",
10:"class"},
inplace = True)
df.head(10)
###Output
_____no_output_____
###Markdown
Count the observations for different classes
###Code
df['class'].value_counts()
# 2 is for benign cancer
# 4 is for malignant cancer
###Output
_____no_output_____
###Markdown
Create input matrix and labels for the given dataset
###Code
label_vector = df.iloc[:, 10] #class labels: 2 = benign, 4 = malignant
feature_vector = df.iloc[:, 1:10] #features vectors
feature_vector
###Output
_____no_output_____
###Markdown
Relabel the observed values as 0 and 1
###Code
# type(label_vector)
label_vector = label_vector.replace(2,0)
label_vector = label_vector.replace(4,1)
label_vector.value_counts()
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
print("Does the feature set contain any null values:",feature_vector.isnull().values.any())
print("\nFeature set information")
feature_vector.info()
###Output
Does the feature set contain any null values: False
Feature set information
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 699 entries, 0 to 698
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 clump-thickness 699 non-null int64
1 cell-size 699 non-null int64
2 cell-shape 699 non-null int64
3 marginal-adhesion 699 non-null int64
4 epithelial-cell-size 699 non-null int64
5 bare-nuclei 699 non-null object
6 bland-chromatin 699 non-null int64
7 normal-nucleoli 699 non-null int64
8 mitoses 699 non-null int64
dtypes: int64(8), object(1)
memory usage: 49.3+ KB
###Markdown
The last cell shows that the feature "bare-nuclei" is stored as an object dtype, which is not compatible with our calculations. So we need to convert it to int64 so that it becomes homogeneous with the rest of the dataset.
###Code
feature_vector.mean()
###Output
_____no_output_____
###Markdown
The last cell does not contain the mean value for the "bare-nuclei" feature, as it is of type object. Let's try to change its type.
###Code
# feature_vector["bare-nuclei"].astype('int64')
###Output
_____no_output_____
###Markdown
We got an error while trying to change the type of the "bare-nuclei" feature: ValueError: invalid literal for int() with base 10: '?'. This means that some entries within that feature contain '?' as a value (and are hence incompatible).
###Code
# tmp = feature_vector["bare-nuclei"].where(lambda x : x != '?')
# tmp.str.contains("?", regex=False).value_counts()
feature_vector["bare-nuclei"].str.contains("?", regex=False).value_counts()
###Output
_____no_output_____
###Markdown
There are 16 such entries, and they are what makes the "bare-nuclei" feature incompatible for modelling, so we have to deal with them before changing the type of that feature. One option would be to drop these rows, together with the corresponding entries of the other features. But doing so would leave us with less data for modelling, and we would also lose whatever patterns or insights those rows might hold. Another way to solve the issue is to replace the incompatible entries with the mean of all the entries for that particular feature, which preserves those insights. Dropping the entries would still be advisable if their number were insignificant compared to the volume of the entire dataset.
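As a side note, the same imputation can be expressed more compactly with pandas; the sketch below is only an alternative shown for comparison, not what this notebook does (and note that it excludes the '?' rows from the mean's denominator, unlike the manual version that follows):
###Code
# Alternative sketch (not used below): coerce '?' to NaN, then impute with the column mean.
bare = pd.to_numeric(feature_vector["bare-nuclei"], errors="coerce")
bare = bare.fillna(bare.mean()).astype("int64")
###Output
_____no_output_____
###Markdown
Here, the mean is computed manually to keep every step explicit: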
###Code
def get_mean_for_bare_nuclei():
    # Compute the mean of the 'bare-nuclei' column, skipping the '?' placeholder entries
    # (the denominator is the full column length, as in the original notebook).
    tmp = feature_vector["bare-nuclei"].to_numpy()
    total = 0
    for n in tmp:
        if n == '?':
            continue
        else:
            total += int(n)
    return total / np.size(tmp)
## END

mean_value = get_mean_for_bare_nuclei()
# mean_value # 3.463519313304721
###Output
_____no_output_____
###Markdown
We will replace the incompatible values with the mean value, cast to an integer.
###Code
feature_vector["bare-nuclei"] = feature_vector["bare-nuclei"].replace('?', int(mean_value))
###Output
_____no_output_____
###Markdown
Now we will change the type of the "bare-nuclei" feature to int64.
###Code
feature_vector["bare-nuclei"] = feature_vector["bare-nuclei"].astype('int64')
feature_vector.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 699 entries, 0 to 698
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 clump-thickness 699 non-null int64
1 cell-size 699 non-null int64
2 cell-shape 699 non-null int64
3 marginal-adhesion 699 non-null int64
4 epithelial-cell-size 699 non-null int64
5 bare-nuclei 699 non-null int64
6 bland-chromatin 699 non-null int64
7 normal-nucleoli 699 non-null int64
8 mitoses 699 non-null int64
dtypes: int64(9)
memory usage: 49.3 KB
###Markdown
Split the training and testing dataHere we will keep 30% of the entire dataset for testing.
###Code
X_train, X_test, Y_train, Y_test = train_test_split(feature_vector, label_vector, test_size = 0.3, random_state = 2020)
print("shape of input training data:",X_train.shape)
print("shape of output training data:",Y_train.shape)
print("shape of input testing data:",X_test.shape)
print("shape of output testing data:",Y_test.shape)
###Output
shape of input training data: (489, 9)
shape of output training data: (489,)
shape of input testing data: (210, 9)
shape of output testing data: (210,)
###Markdown
Z-score Normalization
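Each feature x is rescaled as z = (x - mean) / std, where the mean and standard deviation are estimated from the training split only; that is why fit_transform is applied to X_train while X_test only gets transform.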
###Code
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the model first
###Code
def sigmoid_function(args):
return (1/(1 + np.exp(-args)))
## END
def logLiklihood(z, y):
    """Log-likelihood function (cost function to be minimized in logistic regression classification)."""
    return -1 * np.sum((y * np.log(sigmoid_function(z))) + ((1 - y) * np.log(1 - sigmoid_function(z))))
## END
def model_optimize(w, b, X, Y):
m = X.shape[0]
#Prediction
final_result = sigmoid_function(np.dot(w, X.T) + b)
# Y_T = Y.T
# cost = (-1/m)*(np.sum((Y_T*np.log(final_result)) + ((1 - Y_T)*(np.log(1 - final_result)))))
    # Binary cross-entropy cost (note the parentheses around (1 - Y) in the second term)
    cost = (-1/m)*(np.sum((np.asarray(Y.T)*np.log(final_result)) + ((1 - np.asarray(Y.T))*np.log(1 - final_result))))
#Gradient calculation
dw = (1/m)*(np.dot(X.T, (final_result - np.asarray(Y.T)).T))
db = (1/m)*(np.sum(final_result - np.asarray(Y.T)))
grads = {"dw": dw, "db": db}
return grads, cost
## END
print("model_optimize")
def model_predict(w, b, X, Y, learning_rate, no_iterations):
costs = []
for i in range(no_iterations):
grads, cost = model_optimize(w, b, X, Y)
dw = grads["dw"]
db = grads["db"]
#weight update
w = w - (learning_rate * (dw.T))
b = b - (learning_rate * db)
if (i % 100 == 0):
costs.append(cost)
print("Cost after %i iteration is %f" %(i, cost))
#final parameters
coeff = {"w": w, "b": b}
gradient = {"dw": dw, "db": db}
return coeff, gradient, costs
## END
print("model_predict")
def weightInitialization(n_features):
w = np.zeros((1, n_features))
b = 0
return w, b
## END
print("weightInitialization")
def predict(final_pred, m):
y_pred = np.zeros((1,m))
for i in range(final_pred.shape[1]):
if final_pred[0][i] > 0.5:
y_pred[0][i] = 1
return y_pred
## END
print("predict")
n_features = X_train_std.shape[1]
print('Number of Features', n_features)
w, b = weightInitialization(n_features)
coeff, gradient, costs = model_predict(w, b, X_train_std, Y_train, learning_rate = 0.01, no_iterations = 5001)
#Final prediction
w = coeff["w"]
b = coeff["b"]
print('Optimized weights', w)
print('Optimized intercept', b)
final_train_pred = sigmoid_function(np.dot(w, X_train_std.T) + b)
final_test_pred = sigmoid_function(np.dot(w, X_test_std.T) + b)
m_tr = X_train_std.shape[0]
m_ts = X_test_std.shape[0]
y_tr_pred = predict(final_train_pred, m_tr)
print('Training Accuracy', accuracy_score(y_tr_pred.T, Y_train))
###Output
Training Accuracy 0.9652351738241309
|
notebooks/community/sdk/sdk_automl_tabular_binary_classification_batch.ipynb | ###Markdown
Vertex SDK: AutoML training tabular binary classification model for batch prediction Run in Colab View on GitHub Open in Vertex AI Workbench OverviewThis tutorial demonstrates how to use the Vertex SDK to create tabular binary classification models and do batch prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model. DatasetThe dataset used for this tutorial is the [Bank Marketing](https://pantheon.corp.google.com/storage/browser/_details/cloud-ml-tables-data/bank-marketing.csv) . This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ObjectiveIn this tutorial, you create an AutoML tabular binary classification model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environmentIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.Otherwise, make sure your environment meets this notebook's requirements. You need the following:- The Cloud Storage SDK- Git- Python 3- virtualenv- Jupyter notebook running in a virtual environment with Python 3The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).2. [Install Python 3](https://cloud.google.com/python/setupinstalling_python).3. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.6. Open this notebook in the Jupyter Notebook Dashboard. InstallationInstall the latest version of Vertex SDK for Python.
###Code
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtimeThis tutorial does not require a GPU runtime. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)4. If you are running this notebook locally, you will need to install the [Cloud SDK]((https://cloud.google.com/sdk)).5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML tabular binary classification model. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"
###Output
_____no_output_____
###Markdown
Quick peek at your dataThis tutorial uses a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by taking a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.For training, you also need to know the heading name of the label column, which is saved as `label_column`. For this dataset, it is the last column in the CSV file.
###Code
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
###Output
_____no_output_____
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create` method for the `TabularDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.- `bq_source`: Alternatively, import data items from a BigQuery table into the `Dataset` resource.This operation may take several minutes.
###Code
dataset = aip.TabularDataset.create(
display_name="Bank Marketing" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE]
)
print(dataset.resource_name)
###Output
_____no_output_____
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="bank_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.When completed, the `run` method returns the `Model` resource.The execution of the training pipeline will take up to 20 minutes.
###Code
model = dag.run(
dataset=dataset,
model_display_name="bank_" + TIMESTAMP,
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column=label_column,
)
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=bank_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Send a batch prediction requestSend a batch prediction request to your deployed model. Make test itemsYou will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is supported only as CSV. For the CSV file, you make:- The first line is the heading with the feature (fields) heading names.- Each remaining line is a separate prediction request with the corresponding feature values.For example: "feature_1", "feature_2". ... value_1, value_2, ...
###Code
! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv
! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv
! cut -d, -f1-16 tmp.csv > batch.csv
gcs_input_uri = BUCKET_NAME + "/test.csv"
! gsutil cp batch.csv $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `instances_format`: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.- `predictions_format`: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="bank_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="csv",
predictions_format="csv",
sync=False,
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
###Code
batch_predict_job.wait()
###Output
_____no_output_____
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a CSV format:- CSV header + predicted_label- CSV row + prediction, per prediction request
###Code
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
###Output
_____no_output_____
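###Markdown
If you prefer to inspect the results as a table rather than as raw lines, the prediction shards (CSV in this tutorial) can also be collected into a single pandas dataframe. This is an optional sketch that reuses the variables from the previous cell:
###Code
import pandas as pd

# Optional: gather all prediction CSV shards into one dataframe for easier inspection.
frames = []
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        frames.append(pd.read_csv(gfile))
if frames:
    predictions_df = pd.concat(frames, ignore_index=True)
    print(predictions_df.head())
###Output
_____no_output_____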
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Vertex SDK: AutoML training tabular binary classification model for batch prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex SDK to create tabular binary classification models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the [Bank Marketing](gs://cloud-ml-tables-data/bank-marketing.csv). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ObjectiveIn this tutorial, you create an AutoML tabular binary classification model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex SDK.
###Code
import sys
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = '--user'
else:
USER_FLAG = ''
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex SDK and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/ai-platform-unified/docs/general/locations)
###Code
REGION = 'us-central1' #@param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" #@param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDKInitialize the Vertex SDK for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML tabular binary classification model. Create a Dataset ResourceFirst, you create an tabular Dataset resource for the Bank Marketing dataset. Data preparationThe Vertex `Dataset` resource for tabular has a couple of requirements for your tabular data.- Must be in a CSV file or a BigQuery query. CSVFor tabular binary classification, the CSV file has a few requirements:- The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.- All but one column are features.- One column is the label, which you will specify when you subsequently create the training pipeline. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = 'gs://cloud-ml-tables-data/bank-marketing.csv'
###Output
_____no_output_____
###Markdown
Quick peek at your dataYou will use a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by taking a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.For training, you also need to know the heading name of the label column, which is saved as `label_column`. For this dataset, it is the last column in the CSV file.
###Code
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(',')[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
###Output
_____no_output_____
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create()` method for the `TabularDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index file to import the data items into the `Dataset` resource.This operation may take several minutes.
###Code
dataset = aip.TabularDataset.create(
display_name="Bank Marketing" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE]
)
print(dataset.resource_name)
###Output
_____no_output_____
###Markdown
Train the modelNow train an AutoML tabular binary classification model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create a Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create and run training pipelineTo train an AutoML tabular binary classification model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model. - `forecasting`: A tabular forecasting model.- `column_transformations`: (Optional) Transformations to apply to the input columns.- `optimization_objective`: The optimization objective to minimize or maximize. - `minimize-log-loss`
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="bank_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss"
)
###Output
_____no_output_____
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run()`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `target_column`: The name of the column to train as the label.- `training_fraction_split`: The percentage of the dataset to use for training.- `validation_fraction_split`: The percentage of the dataset to use for validation.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.When completed, the `run` method returns the `Model` resource.The execution of the training pipeline will take up to 20 minutes.
###Code
model = dag.run(
dataset=dataset,
target_column=label_column,
model_display_name="bank_" + TIMESTAMP,
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=1000,
disable_early_stopping=False
)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for online prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resources for the batch prediction request. Make a batch prediction requestNow send a batch prediction request to your deployed model. Make test itemsYou will use synthetic data as test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
###Code
HEADING = "Age,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome,Deposit"
INSTANCE_1 = "58,managment,married,teritary,no,2143,yes,no,unknown,5,may,261,1,-1,0, unknown"
INSTANCE_2 = "44,technician,single,secondary,no,39,yes,no,unknown,5,may,151,1,-1,0,unknown"
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is supported only as CSV. For the CSV file, you make:- The first line is the heading with the feature (fields) heading names.- Each remaining line is a separate prediction request with the corresponding feature values.For example: "feature_1", "feature_2". ... value_1, value_2, ...
###Code
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + '/test.csv'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
f.write(HEADING + '\n')
f.write(str(INSTANCE_1) + '\n')
f.write(str(INSTANCE_2) + '\n')
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your `Model` resource is trained, you can make a batch prediction by invoking the `batch_predict()` method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `sync`: If set to `True`, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="bank_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete.
###Code
batch_predict_job.wait()
###Output
_____no_output_____
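###Markdown
As an alternative to the blocking `wait()` call above, you can poll the job state yourself. The sketch below compares terminal states by name to avoid importing the JobState enum explicitly; depending on the SDK version you may need to re-fetch the job object to observe state changes.
###Code
# A sketch of polling instead of blocking. Terminal states are compared by
# their .name string; adjust the sleep interval to taste.
import time

TERMINAL_STATES = {"JOB_STATE_SUCCEEDED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

while batch_predict_job.state.name not in TERMINAL_STATES:
    print("Batch prediction job state:", batch_predict_job.state.name)
    time.sleep(60)

print("Final state:", batch_predict_job.state.name)
###Output
_____no_output_____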
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method `iter_outputs()` to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:- `content`: The prediction request.- `prediction`: The prediction response. - `ids`: The internal assigned unique identifiers for each prediction request. - `displayNames`: The class names for each class label. - `confidences`: The predicted confidence, between 0 and 1, per class label.
###Code
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
break
###Output
_____no_output_____
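###Markdown
A short sketch of parsing one of those result lines programmatically is shown below. The field names follow the description above; if your output uses different keys (for example `instance` instead of `content`), adjust accordingly.
###Code
# A sketch of parsing one result line as JSON and extracting the top class.
import json

if prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_results[0]}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        record = json.loads(gfile.readline())
    pred = record["prediction"]
    # Pair each class label with its confidence and keep the highest one.
    label, confidence = max(zip(pred["displayNames"], pred["confidences"]),
                            key=lambda pair: pair[1])
    print("Request:", record["content"])
    print("Top class:", label, "with confidence", confidence)
###Output
_____no_output_____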
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex dataset object
try:
if delete_dataset and 'dataset' in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if delete_model and 'model' in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if delete_endpoint and 'endpoint' in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if delete_batchjob and 'batch_predict_job' in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
if delete_bucket and 'BUCKET_NAME' in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Vertex SDK: AutoML training tabular binary classification model for batch prediction Run in Colab View on GitHub Open in Google Cloud Notebooks OverviewThis tutorial demonstrates how to use the Vertex SDK to create tabular binary classification models and do batch prediction using a Google Cloud [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users) model. DatasetThe dataset used for this tutorial is the [Bank Marketing](https://pantheon.corp.google.com/storage/browser/_details/cloud-ml-tables-data/bank-marketing.csv) dataset. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ObjectiveIn this tutorial, you create an AutoML tabular binary classification model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Set up your local development environmentIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.Otherwise, make sure your environment meets this notebook's requirements. You need the following:- The Google Cloud SDK- Git- Python 3- virtualenv- Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.6. Open this notebook in the Jupyter Notebook Dashboard. InstallationInstall the latest version of Vertex SDK for Python.
###Code
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
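###Markdown
After the kernel restarts, you can optionally confirm which version of the SDK is now importable.
###Code
# Optional sanity check after the restart: print the installed SDK version.
import google.cloud.aiplatform as aiplatform

print("google-cloud-aiplatform version:", aiplatform.__version__)
###Output
_____no_output_____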
###Markdown
Before you begin GPU runtimeThis tutorial does not require a GPU runtime. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants
###Code
import google.cloud.aiplatform as aip
###Output
_____no_output_____
###Markdown
Initialize Vertex SDK for PythonInitialize the Vertex SDK for Python for your project and corresponding bucket.
###Code
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML tabular binary classification model. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"
###Output
_____no_output_____
###Markdown
Quick peek at your dataThis tutorial uses a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.For training, you also need to know the heading name of the label column, which is saved as `label_column`. For this dataset, it is the last column in the CSV file.
###Code
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
###Output
_____no_output_____
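###Markdown
The same peek can be done with pandas instead of shell commands. This is only a sketch: reading `gs://` paths with pandas assumes the gcsfs package is available in the environment.
###Code
# Alternative peek with pandas (assumes gcsfs is installed so pandas can read
# gs:// paths directly).
import pandas as pd

preview = pd.read_csv(IMPORT_FILE, nrows=5)
print(preview)
print("Label column:", preview.columns[-1])
###Output
_____no_output_____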
###Markdown
Create the DatasetNext, create the `Dataset` resource using the `create` method for the `TabularDataset` class, which takes the following parameters:- `display_name`: The human readable name for the `Dataset` resource.- `gcs_source`: A list of one or more dataset index files to import the data items into the `Dataset` resource.- `bq_source`: Alternatively, import data items from a BigQuery table into the `Dataset` resource.This operation may take several minutes.
###Code
dataset = aip.TabularDataset.create(
display_name="Bank Marketing" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE]
)
print(dataset.resource_name)
###Output
_____no_output_____
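###Markdown
For reference, the same `Dataset` resource could be created from a BigQuery table by passing `bq_source` instead of `gcs_source`, as mentioned above. The cell below is disabled by default because the table URI is a hypothetical placeholder, not a real table.
###Code
# Disabled by default: create the Dataset from a BigQuery table instead of a
# CSV file in Cloud Storage. The table URI is a hypothetical placeholder.
USE_BIGQUERY_SOURCE = False
BQ_SOURCE = "bq://your-project.your_dataset.bank_marketing"  # hypothetical

if USE_BIGQUERY_SOURCE:
    bq_dataset = aip.TabularDataset.create(
        display_name="Bank Marketing BQ_" + TIMESTAMP,
        bq_source=BQ_SOURCE,
    )
    print(bq_dataset.resource_name)
###Output
_____no_output_____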
###Markdown
Create and run training pipelineTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipelineAn AutoML training pipeline is created with the `AutoMLTabularTrainingJob` class, with the following parameters:- `display_name`: The human readable name for the `TrainingJob` resource.- `optimization_prediction_type`: The type of task to train the model for. - `classification`: A tabular classification model. - `regression`: A tabular regression model.- `column_transformations`: (Optional) Transformations to apply to the input columns (a sketch is shown after the next cell).- `optimization_objective`: The optimization objective to minimize or maximize. - binary classification: - `minimize-log-loss` - `maximize-au-roc` - `maximize-au-prc` - `maximize-precision-at-recall` - `maximize-recall-at-precision` - multi-class classification: - `minimize-log-loss` - regression: - `minimize-rmse` - `minimize-mae` - `minimize-rmsle`The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
###Code
dag = aip.AutoMLTabularTrainingJob(
display_name="bank_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
###Output
_____no_output_____
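###Markdown
The cell below is a disabled-by-default sketch of what explicit `column_transformations` could look like instead of letting AutoML infer them. The dictionary format follows the description of the parameter above, and the column names are taken from the dataset heading; treat it as an illustration rather than a required step.
###Code
# Disabled by default: explicit per-column transformations. The column names
# come from the CSV heading shown earlier; "auto", "numeric" and "categorical"
# are transformation types for this parameter.
SPECIFY_TRANSFORMATIONS = False

if SPECIFY_TRANSFORMATIONS:
    transformations = [
        {"numeric": {"column_name": "Age"}},
        {"categorical": {"column_name": "Job"}},
        {"auto": {"column_name": "Balance"}},
    ]
    dag = aip.AutoMLTabularTrainingJob(
        display_name="bank_" + TIMESTAMP,
        optimization_prediction_type="classification",
        optimization_objective="minimize-log-loss",
        column_transformations=transformations,
    )
    print(dag)
###Output
_____no_output_____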
###Markdown
Run the training pipelineNext, you run the DAG to start the training job by invoking the method `run`, with the following parameters:- `dataset`: The `Dataset` resource to train the model.- `model_display_name`: The human readable name for the trained model.- `training_fraction_split`: The percentage of the dataset to use for training.- `test_fraction_split`: The percentage of the dataset to use for test (holdout data).- `validation_fraction_split`: The percentage of the dataset to use for validation.- `target_column`: The name of the column to train as the label.- `budget_milli_node_hours`: (optional) Maximum training time specified in unit of millihours (1000 = hour).- `disable_early_stopping`: If `True`, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.When the `run` method completes, it returns the `Model` resource.The execution of the training pipeline will take up to 20 minutes.
###Code
model = dag.run(
dataset=dataset,
model_display_name="bank_" + TIMESTAMP,
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column=label_column,
)
###Output
_____no_output_____
###Markdown
Review model evaluation scoresAfter your model has finished training, you can review the evaluation scores for it.First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
###Code
# Get model resource ID
models = aip.Model.list(filter="display_name=bank_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
###Output
_____no_output_____
###Markdown
Send a batch prediction requestSend a batch prediction request to your trained model. Make test itemsYou will use synthetic data as the test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. Make the batch input fileNow make a batch input file, which you will store in your Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular data is only supported in CSV format. For the CSV file:- The first line is the heading with the feature (field) names.- Each remaining line is a separate prediction request with the corresponding feature values.For example: "feature_1", "feature_2". ... value_1, value_2, ...
###Code
! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv
! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv
! cut -d, -f1-16 tmp.csv > batch.csv
gcs_input_uri = BUCKET_NAME + "/test.csv"
! gsutil cp batch.csv $gcs_input_uri
###Output
_____no_output_____
###Markdown
Make the batch prediction requestNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:- `job_display_name`: The human readable name for the batch prediction job.- `gcs_source`: A list of one or more batch request input files.- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.- `instances_format`: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.- `predictions_format`: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
###Code
batch_predict_job = model.batch_predict(
job_display_name="bank_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="csv",
predictions_format="csv",
sync=False,
)
print(batch_predict_job)
###Output
_____no_output_____
###Markdown
Wait for completion of batch prediction jobNext, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
###Code
batch_predict_job.wait()
###Output
_____no_output_____
###Markdown
Get the predictionsNext, get the results from the completed batch prediction job.The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a CSV format:- CSV header + predicted_label- CSV row + prediction, per prediction request
###Code
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- AutoML Training Job- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____ |
jira-contributions.ipynb | ###Markdown
Contributors ranked by participation in JIRA issues they did not create themselves (commenting on / helping with the JIRA)
###Code
c = contributions[contributions.author != contributions.owner][["identifier","author"]].groupby(["identifier","author"]).count() \
.reset_index().groupby("author").count()
c.sort_values("identifier",ascending=False).head(20)
###Output
_____no_output_____
###Markdown
Contributors ranked by participation in any issue
###Code
c = contributions[["identifier","author"]].groupby(["identifier","author"]).count() \
.reset_index().groupby("author").count()
c.sort_values("identifier",ascending=False).head(20)
###Output
_____no_output_____
###Markdown
Bus factor (number of contributors responsible for 50% of the issue creations) over the last half year Contributors who together account for half of all contributions
###Code
prcreated = contributions[contributions.type == "JIRA_CREATED"]
prcreated = prcreated[prcreated.date > (datetime.datetime.now() - datetime.timedelta(days=182)).strftime('%Y-%m-%d')]
prcreated = prcreated[["identifier"]].groupby(prcreated.author).count().reset_index()
prcreated = prcreated.sort_values("identifier", ascending=False)
prcreated = prcreated.reset_index(drop=True)
prcreated["cs"] = prcreated["identifier"].cumsum()
prcreated["ratio"]= prcreated.identifier / prcreated.identifier.sum() * 100
prcreated[prcreated.cs < prcreated.identifier.sum() / 2]
###Output
_____no_output_____
###Markdown
Pony number (bus factor)
###Code
pn = (prcreated[prcreated.cs < prcreated.identifier.sum() / 2]).shape[0] + 1
pn
###Output
_____no_output_____
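###Markdown
The same steps can be wrapped in a small helper so the bus factor can be recomputed for any time window. This is a sketch based directly on the cells above.
###Code
# Helper: smallest number of authors covering 50% of JIRA_CREATED events
# within the last `days` days.
import datetime

def bus_factor(contrib, days=182):
    cutoff = (datetime.datetime.now() - datetime.timedelta(days=days)).strftime('%Y-%m-%d')
    created = contrib[(contrib.type == "JIRA_CREATED") & (contrib.date > cutoff)]
    counts = created.groupby("author")["identifier"].count().sort_values(ascending=False)
    # Authors whose cumulative count stays below half of the total, plus one.
    return int((counts.cumsum() < counts.sum() / 2).sum() + 1)

print("Bus factor, last half year:", bus_factor(contributions))
print("Bus factor, last year:", bus_factor(contributions, days=365))
###Output
_____no_output_____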
###Markdown
Dev power (all contributions expressed as a ratio of the top contributor's count)
###Code
prcreated["power"] = prcreated["identifier"] / prcreated.reset_index()["identifier"][0]
prcreated.power.sum()
labels = np.asarray(prcreated["author"])
for i in range(pn,len(labels)):
labels[i] = ""
plt.figure(figsize=(10,8))
plt.pie(prcreated["ratio"], labels=labels, startangle=90)
plt.show()
###Output
_____no_output_____
###Markdown
People plotted by created JIRAs vs. commented JIRAs
###Code
created = contributions[contributions.type == "JIRA_CREATED"][["author","identifier"]].groupby("author").count().rename(columns={"identifier":"created"})
helped = contributions[contributions.author != contributions.owner][["identifier"]].groupby([contributions.author,contributions.identifier]).sum().rename(columns={"identifier":"helped"}) \
.reset_index().groupby(["author"]).count().drop(columns=["identifier"])
merged = pd.merge(helped,created, left_index=True, right_index=True)
# merged.index = merged.index.rename("githubname")
# merged = merged.join(github_apache_membership.set_index("githubname")).reset_index()
# merged.role = merged.role.fillna("?")
# merged["rc"] = merged.role.map({"pmc":"red","committer":"yellow","?":"blue"})
merged = merged.reset_index()
source = merged.reset_index()
plt.figure(figsize=(20,15))
plt.scatter(source.created,source.helped, s= 100)
plt.xlabel('Created Issue')
plt.ylabel('Commented Issue')
plt.title('Issue created / commented ratio')
plt.grid()
for index, row in source.iterrows():
plt.annotate(row["author"], (row["created"], row["helped"]), xytext=(8,-2), textcoords='offset points')
plt.semilogx()
plt.semilogy()
plt.show()
###Output
_____no_output_____
###Markdown
The same graph, focusing on the last 6 months Only contributors with both created and commented issues are visible
###Code
import datetime
filtered = contributions[contributions.date > (datetime.datetime.now() - datetime.timedelta(days=182)).strftime('%Y-%m-%d')]
created = filtered[filtered.type == "JIRA_CREATED"][["author","identifier"]].groupby("author").count().rename(columns={"identifier":"created"})
helped = filtered[filtered.author != filtered.owner][["identifier"]].groupby([filtered.author,filtered.identifier]).sum().rename(columns={"identifier":"helped"}) \
.reset_index().groupby(["author"]).count().drop(columns=["identifier"])
merged = pd.merge(helped,created,left_index=True, right_index=True)
merged = merged.reset_index()
source = merged.reset_index()
plt.figure(figsize=(20,15))
plt.scatter(source.created,source.helped, s= 100)
plt.xlabel('Created Issue')
plt.ylabel('Commented Issue')
plt.title('Issue created / commented ratio')
plt.grid()
for index, row in source.iterrows():
plt.annotate(row["author"], (row["created"], row["helped"]), xytext=(8,-2), textcoords='offset points')
plt.semilogx()
plt.semilogy()
plt.show()
###Output
_____no_output_____
###Markdown
Number of individual contributors per monthNumber of different Jira users who either created an issue or commented on an issue
###Code
m = contributions[["identifier"]].groupby([contributions.date.dt.strftime('%Y').rename("year"),contributions.date.dt.strftime('%m').rename("month"),contributions.author]).count() \
.reset_index()
result = m[["author"]].groupby([m.year,m.month]).count().sort_values(["year","month"]).reset_index()
result = result.pivot(index="year",columns="month",values="author").fillna(0)
util.create_mosaic(result, "Blues")
plt.title("Number of individual contributors per month")
###Output
_____no_output_____
###Markdown
JIRA activity heatmap
###Code
days = contributions.date.dt.strftime('%A')
hours = contributions.date.dt.strftime('%H')
a = contributions[["date"]].groupby([days,hours]).count()
a.columns = ["count"]
a.index.names = ["day","hour"]
a = a.reset_index()
a = a.pivot(index="day",columns="hour",values="count")
a = a.reindex(["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"])
plt.figure(figsize=(14,11))
im = plt.imshow(a, cmap="Greens")
plt.yticks(range(len(a)),a.index.values)
plt.title("Jira activities by UTC time")
plt.colorbar(im, fraction=0.012)
plt.show()
###Output
_____no_output_____ |
test_dash_jupyter.ipynb | ###Markdown
Example of jupyterlab-dash extension
###Code
# Imports
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.graph_objs as go
# Load and preprocess data
df = pd.read_csv(
'https://gist.githubusercontent.com/chriddyp/'
'cb5392c35661370d95f300086accea51/raw/'
'8e0768211f6b747c0db42a9ce9a0937dafcbd8b2/'
'indicators.csv')
available_indicators = df['Indicator Name'].unique()
df.head()
# Build AppViewer
from jupyterlab_dash import AppViewer
viewer = AppViewer()
# Build App
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div([
html.Div([
html.Div([
dcc.Dropdown(
id='xaxis-column',
options=[{'label': i, 'value': i} for i in available_indicators],
value='CO2 emissions (metric tons per capita)'
),
dcc.RadioItems(
id='xaxis-type',
options=[{'label': i, 'value': i} for i in ['Linear', 'Log']],
value='Linear',
labelStyle={'display': 'inline-block'}
)
],
style={'width': '48%', 'display': 'inline-block'}),
html.Div([
dcc.Dropdown(
id='yaxis-column',
options=[{'label': i, 'value': i} for i in available_indicators],
value='Life expectancy at birth, total (years)'
),
dcc.RadioItems(
id='yaxis-type',
options=[{'label': i, 'value': i} for i in ['Linear', 'Log']],
value='Linear',
labelStyle={'display': 'inline-block'}
)
],style={'width': '48%', 'float': 'right', 'display': 'inline-block'})
]),
dcc.Graph(id='indicator-graphic'),
dcc.Slider(
id='year--slider',
min=df['Year'].min(),
max=df['Year'].max(),
value=df['Year'].max(),
marks={str(year): str(year) for year in df['Year'].unique()}
)
])
# Callbacks
@app.callback(
dash.dependencies.Output('indicator-graphic', 'figure'),
[dash.dependencies.Input('xaxis-column', 'value'),
dash.dependencies.Input('yaxis-column', 'value'),
dash.dependencies.Input('xaxis-type', 'value'),
dash.dependencies.Input('yaxis-type', 'value'),
dash.dependencies.Input('year--slider', 'value')])
def update_graph(xaxis_column_name, yaxis_column_name,
xaxis_type, yaxis_type,
year_value):
dff = df[df['Year'] == year_value]
return {
'data': [go.Scatter(
x=dff[dff['Indicator Name'] == xaxis_column_name]['Value'],
y=dff[dff['Indicator Name'] == yaxis_column_name]['Value'],
text=dff[dff['Indicator Name'] == yaxis_column_name]['Country Name'],
mode='markers',
marker={
'size': 15,
'opacity': 1,
'color': 'blue',
'line': {'width': 2}
}
)],
'layout': go.Layout(
xaxis={
'title': xaxis_column_name,
'type': 'linear' if xaxis_type == 'Linear' else 'log'
},
yaxis={
'title': yaxis_column_name,
'type': 'linear' if yaxis_type == 'Linear' else 'log'
},
margin={'l': 40, 'b': 40, 't': 10, 'r': 0},
hovermode='closest',
)
}
viewer.show(app)
###Output
_____no_output_____ |
english/python/regexp_in_python.ipynb | ###Markdown
Regular expressions in Python Regular expression (regexp) is a powerful tool to handle diverse text patterns in text processing. Several text editors (e.g Notepad++, vi) and programming languages have regexp functionality.To define text patterns, a special meaning is assigned to some characters. You can find below a very short and incomplete list of special regexp characters:|character(s)|explanation ||------------|-----------------------------------------------------------------||. (dot) | any character except new line ||^ |beginning of the line ||$ |end of the line ||[abc] |any character from the set in the brackets ||[^abc] |none of the characters in the set in brackets ||[a-z] |any character from the range in brackets (inclusive) ||[^a-z] |none of the characters in the range in brackets ||( ) |make group in pattern ||{min,max} | repetition of the previous character or group, max part is optional||p1 \| p2 |p1 pattern or p2 pattern ||p\* |any number of repetition of p pattern, including zero equivalent to p{0,}||p+ |one or more repetition of p pattern, equivalent to p{1,} ||p? |zero or one repetition of p pattern, equivalent to p{0,1} ||\ |escape the special meaning of the next character (e.g. \. the dot character, not any character)| Python has a special package named *re* to handle regular expressions. To use it, it is necessary to import it, as follows:
###Code
import re
###Output
_____no_output_____
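###Markdown
Before working with real text, here are a few quick checks of the metacharacters listed in the table above. *re.fullmatch* returns a match object when the whole string fits the pattern, otherwise *None*.
###Code
# Small demonstrations of the metacharacters from the table above.
print(re.fullmatch(r'gr[ae]y', 'grey'))          # character set: grey or gray
print(re.fullmatch(r'colou?r', 'color'))         # ? -> optional character
print(re.fullmatch(r'(ab){2,3}', 'ababab'))      # group repeated 2 to 3 times
print(re.fullmatch(r'[0-9]+\.[0-9]+', '3.14'))   # escaped dot matches a literal dot
###Output
_____no_output_____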
###Markdown
Let's make some examples using regexps Pattern in string
###Code
text = """Python is an interpreted high-level general-purpose programming language.
Its design philosophy emphasizes code readability with its use of significant indentation.
Its language constructs as well as its object-oriented approach aim to help programmers write clear,
logical code for small and large-scale projects.""" # citation from Wikipedia
###Output
_____no_output_____
###Markdown
*re.match* searches for the pattern only at the beginning of the string. It returns a match object, or *None* if the pattern is not found.
###Code
re.match("Python", text) # is Python at the beginning of the text?
if re.match("[Pp]ython", text): # is Python or python at the beginning of the text?
print('text starts with Python')
result = re.match("[Pp]ython", text)
result.span(), result.group(0)
###Output
_____no_output_____
###Markdown
*re.search* searches for the first occurrence of the pattern anywhere in the string.
###Code
re.search('prog', text)
re.search('levels?', text) # optional 's' after level
re.findall('pro', text)
###Output
_____no_output_____
###Markdown
The *r* prefix (raw string) is often used for regular expressions, so backslashes are not interpreted as Python escape sequences (see the short demonstration after the next cell)
###Code
re.findall(r'[ \t\r\n]a[a-zA-Z0-9_][ \t\r\n]', text) # two letter words starting with letter 'a'
re.findall(r'\sa\w\s', text) # the same as above but shorter
re.findall(r'\sa\w*\s', text) # words starting with 'a'
###Output
_____no_output_____
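###Markdown
A short demonstration of why the raw-string prefix matters:
###Code
# In a raw string the backslash is kept literally, so regexp escapes such as
# \s reach the re module unchanged.
print(len('\n'), len(r'\n'))     # 1 character vs 2 characters
print(re.findall('\\s', 'a b'))  # works, but needs doubled backslashes
print(re.findall(r'\s', 'a b'))  # same pattern, easier to read
###Output
_____no_output_____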
###Markdown
We can use the regexp match/search functions to validate input data. In the example below we check whether a string is a valid number.
###Code
int_numbers = ('12356', '1ac', 'twelve', '23.65', '0', '-768')
for int_number in int_numbers:
if re.match(r'[+-]?(0|[1-9][0-9]*)$', int_number):
print(f'{int_number} is an integer number')
float_numbers =('12', '0.0', '-43.56', '1.76e-1', '1.1.1', '00.289')
for float_number in float_numbers:
if re.match(r'[+-]?(0|[1-9][0-9]*)(\.[0-9]*)?([eE][+-]?[0-9]+)?$', float_number):
print(f'{float_number} is a float number')
###Output
12 is a float number
0.0 is a float number
-43.56 is a float number
1.76e-1 is a float number
###Markdown
There is another approach to check numerical values without regexp, as follows:
###Code
for float_number in float_numbers:
try:
float(float_number) # try to convert to float number
except ValueError:
continue # can't convert skip it
print(f'{float_number} is a float number')
###Output
12 is a float number
0.0 is a float number
-43.56 is a float number
1.76e-1 is a float number
00.289 is a float number
###Markdown
Email address validation: We'll use a precompiled regular expression (*re.compile*). This is faster than re-evaluating the same regexp pattern several times:
###Code
email = re.compile(r'^[a-zA-Z0-9.!#$%&\'*+/=?^_`{|}~-]+@[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$')
addresses = ['a.b@c', '[email protected]', 'plainaddress', '#@%^%#$@#$@#.com', '@example.com', 'Joe Smith <[email protected]>',
'email.example.com', 'email@[email protected]', '[email protected]']
valid_addresses = [addr for addr in addresses if email.search(addr)]
print('valid email addresses:\n', valid_addresses)
invalid_addresses = [addr for addr in addresses if not email.search(addr)]
print('invalid email addresses:\n', invalid_addresses)
###Output
valid email addresses:
['a.b@c', '[email protected]', '[email protected]']
invalid email addresses:
['plainaddress', '#@%^%#$@#$@#.com', '@example.com', 'Joe Smith <[email protected]>', 'email.example.com', 'email@[email protected]']
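###Markdown
A rough micro-benchmark sketch of that claim, using the standard *timeit* module. Absolute numbers depend on your machine, and *re* also caches recently used patterns internally, so the difference is usually modest.
###Code
# Compare a precompiled pattern against repeated re.search calls.
import timeit

setup = "import re; pattern = re.compile(r'[a-z]+@[a-z]+\\.[a-z]+'); text = '[email protected]'"
print("precompiled:", timeit.timeit("pattern.search(text)", setup=setup, number=100000))
print("re.search:  ", timeit.timeit("re.search(r'[a-z]+@[a-z]+\\.[a-z]+', text)", setup=setup, number=100000))
###Output
_____no_output_____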
###Markdown
Other functions *re.sub* replaces occurrences of a regexp with a given text in a string.
###Code
print(re.sub(r' +', ' ', 'Text with  several   unnecessary    spaces')) # truncate adjacent spaces to a single space
print(re.sub(r'[ \t,;]', ',', 'first,second;third fourth fifth')) # unify separators
###Output
Text with several unneccesary spaces
first,second,third,fourth,fifth
###Markdown
*re.split* splits a text into a list of parts, where separators are given by regexp.
###Code
words = re.split(r'[, \.\t\r\n]', text) # word separators are space, dot, tabulator and EOL
words
###Output
_____no_output_____
###Markdown
Please note that the previous result contains some empty words where two or more separators are adjacent. Let's correct it:
###Code
words = re.split(r'[, \.\t\r\n]+', text) # join adjacent separators
words
###Output
_____no_output_____
###Markdown
Why is there an empty word at the end? Because the text ends with a separator (a trailing newline), so the final split produces an empty string. Complex example Let's make a complex example: find the most frequent four-letter word starting with "s" in Kipling's The Jungle Book.
###Code
import urllib.request
url = 'https://www.gutenberg.org/files/236/236-0.txt'
words = {}
with urllib.request.urlopen(url) as file:
for line in file:
ws = re.split(r'[, \.\t\r\n]+', line.decode('utf8'))
for w in ws:
w = w.lower()
if re.match('[sS][a-z]{3}$', w):
if w in words:
words[w] += 1
else:
words[w] = 1
print(f'{len(words.keys())} different four letter words starting with "s"')
m = max(words, key=words.get)
print(f'{m}: {words[m]}')
###Output
751 different four letter words starting with "s"
said: 426
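###Markdown
The same counting can be written more compactly with *collections.Counter*. This variant downloads the book again and counts words of exactly four letters starting with "s".
###Code
# Compact variant of the counting loop above, using collections.Counter.
import urllib.request
from collections import Counter

with urllib.request.urlopen(url) as file:
    book = file.read().decode('utf8')

four_letter = Counter(w for w in re.split(r'[, \.\t\r\n]+', book.lower())
                      if re.match('s[a-z]{3}$', w))
print(four_letter.most_common(5))
###Output
_____no_output_____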
|
Model backlog/Train/211-tweet-train-5fold-roberta-custom-loss-hidden11.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
from scripts_step_lr_schedulers import *
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load data
###Code
# Unzip files
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64-clean/fold_1.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64-clean/fold_2.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64-clean/fold_3.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64-clean/fold_4.tar.gz
!tar -xf /kaggle/input/tweet-dataset-5fold-roberta-64-clean/fold_5.tar.gz
database_base_path = '/kaggle/input/tweet-dataset-5fold-roberta-64-clean/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
print(f'Training set samples: {len(k_fold)}')
display(k_fold.head())
###Output
Training set samples: 26882
###Markdown
Model parameters
###Code
vocab_path = database_base_path + 'vocab.json'
merges_path = database_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
config = {
'MAX_LEN': 64,
'BATCH_SIZE': 32,
'EPOCHS': 7,
'LEARNING_RATE': 3e-5,
'ES_PATIENCE': 2,
'N_FOLDS': 5,
'question_size': 4,
'base_model_path': base_path + 'roberta-base-tf_model.h5',
'config_path': base_path + 'roberta-base-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
tokenizer.save('./')
###Output
_____no_output_____
###Markdown
Learning rate schedule
###Code
lr_min = 1e-6
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
train_size = len(k_fold[k_fold['fold_1'] == 'train'])
step_size = train_size // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
warmup_steps = total_steps * 0.1
decay = .9985
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 1e-07 to 2.96e-05 to 1e-06
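###Markdown
The schedule helper itself is imported from `scripts_step_lr_schedulers` and is not shown in this notebook. Purely as an illustration -- an assumption, not the actual implementation -- a linear-warmup-plus-exponential-decay schedule consistent with the parameters above could look like this:
###Code
# NOT the imported implementation -- only an assumed sketch for illustration:
# linear warmup from lr_start to lr_max over warmup_steps, followed by an
# exponential decay that is floored at lr_min.
def lr_sketch(step):
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * (step / warmup_steps)
    return max(lr_min, lr_max * decay ** (step - warmup_steps))

print([round(lr_sketch(s), 8) for s in (0, int(warmup_steps), total_steps - 1)])
###Output
_____no_output_____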
###Markdown
Model
###Code
from tensorflow.keras import backend
def Custom_loss(label_smoothing=0., weight=0.):
def custom_loss(y_true, y_pred):
cce = losses.CategoricalCrossentropy(label_smoothing=label_smoothing)
y_true_pos = backend.cast(tf.math.argmax(y_true, axis=-1), 'float32')
y_pred_pos = backend.cast(tf.math.argmax(y_pred, axis=-1), 'float32')
loss = cce(y_true, y_pred)
gap = tf.math.reduce_mean(tf.math.sqrt(tf.math.pow(tf.math.subtract(y_true_pos, y_pred_pos), 2)))
gap = tf.math.multiply(gap, weight)
loss = tf.math.add(loss, gap)
return loss
return custom_loss
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h11 = hidden_states[-2]
x = layers.Dropout(.1)(h11)
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(x)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(x)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation('softmax', name='y_start')(start_logits)
end_probs = layers.Activation('softmax', name='y_end')(end_logits)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_probs, end_probs])
return model
###Output
_____no_output_____
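###Markdown
A quick sanity check of the custom loss on a tiny hand-made example (illustrative values only): the true start index is 2 and the predicted distribution peaks at index 3, so the result is the label-smoothed cross entropy plus weight * |2 - 3|.
###Code
# Tiny sanity check of Custom_loss with hand-made tensors.
y_true_demo = tf.constant([[0., 0., 1., 0.]])
y_pred_demo = tf.constant([[0.05, 0.05, 0.30, 0.60]])
demo_loss = Custom_loss(label_smoothing=0.2, weight=0.5)(y_true_demo, y_pred_demo)
print(float(demo_loss))
###Output
_____no_output_____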
###Markdown
Train
###Code
AUTO = tf.data.experimental.AUTOTUNE
strategy = tf.distribute.get_strategy()
k_fold_best = k_fold.copy()
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid.shape[1] // config['BATCH_SIZE']
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay))
model.compile(optimizer, loss={'y_start': Custom_loss(label_smoothing=0.2, weight=0.5),
'y_end': Custom_loss(label_smoothing=0.2, weight=0.5)})
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'],
restore_best_weights=True, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min',
save_best_only=True, save_weights_only=True)
history = model.fit(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED),
validation_data=(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=False, seed=SEED)),
epochs=config['EPOCHS'],
steps_per_epoch=step_size,
callbacks=[checkpoint, es],
verbose=2).history
history_list.append(history)
# Make predictions (best model)
# model.load_weights(model_path)
predict_eval_df(k_fold_best, model, x_train, x_valid, get_test_dataset, decode, n_fold, tokenizer, config, config['question_size'])
### Delete data dir
shutil.rmtree(base_data_path)
###Output
FOLD: 1
Train for 672 steps, validate for 168 steps
Epoch 1/7
672/672 - 176s - loss: 8.6138 - y_start_loss: 4.1449 - y_end_loss: 4.4689 - val_loss: 5.6411 - val_y_start_loss: 2.8245 - val_y_end_loss: 2.8166
Epoch 2/7
672/672 - 164s - loss: 5.4959 - y_start_loss: 2.7081 - y_end_loss: 2.7878 - val_loss: 5.5384 - val_y_start_loss: 2.7517 - val_y_end_loss: 2.7868
Epoch 3/7
672/672 - 163s - loss: 5.2219 - y_start_loss: 2.5732 - y_end_loss: 2.6487 - val_loss: 5.5100 - val_y_start_loss: 2.7426 - val_y_end_loss: 2.7674
Epoch 4/7
672/672 - 163s - loss: 5.0873 - y_start_loss: 2.5190 - y_end_loss: 2.5684 - val_loss: 5.4866 - val_y_start_loss: 2.7335 - val_y_end_loss: 2.7532
Epoch 5/7
672/672 - 162s - loss: 4.9989 - y_start_loss: 2.4747 - y_end_loss: 2.5241 - val_loss: 5.4962 - val_y_start_loss: 2.7346 - val_y_end_loss: 2.7616
Epoch 6/7
672/672 - 163s - loss: 5.0066 - y_start_loss: 2.4829 - y_end_loss: 2.5238 - val_loss: 5.4861 - val_y_start_loss: 2.7301 - val_y_end_loss: 2.7560
Epoch 7/7
672/672 - 162s - loss: 4.9358 - y_start_loss: 2.4375 - y_end_loss: 2.4983 - val_loss: 5.4984 - val_y_start_loss: 2.7369 - val_y_end_loss: 2.7615
FOLD: 2
Train for 672 steps, validate for 168 steps
Epoch 1/7
672/672 - 176s - loss: 7.9816 - y_start_loss: 3.9085 - y_end_loss: 4.0731 - val_loss: 5.5914 - val_y_start_loss: 2.7878 - val_y_end_loss: 2.8036
Epoch 2/7
672/672 - 163s - loss: 5.4673 - y_start_loss: 2.7093 - y_end_loss: 2.7580 - val_loss: 5.4438 - val_y_start_loss: 2.6935 - val_y_end_loss: 2.7503
Epoch 3/7
672/672 - 163s - loss: 5.1808 - y_start_loss: 2.5677 - y_end_loss: 2.6131 - val_loss: 5.4299 - val_y_start_loss: 2.6942 - val_y_end_loss: 2.7357
Epoch 4/7
672/672 - 163s - loss: 5.0665 - y_start_loss: 2.5216 - y_end_loss: 2.5450 - val_loss: 5.4288 - val_y_start_loss: 2.6823 - val_y_end_loss: 2.7465
Epoch 5/7
672/672 - 163s - loss: 5.0010 - y_start_loss: 2.4848 - y_end_loss: 2.5162 - val_loss: 5.4257 - val_y_start_loss: 2.6783 - val_y_end_loss: 2.7474
Epoch 6/7
672/672 - 161s - loss: 4.9706 - y_start_loss: 2.4721 - y_end_loss: 2.4984 - val_loss: 5.4302 - val_y_start_loss: 2.6918 - val_y_end_loss: 2.7384
Epoch 7/7
Restoring model weights from the end of the best epoch.
672/672 - 162s - loss: 4.9411 - y_start_loss: 2.4696 - y_end_loss: 2.4715 - val_loss: 5.4310 - val_y_start_loss: 2.6877 - val_y_end_loss: 2.7433
Epoch 00007: early stopping
FOLD: 3
Train for 672 steps, validate for 168 steps
Epoch 1/7
672/672 - 174s - loss: 8.2118 - y_start_loss: 3.9652 - y_end_loss: 4.2466 - val_loss: 5.7054 - val_y_start_loss: 2.8084 - val_y_end_loss: 2.8970
Epoch 2/7
672/672 - 163s - loss: 5.5526 - y_start_loss: 2.7541 - y_end_loss: 2.7985 - val_loss: 5.4351 - val_y_start_loss: 2.6918 - val_y_end_loss: 2.7433
Epoch 3/7
672/672 - 161s - loss: 5.2657 - y_start_loss: 2.6218 - y_end_loss: 2.6440 - val_loss: 5.4591 - val_y_start_loss: 2.6983 - val_y_end_loss: 2.7608
Epoch 4/7
Restoring model weights from the end of the best epoch.
672/672 - 161s - loss: 5.1429 - y_start_loss: 2.5681 - y_end_loss: 2.5748 - val_loss: 5.4531 - val_y_start_loss: 2.6966 - val_y_end_loss: 2.7565
Epoch 00004: early stopping
FOLD: 4
Train for 672 steps, validate for 168 steps
Epoch 1/7
672/672 - 175s - loss: 8.5882 - y_start_loss: 4.1089 - y_end_loss: 4.4794 - val_loss: 5.7028 - val_y_start_loss: 2.7472 - val_y_end_loss: 2.9556
Epoch 2/7
672/672 - 164s - loss: 5.6883 - y_start_loss: 2.7856 - y_end_loss: 2.9027 - val_loss: 5.5430 - val_y_start_loss: 2.6895 - val_y_end_loss: 2.8536
Epoch 3/7
672/672 - 163s - loss: 5.3747 - y_start_loss: 2.6435 - y_end_loss: 2.7311 - val_loss: 5.4810 - val_y_start_loss: 2.6667 - val_y_end_loss: 2.8142
Epoch 4/7
672/672 - 162s - loss: 5.2585 - y_start_loss: 2.5980 - y_end_loss: 2.6605 - val_loss: 5.4996 - val_y_start_loss: 2.6699 - val_y_end_loss: 2.8297
Epoch 5/7
Restoring model weights from the end of the best epoch.
672/672 - 162s - loss: 5.1602 - y_start_loss: 2.5488 - y_end_loss: 2.6114 - val_loss: 5.4858 - val_y_start_loss: 2.6700 - val_y_end_loss: 2.8158
Epoch 00005: early stopping
FOLD: 5
Train for 672 steps, validate for 168 steps
Epoch 1/7
672/672 - 174s - loss: 8.3366 - y_start_loss: 4.1508 - y_end_loss: 4.1858 - val_loss: 5.6845 - val_y_start_loss: 2.7736 - val_y_end_loss: 2.9109
Epoch 2/7
672/672 - 163s - loss: 5.5571 - y_start_loss: 2.7535 - y_end_loss: 2.8037 - val_loss: 5.4717 - val_y_start_loss: 2.6733 - val_y_end_loss: 2.7984
Epoch 3/7
672/672 - 163s - loss: 5.2443 - y_start_loss: 2.5946 - y_end_loss: 2.6497 - val_loss: 5.4456 - val_y_start_loss: 2.6878 - val_y_end_loss: 2.7578
Epoch 4/7
672/672 - 163s - loss: 5.1256 - y_start_loss: 2.5369 - y_end_loss: 2.5887 - val_loss: 5.4240 - val_y_start_loss: 2.6692 - val_y_end_loss: 2.7548
Epoch 5/7
672/672 - 162s - loss: 5.0615 - y_start_loss: 2.5191 - y_end_loss: 2.5424 - val_loss: 5.4129 - val_y_start_loss: 2.6553 - val_y_end_loss: 2.7576
Epoch 6/7
672/672 - 161s - loss: 5.0417 - y_start_loss: 2.5021 - y_end_loss: 2.5396 - val_loss: 5.4216 - val_y_start_loss: 2.6672 - val_y_end_loss: 2.7544
Epoch 7/7
Restoring model weights from the end of the best epoch.
672/672 - 161s - loss: 5.0208 - y_start_loss: 2.4944 - y_end_loss: 2.5263 - val_loss: 5.4157 - val_y_start_loss: 2.6506 - val_y_end_loss: 2.7651
Epoch 00007: early stopping
###Markdown
Model loss graph
###Code
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model evaluation (best model)
###Code
display(evaluate_model_kfold(k_fold_best, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Model evaluation (last model)
###Code
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15))
###Output
_____no_output_____ |
P2/Investigate_a_Dataset Project.ipynb | ###Markdown
Analysis of a TMDb Movie Dataset Table of ContentsIntroductionData WranglingExploratory Data AnalysisConclusions IntroductionFor this analysis I will be looking at the TMDb movie data. This dataset contains information on more than 10 000 movies. For example, information is provided on the popularity of each movie, the budget, genre, etc.I will be investigating the relationship between popularity and genre, production company, and director. I will also look at the relationship between revenue, budget, and average vote.By doing this I aim to answer the following questions:1. Which genre of movie is most popular?2. Which production company produces the most popular movies?3. Which director is used in the most popular movies?4. Is revenue directly proportional to budget?5. Is revenue directly proportional to average vote?In the code cell below I have imported the libraries and packages that I will be using throughout this analysis.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import csv
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
###Output
_____no_output_____
###Markdown
Data WranglingFirst off, I will load and take a look at the data available. I will look at the columns to see what information is provided and check the format in which data is presented in each column to see if any conversions or changes are necessary.Overall I will inspect the cleanliness of the data, and identify where I want to make changes, trim, and clean the data to make it easier to work with. General Properties First I will load the data to a dataframe and use methods to obtain more information on the dataset, such as its size and the columns it contains.
###Code
df = pd.read_csv('tmdb-movies.csv')
df.head(50)
df.info()
df.describe()
df.nunique()
###Output
_____no_output_____
###Markdown
The above code cells were executed to get a better idea of and feel for the data. Looking at the first few lines allowed me to see what the row inputs look like as well as what the column headings are, in other words what information is given about each movie in the dataset. The number of movies can also be seen, as well as the number of NULL entries in each column by calling ".info". There are 10866 movies in this dataset. Mean, max and other qualities are found from ".describe", providing important statistical insight into the spread of the data. From this brief look at the data, we can see that the following columns have missing values:* imdb_id* cast* homepage* director* tagline* keywords* overview* genres* production_companiesI will start by removing from my dataframe all columns that I do not need in my analysis.In the process some of the columns containing NULL values will be removed. I will then have to get rid of the remaining NULL values, either by replacing them, or by removing rows containing NULLs.I will use budget_adj and revenue_adj rather than budget and revenue, since budget_adj and revenue_adj have been adjusted to show the budget and revenue in terms of 2010 dollars. I believe having a common base makes using this data better for comparisons. I will convert the values in budget_adj and revenue_adj from scientific notation to truncated decimal form, to improve clarity when viewing them. Data Cleaning and Trimming: Changes Required to Arrive at Final Dataset that was Analysed Having looked at the data, I can start making changes that will help me end up with a dataset I can easily analyse.To clean and trim the data I decided to do the following:1. Remove all columns that I do not need in my analysis.2. Remove NULLs from data3. Remove duplicates4. Change format of data in revenue_adj and budget_adj from scientific notation to decimal format5. Extract first genre from genre column.6. Extract first production company from production company column7. Extract first director from director column8. Remove outliers from popularity column
###Code
remove_cols=['id','imdb_id','budget','revenue','cast','homepage','tagline','keywords','overview','release_date','release_year','vote_count']
df1 = df.drop(remove_cols,axis=1)
df1.head(5)
df1.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10866 entries, 0 to 10865
Data columns (total 9 columns):
popularity 10866 non-null float64
original_title 10866 non-null object
director 10822 non-null object
runtime 10866 non-null int64
genres 10843 non-null object
production_companies 9836 non-null object
vote_average 10866 non-null float64
budget_adj 10866 non-null float64
revenue_adj 10866 non-null float64
dtypes: float64(4), int64(1), object(4)
memory usage: 764.1+ KB
###Markdown
Above I removed all columns that I do not need in my analysis. The columns used in this analysis are:* popularity * original_title* director (contains NULLS)* runtime* genres (contains NULLS)* production_companies (contains NULLS)* budget_adj* revenue_adj* vote_averageThree of these columns contain some NULL entries. Since these are all non-numerical entries, no mathematical imputation can be done. Therefore I will remove all rows containing NULL values.
###Code
sum(df1.apply(lambda x: sum(x.isnull().values), axis = 1)>0) #counts number of rows containing 1 or more NULL entries
df1.dropna(how='any', inplace=True);
df1.info()
sum(df1.apply(lambda x: sum(x.isnull().values), axis = 1)>0) #counts number of rows containing 1 or more NULL entries
###Output
_____no_output_____
###Markdown
I checked how many rows contained one or more NULL entries - there were 1059. I then removed all these rows, checked again to see if any rows contained NULL entries, and found that all had been removed successfully. Next I will see if there are any duplicated rows.
###Code
sum(df1.duplicated())
df1[df1.duplicated()==True]
df1[df1['original_title']=="TEKKEN"]
df1.drop_duplicates(inplace=True)
sum(df1.duplicated())
###Output
_____no_output_____
###Markdown
One duplicated row was found for the movie "TEKKEN". The duplicated row was removed. The displayed format of the values in revenue_adj and budget_adj was looked at next.
###Code
df1[['budget_adj','revenue_adj']].head(5)
pd.options.display.float_format = '{:20,.2f}'.format
df1[['budget_adj','revenue_adj']].head(5)
###Output
_____no_output_____
###Markdown
I changed budget_adj and revenue_adj from scientific notation to decimal format, as can be seen in the first few rows of these columns above.
###Code
df1['genres'].unique()
df1['genres'].nunique()
df1['production_companies'].unique()
df1['production_companies'].nunique()
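# Keep only the first pipe-separated value in each multi-valued column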
df1['genres1'] = df1['genres'].apply(lambda x:x.split('|')[0])
df1['production_company1'] = df1['production_companies'].apply(lambda x:x.split('|')[0])
df1['director1'] = df1['director'].apply(lambda x:x.split('|')[0])
###Output
_____no_output_____
###Markdown
As can be seen above, the "genres", "production_companies" and "director" columns have many unique entries. I have decided to only look at the first entry in each row, assuming that the first entry is the most important and dominant classification of the movie's genre, and the main director and production company involved in making the movie. Below I check to make sure that I managed to add three new columns to the dataset, providing the first movie genre, first production company and first director for each movie.
###Code
df1.info()
df1.head()
###Output
_____no_output_____
###Markdown
Next I want to determine the number of unique values in these new columns.
###Code
df1['genres1'].unique()
df1['genres1'].nunique()
df1['production_company1'].unique()
df1['production_company1'].nunique()
df1['director1'].unique()
df1['director1'].nunique()
###Output
_____no_output_____
###Markdown
As can be seen above, there are now fewer unique genres, production companies, and directors to work with. Next, I will use a histogram to take a look at the values in the popularity column. This should help me view the range and see if there are any outliers.
###Code
df1['popularity'].hist()
plt.xlabel("popularity rating")
plt.ylabel("frequency of popularity rating")
plt.title("Spread of popularity rating");
###Output
_____no_output_____
###Markdown
From this chart it seems that there are some outliers in the popularity column. Let's look at the spread:
###Code
df1['popularity'].describe()
###Output
_____no_output_____
###Markdown
From this one can see that the max value in this column is 32.99; however, most of the values (75%) are less than 1. This suggests that 32.99 is an outlier. Let's look at the 85% quantile and find the percentage of values in the popularity column that are less than or equal to 1.
###Code
df1['popularity'].quantile(.85)
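# Percentile rank of the value 1 within the popularity column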
stats.percentileofscore(df1['popularity'],1)
###Output
_____no_output_____
###Markdown
Based on the investigation above, one can see that approximately 82% of the inputs in the popularity column are less than or equal to 1. This seems to indicate that the values larger than 1 may be outliers resulting from errors in the data capturing process. What would make most sense would be if popularity scores always ranged between zero and one, i.e. popularity expressed as a proportion of 1 (100%). I will therefore remove all rows with popularity inputs higher than 1.
###Code
df1 = df1.query('popularity <=1')
df1.head()
df1.info()
df1.describe()
###Output
_____no_output_____
###Markdown
I have now removed all popularity ratings larger than 1. The three preceding tables describe the dataset that I am now left with. At this point, all changes I believed were necessary to help me analyse this data and answer my questions have been carried out. The data is now ready to be explored in more detail. Exploratory Data Analysis of the TMDb dataset Using the cleaned and trimmed dataset, I will now use statistics and visualizations to try to answer the questions posed at the beginning of this report. Determining the Most Popular Genre of Movie First I would like to see which genre is the most popular. I will create a new dataframe showing the popularity of each genre as well as the count, i.e. the number of times that genre occurs in my cleaned and trimmed dataset.
###Code
genres=df1.groupby('genres1')['revenue_adj'].count().reset_index()
genres.head(5)
genres1=df1.groupby('genres1')['popularity'].mean().reset_index()
genres1.head(5)
df_genres = genres1.merge(genres, how='outer', left_index=True, right_index=True)
df_genres.drop(['genres1_y'],axis = 1,inplace=True)
df_genres.head(5)
###Output
_____no_output_____
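###Markdown
The same genre summary can also be built in one step with a named aggregation, which avoids the merge on index. A minimal sketch, assuming `df1` from the cells above (the column names here are illustrative):
###Code
# One-step alternative: mean popularity and movie count per first genre
genre_summary = df1.groupby('genres1').agg(
    mean_popularity=('popularity', 'mean'),
    count=('popularity', 'size')).reset_index()
genre_summary.head()
###Output
_____no_output_____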
###Markdown
Having created this dataset, I want to rename the 'revenue_adj' column to 'count', which better describes it.
###Code
df_genres.rename(columns={'revenue_adj': 'count'},inplace=True)
df_genres.info()
df_genres.describe()
###Output
_____no_output_____
###Markdown
To get a reliable mean, I decided to only use rows where the count is greater than the 25% count quartile, which was found in the statistics above.
###Code
df_genres = df_genres[df_genres['count'] > 68]
df_genres.head(5)
###Output
_____no_output_____
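###Markdown
Rather than hard-coding the 68 cut-off, the threshold could also be derived programmatically from the 25% quartile. A small self-contained sketch of the pattern, using a toy dataframe rather than the TMDb data:
###Code
import pandas as pd

# Toy illustration: keep only groups whose count exceeds the 25th percentile of the counts
toy = pd.DataFrame({'genre': ['A', 'B', 'C', 'D', 'E'],
                    'count': [10, 68, 120, 300, 45]})
threshold = toy['count'].quantile(0.25)
print(threshold, toy[toy['count'] > threshold]['genre'].tolist())
###Output
_____no_output_____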
###Markdown
I will now display this data in the form of a bar graph to visually compare the popularity of different genres.
###Code
plt.figure(figsize=(15,10))
plt.title('Genre vs Popularity')
plt.xlabel('Genre')
plt.ylabel('Popularity')
x=df_genres['genres1_x']
y=df_genres['popularity']
plt.xticks(np.arange(len(x)), x, rotation='vertical')
plt.bar(x,y);
###Output
_____no_output_____
###Markdown
From the chart it seems that animation, thriller, and adventure are the most popular genres. Let's use code to extract the top 10 genres and see their exact mean popularity ratings.
###Code
df_genres.sort_values('popularity',ascending= False).head(10)
###Output
_____no_output_____
###Markdown
As seen on the graph, the four most popular genres, according to this analysis, are Animation, Thriller, Adventure, and Crime. Identifying the Production Company with the Most Popular Movies I would now like to investigate my second question and find the most popular production company. As before, I will first create a new dataframe containing the popularity and count of each production company. The count will indicate the number of movies in the dataset that featured the associated production company first in its list of production companies.
###Code
production=df1.groupby('production_company1')['revenue_adj'].count().reset_index()
production.head(5)
production1=df1.groupby('production_company1')['popularity'].mean().reset_index()
production1.head(5)
df_prod = production1.merge(production, how='outer', left_index=True, right_index=True)
df_prod.drop(['production_company1_y'],axis = 1,inplace=True)
df_prod.head(5)
###Output
_____no_output_____
###Markdown
I will rename 'revenue_adj' to 'count', and use the info and describe methods to look at this new dataframe.
###Code
df_prod.rename(columns={'revenue_adj': 'count'},inplace=True)
df_prod.info()
df_prod.describe()
###Output
_____no_output_____
###Markdown
Next, I will produce a histogram based on the values in count, in order to see how they are spread in this dataframe.
###Code
plt.ylim(1,100)
plt.xlim(0,100)
plt.hist(df_prod['count'])
plt.xlabel("count")
plt.ylabel("frequency of count")
plt.title("Spread of count");
###Output
_____no_output_____
###Markdown
Based on the spread observed in the chart above, I decided to look only at production companies that occur more than 20 times in the dataset, so as to get a more reliable average popularity. Next, I will remove all rows where count is less than or equal to 20 and illustrate the results on a bar chart.
###Code
df_prod = df_prod[df_prod['count'] > 20]
df_prod.head(5)
plt.figure(figsize=(15,10))
plt.title('Movie Production Company vs Popularity')
plt.xlabel('Production Company')
plt.ylabel('Popularity')
#plt.ylim(ymin= 0.9, ymax=1 )
x1=df_prod['production_company1_x']
y1=df_prod['popularity']
plt.xticks(np.arange(len(x1)), x1, rotation='vertical')
plt.bar(x1,y1);
df_prod.sort_values('popularity',ascending= False).head(10)
###Output
_____no_output_____
###Markdown
Based on all production companies that appear more than 20 times in the dataset, the bar graph and executed code show that the top three production companies associated with the most popular movies are Village Roadshow Pictures, Dreamworks SKG, and The Weinstein Company. Which director is used for the most popular movies? I would now like to determine which director is used in the most popular movies. For this analysis I will look only at the first director given for each movie, in cases where more than one director worked on a movie. Once more I will create a dataframe, this time showing the popularity and count for each director in my dataset.
###Code
df1.info()
director=df1.groupby('director1')['revenue_adj'].count().reset_index()
director.head(5)
director1=df1.groupby('director1')['popularity'].mean().reset_index()
director1.head(5)
df_direct = director1.merge(director, how='outer', left_index=True, right_index=True)
df_direct.drop(['director1_y'],axis = 1,inplace=True)
df_direct.head(5)
df_direct.rename(columns={'revenue_adj': 'count'},inplace=True)
###Output
_____no_output_____
###Markdown
Having created the new dataframe, I will first investigate the spread of count in this dataframe using a histogram.
###Code
plt.xlim(0,15)
plt.xlabel('number of movies for director')
plt.ylabel('frequency of number of movies for directors')
plt.title('Spread of number of movies for directors')
plt.hist(df_direct['count']);
df_direct.describe()
###Output
_____no_output_____
###Markdown
Based on the spread in the histogram above, I will only look at rows where the count is greater than 8, to get a reliable mean popularity. I will remove all rows with directors that appear 8 or fewer times in this particular dataframe. I will then create a bar graph to see the popularity of each director.
###Code
df_direct = df_direct[df_direct['count'] > 8]
df_direct.head(5)
plt.figure(figsize=(18,10))
plt.title('Director vs Popularity')
plt.xlabel('Director')
plt.ylabel('Popularity')
#plt.ylim(ymin= 0.9, ymax=1 )
x2=df_direct['director1_x']
y2=df_direct['popularity']
plt.xticks(np.arange(len(x2)), x2, rotation='vertical')
plt.bar(x2,y2);
df_direct.sort_values('popularity',ascending= False).head(10)
###Output
_____no_output_____
###Markdown
Based on the graph and executed code, the three directors most often used for popular movies are Lasse Hallström, Ron Howard, and Robert Rodriguez. The graph was used to get an indication of the relative popularity of each director, and code was used to see the exact mean popularity rating associated with the top directors. Determining the relationship between revenue and budget The question I would like to answer here is whether or not high budget movies result in high revenue movies. Is there a positive correlation between budget and revenue? I will look at the relationship between these two using a scatter plot. First, I will remove all rows in which the revenue or budget is 0, since realistically budget and revenue cannot be 0.
###Code
len(df[df['revenue_adj']==0])
len(df['revenue_adj'])
len(df[df['budget_adj']==0])
len(df['budget_adj'])
df2=df1.loc[(df1['revenue_adj']!=0)]
len(df2[df2['revenue_adj']== 0])
df2=df2.loc[(df2['budget_adj']!=0)]
len(df2[df2['budget_adj']< 0])
###Output
_____no_output_____
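###Markdown
The two filtering steps above can also be expressed as a single query. A minimal equivalent sketch, assuming `df1` from the cells above:
###Code
# Drop rows where either adjusted budget or adjusted revenue is zero, in one step
df2_alt = df1.query('revenue_adj > 0 and budget_adj > 0')
print(len(df2_alt))
###Output
_____no_output_____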
###Markdown
I have now removed all 0 entries and will plot the graph to see if there is any visible correlation between budget and revenue.
###Code
y4=df2['revenue_adj']
x4=df2['budget_adj']
df2['revenue_adj'].describe()
df2['budget_adj'].describe()
plt.figure(figsize=(15,10))
plt.title('Budget vs Revenue ')
plt.xlabel('Budget ($)')
plt.ylabel('Revenue ($)')
plt.xlim(0,200000000)
plt.ylim(0,1000000000)
plt.scatter(x4,y4);
###Output
_____no_output_____
###Markdown
From the graph above there doesn't seem to be any visible correlation between revenue and budget. Most of the data lies between 0 and 75,000,000 on the x-axis (budget) and between 0 and 200,000,000 on the y-axis (revenue). Next I will look at a smaller section of this data, in the region where most of the data is located. By doing this I hope to remove some outliers and be able to identify a trend, if one is present.
###Code
plt.figure(figsize=(15,10))
plt.title('Budget vs Revenue - closer look')
plt.xlabel('Budget')
plt.ylabel('Revenue')
plt.xlim(0,75000000)
plt.ylim(0,200000000)
plt.scatter(x4,y4);
###Output
_____no_output_____
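###Markdown
As a numeric complement to the visual check, the linear association between the two columns can also be quantified directly. A minimal sketch, assuming `df2` is the zero-filtered dataframe built above:
###Code
# Pearson correlation coefficient between adjusted budget and adjusted revenue
print(df2['budget_adj'].corr(df2['revenue_adj']))
###Output
_____no_output_____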
###Markdown
The graph above also shows no clear correlation between budget and revenue. I will now try looking only at the highest revenue and highest budget movies to see if that provides any further insight. I will create two dataframes, one containing the 10 highest revenue movies and their corresponding budgets, and the other containing the 10 highest budget movies and their corresponding revenues.
###Code
rev_larg=df2.nlargest(10,'revenue_adj')
rev_larg.head()
budget_larg=df2.nlargest(10,'budget_adj')
budget_larg.head()
###Output
_____no_output_____
###Markdown
I will now plot these two dataframes to see how they compare to each other and whether there are any visible trends.
###Code
plt.figure(figsize=(15,10))
plt.subplot(121)
plt.title('High Revenue')
plt.xlabel('Budget')
plt.ylabel('Revenue')
plt.ylim(0,1200000000)
plt.xlim(0,400000000)
plt.scatter(rev_larg['budget_adj'],rev_larg['revenue_adj'], color='red')
plt.subplot(122)
plt.title('High Budget')
plt.xlabel('Budget')
plt.ylabel('Revenue')
plt.ylim(0,1200000000)
plt.xlim(0,400000000)
plt.scatter(budget_larg['budget_adj'],budget_larg['revenue_adj'], color='green');
df2['budget_adj'].median()
df2['revenue_adj'].median()
###Output
_____no_output_____
###Markdown
In the cells above I looked at the 10 highest revenues and their corresponding budgets, as well as the 10 highest budgets and their corresponding revenues. I plotted two graphs using these values, with the red dots indicating the high revenue dataset and the green dots indicating the high budget dataset. From this plot and the table describing this data, one can see that the highest revenue movie was also one of the highest budget movies. However, some of the highest budget movies had much lower revenues than the 10 highest revenue movies. This is clear when looking at the median revenue calculated above and comparing it to some of the revenues corresponding to the 10 highest budgets. Similarly, some of the highest revenue movies had much smaller budgets than the 10 highest budget movies. Is budget directly proportional to average vote? Do high budget movies get higher votes on average? I will use a scatter plot to see the relationship between these two factors.
###Code
y4=df1['budget_adj']
y4.head()
x4=df1['vote_average']
x4.head()
plt.figure(figsize=(15,10))
plt.title('Vote vs Budget')
plt.xlabel('Vote')
plt.ylabel('Budget')
plt.scatter(x4,y4);
###Output
_____no_output_____ |
site/ko/tutorials/customization/custom_training_walkthrough.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs](https://github.com/tensorflow/docs) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 지원하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
각 샘플은 각 클래스에 대한 [로짓(logit)](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다. 이 로짓(logit)을 각 클래스에 대한 확률로 변환하기 위하서 [소프트맥스(softmax)](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하겠습니다.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
`tf.argmax`는 예측된 값 중 가장 큰 확률(원하는 클래스)을 반환합니다. 하지만 모델이 아직 훈련되지 않았으므로 이는 좋은 예측이 아닙니다.
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
모델 훈련하기*[훈련 단계](https://developers.google.com/machine-learning/crash-course/glossarytraining)* 는 모델이 점진적으로 최적화되거나 데이터셋을 학습하는 머신러닝의 과정입니다. 훈련의 목적은 미지의 데이터를 예측하기 위해, 훈련 데이터 세트의 구조에 대해서 충분히 학습하는 것입니다. 만약 모델이 훈련 데이터 세트에 대해서 과하게 학습된다면 오직 훈련 데이터 세트에 대해서 작동할 것이며, 일반화되기 힘들 것입니다. 이러한 문제를 *[과대적합(overfitting)](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)* 이라고 합니다. 이는 마치 문제를 이해하고 해결한다기보다는 답을 기억하는 것이라고 생각할 수 있습니다. 붓꽃 분류 문제는 *[지도 학습(supervised machine learning)](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)* 의 예시 중 하나입니다.: 지도학습은 모델이 레이블을 포함한 훈련 데이터로부터 학습됩니다. *[비지도 학습(unsupervised machine learning)](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)* 에서는 훈련 데이터가 레이블을 포함하고 있지 않습니다. 대신에 모델은 특성 간의 패턴을 찾습니다. 손실 함수와 그래디언트 함수 정의하기훈련과 평가단계에서 모델의 *[손실(loss)](https://developers.google.com/machine-learning/crash-course/glossaryloss)*을 계산해야 합니다. 손실은 모델의 예측이 원하는 레이블과 얼마나 일치하는지, 또한 모델이 잘 작동하는지에 대한 척도로 사용됩니다. 이 값을 최소화하고, 최적화 해야합니다.모델의 손실은 `tf.keras.losses.categorical_crossentropy` 함수를 사용해 계산할 것입니다. 이 함수는 모델의 클래스(레이블)과 예측된 값(로짓)을 입력받아 샘플의 평균 손실을 반환합니다.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
옵티마이저 생성 *[옵티마이저(optimizer)](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)*는 `손실` 함수를 최소화하기 위해 계산된 그래디언트를 모델의 변수에 적용합니다. 손실 함수를 구부러진 곡선의 표면(그림 3)으로 생각할 수 있으며, 이 함수의 최저점을 찾고자 합니다. 그래디언트는 가장 가파른 상승 방향을 가리키며 따라서 반대 방향으로 이동하는 여행을 합니다. 각 배치마다의 손실과 기울기를 반복적으로 계산하여 훈련과정 동안 모델을 조정합니다. 점진적으로, 모델은 손실을 최소화하기 위해 가중치(weight)와 편향(bias)의 최적의 조합을 찾아냅니다. 손실이 낮을수록 더 좋은 모델의 예측을 기대할 수 있습니다. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> 그림 3. 3차원 공간에 대한 최적화 알고리즘 시각화.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) 텐서플로는 훈련을 위해 사용 가능한 여러종류의 [최적화 알고리즘](https://www.tensorflow.org/api_guides/python/train)을 가지고 있습니다. 이번 모델에서는 *[확률적 경사 하강법(stochastic gradient descent, SGD)](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* 을 구현한 [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer)를 사용하겠습니다. `learning_rate`은 경사하강 과정의 크기를 나타내는 매개변수이며, 더 나은 결과를 위해 조절가능한 *하이퍼파라미터(hyperparameter)* 입니다. 옵티마이저(optimizer)를 설정합니다.
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
이를 사용해 한 번의 최적화 단계를 계산하기 위해 사용합니다.
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
모델 유효성 평가이제 모델은 훈련되었습니다. 모델의 성능에 대한 몇가지 통계를 얻을 수 있습니다. *평가(Evaluating)*는 모델이 예측을 얼마나 효과적으로 수행하는지 결정하는 것을 의미합니다. 붓꽃 분류 모델의 유효성을 결정하기 위해, 몇가지 꽃잎과 꽃받침 데이터를 통과시키고 어떠한 품종을 예측하는지 확인합니다. 그 후 실제 품종과 비교합니다. 예를 들어, 절반의 데이터를 올바르게 예측한 모델의 *[정확도](https://developers.google.com/machine-learning/glossary/accuracy)* 는 `0.5`입니다. 그림 4는 조금 더 효과적인 모델입니다. 5개의 예측 중 4개를 올바르게 예측하여 80% 정확도를 냅니다. 샘플 특성 레이블 모델 예측 5.93.04.31.511 6.93.15.42.122 5.13.31.70.500 6.0 3.4 4.5 1.6 12 5.52.54.01.311 그림 4. 80% 정확도 붓꽃 분류기. 테스트 데이터 세트 설정모델을 평가하는 것은 모델을 훈련하는 것과 유사합니다. 가장 큰 차이는 훈련 데이터가 아닌 *[테스트 데이터 세트](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* 를 사용했다는 것입니다. 공정하게 모델의 유효성을 평가하기 위해, 모델을 평가하기 위한 샘플은 반드시 훈련 데이터와 달라야합니다. 테스트 데이터 세트를 설정하는 것은 훈련 데이터 세트를 설정하는 것과 유사합니다. CSV 파일을 다운로드하고 값을 파싱합니다. 그 후 셔플은 적용하지 않습니다.
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 참여하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ko)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
각 샘플은 각 클래스에 대한 [로짓(logit)](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다. 이 로짓(logit)을 각 클래스에 대한 확률로 변환하기 위하서 [소프트맥스(softmax)](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하겠습니다.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
`tf.argmax`는 예측된 값 중 가장 큰 확률(원하는 클래스)을 반환합니다. 하지만 모델이 아직 훈련되지 않았으므로 이는 좋은 예측이 아닙니다.
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
모델 훈련하기*[훈련 단계](https://developers.google.com/machine-learning/crash-course/glossarytraining)* 는 모델이 점진적으로 최적화되거나 데이터셋을 학습하는 머신러닝의 과정입니다. 훈련의 목적은 미지의 데이터를 예측하기 위해, 훈련 데이터 세트의 구조에 대해서 충분히 학습하는 것입니다. 만약 모델이 훈련 데이터 세트에 대해서 과하게 학습된다면 오직 훈련 데이터 세트에 대해서 작동할 것이며, 일반화되기 힘들 것입니다. 이러한 문제를 *[과대적합(overfitting)](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)* 이라고 합니다. 이는 마치 문제를 이해하고 해결한다기보다는 답을 기억하는 것이라고 생각할 수 있습니다. 붓꽃 분류 문제는 *[지도 학습(supervised machine learning)](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)* 의 예시 중 하나입니다.: 지도학습은 모델이 레이블을 포함한 훈련 데이터로부터 학습됩니다. *[비지도 학습(unsupervised machine learning)](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)* 에서는 훈련 데이터가 레이블을 포함하고 있지 않습니다. 대신에 모델은 특성 간의 패턴을 찾습니다. 손실 함수와 그래디언트 함수 정의하기훈련과 평가단계에서 모델의 *[손실(loss)](https://developers.google.com/machine-learning/crash-course/glossaryloss)*을 계산해야 합니다. 손실은 모델의 예측이 원하는 레이블과 얼마나 일치하는지, 또한 모델이 잘 작동하는지에 대한 척도로 사용됩니다. 이 값을 최소화하고, 최적화 해야합니다.모델의 손실은 `tf.keras.losses.categorical_crossentropy` 함수를 사용해 계산할 것입니다. 이 함수는 모델의 클래스(레이블)과 예측된 값(로짓)을 입력받아 샘플의 평균 손실을 반환합니다.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
옵티마이저 생성 *[옵티마이저(optimizer)](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)*는 `손실` 함수를 최소화하기 위해 계산된 그래디언트를 모델의 변수에 적용합니다. 손실 함수를 구부러진 곡선의 표면(그림 3)으로 생각할 수 있으며, 이 함수의 최저점을 찾고자 합니다. 그래디언트는 가장 가파른 상승 방향을 가리키며 따라서 반대 방향으로 이동하는 여행을 합니다. 각 배치마다의 손실과 기울기를 반복적으로 계산하여 훈련과정 동안 모델을 조정합니다. 점진적으로, 모델은 손실을 최소화하기 위해 가중치(weight)와 편향(bias)의 최적의 조합을 찾아냅니다. 손실이 낮을수록 더 좋은 모델의 예측을 기대할 수 있습니다. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> 그림 3. 3차원 공간에 대한 최적화 알고리즘 시각화.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) 텐서플로는 훈련을 위해 사용 가능한 여러종류의 [최적화 알고리즘](https://www.tensorflow.org/api_guides/python/train)을 가지고 있습니다. 이번 모델에서는 *[확률적 경사 하강법(stochastic gradient descent, SGD)](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* 을 구현한 [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer)를 사용하겠습니다. `learning_rate`은 경사하강 과정의 크기를 나타내는 매개변수이며, 더 나은 결과를 위해 조절가능한 *하이퍼파라미터(hyperparameter)* 입니다. 옵티마이저(optimizer)를 설정합니다.
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
이를 사용해 한 번의 최적화 단계를 계산하기 위해 사용합니다.
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
모델 유효성 평가이제 모델은 훈련되었습니다. 모델의 성능에 대한 몇가지 통계를 얻을 수 있습니다. *평가(Evaluating)*는 모델이 예측을 얼마나 효과적으로 수행하는지 결정하는 것을 의미합니다. 붓꽃 분류 모델의 유효성을 결정하기 위해, 몇가지 꽃잎과 꽃받침 데이터를 통과시키고 어떠한 품종을 예측하는지 확인합니다. 그 후 실제 품종과 비교합니다. 예를 들어, 절반의 데이터를 올바르게 예측한 모델의 *[정확도](https://developers.google.com/machine-learning/glossary/accuracy)* 는 `0.5`입니다. 그림 4는 조금 더 효과적인 모델입니다. 5개의 예측 중 4개를 올바르게 예측하여 80% 정확도를 냅니다. 샘플 특성 레이블 모델 예측 5.93.04.31.511 6.93.15.42.122 5.13.31.70.500 6.0 3.4 4.5 1.6 12 5.52.54.01.311 그림 4. 80% 정확도 붓꽃 분류기. 테스트 데이터 세트 설정모델을 평가하는 것은 모델을 훈련하는 것과 유사합니다. 가장 큰 차이는 훈련 데이터가 아닌 *[테스트 데이터 세트](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* 를 사용했다는 것입니다. 공정하게 모델의 유효성을 평가하기 위해, 모델을 평가하기 위한 샘플은 반드시 훈련 데이터와 달라야합니다. 테스트 데이터 세트를 설정하는 것은 훈련 데이터 세트를 설정하는 것과 유사합니다. CSV 파일을 다운로드하고 값을 파싱합니다. 그 후 셔플은 적용하지 않습니다.
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 Google Colab에서 실행 GitHub에서 소스 보기 노트북 다운로드 이번 튜토리얼에서는 머신러닝을 이용해 붓꽃의 품종을 *분류*해 보도록 하겠습니다. TensorFlow를 사용하여 다음을 실행할 수 있습니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 TensorFlow 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 TensorFlow 개념을 사용합니다.- TensorFlow의 [즉시 실행](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용- [데이터세트 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기- TensorFlow의 [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용하여 모델 및 레이어 구축이번 튜토리얼은 다음과 같이 기타 TensorFlow 프로그램과 유사하게 구성됩니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용하여 예측하기 프로그램 설정 라이브러리 임포트TensorFlow 및 기타 필요한 Python 모듈을 가져옵니다. TensorFlow는 기본적으로 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용하여 나중에 실행되는 계산 그래프를 생성하는 대신 연산을 즉시 평가하고 구체적인 값을 반환합니다. REPL이나 `python` 대화형 콘솔을 사용한다면 익숙할 것입니다.
###Code
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제만약 식물학자가 붓꽃을 자동으로 분류하는 방법을 찾고 있다고 가정해 봅시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램이라면 사진을 통해 꽃을 분류할 수 있을 겁니다. 하지만 이번 튜토리얼에서는 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)에서 측정된 길이와 폭의 값에 기반해서 붓꽃을 분류해 보도록 하겠습니다.이 붓꽃은 그 종류가 약 300종에 이르지만, 튜토리얼에서는 다음의 3가지 품종으로만 분류해 보겠습니다.- Iris setosa- Iris virginica- Iris versicolor 그림 1. Iris setosa (Radomil, CC BY-SA 3.0), Iris versicolor{, (Dlanglois, CC BY-SA 3.0), Iris virginica (Frank Mayfield, CC BY-SA 2.0). 다행히도 꽃받침과 꽃잎의 길이와 폭의 값을 측정한 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)가 있습니다. 해당 데이터는 머신러닝 분류 문제에 있어 초보자에게 유명한 고전적인 데이터세트입니다. 훈련 데이터 가져오기 및 파싱데이터 파일을 다운로드하여 이 Python 프로그램이 사용할 수 있는 구조로 해당 데이터를 전환합니다. 데이터세트 다운로드`tf.keras.utils.get_file` 함수를 사용하여 훈련 데이터세트를 다운로드합니다. 이 함수는 다운로드된 파일 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터세트 `iris_training.csv`는 텍스트 파일이며, 표로 된 데이터를 CSV(comma-separated values)로 저장합니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
해당 데이터세트에서 다음 사항에 주목하세요.1. 첫 번째 줄은 데이터세트 정보를 포함하는 헤더입니다.- 총 120개의 샘플이 있으며, 각 샘플에는 4개의 특성과 3개의 가능한 라벨 이름 중 하나가 있습니다.1. 다음 줄은 데이터 레코드로, 한 줄당 한 개의 *[예](https://developers.google.com/machine-learning/glossary/example)*가 있습니다.- 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)*으로, 예의 특징을 보여줍니다. 여기서 필드는 붓꽃의 측정값을 부동소수점으로 표시합니다.- 마지막 열은 *[라벨](https://developers.google.com/machine-learning/glossary/label)*이며 예측하려는 값을 나타냅니다. 이 데이터세트에서는 꽃의 이름에 상응하는 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# column order in CSV file
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("Features: {}".format(feature_names))
print("Label: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각 라벨은 'setosa'와 같은 문자형 이름과 관련이 있습니다. 하지만 머신러닝은 주로 숫자형 값에 의존합니다. 라벨 숫자는 다음의 이름을 대신합니다.- `0`: Iris setosa- `1`: Iris versicolor- `2`: Iris virginica특성과 라벨에 관한 더 자세한 내용은 [머신러닝 단기 집중 과정의 ML 용어 섹션](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성TensorFlow의 [데이터세트 API](https://www.tensorflow.org/guide/datasets)는 모델로 데이터를 로드할 때 일반적으로 발생하는 다양한 사례를 다룹니다. 이는 데이터를 읽고 훈련에 필요한 형태로 변환하는 고수준 API입니다.데이터세트는 CSV 형식의 텍스트 파일이므로, 적절한 형태로 데이터를 구분하기 위해 `tf.data.experimental.make_csv_dataset` 함수를 사용하겠습니다. 이 함수는 훈련 모델용 데이터를 생성하므로, 초기값은 데이터 (`shuffle=True, shuffle_buffer_size=10000`)의 셔플링 및 데이터세트(`num_epochs=None`)의 무한 반복으로 설정되어있습니다. 또한 [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) 파라미터를 다음과 같이 설정합니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍에서 `tf.data.Dataset`를 반환하며, 여기서 `features`는 `{'feature_name': value}` 사전에 해당합니다.이러한 `Dataset` 객체는 반복 가능합니다. 다음을 통해 배치별 특성을 살펴보겠습니다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성은 하나의 그룹으로 묶이거나 *배치 처리*된다는 점에 주목하세요. 각 예제 행의 필드는 해당하는 특성 배열에 추가됩니다. `batch_size`를 조정하여 이러한 특성 배열에 저장된 샘플 수를 설정하세요.또한 배치에서 일부 특성을 플롯하여 클러스터가 생기는 것을 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 사전을 `(batch_size, num_features)`의 형상을 갖는 단일 배열로 리패키징하는 함수를 생성합니다.이 함수는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드를 사용하여 텐서의 목록에서 값을 취하고 지정된 차원에서 결합된 텐서를 생성합니다.
###Code
def pack_features_vector(features, labels):
"""Pack the features into a single array."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
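###Markdown
To make `tf.stack` concrete, here is a small toy illustration (the values are made up, not taken from the dataset): two per-feature columns holding three examples, stacked along `axis=1`, become a `(3, 2)` array with one feature vector per row — the same reshaping the function above performs on the four iris columns.
###Code
# Toy example: stack two "feature columns" of 3 examples along axis=1.
col_a = tf.constant([1.0, 2.0, 3.0])
col_b = tf.constant([10.0, 20.0, 30.0])
stacked = tf.stack([col_a, col_b], axis=1)
print(stacked.shape)  # (3, 2): one row per example, one column per feature
print(stacked)
###Output
_____no_output_____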
###Markdown
그런 다음 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용하여 각 `(features,label)` 쌍의 `features`을 훈련 데이터세트에 저장합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
`Dataset`의 특성 요소는 `(batch_size, num_features)` 형상의 배열이 되었습니다. 예제의 앞부분을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가?*[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)*은 특성과 레이블 간의 관계입니다. 붓꽃 분류 문제의 경우, 모델은 꽃받침과 꽃잎 측정치와 예측된 붓꽃 종 간의 관계를 정의합니다. 일부 간단한 모델은 몇 줄의 대수로 설명할 수 있지만, 복잡한 머신러닝 모델에는 요약하기 어려운 매개변수가 많습니다.머신러닝을 사용하지 *않고* 4가지 특성과 붓꽃 종 간의 관계를 확인할 수 있을까요? 즉, 기존 프로그래밍 기술(예: 여러 개의 조건문)을 사용하여 모델을 만들 수 있을까요? 특정 종에 대한 꽃잎과 꽃받침 측정치 간의 관계를 확인할 수 있을 만큼 충분히 오랫동안 데이터세트를 분석한 경우 가능할 수도 있습니다. 그러나 이것은 더 복잡한 데이터세트에서는 어렵거나 불가능할 수도 있습니다. 좋은 머신러닝 접근 방식이라면 적절한 모델을 제시해 줍니다. 적절한 머신러닝 모델 형식에 충분한 대표 예제를 제공하면 프로그램이 관계를 파악해 줍니다. 모델 선정훈련할 모델의 종류를 선택해야 합니다. 많은 형식의 모델이 있으며 좋은 모델을 선택하려면 경험이 필요합니다. 이 튜토리얼에서는 신경망을 사용하여 붓꽃 분류 문제를 해결합니다. *[신경망](https://developers.google.com/machine-learning/glossary/neural_network)*은 특성과 레이블 간의 복잡한 관계를 찾을 수 있으며, 하나 이상의 숨겨진 레이어로 구성된 고도로 구조화된 그래프입니다. 각 *[숨겨진 레이어](https://developers.google.com/machine-learning/glossary/neural_network)*는 하나 이상의 *[신경](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성됩니다. 신경망에는 여러 범주가 있으며, 이 프로그램은 조밀하거나 *[완전히 연결된 신경망](https://developers.google.com/machine-learning/glossary/neuron)*을 사용합니다. 즉, 한 레이어의 신경은 이전 레이어의 *모든* 신경에서 입력 연결을 받습니다. 예를 들어, 그림 2는 입력 레이어, 2개의 숨겨진 레이어 및 출력 레이어로 구성된 조밀한 신경망을 보여줍니다. 그림 2. 특성, 숨겨진 레이어, 예측으로 구성된 신경망 그림 2의 모델을 훈련하고 레이블이 지정되지 않은 예제를 제공하면, 이 꽃이 주어진 붓꽃 종일 가능성에 대한 3가지 예측값이 생성됩니다. 이 예측을 *[추론](https://developers.google.com/machine-learning/crash-course/glossaryinference)*이라고 합니다. 이 예에서 출력 예측값의 합계는 1.0입니다. 그림 2에서 이 예측은 *Iris setosa*의 경우 `0.02`, *Iris versicolor*의 경우 `0.95`, *Iris virginica*의 경우 `0.03`입니다. 즉, 모델은 95% 확률로 레이블이 지정되지 않은 예시 꽃이 *Iris versicolor*라고 예측합니다. 케라스를 사용한 모델 생성TensorFlow의 `tf.keras` API는 모델과 레이어를 생성하는 데 주로 사용됩니다. Keras가 모든 구성 요소 연결에 대한 복잡성을 처리해 주기 때문에 모델을 구축하고 실험하는 데 용이합니다.`tf.keras.Sequential` 모델은 레이어의 선형 스택입니다. 이 생성자는 레이어 인스턴스 목록을 취하는데, 아래의 경우, 각 10개의 노드를 갖는 두 개의 `tf.keras.layers.Dense` 레이어 및 3개의 노드를 갖는 출력 레이어로 구성되어 레이블 예측을 보여주고 있습니다. 첫 번째 레이어의 `input_shape` 매개변수는 데이터세트의 특성 수에 해당하며 필수적입니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
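###Markdown
The hidden layers above use the ReLU activation. As a quick aside (a toy example for illustration), ReLU simply replaces negative inputs with zero and passes positive inputs through unchanged; this is the element-wise non-linearity discussed next.
###Code
# ReLU keeps positive values and clamps negative values to 0.
z = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
print(tf.nn.relu(z))  # -> [0.  0.  0.  0.5 2. ]
###Output
_____no_output_____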
###Markdown
*[활성화 함수](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 레이어의 노드에서 출력 형상을 결정합니다. 이러한 비선형성이 중요한데, 활성화 함수가 없는 모델은 단일 레이어와 마찬가지이기 때문입니다. `tf.keras.activations`가 많이 있지만, 숨겨진 레이어에서는 주로 [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) 함수가 사용됩니다.숨겨진 레이어와 신경의 이상적인 수는 문제와 데이터세트에 따라 다릅니다. 머신러닝의 여러 측면과 마찬가지로 신경망의 최상의 형태를 고르기 위해서는 지식과 실험이 모두 필요합니다. 경험상 숨겨진 레이어와 신경의 수를 늘리면 일반적으로 더 강력한 모델이 생성되며 이를 효과적으로 훈련하려면 더 많은 데이터가 필요합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
여기에서 각 예제는 각 클래스에 대한 [로짓](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다.이러한 로짓을 각 클래스의 확률로 변환하려면 [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하세요.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
클래스에서 `tf.argmax`를 사용하면 예측된 클래스 인덱스가 제공됩니다. 그러나 모델은 아직 훈련되지 않았으므로 좋은 예측이 아닙니다.
###Code
print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
print(" Labels: {}".format(labels))
###Output
_____no_output_____
###Markdown
Train the model

*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning in which the model is gradually optimized, that is, in which the model learns the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If the model learns *too much* about the training dataset, its predictions only work for the data it has seen and will not generalize. This problem is called overfitting; it is like memorizing the answers instead of understanding how to solve a problem.

The iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained on examples that contain labels. In unsupervised machine learning, the examples do not contain labels; instead, the model typically finds patterns among the features.

Define the loss and gradient function

Both the training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how far off the model's predictions are from the desired label, in other words, how badly the model is performing. We want to minimize, or optimize, this value.

The model's loss is calculated with the `tf.keras.losses.SparseCategoricalCrossentropy` object defined below (the labels are plain integers and the model outputs logits, hence `from_logits=True`). It takes the model's class predictions and the desired labels as input and returns the average loss over the examples.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y, training):
# training=training is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
y_ = model(x, training=training)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels, training=False)
print("Loss test: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 `tf.GradientTape` 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, training=True)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
옵티마이저 생성*[옵티마이저](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)*는 계산된 그래디언트를 모델의 변수에 적용하여 `loss` 함수를 최소화합니다. 손실 함수를 곡면으로 생각해 보세요(그림 3 참조). 곡면을 걸어 다니면서 가장 낮은 지점을 찾으려고 하는 것입니다. 그래디언트는 가장 가파른 상승 방향을 가리키므로 반대 방향으로 이동하여 경사를 내려갑니다. 각 배치의 손실과 그래디언트를 반복적으로 계산하여 훈련 중에 모델을 조정합니다. 점차적으로 모델은 손실을 최소화하기 위해 가중치와 바이어스의 최상의 조합을 찾습니다. 손실이 낮을수록 모델의 예측값이 더 좋습니다. 그림 3. 3D 공간에서 시간에 걸쳐 시각화한 최적화 알고리즘.(출처: Stanford class CS231n, MIT License, 이미지 제공: Alec Radford)TensorFlow에는 훈련에 사용할 수 있는 많은 최적화 알고리즘이 있습니다. 이 모델에서는 *[확률적 경사하강법](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)*(SGD) 알고리즘을 구현하는 `tf.keras.optimizers.SGD`를 사용합니다. `learning_rate`는 경사 아래로 반복할 때마다 사용할 단계의 크기를 설정하는 *하이퍼 매개변수*로서, 더 나은 결과를 얻기 위해 주로 조정하게 됩니다. 옵티마이저를 다음과 같이 설정합니다.
###Code
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
###Output
_____no_output_____
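###Markdown
To see what a single gradient-descent update does, here is a minimal sketch on one scalar variable, kept separate from the iris model and its optimizer: the variable moves opposite to the gradient, scaled by the learning rate.
###Code
# Minimize f(v) = v^2 with one SGD step; the gradient at v=3 is 2*v = 6.
v = tf.Variable(3.0)
demo_sgd = tf.keras.optimizers.SGD(learning_rate=0.01)  # separate from `optimizer` above
with tf.GradientTape() as tape:
    f = v * v
demo_grads = tape.gradient(f, [v])
demo_sgd.apply_gradients(zip(demo_grads, [v]))
print(v.numpy())  # 3.0 - 0.01 * 6.0 = 2.94
###Output
_____no_output_____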
###Markdown
Then use this object to calculate a single optimization step:
###Code
loss_value, grads = grad(model, features, labels)
print("Step: {}, Initial Loss: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("Step: {}, Loss: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels, training=True).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프여기까지 모두 마쳤다면 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 나은 예측을 할 수 있도록 데이터세트 예제를 모델에 제공합니다. 다음 코드 블록은 이러한 훈련 단계를 설정합니다.1. 각 *epoch* 반복. Epoch는 데이터세트를 통과시키는 횟수입니다.2. 하나의 Epoch 내에서 *특성*(`x`)과 *레이블*(`y`)이 포함된 훈련 `Dataset`의 각 예를 반복합니다.3. 예의 특성을 사용하여 예측을 수행하고 레이블과 비교합니다. 예측의 부정확성을 측정하고 이를 사용하여 모델의 손실 및 그래디언트를 계산합니다.4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다.5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 epoch에 대해 반복합니다.`num_epochs` 변수는 데이터세트 모음을 반복하는 횟수입니다. 단순히 생각해도, 모델을 더 오래 훈련한다고 해서 더 나은 모델이 보장되는 것은 아닐 것입니다. `num_epochs`는 조정할 수 있는 *[하이퍼 매개변수](https://developers.google.com/machine-learning/glossary/hyperparameter)*입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## Note: Rerunning this cell uses the same model variables
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
# Compare predicted label to actual label
# training=True is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 유용하지만, 훈련 과정을 직접 보는 것이 *더* 도움이 되기도 합니다. [텐서보드(TensorBoard)](https://www.tensorflow.org/tensorboard)는 TensorFlow에 함께 구성된 굉장히 유용한 시각화 도구입니다. 하지만 `matplotlib` 모듈을 사용하여 기본적인 차트를 생성할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the model's effectiveness

The model is now trained, and we can get some statistics on its performance.

*Evaluating* means determining how effectively the model makes predictions. To check the model's effectiveness at iris classification, pass some sepal and petal measurements to the model, ask it to predict the iris species, and then compare the predictions against the actual labels. For example, a model that picks the correct species on half of the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model: it gets 4 out of 5 predictions right, for 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An iris classifier that is 80% accurate.

Set up the test dataset

Evaluating the model is similar to training it. The biggest difference is that the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate it must be different from the examples used to train it.

The setup for the test `Dataset` is similar to the setup for the training `Dataset`: download the CSV text file and parse the values. Here the test data is not shuffled (`shuffle=False`):
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와 달리 모델은 테스트 데이터의 단일 [epoch](https://developers.google.com/machine-learning/glossary/epoch)만 평가합니다. 다음 코드 셀에서 테스트 세트의 각 예제를 반복하고 모델의 예측값을 실제 레이블과 비교합니다. 이것은 전체 테스트 세트에서 모델의 정확성을 측정하는 데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
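###Markdown
As a cross-check on how `tf.keras.metrics.Accuracy` behaves, here are the five hypothetical predictions from Figure 4 computed by hand: four of the five match their labels, giving 80%.
###Code
# Labels and predictions from Figure 4; only the fourth prediction is wrong.
fig4_labels      = tf.constant([1, 2, 0, 1, 1])
fig4_predictions = tf.constant([1, 2, 0, 2, 1])
fig4_accuracy = tf.keras.metrics.Accuracy()
fig4_accuracy(fig4_predictions, fig4_labels)
print("Figure 4 accuracy: {:.0%}".format(fig4_accuracy.result()))
###Output
_____no_output_____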
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기모델을 훈련하고 이 모델이 붓꽃 종을 분류하는 데 훌륭함을 "증명"했지만 완벽하지는 않습니다. 이제 훈련된 모델을 사용하여 [레이블이 없는 예](https://developers.google.com/machine-learning/glossary/unlabeled_example)에 대한 예측을 수행해 보겠습니다. 즉, 특성은 포함하지만 레이블은 포함하지 않는 예입니다.실제로 레이블이 없는 예는 앱, CSV 파일, 데이터 피드 등 다양한 소스에서 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 레이블이 없는 3가지 예를 수동으로 제공할 것입니다. 레이블 번호는 다음과 같이 표시됩니다.- `0`: Iris setosa- `1`: Iris versicolor- `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(predict_dataset, training=False)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
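###Markdown
To see the full class-probability breakdown for the three unlabeled examples (rather than only the top class), apply softmax to all of the logits at once; this is a small optional addition.
###Code
# Per-class probabilities for each unlabeled example; each row sums to 1.
all_probs = tf.nn.softmax(predictions)
for i, row in enumerate(all_probs.numpy()):
    breakdown = ", ".join("{}: {:.1f}%".format(name, 100 * p)
                          for name, p in zip(class_names, row))
    print("Example {} -> {}".format(i, breakdown))
###Output
_____no_output_____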
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 참여하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ko)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
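###Markdown
To make the record format concrete, here is a small sketch using a made-up line in the same format as `iris_training.csv` (the values are illustrative, not an actual row): the first four comma-separated fields are float features and the last one is the integer label.
###Code
# A hypothetical record in the same comma-separated format.
sample_row = "6.4,2.8,5.6,2.2,2"
fields = sample_row.split(",")
sample_features = [float(v) for v in fields[:-1]]
sample_label = int(fields[-1])
print("features:", sample_features)  # four flower measurements
print("label:", sample_label)        # 2 maps to Iris virginica
###Output
_____no_output_____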
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class. To convert these logits into a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function:
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
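###Markdown
A quick check, added for illustration: the softmax output for each example is a probability distribution, so every row sums to 1 (up to floating-point error).
###Code
# Each row of class probabilities sums to 1.
probs = tf.nn.softmax(predictions[:5])
print(tf.reduce_sum(probs, axis=1))
###Output
_____no_output_____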
###Markdown
Taking `tf.argmax` across the classes returns the index of the largest predicted value, that is, the predicted class. But the model hasn't been trained yet, so these aren't good predictions:
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
Train the model

*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning in which the model is gradually optimized, or in which the model learns the dataset. The goal is to learn enough about the structure of the training dataset to predict unseen data. If the model learns too much from the training dataset, it will only work for that data and will not generalize well. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*; it is like memorizing the answers instead of understanding how to solve the problem.

The iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained on data that includes labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the training data does not include labels; instead, the model looks for patterns among the features.

Define the loss and gradient function

Both the training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. The loss measures how far the model's predictions are from the desired label, in other words, how poorly the model is doing; we want to minimize this value.

The model's loss is calculated with the `tf.keras.losses.SparseCategoricalCrossentropy` object created below: it takes the true classes (integer labels) and the model's predicted values (logits) as input and returns the average loss over the examples.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
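###Markdown
As a minimal illustration of what `tf.GradientTape` records (a toy example, independent of the model): for f(w) = w², the tape recovers df/dw = 2w.
###Code
# Automatic differentiation on a scalar: d(w^2)/dw = 2w, so the gradient at w=4 is 8.
w = tf.Variable(4.0)
with tf.GradientTape() as tape:
    f = w * w
print(tape.gradient(f, w))
###Output
_____no_output_____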
###Markdown
Create an optimizer

An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (Figure 3) whose lowest point we want to find. The gradient points in the direction of steepest ascent, so we travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we adjust the model during training. Gradually, the model finds the best combination of weights and biases to minimize the loss; the lower the loss, the better the model's predictions.

<img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space. (Source: Stanford class CS231n, MIT License, Image credit: Alec Radford)

TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. The code cell below uses `tf.keras.optimizers.Adam`, an adaptive variant of *[stochastic gradient descent (SGD)](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)*. The `learning_rate` sets the step size taken for each iteration down the hill; it is a tunable *hyperparameter* that you will often adjust to get better results. Set up the optimizer:
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
Then use this object to calculate a single optimization step:
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the model's effectiveness

The model is now trained, and we can get some statistics on its performance.

*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at iris classification, pass some sepal and petal measurements to the model, ask it to predict the iris species, and then compare the predictions against the actual labels. For example, a model that picks the correct species on half of the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model: it gets 4 out of 5 predictions right, for 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An iris classifier that is 80% accurate.

Set up the test dataset

Evaluating the model is similar to training it. The biggest difference is that the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate it must be different from the examples used to train it.

The setup for the test `Dataset` is similar to the setup for the training `Dataset`: download the CSV file and parse the values; this time no shuffling is applied (`shuffle=False`):
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs](https://github.com/tensorflow/docs) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 지원하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
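###Markdown
If you want to see how many trainable parameters this stack of Dense layers holds, `model.summary()` prints a per-layer overview; this is an optional inspection step (the expected count is 4·10+10 + 10·10+10 + 10·3+3 = 193).
###Code
# Per-layer output shapes and parameter counts (193 trainable parameters in total).
model.summary()
###Output
_____no_output_____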
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
Here, each example returns a [logit](https://developers.google.com/machine-learning/crash-course/glossarylogits) for each class. To convert these logits into a probability for each class, use the [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) function:
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
Taking `tf.argmax` across the classes returns the index of the largest predicted value, that is, the predicted class. But the model hasn't been trained yet, so these aren't good predictions:
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
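###Markdown
One way to quantify how poor these untrained predictions are (a quick optional aside): compare them with the labels for this batch. With three roughly balanced classes, an untrained model typically lands somewhere near chance level, although any single batch can vary.
###Code
# Fraction of this batch that the untrained model classifies correctly.
predicted_class = tf.argmax(predictions, axis=1, output_type=tf.int32)
correct = tf.cast(tf.equal(predicted_class, tf.cast(labels, tf.int32)), tf.float32)
print("Untrained batch accuracy: {:.1%}".format(tf.reduce_mean(correct).numpy()))
###Output
_____no_output_____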
###Markdown
Train the model

*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning in which the model is gradually optimized, or in which the model learns the dataset. The goal is to learn enough about the structure of the training dataset to predict unseen data. If the model learns too much from the training dataset, it will only work for that data and will not generalize well. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*; it is like memorizing the answers instead of understanding how to solve the problem.

The iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained on data that includes labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the training data does not include labels; instead, the model looks for patterns among the features.

Define the loss and gradient function

Both the training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. The loss measures how far the model's predictions are from the desired label, in other words, how poorly the model is doing; we want to minimize this value.

The model's loss is calculated with the `tf.keras.losses.SparseCategoricalCrossentropy` object created below: it takes the true classes (integer labels) and the model's predicted values (logits) as input and returns the average loss over the examples.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Create an optimizer

An *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of the loss function as a curved surface (Figure 3) whose lowest point we want to find. The gradient points in the direction of steepest ascent, so we travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we adjust the model during training. Gradually, the model finds the best combination of weights and biases to minimize the loss; the lower the loss, the better the model's predictions.

<img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space. (Source: Stanford class CS231n, MIT License, Image credit: Alec Radford)

TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. The code cell below uses `tf.keras.optimizers.Adam`, an adaptive variant of *[stochastic gradient descent (SGD)](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)*. The `learning_rate` sets the step size taken for each iteration down the hill; it is a tunable *hyperparameter* that you will often adjust to get better results. Set up the optimizer:
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
Then use this object to calculate a single optimization step:
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the model's effectiveness

The model is now trained, and we can get some statistics on its performance.

*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at iris classification, pass some sepal and petal measurements to the model, ask it to predict the iris species, and then compare the predictions against the actual labels. For example, a model that picks the correct species on half of the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model: it gets 4 out of 5 predictions right, for 80% accuracy:

| Example features | Label | Model prediction |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

Figure 4. An iris classifier that is 80% accurate.

Set up the test dataset

Evaluating the model is similar to training it. The biggest difference is that the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate it must be different from the examples used to train it.

The setup for the test `Dataset` is similar to the setup for the training `Dataset`: download the CSV file and parse the values; this time no shuffling is applied (`shuffle=False`):
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 참여하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ko)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
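###Markdown
위에서 만든 `train_dataset`이 어떤 구조로 데이터를 반환하는지 간단히 확인해 볼 수 있습니다. 아래는 위 셀들이 실행되어 `train_dataset`이 정의되어 있다고 가정한 참고용 스케치입니다. `element_spec`은 특성 딕셔너리와 레이블의 형태(shape)와 자료형을 보여줍니다.
###Code
# (features, label) 구조와 각 특성의 형태를 확인합니다.
print(train_dataset.element_spec)
# 배치 하나만 꺼내어 특성별 형태와 자료형을 출력합니다.
for example_features, example_labels in train_dataset.take(1):
    for name, value in example_features.items():
        print("{:>13}: shape={}, dtype={}".format(name, value.shape, value.dtype))
    print("{:>13}: shape={}, dtype={}".format("label", example_labels.shape, example_labels.dtype))
###Output
_____no_output_____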
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
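###Markdown
`tf.stack`이 특성 딕셔너리를 어떻게 하나의 행렬로 바꾸는지, 튜토리얼 데이터와 무관한 임의의 값으로 만든 작은 독립 예시로 살펴보면 다음과 같습니다.
###Code
# 배치 크기가 2인 가짜 특성 딕셔너리 (값은 임의로 가정한 것입니다.)
toy_features = {
    'sepal_length': tf.constant([5.1, 6.0]),
    'sepal_width':  tf.constant([3.3, 3.4]),
    'petal_length': tf.constant([1.7, 4.5]),
    'petal_width':  tf.constant([0.5, 1.6]),
}
# axis=1로 쌓으면 (batch_size, num_features) 형태의 단일 텐서가 됩니다.
packed = tf.stack(list(toy_features.values()), axis=1)
print(packed.shape)  # (2, 4)
print(packed)
###Output
_____no_output_____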
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
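###Markdown
정의한 모델의 층별 출력 형태와 파라미터 수는 `summary()` 메서드로 확인할 수 있습니다. 아래는 위 셀의 `model`이 만들어져 있다고 가정한 간단한 예시입니다.
###Code
# Dense 층의 파라미터 수 = (입력 차원 x 노드 수) + 노드 수(편향)
#   첫 번째 은닉층: 4*10 + 10 = 50
#   두 번째 은닉층: 10*10 + 10 = 110
#   출력층:        10*3 + 3 = 33
model.summary()
###Output
_____no_output_____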
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
각 샘플은 각 클래스에 대한 [로짓(logit)](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다. 이 로짓(logit)을 각 클래스에 대한 확률로 변환하기 위하서 [소프트맥스(softmax)](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하겠습니다.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
`tf.argmax`는 예측된 값 중 가장 큰 확률(원하는 클래스)을 반환합니다. 하지만 모델이 아직 훈련되지 않았으므로 이는 좋은 예측이 아닙니다.
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
모델 훈련하기*[훈련 단계](https://developers.google.com/machine-learning/crash-course/glossarytraining)* 는 모델이 점진적으로 최적화되거나 데이터셋을 학습하는 머신러닝의 과정입니다. 훈련의 목적은 미지의 데이터를 예측하기 위해, 훈련 데이터 세트의 구조에 대해서 충분히 학습하는 것입니다. 만약 모델이 훈련 데이터 세트에 대해서 과하게 학습된다면 오직 훈련 데이터 세트에 대해서 작동할 것이며, 일반화되기 힘들 것입니다. 이러한 문제를 *[과대적합(overfitting)](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)* 이라고 합니다. 이는 마치 문제를 이해하고 해결한다기보다는 답을 기억하는 것이라고 생각할 수 있습니다. 붓꽃 분류 문제는 *[지도 학습(supervised machine learning)](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)* 의 예시 중 하나입니다: 지도학습은 모델이 레이블을 포함한 훈련 데이터로부터 학습됩니다. *[비지도 학습(unsupervised machine learning)](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)* 에서는 훈련 데이터가 레이블을 포함하고 있지 않습니다. 대신에 모델은 특성 간의 패턴을 찾습니다. 손실 함수와 그래디언트 함수 정의하기훈련과 평가단계에서 모델의 *[손실(loss)](https://developers.google.com/machine-learning/crash-course/glossaryloss)*을 계산해야 합니다. 손실은 모델의 예측이 원하는 레이블과 얼마나 일치하는지, 또한 모델이 잘 작동하는지에 대한 척도로 사용됩니다. 이 값을 최소화하고, 최적화 해야합니다.모델의 손실은 `tf.keras.losses.SparseCategoricalCrossentropy` 함수를 사용해 계산할 것입니다. 이 함수는 모델의 클래스(레이블)과 예측된 값(로짓)을 입력받아 샘플의 평균 손실을 반환합니다.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
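###Markdown
위 셀에서 사용한 `SparseCategoricalCrossentropy(from_logits=True)`가 내부적으로 어떤 계산을 하는지, 임의의 로짓 값을 가정한 작은 독립 예시로 확인해 볼 수 있습니다. 정답 클래스에 해당하는 소프트맥스 확률의 음의 로그를 평균한 값과 같습니다.
###Code
# 샘플 2개, 클래스 3개짜리 임의의 로짓과 정수 레이블 (가정한 값입니다.)
example_logits = tf.constant([[2.0, 1.0, 0.1],
                              [0.5, 2.5, 0.3]])
example_labels = tf.constant([0, 1])

# 손실 객체가 계산한 값
print(loss_object(y_true=example_labels, y_pred=example_logits).numpy())

# 같은 값을 직접 계산: -log(softmax(로짓)[정답 클래스]) 의 평균
probs = tf.nn.softmax(example_logits)
picked = tf.gather(probs, example_labels, batch_dims=1)  # 정답 클래스의 확률
print(tf.reduce_mean(-tf.math.log(picked)).numpy())
###Output
_____no_output_____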
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
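###Markdown
`tf.GradientTape`가 연산을 기록하고 그래디언트를 계산하는 방식을 모델과 무관한 아주 작은 예시로 보면 다음과 같습니다.
###Code
# y = x^2 일 때 x = 3.0 에서의 미분값은 2x = 6.0 입니다.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x).numpy())  # 6.0
###Output
_____no_output_____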
###Markdown
옵티마이저 생성 *[옵티마이저(optimizer)](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)*는 `손실` 함수를 최소화하기 위해 계산된 그래디언트를 모델의 변수에 적용합니다. 손실 함수를 구부러진 곡선의 표면(그림 3)으로 생각할 수 있으며, 이 함수의 최저점을 찾고자 합니다. 그래디언트는 가장 가파른 상승 방향을 가리키며 따라서 반대 방향으로 이동하는 여행을 합니다. 각 배치마다의 손실과 기울기를 반복적으로 계산하여 훈련과정 동안 모델을 조정합니다. 점진적으로, 모델은 손실을 최소화하기 위해 가중치(weight)와 편향(bias)의 최적의 조합을 찾아냅니다. 손실이 낮을수록 더 좋은 모델의 예측을 기대할 수 있습니다. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> 그림 3. 3차원 공간에 대한 최적화 알고리즘 시각화.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) 텐서플로는 훈련을 위해 사용 가능한 여러종류의 [최적화 알고리즘](https://www.tensorflow.org/api_guides/python/train)을 가지고 있습니다. 이번 모델에서는 *[확률적 경사 하강법(stochastic gradient descent, SGD)](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* 을 변형한 [tf.keras.optimizers.Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)을 사용하겠습니다. `learning_rate`은 경사하강 과정의 크기를 나타내는 매개변수이며, 더 나은 결과를 위해 조절가능한 *하이퍼파라미터(hyperparameter)* 입니다. 옵티마이저(optimizer)를 설정합니다.
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
이를 사용해 한 번의 최적화 단계를 계산하기 위해 사용합니다.
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
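###Markdown
참고로, 같은 학습을 케라스의 `compile`/`fit` API로도 표현할 수 있습니다. 아래는 위의 수동 훈련 루프를 대체하려는 것이 아니라 비교용으로만 보여 주는 스케치이며, 기존 `model`의 상태를 건드리지 않도록 같은 구조의 새 모델을 만든다고 가정합니다. 에포크 수(5)는 예시로 정한 값입니다.
###Code
# 비교용: 동일한 구조의 새 모델을 compile/fit으로 훈련하는 스케치
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(3)
])
keras_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

# train_dataset은 위에서 만든 (features, label) 배치 데이터셋입니다.
history = keras_model.fit(train_dataset, epochs=5)
###Output
_____no_output_____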
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
모델 유효성 평가이제 모델은 훈련되었습니다. 모델의 성능에 대한 몇가지 통계를 얻을 수 있습니다. *평가(Evaluating)*는 모델이 예측을 얼마나 효과적으로 수행하는지 결정하는 것을 의미합니다. 붓꽃 분류 모델의 유효성을 결정하기 위해, 몇가지 꽃잎과 꽃받침 데이터를 통과시키고 어떠한 품종을 예측하는지 확인합니다. 그 후 실제 품종과 비교합니다. 예를 들어, 절반의 데이터를 올바르게 예측한 모델의 *[정확도](https://developers.google.com/machine-learning/glossary/accuracy)* 는 `0.5`입니다. 그림 4는 조금 더 효과적인 모델입니다. 5개의 예측 중 4개를 올바르게 예측하여 80% 정확도를 냅니다.

| 샘플 특성 | 레이블 | 모델 예측 |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

그림 4. 80% 정확도 붓꽃 분류기. 테스트 데이터 세트 설정모델을 평가하는 것은 모델을 훈련하는 것과 유사합니다. 가장 큰 차이는 훈련 데이터가 아닌 *[테스트 데이터 세트](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* 를 사용했다는 것입니다. 공정하게 모델의 유효성을 평가하기 위해, 모델을 평가하기 위한 샘플은 반드시 훈련 데이터와 달라야합니다. 테스트 데이터 세트를 설정하는 것은 훈련 데이터 세트를 설정하는 것과 유사합니다. CSV 파일을 다운로드하고 값을 파싱합니다. 그 후 셔플은 적용하지 않습니다.
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
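###Markdown
전체 정확도 외에 클래스별로 어떤 혼동이 일어나는지 보고 싶다면 `tf.math.confusion_matrix`를 사용할 수 있습니다. 아래는 위의 `model`과 `test_dataset`이 준비되어 있다고 가정한 스케치입니다.
###Code
# 테스트 세트 전체에 대한 혼동 행렬을 계산합니다.
all_labels = []
all_preds = []
for (x, y) in test_dataset:
    logits = model(x)
    all_preds.append(tf.argmax(logits, axis=1, output_type=tf.int32))
    all_labels.append(y)

all_labels = tf.concat(all_labels, axis=0)
all_preds = tf.concat(all_preds, axis=0)

# 행: 실제 클래스(0, 1, 2), 열: 예측 클래스
print(tf.math.confusion_matrix(all_labels, all_preds, num_classes=3))
###Output
_____no_output_____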
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 참여하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ko)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
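###Markdown
배치 안에 각 품종이 몇 개씩 들어 있는지도 간단히 세어 볼 수 있습니다. 아래는 위 셀의 `labels` 배치와 `class_names`가 남아 있다고 가정한 예시입니다.
###Code
# 레이블(0, 1, 2)별 샘플 수를 셉니다.
counts = tf.math.bincount(labels, minlength=3)
for class_idx, count in enumerate(counts.numpy()):
    print("{} ({}): {}".format(class_idx, class_names[class_idx], count))
###Output
_____no_output_____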
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
각 샘플은 각 클래스에 대한 [로짓(logit)](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다. 이 로짓(logit)을 각 클래스에 대한 확률로 변환하기 위하서 [소프트맥스(softmax)](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하겠습니다.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
`tf.argmax`는 예측된 값 중 가장 큰 확률(원하는 클래스)을 반환합니다. 하지만 모델이 아직 훈련되지 않았으므로 이는 좋은 예측이 아닙니다.
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
모델 훈련하기*[훈련 단계](https://developers.google.com/machine-learning/crash-course/glossarytraining)* 는 모델이 점진적으로 최적화되거나 데이터셋을 학습하는 머신러닝의 과정입니다. 훈련의 목적은 미지의 데이터를 예측하기 위해, 훈련 데이터 세트의 구조에 대해서 충분히 학습하는 것입니다. 만약 모델이 훈련 데이터 세트에 대해서 과하게 학습된다면 오직 훈련 데이터 세트에 대해서 작동할 것이며, 일반화되기 힘들 것입니다. 이러한 문제를 *[과대적합(overfitting)](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)* 이라고 합니다. 이는 마치 문제를 이해하고 해결한다기보다는 답을 기억하는 것이라고 생각할 수 있습니다. 붓꽃 분류 문제는 *[지도 학습(supervised machine learning)](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)* 의 예시 중 하나입니다: 지도학습은 모델이 레이블을 포함한 훈련 데이터로부터 학습됩니다. *[비지도 학습(unsupervised machine learning)](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)* 에서는 훈련 데이터가 레이블을 포함하고 있지 않습니다. 대신에 모델은 특성 간의 패턴을 찾습니다. 손실 함수와 그래디언트 함수 정의하기훈련과 평가단계에서 모델의 *[손실(loss)](https://developers.google.com/machine-learning/crash-course/glossaryloss)*을 계산해야 합니다. 손실은 모델의 예측이 원하는 레이블과 얼마나 일치하는지, 또한 모델이 잘 작동하는지에 대한 척도로 사용됩니다. 이 값을 최소화하고, 최적화 해야합니다.모델의 손실은 `tf.keras.losses.SparseCategoricalCrossentropy` 함수를 사용해 계산할 것입니다. 이 함수는 모델의 클래스(레이블)과 예측된 값(로짓)을 입력받아 샘플의 평균 손실을 반환합니다.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
옵티마이저 생성 *[옵티마이저(optimizer)](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)*는 `손실` 함수를 최소화하기 위해 계산된 그래디언트를 모델의 변수에 적용합니다. 손실 함수를 구부러진 곡선의 표면(그림 3)으로 생각할 수 있으며, 이 함수의 최저점을 찾고자 합니다. 그래디언트는 가장 가파른 상승 방향을 가리키며 따라서 반대 방향으로 이동하는 여행을 합니다. 각 배치마다의 손실과 기울기를 반복적으로 계산하여 훈련과정 동안 모델을 조정합니다. 점진적으로, 모델은 손실을 최소화하기 위해 가중치(weight)와 편향(bias)의 최적의 조합을 찾아냅니다. 손실이 낮을수록 더 좋은 모델의 예측을 기대할 수 있습니다. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> 그림 3. 3차원 공간에 대한 최적화 알고리즘 시각화.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) 텐서플로는 훈련을 위해 사용 가능한 여러종류의 [최적화 알고리즘](https://www.tensorflow.org/api_guides/python/train)을 가지고 있습니다. 이번 모델에서는 *[확률적 경사 하강법(stochastic gradient descent, SGD)](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* 을 변형한 [tf.keras.optimizers.Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)을 사용하겠습니다. `learning_rate`은 경사하강 과정의 크기를 나타내는 매개변수이며, 더 나은 결과를 위해 조절가능한 *하이퍼파라미터(hyperparameter)* 입니다. 옵티마이저(optimizer)를 설정합니다.
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
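###Markdown
위 셀은 `tf.keras.optimizers.Adam`을 사용하지만, 본문에서 설명한 확률적 경사 하강법(SGD)을 그대로 써 보고 싶다면 옵티마이저만 바꾸면 됩니다. 아래는 비교용 스케치이며, 학습률과 모멘텀 값은 예시로 가정한 것입니다.
###Code
# SGD 옵티마이저 설정 예시
sgd_optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# 모멘텀을 함께 사용하는 변형도 자주 쓰입니다.
sgd_momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

print(sgd_optimizer.get_config())
print(sgd_momentum.get_config())
###Output
_____no_output_____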
###Markdown
이를 사용해 한 번의 최적화 단계를 계산하기 위해 사용합니다.
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
모델 유효성 평가이제 모델은 훈련되었습니다. 모델의 성능에 대한 몇가지 통계를 얻을 수 있습니다. *평가(Evaluating)*는 모델이 예측을 얼마나 효과적으로 수행하는지 결정하는 것을 의미합니다. 붓꽃 분류 모델의 유효성을 결정하기 위해, 몇가지 꽃잎과 꽃받침 데이터를 통과시키고 어떠한 품종을 예측하는지 확인합니다. 그 후 실제 품종과 비교합니다. 예를 들어, 절반의 데이터를 올바르게 예측한 모델의 *[정확도](https://developers.google.com/machine-learning/glossary/accuracy)* 는 `0.5`입니다. 그림 4는 조금 더 효과적인 모델입니다. 5개의 예측 중 4개를 올바르게 예측하여 80% 정확도를 냅니다.

| 샘플 특성 | 레이블 | 모델 예측 |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

그림 4. 80% 정확도 붓꽃 분류기. 테스트 데이터 세트 설정모델을 평가하는 것은 모델을 훈련하는 것과 유사합니다. 가장 큰 차이는 훈련 데이터가 아닌 *[테스트 데이터 세트](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* 를 사용했다는 것입니다. 공정하게 모델의 유효성을 평가하기 위해, 모델을 평가하기 위한 샘플은 반드시 훈련 데이터와 달라야합니다. 테스트 데이터 세트를 설정하는 것은 훈련 데이터 세트를 설정하는 것과 유사합니다. CSV 파일을 다운로드하고 값을 파싱합니다. 그 후 셔플은 적용하지 않습니다.
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 Google Colab에서 실행 GitHub에서 소스 보기 노트북 다운로드 이번 튜토리얼에서는 머신러닝을 이용해 붓꽃의 품종을 *분류*해 보도록 하겠습니다. TensorFlow를 사용하여 다음을 실행할 수 있습니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 TensorFlow 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 TensorFlow 개념을 사용합니다.- TensorFlow의 [즉시 실행](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용- [데이터세트 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기- TensorFlow의 [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용하여 모델 및 레이어 구축이번 튜토리얼은 다음과 같이 기타 TensorFlow 프로그램과 유사하게 구성됩니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용하여 예측하기 프로그램 설정 라이브러리 임포트TensorFlow 및 기타 필요한 Python 모듈을 가져옵니다. TensorFlow는 기본적으로 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용하여 나중에 실행되는 계산 그래프를 생성하는 대신 연산을 즉시 평가하고 구체적인 값을 반환합니다. REPL이나 `python` 대화형 콘솔을 사용한다면 익숙할 것입니다.
###Code
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제만약 식물학자가 붓꽃을 자동으로 분류하는 방법을 찾고 있다고 가정해 봅시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램이라면 사진을 통해 꽃을 분류할 수 있을 겁니다. 하지만 이번 튜토리얼에서는 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)에서 측정된 길이와 폭의 값에 기반해서 붓꽃을 분류해 보도록 하겠습니다.이 붓꽃은 그 종류가 약 300종에 이르지만, 튜토리얼에서는 다음의 3가지 품종으로만 분류해 보겠습니다.- Iris setosa- Iris virginica- Iris versicolor 그림 1. Iris setosa (Radomil, CC BY-SA 3.0), Iris versicolor{, (Dlanglois, CC BY-SA 3.0), Iris virginica (Frank Mayfield, CC BY-SA 2.0).다행히도 꽃받침과 꽃잎의 길이와 폭의 값을 측정한 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)가 있습니다. 해당 데이터는 머신러닝 분류 문제에 있어 초보자에게 유명한 고전적인 데이터세트입니다. 훈련 데이터 가져오기 및 파싱데이터 파일을 다운로드하여 이 Python 프로그램이 사용할 수 있는 구조로 해당 데이터를 전환합니다. 데이터세트 다운로드`tf.keras.utils.get_file` 함수를 사용하여 훈련 데이터세트를 다운로드합니다. 이 함수는 다운로드된 파일 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터세트 `iris_training.csv`는 텍스트 파일이며, 표로 된 데이터를 CSV(comma-separated values)로 저장합니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
해당 데이터세트에서 다음 사항에 주목하세요.1. 첫 번째 줄은 데이터세트 정보를 포함하는 헤더입니다.- 총 120개의 샘플이 있으며, 각 샘플에는 4개의 특성과 3개의 가능한 라벨 이름 중 하나가 있습니다.1. 다음 줄은 데이터 레코드로, 한 줄당 한 개의 *[예](https://developers.google.com/machine-learning/glossary/example)*가 있습니다.- 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)*으로, 예의 특징을 보여줍니다. 여기서 필드는 붓꽃의 측정값을 부동소수점으로 표시합니다.- 마지막 열은 *[라벨](https://developers.google.com/machine-learning/glossary/label)*이며 예측하려는 값을 나타냅니다. 이 데이터세트에서는 꽃의 이름에 상응하는 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각 라벨은 'setosa'와 같은 문자형 이름과 관련이 있습니다. 하지만 머신러닝은 주로 숫자형 값에 의존합니다. 라벨 숫자는 다음의 이름을 대신합니다.- `0`: Iris setosa- `1`: Iris versicolor- `2`: Iris virginica특성과 라벨에 관한 더 자세한 내용은 [머신러닝 단기 집중 과정의 ML 용어 섹션](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성TensorFlow의 [데이터세트 API](https://www.tensorflow.org/guide/datasets)는 모델로 데이터를 로드할 때 일반적으로 발생하는 다양한 사례를 다룹니다. 이는 데이터를 읽고 훈련에 필요한 형태로 변환하는 고수준 API입니다.데이터세트는 CSV 형식의 텍스트 파일이므로, 적절한 형태로 데이터를 구분하기 위해 `tf.data.experimental.make_csv_dataset` 함수를 사용하겠습니다. 이 함수는 훈련 모델용 데이터를 생성하므로, 초기값은 데이터 (`shuffle=True, shuffle_buffer_size=10000`)의 셔플링 및 데이터세트(`num_epochs=None`)의 무한 반복으로 설정되어있습니다. 또한 [batch_size](https://developers.google.com/machine-learning/glossary/batch_size) 파라미터를 다음과 같이 설정합니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍에서 `tf.data.Dataset`를 반환하며, 여기서 `features`는 `{'feature_name': value}` 사전에 해당합니다.이러한 `Dataset` 객체는 반복 가능합니다. 다음을 통해 배치별 특성을 살펴보겠습니다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성은 하나의 그룹으로 묶이거나 *배치 처리*된다는 점에 주목하세요. 각 예제 행의 필드는 해당하는 특성 배열에 추가됩니다. `batch_size`를 조정하여 이러한 특성 배열에 저장된 샘플 수를 설정하세요.또한 배치에서 일부 특성을 플롯하여 클러스터가 생기는 것을 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 사전을 `(batch_size, num_features)`의 형상을 갖는 단일 배열로 리패키징하는 함수를 생성합니다.이 함수는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드를 사용하여 텐서의 목록에서 값을 취하고 지정된 차원에서 결합된 텐서를 생성합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그런 다음 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용하여 각 `(features,label)` 쌍의 `features`을 훈련 데이터세트에 저장합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
`Dataset`의 특성 요소는 `(batch_size, num_features)` 형상의 배열이 되었습니다. 예제의 앞부분을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가?*[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)*은 특성과 레이블 간의 관계입니다. 붓꽃 분류 문제의 경우, 모델은 꽃받침과 꽃잎 측정치와 예측된 붓꽃 종 간의 관계를 정의합니다. 일부 간단한 모델은 몇 줄의 대수로 설명할 수 있지만, 복잡한 머신러닝 모델에는 요약하기 어려운 매개변수가 많습니다.머신러닝을 사용하지 *않고* 4가지 특성과 붓꽃 종 간의 관계를 확인할 수 있을까요? 즉, 기존 프로그래밍 기술(예: 여러 개의 조건문)을 사용하여 모델을 만들 수 있을까요? 특정 종에 대한 꽃잎과 꽃받침 측정치 간의 관계를 확인할 수 있을 만큼 충분히 오랫동안 데이터세트를 분석한 경우 가능할 수도 있습니다. 그러나 이것은 더 복잡한 데이터세트에서는 어렵거나 불가능할 수도 있습니다. 좋은 머신러닝 접근 방식이라면 적절한 모델을 제시해 줍니다. 적절한 머신러닝 모델 형식에 충분한 대표 예제를 제공하면 프로그램이 관계를 파악해 줍니다. 모델 선정훈련할 모델의 종류를 선택해야 합니다. 많은 형식의 모델이 있으며 좋은 모델을 선택하려면 경험이 필요합니다. 이 튜토리얼에서는 신경망을 사용하여 붓꽃 분류 문제를 해결합니다. *[신경망](https://developers.google.com/machine-learning/glossary/neural_network)*은 특성과 레이블 간의 복잡한 관계를 찾을 수 있으며, 하나 이상의 숨겨진 레이어로 구성된 고도로 구조화된 그래프입니다. 각 *[숨겨진 레이어](https://developers.google.com/machine-learning/glossary/neural_network)*는 하나 이상의 *[신경](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성됩니다. 신경망에는 여러 범주가 있으며, 이 프로그램은 조밀하거나 *[완전히 연결된 신경망](https://developers.google.com/machine-learning/glossary/neuron)*을 사용합니다. 즉, 한 레이어의 신경은 이전 레이어의 *모든* 신경에서 입력 연결을 받습니다. 예를 들어, 그림 2는 입력 레이어, 2개의 숨겨진 레이어 및 출력 레이어로 구성된 조밀한 신경망을 보여줍니다. 그림 2. 특성, 숨겨진 레이어, 예측으로 구성된 신경망그림 2의 모델을 훈련하고 레이블이 지정되지 않은 예제를 제공하면, 이 꽃이 주어진 붓꽃 종일 가능성에 대한 3가지 예측값이 생성됩니다. 이 예측을 *[추론](https://developers.google.com/machine-learning/crash-course/glossaryinference)*이라고 합니다. 이 예에서 출력 예측값의 합계는 1.0입니다. 그림 2에서 이 예측은 *Iris setosa*의 경우 `0.02`, *Iris versicolor*의 경우 `0.95`, *Iris virginica*의 경우 `0.03`입니다. 즉, 모델은 95% 확률로 레이블이 지정되지 않은 예시 꽃이 *Iris versicolor*라고 예측합니다. 케라스를 사용한 모델 생성TensorFlow의 `tf.keras` API는 모델과 레이어를 생성하는 데 주로 사용됩니다. Keras가 모든 구성 요소 연결에 대한 복잡성을 처리해 주기 때문에 모델을 구축하고 실험하는 데 용이합니다.`tf.keras.Sequential` 모델은 레이어의 선형 스택입니다. 이 생성자는 레이어 인스턴스 목록을 취하는데, 아래의 경우, 각 10개의 노드를 갖는 두 개의 `tf.keras.layers.Dense` 레이어 및 3개의 노드를 갖는 출력 레이어로 구성되어 레이블 예측을 보여주고 있습니다. 첫 번째 레이어의 `input_shape` 매개변수는 데이터세트의 특성 수에 해당하며 필수적입니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력 형태 필요
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
###Markdown
*[활성화 함수](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 레이어의 노드에서 출력 형상을 결정합니다. 이러한 비선형성이 중요한데, 활성화 함수가 없는 모델은 단일 레이어와 마찬가지이기 때문입니다. `tf.keras.activations`가 많이 있지만, 숨겨진 레이어에서는 주로 [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) 함수가 사용됩니다.숨겨진 레이어와 신경의 이상적인 수는 문제와 데이터세트에 따라 다릅니다. 머신러닝의 여러 측면과 마찬가지로 신경망의 최상의 형태를 고르기 위해서는 지식과 실험이 모두 필요합니다. 경험상 숨겨진 레이어와 신경의 수를 늘리면 일반적으로 더 강력한 모델이 생성되며 이를 효과적으로 훈련하려면 더 많은 데이터가 필요합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
여기에서 각 예제는 각 클래스에 대한 [로짓](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다.이러한 로짓을 각 클래스의 확률로 변환하려면 [softmax](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하세요.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
클래스에서 `tf.argmax`를 사용하면 예측된 클래스 인덱스가 제공됩니다. 그러나 모델은 아직 훈련되지 않았으므로 좋은 예측이 아닙니다.
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
모델 훈련하기*[훈련하기](https://developers.google.com/machine-learning/crash-course/glossarytraining)*는 모델이 점차 최적화될 때 또는 모델이 데이터세트를 학습하는 머신러닝 단계입니다. 이 단계의 목표는 훈련 데이터세트의 구조에 대해 충분히 학습하여 보이지 않는 데이터를 예측하는 것입니다. 훈련 데이터세트에 대해 너무 많이 배우면 예측이 관측한 데이터에 대해서만 작동하고 일반화할 수 없습니다. 이런 문제를 과대적합이라고 하며, 이는 문제를 해결하는 방법을 이해하는 대신 답을 암기하는 것과 같습니다.붓꽃 분류 문제는 *[감독 머신러닝](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*의 예입니다. 모델은 레이블이 포함된 예시로 훈련됩니다. 비감독 머신러닝에서 예시에는 레이블이 포함되지 않습니다. 대신 모델은 일반적으로 특성 사이에서 패턴을 찾습니다. 손실 함수와 그래디언트 함수 정의하기훈련 및 평가 단계 모두 모델의 *[손실](https://developers.google.com/machine-learning/crash-course/glossaryloss)*을 계산해야 합니다. 이것은 모델의 예측이 원하는 레이블에서 얼마나 떨어져 있는지, 즉 모델의 성능이 얼마나 나쁜지를 측정합니다. 이 값을 최소화하거나 최적화하려고 합니다.모델의 손실은 `tf.keras.losses.SparseCategoricalCrossentropy` 함수를 사용해 계산합니다. 이 함수는 모델의 로짓 예측과 원하는 정수 레이블을 입력으로 받아 예의 평균 손실을 반환합니다.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 `tf.GradientTape` 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
옵티마이저 생성*[옵티마이저](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)*는 계산된 그래디언트를 모델의 변수에 적용하여 `loss` 함수를 최소화합니다. 손실 함수를 곡면으로 생각해 보세요(그림 3 참조). 곡면을 걸어 다니면서 가장 낮은 지점을 찾으려고 하는 것입니다. 그래디언트는 가장 가파른 상승 방향을 가리키므로 반대 방향으로 이동하여 경사를 내려갑니다. 각 배치의 손실과 그래디언트를 반복적으로 계산하여 훈련 중에 모델을 조정합니다. 점차적으로 모델은 손실을 최소화하기 위해 가중치와 바이어스의 최상의 조합을 찾습니다. 손실이 낮을수록 모델의 예측값이 더 좋습니다. 그림 3. 3D 공간에서 시간에 걸쳐 시각화한 최적화 알고리즘.(출처: Stanford class CS231n, MIT License, 이미지 제공: Alec Radford)TensorFlow에는 훈련에 사용할 수 있는 많은 최적화 알고리즘이 있습니다. 이 모델에서는 *[확률적 경사하강법](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)*(SGD)을 변형한 `tf.keras.optimizers.Adam`을 사용합니다. `learning_rate`는 경사 아래로 반복할 때마다 사용할 단계의 크기를 설정하는 *하이퍼 매개변수*로서, 더 나은 결과를 얻기 위해 주로 조정하게 됩니다. 옵티마이저를 다음과 같이 설정합니다.
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
이를 사용해 한 번의 최적화 단계를 계산하기 위해 사용합니다.
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프여기까지 모두 마쳤다면 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 나은 예측을 할 수 있도록 데이터세트 예제를 모델에 제공합니다. 다음 코드 블록은 이러한 훈련 단계를 설정합니다.1. 각 *epoch* 반복. Epoch는 데이터세트를 통과시키는 횟수입니다.2. 하나의 Epoch 내에서 *특성*(`x`)과 *레이블*(`y`)이 포함된 훈련 `Dataset`의 각 예를 반복합니다.3. 예의 특성을 사용하여 예측을 수행하고 레이블과 비교합니다. 예측의 부정확성을 측정하고 이를 사용하여 모델의 손실 및 그래디언트를 계산합니다.4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다.5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 epoch에 대해 반복합니다.`num_epochs` 변수는 데이터세트 모음을 반복하는 횟수입니다. 단순히 생각해도, 모델을 더 오래 훈련한다고 해서 더 나은 모델이 보장되는 것은 아닐 것입니다. `num_epochs`는 조정할 수 있는 *[하이퍼 매개변수](https://developers.google.com/machine-learning/glossary/hyperparameter)*입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 플롯팅을 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개 샘플의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 라벨과 실제 라벨을 비교합니다.
epoch_accuracy(y, model(x))
# 세대 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
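###Markdown
훈련 루프가 느리게 느껴진다면 한 단계를 `tf.function`으로 감싸 그래프 모드로 실행할 수 있습니다. 아래는 위에서 정의한 `model`, `optimizer`, `loss_object`가 있다고 가정한 선택적인 스케치입니다.
###Code
@tf.function
def train_step(x, y):
    """한 배치에 대해 손실을 계산하고 변수를 갱신하는 훈련 단계입니다."""
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_object(y_true=y, y_pred=logits)
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value

# 사용 예: 배치 하나에 대해 한 단계만 실행해 봅니다.
for x, y in train_dataset.take(1):
    print(train_step(x, y).numpy())
###Output
_____no_output_____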
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 유용하지만, 훈련 과정을 직접 보는 것이 *더* 도움이 되기도 합니다. [텐서보드(TensorBoard)](https://www.tensorflow.org/tensorboard)는 TensorFlow에 함께 구성된 굉장히 유용한 시각화 도구입니다. 하지만 `matplotlib` 모듈을 사용하여 기본적인 차트를 생성할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
모델 유효성 평가이제 모델은 훈련되었습니다. 모델의 성능에 대한 몇가지 통계를 얻을 수 있습니다.*평가*는 모델이 얼마나 효과적으로 예측을 수행하는지 알아보는 것을 의미합니다. 붓꽃 분류에서 모델의 효과를 확인하려면 꽃받침과 꽃잎 측정치를 모델에 전달하고 모델이 붓꽃 종을 예측하도록 요청합니다. 그런 다음 모델의 예측을 실제 레이블과 비교합니다. 예를 들어, 입력 예제의 절반에서 올바른 종을 선택한 모델의 *[정확성](https://developers.google.com/machine-learning/glossary/accuracy)*은 `0.5`입니다. 그림 4는 약간 더 효과적인 모델을 보여줍니다. 5개 예측 중 4개는 80% 정확성으로 정확합니다.

| 샘플 특성 | 레이블 | 모델 예측 |
|---|---|---|
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |

그림 4. 정확성 80%의 붓꽃 분류기 테스트 데이터 세트 설정모델 평가는 모델 훈련과 유사합니다. 가장 큰 차이점은 예제가 훈련 세트가 아닌 별도의 *[테스트 세트](https://developers.google.com/machine-learning/crash-course/glossarytest_set)*에서 나온다는 것입니다. 모델의 효과를 공정하게 평가하려면 모델을 평가하는 데 사용되는 예가 모델 훈련에 사용된 예와 달라야 합니다.테스트 `Dataset`를 설정하는 것은 훈련 `Dataset`를 설정하는 것과 유사합니다. CSV 텍스트 파일을 다운로드하고 값을 파싱합니다. 아래 코드에서는 `shuffle=False`로 설정하므로 따로 셔플하지 않습니다.
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와 달리 모델은 테스트 데이터의 단일 [epoch](https://developers.google.com/machine-learning/glossary/epoch)만 평가합니다. 다음 코드 셀에서 테스트 세트의 각 예제를 반복하고 모델의 예측값을 실제 레이블과 비교합니다. 이것은 전체 테스트 세트에서 모델의 정확성을 측정하는 데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기모델을 훈련하고 이 모델이 붓꽃 종을 분류하는 데 훌륭함을 "증명"했지만 완벽하지는 않습니다. 이제 훈련된 모델을 사용하여 [레이블이 없는 예](https://developers.google.com/machine-learning/glossary/unlabeled_example)에 대한 예측을 수행해 보겠습니다. 즉, 특성은 포함하지만 레이블은 포함하지 않는 예입니다.실제로 레이블이 없는 예는 앱, CSV 파일, 데이터 피드 등 다양한 소스에서 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 레이블이 없는 3가지 예를 수동으로 제공할 것입니다. 레이블 번호는 다음과 같이 표시됩니다.- `0`: Iris setosa- `1`: Iris versicolor- `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____
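###Markdown
예측 과정을 함수 하나로 감싸 두면 새로운 측정값이 들어올 때마다 재사용하기 편합니다. 아래는 위의 `model`과 `class_names`가 있다고 가정한 간단한 스케치이며, 함수 이름과 입력값은 예시입니다.
###Code
def predict_species(measurements):
    """(N, 4) 형태의 측정값 리스트를 받아 (품종 이름, 확률) 목록을 반환합니다."""
    logits = model(tf.convert_to_tensor(measurements))
    probs = tf.nn.softmax(logits)
    results = []
    for p in probs:
        idx = int(tf.argmax(p).numpy())
        results.append((class_names[idx], float(p[idx])))
    return results

# 사용 예
for name, prob in predict_species([[5.1, 3.3, 1.7, 0.5]]):
    print("{} ({:.1%})".format(name, prob))
###Output
_____no_output_____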
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
사용자 정의 학습: 자세히 둘러보기 TensorFlow.org에서 보기 구글 코랩(Colab)에서 실행하기 깃허브(GitHub) 소스 보기 노트북 다운로드 Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도불구하고 [공식 영문 문서](https://www.tensorflow.org/?hl=en)의 내용과 일치하지 않을 수 있습니다.이 번역에 개선할 부분이 있다면[tensorflow/docs](https://github.com/tensorflow/docs) 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.문서 번역이나 리뷰에 지원하려면[[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs)로메일을 보내주시기 바랍니다. 이번 튜토리얼은 붓꽃의 품종을 *분류*하기 위한 머신러닝 모델을 구축할 것입니다. 다음을 위해 텐서플로를 사용합니다.1. 모델 구축2. 모델 훈련3. 모델을 사용한 예측 텐서플로 프로그래밍이번 튜토리얼에서는 다음과 같은 고수준 텐서플로의 개념을 사용합니다.* 텐서플로의 [즉시 실행(eager execution)](https://www.tensorflow.org/guide/eager) 기본 개발 환경 사용,* [데이터셋 API](https://www.tensorflow.org/guide/datasets)를 활용한 데이터 가져오기,* [케라스 API](https://keras.io/getting-started/sequential-model-guide/)를 활용한 모델과 층(layer) 구축 .이번 튜토리얼은 다른 텐서플로 프로그램과 유사하게 구성되어있습니다.1. 데이터 가져오기 및 분석.2. 모델 타입 선정.3. 모델 훈련.4. 모델 효과 평가.5. 훈련된 모델을 사용한 예측. 프로그램 설정 라이브러리 임포트텐서플로와 필요한 파이썬 모듈을 임포트합니다. 텐서플로는 연산이 나중에 실행되는 [계산 그래프(computational graph)](https://www.tensorflow.org/guide/graphs)를 만드는 대신에 연산을 즉시 평가하고 구체적인 값을 반환하는 [즉시 실행](https://www.tensorflow.org/guide/eager)을 사용합니다. 만약 파이썬 대화형 창이나 상호작용 콘솔을 사용하면 더욱 익숙할 겁니다.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
try:
# Colab only
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
print("텐서플로 버전: {}".format(tf.__version__))
print("즉시 실행: {}".format(tf.executing_eagerly()))
###Output
_____no_output_____
###Markdown
붓꽃 분류 문제당신이 식물학자라고 상상하고, 주어진 붓꽃을 자동적으로 분류하는 방법을 찾고 있다고 가정합시다. 머신러닝은 통계적으로 꽃을 분류할 수 있는 다양한 알고리즘을 제공합니다. 예를 들어, 정교한 머신러닝 프로그램은 사진을 통해 꽃을 분류할 수 있습니다. 이번 튜토리얼의 목적은 좀 더 겸손하게, 측정된 [꽃받침](https://en.wikipedia.org/wiki/Sepal)과 [꽃잎](https://en.wikipedia.org/wiki/Petal)의 길이와 폭을 토대로 붓꽃을 분류하는 것입니다.이 붓꽃은 약 300종입니다. 하지만 이번 튜토리얼에서는 오직 3가지 품종을 기준으로 분류할 것입니다. * Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> 그림 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). 다행히도 다른 사람들이 먼저 꽃받침과 꽃잎의 길이와 폭이 측정된 [120개의 붓꽃 데이터](https://en.wikipedia.org/wiki/Iris_flower_data_set)를 만들어 놓았습니다. 이것은 머신러닝 분류 문제에 있어 초보자에게 유명한 고전 데이터셋입니다. 훈련 데이터 가져오기 및 파싱데이터를 불러오고 파이썬 프로그램이 사용할 수 있는 구조로 전환합니다. 데이터셋 다운로드[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) 함수를 사용하여 데이터셋을 다운로드합니다. 이 함수는 다운로드된 파일의 경로를 반환합니다.
###Code
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("데이터셋이 복사된 위치: {}".format(train_dataset_fp))
###Output
_____no_output_____
###Markdown
데이터 탐색이 데이터셋(`iris_training.csv`)은 콤마 ','로 구분된 CSV 파일입니다. `head -n5` 명령을 사용하여 처음 5개 항목을 확인합니다.
###Code
!head -n5 {train_dataset_fp}
###Output
_____no_output_____
###Markdown
처음 5개의 데이터로부터 다음을 주목하세요.1. 첫 번째 줄은 다음과 같은 정보를 포함하고 있는 헤더(header)입니다. * 총 120개의 샘플이 있으며, 각 샘플들은 4개의 특성(feature), 3개의 레이블(label)을 가지고 있습니다.2. 후속행은 데이터 레코드입니다. 한 줄당 한 개의 *[샘플](https://developers.google.com/machine-learning/glossary/example)* 입니다. * 처음 4개의 필드는 *[특성](https://developers.google.com/machine-learning/glossary/feature)* 입니다.: 이것들은 샘플의 특징을 나타냅니다. 이 필드들는 붓꽃의 측정값을 부동소수점으로 나타냅니다. * 마지막 컬럼(column)은 *[레이블(label)](https://developers.google.com/machine-learning/glossary/label)* 입니다.: 레이블은 예측하고자 하는 값을 나타냅니다. 이 데이터셋에서는 꽃의 이름과 관련된 정수값 0, 1, 2를 나타냅니다.코드로 표현하면 다음과 같습니다.:
###Code
# CSV 파일안에서 컬럼의 순서
column_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
feature_names = column_names[:-1]
label_name = column_names[-1]
print("특성: {}".format(feature_names))
print("레이블: {}".format(label_name))
###Output
_____no_output_____
###Markdown
각각의 레이블은 "setosa"와 같은 문자형 이름과 연관되어있습니다. 하지만 머신러닝은 전형적으로 숫자형 값에 의존합니다. 레이블을 다음과 같이 매핑(mapping) 합니다. * `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica특성과 레이블에 관한 더 많은 정보를 위해서는 [머신러닝 특강의 전문용어 부분](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology)을 참조하세요.
###Code
class_names = ['Iris setosa', 'Iris versicolor', 'Iris virginica']
###Output
_____no_output_____
###Markdown
`tf.data.Dataset` 생성텐서플로의 [데이터셋 API](https://www.tensorflow.org/guide/datasets)는 데이터를 적재할 때 발생하는 다양한 경우를 다룰 수 있습니다. 이는 훈련에 필요한 형태로 데이터를 읽고 변환하는 고수준 API입니다. 더 많은 정보를 위해서는 [데이터셋 빠른 실행 가이드](https://www.tensorflow.org/get_started/datasets_quickstart)를 참조하세요. 데이터셋이 CSV 파일이므로, 적절한 형태로 데이터를 구분하기 위해 [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset) 함수를 사용하겠습니다. 이 함수는 훈련 모델을 위한 데이터를 생성하므로, 초기값은 셔플(`shuffle=True, shuffle_buffer_size=10000`)과 무한 반복(`num_epochs=None`)으로 설정되어있습니다. 또한 [배치 사이즈(batch_size)](https://developers.google.com/machine-learning/glossary/batch_size)를 설정해줍니다.
###Code
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1)
###Output
_____no_output_____
###Markdown
`make_csv_dataset` 함수는 `(features, label)` 쌍으로 구성된 `tf.data.Dataset`을 반환합니다. `features`는 딕셔너리 객체인: `{'feature_name': value}`로 주어집니다. 이 데이터셋은 반복가능합니다. 다음은 특성(feature)을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features)
###Output
_____no_output_____
###Markdown
유사한 특성의 값은 같이 그룹 되어있거나, *배치* 돼있다는 사실에 주목하세요. 각 샘플 행의 필드는 해당 특성 배열에 추가됩니다. `batch_size`를 조절하여 이 특성 배열에 저장된 샘플의 수를 설정하세요.또한 배치(batch)로부터 약간의 특성을 도식화하여 군집돼있는 데이터를 확인할 수 있습니다.
###Code
plt.scatter(features['petal_length'],
features['sepal_length'],
c=labels,
cmap='viridis')
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
###Output
_____no_output_____
###Markdown
모델 구축 단계를 단순화하기 위해, 특성 딕셔너리를 `(batch_size, num_features)`의 형태를 가지는 단일 배열로 다시 구성하는 함수를 생성합니다.이 함수는 텐서의 리스트(list)로부터 값을 취하고 특정한 차원으로 결합된 텐서를 생성하는 [tf.stack](https://www.tensorflow.org/api_docs/python/tf/stack) 메서드(method)를 사용합니다.
###Code
def pack_features_vector(features, labels):
"""특성들을 단일 배열로 묶습니다."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
###Output
_____no_output_____
###Markdown
그 후 각 `(features,label)`쌍의 특성을 훈련 데이터 세트에 쌓기위해 [tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/dataset/map) 메서드를 사용합니다.
###Code
train_dataset = train_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
데이터셋의 특성 요소는 이제 형태가 `(batch_size, num_features)`인 배열입니다. 첫 5개행의 샘플을 살펴봅시다.
###Code
features, labels = next(iter(train_dataset))
print(features[:5])
###Output
_____no_output_____
###Markdown
모델 타입 선정 왜 모델을 사용해야하는가? *[모델](https://developers.google.com/machine-learning/crash-course/glossarymodel)* 은 특성(feature)과 레이블(label) 과의 관계입니다. 붓꽃 분류 문제에서 모델은 측정된 꽃받침과 꽃잎 사이의 관계를 정의하고 붓꽃의 품종을 예측합니다. 몇 가지 간단한 모델은 몇 줄의 대수학으로 표현할 수 있으나, 복잡한 머신러닝 모델은 요약하기 힘든 굉장히 많은 수의 매개변수를 가지고 있습니다.머신러닝을 사용하지 않고 4가지의 특성 사이의 관계를 결정하고 붓꽃을 품종을 예측하실 수 있나요? 즉, 특정 품종의 꽃받침과 꽃잎과의 관계를 정의할 수 있을 정도로 데이터셋을 분석했다면, 전통적인 프로그래밍 기술(예를 들어 굉장히 많은 조건문)을 사용하여 모델은 만들 수 있으신가요? 더 복잡한 데이터셋에서 이는 불가능에 가까울 수 있습니다. 잘 구성된 머신러닝은 사용자를 위한 모델을 결정합니다. 만약 충분히 좋은 샘플을 잘 구성된 머신러닝 모델에 제공한다면, 프로그램은 사용자를 위한 특성 간의 관계를 이해하고 제공합니다. 모델 선정이제 학습을 위한 모델의 종류를 선정해야합니다. 여러 종류의 모델이 있고, 이를 선택하는 것은 많은 경험이 필요합니다. 이번 튜토리얼에서는 붓꽃 분류 문제를 해결하기위해 *[신경망(neural network)](https://developers.google.com/machine-learning/glossary/neural_network)* 모델을 사용하겠습니다. 신경망 모델은 특성과 레이블 사이의 복잡한 관계를 찾을 수 있습니다. 신경망은 하나 또는 그 이상의 *[은닉층(hidden layer)](https://developers.google.com/machine-learning/glossary/hidden_layer)*으로 구성된 그래프입니다. 각각의 은닉층은 하나 이상의 *[뉴런(neuron)](https://developers.google.com/machine-learning/glossary/neuron)*으로 구성되어있습니다. 몇가지 신경망의 범주가 있으며, 이번 튜토리얼에서는 *[밀집(dense) 또는 완전 연결 신경망(fully-connected neural network)](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*를 사용합니다: 완전 연결 신경망(fully-connected neural network)은 하나의 뉴런에 이전층의 *모든* 뉴런의 입력을 받는 신경망입니다. 예를 들어, 그림 2는 입력층, 2개의 은닉층, 그리고 출력층으로 구성된 완전 연결 신경망입니다. <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> 그림 2. 특성, 은닉층, 예측으로 구성된 신경망 그림 2의 모델이 훈련된 다음 레이블 되어있지 않은 데이터를 제공했을때, 모델은 주어진 데이터의 3가지(주어진 레이블의 개수) 예측을 출력합니다. 이러한 예측은 *[추론(inference)](https://developers.google.com/machine-learning/crash-course/glossaryinference)* 이라고 불립니다. 이 샘플에서 출력의 합은 1.0입니다. 그림 2에서 예측은 *Iris setosa* `0.02`, *Iris versicolor* `0.95`, *Iris virginica*에 `0.03`로 주어집니다. 이는 모델이 95%의 확률로 주어진 데이터를 *Iris versicolor*로 예측한다는 것을 의미합니다. 케라스를 사용한 모델 생성텐서플로의 [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API는 모델과 층을 생성하기 위한 풍부한 라이브러리를 제공합니다. 케라스가 구성 요소를 연결하기 위한 복잡함을 모두 처리해 주기 때문에 모델을 구축하고 실험하는 것이 쉽습니다.[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)은 여러 층을 연이어 쌓은 모델입니다. 이 구조는 층의 인스턴스를 취하며, 아래의 경우 각 층당 10개의 노드(node)를 가지는 2개의 [완전 연결((Dense) 층](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)과 3개의 예측(레이블의 수) 노드를 가지는 출력 층으로 구성되어있습니다. 첫 번째 층의 `input_shape` 매개변수는 데이터셋의 특성의 수와 관계있습니다.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # 입력의 형태가 필요합니다.
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
###Markdown
*[활성화 함수(activation function)](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)*는 각 층에서 출력의 형태를 결정합니다. 이러한 비선형성은 중요하며, 활성화 함수가 없는 모델은 하나의 층과 동일하다고 생각할 수 있습니다. 사용 가능한 [활성화 함수](https://www.tensorflow.org/api_docs/python/tf/keras/activations)는 많지만, [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU)가 은닉층에 주로 사용됩니다. 이상적인 은닉층과 뉴런의 개수는 문제와 데이터셋에 의해 좌우됩니다. 머신러닝의 여러 측면과 마찬가지로, 최적의 신경망 타입을 결정하는 것은 많은 경험과 지식이 필요합니다. 경험을 토대로 보면 은닉층과 뉴런의 증가는 전형적으로 강력한 모델을 생성하므로, 모델을 효과적으로 훈련시키기 위해서 더 많은 데이터를 필요로 합니다. 모델 사용이 모델이 특성의 배치에 대해 수행하는 작업을 간단히 살펴봅시다.
###Code
predictions = model(features)
predictions[:5]
###Output
_____no_output_____
###Markdown
각 샘플은 각 클래스에 대한 [로짓(logit)](https://developers.google.com/machine-learning/crash-course/glossarylogits)을 반환합니다. 이 로짓(logit)을 각 클래스에 대한 확률로 변환하기 위하서 [소프트맥스(softmax)](https://developers.google.com/machine-learning/crash-course/glossarysoftmax) 함수를 사용하겠습니다.
###Code
tf.nn.softmax(predictions[:5])
###Output
_____no_output_____
###Markdown
`tf.argmax`는 예측된 값 중 가장 큰 확률(원하는 클래스)을 반환합니다. 하지만 모델이 아직 훈련되지 않았으므로 이는 좋은 예측이 아닙니다.
###Code
print(" 예측: {}".format(tf.argmax(predictions, axis=1)))
print("레이블: {}".format(labels))
###Output
_____no_output_____
###Markdown
모델 훈련하기*[훈련 단계](https://developers.google.com/machine-learning/crash-course/glossary#training)* 는 모델이 점진적으로 최적화되거나 데이터셋을 학습하는 머신러닝의 과정입니다. 훈련의 목적은 미지의 데이터를 예측하기 위해, 훈련 데이터 세트의 구조에 대해서 충분히 학습하는 것입니다. 만약 모델이 훈련 데이터 세트에 대해서 과하게 학습된다면 오직 훈련 데이터 세트에 대해서 작동할 것이며, 일반화되기 힘들 것입니다. 이러한 문제를 *[과대적합(overfitting)](https://developers.google.com/machine-learning/crash-course/glossary#overfitting)* 이라고 합니다. 이는 마치 문제를 이해하고 해결한다기보다는 답을 기억하는 것이라고 생각할 수 있습니다. 붓꽃 분류 문제는 *[지도 학습(supervised machine learning)](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)* 의 예시 중 하나입니다: 지도학습은 모델이 레이블을 포함한 훈련 데이터로부터 학습됩니다. *[비지도 학습(unsupervised machine learning)](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)* 에서는 훈련 데이터가 레이블을 포함하고 있지 않습니다. 대신에 모델은 특성 간의 패턴을 찾습니다. 손실 함수와 그래디언트 함수 정의하기훈련과 평가단계에서 모델의 *[손실(loss)](https://developers.google.com/machine-learning/crash-course/glossary#loss)*을 계산해야 합니다. 손실은 모델의 예측이 원하는 레이블과 얼마나 일치하는지, 또한 모델이 잘 작동하는지에 대한 척도로 사용됩니다. 이 값을 최소화, 즉 최적화해야 합니다. 모델의 손실은 `tf.keras.losses.SparseCategoricalCrossentropy` 함수를 사용해 계산할 것입니다. 이 함수는 모델의 클래스(레이블)과 예측된 값(로짓)을 입력받아 샘플의 평균 손실을 반환합니다.
###Code
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y):
y_ = model(x)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels)
print("손실 테스트: {}".format(l))
###Output
_____no_output_____
###Markdown
모델을 최적화하기 위해 사용되는 *[그래디언트(gradient)](https://developers.google.com/machine-learning/crash-course/glossarygradient)*를 계산하기 위해 [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) 컨텍스트를 사용합니다.
###Code
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
옵티마이저 생성 *[옵티마이저(optimizer)](https://developers.google.com/machine-learning/crash-course/glossary#optimizer)*는 `손실` 함수를 최소화하기 위해 계산된 그래디언트를 모델의 변수에 적용합니다. 손실 함수를 구부러진 곡선의 표면(그림 3)으로 생각할 수 있으며, 이 함수의 최저점을 찾고자 합니다. 그래디언트는 가장 가파른 상승 방향을 가리키므로, 우리는 그 반대 방향으로 내려가게 됩니다. 각 배치마다의 손실과 기울기를 반복적으로 계산하여 훈련과정 동안 모델을 조정합니다. 점진적으로, 모델은 손실을 최소화하기 위해 가중치(weight)와 편향(bias)의 최적의 조합을 찾아냅니다. 손실이 낮을수록 더 좋은 모델의 예측을 기대할 수 있습니다. <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> 그림 3. 3차원 공간에 대한 최적화 알고리즘 시각화.(Source: Stanford class CS231n, MIT License, Image credit: Alec Radford) 텐서플로는 훈련을 위해 사용 가능한 여러종류의 [최적화 알고리즘](https://www.tensorflow.org/api_guides/python/train)을 가지고 있습니다. 이번 모델에서는 *[확률적 경사 하강법(stochastic gradient descent, SGD)](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* 을 발전시킨 방법 중 하나인 [tf.keras.optimizers.Adam](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam)을 사용하겠습니다. `learning_rate`은 경사하강 과정의 크기를 나타내는 매개변수이며, 더 나은 결과를 위해 조절가능한 *하이퍼파라미터(hyperparameter)* 입니다. 옵티마이저(optimizer)를 설정합니다.
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
이를 사용해 한 번의 최적화 단계를 계산하기 위해 사용합니다.
###Code
loss_value, grads = grad(model, features, labels)
print("단계: {}, 초기 손실: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("단계: {}, 손실: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels).numpy()))
###Output
_____no_output_____
###Markdown
훈련 루프모든 사항이 갖춰졌으므로 모델을 훈련할 준비가 되었습니다! 훈련 루프는 더 좋은 예측을 위해 데이터셋을 모델로 제공합니다. 다음의 코드 블럭은 아래의 훈련 단계를 작성한 것입니다. 1. 각 *에포크(epoch)* 반복. 에포크는 데이터셋을 통과시키는 횟수입니다. 2. 에포크 내에서, *특성* (`x`)와 *레이블* (`y`)가 포함된 훈련 데이터 세트에 있는 샘플을 반복합니다.3. 샘플의 특성을 사용하여 결과를 예측 하고 레이블과 비교합니다. 예측의 부정확도를 측정하고 모델의 손실과 그래디언트를 계산하기 위해 사용합니다. 4. 모델의 변수를 업데이트하기 위해 `옵티마이저`를 사용합니다. 5. 시각화를 위해 몇가지 값들을 저장합니다.6. 각 에포크를 반복합니다.`num_epochs` 변수는 데이터셋의 반복 횟수입니다. 직관과는 반대로, 모델을 길게 학습하는 것이 더 나은 모델이 될 것이라고 보장하지 못합니다. `num_epochs`는 조정가능한 *[하이퍼파라미터(hyperparameter)](https://developers.google.com/machine-learning/glossary/hyperparameter)* 입니다. 적절한 횟수를 선택하는 것은 많은 경험과 직관을 필요로 합니다.
###Code
## 노트: 이 셀을 다시 실행하면 동일한 모델의 변수가 사용됩니다.
# 도식화를 위해 결과를 저장합니다.
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# 훈련 루프 - 32개의 배치를 사용합니다.
for x, y in train_dataset:
# 모델을 최적화합니다.
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# 진행 상황을 추적합니다.
epoch_loss_avg(loss_value) # 현재 배치 손실을 추가합니다.
# 예측된 레이블과 실제 레이블 비교합니다.
epoch_accuracy(y, model(x))
# epoch 종료
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("에포크 {:03d}: 손실: {:.3f}, 정확도: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
시간에 따른 손실함수 시각화 모델의 훈련 과정을 출력하는 것도 도움이 되지만, 훈련 과정을 직접 보는 것이 더 도움이 되곤합니다. [텐서보드(tensorboard)](https://www.tensorflow.org/guide/summaries_and_tensorboard)는 텐서플로에 패키지 되어있는 굉장히 유용한 시각화 툴입니다. 하지만 `matplotlib` 모듈을 사용하여 일반적인 도표를 출력할 수 있습니다.이 도표를 해석하는 것은 여러 경험이 필요하지만, 결국 모델을 최적화하기 위해 *손실* 이 내려가고 *정확도* 가 올라가는 것을 원합니다.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('훈련 지표')
axes[0].set_ylabel("손실", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("정확도", fontsize=14)
axes[1].set_xlabel("에포크", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
모델 유효성 평가이제 모델은 훈련되었습니다. 모델의 성능에 대한 몇가지 통계를 얻을 수 있습니다. *평가(Evaluating)*는 모델이 예측을 얼마나 효과적으로 수행하는지 결정하는 것을 의미합니다. 붓꽃 분류 모델의 유효성을 결정하기 위해, 몇가지 꽃잎과 꽃받침 데이터를 통과시키고 어떠한 품종을 예측하는지 확인합니다. 그 후 실제 품종과 비교합니다. 예를 들어, 절반의 데이터를 올바르게 예측한 모델의 *[정확도](https://developers.google.com/machine-learning/glossary/accuracy)* 는 `0.5`입니다. 그림 4는 조금 더 효과적인 모델입니다. 5개의 예측 중 4개를 올바르게 예측하여 80% 정확도를 냅니다. 샘플별 (특성, 레이블, 모델 예측): (5.9, 3.0, 4.3, 1.5: 레이블 1, 예측 1), (6.9, 3.1, 5.4, 2.1: 레이블 2, 예측 2), (5.1, 3.3, 1.7, 0.5: 레이블 0, 예측 0), (6.0, 3.4, 4.5, 1.6: 레이블 1, 예측 2), (5.5, 2.5, 4.0, 1.3: 레이블 1, 예측 1). 그림 4. 80% 정확도 붓꽃 분류기. 테스트 데이터 세트 설정모델을 평가하는 것은 모델을 훈련하는 것과 유사합니다. 가장 큰 차이는 훈련 데이터가 아닌 *[테스트 데이터 세트](https://developers.google.com/machine-learning/crash-course/glossary#test_set)* 를 사용했다는 것입니다. 공정하게 모델의 유효성을 평가하기 위해, 모델을 평가하기 위한 샘플은 반드시 훈련 데이터와 달라야합니다. 테스트 데이터 세트를 설정하는 것은 훈련 데이터 세트를 설정하는 것과 유사합니다. CSV 파일을 다운로드하고 값을 파싱합니다. 그 후 셔플은 적용하지 않습니다.
###Code
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name='species',
num_epochs=1,
shuffle=False)
test_dataset = test_dataset.map(pack_features_vector)
###Output
_____no_output_____
###Markdown
테스트 데이터 세트를 사용한 모델 평가훈련 단계와는 다르게 모델은 테스트 데이터에 대해서 오직 한 번의 [에포크](https://developers.google.com/machine-learning/glossary/epoch)를 진행합니다. 다음의 코드 셀은 테스트 셋에 있는 샘플에 대해 실행하고 실제 레이블과 비교합니다. 이는 전체 테스트 데이터 세트에 대한 정확도를 측정하는데 사용됩니다.
###Code
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
logits = model(x)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("테스트 세트 정확도: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
###Markdown
마지막 배치에서 모델이 올바르게 예측한 것을 확인할 수 있습니다.
###Code
tf.stack([y,prediction],axis=1)
###Output
_____no_output_____
###Markdown
훈련된 모델로 예측하기이제 붓꽃을 분류하기 위해 완벽하지는 않지만 어느 정도 검증된 모델을 가지고 있습니다. 훈련된 모델을 사용하여 [레이블 되지 않은 데이터](https://developers.google.com/machine-learning/glossary/unlabeled_example)를 예측해봅시다.실제로는 레이블 되지 않은 샘플들은 여러 소스(앱, CSV 파일, 직접 제공 등)로부터 제공될 수 있습니다. 지금은 레이블을 예측하기 위해 수동으로 3개의 레이블 되지 않은 샘플을 제공하겠습니다. 레이블은 다음과 같은 붓꽃 이름으로 매핑되어있습니다.* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("샘플 {} 예측: {} ({:4.1f}%)".format(i, name, 100*p))
###Output
_____no_output_____ |
if-else/.ipynb_checkpoints/elif-checkpoint.ipynb | ###Markdown
elif: What if we have more than one yes-or-no case? What if we have 3 cases? If the condition in the `if` is not met, execution moves on to the `elif`. We use `elif` as follows:
###Code
if condition_1:
    ...  # what to do if condition_1 is True
elif condition_2:
    ...  # what to do if condition_1 is False and condition_2 is True
else:
    ...  # what to do if both condition_1 and condition_2 are False
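# A minimal runnable sketch of the same if/elif/else structure
# (hypothetical grade thresholds, just for illustration):
grade = 7.5
if grade < 5:
    print('failed')
elif grade < 7:
    print('retake the exam')
else:
    print('passed')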
###Output
_____no_output_____
###Markdown
Example: Let's build a program to analyze employee bonuses at a company (it may sound "simple", but a company like Amazon has 900,000 employees). For sales positions, the bonus rule depends on each person's sales target: if they sold below their target, they get no bonus; if they sold above their target, they get a bonus of 3% of the amount they sold; and if they sold more than double their target, they get a bonus of 7% of the amount they sold. Let's write a program that evaluates a person whose sales target was 20,000 reais and computes their bonus according to their sales amount.
###Code
meta = 20000
venda = 25000
if venda < meta:
print('Não ganhou o bônus')
elif venda > (meta * 2):
bonus = 0.07 * venda
print(f'Ganhou {bonus} de bônus')
else:
bonus = 0.03 * venda
print(f'Ganhou {bonus} de bônus')
###Output
Ganhou 750.0 de bônus
|
week9/in_class_notebooks/week9-196.ipynb | ###Markdown
 **Data Visualization and Exploratory Data Analysis** Visualization is an important part of data analysis. By presenting information visually, you facilitate the process of its perception, which makes it possible to highlight additional patterns, evaluate the ratios of quantities, and quickly communicate key aspects in the data.Let's start with a little "memo" that should always be kept in mind when creating any graphs. How to visualize data and make everyone hate you 1. Chart **titles** are unnecessary. It is always clear from the graph what data it describes.2. Do not label under any circumstances both **axes** of the graph. Let the others check their intuition!3. **Units** are optional. What difference does it make if the quantity was measured, in people or in liters!4. The smaller the **text** on the graph, the sharper the viewer's eyesight.5. You should try to fit all the **information** that you have in the dataset in one chart. With full titles, transcripts, footnotes. The more text, the more informative!6. Whenever possible, use as many 3D and special effects as you have. There will be less visual distortion rather than 2D. As an example, consider the pandemic case. Let's use a dataset with promptly updated statistics on coronavirus (COVID-19), which is publicly available on Kaggle: https://www.kaggle.com/imdevskp/corona-virus-report?select=covid_19_clean_complete.csv The main libraries for visualization in Python that we need today are **matplotlib, seaborn, plotly**.
###Code
# cmd
# Download required binded packages
!pip install plotly-express
!pip install nbformat==4.2.0
!pip install plotly
import matplotlib.pyplot as plt #the most popular library for creating the plots
%matplotlib inline
import numpy as np
import seaborn as sns
import pandas as pd
import pickle # for JSON serialization
import plotly
import plotly_express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
%config InlineBackend.figure_format = 'svg' # graphs in svg look sharper
# Change the default plot size
from pylab import rcParams
rcParams['figure.figsize'] = 7, 5
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
We read the data and look at the number of countries in the dataset and what time period it covers.
###Code
data = pd.read_csv('./data/covid_19_clean.csv')
data.head(10)
###Output
_____no_output_____
###Markdown
How many countries are there in this table?
###Code
data['Country/Region'].nunique()
data.shape
data.describe()
data['Active'].sort_values()[:20]
data[data['Active'] >= 0]
float(-1.400000e+01)
data.describe(include=['object'])
###Output
_____no_output_____
###Markdown
How many cases on average were confirmed per report (across all reports/rows)? Measures of central tendency:
###Code
data.iloc[:60]
data['Confirmed'].mode()
data['Confirmed'].median()
data['Confirmed'].mean()
data[data['Country/Region'] == 'Russia'].iloc[:60]
data
###Output
_____no_output_____
###Markdown
What is the average number of total confirmed cases across all the countries in this table (based on the last available date in the data set) ?
###Code
max('askdjskadj')
max(data.Date)
# data[data.Date == '2020-07-27']
df = data[data.Date == max(data.Date)]['Confirmed']
df.mean()
df.mode()
df.median()
###Output
_____no_output_____
###Markdown
What is the maximum number of confirmed cases in every country?
###Code
data.head(10)
data.groupby('Country/Region')
data.groupby('Country/Region')['Confirmed'].agg('max').sort_values(ascending=False)[:10]
data.groupby('Country/Region')['Confirmed'].max().sort_values(ascending=False)[:10]
data.groupby('Country/Region')['Confirmed'].mean().sum().std()
data.groupby('Country/Region')['Confirmed'].agg(['mean'])
data.groupby('Country/Region')['Confirmed'].agg(['mean', 'sum', 'std'])
###Output
_____no_output_____
###Markdown
More info on groupby: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html* **mean()**: Compute mean of groups* **sum()**: Compute sum of group values* **size()**: Compute group sizes* **count()**: Compute count of group* **std()**: Standard deviation of groups* **var()**: Compute variance of groups* **sem()**: Standard error of the mean of groups* **describe()**: Generates descriptive statistics* **first()**: Compute first of group values* **last()**: Compute last of group values* **nth()** : Take nth value, or a subset if n is a list* **min()**: Compute min of group values* **max()**: Compute max of group values You can see several characteristics at once (mean, median, prod, sum, std,var) - both in DataFrame and Series:
###Code
data.groupby('Country/Region')['Confirmed'].agg(['mean', 'median', 'std'])
data
data.pivot_table(columns='WHO Region', index='Date', values='Confirmed', aggfunc='sum')
np.argmax(data.pivot_table(columns='WHO Region', index='Date', values='Confirmed', aggfunc='sum').iloc[-1])
max(data.pivot_table(columns='WHO Region', index='Date', values='Confirmed', aggfunc='sum').iloc[-1])
our_dict = dict(data.pivot_table(columns='WHO Region', index='Date', values='Confirmed', aggfunc='sum').iloc[-1])
# How to return back a key in a dictionary with the maximum value?
our_dict
max(our_dict.items(), key=lambda x: x[1])
data[data['Active'] > 0]
data[data['WHO Region'] == 'Western Pacific']['Country/Region'].unique()
avg_confirmed = data[data.Date == max(data.Date)]['Confirmed'].mean()
data[(data['WHO Region'] == 'Western Pacific') & (data['Confirmed'] > avg_confirmed)]
avg_confirmed = data[data.Date == max(data.Date)]['Confirmed'].mean()
data[(data['WHO Region'] == 'Western Pacific') & (data['Confirmed'] > avg_confirmed)]['Confirmed'].mean()
data[(data['WHO Region'] == 'Western Pacific') & (data['Confirmed'] > avg_confirmed)]['Country/Region'].unique()
some_countries = ['China', 'Singapore', 'Philippines', 'Japan']
data[data['Country/Region'].isin(some_countries)]
###Output
_____no_output_____
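###Markdown
The `describe()` aggregation from the list above bundles several of these statistics at once; a quick illustration on the same data:
###Code
# count / mean / std / min / quartiles / max of confirmed cases per WHO region
data.groupby('WHO Region')['Confirmed'].describe()
###Output
_____no_output_____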
###Markdown
Let's make a small report:
###Code
data = pd.read_csv('./data/covid_19_clean.csv')
print("Number of countries: ", data['Country/Region'].nunique())
print(f"Day from {min(data['Date'])} till {max(data['Date'])}, overall {data['Date'].nunique()} days.")
data['Date'] = pd.to_datetime(data['Date'], format = '%Y-%m-%d')
display(data[data['Country/Region'] == 'Russia'].tail())
###Output
_____no_output_____
###Markdown
The coronavirus pandemic is a clear example of exponential growth. To demonstrate this, let's build a graph of the total number of infected and dead. We will use a line chart (**Line Chart**), which can reflect the dynamics of one or several indicators and is convenient for seeing how a value changes over time.
###Code
# Line chart
ax = data[['Confirmed', 'Deaths', 'Date']].groupby('Date').sum().plot(title='Title')
ax.set_xlabel("X axes")
ax.set_ylabel("Y axes");
# TODO
# Change the title and axes names
###Output
_____no_output_____
###Markdown
The graph above shows us general information around the world. Let's select the 10 most affected countries (based on the results of the last day from the dataset) and on one **Line Chart** show data for each of them according to the number of registered cases of the disease. This time, let's try using the **plotly** library.
###Code
# Preparation steps fot the table
# Extract the top 10 countries by the number of confirmed cases
df_top = data[data['Date'] == max(data.Date)]
df_top = df_top.groupby('Country/Region', as_index=False)['Confirmed'].sum()
df_top = df_top.nlargest(10,'Confirmed')
# Extract trend across time
df_trend = data.groupby(['Date','Country/Region'], as_index=False)['Confirmed'].sum()
df_trend = df_trend.merge(df_top, on='Country/Region')
df_trend.rename(columns={'Country/Region' : 'Countries',
'Confirmed_x':'Cases',
'Date' : 'Dates'},
inplace=True)
# Plot a graph
# px stands for plotly_express
px.line(df_trend,
title='Increased number of cases of COVID-19',
x='Dates',
y='Cases',
color='Countries')
###Output
_____no_output_____
###Markdown
Let's put a logarithm on this column.
###Code
# Add a column to visualize the logarithmic
df_trend['ln(Cases)'] = np.log(df_trend['Cases'] + 1) # Add 1 for log (0) case
px.line(df_trend,
x='Dates',
y='ln(Cases)',
color='Countries',
title='COVID19 Total Cases growth for top 10 worst affected countries(Logarithmic Scale)')
###Output
_____no_output_____
###Markdown
What interesting conclusions can you draw from this graph? Try to do similar graphs for the deaths and active cases.
###Code
# TODO
###Output
_____no_output_____
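###Markdown
One possible sketch for the deaths curve, mirroring the confirmed-cases cells above (it simply reuses the same top-10 selection logic, so treat it as a starting point rather than the only solution):
###Code
# Top 10 countries by deaths on the last available date
df_top_d = data[data['Date'] == max(data.Date)]
df_top_d = df_top_d.groupby('Country/Region', as_index=False)['Deaths'].sum().nlargest(10, 'Deaths')
# Death trend over time for those countries
df_trend_d = data.groupby(['Date', 'Country/Region'], as_index=False)['Deaths'].sum()
df_trend_d = df_trend_d.merge(df_top_d, on='Country/Region')
df_trend_d.rename(columns={'Country/Region': 'Countries', 'Deaths_x': 'Deaths', 'Date': 'Dates'}, inplace=True)
df_trend_d['ln(Deaths)'] = np.log(df_trend_d['Deaths'] + 1)  # +1 to handle zero deaths
px.line(df_trend_d,
        x='Dates',
        y='ln(Deaths)',
        color='Countries',
        title='COVID19 total deaths growth for the 10 worst affected countries (logarithmic scale)')
###Output
_____no_output_____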
###Markdown
Another popular chart is the **Pie chart**. Most often, this graph is used to visualize the relationship between parts (ratios).
###Code
# Pie chart
fig = make_subplots(rows=1, cols=2, specs=[[{'type':'domain'}, {'type':'domain'}]])
labels_donut = [country for country in df_top['Country/Region']]
fig.add_trace(go.Pie(labels=labels_donut, hole=.4, hoverinfo="label+percent+name",
values=[cases for cases in df_top.Confirmed],
name="Ratio", ), 1, 1)
labels_pie = [country for country in df_top['Country/Region']]
fig.add_trace(go.Pie(labels=labels_pie, pull=[0, 0, 0.2, 0],
values=[cases for cases in df_top.Confirmed],
name="Ratio"), 1, 2)
fig.update_layout(
title_text="Donut & Pie Chart: Distribution of COVID-19 cases among the top-10 affected countries",
# Add annotations in the center of the donut pies.
annotations=[dict(text=' ', x=0.5, y=0.5, font_size=16, showarrow=False)],
colorway=['rgb(69, 135, 24)', 'rgb(136, 204, 41)', 'rgb(204, 204, 41)',
'rgb(235, 210, 26)', 'rgb(209, 156, 42)', 'rgb(209, 86, 42)', 'rgb(209, 42, 42)', ])
fig.show()
###Output
_____no_output_____
###Markdown
In the line graphs above, we have visualized aggregate country information by the number of cases detected. Now, let's try to plot a daily trend chart by calculating the difference between the current value and the previous day's value.For this purpose, we will use a histogram (**Histogram**). Also, let's add pointers to key events, for example, lockdown dates in Wuhan province in China, Italy and the UK.
###Code
# Histogram
def add_daily_diffs(df):
# 0 because the previous value is unknown
df.loc[0,'Cases_daily'] = 0
df.loc[0,'Deaths_daily'] = 0
for i in range(1, len(df)):
df.loc[i,'Cases_daily'] = df.loc[i,'Confirmed'] - df.loc[i - 1,'Confirmed']
df.loc[i,'Deaths_daily'] = df.loc[i,'Deaths'] - df.loc[i - 1,'Deaths']
return df
df_world = data.groupby('Date', as_index=False)[['Deaths', 'Confirmed']].sum()
df_world = add_daily_diffs(df_world)
fig = go.Figure(data=[
go.Bar(name='The number of cases',
marker={'color': 'rgb(0,100,153)'},
x=df_world.Date,
y=df_world.Cases_daily),
go.Bar(name='The number of cases', x=df_world.Date, y=df_world.Deaths_daily)
])
fig.update_layout(barmode='overlay', title='Statistics on the number of Confirmed and Deaths from COVID-19 across the world',
annotations=[dict(x='2020-01-23', y=1797, text="Lockdown (Wuhan)",
showarrow=True, arrowhead=1, ax=-100, ay=-200),
dict(x='2020-03-09', y=1797, text="Lockdown (Italy)",
showarrow=True, arrowhead=1, ax=-100, ay=-200),
dict(x='2020-03-23', y=19000, text="Lockdown (UK)",
showarrow=True, arrowhead=1, ax=-100, ay=-200)])
fig.show()
# Save
plotly.offline.plot(fig, filename='my_beautiful_histogram.html', show_link=False)
###Output
_____no_output_____
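###Markdown
As a quick cross-check of the loop above, the same daily differences can be computed with pandas' built-in `diff()`:
###Code
# Vectorised equivalent of add_daily_diffs
check = df_world[['Date', 'Confirmed', 'Deaths']].copy()
check['Cases_daily'] = check['Confirmed'].diff().fillna(0)
check['Deaths_daily'] = check['Deaths'].diff().fillna(0)
# Both columns should match the ones produced by the loop
print((check['Cases_daily'] == df_world['Cases_daily']).all(),
      (check['Deaths_daily'] == df_world['Deaths_daily']).all())
###Output
_____no_output_____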
###Markdown
A histogram is often mistaken for a bar chart because of the visual similarity, but the two serve different purposes: a histogram shows how data is distributed over a continuous interval or a period of time (frequency along the vertical axis, the intervals or time periods along the horizontal axis), while a bar chart compares values across discrete categories.Let's build the **Bar Chart** now. It can be vertical or horizontal; let's choose the latter.We will build the graph only for the top 20 countries by mortality, computing this statistic as the ratio of the number of deaths to the number of confirmed cases for each country.For some countries in the dataset, statistics are presented for each region (for example, for all US states). For such countries we will keep only one (maximum) value. Alternatively, one could calculate the average over the regions and use it as the indicator for the country.
###Code
# Bar chart
df_mortality = data.query('(Date == "2020-07-17") & (Confirmed > 100)')
df_mortality['mortality'] = df_mortality['Deaths'] / df_mortality['Confirmed']
df_mortality['mortality'] = df_mortality['mortality'].apply(lambda x: round(x, 3))
df_mortality.sort_values('mortality', ascending=False, inplace=True)
# Keep the maximum mortality rate for countries for which statistics are provided for each region.
df_mortality.drop_duplicates(subset=['Country/Region'], keep='first', inplace=True)
fig = px.bar(df_mortality[:20].iloc[::-1],
x='mortality',
y='Country/Region',
labels={'mortality': 'Death rate', 'Country\Region': 'Country'},
title=f'Death rate: top-20 affected countries on 2020-07-17',
text='mortality',
height=800,
             orientation='h') # horizontal
fig.show()
# TODO: color the bars with a heat-map scale (using the mortality rate)
# To do this, add the argument color='mortality'
###Output
_____no_output_____
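###Markdown
The alternative mentioned above, averaging the mortality rate over a country's regions instead of keeping the maximum, could look like this (a sketch recomputed from the raw rows, before duplicates were dropped):
###Code
# Per-row mortality, then the mean across each country's regions
df_mort_all = data.query('(Date == "2020-07-17") & (Confirmed > 100)').copy()
df_mort_all['mortality'] = df_mort_all['Deaths'] / df_mort_all['Confirmed']
df_mort_avg = (df_mort_all.groupby('Country/Region', as_index=False)['mortality']
                          .mean()
                          .sort_values('mortality', ascending=False))
df_mort_avg.head(20)
###Output
_____no_output_____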
###Markdown
**Heat Maps** quite useful for additional visualization of correlation matrices between features. When there are a lot of features, with the help of such a graph you can more quickly assess which features are highly correlated or do not have a linear relationship.
###Code
# Heat map
sns.heatmap(data.corr(), annot=True, fmt='.2f', cmap='cividis'); # try another color, e.g.'RdBu'
###Output
_____no_output_____
###Markdown
The scatter plot helps to find the relationship between the two indicators. To do this, you can use pairplot, which will immediately display a histogram for each variable and a scatter plot for two variables (along different plot axes).
###Code
# Pairplot
sns_plot = sns.pairplot(data[['Deaths', 'Confirmed']])
sns_plot.savefig('pairplot.png') # save
###Output
_____no_output_____
###Markdown
**Pivot table** can automatically sort and aggregate your data.
###Code
# Pivot table
plt.figure(figsize=(12, 4))
df_new = df_mortality.iloc[:10]
df_new['Confirmed'] = df_new['Confirmed'].astype(int)
df_new['binned_fatalities'] = pd.cut(df_new['Deaths'], 3)
platform_genre_sales = df_new.pivot_table(
index='binned_fatalities',
columns='Country/Region',
values='Confirmed',
    aggfunc=sum).fillna(0).applymap(int)
sns.heatmap(platform_genre_sales, annot=True, fmt=".1f", linewidths=0.7, cmap="viridis");
# Geo
# file with abbreviations
with open('./data/countries_codes.pkl', 'rb') as file:
countries_codes = pickle.load(file)
df_map = data.copy()
df_map['Date'] = data['Date'].astype(str)
df_map = df_map.groupby(['Date','Country/Region'], as_index=False)[['Confirmed','Deaths']].sum()
df_map['iso_alpha'] = df_map['Country/Region'].map(countries_codes)
df_map['ln(Confirmed)'] = np.log(df_map.Confirmed + 1)
df_map['ln(Deaths)'] = np.log(df_map.Deaths + 1)
px.choropleth(df_map,
locations="iso_alpha",
color="ln(Confirmed)",
hover_name="Country/Region",
hover_data=["Confirmed"],
animation_frame="Date",
color_continuous_scale=px.colors.sequential.OrRd,
title = 'Total Confirmed Cases growth (Logarithmic Scale)')
###Output
_____no_output_____ |
demo/verifai_example.ipynb | ###Markdown
Falsification with VerifAIPlease ensure that the CARLA simulator is up and running on port 2000 before running the falsifier below. For more information, visit the [CARLA website](https://carla.org/). Also, be sure to install all the required dependencies by running `install.sh` from this directory.
###Code
%load_ext autoreload
%autoreload 2
import time
import numpy as np
from dotmap import DotMap
from verifai.samplers.scenic_sampler import ScenicSampler
from verifai.scenic_server import ScenicServer
from verifai.falsifier import generic_falsifier
from verifai.monitor import specification_monitor
from verifai.falsifier import generic_falsifier
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
# The specification must assume multi_objective_monitor class
class confidence_spec(specification_monitor):
def __init__(self):
def specification(traj):
return 1
super().__init__(specification)
def test_driving_dynamic():
path = 'carlaChallenge1.scenic'
sampler = ScenicSampler.fromScenario(path)
falsifier_params = DotMap(
n_iters=5,
save_error_table=True,
save_safe_table=True,
)
server_options = DotMap(maxSteps=200, verbosity=0)
monitor = confidence_spec()
falsifier = generic_falsifier(sampler=sampler,
falsifier_params=falsifier_params,
server_class=ScenicServer,
server_options=server_options,
monitor=monitor)
t0 = time.time()
falsifier.run_falsifier()
t = time.time() - t0
print(f'Generated {len(falsifier.samples)} samples in {t} seconds with 1 worker')
print(f'Number of counterexamples: {len(falsifier.error_table.table)}')
return falsifier
falsifier = test_driving_dynamic()
df = pd.concat([falsifier.safe_table.table, falsifier.error_table.table])
plt.scatter(df['point.objects.object0.position[0]'], df['point.objects.object1.position[0]'], c=df['rho'] < 0);
###Output
_____no_output_____ |
_docs/nbs/T929652-Efficient-Frontier.ipynb | ###Markdown
Efficient Frontier **Scenario: Portfolio optimization**> The efficient frontier comes from Nobel Prize-winning portfolio theory about earning higher returns on your investments.Let's say you have \$10'000 of cash available and you are interested in investing it for one year. Like any rational investor, you expect the final amount in a year's time to be higher than the $10'000 you put in.There are many investment options available, such as buying a T-bill or company shares. Some options are riskier than others precisely because they hold out the prospect of higher returns, so the point to note is that there is a risk-return trade-off.If we buy a number of assets, such as shares of different companies, the total risk of the portfolio can be reduced thanks to diversification. This means an investor can reduce total risk and increase return by choosing different assets in different proportions within a portfolio, because the assets are correlated with each other to different degrees.Since the allocations (weights) of the assets change the risk of the portfolio, we can generate thousands of portfolios at random, each with a different set of weights.As we increase the number of portfolios, we get closer to the true optimum portfolio. This is the brute-force approach: it can be time-consuming, and there is no guarantee that we will find the right allocations. Setup
###Code
!wget -q --show-progress https://github.com/rian-dolphin/Efficient-Frontier-Python/raw/main/daily_returns.csv
import pandas as pd
import numpy as np
from tqdm import tqdm
import plotly
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.express as px
import plotly.figure_factory as ff
daily_returns = pd.read_csv('daily_returns.csv', index_col=0)
daily_returns.head()
#-- Get annualised mean returns
mus = (1+daily_returns.mean())**252 - 1
#-- Get covariances
#- Multiply by 252 to annualise it (square root time for volatility but no square root for variance)
#- Note: 252 trading days in a year
#- https://quant.stackexchange.com/questions/4753/annualized-covariance
cov = daily_returns.cov()*252
###Output
_____no_output_____
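###Markdown
Before generating random portfolios, here is the arithmetic we will apply to each of them, shown on a toy 2-asset example (all numbers made up for illustration): the expected return is the weighted sum of asset returns, and the variance is the quadratic form of the weights with the covariance matrix.
###Code
# Toy illustration of the portfolio formulas used in the loops below
toy_mu = np.array([0.08, 0.12])                  # hypothetical annualised mean returns
toy_cov = np.array([[0.04, 0.01],
                    [0.01, 0.09]])               # hypothetical annualised covariance matrix
w = np.array([0.6, 0.4])                         # portfolio weights, summing to 1
toy_return = w @ toy_mu                          # expected portfolio return
toy_variance = w @ toy_cov @ w                   # portfolio variance (w' * Cov * w)
print(toy_return, toy_variance, toy_variance**0.5)
###Output
_____no_output_____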
###Markdown
Create Random Portfolios
###Code
#- How many assets to include in each portfolio
n_assets = 5
#-- How many portfolios to generate
n_portfolios = 1000
#-- Initialize empty list to store mean-variance pairs for plotting
mean_variance_pairs = []
np.random.seed(75)
#-- Loop through and generate lots of random portfolios
for i in range(n_portfolios):
#- Choose assets randomly without replacement
assets = np.random.choice(list(daily_returns.columns), n_assets, replace=False)
#- Choose weights randomly
weights = np.random.rand(n_assets)
#- Ensure weights sum to 1
weights = weights/sum(weights)
#-- Loop over asset pairs and compute portfolio return and variance
#- https://quant.stackexchange.com/questions/43442/portfolio-variance-explanation-for-equation-investments-by-zvi-bodie
portfolio_E_Variance = 0
portfolio_E_Return = 0
for i in range(len(assets)):
portfolio_E_Return += weights[i] * mus.loc[assets[i]]
for j in range(len(assets)):
#-- Add variance/covariance for each asset pair
#- Note that when i==j this adds the variance
portfolio_E_Variance += weights[i] * weights[j] * cov.loc[assets[i], assets[j]]
#-- Add the mean/variance pairs to a list for plotting
mean_variance_pairs.append([portfolio_E_Return, portfolio_E_Variance])
#-- Plot the risk vs. return of randomly generated portfolios
#-- Convert the list from before into an array for easy plotting
mean_variance_pairs = np.array(mean_variance_pairs)
risk_free_rate=0 #-- Include risk free rate here
fig = go.Figure()
fig.add_trace(go.Scatter(x=mean_variance_pairs[:,1]**0.5, y=mean_variance_pairs[:,0],
marker=dict(color=(mean_variance_pairs[:,0]-risk_free_rate)/(mean_variance_pairs[:,1]**0.5),
showscale=True,
size=7,
line=dict(width=1),
colorscale="RdBu",
colorbar=dict(title="Sharpe<br>Ratio")
),
mode='markers'))
fig.update_layout(template='plotly_white',
xaxis=dict(title='Annualised Risk (Volatility)'),
yaxis=dict(title='Annualised Return'),
title='Sample of Random Portfolios',
width=850,
height=500)
fig.update_xaxes(range=[0.18, 0.32])
fig.update_yaxes(range=[0.02,0.27])
fig.update_layout(coloraxis_colorbar=dict(title="Sharpe Ratio"))
###Output
_____no_output_____
###Markdown
Sample only from efficient frontier
###Code
#-- Create random portfolio weights and indexes
#- How many assets in the portfolio
n_assets = 5
mean_variance_pairs = []
weights_list=[]
tickers_list=[]
for i in tqdm(range(10000)):
next_i = False
while True:
#- Choose assets randomly without replacement
assets = np.random.choice(list(daily_returns.columns), n_assets, replace=False)
#- Choose weights randomly ensuring they sum to one
weights = np.random.rand(n_assets)
weights = weights/sum(weights)
#-- Loop over asset pairs and compute portfolio return and variance
portfolio_E_Variance = 0
portfolio_E_Return = 0
for i in range(len(assets)):
portfolio_E_Return += weights[i] * mus.loc[assets[i]]
for j in range(len(assets)):
portfolio_E_Variance += weights[i] * weights[j] * cov.loc[assets[i], assets[j]]
#-- Skip over dominated portfolios
for R,V in mean_variance_pairs:
if (R > portfolio_E_Return) & (V < portfolio_E_Variance):
next_i = True
break
if next_i:
break
#-- Add the mean/variance pairs to a list for plotting
mean_variance_pairs.append([portfolio_E_Return, portfolio_E_Variance])
weights_list.append(weights)
tickers_list.append(assets)
break
len(mean_variance_pairs)
###Output
_____no_output_____
###Markdown
If we plot the risk and return for each of the portfolios on a chart then we will see an arch line at the top of the portfolios.
###Code
#-- Plot the risk vs. return of randomly generated portfolios
#-- Convert the list from before into an array for easy plotting
mean_variance_pairs = np.array(mean_variance_pairs)
risk_free_rate=0 #-- Include risk free rate here
fig = go.Figure()
fig.add_trace(go.Scatter(x=mean_variance_pairs[:,1]**0.5, y=mean_variance_pairs[:,0],
marker=dict(color=(mean_variance_pairs[:,0]-risk_free_rate)/(mean_variance_pairs[:,1]**0.5),
showscale=True,
size=7,
line=dict(width=1),
colorscale="RdBu",
colorbar=dict(title="Sharpe<br>Ratio")
),
mode='markers',
text=[str(np.array(tickers_list[i])) + "<br>" + str(np.array(weights_list[i]).round(2)) for i in range(len(tickers_list))]))
fig.update_layout(template='plotly_white',
xaxis=dict(title='Annualised Risk (Volatility)'),
yaxis=dict(title='Annualised Return'),
title='Sample of Random Portfolios',
width=850,
height=500)
fig.update_xaxes(range=[0.18, 0.35])
fig.update_yaxes(range=[0.05,0.29])
fig.update_layout(coloraxis_colorbar=dict(title="Sharpe Ratio"))
###Output
_____no_output_____ |
ex3_polymer_modeling.ipynb | ###Markdown
Polymer Data Huan Tran ([email protected])Ramprasad Research Group, Georgia Institute of Technology This notebook provides scripts used to train four datasets provided by Polymer Genome (PG, https://www.polymergenome.org/), including the (DFT) HSE band gap, the (electronic and ionic) dielectric constants, and the atomization energy of about 380 organic polymers. Polymers crystals in each dataset were fingerprinted at 3 levels of atomic-level fingerprints, i.e., singles, doubles, and triples (from lower to higher length scales). Three fingerprinted datasets are then named as "**fp_aS.csv**", "**fp_aD.csv**", and "**fp_aT.csv**". Given the nature of the physical properties demonstrated here, these atomic-level fingerprints (described below) are sufficient, no need to go use fingerprints of longer length scales. Data files are in csv format in which each polymer is in a line. The *first column* is for polymer id, the *next four columns* are for atomization energy, band gap, electronic, and ionic dielectric constants, and the *remaining columns* are for fingerprints. Details on the data curation are given in *[Huan et el, Sci. Data **3**, 160012 (2016)]*. This includes how to get polymers crystal structures, what level of DFT used for computations, to what extent the data is validated, and so on. The atomic fragment-based fingerprints are described in *[Huan et al., Phys. Rev. B **92**, 014106 (2015)]*, in which the motivation, the definition, and the relations of the fingerprints are described. In short, these fingerprints capture how many fragments of a given type that show up in the polymer. In the figure below, C2 is **a single**, representing a C atom with two bonds, and O2 is another **single** representing an O atom with two bonds. A **double** or **triple** contains two or three singles in a given order. From the chemistry, more information can be readily extracted. For example, two bonds of a C2 must be double bonds while for C3, two are single bonds and the other is a double bond. The scripts provided below will train the Gaussian Process Regression (GPR) models on the provided data. GPR is used in PG for some reasons, i.e., it is quite intuitive and a measure of uncertainty can be obtained. One needs to update the data file name in this notebook to make the model of interest. In general, it is expected that models based on higher levels of atomic fingerprints will be better than those based on lower levels. Materials used for this hackathon are the results of some polymer-related projects pursued in our research group, led by Prof. Rampi Ramprasad at Georgia Institute of Technology. Of these projects, Polymer Genome aims at developing an informatics platform for polymers predictions and design. Data, the most essential component of PG, is currently in a significantly expanding phase with supports from Toyota Research Institute. *NOTICE: All information contained herein is, and remains the property of Georgia Tech Research Corporation and its sponsors, if any. The intellectual and technical concepts contained herein are proprietary to Georgia Tech Research Corporation and its sponsors and may be covered by U.S. and Foreign Patents, patents in process, and are protected by trade secret or copyright law. Dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from Georgia Tech Research Corporation.*
###Code
# Some necessary modules are loaded
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import mean_squared_error
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import WhiteKernel, RBF
import matplotlib.pyplot as plt
# Reading data from the data file
df = pd.read_csv('dataset/fp_aT.csv', delimiter=',', header=0)
data_tot = np.array(df)
# Select property
prop_sel = "band gap"
if prop_sel == "band gap":
data_sel = np.delete(data_tot,[0,2,3,4],axis=1)
elif prop_sel == "atomization energy":
data_sel = np.delete(data_tot,[0,1,3,4],axis=1)
elif prop_sel == "electronic dielectric":
data_sel = np.delete(data_tot,[0,1,2,4],axis=1)
elif prop_sel == "ionic dielectric":
data_sel = np.delete(data_tot,[0,1,2,3],axis=1)
# Remove NaN data
data_sel_nonan = data_sel[~np.isnan(data_sel).any(axis=1)]
# X (fingerprint) and Y (property) of the polymers
X = data_sel_nonan[:,1:]
Y = data_sel_nonan[:,0]
#print (np.shape(data_sel))
#print (np.shape(data_sel_nonan))
# Split the data into training and test sets
test_size = 0.20
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = test_size, random_state=1)
# Some initial parameters to determine the hyperparameters
Y_average = np.average(Y)
noise_avr = np.std(Y)
noise_lb = noise_avr/10
noise_ub = noise_avr*10
n_fold = 5
# The prior of the GPR model
kernel = (Y_average)**2*RBF(length_scale=1)+WhiteKernel(noise_level=noise_avr**2,noise_level_bounds=(noise_lb**2, noise_ub**2))
gp = GaussianProcessRegressor(kernel=kernel, alpha=0, n_restarts_optimizer=5)
# Now training the GPR model
opt_gp = gp
opt_rmse = 1.0E20
ncv = 0
ncv_opt = ncv
# Training set splitted into n_fold subsets
kf_ = KFold(n_splits=n_fold, shuffle = True)
kf = kf_.split(Y_train)
# Loop for the best kernal
for train, test in kf:
X_cv_train = X_train[train]
X_cv_test = X_train[test]
Y_cv_train = Y_train[train]
Y_cv_test = Y_train[test]
gp = GaussianProcessRegressor(kernel=kernel, alpha=0, n_restarts_optimizer=10)
gp.fit(X_cv_train, Y_cv_train)
y_cv_train = gp.predict(X_cv_train, return_std=False)
y_cv_test = gp.predict(X_cv_test, return_std=False)
rmse_cv_train = np.sqrt(mean_squared_error(Y_cv_train, y_cv_train))
rmse_cv_test = np.sqrt(mean_squared_error(Y_cv_test, y_cv_test))
print(' ncv, rmse_train, rmse_test: ', ncv, rmse_cv_train, rmse_cv_test)
if rmse_cv_test < opt_rmse:
opt_rmse = rmse_cv_test
opt_gp = gp
ncv_opt = ncv
ncv = ncv + 1
print(' Optimal ncv: ', ncv_opt, "; optimal kernel saved.")
# Come back to the initial training and sets
X_train_final = X_train
X_test_final = X_test
# Take the optimal kernel (hyperparameters) to "train" the model on the initial training set
gp_final = GaussianProcessRegressor(kernel=opt_gp.kernel_, alpha=0, optimizer=None)
gp_final.fit(X_train_final, Y_train)
# Make predictions
y_train = gp_final.predict(X_train_final, return_std=False)
y_test = gp_final.predict(X_test_final, return_std=False)
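# GPR also provides a per-sample uncertainty estimate: with return_std=True the same predict call
# returns the predictive standard deviation (shown here as an illustrative check)
y_test_mean, y_test_std = gp_final.predict(X_test_final, return_std=True)
print("Average predictive standard deviation on the test set:", np.mean(y_test_std))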
# Error measures
rmse_train = np.sqrt(mean_squared_error(Y_train, y_train))
rmse_test = np.sqrt(mean_squared_error(Y_test, y_test))
R2_train_ = gp_final.score(X_train_final, Y_train)
R2_test_ = gp_final.score(X_test_final, Y_test)
# Three optimal hyperparameters can be obtained by the following lines
print ("k1.k1.constant_value = " + str(gp_final.kernel_.k1.k1.constant_value))
print ("k2.noise_level = " + str(gp_final.kernel_.k2.noise_level))
print ("k2.k2.length_scale = " + str(gp_final.kernel_.k1.k2.length_scale))
# Visualize the prediction
train_size = 1.0-test_size
label_train = 'Train: size = ' + str(train_size) +'; R2 = ' + str('%.3f' % R2_train_) + '; rmse = ' + str(
'%.3f' % rmse_train)
label_test = 'Test: size = ' + str(test_size) + '; R2 = ' + str('%.3f' % R2_test_) + '; rmse = ' + str(
'%.3f' % rmse_test)
plt.figure(figsize=(8, 8))
plt.rc('xtick', labelsize=15)
plt.rc('ytick', labelsize=15)
lim_min = min(min(Y_train), min(Y_test), min(y_train), min(y_test))
lim_max = max(max(Y_train), max(Y_test), max(y_train), max(y_test))
lim = [lim_min - (lim_max - lim_min) * 0.1, lim_max + (lim_max - lim_min) * 0.1]
plt.xlim(lim)
plt.ylim(lim)
plt.text(lim_min + (lim_max - lim_min) * 0.4, lim_min + (lim_max - lim_min) * 0.1, label_train)
plt.text(lim_min + (lim_max - lim_min) * 0.4, lim_min + (lim_max - lim_min) * 0.05, label_test)
if prop_sel == "band gap":
plt.xlabel("Computed band gap (eV)", size=17)
plt.ylabel("Predicted band gap (eV)", size=17)
elif prop_sel == "atomization energy":
plt.xlabel("Computed atomization energy (eV/atom)", size=17)
plt.ylabel("Predicted atomization energy (eV/atom)", size=17)
elif prop_sel == "electronic dielectric":
plt.xlabel("Computed electronic dielectric constant", size=17)
plt.ylabel("Predicted electronic dielectric constant", size=17)
elif prop_sel == "ionic dielectric":
plt.xlabel("Computed ionic dielectric constant", size=17)
plt.ylabel("Predicted ionic dielectric constant", size=17)
plots_ = list()
plot_train = plt.scatter(Y_train, y_train, marker='o', label="train set")
plots_.append(plot_train)
plot_test = plt.scatter(Y_test, y_test, marker='s', label="test set")
plots_.append(plot_test)
#show the plot
plt.show()
###Output
_____no_output_____ |
tasks/task_01_cross_sections/4_Doppler_broadening.ipynb | ###Markdown
Part 4 - Plotting Doppler broadened cross sectionsInteraction cross sections are affected by the temperature of the target atom. The relative motion of the target can result in the target moving towards or away from the incident particle causing them to collide with different energies.This python notebook allows users to plot neutron interaction cross sections using OpenMC taking Doppler broadening into account.
###Code
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/mkl1mVnTO6g" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
###Output
/home/jshim/anaconda3/lib/python3.8/site-packages/IPython/core/display.py:717: UserWarning: Consider using IPython.display.IFrame instead
warnings.warn("Consider using IPython.display.IFrame instead")
###Markdown
This code block plots the total neutron cross section for Tungsten-186 at 3 different temperatures.
###Code
from plotting_utils import create_temperature_plot_for_isotope
create_temperature_plot_for_isotope(
isotope='W186',
temperatures=[300, 700, 1000], # temperatures in Kelvin
reaction='(n,total)'
)
###Output
_____no_output_____
###Markdown
Zoom in on one of the spectral peaks to observe how increasing temperature causes Doppler broadening. The next code block plots the total neutron cross section for Iron-56 at 2 different temperatures for a specified energy range that captures a particular resonance. Doppler broadening of the resonance peak should be observed.
###Code
create_temperature_plot_for_isotope(
isotope='Fe56',
temperatures=[300, 1000], # temperatures in Kelvin
reaction='(n,total)',
min_energy=1100,
max_energy=1200
)
###Output
_____no_output_____ |
fraud-detection-solution-autoencoders-in-keras.ipynb | ###Markdown
Reading in data
###Code
data = pd.read_csv('creditcard.csv')
data.head()
###Output
_____no_output_____
###Markdown
Exploring in data
###Code
print(data.shape)
print(data.columns)
###Output
(284807, 31)
Index(['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10',
'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20',
'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount',
'Class'],
dtype='object')
###Markdown
31 columns, 2 of which are Time and Amount. The rest are outputs from the PCA transformation. Let's check for missing values
###Code
data.isnull().sum().any()
data.Class.value_counts().rename(index = {0:'Not Fraud', 1:'Fraud'})
###Output
_____no_output_____
###Markdown
Out of 285k transactions, just 492 were labelled as fraudulent. That is a small percentage, but it may represent billions of dollars of lost revenue each year. The PCA applied to the dataset put those features into standard-normal form, so I will do the same to the 'Time' and 'Amount' columns
###Code
data['Time'] = StandardScaler().fit_transform(data['Time'].values.reshape(-1, 1))
data['Amount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
###Output
_____no_output_____
###Markdown
Now we split the data into training and test sets. To evaluate the performance of our model, we will train it on the legitimate transactions only, reserving the true class labels of the test set for evaluation.
###Code
train_x, test_x = train_test_split(data,test_size = 0.3,random_state=42)
train_x = train_x[train_x.Class == 0]
train_x = train_x.drop(['Class'], axis=1)
test_y = test_x['Class']
test_x = test_x.drop(['Class'], axis=1)
###Output
_____no_output_____
###Markdown
Our autoencoder uses 4 Dense (fully connected) layers with 14, 7, 7 and 30 neurons respectively. The first two layers form the encoder, the last two the decoder.
###Code
input_dim = train_x.shape[1]
encoding_dim = int(input_dim / 2) - 1
hidden_dim = int(encoding_dim / 2)
learning_rate = 1e-7
input_layer = Input(shape=(input_dim, ))
encoder = Dense(encoding_dim, activation="tanh", activity_regularizer=regularizers.l1(learning_rate))(input_layer)
encoder = Dense(hidden_dim, activation="relu")(encoder)
decoder = Dense(hidden_dim, activation='tanh')(encoder)
decoder = Dense(input_dim, activation='relu')(decoder)
autoencoder = Model(inputs=input_layer, outputs=decoder)
###Output
_____no_output_____
###Markdown
We will train our model for 100 epochs with a batch size of 128 samples.
###Code
nb_epoch = 100
batch_size = 128
###Output
_____no_output_____
###Markdown
We will use Model Checkpoint to save the best model and TensorBoard for graph visualization
###Code
autoencoder.compile(metrics=['accuracy'],
loss='mean_squared_error',
optimizer='adam')
cp = ModelCheckpoint(filepath="autoencoder_fraud.h5",
save_best_only=True,
verbose=0)
tb = TensorBoard(log_dir='./logs',
histogram_freq=0,
write_graph=True,
write_images=True)
history = autoencoder.fit(train_x, train_x,
epochs=nb_epoch,
batch_size=batch_size,
shuffle=True,
validation_data=(test_x, test_x),
verbose=1,
callbacks=[cp, tb]).history
autoencoder = load_model('autoencoder_fraud.h5')
###Output
_____no_output_____
###Markdown
Model Visualization
###Code
plt.plot(history['loss'], linewidth=2, label='Train')
plt.plot(history['val_loss'], linewidth=2, label='Test')
plt.legend(loc='upper right')
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.show()
###Output
_____no_output_____
###Markdown
It seems our model works nicely; now we will make the predictions.
###Code
pred = autoencoder.predict(test_x)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
mse = np.mean(np.power(test_x - pred, 2), axis=1)
error_df = pd.DataFrame({'Reconstruction_error': mse,
'True_class': test_y})
error_df.Reconstruction_error.values
###Output
_____no_output_____
###Markdown
We will use a threshold on the reconstruction error to separate fraudulent transactions from legitimate ones
###Code
threshold_fixed = 5
pred_y = [1 if e > threshold_fixed else 0 for e in error_df.Reconstruction_error.values]
matrix = confusion_matrix(error_df.True_class, pred_y)
# sklearn's confusion_matrix layout is [[TN, FP], [FN, TP]]
tneg = matrix[0][0]
tpos = matrix[1][1]
fpos = matrix[0][1]
fneg = matrix[1][0]
print('Accuracy: ' + str(np.round(100*float(tpos+tneg)/float(tpos+tneg+fpos+fneg), 2)) + '%')
print( 'Cohen Kappa: '+ str(np.round(cohen_kappa_score(error_df.True_class, pred_y),3)))
print("Sensitivity/Recall for Model : {}".format(round(recall_score(error_df.True_class, pred_y), 2)))
print("F1 Score for Model : {}".format(round(f1_score(error_df.True_class, pred_y), 2)))
weights = autoencoder.get_weights()
weights
autoencoder.set_weights(weights)
type(weights)
from phe import paillier
# Exploratory: generate a Paillier keypair (which could be used to homomorphically encrypt the weights above)
public_key, private_key = paillier.generate_paillier_keypair()
###Output
_____no_output_____ |
JOCSON_Assignment4.ipynb | ###Markdown
###Code
###Output
_____no_output_____ |
Prototype2_CharBasedLSTM/CharacterBasedLSTM.ipynb | ###Markdown
**Import**
###Code
!pip install keras-rl
!pip install music21
from google.colab import files
import numpy as np
from music21 import stream, converter, instrument, note, chord
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.layers import CuDNNLSTM
from keras.layers import Activation
from keras.utils import np_utils
from keras.callbacks import ModelCheckpoint
###Output
_____no_output_____
###Markdown
**Data Import**
###Code
uploaded = files.upload()
fileNames = [];
for fn in uploaded.keys():
fileNames.append(fn)
###Output
_____no_output_____
###Markdown
**Data Processing**
###Code
'''
Parse all notes & chords in songs into strings
'''
notes = []
for file in fileNames:
midi = converter.parse(file)
print("Parsing {}".format(file))
notes_to_parse = None
try:
# file has instrument
s2 = instrument.partitionByInstrument(midi)
notes_to_parse = s2.parts[0].recurse()
except: # file has notes in a flat structure
notes_to_parse = midi.flat.notes
for element in notes_to_parse:
if isinstance(element, note.Note):
notes.append(str(element.pitch))
elif isinstance(element, chord.Chord):
notes.append('.'.join(str(n) for n in element.normalOrder))
print(notes)
'''
Create dictionary for notes
'''
pitchnames = sorted(set(item for item in notes))
dictionary = dict((note, number) for number,note in enumerate(pitchnames))
print(dictionary)
'''
Prepare training data
'''
WINDOW = 100
X = []
Y = []
for i in range(0, len(notes) - WINDOW, 1):
x = notes[i:i + WINDOW]
y = notes[i + WINDOW]
X.append([dictionary[c] for c in x])
Y.append(dictionary[y])
dataSetSize = len(X)
X = np.reshape(X, (dataSetSize, WINDOW, 1))
X = X / float(len(dictionary))
print(Y)
Y = np_utils.to_categorical(Y)
print(X.shape)
print(Y.shape)
###Output
_____no_output_____
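###Markdown
The sliding-window preparation above, illustrated on a toy sequence (purely for intuition):
###Code
# Predict the next symbol from the previous 3 symbols
toy_notes = ['C', 'D', 'E', 'F', 'G', 'A']
toy_window = 3
for i in range(len(toy_notes) - toy_window):
    print(toy_notes[i:i + toy_window], '->', toy_notes[i + toy_window])
###Output
_____no_output_____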
###Markdown
**Model Definition**
###Code
model = Sequential()
model.add(CuDNNLSTM(512, input_shape=(WINDOW, 1), return_sequences=True))
model.add(Dropout(0.3))
model.add(CuDNNLSTM(512))
model.add(Dense(256))
model.add(Dropout(0.3))
model.add(Dense(len(dictionary)))
model.add(Activation('softmax'))
model.summary()
###Output
_____no_output_____
###Markdown
**Model Training**
###Code
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
weights_filename = 'model_weights.h5f'
checkpoint = ModelCheckpoint(
weights_filename,
monitor='loss',
verbose=0
)
callbacks = [checkpoint]
model.fit(
X,
Y,
epochs=200,
batch_size=200,
callbacks=callbacks
)
###Output
_____no_output_____
###Markdown
**Music Generation**
###Code
'''
Generate song
'''
songlength = 500
seed = np.random.randint(0, len(X) - 1)
reverse_dictionary = dict((number, note) for number, note in enumerate(pitchnames))
currentSequence = X[seed][:]
generatedSong = []
for i in range(songlength):
x = np.reshape(currentSequence, (1, len(currentSequence,), 1))
x = x / float(len(dictionary))
p = model.predict(x, verbose=0)
index = np.argmax(p)
result = reverse_dictionary[index]
generatedSong.append(result)
currentSequence = np.append(currentSequence, index)
currentSequence = currentSequence[1 : len(currentSequence)]
'''
Convert to midi
'''
offset = 0
output_notes = []
for sequence in generatedSong:
#if sequence is chord
if('.' in sequence) or sequence.isdigit():
notes_in_chord = sequence.split('.')
notes = []
for n in notes_in_chord:
new_n = note.Note(int(n))
new_n.storedInstrument = instrument.Piano()
notes.append(new_n)
new_chord = chord.Chord(notes)
new_chord.offset = offset
output_notes.append(new_chord)
#if sequence is note
else:
new_n = note.Note(sequence)
new_n.offset = offset
new_n.storedInstrument = instrument.Piano()
output_notes.append(new_n)
offset += 0.5
midi_stream = stream.Stream(output_notes)
midi_stream.write('midi', fp='AI_ongaku.mid')
###Output
_____no_output_____ |
text-classification/jigsaw-toxic-comments/jigsaw-toxic-comments-challenge-autokeras-multilabel.ipynb | ###Markdown
Train a neural network using AutoKeras Set paths and other variables
###Code
train_input_file = "data/train.csv.zip"
BATCH_SIZE = 8  # It runs out of memory quite easily :/
%env TF_GPU_ALLOCATOR=cuda_malloc_async
###Output
_____no_output_____
###Markdown
Import libs
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import autokeras as ak
import keras_tuner as kt
tf.__version__
###Output
_____no_output_____
###Markdown
Load ground truth dataset
###Code
train_df = pd.read_csv(train_input_file, compression="zip")
train_df.columns
###Output
_____no_output_____
###Markdown
Split ground truth dataset into training, validation and test
###Code
train_df, test_df = train_test_split(train_df, test_size=0.1)
train_df, val_df = train_test_split(train_df, test_size=0.1)
train_df.shape, val_df.shape, test_df.shape
train_df[
["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
].values
###Output
_____no_output_____
###Markdown
Convert pandas dataframes into tensorflow datasets
###Code
train_set = tf.data.Dataset.from_tensor_slices(
(
(train_df.comment_text.values,),
(
train_df[
[
"toxic",
"severe_toxic",
"obscene",
"threat",
"insult",
"identity_hate",
]
].values
),
)
).batch(BATCH_SIZE)
val_set = tf.data.Dataset.from_tensor_slices(
(
(val_df.comment_text.values,),
(
val_df[
[
"toxic",
"severe_toxic",
"obscene",
"threat",
"insult",
"identity_hate",
]
].values
),
)
).batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Train AutoKeras AutoML model Init AutoKeras text classifier model
###Code
clf = ak.TextClassifier(
overwrite=False, # True,
multi_label=True,
max_trials=10,
metrics=[tf.keras.metrics.AUC()],
)
###Output
_____no_output_____
###Markdown
Define earlystop to stop training if it does not improve anymore
###Code
earlystop = tf.keras.callbacks.EarlyStopping(
monitor="val_loss",
min_delta=0,
patience=0,
verbose=0,
mode="auto",
restore_best_weights=True,
)
%env TF_GPU_ALLOCATOR=cuda_malloc_async
###Output
_____no_output_____
###Markdown
Start training a text classifier using AutoKeras AutoML
###Code
clf.fit(
train_set,
validation_data=val_set,
epochs=10,
batch_size=BATCH_SIZE,
callbacks=[earlystop],
verbose=1,
)
# Display the best model architecture
clf.export_model().summary()
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
model = clf.export_model()
y_test = test_df[
["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
].values
test_set = tf.data.Dataset.from_tensor_slices(
(
(test_df.comment_text.values,),
(
test_df[
[
"toxic",
"severe_toxic",
"obscene",
"threat",
"insult",
"identity_hate",
]
].values,
),
)
).batch(BATCH_SIZE)
predicted_y = model.predict(test_df.comment_text.values)
roc_auc_score(
test_df[
["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
].values,
predicted_y,
)
model.evaluate(test_set)
model.evaluate(val_set)
model.summary()
###Output
_____no_output_____
###Markdown
Predict unseen labels (for the Kaggle competition) Load the actual test data
###Code
real_test_df = pd.read_csv("data/test.csv.zip", compression="zip")
###Output
_____no_output_____
###Markdown
Predict unseen samples
###Code
real_test_pred = model.predict(real_test_df.comment_text)
###Output
_____no_output_____
###Markdown
Combine predictions with sample ids to store result file in a csv
###Code
predictions_df = pd.DataFrame(
real_test_pred,
columns=["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"],
)
predictions_df["id"] = real_test_df["id"]
predictions_df = predictions_df[
["id", "toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
]
# Predictions output looks like:
predictions_df.head()
# Store prediction to be submitted to Kaggle
predictions_df.to_csv("data/autokeras_predictions.csv", index=False)
###Output
_____no_output_____ |
Face recognition.ipynb | ###Markdown
Face & Eye Detection using HAAR Cascade Classifiers Introduction :
###Code
# This work sample walks through face and eye detection on a test image.
# This is my third beginner project in computer vision, where I detect the face and eyes of a person in a sample image.
# The sample image is in color and I have converted it to grayscale because of the sheer complexity of coding and processing color images.
# Grayscale images are much easier and faster to process than color images.
# Hope you all will like it!
###Output
_____no_output_____
###Markdown
Importing the packages
###Code
#Importing the packages
import numpy as np
import cv2
###Output
_____no_output_____
###Markdown
Importing the classifier file
###Code
# We point OpenCV's CascadeClassifier function to where our classifier (XML file format) is stored.
# Uploading the classifier file.
face_classifier = cv2.CascadeClassifier('C:\\Users\\Home\\Documents\\New folder\\ML-DL-NLP-Tableau\\OpenCV\\haarcascades\\haarcascade_frontalface_alt2.xml')
###Output
_____no_output_____
###Markdown
Importing the sample test image
###Code
# Uploading our sample image.
image = cv2.imread('C:\\Users\\Home\\Documents\\New folder\\ML-DL-NLP-Tableau\\OpenCV\\testface.jpg')
# Now converting our sample image to grayscale.
# We are converting RGB image to grayscale so as to reduce the complexity of code and image processing as the gray scale images are much easier to process as compared to colored images.
# It is also important for learning image processing
# As it's better to understand grayscale processing first and understand how it applies to multichannel processing rather than starting with full color imaging and missing all the important insights that can be learned from single channel processing.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
###Output
_____no_output_____
###Markdown
Developing face recognition algorithm
###Code
# Our classifier returns the ROI (Region of Interest) of each detected face as a tuple.
# Each tuple stores the top-left coordinate plus the width and height of the detection: (x, y, w, h).
# Working with ROIs (regions of interest) restricts further processing to the detected face region, which improves accuracy and speed.
faces = face_classifier.detectMultiScale(gray, 1.3, 5)
# When no faces are detected, face_classifier returns an empty tuple
if len(faces) == 0:
print("No faces found")
# We iterate through our faces array and draw a rectangle
# over each face in faces
for (x,y,w,h) in faces:
cv2.rectangle(image, (x,y), (x+w,y+h), (127,0,255), 2)
cv2.imshow('Face Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Here I have not bound the window close to any specific key.
# We can destroy or close the window by pressing Enter, Esc, or any other key.
###Output
_____no_output_____
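###Markdown
Since each detection is an (x, y, w, h) box, the face region itself can also be cropped out of the image; a small sketch reusing the `faces` and `image` variables from the cell above:
###Code
# Crop and save each detected face ROI (note: the rectangles drawn above are part of the crop)
for i, (x, y, w, h) in enumerate(faces):
    roi = image[y:y+h, x:x+w]
    cv2.imwrite('face_{}.jpg'.format(i), roi)
    print('Saved face_{}.jpg with shape {}'.format(i, roi.shape))
###Output
_____no_output_____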
###Markdown
Let's combine face and eye detection Importing the classifier file
###Code
# We point OpenCV's CascadeClassifier function to where our classifier (XML file format) is stored
# Uploading the classifier file.
# In this we use two classifier files, one for facial recognition and the other for eye recognition.
face_classifier = cv2.CascadeClassifier('C:\\Users\\Home\\Documents\\New folder\\ML-DL-NLP-Tableau\\OpenCV\\haarcascades\\haarcascade_frontalface_alt2.xml')
eye_classifier = cv2.CascadeClassifier('C:\\Users\\Home\\Documents\\New folder\\ML-DL-NLP-Tableau\\OpenCV\\haarcascades\\haarcascade_eye.xml')
###Output
_____no_output_____
###Markdown
Importing the sample image and converting it to gray scale
###Code
# Again Uploading our sample image and converting our sample image to grayscale.
# We are converting RGB image to grayscale so as to reduce the complexity of code and image processing as the gray scale images are much easier to process as compared to colored images.
img = cv2.imread('C:\\Users\\Home\\Documents\\New folder\\ML-DL-NLP-Tableau\\OpenCV\\testface.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
###Output
_____no_output_____
###Markdown
Developing face & eye recognition algorithm
###Code
# Our classifier returns the ROI (Region of Interest) of each detected face as a tuple.
# Each tuple stores the top-left coordinate plus the width and height of the detection: (x, y, w, h).
# Working with ROIs (regions of interest) restricts further processing to the detected face region, which improves accuracy and speed.
faces = face_classifier.detectMultiScale(gray, 1.2, 4)
# When no faces are detected, face_classifier returns an empty tuple
if len(faces) == 0:
print("No Face Found")
# We iterate through our faces array and draw a rectangle
# over each face in faces
for (x,y,w,h) in faces:
cv2.rectangle(img,(x,y),(x+w,y+h),(127,0,255),2)
#cv2.imshow('img',img)
#cv2.waitKey(0)
roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w] # Color ROI of the face, used for drawing the eye rectangles in color
eyes = eye_classifier.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(255,255,0),2) # Now we iterate through eye array and draw a rectangle
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Here I have not bound the window close to any specific key.
# We can destroy or close the window by pressing Enter, Esc, or any other key.
#Disclaimer :-
#If for some reason only one eye is being detected, press C key once. This will detect both the eyes.
###Output
_____no_output_____
###Markdown
Conclusion :
###Code
# This is a beginner approach to detecting the face and eyes in a sample image, and I have tried to attain the best accuracy I could with it.
# Hope you all liked it!
# Any comments and suggestions are welcome.
# Thank you!
###Output
_____no_output_____ |
examples/DPUCADX8G/notebooks/image_classification_caffe.ipynb | ###Markdown
Image Classification with CaffeThis tutorial demonstrates the steps required to prepare and deploy a trained Caffe model for FPGA acceleration using Xilinx MLSuite: 1. **Quantize the model** - The quantizer will generate scaling parameters for quantizing floats INT8. This is required, because FPGAs will take advantage of Fixed Point Precision, to achieve more parallelization at lower power. 2. **Compile the Model** - In this step, the network Graph (prototxt) and the Weights (caffemodel) are compiled, the compiler 3. **Subgraph Cutting** - In this step, the original graph is cut, and a custom FPGA accelerated python layer is inserted to be used for Inference. 4. **Classification** - In this step, the caffe model and the prototxt from the previous step are run on the FPGA to perform inference on an input image. For command line versions see: examples/caffe/ Prerequisite Files1. **Model files** - This notebook requires that model files are located in `$VAI_ALVEO_ROOT/DPUCADX8G/caffe/models/`2. **Image files** - This notebook requires ilsvrc2012 image files are downloaded in `$HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min/` Setup (Before Running Notebook)**Note:** User is responsible for the use of the downloaded content and compliance with any copyright licenses.```conda activate vitis-ai-caffepython -m ck pull repo:ck-envpython -m ck install package:imagenet-2012-val-minpython -m ck install package:imagenet-2012-auxhead -n 500 $HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-aux/val.txt > $HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min/val_map.txtcd $VAI_ALVEO_ROOT/DPUCADX8G/caffepython resize.py $HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min 256 256python getModels.pysource /vitis_ai_home/setup/alveo/u200_u250/overlaybins/setup.shpython replace_mluser.py --modelsdir models``` Step 1. Import required packages
###Code
from __future__ import print_function
import os
import shutil
import subprocess
from IPython.display import Image as display
from ipywidgets import interact
import numpy as np
from caffe import Classifier, io
from caffe.proto import caffe_pb2
from caffe.draw import draw_net_to_file
from google.protobuf import text_format
# Environment Variables ("source /vitis_ai_home/setup/alveo/u200_u250/overlaybins/setup.sh")
VAI_ALVEO_ROOT = os.getenv("VAI_ALVEO_ROOT",os.getcwd()+"/..")
XCLBIN = "/opt/xilinx/overlaybins/xdnnv3"
print("Running w/ VAI_ALVEO_ROOT: %s" % VAI_ALVEO_ROOT)
print("Running w/ XCLBIN: %s" % XCLBIN)
# Bring in SubGraph Cutter
from decent import CaffeFrontend as xfdnnQuantizer
from vai.dpuv1.rt.scripts.framework.caffe.xfdnn_subgraph import CaffeCutter as xfdnnCutter
# Delete stale directories
if os.path.exists("quantize_results"):
shutil.rmtree("quantize_results")
if os.path.exists("work"):
shutil.rmtree("work")
###Output
_____no_output_____
###Markdown
Step 2. Choose a modelChoose a model using the drop down, or select custom, and enter your own.
###Code
@interact(MODEL=["bvlc_googlenet","inception_v2","inception_v3","inception_v4",\
"resnet50_v1","resnet50_v2","squeezenet","vgg16","custom"])
def selectModel(MODEL):
global prototxt
global caffemodel
global name
model_root = VAI_ALVEO_ROOT + "/DPUCADX8G/caffe/models/"
if MODEL == "custom":
prototxt = None
caffemodel = None
name = None
else:
prototxt = model_root + MODEL + "/" + MODEL + "_train_val.prototxt"
caffemodel = model_root + MODEL + "/" + MODEL + ".caffemodel"
name = MODEL
if not prototxt:
@interact(PROTOTXT="Provide the path to your prototxt")
def selectPrototxt(PROTOTXT):
global prototxt
prototxt = PROTOTXT
@interact(CAFFEMODEL="Provide the path to your caffemodel")
def selectCaffemodel(CAFFEMODEL):
global caffemodel
caffemodel = CAFFEMODEL
@interact(MODEL="Provide a name to your model")
def selectCaffemodel(MODEL):
global name
name = MODEL
print("Currently running : %s" % name)
print("Running with prototxt: %s" % prototxt)
print("Running with caffemodel: %s" % caffemodel)
###Output
_____no_output_____
###Markdown
Step 3. Run the QuantizerHere, we will quantize the model. The inputs are model prototxt, model weights, number of test iterations and calibration iterations. The output is quantized prototxt, weights, and quantize_info.txt and will be generated in the quantize_results/ directory.The Quantizer will generate a json file holding scaling parameters for quantizing floats to INT8This is required, because FPGAs will take advantage of Fixed Point Precision, to achieve accelerated inference
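To make the idea of the scaling parameters concrete, here is a minimal, framework-independent sketch of symmetric INT8 quantization in NumPy (an illustration only; it is not what `vai_q_caffe` does internally):
```python
import numpy as np

def quantize_int8(x):
    # Symmetric quantization: map the largest absolute value onto 127
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

weights = np.random.randn(4, 4).astype(np.float32)
q_weights, scale = quantize_int8(weights)
reconstructed = q_weights.astype(np.float32) * scale
print("max quantization error:", np.abs(weights - reconstructed).max())
```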
###Code
def Quantize(prototxt,caffemodel,calib_iter=1,output_dir="quantize_results"):
os.environ["DECENT_DEBUG"] = "1"
subprocess.call(["vai_q_caffe", "quantize",
"--model", prototxt,
"--weights", caffemodel,
"--calib_iter", str(calib_iter)])
Quantize(prototxt,caffemodel)
###Output
_____no_output_____
###Markdown
Step 4: Run the CompilerThe compiler takes in the quantizer outputs from the previous step (prototxt, weights, quantize_info) and outputs a compiler.json and quantizer.json.* A Network Graph (prototxt) and a Weights Blob (caffemodel) are compiled* The network is optimized* FPGA Instructions are generated
###Code
arch = "/opt/vitis_ai/compiler/arch/DPUCADX8G/ALVEO/arch.json" # Informs compiler what underlying hardware is capable of
def Compile(prototxt="quantize_results/deploy.prototxt",\
caffemodel="quantize_results/deploy.caffemodel",\
quantize_info="quantize_results/quantize_info.txt"):
subprocess.call(["vai_c_caffe",
"--prototxt", prototxt,
"--caffemodel", caffemodel,
"--net_name", name,
"--output_dir", "work",
"--arch", arch,
"--options", "{\"quant_cfgfile\":\"%s\"}" %(quantize_info)])
Compile()
###Output
_____no_output_____
###Markdown
Step 4: Run the Subgraph CutterThe subgraph cutter creates a custom python layer to be accelerated on the FPGA. The inputs are compiler.json, quantizer.json and model weights from the compiler step, as well as the FPGA xclbin. This outputs a cut prototxt file with FPGA references, to be used for inference.
###Code
def Cut(prototxt):
cutter = xfdnnCutter(
inproto="quantize_results/deploy.prototxt",
trainproto=prototxt,
outproto="xfdnn_auto_cut_deploy.prototxt",
outtrainproto="xfdnn_auto_cut_train_val.prototxt",
cutAfter="data",
xclbin=XCLBIN,
netcfg="work/compiler.json",
quantizecfg="work/quantizer.json",
weights="work/weights.h5"
)
cutter.cut()
Cut(prototxt)
# Lets visualize the new graph with the FPGA subgraph
net = caffe_pb2.NetParameter()
text_format.Merge(open("xfdnn_auto_cut_deploy.prototxt").read(), net)
draw_net_to_file(net,"xfdnn_auto_cut_deploy.png")
display("xfdnn_auto_cut_deploy.png")
###Output
_____no_output_____
###Markdown
Step 5: Inference The inputs are the FPGA prototxt file, caffemodel weights, a test image, and the labels
###Code
def Classify(prototxt,caffemodel,image,labels):
classifier = Classifier(prototxt,caffemodel,
mean=np.array([104,117,123]),
raw_scale=255, channel_swap=[2,1,0])
predictions = classifier.predict([io.load_image(image)]).flatten()
labels = np.loadtxt(labels, str, delimiter='\t')
top_k = predictions.argsort()[-1:-6:-1]
for l,p in zip(labels[top_k],predictions[top_k]):
print (l," : ",p)
# Choose image to run, display it for reference
HOME = os.getenv("HOME")
image = HOME+"/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min/ILSVRC2012_val_00000002.JPEG"
display(filename=image)
Classify("xfdnn_auto_cut_deploy.prototxt","quantize_results/deploy.caffemodel",image,HOME+"/CK-TOOLS/dataset-imagenet-ilsvrc2012-aux/synset_words.txt")
###Output
_____no_output_____
###Markdown
Image Classification with CaffeThis tutorial demonstrates the steps required to prepare and deploy a trained Caffe model for FPGA acceleration using Xilinx MLSuite: 1. **Quantize the model** - The quantizer will generate scaling parameters for quantizing floats to INT8. This is required, because FPGAs will take advantage of Fixed Point Precision, to achieve more parallelization at lower power. 2. **Compile the Model** - In this step, the network Graph (prototxt) and the Weights (caffemodel) are compiled; the compiler optimizes the network and generates the FPGA instructions. 3. **Subgraph Cutting** - In this step, the original graph is cut, and a custom FPGA accelerated python layer is inserted to be used for Inference. 4. **Classification** - In this step, the caffe model and the prototxt from the previous step are run on the FPGA to perform inference on an input image. For command line versions see: examples/caffe/ Prerequisite Files1. **Model files** - This notebook requires that model files are located in `$VAI_HOME/examples/DPUCADX8G/caffe/models/`2. **Image files** - This notebook requires that ilsvrc2012 image files are downloaded in `$HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min/` Setup (Before Running Notebook)**Note:** User is responsible for the use of the downloaded content and compliance with any copyright licenses.```conda activate vitis-ai-caffepython -m ck pull repo:ck-envpython -m ck install package:imagenet-2012-val-minpython -m ck install package:imagenet-2012-auxhead -n 500 $HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-aux/val.txt > $HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min/val_map.txtcd $VAI_HOME/examples/DPUCADX8G/caffepython resize.py $HOME/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min 256 256python getModels.pysource /vitis_ai_home/setup/alveo/u200_u250/overlaybins/setup.shpython replace_mluser.py --modelsdir models``` Step 1. Import required packages
###Code
from __future__ import print_function
import os
import shutil
import subprocess
from IPython.display import Image as display
from ipywidgets import interact
import numpy as np
from caffe import Classifier, io
from caffe.proto import caffe_pb2
from caffe.draw import draw_net_to_file
from google.protobuf import text_format
# Environment Variables ("source /vitis_ai_home/setup/alveo/u200_u250/overlaybins/setup.sh")
VAI_HOME = os.getenv("VAI_HOME", os.getcwd()+"/../../../")
XCLBIN = "/opt/xilinx/overlaybins/xdnnv3"
print("Running w/ VAI_HOME: %s" % VAI_HOME)
print("Running w/ XCLBIN: %s" % XCLBIN)
# Bring in SubGraph Cutter
from decent import CaffeFrontend as xfdnnQuantizer
from vai.dpuv1.rt.scripts.framework.caffe.xfdnn_subgraph import CaffeCutter as xfdnnCutter
# Delete stale directories
if os.path.exists("quantize_results"):
shutil.rmtree("quantize_results")
if os.path.exists("work"):
shutil.rmtree("work")
###Output
_____no_output_____
###Markdown
Step 2. Choose a modelChoose a model using the drop down, or select custom, and enter your own.
###Code
@interact(MODEL=["bvlc_googlenet","inception_v2","inception_v3","inception_v4",\
"resnet50_v1","resnet50_v2","squeezenet","vgg16","custom"])
def selectModel(MODEL):
global prototxt
global caffemodel
global name
model_root = VAI_HOME + "/examples/DPUCADX8G/caffe/models/"
if MODEL == "custom":
prototxt = None
caffemodel = None
name = None
else:
prototxt = model_root + MODEL + "/" + MODEL + "_train_val.prototxt"
caffemodel = model_root + MODEL + "/" + MODEL + ".caffemodel"
name = MODEL
if not prototxt:
@interact(PROTOTXT="Provide the path to your prototxt")
def selectPrototxt(PROTOTXT):
global prototxt
prototxt = PROTOTXT
@interact(CAFFEMODEL="Provide the path to your caffemodel")
def selectCaffemodel(CAFFEMODEL):
global caffemodel
caffemodel = CAFFEMODEL
@interact(MODEL="Provide a name to your model")
def selectCaffemodel(MODEL):
global name
name = MODEL
print("Currently running : %s" % name)
print("Running with prototxt: %s" % prototxt)
print("Running with caffemodel: %s" % caffemodel)
###Output
_____no_output_____
###Markdown
Step 3. Run the QuantizerHere, we will quantize the model. The inputs are model prototxt, model weights, number of test iterations and calibration iterations. The output is quantized prototxt, weights, and quantize_info.txt and will be generated in the quantize_results/ directory.The Quantizer will generate a json file holding scaling parameters for quantizing floats to INT8This is required, because FPGAs will take advantage of Fixed Point Precision, to achieve accelerated inference
###Code
def Quantize(prototxt,caffemodel,calib_iter=1,output_dir="quantize_results"):
os.environ["DECENT_DEBUG"] = "1"
subprocess.call(["vai_q_caffe", "quantize",
"--model", prototxt,
"--weights", caffemodel,
"--calib_iter", str(calib_iter)])
Quantize(prototxt,caffemodel)
###Output
_____no_output_____
###Markdown
Step 4: Run the CompilerThe compiler takes in the quantizer outputs from the previous step (prototxt, weights, quantize_info) and outputs a compiler.json and quantizer.json.* A Network Graph (prototxt) and a Weights Blob (caffemodel) are compiled* The network is optimized* FPGA Instructions are generated
###Code
arch = "/opt/vitis_ai/compiler/arch/DPUCADX8G/ALVEO/arch.json" # Informs compiler what underlying hardware is capable of
def Compile(prototxt="quantize_results/deploy.prototxt",\
caffemodel="quantize_results/deploy.caffemodel",\
quantize_info="quantize_results/quantize_info.txt"):
subprocess.call(["vai_c_caffe",
"--prototxt", prototxt,
"--caffemodel", caffemodel,
"--net_name", name,
"--output_dir", "work",
"--arch", arch,
"--options", "{\"quant_cfgfile\":\"%s\"}" %(quantize_info)])
Compile()
###Output
_____no_output_____
###Markdown
Step 4: Run the Subgraph CutterThe subgraph cutter creates a custom python layer to be accelerated on the FPGA. The inputs are compiler.json, quantizer.json and model weights from the compiler step, as well as the FPGA xclbin. This outputs a cut prototxt file with FPGA references, to be used for inference.
###Code
def Cut(prototxt):
cutter = xfdnnCutter(
inproto="quantize_results/deploy.prototxt",
trainproto=prototxt,
outproto="xfdnn_auto_cut_deploy.prototxt",
outtrainproto="xfdnn_auto_cut_train_val.prototxt",
cutAfter="data",
xclbin=XCLBIN,
netcfg="work/compiler.json",
quantizecfg="work/quantizer.json",
weights="work/weights.h5"
)
cutter.cut()
Cut(prototxt)
# Lets visualize the new graph with the FPGA subgraph
net = caffe_pb2.NetParameter()
text_format.Merge(open("xfdnn_auto_cut_deploy.prototxt").read(), net)
draw_net_to_file(net,"xfdnn_auto_cut_deploy.png")
display("xfdnn_auto_cut_deploy.png")
###Output
_____no_output_____
###Markdown
Step 5: Inference The inputs are the FPGA prototxt file, caffemodel weights, a test image, and the labels
###Code
def Classify(prototxt,caffemodel,image,labels):
classifier = Classifier(prototxt,caffemodel,
mean=np.array([104,117,123]),
raw_scale=255, channel_swap=[2,1,0])
predictions = classifier.predict([io.load_image(image)]).flatten()
labels = np.loadtxt(labels, str, delimiter='\t')
top_k = predictions.argsort()[-1:-6:-1]
for l,p in zip(labels[top_k],predictions[top_k]):
print (l," : ",p)
# Choose image to run, display it for reference
HOME = os.getenv("HOME")
image = HOME+"/CK-TOOLS/dataset-imagenet-ilsvrc2012-val-min/ILSVRC2012_val_00000002.JPEG"
display(filename=image)
Classify("xfdnn_auto_cut_deploy.prototxt","quantize_results/deploy.caffemodel",image,HOME+"/CK-TOOLS/dataset-imagenet-ilsvrc2012-aux/synset_words.txt")
###Output
_____no_output_____ |
_posts/scikit/Hierarchical/Hierarchical-clustering.ipynb | ###Markdown
Example builds a swiss roll dataset and runs hierarchical clustering on their position.For more information, see [Hierarchical clustering](http://scikit-learn.org/stable/modules/clustering.htmlhierarchical-clustering).In a first step, the hierarchical clustering is performed without connectivity constraints on the structure and is solely based on distance, whereas in a second step the clustering is restricted to the k-Nearest Neighbors graph: it’s a hierarchical clustering with structure prior.Some of the clusters learned without connectivity constraints do not respect the structure of the swiss roll and extend across different folds of the manifolds. On the opposite, when opposing connectivity constraints, the clusters form a nice parcellation of the swiss roll. New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
Imports This tutorial imports [AgglomerativeClustering](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.htmlsklearn.cluster.AgglomerativeClustering) and [make_swiss_roll](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_swiss_roll.htmlsklearn.datasets.make_swiss_roll).
###Code
print(__doc__)
import plotly.plotly as py
import plotly.graph_objs as go
import time as time
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as p3
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets.samples_generator import make_swiss_roll
###Output
Automatically created module for IPython interactive environment
###Markdown
Calculations Generate data [swiss roll dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_swiss_roll.htmlsklearn.datasets.make_swiss_roll).
###Code
n_samples = 1500
noise = 0.05
X, _ = make_swiss_roll(n_samples, noise)
# Make it thinner
X[:, 1] *= .5
###Output
_____no_output_____
###Markdown
Plot result Function to convert matplotlib colormap to plotly colormap.
###Code
def matplotlib_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
        C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))  # list() so the result can be indexed under Python 3
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
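# Illustrative use (colour values are approximate): matplotlib_to_plotly(plt.cm.jet, 2)
# -> [[0.0, 'rgb(0, 0, 127)'], [1.0, 'rgb(127, 0, 0)']]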
###Output
_____no_output_____
###Markdown
Without connectivity constraints Compute clustering
###Code
print("Compute unstructured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, linkage='ward').fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)
color = matplotlib_to_plotly(plt.cm.jet, 6)
data = [ ]
for l in np.unique(label):
trace = go.Scatter3d(x=X[label == l, 0],
y=X[label == l, 1],
z=X[label == l, 2],
mode='markers',
showlegend = False,
marker=dict( color= color[l][1],
line= dict(color='black', width=1)
))
data.append(trace)
layout = go.Layout(height = 600,
title = 'Without connectivity constraints (time %.2fs)' % elapsed_time,
scene = dict(
xaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True),
yaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True,),
zaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True)),
margin=dict(
l=0, r=0,
b=0, t=50)
)
fig = go.Figure(data=data, layout = layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
With connectivity constraints Define the structure A of the data. Here a 10 nearest neighbors.
###Code
from sklearn.neighbors import kneighbors_graph
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)
###Output
_____no_output_____
###Markdown
Compute clustering
###Code
print("Compute structured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, connectivity=connectivity,
linkage='ward').fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)
color = matplotlib_to_plotly(plt.cm.jet, 6)
data = [ ]
for l in np.unique(label):
trace = go.Scatter3d(x=X[label == l, 0],
y=X[label == l, 1],
z=X[label == l, 2],
mode='markers',
showlegend = False,
marker=dict( color= color[l][1],
line= dict(color='black', width=1)
))
data.append(trace)
layout = go.Layout(height = 600,
title = 'With connectivity constraints (time %.2fs)' % elapsed_time,
scene = dict(
xaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True),
yaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True,),
zaxis = dict(
backgroundcolor="rgb(233, 233, 233)",
showbackground=True)),
margin=dict(
l=0, r=0,
b=0, t=50)
)
fig = go.Figure(data=data, layout = layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
License Authors : Vincent Michel, 2010 Alexandre Gramfort, 2010 Gael Varoquaux, 2010License: BSD 3 clause
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Hierarchical-clustering.ipynb', 'scikit-learn/plot-ward-structured-vs-unstructured/', 'Hierarchical Clustering Structured vs Unstructured Ward | plotly',
' ',
title = 'Hierarchical Clustering Structured vs Unstructured Ward | plotly',
name = 'Hierarchical Clustering Structured vs Unstructured Ward',
has_thumbnail='true', thumbnail='thumbnail/Hierarchical.jpg',
language='scikit-learn', page_type='example_index',
display_as='clustering', order=5,
ipynb= '~Diksha_Gabha/2764')
###Output
_____no_output_____ |
assets/all_html/2019_10_06_HW1_viathedocs_kdata.ipynb | ###Markdown
SENTIMENT ANALYSIS -- (WITH HOMEMADE DATA!) (via [these docs](http://www.nltk.org/howto/sentiment.html)) | 10-06-19 STEP 1: Import ALL the things
###Code
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import subjectivity
from nltk.sentiment import SentimentAnalyzer
from nltk.sentiment.util import *
###Output
_____no_output_____
###Markdown
STEP 2: Import the fake data you created just for this project
###Code
import os
import pandas as pd
negative = os.listdir('AI_NEG/')
positive = os.listdir('AI_POS/')
positive_alltext = []
for file in positive:
f=open('AI_POS/'+file)
content=f.read()
positive_alltext.append(content)
f.close()
negative_alltext = []
for file in negative:
f=open('AI_NEG/'+file)
content=f.read()
negative_alltext.append(content)
f.close()
###Output
_____no_output_____
###Markdown
STEP 2b: Tokenize and clean the data
###Code
from nltk.tokenize import word_tokenize
def get_tokens(sentence):
tokens = word_tokenize(sentence)
clean_tokens = [word.lower() for word in tokens if word.isalpha()]
return clean_tokens
negative_alltext_tokens = [get_tokens(sentence) for sentence in negative_alltext]
positive_alltext_tokens = [get_tokens(sentence) for sentence in positive_alltext]
neg_docs = [(sent, 'neg') for sent in negative_alltext_tokens]
pos_docs = [(sent, 'pos') for sent in positive_alltext_tokens]
###Output
_____no_output_____
###Markdown
STEP 3: Create `test` and `train` sets for both `neg` and `pos`
###Code
train_neg_docs = neg_docs[:4]
test_neg_docs = neg_docs[4:5]
train_pos_docs = pos_docs[:4]
test_pos_docs = pos_docs[4:5]
###Output
_____no_output_____
###Markdown
STEP 4: Combine the two `test` and `train` sets
###Code
training_docs = train_neg_docs + train_pos_docs
testing_docs = test_neg_docs + test_pos_docs
###Output
_____no_output_____
###Markdown
STEP 5: Use `SentimentAnalyzer` to mark negation in training docs
###Code
sentim_analyzer = SentimentAnalyzer()
all_words_neg = sentim_analyzer.all_words([mark_negation(doc) for doc in training_docs])
all_words_neg
###Output
_____no_output_____
###Markdown
Note how this sentiment analyzer is SUPPOSED to mark everything after a negation word with '_NEG'. However, we do not have enough data in our 10-text-file dataset to actually see this marking happen. STEP 6: Use `unigram_word_feats` to get unigram features
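For reference, here is a tiny standalone illustration of what `mark_negation` does on a toy token list (the output shown in the comment is approximate and assumes NLTK's default negation handling):
```python
from nltk.sentiment.util import mark_negation

tokens = ['i', 'did', 'not', 'enjoy', 'this', 'movie', '.']
print(mark_negation(tokens))
# roughly: ['i', 'did', 'not', 'enjoy_NEG', 'this_NEG', 'movie_NEG', '.']
```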
###Code
unigram_feats = sentim_analyzer.unigram_word_feats(all_words_neg)
len(unigram_feats)
###Output
_____no_output_____
###Markdown
STEP 7: Use `add_feat_extractor` to get a feature-value representation of our data Apply to both `training_set` and `testing_set`
###Code
sentim_analyzer.add_feat_extractor(extract_unigram_feats, unigrams=unigram_feats)
training_set = sentim_analyzer.apply_features(training_docs)
training_set[:1]
test_set = sentim_analyzer.apply_features(testing_docs)
test_set[:1]
###Output
_____no_output_____
###Markdown
STEP 8: FINAL STEP!! We use Naive Bayes to create a trainer and FINALLY classify our data!
###Code
trainer = NaiveBayesClassifier.train
classifier = sentim_analyzer.train(trainer, training_set)
for key,value in sorted(sentim_analyzer.evaluate(test_set).items()):
print('{0}: {1}'.format(key,value))
###Output
Evaluating NaiveBayesClassifier results...
Accuracy: 1.0
F-measure [neg]: 1.0
F-measure [pos]: 1.0
Precision [neg]: 1.0
Precision [pos]: 1.0
Recall [neg]: 1.0
Recall [pos]: 1.0
|
notebooks/examples/line_percent.ipynb | ###Markdown
Line Chart with Percent axis----------------------------This example shows how to format the tick labels of the y-axis of a chart as percentages.
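The `format` string is handed to Vega-Lite, which follows d3-format conventions, so other percent formats should work the same way; for example, `alt.Axis(format='.1%')` would show one decimal place on the tick labels (a variation on the chart below, not part of the original example).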
###Code
import altair as alt
alt.data_transformers.enable('json')
from altair.expr import datum
from vega_datasets import data
source = data.jobs.url
alt.Chart(source).mark_line().encode(
alt.X('year:O'),
alt.Y('perc:Q', axis=alt.Axis(format='%')),
color='sex:N'
).properties(
title='Percent of work-force working as Welders'
).transform_filter(
datum.job == 'Welder'
)
###Output
_____no_output_____ |
examples/01_mms/example_mms_particle_distributions.ipynb | ###Markdown
Particle Distributionsauthor: Louis Richard\Example showing how you can work with particle distributions
###Code
import numpy as np
import xarray as xr
import matplotlib.pylab as pl
import matplotlib.pyplot as plt
from pyrfu import mms, pyrf
from pyrfu.plot import plot_line, plot_spectr, plot_projection
###Output
_____no_output_____
###Markdown
Define spacecraft index, time interval and data path
###Code
mms_id = 3
tint = ["2015-12-02T01:14:15.000", "2015-12-02T01:15:13.000"]
data_path = "/Users/louisr/Documents/PhD/irfu-python-draft/pydata"
###Output
_____no_output_____
###Markdown
Load Velocity Distribution Functions (VDFs)
###Code
vdf_i = mms.get_data("PDi_fpi_brst_l2", tint, mms_id, data_path=data_path)
vdf_e = mms.get_data("PDe_fpi_brst_l2", tint, mms_id, data_path=data_path)
###Output
Loading mms3_dis_dist_brst...
Loading mms3_des_dist_brst...
###Markdown
Load supporting information
###Code
b_xyz = mms.get_data("B_dmpa_fgm_brst_l2", tint, mms_id, data_path=data_path)
e_xyz = mms.get_data("e_dsl_edp_brst_l2", tint, mms_id, data_path=data_path)
sc_pot = mms.get_data("v_edp_brst_l2", tint, mms_id, data_path=data_path)
###Output
Loading mms3_fgm_b_dmpa_brst_l2...
Loading mms3_edp_dce_dsl_brst_l2...
Loading mms3_edp_scpot_brst_l2...
###Markdown
Example operations Omnidirectional differential energy flux
###Code
vdf_e_omni = mms.vdf_omni(vdf_e)
###Output
_____no_output_____
###Markdown
Construct pitch-angle distribution
###Code
vdf_e_pad = mms.get_pitch_angle_dist(vdf_e, b_xyz, tint=tint, angles=24)
###Output
notice : User defined number of pitch angles.
###Markdown
Limit energy range
###Code
vdf_e_lowen = mms.vdf_elim(vdf_e, [0, 200])
###Output
Effective eint = [10.96, 191.15]
###Markdown
Change units to differential energy flux
###Code
vdf_e_deflux = mms.vdf_to_deflux(vdf_e)
###Output
_____no_output_____
###Markdown
Change units to particle energy flux
###Code
vdf_e_dpflux = mms.vdf_to_dpflux(vdf_e)
###Output
_____no_output_____
###Markdown
Resample energy to 64 energy levels; this reduces the time resolution
###Code
vdf_e_e64 = mms.vdf_to_e64(vdf_e)
vdf_e_pa_lowen = mms.get_pitch_angle_dist(mms.vdf_elim(vdf_e_e64, [20, 200]), b_xyz, tint=tint, angles=18)
vdf_e_pa_lowen_spectr = xr.DataArray(np.nanmean(vdf_e_pa_lowen.data, axis=1),
coords=[vdf_e_pa_lowen.time.data, vdf_e_pa_lowen.theta.data[0, :]],
dims=["time", "theta"])
vdf_e_pa_miden = mms.get_pitch_angle_dist(mms.vdf_elim(vdf_e_e64, [200, 2000]), b_xyz, tint=tint, angles=18)
vdf_e_pa_miden_spectr = xr.DataArray(np.nanmean(vdf_e_pa_miden.data, axis=1),
coords=[vdf_e_pa_miden.time.data, vdf_e_pa_miden.theta.data[0, :]],
dims=["time", "theta"])
###Output
Effective eint = [20.40, 191.15]
notice : User defined number of pitch angles.
###Markdown
Plot
###Code
%matplotlib notebook
n_panels = 7;
f, axs = plt.subplots(n_panels, sharex="all", figsize=(8.5, 11))
f.subplots_adjust(hspace=0, left=.13, right=.87, bottom=.05, top=.95)
plot_line(axs[0], b_xyz)
plot_line(axs[0], pyrf.norm(b_xyz))
axs[0].legend(["$B_x$", "$B_y$", "$B_z$", "$|B|$"], bbox_to_anchor=(1, 1.05))
axs[0].set_ylabel("$B$ [nT]")
axs[0].set_title(f"MMS{mms_id:d}")
axs[1], caxs1 = plot_spectr(axs[1], mms.vdf_omni(vdf_e), yscale="log",
cscale="log", cmap="jet")
axs[1].set_yticks(np.logspace(1, 4, 4))
caxs1.set_ylabel("PSD" + "\n" + "[s$^{3}$ cm$^{-6}$]")
axs[1].set_ylabel("$E_e$ [eV]")
e_lim = [20, 200]
vdf_e_pa_lowen = mms.get_pitch_angle_dist(mms.vdf_elim(vdf_e_e64, e_lim), b_xyz, tint=tint, angles=18)
vdf_e_pa_lowen_spectr = xr.DataArray(np.nanmean(vdf_e_pa_lowen.data, axis=1),
coords=[vdf_e_pa_lowen.time.data, vdf_e_pa_lowen.theta.data[0, :]],
dims=["time", "theta"])
axs[2], caxs2 = plot_spectr(axs[2], vdf_e_pa_lowen_spectr,
cscale="log", cmap="jet")
axs[2].set_yticks([0, 45, 90, 135])
caxs2.set_ylabel("PSD" + "\n" + "[s$^{3}$ cm$^{-6}$]")
axs[2].set_ylabel("$\\theta$ [deg.]")
axs[2].text(.03, .1, f"{e_lim[0]:d} < $E_e$ < {e_lim[1]:d} eV", transform=axs[2].transAxes,
bbox=dict(boxstyle="square", ec=(1., 1., 1.), fc=(1., 1., 1.)))
e_lim = [200, 2000]
vdf_e_pa_miden = mms.get_pitch_angle_dist(mms.vdf_elim(vdf_e_e64, e_lim), b_xyz, tint=tint, angles=18)
vdf_e_pa_miden_spectr = xr.DataArray(np.nanmean(vdf_e_pa_miden.data, axis=1),
coords=[vdf_e_pa_miden.time.data, vdf_e_pa_miden.theta.data[0, :]],
dims=["time", "theta"])
axs[3], caxs3 = plot_spectr(axs[3], vdf_e_pa_miden_spectr,
cscale="log", cmap="jet")
axs[3].set_yticks([0, 45, 90, 135])
caxs3.set_ylabel("PSD" + "\n" + "[s$^{3}$ cm$^{-6}$]")
axs[3].set_ylabel("$\\theta$ [deg.]")
axs[3].text(.03, .1, f"{e_lim[0]:d} < $E_e$ < {e_lim[1]:d} eV", transform=axs[3].transAxes,
bbox=dict(boxstyle="square", ec=(1., 1., 1.), fc=(1., 1., 1.)))
pa_lim = [0, 15]
vdf_e_lowan = mms.get_pitch_angle_dist(mms.vdf_to_e64(vdf_e), b_xyz, tint=tint, angles=pa_lim)
vdf_e_lowan_spectr = xr.DataArray(np.nanmean(vdf_e_lowan.data, axis=2),
coords=[vdf_e_lowan.time.data, vdf_e_lowan.energy.data[0, :]],
dims=["time", "energy"])
axs[4], caxs4 = plot_spectr(axs[4], vdf_e_lowan_spectr, yscale="log",
cscale="log", cmap="jet")
caxs4.set_ylabel("PSD" + "\n" + "[s$^{3}$ cm$^{-6}$]")
axs[4].set_ylabel("$E_e$ [eV]")
axs[4].text(.03, .1, f"{pa_lim[0]:d}$^\\circ$ < $\\theta$ < {pa_lim[1]:d}$^\\circ$",
transform=axs[4].transAxes,
bbox=dict(boxstyle="square", ec=(1., 1., 1.), fc=(1., 1., 1.)))
pa_lim = [75, 105]
vdf_e_midan = mms.get_pitch_angle_dist(mms.vdf_to_e64(vdf_e), b_xyz, tint=tint, angles=pa_lim)
vdf_e_midan_spectr = xr.DataArray(np.nanmean(vdf_e_midan.data, axis=2),
coords=[vdf_e_midan.time.data, vdf_e_midan.energy.data[0, :]],
dims=["time", "energy"])
axs[5], caxs5 = plot_spectr(axs[5], vdf_e_midan_spectr, yscale="log",
cscale="log", cmap="jet")
caxs5.set_ylabel("PSD" + "\n" + "[s$^{3}$ cm$^{-6}$]")
axs[5].set_ylabel("$E_e$ [eV]")
axs[5].text(.03, .1, f"{pa_lim[0]:d}$^\\circ$ < $\\theta$ < {pa_lim[1]:d}$^\\circ$",
transform=axs[5].transAxes,
bbox=dict(boxstyle="square", ec=(1., 1., 1.), fc=(1., 1., 1.)))
pa_lim = [165, 180]
vdf_e_higan = mms.get_pitch_angle_dist(mms.vdf_to_e64(vdf_e), b_xyz, tint=tint, angles=pa_lim)
vdf_e_higan_spectr = xr.DataArray(np.nanmean(vdf_e_higan.data, axis=2),
coords=[vdf_e_higan.time.data, vdf_e_higan.energy.data[0, :]],
dims=["time", "energy"])
axs[6], caxs6 = plot_spectr(axs[6], vdf_e_higan_spectr, yscale="log",
cscale="log", cmap="jet")
caxs6.set_ylabel("PSD" + "\n" + "[s$^{3}$ cm$^{-6}$]")
axs[6].set_ylabel("$E_e$ [eV]")
axs[6].text(.03, .1, f"{pa_lim[0]:d}$^\\circ$ < $\\theta$ < {pa_lim[1]:d}$^\\circ$",
transform=axs[6].transAxes,
bbox=dict(boxstyle="square", ec=(1., 1., 1.), fc=(1., 1., 1.)))
###Output
_____no_output_____
###Markdown
Project the distribution onto the (E, ExB), (ExB, B) and (B, E) planes Compute the pitch-angle distribution with 17 angles
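For orientation, the three planes are spanned by the field-aligned triad built from E, the ExB direction and B. A minimal NumPy sketch of constructing such an orthonormal triad from two measured vectors (an illustration of the geometry only, not the pyrfu internals):
```python
import numpy as np

def field_aligned_triad(e, b):
    # Unit vector along the background magnetic field B
    b_hat = b / np.linalg.norm(b)
    # Unit vector along the ExB (convection) direction
    exb_hat = np.cross(e, b)
    exb_hat = exb_hat / np.linalg.norm(exb_hat)
    # Unit vector along the component of E perpendicular to B
    e_perp = e - np.dot(e, b_hat) * b_hat
    e_perp = e_perp / np.linalg.norm(e_perp)
    return e_perp, exb_hat, b_hat

# Toy values, purely illustrative
print(field_aligned_triad(np.array([1.0, 0.2, 0.1]), np.array([0.0, 0.0, 5.0])))
```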
###Code
vdf_e_pad = mms.get_pitch_angle_dist(vdf_e, b_xyz, tint, angles=17)
###Output
notice : User defined number of pitch angles.
###Markdown
Resample background magnetic field, electric field and ExB drift
###Code
b_0 = pyrf.resample(b_xyz, vdf_e)
e_0 = pyrf.resample(e_xyz, vdf_e)
exb = pyrf.cross(e_0, b_0)
###Output
/Users/louisr/opt/anaconda3/lib/python3.8/site-packages/pyrfu/pyrf/resample.py:223: UserWarning: Using averages in resample
warnings.warn("Using averages in resample", UserWarning)
###Markdown
Plot
###Code
%matplotlib notebook
idx = 1339
x = e_0.data[idx, :]
y = exb.data[idx, :]
z = b_0.data[idx, :]
time = list(pyrf.datetime642iso8601(vdf_e.time.data[idx]))
f = plt.figure(figsize=(8.5, 9))
gsp1 = f.add_gridspec(2, 3, hspace=0, bottom=.07, top=.99, left=.1, right=.9)
gsp10 = gsp1[0, :].subgridspec(1, 3, hspace=0)
gsp11 = gsp1[1, :].subgridspec(1, 2, hspace=0)
# Create axes in the grid spec
axs10 = [f.add_subplot(gsp10[i]) for i in range(3)]
axs11 = [f.add_subplot(gsp11[i]) for i in range(2)]
f.subplots_adjust(wspace=.4)
v_x, v_y, f_mat = mms.vdf_projection(vdf_e, time, np.vstack([x, y, -z]), sc_pot, e_lim=15)
axs10[0], caxs10 = plot_projection(axs10[0], v_x, v_y, f_mat * 1e12, vlim=12e3,
clim=[-18, -13], cbar_pos="top")
axs10[0].set_xlabel("$V_{E}$ [Mm s$^{-1}$]")
axs10[0].set_ylabel("$V_{E\\times B}$ [Mm s$^{-1}$]")
caxs10.set_xlabel("log$_{10} f_e$ [s$^{3}$ m$^{-6}$]")
v_x, v_y, f_mat = mms.vdf_projection(vdf_e, time, np.vstack([y, z, -x]), sc_pot, e_lim=15)
axs10[1], caxs11 = plot_projection(axs10[1], v_x, v_y, f_mat * 1e12, vlim=12e3,
clim=[-18, -13], cbar_pos="top")
axs10[1].set_xlabel("$V_{E\\times B}$ [Mm s$^{-1}$]")
axs10[1].set_ylabel("$V_{B}$ [Mm s$^{-1}$]")
caxs11.set_xlabel("log$_{10} f_e$ [s$^{3}$ m$^{-6}$]")
v_x, v_y, f_mat = mms.vdf_projection(vdf_e, time, np.vstack([z, x, -y]), sc_pot, e_lim=15)
axs10[2], caxs12 = plot_projection(axs10[2], v_x, v_y, f_mat * 1e12, vlim=12e3,
clim=[-18, -13], cbar_pos="top")
axs10[2].set_xlabel("$V_{B}$ [Mm s$^{-1}$]")
axs10[2].set_ylabel("$V_{E}$ [Mm s$^{-1}$]")
caxs12.set_xlabel("log$_{10} f_e$ [s$^{3}$ m$^{-6}$]")
axs11[0].loglog(vdf_e_pad.energy.data[idx, :], vdf_e_pad.data.data[idx, :, 0],
label="$\\theta = 0$ deg.")
axs11[0].loglog(vdf_e_pad.energy.data[idx, :], vdf_e_pad.data.data[idx, :, 9],
label="$\\theta = 90$ deg.")
axs11[0].loglog(vdf_e_pad.energy.data[idx, :], vdf_e_pad.data.data[idx, :, -1],
label="$\\theta = 180$ deg.")
axs11[0].legend()
axs11[0].set_xlim([1e1, 1e3])
axs11[0].set_xlabel("$E_e$ [eV]")
axs11[0].set_ylim([1e-31, 2e-26])
axs11[0].set_ylabel("$f_e$ [s$^{3}$ cm$^{-6}$]")
colors = pl.cm.jet(np.linspace(0, 1, len(vdf_e_pad.energy[idx, :])))
for i_en in range(len(vdf_e_pad.energy[idx, :])):
axs11[1].semilogy(vdf_e_pad.theta.data[idx, :], vdf_e_pad.data.data[idx, i_en, :],
color=colors[i_en], label=f"{vdf_e_pad.energy.data[idx, i_en]:5.2f} eV")
axs11[1].set_xlim([0, 180.])
axs11[1].set_xlabel("$\\theta$ [deg.]")
axs11[1].set_ylim([1e-31, 2e-26])
axs11[1].set_ylabel("$f_e$ [s$^{3}$ cm$^{-6}$]")
axs11[1].set_xticks([0, 45, 90, 135, 180])
f.suptitle(time[0])
###Output
_____no_output_____ |
doc/source/tutorials/iiasa_dbs.ipynb | ###Markdown
Read Directly from IIASA Data ResourcesIIASA's new [scenario explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer//workspaces) is not only a great resource on its own, but it also allows the underlying datasets to be directly queried. `pyam` takes advantage of this ability to allow you to easily pull data and work with it.
###Code
import pyam
from pyam.iiasa import valid_connection_names
###Output
_____no_output_____
###Markdown
There are currently not many available data sources, but more will be added with time
###Code
valid_connection_names()
###Output
_____no_output_____
###Markdown
In this example, we will be pulling data from the Special Report on 1.5C explorer. This can be done in a number of ways, for example```pyam.read_iiasa('iamc15')pyam.read_iiasa_iamc15()```However, this would pull all available data. It is also possible to query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`. We'll do that to keep it manageable.
###Code
df = pyam.read_iiasa_iamc15(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
###Output
INFO:root:You are connected to the iamc15 scenario explorer. Please cite as:
D. Huppmann, E. Kriegler, V. Krey, K. Riahi, J. Rogelj, S.K. Rose, J. Weyant, et al., IAMC 1.5°C Scenario Explorer and Data hosted by IIASA. IIASA & IAMC, 2018. doi: 10.22022/SR15/08-2018.15429, url: data.ene.iiasa.ac.at/iamc-1.5c-explorer
###Markdown
Here we pulled out all results for model(s) that start with 'MESSAGEix' that are in the 'World' region and associated with the two named variables.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the Data SourceIf you're interested in what data is actually in the data source, you can use `pyam.iiasa.Connection` to do so.
###Code
conn = pyam.iiasa.Connection('iamc15')
###Output
INFO:root:You are connected to the iamc15 scenario explorer. Please cite as:
D. Huppmann, E. Kriegler, V. Krey, K. Riahi, J. Rogelj, S.K. Rose, J. Weyant, et al., IAMC 1.5°C Scenario Explorer and Data hosted by IIASA. IIASA & IAMC, 2018. doi: 10.22022/SR15/08-2018.15429, url: data.ene.iiasa.ac.at/iamc-1.5c-explorer
###Markdown
The `conn` object has a number of useful functions for listing what's in the dataset. A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
You can directly query the the `conn`, which will give you a `pd.DataFrame`
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
df.head()
###Output
_____no_output_____
###Markdown
And you can easily turn this into a `pyam.IamDataFrame` to continue your analysis.
###Code
df = pyam.IamDataFrame(df)
ax = df.filter(variable='Primary Energy|Coal').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read directly from IIASA data resourcesThe IIASA Energy Program hosts a suite of **Scenario Explorer** instances and related infrastructure to support analysis of integrated-assessment pathways in IPCC reports and model comparison projects. High-profile use cases include the [IAMC 1.5°C Scenario Explorer hosted by IIASA](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer) supporting the *IPCC Special Report on Global Warming of 1.5°C* (SR15) and the Horizon 2020 project [CD-LINKS](https://data.ene.iiasa.ac.at/cd-links).IIASA's [modeling platform infrastructure](http://software.ene.iiasa.ac.at/ixmp-server) and the Scenario Explorer UI is not only a great resource on its own, but it also allows the underlying datasets to be directly queried.**pyam** takes advantage of this ability to allow you to easily pull data and work with it in your Python data processing and analysis workflow.
###Code
import pyam
###Output
_____no_output_____
###Markdown
Connecting to a data resource (aka the database API of a Scenario Explorer instance)Accessing a data resource is done via a **Connection** object.By default, you can connect to all public Scenario Explorer instances.
###Code
conn = pyam.iiasa.Connection()
conn.valid_connections
###Output
_____no_output_____
###Markdown
If you have credentials to connect to a non-public or restricted Scenario Explorer instance, you can store this information by running the following command in a separate Python console:```import pyampyam.iiasa.set_config(, )```When initializing a new **Connection** instance, **pyam** will automatically search for the configuration in a known location. In this example, we will be retrieving data from the *IAMC 1.5°C Scenario Explorer hosted by IIASA* ([link](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer)), which provides the quantitative scenario ensemble underpinning the *IPCC Special Report on Global Warming of 1.5C* (SR15). This can be done either via the constructor:```pyam.iiasa.Connection('iamc15')```or, if you want to query multiple databases, via the explicit `connect()` method:```conn = pyam.iiasa.Connection()conn.connect('iamc15')``` We also provide some convenience functions to shorten the amount of code you have to write. Under the hood, `read_iiasa()` is just opening a connection to a database API and sending a query to the resource. In this tutorial, we will query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa(
'iamc15',
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
_____no_output_____
###Markdown
Here we pulled out all time series data for model(s) that start with 'MESSAGEix' that are in the 'World' region and associated with the two named variables. We also added the meta column "category", which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').line_plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
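Because "category" was loaded as a meta column, it can also be used directly as a filter argument; a quick sketch (the category label below is an assumed SR15-style value, and `df` is the frame loaded above):
```python
# Keep only scenarios in one climate category, then plot their CO2 emissions
df_low = df.filter(category='1.5C low overshoot')
ax = df_low.filter(variable='Emissions|CO2').line_plot(color='scenario')
```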
###Code
ax = df.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the data resourceIf you're interested in what data is available in the data source, you can use **pyam.iiasa.Connection** to do so.
###Code
conn = pyam.iiasa.Connection('iamc15')
###Output
_____no_output_____
###Markdown
The **Connection** object has a number of useful functions for listing what's available in the data resource.These functions follow the conventions of the **IamDataFrame** class (where possible).A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different categorization and quantitative indicators are available for model/scenario combinations.These are usually called `meta` indicators in **pyam**.We queried the meta-indicator "category" in the above example, but there are many more.You can get a list with the following command:
###Code
conn.meta_columns.head()
###Output
_____no_output_____
###Markdown
You can directly query the **Connection**, which will return a **pyam.IamDataFrame**...
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
###Output
_____no_output_____
###Markdown
...so that you can directly continue with your analysis and visualization workflow using **pyam**!
###Code
ax = df.filter(variable='Primary Energy|Coal').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read Directly from IIASA Data ResourcesIIASA's new [scenario explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer//workspaces) is not only a great resource on its own, but it also allows the underlying datasets to be directly queried.**pyam** takes advantage of this ability to allow you to easily pull data and work with it.
###Code
import pyam
###Output
_____no_output_____
###Markdown
Accessing an explorer is done via a `Connection` object. By default, all public explorers can be connected to.
###Code
conn = pyam.iiasa.Connection()
conn.valid_connections
###Output
_____no_output_____
###Markdown
If you have additional credentials, you can supply them as well via the `creds` key-word argument:```pyam.iiasa.Connection(creds=(, ))``` In this example, we will be pulling data from the Special Report on 1.5C explorer. This can be done either via the constructor:```pyam.iiasa.Connection('IXSE_SR15')```or, if you want to query multiple databases, via the explicit `connect()` method:```conn = pyam.iiasa.Connection()conn.connect('IXSE_SR15')``` We also provide some convenience functions to shorten the amount of code you have to write. Under the hood, `read_iiasa()` is just opening a connection to a database and making a query on that data. In this tutorial, we will query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa(
'IXSE_SR15',
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
_____no_output_____
###Markdown
Here we pulled out all times series data for model(s) that start with 'MESSAGEix' that are in the 'World' region and associated with the two named variables. We also added the "category" metadata, which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').line_plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the Data SourceIf you're interested in what data is actually in the data source, you can use `pyam.iiasa.Connection` to do so.
###Code
conn = pyam.iiasa.Connection('IXSE_SR15')
###Output
_____no_output_____
###Markdown
The `conn` object has a number of useful functions for listing what's in the dataset. A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different kinds of indicators are available for model/scenario combinations.We queried the "category" metadata in the above example, but there are many more. You can see them with
###Code
conn.available_metadata().head()
###Output
_____no_output_____
###Markdown
You can directly query the `Connection`, which will give you a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
df.head()
###Output
_____no_output_____
###Markdown
And you can easily turn this into a `pyam.IamDataFrame` to continue your analysis.
###Code
df = pyam.IamDataFrame(df)
ax = df.filter(variable='Primary Energy|Coal').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read Directly from IIASA Data ResourcesIIASA's new [scenario explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer//workspaces) is not only a great resource on its own, but it also allows the underlying datasets to be directly queried. `pyam` takes advantage of this ability to allow you to easily pull data and work with it.
###Code
import pyam
###Output
_____no_output_____
###Markdown
Accessing an explorer is done via a `Connection` object. By default, all public explorers can be connected to.
###Code
conn = pyam.iiasa.Connection()
conn.valid_connections
###Output
_____no_output_____
###Markdown
If you have additional credentials, you can supply them as well via the `creds` key-word argument:```pyam.iiasa.Connection(creds=(, ))``` In this example, we will be pulling data from the Special Report on 1.5C explorer. This can be done either via the constructor:```pyam.iiasa.Connection('IXSE_SR15')```or, if you want to query multiple databases, via the explicit `connect()` method:```conn = pyam.iiasa.Connection()conn.connect('IXSE_SR15')``` We also provide some convenience functions to shorten the amount of code you have to write. Under the hood, `read_iiasa()` is just opening a connection to a database and making a query on that data. In this tutorial, we will query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa(
'IXSE_SR15',
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
INFO:root:You are connected to the IXSE_SR15 scenario explorer hosted by IIASA. If you use this data in any published format, please cite the data as provided in the explorer guidelines: https://data.ene.iiasa.ac.at/iamc-1.5c-explorer/#/about.
###Markdown
Here we pulled out all times series data for model(s) that start with 'MESSAGEix' that are in the 'World' region and associated with the two named variables. We also added the "category" metadata, which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').line_plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the Data SourceIf you're interested in what data is actually in the data source, you can use `pyam.iiasa.Connection` to do so.
###Code
conn = pyam.iiasa.Connection('IXSE_SR15')
###Output
INFO:root:You are connected to the IXSE_SR15 scenario explorer hosted by IIASA. If you use this data in any published format, please cite the data as provided in the explorer guidelines: https://data.ene.iiasa.ac.at/iamc-1.5c-explorer/#/about.
###Markdown
The `conn` object has a number of useful functions for listing what's in the dataset. A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different kinds of indicators are available for model/scenario combinations. We queried the `category` metadata in the above example, but there are many more. You can see them with
###Code
conn.available_metadata().head()
###Output
_____no_output_____
###Markdown
You can directly query the `Connection`, which will give you a `pd.DataFrame`
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
df.head()
###Output
_____no_output_____
###Markdown
And you can easily turn this into a `pyam.IamDataFrame` to continue your analysis.
###Code
df = pyam.IamDataFrame(df)
ax = df.filter(variable='Primary Energy|Coal').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read Directly from IIASA Data ResourcesIIASA's new [scenario explorer](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer//workspaces) is not only a great resource on its own, but it also allows the underlying datasets to be directly queried. `pyam` takes advantage of this ability to allow you to easily pull data and work with it.
###Code
import pyam
from pyam.iiasa import valid_connection_names
###Output
_____no_output_____
###Markdown
There are currently not many available data sources, but more will be added with time
###Code
valid_connection_names()
###Output
_____no_output_____
###Markdown
In this example, we will be pulling data from the Special Report on 1.5C explorer. This can be done in a number of ways, for example```pyam.read_iiasa('iamc15')pyam.read_iiasa_iamc15()```However, this would pull all available data. It is also possible to query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa_iamc15(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
INFO:root:You are connected to the iamc15 scenario explorer. Please cite as:
D. Huppmann, E. Kriegler, V. Krey, K. Riahi, J. Rogelj, S.K. Rose, J. Weyant, et al., IAMC 1.5C Scenario Explorer and Data hosted by IIASA. IIASA & IAMC, 2018. doi: 10.22022/SR15/08-2018.15429, url: data.ene.iiasa.ac.at/iamc-1.5c-explorer
###Markdown
Here we pulled out all times series data for model(s) that start with 'MESSAGEix' that are in the 'World' region and associated with the two named variables. We also added the "category" metadata, which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').line_plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the Data SourceIf you're interested in what data is actually in the data source, you can use `pyam.iiasa.Connection` to do so.
###Code
conn = pyam.iiasa.Connection('iamc15')
###Output
INFO:root:You are connected to the iamc15 scenario explorer. Please cite as:
D. Huppmann, E. Kriegler, V. Krey, K. Riahi, J. Rogelj, S.K. Rose, J. Weyant, et al., IAMC 1.5C Scenario Explorer and Data hosted by IIASA. IIASA & IAMC, 2018. doi: 10.22022/SR15/08-2018.15429, url: data.ene.iiasa.ac.at/iamc-1.5c-explorer
###Markdown
The `conn` object has a number of useful functions for listing what's in the dataset. A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different kinds of indicators are available for model/scenario combinations. We queried the `category` metadata in the above example, but there are many more. You can see them with
###Code
conn.available_metadata().head()
###Output
_____no_output_____
###Markdown
You can directly query the `Connection`, which will give you a `pd.DataFrame`
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
df.head()
###Output
_____no_output_____
###Markdown
And you can easily turn this into a `pyam.IamDataFrame` to continue your analysis.
###Code
df = pyam.IamDataFrame(df)
ax = df.filter(variable='Primary Energy|Coal').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read directly from IIASA data resourcesThe IIASA *Energy, Climate, and Environment* Program hosts a suite of **Scenario Explorer** instances and related infrastructure to support analysis of integrated-assessment pathways in IPCC reports and model comparison projects. High-profile use cases include the [IAMC 1.5°C Scenario Explorer hosted by IIASA](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer) supporting the *IPCC Special Report on Global Warming of 1.5°C* (SR15) and the Horizon 2020 project [CD-LINKS](https://data.ene.iiasa.ac.at/cd-links).IIASA's [modeling platform infrastructure](http://software.ene.iiasa.ac.at/ixmp-server) and the Scenario Explorer UI is not only a great resource on its own, but it also allows the underlying datasets to be directly queried.**pyam** takes advantage of this ability to allow you to easily pull data and work with it in your Python data processing and analysis workflow.
###Code
import pyam
###Output
_____no_output_____
###Markdown
Connecting to a data resource (aka the database API of a Scenario Explorer instance)Accessing a data resource is done via a **Connection** object.By default, you can connect to all public Scenario Explorer instances.
###Code
conn = pyam.iiasa.Connection()
conn.valid_connections
###Output
_____no_output_____
###Markdown
If you have credentials to connect to a non-public or restricted Scenario Explorer instance, you can store this information by running the following command in a separate Python console:```import pyampyam.iiasa.set_config(, )```When initializing a new **Connection** instance, **pyam** will automatically search for the configuration in a known location. In this example, we will be retrieving data from the *IAMC 1.5°C Scenario Explorer hosted by IIASA* ([link](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer)), which provides the quantitative scenario ensemble underpinning the *IPCC Special Report on Global Warming of 1.5C* (SR15). This can be done either via the constructor:```pyam.iiasa.Connection('iamc15')```or, if you want to query multiple databases, via the explicit `connect()` method:```conn = pyam.iiasa.Connection()conn.connect('iamc15')``` We also provide some convenience functions to shorten the amount of code you have to write. Under the hood, `read_iiasa()` is just opening a connection to a database API and sending a query to the resource. In this tutorial, we will query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa(
'iamc15',
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
_____no_output_____
###Markdown
Here we pulled out all times series data for model(s) that start with 'MESSAGEix' that are in the 'World' region and associated with the two named variables. We also added the meta column "category", which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.plot.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the data resourceIf you're interested in what data is available in the data source, you can use **pyam.iiasa.Connection** to do so.
###Code
conn = pyam.iiasa.Connection('iamc15')
###Output
_____no_output_____
###Markdown
The **Connection** object has a number of useful functions for listing what's available in the data resource.These functions follow the conventions of the **IamDataFrame** class (where possible).A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different categorization and quantitative indicators are available for model/scenario combinations. These are usually called 'meta' indicators in **pyam**.We queried the meta-indicator "category" in the above example, but there are many more.You can get a list with the following command:
###Code
conn.meta_columns.head()
###Output
_____no_output_____
###Markdown
You can directly query the **Connection**, which will return a **pyam.IamDataFrame**...
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
###Output
_____no_output_____
###Markdown
...so that you can directly continue with your analysis and visualization workflow using **pyam**!
###Code
ax = df.filter(variable='Primary Energy|Coal').plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read directly from IIASA data resourcesThe IIASA Energy Program hosts a suite of **Scenario Explorer** instances and related infrastructure to support analysis of integrated-assessment pathways in IPCC reports and model comparison projects. High-profile use cases include the [IAMC 1.5°C Scenario Explorer hosted by IIASA](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer) supporting the *IPCC Special Report on Global Warming of 1.5°C* (SR15) and the Horizon 2020 project [CD-LINKS](https://data.ene.iiasa.ac.at/cd-links).IIASA's [modeling platform infrastructure](http://software.ene.iiasa.ac.at/ixmp-server) and the Scenario Explorer UI is not only a great resource on its own, but it also allows the underlying datasets to be directly queried.**pyam** takes advantage of this ability to allow you to easily pull data and work with it.
###Code
import pyam
###Output
_____no_output_____
###Markdown
Accessing an explorer is done via a `Connection` object.By default, you can connect to all public Scenario Explorer instances.
###Code
conn = pyam.iiasa.Connection()
conn.valid_connections
###Output
_____no_output_____
###Markdown
If you have additional credentials, you can supply them as well via the `creds` keyword argument:```pyam.iiasa.Connection(creds=(<username>, <password>))``` In this example, we will be pulling data from the Special Report on 1.5°C explorer. This can be done either via the constructor:```pyam.iiasa.Connection('iamc15')```or, if you want to query multiple databases, via the explicit `connect()` method:```conn = pyam.iiasa.Connection()conn.connect('iamc15')``` We also provide some convenience functions to shorten the amount of code you have to write. Under the hood, `read_iiasa()` is just opening a connection to a database and making a query on that data.In this tutorial, we will query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa(
'iamc15',
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
_____no_output_____
###Markdown
Here we pulled out all time series data for model(s) that start with 'MESSAGEix', are in the 'World' region, and are associated with the two named variables. We also added the "category" metadata, which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').line_plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the data resourceIf you're interested in what data is actually in the data source, you can use **pyam.iiasa.Connection** to do so.
###Code
conn = pyam.iiasa.Connection('iamc15')
###Output
_____no_output_____
###Markdown
The `conn` object has a number of useful functions for listing what's in the dataset. A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different kinds of indicators are available for model/scenario combinations.We queried the "category" metadata in the above example, but there are many more. You can see them with
###Code
conn.available_metadata().head()
###Output
_____no_output_____
###Markdown
You can directly query the **Connection**, which will give you a [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
df.head()
###Output
_____no_output_____
###Markdown
And you can easily turn this into a **pyam.IamDataFrame** to continue your analysis.
###Code
df = pyam.IamDataFrame(df)
ax = df.filter(variable='Primary Energy|Coal').line_plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Read directly from IIASA data resourcesThe IIASA Energy Program hosts a suite of **Scenario Explorer** instances and related infrastructure to support analysis of integrated-assessment pathways in IPCC reports and model comparison projects. High-profile use cases include the [IAMC 1.5°C Scenario Explorer hosted by IIASA](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer) supporting the *IPCC Special Report on Global Warming of 1.5°C* (SR15) and the Horizon 2020 project [CD-LINKS](https://data.ene.iiasa.ac.at/cd-links).IIASA's [modeling platform infrastructure](http://software.ene.iiasa.ac.at/ixmp-server) and the Scenario Explorer UI is not only a great resource on its own, but it also allows the underlying datasets to be directly queried.**pyam** takes advantage of this ability to allow you to easily pull data and work with it in your Python data processing and analysis workflow.
###Code
import pyam
###Output
_____no_output_____
###Markdown
Connecting to a data resource (aka the database API of a Scenario Explorer instance)Accessing a data resource is done via a **Connection** object.By default, you can connect to all public Scenario Explorer instances.
###Code
conn = pyam.iiasa.Connection()
conn.valid_connections
###Output
_____no_output_____
###Markdown
If you have credentials to connect to a non-public or restricted Scenario Explorer instance,you can store this information by running the following command in a separate Python console:```import pyampyam.iiasa.set_config(<username>, <password>)```When initializing a new **Connection** instance, **pyam** will automatically search for the configuration in a known location. In this example, we will be retrieving data from the *IAMC 1.5°C Scenario Explorer hosted by IIASA*([link](https://data.ene.iiasa.ac.at/iamc-1.5c-explorer)),which provides the quantitative scenario ensemble underpinningthe *IPCC Special Report on Global Warming of 1.5°C* (SR15).This can be done either via the constructor:```pyam.iiasa.Connection('iamc15')```or, if you want to query multiple databases, via the explicit `connect()` method:```conn = pyam.iiasa.Connection()conn.connect('iamc15')``` We also provide some convenience functions to shorten the amount of code you have to write. Under the hood, `read_iiasa()` is just opening a connection to a database API and sending a query to the resource.In this tutorial, we will query specific subsets of data in a manner similar to `pyam.IamDataFrame.filter()`.
###Code
df = pyam.read_iiasa(
'iamc15',
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World',
meta=['category']
)
###Output
_____no_output_____
###Markdown
Here we pulled out all time series data for model(s) that start with 'MESSAGEix', are in the 'World' region, and are associated with the two named variables. We also added the meta column "category", which tells us the climate impact categorisation of each scenario as assessed in the IPCC SR15.Let's plot CO2 emissions.
###Code
ax = df.filter(variable='Emissions|CO2').plot(
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
And now continue doing all of your analysis!
###Code
ax = df.plot.scatter(
x='Primary Energy|Coal',
y='Emissions|CO2',
color='category',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____
###Markdown
Exploring the data resourceIf you're interested in what data is available in the data source, you can use **pyam.iiasa.Connection** to do so.
###Code
conn = pyam.iiasa.Connection('iamc15')
###Output
_____no_output_____
###Markdown
The **Connection** object has a number of useful functions for listing what's available in the data resource.These functions follow the conventions of the **IamDataFrame** class (where possible).A few of them are shown below.
###Code
conn.models().head()
conn.scenarios().head()
conn.variables().head()
conn.regions().head()
###Output
_____no_output_____
###Markdown
A number of different categorization and quantitative indicators are available for model/scenario combinations. These are usually called 'meta' indicators in **pyam**.We queried the meta-indicator "category" in the above example, but there are many more.You can get a list with the following command:
###Code
conn.meta_columns.head()
###Output
_____no_output_____
###Markdown
You can directly query the **Connection**, which will return a **pyam.IamDataFrame**...
###Code
df = conn.query(
model='MESSAGEix*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
###Output
_____no_output_____
###Markdown
...so that you can directly continue with your analysis and visualization workflow using **pyam**!
###Code
ax = df.filter(variable='Primary Energy|Coal').plot(
color='scenario',
legend=dict(loc='center left', bbox_to_anchor=(1.0, 0.5))
)
###Output
_____no_output_____ |
Climate Assignment.ipynb | ###Markdown
Reflect Tables Into SQLAlchemy ORM
###Code
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
# Additional libraries used below for dates, dataframes, and plotting
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
1 Year Exploratory Climate Analysis
###Code
# Design a query to retrieve the last 12 months of precipitation data and plot the results
maxdate = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
maxdate = maxdate[0]
# Calculate the date 1 year before the most recent date in the dataset
year_ago = dt.datetime.strptime(maxdate, "%Y-%m-%d") - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
query = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= year_ago).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
precipitation_df = pd.DataFrame(query,columns=['date', 'precipitation'])
precipitation_df['date'] = pd.to_datetime(precipitation_df['date'], format='%Y-%m-%d')
precipitation_df.set_index('date', inplace=True)
# Sort the dataframe by date
precipitation_df = precipitation_df.sort_values(by='date', ascending=True)
# Use Pandas Plotting with Matplotlib to plot the data
precipitation_df.plot(title="Precipitation(1 Year)")
plt.legend()
plt.savefig("Images/Precipitation.png")
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation_df.describe()
###Output
_____no_output_____
###Markdown
Station Analysis
###Code
# Design a query to show how many stations are available in this dataset?
available = session.query(Measurement.station).distinct().count()
available
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
active = session.query(Measurement.station,func.count(Measurement.station)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
print(f"Most Active Stations Are:")
active
# Using the station id from the previous query, calculate the lowest, highest, and average temperature recorded at the most active station
# First find the most active station:
most_active = active[0][0]
# Then proceed
active_temps = session.query(func.min(Measurement.tobs), func.max(Measurement.tobs),
func.avg(Measurement.tobs)).filter(Measurement.station == most_active).all()
print(f"Most Active Station Temperature In Order of Lowest, Highest, and Average")
active_temps
# Choose the station with the highest number of temperature observations.
most_temps_station = session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).first()
most_temps_station= most_temps_station[0]
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temp_obs = session.query( Measurement.tobs).filter(Measurement.date >= year_ago).filter(Measurement.station == most_temps_station).all()
temp_obs = pd.DataFrame(temp_obs, columns=['temperature'])
temp_obs.plot.hist(bins=12, title="Temperature vs. Frequency Histogram")
plt.tight_layout()
plt.savefig("Images/TemperaturevsFrequency.png")
plt.show()
###Output
_____no_output_____ |
leetcode_easy.ipynb | ###Markdown
LeetCode-EASY 1323. Maximum 69 NumberYou are given a positive integer num consisting only of the digits 6 and 9. You may flip at most one digit (change a 6 to a 9, or a 9 to a 6). Return the maximum number you can get.
###Code
class Solution:
def maximum69Number (self, num: int) -> int:
res = ''
flag = True
for i in str(num):
if i == '6' and flag:
res += '9'
flag = False
else:
res += i
return int(res)
###Output
_____no_output_____
###Markdown
A more concise approach:
###Code
class Solution:
def maximum69Number(self, num: int) -> int:
return int(str(num).replace("6", "9", 1))
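
# Hedged sanity check added for illustration (not part of the original solution):
# flipping the first 6 encountered yields the largest value, e.g. 9669 -> 9969.
assert Solution().maximum69Number(9669) == 9969
assert Solution().maximum69Number(9999) == 9999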
###Output
_____no_output_____
###Markdown
1446. Consecutive CharactersGiven a string s, the "power" of the string is defined as the maximum length of a non-empty substring that contains only one unique character. Return the power of the string.
###Code
class Solution:
def maxPower(self, s: str) -> int:
res = 1
tmp = 1
for i in range(1, len(s)):
if s[i] == s[i-1]:
tmp += 1
res = max(res, tmp)
else:
tmp = 1
return res
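
# Hedged sanity check added for illustration: "leetcode" has the run "ee", so its power is 2,
# and a single-character string has power 1.
assert Solution().maxPower("leetcode") == 2
assert Solution().maxPower("a") == 1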
###Output
_____no_output_____
###Markdown
506. Relative RanksYou are given an integer array score of length n, where score[i] is the score of the i-th athlete in a competition. All the scores are distinct. The athletes are ranked by score: the athlete with the highest score gets rank 1, the second highest gets rank 2, and so on. The rank determines the award: rank 1 receives the gold medal "Gold Medal", rank 2 receives the silver medal "Silver Medal", rank 3 receives the bronze medal "Bronze Medal", and athletes ranked 4th through n-th only receive their rank number (i.e., the athlete ranked x receives "x"). Return an array answer of length n, where answer[i] is the award of the i-th athlete.
###Code
class Solution:
def findRelativeRanks(self, score):
dic = {num : i for i, num in enumerate(sorted(score, reverse=True))}
res = []
rank = ['Gold Medal', 'Silver Medal', 'Bronze Medal']
for num in score:
if dic[num] < 3:
res.append(rank[dic[num]])
else:
res.append(str(dic[num]+1))
return res
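
# Hedged sanity check added for illustration: scores [5,4,3,2,1] give medals to the top three
# and plain rank strings to the rest.
assert Solution().findRelativeRanks([5, 4, 3, 2, 1]) == ['Gold Medal', 'Silver Medal', 'Bronze Medal', '4', '5']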
###Output
_____no_output_____
###Markdown
1005. Maximize Sum Of Array After K NegationsYou are given an integer array nums and an integer k. Modify the array as follows:* choose an index i and replace nums[i] with -nums[i].Repeat this process exactly k times. The same index i may be chosen more than once. Return the largest possible sum of the array after modifying it in this way.
###Code
class Solution:
def largestSumAfterKNegations(self, nums, k):
nums.sort(key=abs, reverse=True)
res = 0
for num in nums:
if num < 0 and k > 0:
res -= num
k -= 1
else:
res += num
if k % 2:
res -= 2*abs(nums[-1])
return res
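
# Hedged sanity check added for illustration: with nums=[4,2,3] and k=1 the best move is to
# negate the 2, giving 4 + 3 - 2 = 5.
assert Solution().largestSumAfterKNegations([4, 2, 3], 1) == 5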
###Output
_____no_output_____ |
doc/source/examples/networktheory/DC Extrapolation for Time Domain .ipynb | ###Markdown
DC Extrapolation for Time Domain Extrapolates the low frequency points needed for time-domain transformations, when measurement doesn't include DC.Example:
###Code
import skrf as rf
from skrf.media import Coaxial
import matplotlib.pyplot as plt
from skrf.plotting import stylely
stylely()
%matplotlib inline
freq = rf.Frequency(0.11, 110, 1001)
coax1mm = Coaxial(freq, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
X = coax1mm.line(10, unit='mm', z0=50, name='X', embed=True)
Y = coax1mm.line(80, unit='mm', z0=75, name='Y', embed=True)
dut = X**Y**X
dut_dc = dut.extrapolate_to_dc(dc_sparam=[[0,1],[1,0]])
plt.figure()
plt.title('Step')
t, y = dut.s11.step_response(pad=2000)
t2, y2 = dut_dc.s11.step_response(pad=2000)
plt.plot(t*1e9, y, label='Original')
plt.plot(t2*1e9, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.figure()
plt.title('Impulse')
t, y = dut.s11.impulse_response(pad=2000)
t2, y2 = dut_dc.s11.impulse_response(pad=2000)
plt.plot(t*1e9, y, label='Original')
plt.plot(t2*1e9, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
###Output
_____no_output_____
###Markdown
DC Extrapolation for Time Domain Extrapolates the low frequency points needed for time-domain transformations, when measurement doesn't include DC.Example:
###Code
import skrf
import matplotlib.pyplot as plt
skrf.stylely()
freq = skrf.F(0.11,110,1001)
coax1mm = skrf.media.Coaxial(freq, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
X = coax1mm.line(10, 'mm', z0=50, name='X', embed=True)
Y = coax1mm.line(80, 'mm', z0=75, name='Y', embed=True)
dut = X**Y**X
dut_dc = dut.extrapolate_to_dc(dc_sparam=[[0,1],[1,0]])
plt.figure()
plt.title('Step')
t, y = dut.s11.step_response(pad=2000)
t2, y2 = dut_dc.s11.step_response(pad=2000)
plt.plot(t, y, label='Original')
plt.plot(t2, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.figure()
plt.title('Impulse')
t, y = dut.s11.impulse_response(pad=2000)
t2, y2 = dut_dc.s11.impulse_response(pad=2000)
plt.plot(t, y, label='Original')
plt.plot(t2, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.show(block=True)
###Output
/home/alex/anaconda3/lib/python3.5/site-packages/matplotlib/__init__.py:878: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
/home/alex/code/scikit-rf/skrf/network.py:2710: RuntimeWarning: Frequency doesn't begin from 0. Step response will not be correct.
RuntimeWarning
###Markdown
DC Extrapolation for Time Domain Extrapolates the low frequency points needed for time-domain transformations, when measurement doesn't include DC.Example:
###Code
import skrf
from skrf.media import Coaxial
import matplotlib.pyplot as plt
from skrf.plotting import stylely
stylely()
%matplotlib inline
freq = skrf.F(0.11,110,1001)
coax1mm = Coaxial(freq, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
X = coax1mm.line(10, 'mm', z0=50, name='X', embed=True)
Y = coax1mm.line(80, 'mm', z0=75, name='Y', embed=True)
dut = X**Y**X
dut_dc = dut.extrapolate_to_dc(dc_sparam=[[0,1],[1,0]])
plt.figure()
plt.title('Step')
t, y = dut.s11.step_response(pad=2000)
t2, y2 = dut_dc.s11.step_response(pad=2000)
plt.plot(t*1e9, y, label='Original')
plt.plot(t2*1e9, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.figure()
plt.title('Impulse')
t, y = dut.s11.impulse_response(pad=2000)
t2, y2 = dut_dc.s11.impulse_response(pad=2000)
plt.plot(t*1e9, y, label='Original')
plt.plot(t2*1e9, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.show(block=True)
###Output
_____no_output_____
###Markdown
DC Extrapolation for Time Domain When converting S-parameters to the time domain, the frequency points should be equally spaced and start from 0 Hz. Usually a VNA doesn't measure down to DC, so a DC point should be added afterwards.When a DC point is added, the frequency spacing between DC and the first measured point may not be equal to the spacing between the other measured points. Since the time-domain conversion relies on a Fourier transform that assumes regularly spaced points, the measurements should also be resampled to be equally spaced.
###Code
%matplotlib inline
import skrf as rf
from skrf.media import Coaxial
import matplotlib.pyplot as plt
from skrf.plotting import stylely
stylely()
freq = rf.Frequency(0.11, 110, 401)
coax1mm = Coaxial(freq, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
X = coax1mm.line(10, unit='mm', z0=50, name='X', embed=True)
Y = coax1mm.line(80, unit='mm', z0=75, name='Y', embed=True)
dut = X**Y**X
dut.name = 'Original'
dut_dc = dut.extrapolate_to_dc(dc_sparam=[[0,1],[1,0]], kind='cubic')
dut_dc.name = 'Extrapolated'
plt.figure()
plt.title('Step response')
dut.s11.plot_s_time_step()
dut_dc.s11.plot_s_time_step()
###Output
_____no_output_____
###Markdown
Interpolation method comparison
###Code
# Frequency points for measurements to be interpolated
# Frequencies from 0 to 1 GHz are missing
freq = rf.F(1, 110, 601)
# Frequency for ideal
freq2 = rf.F(0, 110, 601)
coax1mm = Coaxial(freq, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
coax1mm2 = Coaxial(freq2, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
# Generate the DUT
X = coax1mm.line(10, 'mm', z0=50, name='X', embed=True)
Y = coax1mm.line(80, 'mm', z0=75, name='Y', embed=True)
dut = X**Y**X
# DUT with full frequencies for comparison
X2 = coax1mm2.line(10, 'mm', z0=50, name='X', embed=True)
Y2 = coax1mm2.line(80, 'mm', z0=75, name='Y', embed=True)
dut_ideal = X2**Y2**X2
dut_ideal.name = 'ideal'
# Extrapolate to DC with different methods
dut_dc_rational = dut.extrapolate_to_dc(kind='rational', dc_sparam=[[0,1],[1,0]])
dut_dc_rational.name = 'rational'
dut_dc_linear = dut.extrapolate_to_dc(kind='linear', dc_sparam=[[0,1],[1,0]])
dut_dc_linear.name = 'linear'
dut_dc_cubic = dut.extrapolate_to_dc(kind='cubic', dc_sparam=[[0,1],[1,0]])
dut_dc_cubic.name = 'cubic'
plt.figure()
plt.title('Step Response')
dut_ideal.s11.plot_s_time_step()
dut_dc_rational.s11.plot_s_time_step()
dut_dc_cubic.s11.plot_s_time_step()
dut_dc_linear.s11.plot_s_time_step()
dut_ideal.s11['0-2ghz'].plot_s_smith()
dut_dc_rational.s11['0-2ghz'].plot_s_smith()
dut_dc_cubic.s11['0-2ghz'].plot_s_smith()
dut_dc_linear.s11['0-2ghz'].plot_s_smith()
###Output
_____no_output_____
###Markdown
Interpolation basis By default S-parameters are interpolated, but it's also possible to interpolate other parameters such as T, ABCD, Z or Y-parameters.Usually S-parameters are the best choice, but in this case T-parameters give better results. Note that T-parameters are defined only for two-ports with non-zero S21. They probably aren't the best choice in most cases, but for well-matched, transmissive two-ports they might be worth trying.
###Code
dut_dc_cubic_t = dut.extrapolate_to_dc(kind='cubic', dc_sparam=[[0,1],[1,0]], basis='t')
dut_dc_cubic_t.name = 'cubic T'
dut_dc_cubic.name = 'cubic S'
plt.figure()
plt.title('Step Response')
dut_ideal.s11.plot_s_time_step()
dut_dc_cubic.s11.plot_s_time_step()
dut_dc_cubic_t.s11.plot_s_time_step()
dut_ideal.s11['0-2ghz'].plot_s_smith()
dut_dc_cubic.s11['0-2ghz'].plot_s_smith()
dut_dc_cubic_t.s11['0-2ghz'].plot_s_smith()
###Output
_____no_output_____
###Markdown
DC Extrapolation for Time Domain Extrapolates the low frequency points needed for time-domain transformations, when measurement doesn't include DC.Example:
###Code
import skrf
import matplotlib.pyplot as plt
from skrf.plotting import stylely
stylely()
%matplotlib inline
freq = skrf.F(0.11,110,1001)
coax1mm = skrf.media.Coaxial(freq, z0=50, Dint=0.44e-3, Dout=1.0e-3, sigma=1e20)
X = coax1mm.line(10, 'mm', z0=50, name='X', embed=True)
Y = coax1mm.line(80, 'mm', z0=75, name='Y', embed=True)
dut = X**Y**X
dut_dc = dut.extrapolate_to_dc(dc_sparam=[[0,1],[1,0]])
plt.figure()
plt.title('Step')
t, y = dut.s11.step_response(pad=2000)
t2, y2 = dut_dc.s11.step_response(pad=2000)
plt.plot(t*1e9, y, label='Original')
plt.plot(t2*1e9, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.figure()
plt.title('Impulse')
t, y = dut.s11.impulse_response(pad=2000)
t2, y2 = dut_dc.s11.impulse_response(pad=2000)
plt.plot(t*1e9, y, label='Original')
plt.plot(t2*1e9, y2, label='Extrapolated')
plt.legend()
plt.xlabel('Time (ns)')
plt.show(block=True)
###Output
_____no_output_____ |
2_predicitive-maintenance/4_training.ipynb | ###Markdown
Predictive Maintenance of Turbofan Engines SageMaker MXNet EstimatorFirst we'll import our variables from the previous notebook
###Code
import pickle
with open('shared_vars', 'rb') as f:
bucket, prefix, s3_train_data = pickle.load(f)
###Output
_____no_output_____
###Markdown
MXNet Model Training Script (Out of scope)Training MXNet models using MXNet Estimators is a two-step process. First, you prepare your training script; second, you run it on SageMaker via an MXNet Estimator. The training script we have prepared for the model is located in the entry_point folder.The training script contains functions to create the model for training and for inference. We also have functions to convert our dataframes into a Gluon Dataset so that the data can be efficiently prefetched, transformed into the numerical features used by the network, and padded so that we can learn from multiple samples in batches.For more information on how to set up a training script for SageMaker using the MXNet estimator see: https://sagemaker.readthedocs.io/en/stable/using_mxnet.html#preparing-the-mxnet-training-script **Important note:** the upper bound for the RUL is set to 130 in the training script; this means that the predictions from the model will be a fraction of this value.
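The actual `entry_point/script.py` is not reproduced in this notebook. Purely as orientation, the sketch below is a hypothetical, simplified skeleton of a SageMaker MXNet script-mode entry point: the argument names mirror the hyperparameters passed to the estimator further down, and `SM_CHANNEL_TRAIN` / `SM_MODEL_DIR` are environment variables that SageMaker sets inside the training container. It is not the notebook's real training script.

```python
# Hypothetical, simplified skeleton (NOT the notebook's real entry_point/script.py).
# The real script also defines the Gluon network, converts the dataframes into a
# Gluon Dataset with padding/prefetching, and caps the RUL target at 130.
import argparse
import os


def parse_args():
    parser = argparse.ArgumentParser()
    # hyperparameters passed to the MXNet estimator arrive as command-line arguments
    parser.add_argument('--num-datasets', type=int, default=4)
    parser.add_argument('--num-gpus', type=int, default=1)
    parser.add_argument('--epochs', type=int, default=500)
    parser.add_argument('--optimizer', type=str, default='adam')
    parser.add_argument('--batch-size', type=int, default=1)
    parser.add_argument('--log-interval', type=int, default=100)
    # SageMaker injects data and output locations through environment variables
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    return parser.parse_args()


def model_fn(model_dir):
    """Called by the serving container to load the trained network for inference."""
    raise NotImplementedError  # the real script restores the Gluon model from model_dir


if __name__ == '__main__':
    args = parse_args()
    # train the network for args.epochs on the data under args.train,
    # then save the parameters to args.model_dir
```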
###Code
import numpy as np
import pandas as pd
import sagemaker
import os
import io
import json
import boto3
from time import strftime, gmtime
from sagemaker import get_execution_role
from sagemaker.mxnet import MXNet
role = get_execution_role()
###Output
_____no_output_____
###Markdown
We import the training data that we previously saved to CSV.
###Code
train_df = []
for i in range(0,4):
df = pd.read_csv('data/train-{:01d}.csv'.format(i), index_col=0)
train_df.append(df)
###Output
_____no_output_____
###Markdown
Train MXNet EstimatorWe now set up the SageMaker training job by creating an MXNet estimator. We pass some arguments to the MXNet estimator constructor, such as `entry_point`, `instance_count` and `instance_type`.
###Code
model_name = "pred-maintenance-mxnet-model"
training_job_name = "{}-{}".format(model_name, strftime("%Y-%m-%d-%H-%M-%S", gmtime()))
train_instance_type = 'ml.p3.2xlarge'
output_location = 's3://{}/{}/output'.format(bucket, prefix)
m = MXNet(entry_point='script.py',
source_dir='entry_point',
py_version='py3',
role=role,
instance_count=1,
instance_type=train_instance_type,
output_path=output_location,
hyperparameters={'num-datasets' : len(train_df),
'num-gpus': 1,
'epochs': 500,
'optimizer': 'adam',
'batch-size':1,
'log-interval': 100},
input_mode='File',
use_spot_instances = True,
max_run = 3600,
max_wait = 3600,
framework_version='1.6.0')
###Output
_____no_output_____
###Markdown
We kick off the training job by calling the `fit` method. `fit` has a required argument of the S3 training data location, and we also pass an optional job_name argument which we will use later to call the model for batch transformation.
###Code
m.fit({'train': s3_train_data}, job_name=training_job_name)
###Output
_____no_output_____
###Markdown
Deploy the model Create Transformer ModelWe now call the transformer function which will take the trained model and create a SageMaker model suitable for deployment.
###Code
batch_output = 's3://{}/{}/{}'.format(bucket, prefix, 'batch-inference')
transformer = m.transformer(instance_count=1, instance_type='ml.m4.xlarge', output_path=batch_output)
###Output
_____no_output_____
###Markdown
Batch transform of test data using the transformer modelUsing the `transformer` model that we just created we can run a SageMaker Batch Transformation job to get some predictions on the test data sets that we have.Below is a function that copies some test data to a new location in S3, where it's then used as the input to the `transform` function for the SageMaker Batch Transformation Job.
###Code
s3_test_key = '{}/data/test-0.csv'.format(prefix)
s3_transform_input = os.path.join(prefix, "batch-transform-input")
def get_transform_input():
s3_client = boto3.client('s3')
s3_response = s3_client.get_object(Bucket=bucket, Key=s3_test_key)
test_file = s3_response["Body"].read()
test_df_entry = pd.read_csv(io.BytesIO(test_file))
test_data = test_df_entry[test_df_entry['id']==0+1][test_df_entry.columns[2:-1]].values
test_data = test_data[0:test_data.shape[0]-1,:].astype('float32')
data_payload = {'input':np.expand_dims(test_data, axis=0).tolist()}
job_name = 'predictive-maintenance-batch-transform-job-{}'.format(strftime("%Y-%m-%d-%H-%M-%S", gmtime()))
s3_batch_transform_input_key = os.path.join(s3_transform_input, job_name)
s3_client.put_object(Body=json.dumps(data_payload),
Bucket=bucket,
Key=s3_batch_transform_input_key)
return job_name, 's3://{}/{}'.format(bucket, s3_batch_transform_input_key)
job_name, input_key = get_transform_input()
transformer.transform(input_key, wait=True)
###Output
_____no_output_____
###Markdown
View prediction resultsOnce the SageMaker Batch Transform job completes we can see the prediction of the fractional remaining useful life for the sensor readings we provided.
###Code
def get_transform_output():
s3_client = boto3.client('s3')
s3_response = s3_client.get_object(Bucket=bucket, Key=os.path.join(prefix,
'batch-inference',
job_name+'.out'))
transform_out = np.array(eval(s3_response["Body"].read()))
return transform_out
transform_output = get_transform_output()
print(transform_output)
###Output
_____no_output_____
###Markdown
To see the actual number of predicted cycles until failure we simply multiply the output by our upper bound that was set in the training script, 130 in this case.
###Code
print(transform_output * 130)
###Output
_____no_output_____ |
content/Pandas Operations/Distribution probabilities for each column data frame, in one plot.ipynb | ###Markdown
Loop over columns:
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
df = pd.DataFrame(np.random.randn(14,5), columns=list("ABCDE"))
df
fig, axes = plt.subplots(ncols=5)
for ax, col in zip(axes, df.columns):
sns.distplot(df[col], ax=ax)
plt.show()
g = sns.FacetGrid(df.melt(), col="variable")
g.map(sns.distplot, "value")
plt.show()
###Output
_____no_output_____ |
_notebooks/2020-12-28-week4-day1.ipynb | ###Markdown
Week 4, Day 1 (Introduction to Reinforcement Learning Framework)> Welcome to the first day (Week 4) of the McE-51069 course.- sticky_rank: 10- toc: true- badges: false- comments: false- categories: [deep_learning, reinforcement_learning] Lecture NotebookYou can download the notebook files for today's lecture [here](https://github.com/ytu-cvlab/week4-day1/archive/main.zip). Introduction to OpenAI Gym[Gym](https://gym.openai.com/) is a toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Pinball. It makes no assumptions about the structure of your agent, and is compatible with any numerical computation library, such as TensorFlow or Theano. InstallationFirst, we need to install `gym` on our local machine. To do this, simply install `gym` using pip.
###Code
!pip install gym
###Output
_____no_output_____
###Markdown
Gym EnvironmentsThe OpenAI Gym provides a diverse suite of simulated environments, ranging from easy to challenging tasks, that we can use to test our reinforcement learning algorithms. These include **Classic control** games, **Atari**, **MuJoCo**, **Robotics** and much more. You can find more about gym environments [here](https://gym.openai.com/envs).In this course, we will focus on **Toy text**, **Classic control** and **Atari** environments. Classic control and Toy textThese environments include small-scale tasks, mostly from the RL literature. AtariThese include classic Atari games, which had a big impact on reinforcement learning research. Creating your first environmentWe can simply call `gym.make("env_name")` to create an environment object. Here "env_name" denotes the name of the environment we are calling. All the available names of the environments can be found [here]().
###Code
#collapse-hide
# importing openai gym library
import gym
import numpy as np
# create classic cart-pole env
env = gym.make('MountainCar-v0')
# reset/initialize the env
env.reset()
for _ in range(200):
env.step(env.action_space.sample()) # take a random action
env.render() # render the environment
# close the rendering
env.close()
###Output
_____no_output_____
###Markdown
We can interact with the environment through two main methods:1. `env.reset()`2. `env.step(action)`> The `obs = env.reset()` method initializes the environment and returns an initial observation (or state). We will learn more about gym observations later.> The `obs_next, reward, done, info = env.step(action)` method interacts with the environment by taking an action as an input and returns four values: obs_next (env.observation_space), reward (float), done (bool) and info (dict). A compact example loop is sketched below. SpacesEvery gym environment comes with an action_space and an observation_space. The formats of action and observation of an environment are defined by `env.action_space` and `env.observation_space` respectively, which are of type `Space`.Types of gym `spaces`:- `gym.spaces.Discrete(n)`: a fixed range of non-negative discrete numbers from 0 to n-1.- `gym.spaces.Box`: represents an n-dimensional box, where the upper and lower bounds of each dimension are defined by `Box.low` and `Box.high`. Let's explore these two spaces.
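Here is the compact loop referenced above (a sketch added for illustration; it is not part of the original notebook), with a random action standing in for a learned policy:

```python
import gym

env = gym.make('CartPole-v0')
obs = env.reset()                                    # initial observation
total_reward = 0.0
while True:
    action = env.action_space.sample()               # random policy placeholder
    obs_next, reward, done, info = env.step(action)  # the four return values
    total_reward += reward
    obs = obs_next
    if done:                                         # episode finished
        break
env.close()
print('episode return:', total_reward)
```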
###Code
#collapse-hide
# import spaces module from gym
from gym import spaces
space = spaces.Discrete(8) # Set with 8 elements {0, 1, 2, ..., 7}
space
#collapse-hide
space.sample()
#collapse-hide
import warnings
low_value = np.array([0,0,-1])
high_value = np.array([1,1,1])
box = spaces.Box(low_value, high_value)
box
#collapse-hide
box.low
#collapse-hide
box.high
###Output
_____no_output_____
###Markdown
We can now check which spaces the previous cart-pole example uses. You can find more about the spaces of the cart-pole environment [here](https://github.com/openai/gym/blob/master/gym/envs/classic_control/cartpole.py#L26).
###Code
#collapse-hide
env = gym.make('CartPole-v0')
print(env.action_space)
print(env.observation_space)
#collapse-hide
env.observation_space.low
#collapse-hide
env.observation_space.high
###Output
_____no_output_____
###Markdown
Monte Carlo MethodsIn this notebook, we will learn Monte Carlo Methods using the Blackjack environment from OpenAI Gym. You can find more about this environment [here](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py). BlackJack EnvLet's explore the gym Blackjack environment. We begin by importing the necessary modules and packages.
###Code
#collapse-hide
import gym
import numpy as np
from collections import defaultdict
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
from utilities import plot_v, plot_policy
###Output
_____no_output_____
###Markdown
We can create an instance of the blackjack env by calling `gym.make('Blackjack-v0')`.
###Code
#collapse-hide
env = gym.make('Blackjack-v0')
###Output
_____no_output_____
###Markdown
Now, let's look at the observation and action spaces in more detail.
###Code
#collapse-hide
env.observation_space
#collapse-hide
env.action_space
###Output
_____no_output_____
###Markdown
The observation space (state) is a tuple of three discrete spaces. You can find detailed information on the observation space of the blackjack env [here](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py#L66).- the player's current sum $\in \{0, 1, \ldots, 31\}$- the dealer's one showing card $\in \{1, 2, \ldots, 10\}$- whether the player holds a usable ace (0 = `false` or 1 = `true`)The action space consists of two discrete values. You can find detailed information on the action space of the blackjack env [here](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py#L56).- 0 = `STICK`- 1 = `HIT` Before training our agent with Monte Carlo Methods, let's play blackjack with a random policy.
###Code
#collapse-hide
# number of episodes to play
num_of_games = 2
for episode in range(num_of_games):
# initialize the env
state = env.reset()
while True:
# print current observation state
print(state)
# select action (hit or stick) randomly
action = env.action_space.sample()
# interacts with the env
state, reward, done, info = env.step(action)
# break loop if wins or lose
if done:
print('End game! Reward: ', reward)
print('You won :)\n') if reward > 0 else print('You lost :(\n')
break
###Output
(12, 1, False)
End game! Reward: -1.0
You lost :(
(14, 5, False)
(15, 5, False)
End game! Reward: -1.0
You lost :(
###Markdown
Monte Carlo PredictionWe begin Monte Carlo Methods by implementing `Every-visit` and `First-visit` MC Prediction algorithms for action-values. We will closely follow Example 5.1 of the Sutton and Barto textbook. > We begin by considering a policy that `sticks` if the player's sum is 20 or 21, and otherwise `hits`.The following function implements this policy. The function accepts an instance of OpenAI Gym's Blackjack environment as an input and returns an episode as an output, which is a list of (state, action, reward) tuples.
###Code
#collapse-hide
def play_single_episode(env):
episode = []
state = env.reset()
while True:
# custom policy
action = 0 if state[0] >= 20 else 1
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward)) # (S0, A0, R1), (S1, A1, R2), ...
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Let's play Blackjack with the implemented policy.
###Code
#collapse-hide
env = gym.make('Blackjack-v0')
num_of_games = 3
for i in range(num_of_games):
print(play_single_episode(env))
###Output
[((16, 10, False), 1, -1.0)]
[((15, 3, False), 1, 0.0), ((17, 3, False), 1, 0.0), ((18, 3, False), 1, -1.0)]
[((12, 4, False), 1, -1.0)]
###Markdown
Now, we are ready to implement our **MC Prediction** algorithm. We will start with **Every-Visit MC Prediction**, which is easier to understand and implement. The pseudocode below is used to implement our algorithm.We will call this function `every_visit_mc_prediction`.The function accepts four input arguments:- `env`: an instance of OpenAI Gym's Blackjack environment- `num_of_episodes`: number of episodes to play- `episode_generator`: generate an episode of $(S_{i-1}, A_{i-1} , R_i)$ tuples using custom policy- `gamma`: discount rate (default = 1.0)The function returns an action-value:- `Q`: Q-Table of state, action pairs $Q(s,a)$
###Code
#collapse-hide
def every_visit_mc_prediction(env, num_of_episodes, episode_generator, gamma=1.0):
# initialize empty dictionaries of arrays
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
Returns = defaultdict(lambda: np.zeros(env.action_space.n))
for i in range(1, num_of_episodes+1):
# generate a single episode (S0, A0, R1,..., ST, AT, RT)
episode = episode_generator(env)
# for each tuple in episode
for i, (s, a, r) in enumerate(episode):
# calculate expected discounted return G
# from current state onwards
G = sum([x[2]*(gamma**i) for i,x in enumerate(episode[i:])])
Returns[s][a] += G
N[s][a] += 1.0
Q[s][a] = Returns[s][a] / N[s][a]
return Q
###Output
_____no_output_____
###Markdown
Let's run our *Every-Visit MC Prediction* algorithm. Set the desired number of episodes.
###Code
#collapse-hide
env.seed(0)
num_of_episodes = 10000
# run every-visit mc prediction algorithm for n episodes
Q = every_visit_mc_prediction(env, num_of_episodes, play_single_episode)
###Output
_____no_output_____
###Markdown
Now, let's plot our predicted state-value function using our test policy.
###Code
#collapse-hide
# obtain the corresponding state-value function for our test policy
V = dict((k, (k[0]>=20)*v[0] + (k[0]<20)*v[1]) for k, v in Q.items())
# plot the state-value function heatmap
plot_v(V)
###Output
_____no_output_____
###Markdown
We can also test the **First-Visit MC Prediction** algorithm by using the pseudocode provided below.We will call this function `first_visit_mc_prediction`.
###Code
#collapse-hide
def first_visit_mc_prediction(env, num_of_episodes, episode_generator, gamma=1.0):
# initialize empty dictionaries of arrays
N = defaultdict(lambda: np.zeros(env.action_space.n))
Q = defaultdict(lambda: np.zeros(env.action_space.n))
Returns = defaultdict(lambda: np.zeros(env.action_space.n))
for i in range(1, num_of_episodes+1):
# generate a single episode (S0, A0, R1,..., ST, AT, RT)
episode = episode_generator(env)
# create first-visit-check set
first_visit_check = set()
# for each tuple in episode
for i, (s, a, r) in enumerate(episode):
# if s is already in visit-check set
if s in first_visit_check:
# skip this state
continue
# add state to set
first_visit_check.add(s)
# calculate expected discounted return G
# from current state onwards
G = sum([x[2]*(gamma**i) for i,x in enumerate(episode[i:])])
Returns[s][a] += G
N[s][a] += 1.0
Q[s][a] = Returns[s][a] / N[s][a]
return Q
env.seed(0)
num_of_episodes = 10000
# run every-visit mc prediction algorithm for n episodes
Q = first_visit_mc_prediction(env, num_of_episodes, play_single_episode)
#collapse-hide
# obtain the corresponding state-value function for our test policy
V = dict((k, (k[0]>=20)*v[0] + (k[0]<20)*v[1]) for k, v in Q.items())
# plot the state-value function heatmap
plot_v(V)
###Output
_____no_output_____
###Markdown
Monte Carlo ControlWe just finished the first step in Generalized Policy Iteration (GPI), which is the Policy Evaluation step. We did this by implementing two different MC prediction algorithms. Let's move to the next step, where we will improve our policy.In this notebook, we will implement the **Every-Visit Constant-$\alpha$ MC Control** algorithm. $\epsilon$-Greedy PolicyWe will start by implementing the **$\epsilon$-greedy** policy.$\pi(a|s) \leftarrow \begin{cases} 1-\epsilon+\epsilon/{|A(s)|} & \text{if } a=\text{argmax}_a Q(s,a)\\ \epsilon/{|A(s)|} & \text{otherwise}\end{cases}$
###Code
#collapse-hide
number_of_actions = 4
policy = np.ones(number_of_actions) * 1/number_of_actions
print("Initial Policy: ",policy)
#collapse-hide
# Choose epsilon value
epsilon = 1
# for all actions
policy = np.ones(number_of_actions) * epsilon / number_of_actions
print(policy)
#collapse-hide
# only for maximum action
max_action = 3
policy[max_action] = 1 - epsilon + (epsilon / number_of_actions)
print(policy)
#collapse-hide
np.random.choice(np.arange(number_of_actions), p=policy)
###Output
_____no_output_____
###Markdown
We will call this function `epsilon_greedy_action`.The function accepts three input arguments:- `epsilon` (value between 0 and 1)- `number_of_actions` (action space size)- `Q` (Action-Value function)The function returns `action` index, which is chosen using the epsilon-greedy policy.
###Code
#collapse-hide
def epsilon_greedy_action(epsilon, number_of_actions, Q):
policy = np.ones(number_of_actions) * epsilon / number_of_actions
max_action_index = np.argmax(Q)
policy[max_action_index] = 1 - epsilon + (epsilon / number_of_actions)
action = np.random.choice(np.arange(number_of_actions), p=policy)
return action
###Output
_____no_output_____
###Markdown
Let's implement an episode generator function similar to the one from MC Prediction, this time using our epsilon-greedy policy.
###Code
#collapse-hide
def play_single_episode(env, Q, epsilon):
episode = []
state = env.reset()
while True:
# epsilon-greedy policy
action = epsilon_greedy_action(epsilon, env.action_space.n, Q[state])
next_state, reward, done, info = env.step(action)
episode.append((state, action, reward)) # (S0, A0, R1), (S1, A1, R2), ...
state = next_state
if done:
break
return episode
###Output
_____no_output_____
###Markdown
Every-Visit Constant-$\alpha$ MC ControlNow, we are ready to implement the **Every-Visit Constant-$\alpha$ MC Control** algorithm. The pseudocode below is used to implement our algorithm.We will call this function `every_visit_mc_control`.The function accepts eight input arguments:- `env`: an instance of OpenAI Gym's Blackjack environment- `num_of_episodes`: number of episodes to play- `episode_generator`: generate an episode of $(S_{i-1}, A_{i-1} , R_i)$ tuples using epsilon-greedy policy- `alpha`: step-size parameter in update equation- `gamma`: discount rate (default = 1.0)- `epsilon`: epsilon value for policy (default = 1.0)- `epsilon_decay`: decay rate of the epsilon (default = 0.99999)- `epsilon_min`: minimum value of the epsilon (default = 0.05)The function returns an action-value:- `Q`: Q-Table of state, action pairs $Q(s,a)$
###Code
#collapse-hide
def every_visit_mc_control(env, num_of_episodes, episode_generator, alpha,
gamma=1.0, epsilon=1.0,
epsilon_decay=0.99999, epsilon_min=0.05):
# initialize empty dictionary Q
Q = defaultdict(lambda: np.zeros(env.action_space.n))
for i in range(1, num_of_episodes+1):
# calculate epsilon value
epsilon = max(epsilon*epsilon_decay, epsilon_min)
# generate a single episode (S0, A0, R1,..., ST, AT, RT)
# using epsilon-greedy policy
episode = episode_generator(env, Q, epsilon)
# for each tuple in episode
for i, (s, a, r) in enumerate(episode):
# calculate expected discounted return G
# from current state onwards
G = sum([x[2]*(gamma**i) for i,x in enumerate(episode[i:])])
Q[s][a] = Q[s][a] + alpha*(G - Q[s][a])
return Q
# Hyperparameters
num_of_episodes = 10000
alpha = 0.01
gamma = 1.0
epsilon = 1.0
epsilon_decay = 0.9999
epsilon_min = 0.09
# run every-visit constant-alpha mc control algorithm
Q = every_visit_mc_control(env, num_of_episodes, play_single_episode, alpha,
gamma, epsilon, epsilon_decay, epsilon_min)
#collapse-hide
# obtain the corresponding state-value function
V = dict((k,np.max(v)) for k, v in Q.items())
# plot the state-value function heatmap
plot_v(V)
#collapse-hide
# obtain the corresponding best policy
policy = dict((k,np.argmax(v)) for k, v in Q.items())
# plot the state-value function heatmap
plot_policy(policy)
###Output
_____no_output_____ |
notebooks/API creation.ipynb | ###Markdown
API creation
###Code
import os
hello_world_script_file=os.path.join(os.path.pardir,'src','models','hello_world_api.py')
%%writefile $hello_world_script_file
from flask import Flask,request
app=Flask(__name__)
@app.route('/api',methods=['POST'])
def say_hello():
data=request.get_json(force=True)
name=data['name']
return "Hello {0}".format(name)
if __name__=='__main__':
app.run(port=10001,debug=True)
#run api in gitbash using python hello_world_api.py in folder in which it resides
import json
import requests
url='http://127.0.0.1:10001/api'
data=json.dumps({'name':'sunny'})
r=requests.post(url,data)
print(r.text)
###Output
Hello sunny
###Markdown
Machine learning API using Flask Building the API
###Code
machine_learning_api_script_file=os.path.join(os.path.pardir,'src','models','titanic_api1.py')
%%writefile $machine_learning_api_script_file
from flask import Flask,request
import pickle
import numpy as np
import pandas as pd
import os
import json
app=Flask(__name__)
# load model and scaler files
model_path=os.path.join(os.path.pardir,os.path.pardir,'models')
model_file_path=os.path.join(model_path,'model1.pkl')
scaler_file_path=os.path.join(model_path,'scaler1.pkl')
# model = pickle.load(f)
#with open(scaler_file_path, 'rb') as f:
# scaler = pickle.load(f)
model=pickle.load(open(model_file_path,'rb'))
columns=[u'Age', u'Fare', u'FamilySize', u'IsMother', u'Ismale', u'Deck_A',\
u'Deck_B', u'Deck_C', u'Deck_D', u'Deck_E', u'Deck_F', u'Deck_G', u'Deck_Z',\
u'Pclass_1', u'Pclass_2', u'Pclass_3', u'Title_Lady', u'Title_Master',\
u'Title_Miss', u'Title_Mr', u'Title_Mrs', u'Title_Officer', u'Title_Sir',\
u'Fare_Bin_very_low', u'Fare_Bin_low', u'Fare_Bin_high',\
u'Fare_Bin_very_high', u'Embarked_C', u'Embarked_Q', u'Embarked_S',\
u'AgeState_Adult', u'AgeState_Child']
@app.route('/api',methods=['POST'])
def make_prediction():
#read json and convert into json string
data1=json.dumps(request.get_json(True))
#create panda dataframe
df=pd.read_json(data1)
#extract passengerIds
passenger_ids=df['PassengerId'].ravel()
#actual survived values
actuals=df['Survived'].ravel()
#extract required columns and convert to matrix
X=df[columns].as_matrix().astype('float')
#transform the input
#X_scaled=scaler.transform(X)
#make predictions
predictions=model.predict(X)
#create response dataframe
df_response=pd.DataFrame({'PassengerId':passenger_ids,'Predicted':predictions,'Actuals':actuals})
#return json
return df_response.to_json()
if __name__=='__main__':
#host flask app at port 10001
app.run(port=10002,debug=True)
###Output
Overwriting ..\src\models\titanic_api1.py
###Markdown
Invoke api using request
###Code
import os
import pandas as pd
test_data_set_path=os.path.join(os.path.pardir,'data','processed')
test_data_file=os.path.join(test_data_set_path,'train.csv')
train_df=pd.read_csv(test_data_file)
type(train_df)
import numpy as np
test_data=train_df[train_df['Survived']==1][:5]
import requests
def make_api_request(data):
#url
url='http://127.0.0.1:10002/api'
#post req
r=requests.post(url,data)
return r.json()
output=make_api_request(test_data.to_json())
print(output)
import io
output1=make_api_request(train_df.to_json())
df_result=pd.read_json(json.dumps(output1))
df_result
#res=pd.read_csv(output1.text)
#res.head()
#output1
#urlData = requests.get(url).content
#rawData = pd.read_csv(io.StringIO(output1.decode('utf-8')))
#rawData.head()
import numpy as np
np.mean(df_result.Actuals==df_result.Predicted)
###Output
_____no_output_____ |
notebooks/finance/preprocessing.ipynb | ###Markdown
Adult preprocessingThis notebook contains all preprocessing of the [Adult dataset](https://archive.ics.uci.edu/ml/datasets/Adult)
###Code
from pathlib import Path
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
The column names are available in the [`adult.names`](https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.names) file which contains lots of additional information. We hard code them here for convenience.
###Code
names = [
"age",
"workclass",
"fnlwgt",
"education",
"education_num",
"marital_status",
"occupation",
"relationship",
"race",
"sex",
"capital_gain",
"capital_loss",
"hours_per_week",
"native_country",
"salary",
]
###Output
_____no_output_____
###Markdown
The data contains a "native country" feature. Other than the United States and Mexico, many of the countries have low numbers of observations, so we group them into a single "other" category using the function below.
###Code
def clean_string(s):
"""
Helper function that strips leading / trailing whitespace, lower
cases, and replaces hyphens with underscores.
"""
return s.strip().lower().replace("-", "_")
def parse_native_country(country):
"""
Group countries other than United-States and Mexico into single
"other" category"
"""
country = clean_string(country)
if country == "united_states" or country == "mexico":
return country
return "other"
###Output
_____no_output_____
###Markdown
Load train set and apply some basic preprocessing. Categorical features are left as strings for now to be one-hot encoded shortly. We drop `fnlwgt` as it represents census weights that are not relevant to our analysis, and `education-num` as it duplicates data present in the `education` feature which we use instead.
###Code
train = (
pd.read_csv(
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
header=None,
na_values=[" ?"],
names=names,
)
.drop(columns=["fnlwgt", "education_num"])
# drop all rows with missing values
.dropna()
.reset_index(drop=True)
# simple preprocessing on columns
.assign(
# clean all string columns
education=lambda df: df.education.map(clean_string),
marital_status=lambda df: df.marital_status.map(clean_string),
occupation=lambda df: df.occupation.map(clean_string),
race=lambda df: df.race.map(clean_string),
relationship=lambda df: df.relationship.map(clean_string),
workclass=lambda df: df.workclass.map(clean_string),
# clean and aggregate native_country
native_country=lambda df: df.native_country.map(parse_native_country),
# encode binary features as integers
salary=lambda df: (df.salary == " >50K").astype(np.int32),
sex=lambda df: (df.sex == " Male").astype(np.int32),
)
)
###Output
_____no_output_____
###Markdown
Load the test set and apply similar basic preprocessing. Note that the `adult.test` file has an extra line at the start which we skip, and that the `salary` column is coded slightly differently from `adult.data` (it has an extra `.`).
###Code
test = (
pd.read_csv(
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
header=None,
na_values=[" ?"],
skiprows=1,
names=names,
)
.drop(columns=["fnlwgt", "education_num"])
# drop all rows with missing values
.dropna()
.reset_index(drop=True)
# simple preprocessing on columns
.assign(
# clean all string columns
education=lambda df: df.education.map(clean_string),
marital_status=lambda df: df.marital_status.map(clean_string),
occupation=lambda df: df.occupation.map(clean_string),
race=lambda df: df.race.map(clean_string),
relationship=lambda df: df.relationship.map(clean_string),
workclass=lambda df: df.workclass.map(clean_string),
# clean and aggregate native_country
native_country=lambda df: df.native_country.map(parse_native_country),
# encode binary features as integers
# note extra '.' in test set not present in train set
salary=lambda df: (df.salary == " >50K.").astype(np.int32),
sex=lambda df: (df.sex == " Male").astype(np.int32),
)
)
###Output
_____no_output_____
###Markdown
Sanity check that categories in categorical variables are the same for train and test sets.
###Code
assert set(train.education) == set(test.education)
assert set(train.race) == set(test.race)
assert set(train.relationship) == set(test.relationship)
assert set(train.marital_status) == set(test.marital_status)
one_hot_features = [
"workclass",
"education",
"occupation",
"race",
"relationship",
"marital_status",
"native_country",
]
cts_features = ["age", "capital_gain", "capital_loss", "hours_per_week"]
binary_features = ["sex", "salary"]
###Output
_____no_output_____
###Markdown
We one-hot encode categorical features. We'll keep both one-hot encodings and the original categorical encodings for now, as we want to construct two versions of the data, one for training the model on, and one for making visualisations.
###Code
train["race"].value_counts()
train_df = pd.concat(
[train, pd.get_dummies(train.loc[:, one_hot_features], dtype=np.int32)],
axis=1,
)
test_df = pd.concat(
[test, pd.get_dummies(test.loc[:, one_hot_features], dtype=np.int32)],
axis=1,
)
###Output
_____no_output_____
###Markdown
Sanity check that the columns are the same (including order).
###Code
assert train_df.columns.tolist() == test_df.columns.tolist()
###Output
_____no_output_____
###Markdown
We further split the train set to create a validation set.
###Code
train_df, val_df = train_test_split(train_df, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Save all splits to disk without one-hot encodings. This version of the data will be used for exploration and making plots.
###Code
artifacts_dir = Path("../../artifacts")
data_dir = artifacts_dir / "data" / "adult"
original_features = cts_features + one_hot_features + binary_features
train_df[original_features].to_csv(
data_dir / "processed" / "train.csv", index=False
)
val_df[original_features].to_csv(
data_dir / "processed" / "val.csv", index=False
)
test_df[original_features].to_csv(
data_dir / "processed" / "test.csv", index=False
)
###Output
_____no_output_____
###Markdown
We now scale the continuous features and drop the categorical encodings, which will be used for model training.
###Code
ss = StandardScaler()
train_df[cts_features] = ss.fit_transform(train_df[cts_features])
val_df[cts_features] = ss.transform(val_df[cts_features])
test_df[cts_features] = ss.transform(test_df[cts_features])
train_df.drop(columns=one_hot_features).to_csv(
data_dir / "processed" / "train-one-hot.csv", index=False
)
val_df.drop(columns=one_hot_features).to_csv(
data_dir / "processed" / "val-one-hot.csv", index=False
)
test_df.drop(columns=one_hot_features).to_csv(
data_dir / "processed" / "test-one-hot.csv", index=False
)
###Output
_____no_output_____ |
Music_Transformer_Public.ipynb | ###Markdown
© Copyright 2021 Aditya GomatamLicensed under the Apache License, Version 2.0 (the "License"); Music Transformer Model ___SpectralDoy, May 8 - Aug 26 2020___This notebook builds and trains a Music Transformer decoder model to generate music, based on the description in Huang et al., 2018 (https://arxiv.org/pdf/1809.04281.pdf), with ideas drawn from Shaw et al., 2018 (Self-Attention with Relative Position Representations: https://arxiv.org/pdf/1803.02155.pdf) and Vaswani et al., 2017 (Attention is All You Need: https://arxiv.org/pdf/1706.03762.pdf), as well as Jay Alammar's blog (http://jalammar.github.io), and using the data representation given by Oore et al., 2018 (This Time with Feeling: https://arxiv.org/pdf/1808.03715.pdf).This project will utilize the TPU provided by Google Colab to train the model. However, a GPU or CPU can be used to test it.
###Code
# mount gdrive
from google.colab import drive
drive.mount("/content/gdrive/", force_remount=True)
PATH = "./private/path"
# in order to be able to checkpoint while training, connect to GCS Bucket
from google.colab import auth
auth.authenticate_user()
!gcloud config set project project-name
!gsutil acl ch -u [email protected]:WRITER gs://bucket
# things to handle midi files
!apt install fluidsynth
!cp /usr/share/sounds/sf2/FluidR3_GM.sf2 ./font.sf2 # normal FluidSynth
!gsutil -q -m cp gs://magentadata/soundfonts/Yamaha-C5-Salamander-JNv5.1.sf2 /content/ # magenta fluidsynth
!pip install midi2audio
!pip install mido
# normal imports
%matplotlib inline
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
import os
import math
import time
import random
%cd ./private/path
import transformerutil5 as tu
%cd ../../..
# fancy imports
import mido
from midi2audio import FluidSynth
from IPython.display import Audio
# set up tpu
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver=resolver)
tf.config.list_logical_devices('TPU')
# tpu distribution strategy
strategy = tf.distribute.TPUStrategy(resolver)
###Output
_____no_output_____
###Markdown
Setup Input Pipeline Here we will prepare the input to load into the model. This means setting up the Dataset. A key aspect of this is that, while the first (n-1) tokens will be input to the model, it will be asked to predict the last (n-1) tokens. This property can be encoded into the Dataset to be able to make adequate batches.Since the task at hand is generation, we can split the data into train / validation / test in an 8 / 1 / 1 ratio. During training, the validation data will be used to calculate metrics, while the test data will be used as priors for generation.
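As a minimal illustration of that input/target split (toy token IDs only, using the start token 414 and end token 415 mentioned later; this is not the MAESTRO preprocessing itself):

```python
import numpy as np

seq = np.array([414, 5, 17, 9, 415])  # toy sequence: start token, three events, end token
inp, tar = seq[:-1], seq[1:]          # the model is fed inp and asked to predict tar
print(inp)  # 414, 5, 17, 9   (start token kept, end token dropped)
print(tar)  # 5, 17, 9, 415   (start token dropped, end token kept)
```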
###Code
MAX_LENGTH = 1921
MAX_REL_DIST = 1921
data = np.load(PATH + 'maestro_data_1922.npy')
# using the ratio 8/1/1, split data into train, val and test
lth = data.shape[0]
train_len = round(lth * 0.8)
val_len = round(lth * 0.1)
test_len = round(lth * 0.1)
if train_len + val_len + test_len != lth:
test_len += lth - (train_len + val_len + test_len)
train_data = data[:train_len]
val_data = data[train_len:val_len + train_len]
test_data = data[train_len + val_len:]
print(f"There are {lth} files in the data, {train_data.shape[0]} files in the train_data, "\
f"\n{val_data.shape[0]} files in the validation data, and {test_data.shape[0]} files in the test data.")
###Output
There are 302731 files in the data, 242185 files in the train_data,
30273 files in the validation data, and 30273 files in the test data.
###Markdown
Now create the datasets and batch the data.
###Code
BUFFER_SIZE = 120000
GLOBAL_BATCH_SIZE = 48
per_replica_batch_size = GLOBAL_BATCH_SIZE // strategy.num_replicas_in_sync
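# each dataset element is a (target, input) pair: targets drop the first token of a row, inputs drop the last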
train_ds = tf.data.Dataset.from_tensor_slices((train_data[:, 1:], train_data[:, :-1]))
# drop remainder to be able to distribute on the TPU
train_ds = train_ds.shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE, drop_remainder=True)
# prefetch makes fetching data faster
train_ds = train_ds.prefetch(tf.data.experimental.AUTOTUNE)
# distribute over TPUs
train_dist_ds = strategy.experimental_distribute_dataset(train_ds)
num_train_batches = len(list(train_dist_ds))
print(f"There are {num_train_batches} train batches")
###Output
There are 5045 train batches
###Markdown
Here's what a batch of inputs and targets looks like. Notice how the input batch contains start tokens (414) but no end tokens, while the target batch contains no start tokens, but contains end tokens (415), and that all other intermediate tokens are the same in corresponding rows of the input and target batches.
###Code
tar_batch, inp_batch = next(iter(train_dist_ds))
inp_batch, tar_batch
###Output
_____no_output_____
###Markdown
Now, the data has been batched and shuffled, ready to input to the model. However, before we can actually build and train the model, we have to define certain functionalities. Absolute Positional Encoding Since the transformer does not use recurrence or convolution, we have to deliberately give it positional information. Though learned relative position embeddings will be added to the model, it is possible that absolute position encoding will aid it in predicting next tokens.The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode position in the input sequence. So after adding the positional encoding, words will be closer to each other based on the similarity of their meaning, as well as their position in the sentence, in this d-dimensional space - information which the transformer can use to better predict next tokens.The formula for absolute position encoding is as follows:$$\Large{PE_{(pos, 2k)} = sin(pos / 10000^{2k / d_{model}})} $$$$\Large{PE_{(pos, 2k+1)} = cos(pos / 10000^{2k / d_{model}})} $$
###Code
def get_angles(position, k, d_model):
# all values of each k
angle = 1 / np.power(10000, 2 * (k // 2) / d_model)
# matrix multiplied into all positions - represent each position with a d_model sized vector
return position @ angle
def abs_positional_encoding(max_position, d_model, n=3):
"""
returns absolute position encoding, creating a vector representation for all positions
from 0 to max_position of shape (d_model,) -> a matrix of shape (max_position, d_model)
and broadcasts it to n dimensions
"""
# angles are of shape (positions, d_model)
angles = get_angles(np.arange(max_position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# apply sin to the even indices along the last axis
angles[:, 0::2] = np.sin(angles[:, 0::2])
# apply cos to the odd indices along the last axis
angles[:, 1::2] = np.cos(angles[:, 1::2])
# broadcast to n dimensions
for _ in range(n - 2):
angles = angles[np.newaxis, :]
return tf.cast(angles, tf.float32)
pos_encoding = abs_positional_encoding(50, 256)
print (pos_encoding.shape)
fig = plt.figure(figsize=(8, 5.5))
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 256))
plt.ylabel('Position')
plt.colorbar()
plt.show()
del pos_encoding, fig
###Output
(1, 50, 256)
###Markdown
Masking Since some of the input sequences are padded with pad tokens (0), we need to mask out these parts of the input sequences so that the model does not treat it as input. The mask will be created as a tensor of the same shape as the input with ones in the positions that need to be masked.However, the network will be dealing with 3-dimensional or 4-dimensional tensors, rather than a simple 2D embedded sequence. Thus, the shape of the mask must be made broadcastable to n dimensions.
###Code
def create_padding_mask(seq, n=4):
"""
Creates padding mask for a batch of sequences seq. Mask will be of shape
(batch_size, seq_len), and can be broadcasted to n dimensions
"""
mask = tf.cast(tf.equal(seq, 0), tf.float32) # mask is 1 where seq is 0
# reshape to (batch_size, 1, ..., 1, seq_len)
return tf.reshape(mask, (tf.shape(mask)[0], *[1 for _ in range(n-2)], tf.shape(mask)[-1]))
# for example
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
print(create_padding_mask(x, n=3))
del x
###Output
tf.Tensor(
[[[0. 0. 1. 1. 0.]]
[[0. 0. 0. 1. 1.]]
[[1. 1. 1. 0. 0.]]], shape=(3, 1, 5), dtype=float32)
###Markdown
Additionally, in the calculation of Scaled Dot Product Attention, the transformer must be prevented from looking ahead at future tokens, so that the next outputs of the model are based only on the current and previous tokens in the input sequence.This can be achieved by placing an upper triangular mask on the calculated Attention weights. Again, this will be 1 where the attention weights need to be zeroed, and zero otherwise. Unlike the case of the padding mask, where we need to know the values of the input sequences in order to create the mask, here the masking is the same for every input: an upper triangular matrix of ones of shape $(L, L)$ where $L$ is the length of the input sequence. We can create this just with $L$, without needing to input the sequence itself.
###Code
def create_look_ahead_mask(seq_len):
"""
Creates an upper triangular mask of ones of shape (seq_len, seq_len).
It is the same for all inputs of shape seq_len
"""
mask = 1 - tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0)
return tf.cast(mask, tf.float32) # (seq_len, seq_len)
# example
plt.matshow(create_look_ahead_mask(5))
###Output
_____no_output_____
###Markdown
At the stage when we need to input to the decoder, we have to create both a padding mask and a look ahead mask for the input batch. The correct final mask will be the maximum of these two, as the elements that need to be zeroed are represented by 1's, and those that need to be preserved are represented by 0's.
###Code
def create_mask(inp, n=4):
"""
function to create the proper mask for an input batch
mask = max(padding_mask, look_ahead_mask)
Args:
inp: batch tensor of input sequences of shape (..., seq_len)
"""
padding_mask = create_padding_mask(inp, n)
look_ahead_mask = create_look_ahead_mask(inp.shape[-1])
# create final mask
return tf.maximum(padding_mask, look_ahead_mask)
# example: create a final decoder mask - columns of same indices where pad_mask is 1 are entirely masked
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
mask = create_mask(x, n=3)
fig = plt.figure(figsize=(10, 10))
rows = 1
cols = 3
labels = ["cols 2, 3", "cols 3, 4", "cols 1, 2, 3"]
for i in range(rows*cols):
fig.add_subplot(rows, cols, i+1).set_title("Masked out upper tri & " + labels[i])
plt.imshow(mask[i])
fig.tight_layout()
plt.show()
del x, mask, rows, cols, labels
###Output
_____no_output_____
###Markdown
The calculation of attention relies on a softmax of attention weights, and the elements of the attention weights that need to be zeroed are indicated by 1's in the mask. By adding the mask, scaled by an extremely low value such as -1e9 (to mimic -inf), to the attention weights logits, and then softmaxing this sum, the positions at which the mask was 1 will be effectively zeroed. Self-Attention with Relative Position Embeddings A modification given by Shaw et. al, 2018, improved by Huang et. al, 2018, to the Scaled Dot-Product Attention mechanism given in Vaswani et. al, 2017, which allows the Transformer model to attend to all relevant elements of the input sequences as well as the relative distances between them.$${\text{RelativeAttention} = \text{Softmax} \left(\frac{QK^\top + S^{rel}}{\sqrt{d_{k}}}\right)V}$$Before attention can be computed, the input batch of sequences $X$ must be embedded to make it of shape $(..., L, d_{model})$, where $d_{model}$ is the size of the embeddings, from simply shape $(..., L)$. For attention to work properly, and for residual connections to be made conveniently, all inputs and outputs that the model deals with are associated with the shape $d_{model}$, which is why it is so named.For instance, the first step in the computation of attention is to _transform_ the input $X$ into 3 representations - the queries($Q$), keys($K$) and values($V$):$$Q = XW^Q$$$$K = XW^K$$$$V = XW^V$$$Q$, $K$, and $V$ are all determined by the parameter matrices $W$, which are learned by backpropagation. In order to preserve the shape of $X$ (i.e., keep $Q, K, V$ all shaped $(..., L, d_{model})$), these $W$ must be of shape $(d_{model}, d_{model})$.Now, the attention function in the transformer takes these three values - $Q$, $K$ and $V$ (and a last one, $E$) - as input. What we want the attention mechanism to achieve is to determine the "amount" of "attention" that each element in $X$ should pay to every other element in $X$ - that is, the importance of different positions of the sequence in constructing the sequence. This is achieved by the compatibility function $QK^\top + S^{rel}$.Notice that $Q$ is of shape $(..., L, d_{model})$ and $K^\top$ is of shape $(..., d_{model}, L)$. As a result, the matrix product $QK^\top$ (and the additional relative position encoding $S^{rel}$) is of shape $(...,L, L)$. Given the goal of the attention mechanism, we would want the $i, j$ element of the output of the compatibility function to represent the amount of attention that the $ith$ element of the input sequence should pay to its $jth$ element. One can imagine that if $W^Q$ and $W^K$ were properly optimized by backpropagation, the representations of the input sequences in $Q$ by $XW^Q$ and in $K$ by $XW^K$ would be such that the matrix product $QK^\top$ achieves this goal. The softmax turns these compatibilities into probabilities (0 to 1), and it is the softmax of the compatibilities that is multiplied by $V$. What does this achieve? Well, the compatibility matrix in itself is virtually useless. All it is is a lookup table of how important element $i$ is to element $j$. By multiplying this compatibility matrix by $V$, what we do is distribute this information about the attention that every element in the sequence should pay to every other element in the sequence into every element of a representation of the input sequence itself. 
What this also achieves, the softmaxed values being between 0 and 1, is the minimization of unimportant values, and the maximization of the most important ones.However, without altering this configuration properly, elements of the compatibility matrix where $j>i$, would give the $ith$ position of the sequence information about a future position in the sequence, after which the model could simply learn to use this information in order to predict the future tokens, instead of paying attention to its previous and current inputs to do so. This is why the look_ahead_mask must be used on the compatibility matrix. All elements where $j>i$ are simply the upper triangle of the matrix, and so the look_ahead_mask is simply an upper triangular matrix of the same shape.Lastly, the compatibility matrix is scaled by $1/\sqrt{d_{k}}$ before softmaxing, where $d_k$ is the length of the embeddings for each sequence in $K$ (i.e., shape of last axis), in order to counteract the problem of vanishing gradients when computing softmax (Vaswani et. al, 2017). But where does $E$ come into this, and what is $S^{rel}$? Skewing While the computation of attention given in Vaswani et. al, 2017 is simply:$${\text{Attention}(Q, K, V) = \text{Softmax}\left(\frac{QK^\top}{\sqrt{d_k}}\right)V}\tag{1}$$Shaw et. al, 2018 proposed the addition of a $(..., L, L)$ tensor $S^{rel}$ to the calculation of the softmax logits (as well as to $V$, but we can ignore that one for reasons of space and time complexity) in order to inject information about the relative distances between positions in the sequence into the calculation of attention. One can imagine that without information about the importance of the positions itself, the model would learn how important each element is in the sequence, but not in relation to its actual position in the sequence. This is why positional encoding is necessary. The calculation of $S^{rel}$ was greatly improved on by Huang et. al, 2018:$${S^{rel} = \text{skew}\left(QE^\top\right)}\tag{2}$$$Q$ is our queries matrix $XW^Q$, but $E$ is more interesting. $E$ is a set of embeddings for each possible relative distance in the sequence from $-L_{max} + 1$ to 0 (where $L_{max}$ is a maximum relative distance to consider - this has to be predefined). As was stated before, in the computation of the attention logits (the compatibility matrix before being softmaxed), the $i, j$ element describes how much attention the $ith$ position of the sequence should pay to the $jth$ position of the sequence. If we were to inject information about relative position into this calculation with $S^{rel}$, we would therefore want the $i, j$ element of $S^{rel}$ to represent the importance of the relative distance $(j-i)$ for the $ith$ element of the sequence. The way we inject this information is by letting the Relative Position Embeddings $E$ interact with the Queries tensor $Q$, giving rise to a new tensor $S^{rel}$, and then simply adding this information to the vanilla attention logits $(QK^\top)$. Before this, however, we must slice the last $L$ embeddings from $E$ (the embeddings for relative distances of $-L+1$ to $0$) and to calculate the matrix product $QE^\top$, or if $L > L_{max}$ (i.e., the input sequence length is greater than the maximum relative distance to consider), we simply use the last relative position embedding (for a relative distance of $-L_{max}+1$) for all indices past this relative distance. 
This slice is necessary to preserve shape ($QE^\top$ will be of shape $(..., L, L)$, which is the same shape as $QK^\top$ and the two can simply be added). Additionally, what we get out of computing this matrix product is the product of every query $Q_i$ with every relative position embedding $E_j$, thus instantiating a tensor whose elements describe the importance of a relative distance of $j$ to the $ith$ element in the sequence.However, simply now adding this matrix product $QE^\top$ to $QK^\top$ would not inject the correct relative position information at the required indices, because its $i, j$ element does not descibe the importance of a relative distance of $(j-i)$ to the $ith$ elemtn of the sequence.How do we get around this? First, let's visualize the problem. Consider $E$ to be a set of embedding vectors $E_j$ of length $d_{model}$, ordered from $-L + 1$ to $0$, i.e., $E=(E_{-L+1}, E_{-L+2}, ..., E_0)$, and, for 1 sequence, consider the Queries matrix to be a set of $L$ vectors $Q_i$ each of length $d_{model}$, $Q=(Q_0, Q_1,...,Q_{L-1})$. Then, it is easy to see that the matrix product $QE^\top$ does not achieve the desired ordering, because the $i, j$ element of $QE^\top$ does not necessarily incorporate the embedding for a relative distance of $(j-i)$:$$\begin{equation*}\pmatrix{Q_0 \\ Q_1 \\ \vdots \\ Q_{L-2} \\ Q_{L-1}} \cdot\pmatrix{E_{-L+1} & E_{-L+2} & \cdots & E_{-1} & E_0} =\begin{pmatrix}Q_0E_{-L+1} & Q_0E_{-L+2} & \cdots & Q_0E_{-1} & Q_0E_0 \\Q_1E_{-L+1} & Q_1E_{-L+2} & \cdots & Q_1E_{-1} & Q_1E_0 \\\vdots & \vdots & \ddots & \vdots & \vdots \\Q_{L-2}E_{-L+1} & Q_{L-2}E_{-L+2} & \cdots & Q_{L-2}E_{-1} & Q_{L-2}E_0 \\Q_{L-1}E_{-L+1} & Q_{L-1}E_{-L+2} & \cdots & Q_{L-1}E_{-1} & Q_{L-1}E_0 \\\end{pmatrix}\end{equation*}$$Nevertheless, this matrix contains all the information we need to inject this information correctly - every $Q_i$ multiplied by every $E_j$, and we just need to order it so that the $i, j$ element of the matrix is of the form $Q_iE_{j-i}$. Huang et. al, 2018 implemented the skewing algorithm to do this.To understand skewing, imagine the desired ordering. In this ordering, the main diagonal would consist of $Q_iE_0$ - at the $i, i$ position of the matrix, the relative distance to be injected would be 0. Furthermore, along the diagonal just under the main diagonal - the $i, (i-1)$ elements of the matrix - all elements would be $Q_iE_{-1}$. As visible in the above matrix, these elements are given in the last 2 columns - the elements we want along the main diagonal are in the last column, and the elements we want along the diagonal just under it are in the first column from the last.Nevertheless, although we have $L$ elements along the last column, which would fit the main diagonal, we also have $L$ elements in the first column from the last, which is one more than would fit the diagonal just under the main. Nevertheless, if we look closely, we see that we can exclude the first entry, $Q_0E_{-1}$, from this second-to-last column. This is because it has no meaning - the 0th element in the sequence does not have any previous elements, and so cannot have a relative distance of (-1) to any other element of the sequence. Similarly, the first two elements in the third-to-last column of $QE^\top$, where the queries are multiplied by $E_{-2}$ – $Q_0E_{-2}$ and $Q_1E_{-2}$ – can be ignored, because there are no elements in the sequence that are (-2) positions away from the 0th and 1st positions in the sequence. 
And so on, so that the last $L-n$ elements of the $nth$ column from the last can fit the $nth$ diagonal under the main.Now, we know where the information we want is (the columns from the last in $QE^\top$), and we know where we want it to go to make $S^{rel}$ (the diagonals on and under the main). If we could just **skew** the elements of $QE^\top$ in the columns from the right into the desired diagonals, we would get $S^{rel}$, and we could just add it to $QK^\top$ in the calculation of the attention logits in order to properly encode relative position. And so, the skewing algorithm was made:1. Pad $QE^\top$ with a dummy vector of length $L$ to the left2. Reshape the matrix from shape $(..., L, L+1)$ to shape $(..., L+1, L)$3. Slice the last $L$ rows from the second-to-last axis of this tensor - this is $S^{rel}$So finally,$${\text{RelativeAttention}(Q, K, V, E) = \text{Softmax} \left(\frac{QK^\top + \text{skew}\left(QE^\top\right)}{\sqrt{d_{k}}}\right)V}\tag{3}$$Isn't that cool?Another benefit of skewing is that once all the required elements are placed on or below the main diagonal in $\text{skew}(QE^\top)$, all irrelevant elements in the upper triangle will be masked out by the look_ahead_mask.
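As a quick shape-only sketch of the $Q$, $K$, $V$ projections described a few paragraphs above (separate from the model code below, which builds these projections inside the MultiHeadAttention layer; all sizes here are arbitrary toy values):

```python
import tensorflow as tf

L, d_model = 6, 16                          # toy sequence length and embedding size
X = tf.random.normal((L, d_model))          # one embedded input sequence

# the learned projection matrices are all (d_model, d_model)
Wq = tf.random.normal((d_model, d_model))
Wk = tf.random.normal((d_model, d_model))
Wv = tf.random.normal((d_model, d_model))

Q, K, V = X @ Wq, X @ Wk, X @ Wv            # each stays (L, d_model)
compat = tf.matmul(Q, K, transpose_b=True)  # (L, L): how much position i attends to position j
print(compat.shape)                         # (6, 6)
```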
###Code
def skew(t: tf.Tensor):
"""
Implements skewing algorithm given by Huang et. al 2018 to reorder the
dot(Q, RelativePositionEmbeddings) matrix into the correct ordering for which
Tij = compatibility of ith query in Q with relative position (j - i)
This implementation accounts for rank n tensors
Algorithm:
1. Pad T
2. Reshape
3. Slice
T is supposed to be of shape (..., L, L), but the function generalizes to any shape
"""
# pad the input tensor
middle_paddings = [[0, 0] for _ in range(len(t.shape) - 1)]
padded = tf.pad(t, [*middle_paddings, [1, 0]])
# reshape
Srel = tf.reshape(padded, (-1, t.shape[-1] + 1, t.shape[-2]))
Srel = Srel[:, 1:] # slice required positions
return tf.cast(tf.reshape(Srel, t.shape), t.dtype)
# example
u = tf.constant([[0, 1, 1, 0, 2], \
[1, 0, 0, 3, 2], \
[1, 1, 5, 3, 2], \
[0, 7, 5, 3, 2], \
[9, 7, 5, 3, 2]], dtype=tf.float32)
plots = [u, skew(u)]
fig = plt.figure(figsize=(10, 6.5))
rows = 1
cols = 2
labels = ['u', 'skew(u)']
fig.suptitle("Columns from the right are skewed into diagonals on and under the main, and elements\n"\
"not in these columns are thrown into the upper triangle and/or replaced by zeros", \
fontsize=15)
for i in range(rows*cols):
fig.add_subplot(1, 2, i+1).set_title(labels[i], fontsize=14)
plt.imshow(plots[i], cmap='viridis')
fig.tight_layout()
plt.show()
del u, plots, fig, rows, cols, labels
###Output
_____no_output_____
###Markdown
Relative Scaled Dot Product Attention Given the skewing algorithm, we can now define the Relative Attention function. This function is technically called Relative Scaled Dot Product Attention.
###Code
def rel_scaled_dot_prod_attention(q, k, v, e, mask=None):
"""
Implements equation 3 given in the previous section to calculate the attention weights,
Mask has different shapes depending on its type (padding, look_ahead or combined),
but by scaling and adding it to the attention logits, masking can be performed
Attention = softmax(mask(QKT + skew(QET))/sqrt(d_k))V
Args:
q: Queries matrix of shape (..., seq_len_q, d_model)
k: Keys matrix of shape (..., seq_len_k, d_model)
v: Values matrix of shape (..., seq_len_k, d_model)
e: Relative Position embedding matrix of shape (seq_len_k, d_model)
Returns:
output attention, attention weights
"""
QKt = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
Srel = skew(tf.matmul(q, e, transpose_b=True)) # (..., seq_len_q, seq_len_k)
# calculate and scale logits
dk = math.sqrt(k.shape[-1])
scaled_attention_logits = (QKt + Srel) / dk
# add the mask to the attention logits
if mask is not None:
scaled_attention_logits += (mask * -1e09) # mask is added only to attention logits
# softmax is normalized on the last axis so that the ith row adds up to 1
# this is best for multiplication by v because the last axis (made into
# probabilities) interacts with the values v
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
output = tf.matmul(attention_weights, v) # (..., seq_len_q, d_k)
return output, attention_weights
###Output
_____no_output_____
###Markdown
As softmax is performed on the last axis of the attention weights, whose dimensions are determined by $K$, the values in the Keys determine the importance of each Query in $Q$.The (softmaxed) attention weights being multiplied by $V$ to calculate the output ensures that the positions of the queries sequence that you want to focus on are kept, while those that are less important are zeroed out.
###Code
# examples of attention
temp_k = tf.constant([[0, 0, 10], [0, 10, 0], [10, 0, 0], [10, 0, 0]], dtype=tf.float32)
temp_v = tf.constant([[4, 2, 1], [5, 6, 3], [7, 8, 10], [9, 12, 45]], dtype=tf.float32)
temp_e = tf.zeros_like(temp_k) #zero the relative position embeddings to demonstrate original attention
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)
attn, attn_weights = rel_scaled_dot_prod_attention(temp_q, temp_k, temp_v, temp_e)
print("Attention weights are,")
print(attn_weights)
print("Output Attention is,")
print(attn)
###Output
Attention weights are,
tf.Tensor([[8.4332744e-26 1.0000000e+00 8.4332744e-26 8.4332744e-26]], shape=(1, 4), dtype=float32)
Output Attention is,
tf.Tensor([[5. 6. 3.]], shape=(1, 3), dtype=float32)
###Markdown
Notice that since ```temp_q``` corresponded to ```temp_k[1]```, the attention weights are maximum at that index, and the output attention is that value in ```temp_v```. The query aligned with a specific key, and thus prioritized the corresponding value to pay attention to.
###Code
# we should also see how relative position embeddings change the output
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # aligns with second key
temp_e = tf.constant([[-1, -1, -10], [2, 2, 2], [1, 1, 1], [4, 4, 4]], dtype=tf.float32)
attn, attn_weights = rel_scaled_dot_prod_attention(temp_q, temp_k, temp_v, temp_e)
print("Attention weights are,")
print(attn_weights)
print("Output Attention is,")
print(attn)
###Output
Attention weights are,
tf.Tensor([[2.5339164e-33 1.0000000e+00 2.6217687e-28 8.7255939e-21]], shape=(1, 4), dtype=float32)
Output Attention is,
tf.Tensor([[5. 6. 3.]], shape=(1, 3), dtype=float32)
###Markdown
Above, we can see that since the query aligned with the second key, and since the highest embedding is that for relative distance of 0 (the very last embedding vector), this position was prioritized and the second value was output. However, changing ```temp_q``` to align with the first key:
###Code
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)
temp_v = tf.constant([[4, 2, 1], [5, 6, 3], [7, 8, 10], [9, 12, 45]], dtype=tf.float32)
temp_e = tf.constant([[-1, -1, -10], [2, 2, 2], [1, 1, 1], [4, 4, 4]], dtype=tf.float32)
attn, attn_weights = rel_scaled_dot_prod_attention(temp_q, temp_k, temp_v, temp_e)
print("Attention weights are,")
print(attn_weights)
print("Output Attention is,")
print(attn)
###Output
Attention weights are,
tf.Tensor([[9.3410162e-11 9.6648464e-06 3.0046529e-08 9.9999034e-01]], shape=(1, 4), dtype=float32)
Output Attention is,
tf.Tensor([[ 8.999962 11.999942 44.999596]], shape=(1, 3), dtype=float32)
###Markdown
We see that even though the query aligned with the first key, the output attention is closer to the last value. This is because of the high embedding in the last index of ```temp_e```. Even though it corresponds to a relative distance of 0, the highest value in $S^{rel}$ is the matrix multiplication ```q @ temp_e[-1].transpose```, simply because of the value of that embedding. In order to get the desired output, we have to mask out those values.
###Code
attn, attn_weights = rel_scaled_dot_prod_attention(temp_q, temp_k, temp_v, temp_e,\
mask=create_look_ahead_mask(temp_k.shape[-2])[0])
print("Attention weights are,")
print(attn_weights)
print("Output Attention is,")
print(attn)
del temp_k, temp_v, temp_e, temp_q, attn, attn_weights
###Output
Attention weights are,
tf.Tensor([[1. 0. 0. 0.]], shape=(1, 4), dtype=float32)
Output Attention is,
tf.Tensor([[4. 2. 1.]], shape=(1, 3), dtype=float32)
###Markdown
And we get the first value, as desired. We can also play around with the embeddings to see how we can query one element to get a previous element, or an average of the current and previous elements, depending on the relative position embeddings:
###Code
# play around with temp_q and temp_e
temp_k = tf.constant([[0, 0, 10], [0, 10, 0], [10, 0, 0], [0, 0, 510]], dtype=tf.float32)
temp_v = tf.constant([[4, 2, 1], [5, 6, 3], [7, 8, 10], [9, 12, 45]], dtype=tf.float32)
temp_q = temp_k
# highest embedding is for distance of -2, and second highest is for 0
# so for the first 2 values, the distance of -2 is masked out, and it outputs the
# positions of relative distance 0
# but for the last 2, the values 2 positions behind are output
temp_e = tf.constant([[0, 0, 0], [1000000, 1000000, 1000000], [0, 0, 0], [100, 100, 100]], dtype=tf.float32)
attn, attn_weights = rel_scaled_dot_prod_attention(temp_q, temp_k, temp_v, temp_e,\
mask=create_look_ahead_mask(temp_k.shape[-2]))
print("Attention weights are,")
print(attn_weights)
print("Output Attention is,")
print(attn)
del temp_k, temp_v, temp_e, temp_q, attn, attn_weights
###Output
Attention weights are,
tf.Tensor(
[[1. 0. 0. 0.]
[0. 1. 0. 0.]
[1. 0. 0. 0.]
[0. 1. 0. 0.]], shape=(4, 4), dtype=float32)
Output Attention is,
tf.Tensor(
[[4. 2. 1.]
[5. 6. 3.]
[4. 2. 1.]
[5. 6. 3.]], shape=(4, 3), dtype=float32)
###Markdown
In this way, the attention function is able to decide which values in the sequence to pay attention to in order to predict the next tokens, and is aided in this with the correct relative position embeddings. Multi-Head Attention Instead of performing the attention on $d_{model}$-dimensional $Q$, $K$, and $V$, Vaswani et. al, 2017 found it beneficial to compute attention for these tensors along $h$ different "heads." One can see why this is beneficial simply by looking at the matrix transformations that take place. First, $Q$, $K$, $V$ and $E$ are split to have different parts of their _embeddings_ (not different parts of the sequence) along $h$ different heads. In doing so, each head is given embeddings from each tensor of length $d_h$, where $d_h = \frac{d_{model}}{h}$:$$\begin{matrix}Q \\K \\V\end{matrix} \Bigg\} \:\text{shape}(..., L, d_{model}) \rightarrow \text{shape}(..., h, L, d_h) $$$E$ is also reshaped from $(L, d_{model})$ to $(h, L, d_h)$. Now, along each of these heads, we compute ```rel_scaled_dot_prod_attention```, and at each head, attention weights of shape $(..., L, L)$ are produced, due to the deliberate reshaping of the input tensors. That is, attention weights are calculated $h$ times (though with resolution downsampled from $d_{model}$ to $d_h$). This means that the model creates $h$ different representations of the amount of attention that each position in each input sequence should pay to every previous position in the sequence. This allows the network to attend to information from several different representations simultaneously, increasing accuracy, as well as making the calculation of attention slightly quicker by computing attention over $h$ smaller tensors in parallel.After multiplication of the attention weights at each head by the $V$ tensor for that head, $V_i, i \in \{1...h\}$, $h$ tensors of shape $(..., L, d_h)$ are created for each sequence in the batch. They can then be concatenated back to a tensor of shape $(..., L, d_{model})$ to get back to the original shaping. Vaswani et. al, 2017 also project this Multi-Head Attention with a final parameter matrix $W^O$ of shape $(d_{model}, d_{model})$ to get the final output. Thus:$$\text{MultiHeadRelativeAttention} = Concat\left(\text{head}_1, ..., \text{head}_h\right) W^O \\\text{head}_i = \text{RelativeAttention}(Q_i, K_i, V_i, E_i)$$As was stated earlier, $Q$, $K$, and $V$ are computed from $X$, the input batch of sequences. We can encode this calculation, as well as the instantiation and use of the Relative Position Embedding Matrix, $E$, into the ```MultiHeadAttention``` block. But first, we must define some helper functions.
###Code
# helper function
def split_heads(x, num_heads, depth=None):
"""
assumes x is of shape (..., num_heads * depth)
split the last dimension of x into (num_heads, depth),
transposes to (..., num_heads, L, depth)
"""
if depth is None:
assert x.shape[-1] % num_heads == 0
depth = x.shape[-1] // num_heads
# split d_model into h, d_h
x = tf.reshape(x, (*x.shape[:-1], num_heads, depth)) # (..., L, num_heads, depth)
# transpose axes -2 and -3 - tf specifies this with perm so all this fluff needs to be done
final_perm = len(x.shape) - 1
prior_perms = np.arange(0, final_perm - 2) # axes before the ones that need to be transposed
# transpose to shape (..., num_heads, L, depth)
return tf.transpose(x, perm=[*prior_perms, final_perm-1, final_perm-2, final_perm])
# test
t = tf.random.normal((64, 10, 200))
print(split_heads(t, 8, 25).shape)
del t
# another helper function
def get_required_embeddings(E, seq_len, max_len=None):
"""
Given an input sequence of length seq_len, which does not necessary equal max_len, the
maximum relative distance the model is set to handle, embeddings in E from the right are
the required relative positional embeddings
Embeddings have to be taken from the right because E is considered to be
ordered from -max_len + 1 to 0
For all positions distanced past -max_len + 1, use E_{-max_len + 1}
"""
if not E.built:
E.build(seq_len)
if max_len is None:
max_len = E.embeddings.get_shape()[0] # assumes E is a keras.layers.Embedding
if max_len >= seq_len:
seq_len = min(seq_len, max_len)
return E(np.arange(max_len - seq_len, max_len))
return tf.concat(
values=[*[E(np.arange(0, 1)) for _ in range(seq_len - max_len)], E(np.arange(0, max_len))],
axis=0
)
# test
E = tf.keras.layers.Embedding(400, 200)
print(get_required_embeddings(E, 500).shape)
del E
###Output
(500, 200)
###Markdown
Now we can define the ```MultiHeadAttention``` block.
###Code
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, max_rel_dist=MAX_REL_DIST, use_bias=True):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
self.max_len = max_rel_dist
assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
self.depth = self.d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model, use_bias=use_bias) # parameter matrix to generate Q from input
self.wk = tf.keras.layers.Dense(d_model, use_bias=use_bias) # parameter matrix to generate K from input
self.wv = tf.keras.layers.Dense(d_model, use_bias=use_bias) # parameter matrix to generate V from input
self.E = tf.keras.layers.Embedding(self.max_len, self.d_model) # relative position embeddings
self.wo = tf.keras.layers.Dense(d_model, use_bias=use_bias) # final output parameter matrix
def call(self, q, k, v, mask=None):
"""
Creates Q, K, and V, gets required embeddings in E, splits into heads,
computes attention, concatenates, and passes through final output layer
"""
# Get Q, K, V
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
# Get E
seq_len_k = k.shape[-2]
e = get_required_embeddings(self.E, seq_len_k, self.max_len) # (seq_len_k, d_model)
# split into heads
q = split_heads(q, self.num_heads, self.depth) # (batch_size, h, seq_len_q, depth)
k = split_heads(k, self.num_heads, self.depth) # (batch_size, h, seq_len_k, depth)
v = split_heads(v, self.num_heads, self.depth) # (batch_size, h, seq_len_k, depth)
e = split_heads(e, self.num_heads, self.depth) # ( h, seq_len_k, depth)
# rel_scaled_attention shape = (batch_size, h, seq_len_q, depth)
# attention_weights shape = (batch_size, h, seq_len_q, seq_len_k)
rel_scaled_attention, attention_weights = rel_scaled_dot_prod_attention(q, k, v, e, mask=mask)
# transpose rel_scaled_attention back to (batch_size, seq_len_q, h, depth)
final_perm = len(rel_scaled_attention.shape) - 1 # can't use rank for some reason
prior_perms = np.arange(0, final_perm - 2) # axes before the ones that need to be transposed
rel_scaled_attention = tf.transpose(rel_scaled_attention,
perm=[*prior_perms, final_perm-1, final_perm-2, final_perm])
# concatenate heads -> (batch_size, seq_len, d_model)
sh = rel_scaled_attention.shape
concat_attention = tf.reshape(rel_scaled_attention, (*sh[:-2], self.d_model))
output = self.wo(concat_attention)
return output, attention_weights
# Create a MultiHeadAttention Block to test
t = tf.random.uniform((10, 1500, 256))
mha = MultiHeadAttention(256, 8, use_bias=True)
out, attn = mha(t, t, t, create_mask(tf.random.uniform((10, 1500))))
print(f"Shape of the output: {out.shape}")
print(f"Shape of the attention weights: {attn.shape}")
print(f"Number of trainable variables in the MHA block: {len(mha.trainable_variables)}")
del t, mha, out, attn
###Output
Shape of the output: (10, 1500, 256)
Shape of the attention weights: (10, 8, 1500, 1500)
Number of trainable variables in the MHA block: 9
###Markdown
Now that we've defined the core mechanism behind the transformer, we can move on to other layers and actually building the model. Pointwise Feed Forward Network In each layer of the Transformer Decoder, the Multi-Head Attention block is followed by a fully-connected Feed Foward Network, which is simply a 2 layer network with a ReLU activation in between.
###Code
class PointwiseFFN(tf.keras.layers.Layer):
def __init__(self, d_model, dff, use_bias=True):
super(PointwiseFFN, self).__init__()
self.main = tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu', use_bias=use_bias), # (batch_size, seq_len, dff)
tf.keras.layers.Dense(d_model, use_bias=use_bias) # (batch_size, seq_len, d_model)
])
def call(self, x):
return self.main(x)
# test it out
test_ffn = PointwiseFFN(512, 2048, True)
print(f"Shape of the output: {test_ffn(tf.random.uniform((60, 24, 512))).shape}")
print(f"Number of trainable variables in the FFN sublayer: {len(list(test_ffn.trainable_variables))}")
del test_ffn
###Output
Shape of the output: (60, 24, 512)
Number of trainable variables in the FFN sublayer: 4
###Markdown
Decoder Layer While the original Transformer consisted of an Encoder and a Decoder designed for seq2seq tasks, the Transformer Decoder Model was adapted to handle sequence generation. While every Encoder Layer in the original Transformer had 2 sublayers, and every Decoder Layer had 3 sublayers, the Transformer Decoder model, as adapted by Radford et. al, 2019 and others before (such as Liu, et. al, 2018: https://arxiv.org/pdf/1801.10198.pdf), scrapped the Encoder, and consisted solely of a stack of Decoder Layers, each with 2 sublayers:1. Masked Multi-Head Attention2. Pointwise Feed Forward LayerEach sublayer also employs a residual connection followed by a LayerNorm on the last axis.
###Code
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, max_rel_dist=MAX_REL_DIST,
use_bias=True, dropout=0.1, layernorm_eps=1e-06):
super(DecoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads, max_rel_dist=max_rel_dist, use_bias=use_bias)
self.ffn = PointwiseFFN(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(axis=-1, epsilon=layernorm_eps)
self.layernorm2 = tf.keras.layers.LayerNormalization(axis=-1, epsilon=layernorm_eps)
self.dropout1 = tf.keras.layers.Dropout(dropout)
self.dropout2 = tf.keras.layers.Dropout(dropout)
def call(self, x, training=False, mask=None):
attn_output, attn_weights = self.mha(x, x, x, mask=mask) # calculate attention
attn_output = self.dropout1(attn_output, training=training) # dropout
# layernorm on residual connection
out1 = self.layernorm1(x + attn_output) # (batch_size, seq_len, d_model)
ffn_output = self.ffn(out1) # pass through FFN
ffn_output = self.dropout2(ffn_output, training=training) # dropout
# layernorm on residual connection
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, seq_len, d_model)
return out2, attn_weights
# test Decoder Layer
t = tf.random.uniform((32, 1500, 256))
sample_decoder_layer = DecoderLayer(256, 8, 1024)
out, attn = sample_decoder_layer(t, mask=create_look_ahead_mask(t.shape[-2]))
print(f"Shape of the output: {out.shape}")
print(f"Shape of the attention weights: {attn.shape}")
print(f"Number of trainable variables in Decoder Layer: {len(list(sample_decoder_layer.trainable_variables))}")
del t, out, sample_decoder_layer, attn
###Output
Shape of the output: (32, 1500, 256)
Shape of the attention weights: (32, 8, 1500, 1500)
Number of trainable variables in Decoder Layer: 17
###Markdown
Transformer Decoder Now that we have defined the Decoder Layer, we can build the Transformer Decoder as a stack of N decoder layers, along with the functionality to deal with an input sequence of tokens.The Transformer Decoder consists of:1. Input Embedding2. N Decoder Layers3. Final Linear LayerThe Input Embedding is the embeddings of size ```vocab_size``` for the input sequence. After the input embedding, absolute position encoding is added. The embedded input sequences are passed into the stack of decoder layers, and the output of that stack is passed into the Final Linear layer to take the decoder output from shape $(..., L, d_{model})$ to ($..., L$, ```vocab_size```). This final layer can be the original input embedding weight matrix, as per Press and Wolf, 2016, (https://arxiv.org/pdf/1608.05859.pdf), as the input and output are from the same vocabulary, or it can be a new Dense layer altogether.
###Code
class TransformerDecoder(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, vocab_size, max_rel_dist=MAX_REL_DIST,
max_abs_position=20000, use_bias=True, dropout=0.1, layernorm_eps=1e-06, tie_emb=False):
super(TransformerDecoder, self).__init__()
self.num_layers = num_layers
self.d_model = d_model
self.tie_emb = tie_emb
self.le = layernorm_eps
self.max_position = max_abs_position # might need for decode
self.embedding = tf.keras.layers.Embedding(vocab_size, d_model) # input embeddings
self.positional_encoding = abs_positional_encoding(max_abs_position, d_model) # absolute position encoding
self.dropout = tf.keras.layers.Dropout(dropout) # embedding dropout
# decoder layers
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, max_rel_dist, use_bias, dropout, layernorm_eps)\
for _ in range(self.num_layers)]
# final layer is linear or embedding weight depending on tie emb
if not tie_emb:
self.final_layer = tf.keras.layers.Dense(vocab_size, use_bias=use_bias)
def call(self, x, training=False, mask=None):
# initialize attention weights dict to output
attention_weights = {}
# embed x and add absolute positional encoding
x = self.embedding(x) # (batch_size, seq_len) -> (batch_size, seq_len, d_model)
x *= math.sqrt(self.d_model)
if self.max_position > 0:
x += self.positional_encoding[:, :x.shape[-2], :]
x = self.dropout(x, training=training)
# pass through decoder layers
for i in range(len(self.dec_layers)):
x, w_attn = self.dec_layers[i](x, training, mask)
attention_weights[f'DecoderLayer{i+1}'] = w_attn
# final layer
if self.tie_emb:
x = tf.matmul(x, self.embedding.embeddings, transpose_b=True)
else:
x = self.final_layer(x)
# returns unsoftmaxed logits
return x, attention_weights
# example transformer
with strategy.scope():
sample_transformer = TransformerDecoder(
num_layers=6, d_model=256, num_heads=8, dff=1024, vocab_size=tu.vocab_size, max_rel_dist=1537,
max_abs_position=20000, use_bias=True, dropout=0.1, tie_emb=True
)
out, attn = sample_transformer(tf.random.uniform((16, 1600))) # build the model
start = time.time()
out, attn = sample_transformer(tf.random.uniform((16, 1600), minval=0, maxval=400, dtype=tf.int32))
print(f"Shape of the output: {out.shape}")
print(f"Shape of the attention weights: {attn['DecoderLayer1'].shape}")
print(f"Number of parameters in the Tranformer Decoder: {len(list(sample_transformer.trainable_variables))}")
print(f"Time taken to compute over an input batch when not training: {time.time()-start} seconds")
del out, attn, sample_transformer, start
###Output
Shape of the output: (16, 1600, 416)
Shape of the attention weights: (16, 8, 1600, 1600)
Number of parameters in the Transformer Decoder: 103
Time taken to compute over an input batch when not training: 4.357294797897339 seconds
###Markdown
Training Set Hyperparameters Sadly, the TPU can only compute loss on an input batch of batch size 48 and sequence length ~2000 tokens without crashing. The reference paper Huang et. al, 2018 does not specify the batch size but specifies the sequences trained on were of length 2048 tokens. While the batch size and sequence length cannot be changed, other relevant hyperparameters need to be experimented with to get the best results.
###Code
num_layers = 6
d_model = 256
dff = 1024
num_heads = 8
max_rel_dist = MAX_REL_DIST
max_abs_position = 1
use_bias = True
tie_emb = False
layernorm_eps = 1e-06
vocab_size = tu.vocab_size # don't change this
dropout_rate = 0.1
###Output
_____no_output_____
###Markdown
Learning Rate Schedule As per Vaswani et. al, 2017, the Adam optimizer with a custom learning rate scheduler is used when training the transformer model:$$lr = d_{model}^{\:-0.5} \cdot \min{\left(step\_num^{-0.5}, step\_num \:\cdot warmup\_steps^{-1.5} \right)}$$Additionally, the betas to be used in the Adam optimizer are 0.9 and 0.98, with epsilon equal to 1e-9.
###Code
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
# how to set up the optimizer
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-09)
del learning_rate, optimizer
plt.plot(CustomSchedule(256)(tf.range(25000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
###Output
_____no_output_____
###Markdown
Loss and Metrics Since some of the inputs are padded, it is important to apply a padding mask when calculating the loss.The loss object will be Sparse Categorical Entropy Loss. This is the loss to be used when dealing with indices in a vocabulary, rather than one-hot vectors.
###Code
with strategy.scope():
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True,
reduction=tf.keras.losses.Reduction.NONE
)
def loss_function(target, predictions, criterion=loss_object):
"""
If defining custom criterion, make sure reduction is none
"""
mask = tf.not_equal(target, tf.zeros_like(target))
_loss = criterion(target, predictions)
mask = tf.cast(mask, _loss.dtype) # make mask of same dtype as loss
_loss *= mask
return tf.reduce_sum(_loss) / tf.reduce_sum(mask)
with strategy.scope():
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
val_loss = tf.keras.metrics.Mean(name='val loss')
val_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='val accuracy')
###Output
_____no_output_____
###Markdown
Training and Checkpointing The transformer is an autoregressive model, which means that at the inference stage, it will make next predictions based on its previous outputs.However, while training, we can use teacher forcing - feeding the ground-truth previous tokens into the model regardless of what the model actually predicted. This significantly cuts down on the compute required, while usually reducing loss (at the expense of generalizability, nonetheless).Since we are training a generative model, the targets are simply the inputs shifted by one position: each target token is the token that follows it in the input. The data has already been cut this way during the Input Pipeline.Now, we need to create the model and optimizer, set up the checkpointing mechanism on a GCS Bucket, and train.
###Code
# set before training
num_train_steps = 1000000
epochs = tf.convert_to_tensor(math.ceil(num_train_steps / num_train_batches))
print(epochs)
###Output
tf.Tensor(207, shape=(), dtype=int32)
###Markdown
Create the model and optimizer and the epoch from which to start training.
###Code
with strategy.scope():
transformer = TransformerDecoder(
num_layers, d_model, num_heads, dff, vocab_size, MAX_REL_DIST, max_abs_position,
use_bias, dropout_rate, layernorm_eps, tie_emb
)
learning_rate = CustomSchedule(d_model, warmup_steps=4000)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-09)
start_epoch = tf.Variable(0) # to handle restarting training from a checkpoint
###Output
_____no_output_____
###Markdown
Now, since we'll have to train for a while, we'll use checkpoints to train the model. Every ```ckpt_interval``` epochs, we'll save the model weights, the optimizer state and the epoch number.
###Code
# build the model
with strategy.scope():
_ = transformer(tf.random.uniform((GLOBAL_BATCH_SIZE, MAX_LENGTH)))
del _
# set up the checkpoints
checkpoint_path = "checkpoint/path/in/bucket"
ckpt_interval = 1 # checkpoint every ckpt_interval epochs
with strategy.scope():
checkpoint = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer,
epoch=start_epoch)
ckpt_manager = tf.train.CheckpointManager(checkpoint, checkpoint_path,
max_to_keep=5)
if ckpt_manager.latest_checkpoint:
checkpoint.restore(ckpt_manager.latest_checkpoint)
print('Latest checkpoint restored.')
print(f'Training will resume from epoch {start_epoch.numpy()}.')
print(f'{optimizer.iterations.numpy()} train steps have already been completed.')
###Output
Latest checkpoint restored.
Training will resume from epoch 199.
966480 train steps have already been completed.
###Markdown
Now, to train!
###Code
# define the train step and validation step functions
def train_step(target, inputs):
# forward pass
with tf.GradientTape() as tape:
predictions, _ = transformer(inputs, training=True, mask=create_mask(inputs))
loss = loss_function(target, predictions) #/ (MAX_LENGTH)
# update weights
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
# accuracy
train_accuracy(target, predictions)
train_loss(loss)
return loss
def val_step(target, inputs):
# forward pass
predictions, _ = transformer(inputs, training=False, mask=create_mask(inputs))  # no dropout during validation
loss = loss_function(target, predictions) #/ (MAX_LENGTH)
# accuracy
val_accuracy(target, predictions)
val_loss(loss)
return loss
###Output
_____no_output_____
###Markdown
Since the step functions return the loss computed at each TPU core, we need to reduce the distributed loss back into one value.
###Code
# distributed steps
@tf.function
def distributed_train_step(dataset_inp):
per_replica_losses = strategy.run(train_step, args=(dataset_inp))
final_loss = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
# get metrics
train_loss(final_loss)
return final_loss
@tf.function
def distributed_val_step(dataset_inp):
per_replica_losses = strategy.run(val_step, args=(dataset_inp))
final_loss = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
# get metrics
val_loss(final_loss)
return final_loss
###Output
_____no_output_____
###Markdown
Finally, the training loop!
###Code
# train loop
try:
for epoch in range(start_epoch.numpy(), epochs):
start = time.time()
batch_timer = time.time()
# train steps
train_loss.reset_states()
train_accuracy.reset_states()
for batch, ds_inp in enumerate(train_dist_ds):
distributed_train_step(ds_inp)
if (batch + 1) % 202 == 0 or batch == 0:
print(f"Processing Epoch {epoch} Train Batch {batch} " \
f"Loss {round(train_loss.result().numpy().item(), 6)} " \
f"Accuracy {round(train_accuracy.result().numpy().item(), 6)} " \
f"Time taken {round(time.time() - batch_timer, 2)} secs")
batch_timer = time.time()
batch_timer = time.time()
if (epoch + 1) % ckpt_interval == 0:
start_epoch.assign(epoch + 1)
print("Checkpointing...", end="")
save_path = ckpt_manager.save()
print(f"Done! Saved at {save_path}")
print(f"Epoch {epoch} "\
f"Train Loss {round(train_loss.result().numpy().item(), 6)} " \
f"Train Accuracy {round(train_accuracy.result().numpy().item(), 6)}", end=" ")
print(f"Val Loss {round(val_loss.result().numpy().item(), 6)} "\
f"Val Accuracy {round(val_accuracy.result().numpy().item(), 6)}")
print(f"Time taken for 1 epoch {round(time.time() - start, 2)} secs\n")
except KeyboardInterrupt:
print("\nKeyboard Interrupt")
print(f"{optimizer.iterations.numpy()} train steps have been computed.")
print(f"Current Train Loss {round(train_loss.result().numpy().item(), 6)} and " \
f"Train Accuracy {round(train_accuracy.result().numpy().item(), 6)} \n"
f"Current Val Loss {round(val_loss.result().numpy().item(), 6)} and "\
f"Val Accuracy {round(val_accuracy.result().numpy().item(), 6)}\n")
save = input("Save the model?\n")
if save == 'y' or save == 'yes':
model_save_path = PATH + f"Models/1920_model_{optimizer.iterations.numpy()}_train_steps.h5"
print(f"Saving at {model_save_path}...", end="")
transformer.save_weights(model_save_path)
print("Done!")
###Output
_____no_output_____
###Markdown
Saving and Loading Now that the transformer has been trained, we can save it with the ```save_weights``` function. Note that we cannot save the full model directly, as it is a custom subclassed model; only the weights are saved, and the save format must be h5.
###Code
model_save_path = PATH + "Models/"
transformer.save_weights(model_save_path + f"1920_model_{optimizer.iterations.numpy()}_train_steps.h5")
###Output
_____no_output_____
###Markdown
Now, we can load the model as follows, with or without a strategy. Since we have saved only the weights, we have to build the model before the weights can be loaded.
###Code
# create the model
transformer = TransformerDecoder(
num_layers, d_model, num_heads, dff, vocab_size, MAX_LENGTH, max_abs_position,
use_bias, dropout_rate, layernorm_eps, tie_emb
)
# build the model
_ = transformer(tf.random.uniform((2, MAX_LENGTH)))
del _
# load the weights
transformer.load_weights(model_save_path + "1920_model_1006840_train_steps.h5")
###Output
_____no_output_____
###Markdown
Generate! Once the model is trained, we have to define a decoding function for the model to autoregressively compute its outputs. Note that in order to generate outputs much quicker without having to use a distribution strategy, change to GPU runtime. It takes about 90 seconds to generate 1500 tokens.The decode function will let the model generate until it predicts an end token. Since the model can only take a fixed length of inputs at a time, this means iteratively appending and clipping the input to the model, but storing the outputs at every step.
###Code
def greedy_decode(transformer, inp, mode='categorical', temperature=1.0, k=None, skip_ends=0, memory=1000):
"""
Decodes inp greedily by appending last outputs to the input and feeding
back into the model. Model is made to generate until end token is predicted
by feeding only the last model.max_len inputs to the model at each decode step
"""
# get tokens
if not isinstance(inp, tf.Tensor) and not isinstance(inp, np.ndarray):
inp = tu.events_to_indices(inp)
if inp[0] != tu.start_token:
middle_dims = [[0, 0] for _ in range(tf.rank(inp) - 1)]
inp = tf.pad(inp, paddings=[*middle_dims, [1, 0]], constant_values=tu.start_token)
# check if temperature / k is a function
if not callable(temperature):
temperature_ = temperature; del temperature
temperature = lambda x: temperature_
if not callable(k) and k is not None:
k_ = k; del k
k = lambda x: k_
# dimension for the mask
n = tf.rank(inp) + 2 if tf.rank(inp) > 0 else 3
# make inp 2d
inp = [tf.expand_dims(inp, 0)]
# initialize attention weights in case inp.shape[-1] is already > max_len
attention_weights = {}
# maximum number of tokens to input to the model
try:
while True:
predictions, attention_weights = transformer(inp[-1], training=False,
mask=create_mask(inp[-1], n))
# divide logits by temperature
predictions /= temperature(inp[-1].shape[-1])
# get last prediction
if mode == 'argmax' or mode == 'a':
prediction = tf.expand_dims(tf.argmax(predictions[..., -1, :], axis=-1, output_type=tf.int32), 0)
elif k is not None:
top_k_final_predictions = tf.math.top_k(predictions[..., -1, :],
k=k(inp[-1].shape[-1]))
predicted_idx = tf.random.categorical(
logits=top_k_final_predictions.values,
num_samples=1,
dtype=tf.int32
)
predicted_idx = tf.squeeze(predicted_idx)
prediction = tf.expand_dims(tf.expand_dims(top_k_final_predictions.indices[0, predicted_idx], 0), 0)
elif mode == 'categorical' or mode == 'c':
prediction = tf.random.categorical(logits=predictions[..., -1, :], num_samples=1, dtype=tf.int32)
else:
print(f"Unsupported mode '{mode}'. Use 'argmax' or 'categorical'")
return None
# return if prediction is end token
if prediction == tu.end_token: #or inp[-1].shape[-1] == MAX_LENGTH:
if skip_ends <= 0:
out = tf.concat(inp, axis=-1)
return tf.squeeze(out)[1:], attention_weights
else:
skip_ends -= 1
vec = inp[-1]
inp.append(vec[:, :-memory])
# maybe i need to put the start token here so that it actually ends at 1920 positions
inp.append(vec[:, -memory:])
inp = inp[:-3] + inp[-2:]
# else concatenate last output to inp
inp[-1] = tf.concat([inp[-1], prediction], axis=-1)
except KeyboardInterrupt:
pass
out = tf.concat(inp, axis=-1)
return tf.squeeze(out)[1:], attention_weights
###Output
_____no_output_____
###Markdown
Now, before executing that function, we can also use fluidsynth and IPython.display to turn the generated output into playable .wav files. To do so, we define two new functions: one that saves the generated .midi file, converts it to a .wav file and saves that at the specified path; and a second that simply combines this with the greedy decode function.
###Code
def audiate(idx_list, path='./bloop.mid', tempo=512820, gain=1.0, sr=44100, wav=True, verbose=False):
# check path is mid or midi, set to mid, else invalid path
if path.endswith("midi"):
path = path[:-1]
elif path.endswith("mid"):
pass
else:
print("Invalid extension. Use '.mid' or '.midi'.")
return None
# create and save the midi file
print("Saving midi file...") if verbose else None
mid = tu.Listparser(index_list=idx_list, tempo=tempo)
mid.save(path)
if not wav:
print(f"Midi saved at {path}")
return None
# run the FluidSynth command - could also use font.sf2
print("Creating wav file...\n") if verbose else None
    os.system(f"fluidsynth -ni Yamaha-C5-Salamander-JNv5.1.sf2 {path} -F {path[:-4]}.wav -r {sr} -g {gain}")
return Audio(f"{path[:-4]}.wav")
def generate(transformer, inp, path='./bloop.mid', mode='categorical', temperature=1.0,
k=None, skip_ends=0, memory=1000, tempo=512820, wav=True, verbose=False):
# get the index list
if verbose:
print("Greedy decoding...", end='')
start = time.time()
idx_list, attn_weights = greedy_decode(transformer, inp, mode, temperature,
k, skip_ends, memory)
end = time.time()
print(f"Generated {len(idx_list)} tokens.", end=" ")
print(f"Time taken: {round(end - start, 2)} secs.")
else:
idx_list, attn_weights = greedy_decode(transformer, inp, mode, temperature,
k, skip_ends, memory)
# generate audio
return audiate(idx_list, path, tempo, wav=wav, verbose=verbose)
generate(transformer, ['<start>'], k=30, tempo=600000, verbose=True)
###Output
_____no_output_____ |
CustomerChurn/Customer Churn Prediction (IBM Telecom Dataset).ipynb | ###Markdown
Customer Churn Prediction Most companies would like to retain their customers, as customer acquisition is an expensive exercise, so predicting churn is important for most organizations. Churn datasets are also typically imbalanced: there are very few samples in the dataset for the class one would like to predict (the customers who churn). In this proof of concept, the dataset being referenced is the IBM Telecom dataset.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
from sklearn.preprocessing import MinMaxScaler, RobustScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier, DMatrix
from sklearn.ensemble import AdaBoostClassifier
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import NearMiss
from sklearn.metrics import recall_score, accuracy_score, confusion_matrix, cohen_kappa_score
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
###Output
Using TensorFlow backend.
###Markdown
Read the Data Set The first step is to read the data set and look at the first five rows of the data. This ensures that the data has been loaded into the Pandas dataframe.
###Code
data = pd.read_csv("data/CustomerChurn.csv")
data.head()
###Output
_____no_output_____
###Markdown
Check for Missing Values To check for missing values, we look for NaN as well as blank values. Sometimes blank values are loaded as ' '.
###Code
print("Checking for NA")
print(data.isna().sum())
print("#######################################################")
print("Checking for Blank Data")
print(data.isin([' ','',' ']).sum())
###Output
Checking for NA
customerID 0
gender 0
SeniorCitizen 0
Partner 0
Dependents 0
tenure 0
PhoneService 0
MultipleLines 0
InternetService 0
OnlineSecurity 0
OnlineBackup 0
DeviceProtection 0
TechSupport 0
StreamingTV 0
StreamingMovies 0
Contract 0
PaperlessBilling 0
PaymentMethod 0
MonthlyCharges 0
TotalCharges 0
Churn 0
dtype: int64
#######################################################
Checking for Blank Data
customerID 0
gender 0
SeniorCitizen 0
Partner 0
Dependents 0
tenure 0
PhoneService 0
MultipleLines 0
InternetService 0
OnlineSecurity 0
OnlineBackup 0
DeviceProtection 0
TechSupport 0
StreamingTV 0
StreamingMovies 0
Contract 0
PaperlessBilling 0
PaymentMethod 0
MonthlyCharges 0
TotalCharges 11
Churn 0
dtype: int64
###Markdown
From the above analysis, 11 records have missing data for TotalCharges. However, since we have both MonthlyCharges and TotalCharges, we decide to keep only the MonthlyCharges variable. Now we need to check the unique values of each column and their counts. For this we define a local helper function to compute value counts.
###Code
## Print value Counts
def value_counts(df):
colnms= df.columns
for cnm in colnms:
print("Column :" + cnm)
print(str(round(df[cnm].value_counts()/len(df)*100)))
value_counts(data.drop(['MonthlyCharges','customerID','TotalCharges','tenure'],axis=1))
###Output
Column :gender
Male 50.0
Female 50.0
Name: gender, dtype: float64
Column :SeniorCitizen
0 84.0
1 16.0
Name: SeniorCitizen, dtype: float64
Column :Partner
No 52.0
Yes 48.0
Name: Partner, dtype: float64
Column :Dependents
No 70.0
Yes 30.0
Name: Dependents, dtype: float64
Column :PhoneService
Yes 90.0
No 10.0
Name: PhoneService, dtype: float64
Column :MultipleLines
No 48.0
Yes 42.0
No phone service 10.0
Name: MultipleLines, dtype: float64
Column :InternetService
Fiber optic 44.0
DSL 34.0
No 22.0
Name: InternetService, dtype: float64
Column :OnlineSecurity
No 50.0
Yes 29.0
No internet service 22.0
Name: OnlineSecurity, dtype: float64
Column :OnlineBackup
No 44.0
Yes 34.0
No internet service 22.0
Name: OnlineBackup, dtype: float64
Column :DeviceProtection
No 44.0
Yes 34.0
No internet service 22.0
Name: DeviceProtection, dtype: float64
Column :TechSupport
No 49.0
Yes 29.0
No internet service 22.0
Name: TechSupport, dtype: float64
Column :StreamingTV
No 40.0
Yes 38.0
No internet service 22.0
Name: StreamingTV, dtype: float64
Column :StreamingMovies
No 40.0
Yes 39.0
No internet service 22.0
Name: StreamingMovies, dtype: float64
Column :Contract
Month-to-month 55.0
Two year 24.0
One year 21.0
Name: Contract, dtype: float64
Column :PaperlessBilling
Yes 59.0
No 41.0
Name: PaperlessBilling, dtype: float64
Column :PaymentMethod
Electronic check 34.0
Mailed check 23.0
Bank transfer (automatic) 22.0
Credit card (automatic) 22.0
Name: PaymentMethod, dtype: float64
Column :Churn
No 73.0
Yes 27.0
Name: Churn, dtype: float64
###Markdown
We see that while most categorical variables are coded as Yes/No, SeniorCitizen is coded as 0 and 1, so we map 0 to No and 1 to Yes. We also shorten the multi-word category labels (such as 'No internet service') so that they produce cleaner dummy-variable names later.
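The cells below do this column by column with explicit `map()` calls. A more compact equivalent for the service columns plus SeniorCitizen (just a sketch, assuming the same column names as in the value counts above; the remaining columns would still need their own renames) would be a single `replace`:

```python
service_cols = ['MultipleLines', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection',
                'TechSupport', 'StreamingTV', 'StreamingMovies']
data[service_cols] = data[service_cols].replace(
    {'No internet service': 'NoInternetService', 'No phone service': 'NoPhoneService'})
data['SeniorCitizen'] = data['SeniorCitizen'].map({0: 'No', 1: 'Yes'})
```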
###Code
print(data['SeniorCitizen'].value_counts()/len(data)*100)
data['SeniorCitizen']= data['SeniorCitizen'].map({0: 'No', 1: 'Yes'})
print(data.head())
print(data['MultipleLines'].value_counts())
data['MultipleLines'] = data['MultipleLines'].map({'No': 'No','Yes': 'Yes','No phone service': 'NoPhoneService'})
print(data['MultipleLines'].value_counts())
print(data['InternetService'].value_counts())
data['InternetService'] = data['InternetService'].map({'No': 'No','DSL': 'DSL','Fiber optic': 'FiberOptic'})
print(data['InternetService'].value_counts())
print(data['OnlineSecurity'].value_counts())
data['OnlineSecurity'] = data['OnlineSecurity'].map({'No': 'No','Yes': 'Yes','No internet service': 'NoInternetService'})
print(data['OnlineSecurity'].value_counts())
print(data['OnlineBackup'].value_counts())
data['OnlineBackup'] = data['OnlineBackup'].map({'No': 'No','Yes': 'Yes','No internet service': 'NoInternetService'})
print(data['OnlineBackup'].value_counts())
print(data['DeviceProtection'].value_counts())
data['DeviceProtection'] = data['DeviceProtection'].map({'No': 'No','Yes': 'Yes','No internet service': 'NoInternetService'})
print(data['DeviceProtection'].value_counts())
print(data['TechSupport'].value_counts())
data['TechSupport'] = data['TechSupport'].map({'No': 'No','Yes': 'Yes','No internet service': 'NoInternetService'})
print(data['TechSupport'].value_counts())
print(data['StreamingTV'].value_counts())
data['StreamingTV'] = data['StreamingTV'].map({'No': 'No','Yes': 'Yes','No internet service': 'NoInternetService'})
print(data['StreamingTV'].value_counts())
print(data['StreamingMovies'].value_counts())
data['StreamingMovies'] = data['StreamingMovies'].map({'No': 'No','Yes': 'Yes','No internet service': 'NoInternetService'})
print(data['StreamingMovies'].value_counts())
print(data['Contract'].value_counts())
data['Contract'] = data['Contract'].map({'Month-to-month':'M2M','Two year': 'TwoYear','One year': 'OneYear'})
print(data['Contract'].value_counts())
data['PaperlessBilling'].value_counts()
print(data['PaymentMethod'].value_counts())
data['PaymentMethod'] = data['PaymentMethod'].map({'Electronic check': 'ElectronicChk','Mailed check': 'MailedChk','Bank transfer (automatic)': 'BankTransferAuto','Credit card (automatic)': 'CreditCardAuto'})
print(data['PaymentMethod'].value_counts())
###Output
Electronic check 2365
Mailed check 1612
Bank transfer (automatic) 1544
Credit card (automatic) 1522
Name: PaymentMethod, dtype: int64
ElectronicChk 2365
MailedChk 1612
BankTransferAuto 1544
CreditCardAuto 1522
Name: PaymentMethod, dtype: int64
###Markdown
Additional Features Added/Dropped We see that PaymentMethod carries information on automatic payments. This can be extracted as an additional variable.
###Code
data['Automatic'] = data['PaymentMethod'].apply(lambda x: "Yes" if x in ['BankTransferAuto','CreditCardAuto'] else "No")
###Output
_____no_output_____
###Markdown
Tenure is a continuous variable, so it is best to bin it. Here we bin it into yearly bands (up to 1 year, 1-2 years, and so on).
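pandas can also do this binning in one call with `pd.cut`; a sketch equivalent to the hand-written `processTenure` function below (the bin edges mirror its `<=` comparisons):

```python
bins = [0, 12, 24, 36, 48, 60, np.inf]
labels = ['LT12M', 'BT1Y2Y', 'BT2Y3Y', 'BT3Y4Y', 'BT4Y5Y', 'GT5Y']
data['TenureBinned'] = pd.cut(data['tenure'], bins=bins, labels=labels,
                              include_lowest=True).astype(str)
```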
###Code
def processTenure(x):
if x<=12:
return("LT12M")
elif x>12 and x<=24:
return("BT1Y2Y")
elif x>24 and x<=36:
return("BT2Y3Y")
elif x>36 and x<=48:
return("BT3Y4Y")
elif x>48 and x<=60:
return("BT4Y5Y")
else:
return("GT5Y")
data['TenureBinned'] = data.tenure.apply(lambda x: processTenure(int(x)))
## Scale the Monthly Charges variable using RobustScaler
scaler = RobustScaler()
data['MonthlyCharges'] = scaler.fit_transform(data['MonthlyCharges'].values.reshape(-1,1))
###Output
_____no_output_____
###Markdown
Convert the categorical variables into Dummy variables
###Code
data= pd.get_dummies(data=data, columns = ['gender', 'SeniorCitizen', 'Partner', 'Dependents', 'PhoneService', 'MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport','StreamingTV', 'StreamingMovies', 'Contract', 'PaperlessBilling','PaymentMethod','Automatic','TenureBinned', 'Churn'],drop_first=True)
data.drop(['customerID','TotalCharges','tenure'],axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Splitting the Data into training and testing
###Code
X = data.drop(['Churn_Yes'],axis=1).values
y = data[['Churn_Yes']].values
X_train, X_test,y_train, y_test = train_test_split(X,y,random_state=100, stratify=y)
###Output
_____no_output_____
###Markdown
Model Development Here we create a single function for building and evaluating models. It takes the training and testing inputs and a model type (Logistic Regression, Decision Tree, Random Forest, Support Vector Machine, XGBoost, AdaBoost or a small Keras neural network). Since there is a class-imbalance problem, we use two sampling methods: over-sampling (SMOTE) and under-sampling (NearMiss). The function returns accuracy, recall, the confusion matrix and Cohen's kappa score.
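A third option for the imbalance, not used in the function below, is to leave the data alone and re-weight the classes inside the estimator. A minimal sketch with scikit-learn's `class_weight`, reusing the `X_train`/`y_train` split created above:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# 'balanced' weights each class inversely to its frequency in the training data
clf = LogisticRegression(class_weight='balanced', max_iter=1000)
clf.fit(X_train, y_train.ravel())
print(recall_score(y_test.ravel(), clf.predict(X_test)))
```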
###Code
def predictionF(X_tr,y_tr,X_tt,y_tt,model_type, imb_method='SMOTE'):
if imb_method == 'SMOTE':
sm = SMOTE()
X_train,y_train = sm.fit_sample(X=X_tr,y=y_tr)
elif imb_method == 'NEARMISS':
nm = NearMiss()
X_train,y_train = nm.fit_sample(X_tr,y_tr)
if model_type == 'LOGREG':
model = LogisticRegression()
elif model_type == 'DECISIONTREE':
model = DecisionTreeClassifier()
elif model_type == 'RANDOMFOREST':
model = RandomForestClassifier()
elif model_type == 'SVM':
model = SVC()
elif model_type == 'XGBOOST':
model = XGBClassifier()
elif model_type == 'ADABOOST':
model = AdaBoostClassifier()
elif model_type == 'NEURALNET':
model=Sequential()
model.add(Dense(60, input_dim=X_train.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(30, kernel_initializer='normal', activation='relu'))
model.add(Dense(10, kernel_initializer='normal', activation='tanh'))
model.add(Dense(10, kernel_initializer='normal', activation='tanh'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
else:
print("Invalid Model Type")
return()
if model_type == 'NEURALNET':
results = model.fit(X_train, y_train, epochs= 50, batch_size = 500, validation_data = (X_tt, y_tt),verbose=False)
ypred = model.predict(X_tt)[:,0]
ypred = np.round(ypred)
else:
model.fit(X_train,y_train)
ypred = model.predict(X_tt)
accuracy = accuracy_score(y_pred=ypred, y_true=y_tt)
recall = recall_score(y_pred=ypred, y_true=y_tt)
conf_matrix = confusion_matrix(y_pred=ypred, y_true=y_tt)
coh_kappa= cohen_kappa_score(y1=ypred, y2=y_tt)
return(accuracy, recall, conf_matrix,coh_kappa)
## Run Logistic Regression
acc_lm_sm, recall_lm_sm, confm_lm_sm, coh_lm_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='LOGREG')
acc_lm_nm, recall_lm_nm, confm_lm_nm, coh_lm_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='LOGREG')
## Run Decision Trees
acc_dt_sm, recall_dt_sm, confm_dt_sm, coh_dt_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='DECISIONTREE')
acc_dt_nm, recall_dt_nm, confm_dt_nm, coh_dt_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='DECISIONTREE')
acc_rf_sm, recall_rf_sm, confm_rf_sm, coh_rf_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='RANDOMFOREST')
acc_rf_nm, recall_rf_nm, confm_rf_nm, coh_rf_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='RANDOMFOREST')
acc_sv_sm, recall_sv_sm, confm_sv_sm, coh_sv_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='SVM')
acc_sv_nm, recall_sv_nm, confm_sv_nm, coh_sv_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='SVM')
acc_xg_sm, recall_xg_sm, confm_xg_sm, coh_xg_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='XGBOOST')
acc_xg_nm, recall_xg_nm, confm_xg_nm, coh_xg_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='XGBOOST')
acc_ad_sm, recall_ad_sm, confm_ad_sm, coh_ad_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='ADABOOST')
acc_ad_nm, recall_ad_nm, confm_ad_nm, coh_ad_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='ADABOOST')
acc_nn_sm, recall_nn_sm, confm_nn_sm, coh_nn_sm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='SMOTE',model_type='NEURALNET')
acc_nn_nm, recall_nn_nm, confm_nn_nm, coh_nn_nm = predictionF(X_tr=X_train,y_tr=y_train, X_tt=X_test, y_tt=y_test, imb_method='NEARMISS',model_type='NEURALNET')
comparisondf = pd.DataFrame(data= [[acc_lm_sm, acc_lm_nm, recall_lm_sm, recall_lm_nm, coh_lm_sm, coh_lm_nm],
[acc_dt_sm,acc_dt_nm, recall_dt_sm, recall_dt_nm, coh_dt_sm, coh_dt_nm],
[acc_rf_sm,acc_rf_nm, recall_rf_sm, recall_rf_nm, coh_rf_sm, coh_rf_nm],
[acc_sv_sm,acc_sv_nm, recall_sv_sm, recall_sv_nm, coh_sv_sm, coh_sv_nm],
[acc_xg_sm,acc_xg_nm, recall_xg_sm, recall_xg_nm, coh_xg_sm, coh_xg_nm],
[acc_ad_sm,acc_ad_nm, recall_ad_sm, recall_ad_nm, coh_ad_sm, coh_ad_nm],
[acc_nn_sm,acc_nn_nm, recall_nn_sm, recall_nn_nm, coh_nn_sm, coh_nn_nm]], columns=['ACCURACY_SMOTE','ACCURACY_NEARMISS', 'RECALL_SMOTE','RECALL_NEARMISS','COHEN_KAPPA_SMOTE','COHEN_KAPPA_NEARMISS'], index=['LOGREG','DECISIONTREE','RANDOMFOREST','SVC','XGBOOST','ADABOOST','NEURALNET'])
print(acc_lm_sm)
print(acc_lm_nm)
print(recall_lm_sm)
print(recall_lm_nm)
round(comparisondf.iloc[:,0:4]*100)
###Output
_____no_output_____ |
blender.ipynb | ###Markdown
[Oregon Curriculum Network](http://4dsolutions.net/ocn/) The School of Tomorrow [Home Page](School_of_Tomorrow.ipynb) (scroll down for embedded source code and more remarks about its runtime context) Blender [View on nbviewer](https://nbviewer.jupyter.org/github/4dsolutions/School_of_Tomorrow/blob/master/blender.ipynb)

When it comes to XYZ coordinates and 3D graphics, what tool might we use? The answer is pretty obvious: [Blender](http://blender.org) of course. But then do you have the Personal Workspace (PWS) you need, equipped with the necessary hardware? Perhaps you're not interested enough to want a dedicated workspace all your own and that's OK. A friend might let you test drive from time to time. Check it out. Try (test the waters) before you buy (dive in). One theme we might take up within Blender, are the internal and external furnishings of this space to learn Blender, among other topics. What does your School of Tomorrow look like, inside (concave) and from without (convex)?

Blender is free open source so don't take this as a sales pitch in the conventional sense. I'm not selling Blender. I am selling using Quadrays inside of Blender, why not? The code below is about four arrows, or rays, of equal length, pointing from a common origin, the center of a regular tetrahedron. These rays are labeled (1,0,0,0) (0,1,0,0) (0,0,1,0) (0,0,0,1) and may be called Quadrays, quad meaning four, and there are four of them. [Check Wikipedia](https://mybizmo.blogspot.com/2020/04/comparing-two-pr-films.html) for more details.

We use Quadrays quite a bit at the School of Tomorrow, because twelve additive combinations of two of one, and one of two others, none of one last, give the 12 directions to neighboring ball centers, imagining twelve equi-diameter balls around a nucleus. In fact, our regular tetrahedron is one of those connecting any four inter-tangent balls in such a packing, which, as it expands on out (12, 42, 92, 162...) begets the CCP or Cubic Close Packing, also known as the FCC or [Face Centered Cubic Packing](https://github.com/4dsolutions/Python5/blob/master/Generating%20the%20FCC.ipynb). You may be wondering what a cube has to do with it, and we'll get to that in the course of our studies.

```python
import bpy
from qrays import Qvector
from itertools import permutations

g = permutations((2,1,1,0))
ORIGIN_IVM = Qvector((0,0,0,0))

# set comprehension in list comprehension
SPOKES_IVM = [Qvector(v) for v in {p for p in g}]

ORIGIN_XYZ = ORIGIN_IVM.xyz().xyz  # (0,0,0)

c6xty_ball = bpy.ops.mesh.primitive_ico_sphere_add
c6xty_ball(radius=0.5, enter_editmode=False, location=ORIGIN_XYZ)
bpy.ops.object.shade_smooth()

for qv in SPOKES_IVM:
    xyz = qv.xyz().xyz
    c6xty_ball(radius=0.5, enter_editmode=False, location=xyz)
    bpy.ops.object.shade_smooth()
```

In the Youtube below, I rant about the "Concentric Hierarchy" which "my generation did not deign to share". If you're not sure what I'm talking about, you must be new to the School of Tomorrow, as it's what's at the core of the Bucky stuff, as the source of the domes and his thinking in general (see [Synergetics: The Invention Behind the Inventions](http://www.4dsolutions.net/synergetica/synergetica1.html)). Who am I? [Some autobio](https://medium.com/@kirbyurner/adventures-in-math-teaching-54228496dbbf). Given the context (namespace) of this School, of course a primary use of Blender is to bring forward [the work of previous decades](http://www.4dsolutions.net/ocn/cp4e.html), which focused on VRML, X3D, Vpython, POV-Ray as strategies for rendering the 3D graphics. 
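(A quick sanity check that runs anywhere, no `bpy` required: the SPOKES_IVM built in the listing above really do number twelve, since the distinct permutations of (2,1,1,0) come to 4!/2! = 12.)

```python
from itertools import permutations
print(len({p for p in permutations((2, 1, 1, 0))}))   # 12
```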
In particular, I'm continuing recent work on the Flextegrity Lattice here at the School of Tomorrow. Blender is the most capable tool yet, and you might wonder why I've been slow to bake it into this curriculum. As students, you get to be the next faculty, meaning take what you've learned from others and value add, perform the alchemy of synergy to advance and enhance our curriculum of tomorrow.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo("w9OU6B-qvjA")
YouTubeVideo("D1nw1PH4wjs") # no sound
YouTubeVideo("lrr4InSMX2E") # no sound
###Output
_____no_output_____
###Markdown
Don't expect the above source code to "just work" in this context, here in a Jupyter Notebook on Github or one of those. I'm memorializing the above as my first bona fide Blender program, of June 1, 2020. Let me curate some of the excellent Youtubes that helped me build up a head of steam...
###Code
YouTubeVideo("rHzf3Dku_cE")
YouTubeVideo("hfYgCwC_4iE")
###Output
_____no_output_____
###Markdown
My immediate application has been to continue the work with POV-Ray (ray tracer) and Rhino (CAD). Blender has the best Python integration of anything I've yet used. I've not had occasion to use the ESRI product line up close, even if I got to lead a training for some of its people that time. My Python framework developed around [making animated GIFs about Flextegrity](Flextegrity_Lattice.ipynb), part of our School's literature, may now be adapted for use within Blender. The code above uses the techniques I share in [Generating the FCC](https://github.com/4dsolutions/Python5/blob/master/Generating%20the%20FCC.ipynb). Looking ahead, [I see using pandas and sqlite3](https://blender.stackexchange.com/questions/51067/using-anaconda-python-3-in-blender-winx64) from inside Blender why not? The work we do around databases and polyhedrons will pay off inside this capable "3D" environment (using the namespace of 3D XYZ, versus the 4D of the IVM namespace).
###Code
YouTubeVideo("axyCPw6GAzI") # "dimension" in Synergetics
###Output
_____no_output_____
###Markdown
The code below results in an exception raised, no surprise there, as we're not inside Blender. You'll be able to fire up Blender's scripting window and run it from there, provided you have access to the Quadray Coordinates dependency (```qrays.py```). If you run this script, don't forget to change the path to wherever you decide to put your [qrays.py](qrays.py), available here at the school repo. You'll notice though, that these mesh icosaspheres have no materiality nor texture. The whole business of treating surfaces with invented textures and materials comprises a huge piece of what Blender is all about, especially its procedural node system. I link to a good teacher of Blender's procedural nodes below.

```python
import bpy
import sys
sys.path.append("C:\\Users\\Kirby\\School_of_Tomorrow")

from qrays import Qvector
from functools import partial
from itertools import permutations

g = permutations((2,1,1,0))
ORIGIN_IVM = Qvector((0,0,0,0))

# set comprehension in list comprehension
SPOKES_IVM = [Qvector(v) for v in {p for p in g}]

nucleus = tuple([ORIGIN_IVM])

def next_layer(curr_layer, prev_layer):
    """
    generates a next layer of FCC spheres by trying 12-around-1
    for each in the current layer (curr_layer) but without keeping
    any duplicates i.e. discarding redundant sphere centers.
    """
    next_layer = set()
    for qv in curr_layer:
        for bv in SPOKES_IVM:
            v_sum = qv + bv
            if (not v_sum in curr_layer and not v_sum in prev_layer):
                next_layer.add(v_sum)
    return sorted(list(next_layer))

nl    = next_layer(nucleus, nucleus)  # 1-freq
nnl   = next_layer(nl, nucleus)       # 2-freq
nnnl  = next_layer(nnl, nl)           # 3-freq
nnnnl = next_layer(nnnl, nnl)         # 4-freq

def get_xyz(qvectors):
    xyz_vectors = []
    for qv in qvectors:
        xyz_vectors.append(qv.xyz())
    return xyz_vectors

c6xty_ball = bpy.ops.mesh.primitive_ico_sphere_add
c6xty_ball = partial(bpy.ops.mesh.primitive_ico_sphere_add,
                     subdivisions=1, radius=0.5, enter_editmode=False)

for ball in get_xyz(nl):
    c6xty_ball(location=ball.xyz)
for ball in get_xyz(nnl):
    c6xty_ball(location=ball.xyz)
for ball in get_xyz(nnnl):
    c6xty_ball(location=ball.xyz)
for ball in get_xyz(nnnnl):
    c6xty_ball(location=ball.xyz)
```
###Code
YouTubeVideo("O3gLBhC353Y")
YouTubeVideo("qJEWOTZnFeg")
YouTubeVideo("WSQFt1Nruns")
###Output
_____no_output_____
###Markdown
###Code
#@title Blender Parameters
renderer = "CYCLES" #@param ["CYCLES", "BLENDER_RENDER"]
frame = 1 #@param {type: "number"}
###Output
_____no_output_____
###Markdown
**First upload .blend and script file!!!**
###Code
!apt install blender
!apt install libboost-all-dev
!apt install libgl1-mesa-dev
!mkdir Dcs
!mkdir Boards
!blender -b Chess\ Scene.blend -noaudio -o ./test_ -E $renderer -x 1 -P script.py
!zip -r Zipped_Boards.zip Boards/
from google.colab import files
files.download("Zipped_Boards.zip")
###Output
_____no_output_____ |
projeto2/genetic_algorithm.ipynb | ###Markdown
To do: Report; Documentation
###Code
# import das bibliotecas
import os
import pandas as pd
import random
import time
from greedy_filter import *
from math import inf
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
# definicao das constantes
PATH_EXCEL = os.getcwd() + '/excel_files/'
###Output
_____no_output_____
###Markdown
Structures that will store the dataframes with the stocks' price history. 1. dict_excels: dictionary where the key is the stock name and the value is the corresponding dataframe 2. filenames: list with the file names 3. excels: list with the dataframes
###Code
dict_excels = {}
filenames = []
excels = []
for filename in os.listdir(PATH_EXCEL):
filenames.append(filename[:filename.find('.')])
excels.append(pd.read_excel(PATH_EXCEL + filename).sort_values(by=['Exchange Date']).reset_index(drop=True))
dict_excels[filename[:filename.find('.')]] = (pd.read_excel(PATH_EXCEL + filename).sort_values(by=['Exchange Date']).reset_index(drop=True))
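# quick look at what was loaded (output depends on whichever spreadsheets are
# present in excel_files/): the file names found and the first rows of one history
print(filenames)
print(excels[0].head())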
###Output
_____no_output_____
###Markdown
Modeling the problem Genetic Algorithm 1 - Choosing the Parameters of the Filter Rules 1. Gene: each one of the filter parameters 1. x - variation percentage: percentage above/below the last rise or fall 1. ranges from 0 to 1 2. ranges from 0.01 to 0.10 2. h - hold days: after a buy/sell signal we wait for h days 1. ranges from 1 up to the number of days between the current date and the last date with information 2. ranges from 1 to 30 3. d - delay days: after receiving a signal, the next d days are ignored 1. ranges from 1 up to the number of days between the current date and the last date with information 2. ranges from 1 to 30 4. p - previous days: we look at the last p days to make a decision 1. ranges from 0 up to the number of days elapsed so far 2. ranges from 30 up to the maximum number of days 2. Chromosome: set of parameters 1. Structure used: list with the parameters → [x, h, d, p] 3. Population: set of chromosomes 1. Population size: 4. Mutation: 1. mutation rate: 5. Crossover: 1. crossover rate: 6. Stopping criterion: 7. Selection: 1. Fitness: computation of the profit obtained by each chromosome 2. Selection technique: 1. Technique 1: 2. Technique 2: Functions to perform the crossover between two chromosomes
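Before the implementation, a concrete sketch of what a single-point cut over these four-gene lists produces (the numbers are just illustrative):

```python
# two parents in the [x, h, d, p] format described above, cut after gene 2
parent1 = [0.03, 12, 5, 210]
parent2 = [0.07, 25, 18, 90]
cut = 2
child1 = parent1[:cut] + parent2[cut:]   # [0.03, 12, 18, 90]
child2 = parent2[:cut] + parent1[cut:]   # [0.07, 25, 5, 210]
print(child1, child2)
```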
###Code
def crossover(chromosome1, chromosome2):
"""
    Function to perform the crossover between two chromosomes
    :param: chromosome1 - father chromosome
    :param: chromosome2 - mother chromosome
    :return: new_chromosome1 - first child generated by the crossover
    :return: new_chromosome2 - second child generated by the crossover
    """
    # choose a random gene position (cut point) for the crossover
    rand = random.randint(1,len(chromosome1) - 1)
    # generate the two new chromosomes
new_chromosome1 = chromosome1[:rand] + chromosome2[rand:]
new_chromosome2 = chromosome2[:rand] + chromosome1[rand:]
return new_chromosome1, new_chromosome2
def crossover2(chromosome1, chromosome2):
"""
    Function to perform the crossover between two chromosomes with 2 cut points
    :param: chromosome1 - father chromosome
    :param: chromosome2 - mother chromosome
    :return: new_chromosome1 - first child generated by the crossover
    :return: new_chromosome2 - second child generated by the crossover
    """
    # choose two random cut points for the crossover
    rand = random.randint(1,len(chromosome1) - 2)
    rand2 = random.randint(rand+1, len(chromosome1) - 1)
    # generate the two new chromosomes
new_chromosome1 = chromosome1[:rand] + chromosome2[rand:rand2] + chromosome1[rand2:]
new_chromosome2 = chromosome2[:rand] + chromosome1[rand:rand2] + chromosome2[rand2:]
return new_chromosome1, new_chromosome2
###Output
_____no_output_____
###Markdown
Functions to perform the mutation of a chromosome
###Code
def mutation(chromossome):
"""
    Function to perform the mutation of a given chromosome
    :param: chromossome - chromosome that will go through the mutation process
    :return: new_chromossome - new chromosome after the mutation process
    """
    # copy the original chromosome (slice copy, so the original is not modified in place)
    new_chromossome = chromossome[:]
    # randomly choose a range of genes to be changed
    gene_initial_position = random.randint(0,len(chromossome) - 1)
    gene_final_position = random.randint(gene_initial_position, len(chromossome) - 1)
    # modify the chosen genes, the only rule being the range of values each one may assume
for i in range(gene_initial_position, gene_final_position + 1):
if i == 0:
new_chromossome[0] = random.uniform(0.01,0.10)
elif i == 1:
new_chromossome[1] = random.randint(1,30)
elif i == 2:
new_chromossome[2] = random.randint(1,30)
elif i == 3:
new_chromossome[3] = random.randint(30,500)
else:
raise Exception('Gene inexistente no cromossomo!')
return new_chromossome
def mutation_v2(chromossome):
"""
    Function to perform the mutation of a given chromosome (single-gene version)
    :param: chromossome - chromosome that will go through the mutation process
    :return: new_chromossome - new chromosome after the mutation process
    """
    # copy the original chromosome (slice copy, so the original is not modified in place)
    new_chromossome = chromossome[:]
    # randomly choose a single gene to be changed
    gene_position = random.randint(0,len(chromossome) - 1)
    # modify the chosen gene, the only rule being the range of values it may assume
if gene_position == 0:
new_chromossome[0] = random.uniform(0.01,0.10)
elif gene_position == 1:
new_chromossome[1] = random.randint(1,30)
elif gene_position == 2:
new_chromossome[2] = random.randint(1,30)
elif gene_position == 3:
new_chromossome[3] = random.randint(30,500)
else:
raise Exception('Gene inexistente no cromossomo!')
return new_chromossome
###Output
_____no_output_____
###Markdown
Function to perform the random initialization of a population
###Code
def create_population(population_size):
"""
    Function to create a random population of chromosomes
    :param: population_size - size of the population to be created
    :return: population - the new population
"""
population = []
for i in range(0, population_size):
population.append([random.uniform(0,0.1), random.randint(1,30), random.randint(1,30), random.randint(1,500)])
return population
def fitness(np_array, chromossomes, budget):
"""
    Function to compute the profit of each chromosome of a given population
    :param: np_array - data (price history) of a given stock
    :param: chromossomes - list of chromosomes to be evaluated
    :param: budget - initial amount of money of the problem
    :return: fit_chromossomes - matrix with the chromosomes and the profit (as a percentage) obtained by each one
"""
fit_chromossomes = []
for chromossome in chromossomes:
money = greedy_filter_rule(np_array, chromossome, budget)
fit_chromossomes.append([chromossome, (money-budget)/budget])
return fit_chromossomes
def selection(stock_value, list_chromossomes, budget, cut_size):
"""
    :param: stock_value - price history (numpy array) of the stock
    :param: list_chromossomes - chromosomes to be evaluated
    :param: budget - initial amount of money
    :param: cut_size - number of chromosomes kept for the next generation
    :return: new_generation - the cut_size chromosomes with the highest fitness
    :return: fitness_array - those chromosomes together with their fitness values
"""
fitness_array = fitness(stock_value, list_chromossomes, budget)
fitness_array.sort(key=lambda x: x[1], reverse = True)
new_generation = []
for i in range (0,cut_size):
new_generation.append(fitness_array[i][0])
return new_generation, fitness_array[:cut_size]
def roulette_selection(stock_value, list_chromossomes, budget, cut_size):
"""
    :param: stock_value - price history (numpy array) of the stock
    :param: list_chromossomes - chromosomes to be evaluated
    :param: budget - initial amount of money
    :param: cut_size - number of chromosomes drawn by the roulette wheel
    :return: new_population - chromosomes sampled with probability proportional to their fitness
    :return: the fitness of the sampled population
"""
fitness_array = fitness(stock_value, list_chromossomes, budget)
# fitness_array = linear_normalization(fitness_array)
adds_skills = 0
for fit in fitness_array:
adds_skills = adds_skills + fit[1]
new_population = []
for i in range(cut_size):
r = random.uniform(0, adds_skills)
temp_soma = 0
for fit in fitness_array:
temp_soma = temp_soma + fit[1]
if temp_soma >= r:
new_population.append(fit[0])
break
return new_population, fitness(stock_value, new_population, budget)
def stop_criterion(old_population, new_population, limit_to_converge):
"""
    :param: old_population - population (with fitness) at the start of the iteration
    :param: new_population - population (with fitness) at the end of the iteration
    :param: limit_to_converge - threshold below which we consider that both populations have converged
    :return: True if the algorithm should stop, False otherwise
"""
soma_old = 0
for x in old_population:
soma_old = soma_old + x[1]
soma_new = 0
for x in new_population:
soma_new = soma_new + x[1]
media_old = soma_old / len(old_population)
media_new = soma_new / len(new_population)
if abs(media_new - media_old) < limit_to_converge:
return True
else:
return False
def generate_children(old_generation, crossover_function, crossover_rate):
"""
:param: old_generation -
:param: crossover_function -
:param: crossover_rate -
:return: children -
"""
size_generation = len(old_generation)
number_to_crossover = int(size_generation * crossover_rate)
random.shuffle(old_generation)
children = []
for i in range (number_to_crossover):
for j in range (i+1, number_to_crossover):
new_chromossome1, new_chromossome2 = crossover_function(old_generation[i], old_generation[j])
children.append(new_chromossome1)
children.append(new_chromossome2)
return children
def mutation_chromossome(chromossomes, mutation_function, mutation_rate):
"""
:param: chromossomes -
:param: mutation_function -
:param: mutation_rate -
:return: chromossomes -
"""
number_chromossomes_to_mutate = int(len(chromossomes) * mutation_rate)
random.shuffle(chromossomes)
for i in range (0,number_chromossomes_to_mutate):
chromossomes[i] = mutation_function(chromossomes[i])
return chromossomes
def evolutionary_strategy1(stock_values, first_population, budget, crossover_function, delta, mutation_rate, crossover_rate, min_iteration_converge):
"""
    :param: stock_values - price history (numpy array) of the stock
    :param: first_population - initial population of chromosomes
    :param: budget - initial amount of money
    :param: crossover_function - crossover function to be used (crossover or crossover2)
    :param: delta - convergence threshold on the change of the mean fitness
    :return: old_population - final population after convergence (selected by elitist truncation)
"""
flag = False
iteration = 0
old_population = first_population
while (not flag):
fitness_old_population = fitness(stock_values, old_population, budget)
children = generate_children(first_population, crossover_function, crossover_rate)
parents_and_children = old_population + children
chromossomes_parents_children_mutated = mutation_chromossome(parents_and_children, mutation_v2, mutation_rate)
new_population, fitness_new_population = selection(stock_values, chromossomes_parents_children_mutated, budget, len(old_population))
flag = (stop_criterion(fitness_old_population, fitness_new_population, delta) and iteration > min_iteration_converge)
iteration = iteration + 1
old_population = new_population
return old_population
def linear_normalization(fitness_population, increment=20):
pop_size = len(fitness_population)
fitness_population = sorted(fitness_population, key=lambda x: x[1], reverse = False)
min_value = 1
max_value = 201
normalized_fitness = []
for i in range(pop_size):
temp_fit = min_value + ((max_value - min_value)/(pop_size - 1)) * (i)
normalized_fitness.append([fitness_population[i][0],temp_fit])
return normalized_fitness
def evolutionary_strategy2(stock_values, first_population, budget, crossover_function, mutation_func,delta, mutation_rate, crossover_rate, min_iteration_converge):
"""
    :param: stock_values - price history (numpy array) of the stock
    :param: first_population - initial population of chromosomes
    :param: budget - initial amount of money
    :param: crossover_function - crossover function to be used (crossover or crossover2)
    :param: mutation_func - mutation function to be used (mutation or mutation_v2)
    :param: delta - convergence threshold on the change of the mean fitness
    :param: mutation_rate - fraction of the chromosomes that go through mutation
    :param: crossover_rate - fraction of the population used to generate children
    :param: min_iteration_converge - minimum number of iterations before convergence is tested
    :return: old_population - final population after convergence (roulette-wheel selection)
"""
flag = False
iteration = 0
old_population = first_population
while (not flag):
fitness_old_population = fitness(stock_values, old_population, budget)
children = generate_children(first_population, crossover_function, crossover_rate)
parents_and_children = old_population + children
chromossomes_parents_children_mutated = mutation_chromossome(parents_and_children, mutation_func, mutation_rate)
chromossomes_parents_children_mutated = random.sample(chromossomes_parents_children_mutated, len(chromossomes_parents_children_mutated))
new_population, fitness_new_population = roulette_selection(stock_values, chromossomes_parents_children_mutated, budget, len(old_population))
flag = (stop_criterion(fitness_old_population, fitness_new_population, delta) and iteration > min_iteration_converge)
iteration = iteration + 1
old_population = new_population
return old_population
###Output
_____no_output_____
###Markdown
Let's play First set of tests Variables that will be used for the first population For the Alphabet stock the following will be tested: 1. Fixed data: 1. Budget → 10,000.00 2. Delta to consider convergence → 0.01 2. Variable data: 1. Population: 1. For a population of 100 and another of 1000, we test: 1. Two crossover functions: 1. Crossover rate → 0.3 2. Crossover rate → 0.8 2. Two mutation functions: 1. Mutation rate → 0.3 2. Mutation rate → 0.8 First iteration: 1. Population → 100 2. Budget → 10,000 3. Delta → 0.01 4. Crossover → crossover 1. Rate → 0.3 5. Mutation → mutation 1. Rate → 0.3 6. Best result: [[0.029757806673384436, 16, 10, 339], 1.6809029999999998] 7. Worst result: [[0.0903877154523101, 24, 3, 304], 0.0] 8. Mean: 0.3572531 9. Standard deviation: 0.409838862518003 Fixed data that will be used for all the iterations with 100 chromosomes
###Code
population_len = 100
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 3
np_array = excels[5].values # use one of the loaded stock price series (here index 5)
chromossomes = create_population(population_len)
###Output
_____no_output_____
###Markdown
Applying the evolutionary strategy
###Code
# Variable parameters
mutation_rate = 0.3
crossover_rate = 0.3
begin_time = time.time()
best_chromossomes_temp = evolutionary_strategy2(np_array, chromossomes, budget, crossover, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print("Time to converge: ", round(time.time() - begin_time, 2), "seconds")
###Output
_____no_output_____
###Markdown
Results of the First Iteration
###Code
tempo_1 = 1271.77
best_chromossomes_1 = [[[0.050207624767356906, 21, 23, 377], 0.07772800000000006], [[0.01064506817383616, 5, 18, 417], 0.18118100000000012], [[0.048995799481743464, 19, 10, 371], 0.16437599999999983], [[0.0903877154523101, 24, 3, 304], 0.0], [[0.05019186610051978, 10, 10, 367], 0.05908700000000008], [[0.005323667662522003, 5, 7, 344], 0.2504859999999993], [[0.08494884463078742, 21, 24, 225], 0.02642399999999998], [[0.048995799481743464, 17, 12, 304], 0.2071], [[0.028887627004365535, 14, 13, 315], 0.2385540000000001], [[0.00122072665795554, 24, 23, 328], 0.23187000000000044], [[0.00122072665795554, 24, 29, 315], 0.21566000000000005], [[0.022477117010065675, 28, 10, 305], 0.11191500000000014], [[0.022242045091904496, 11, 1, 446], 1.4520790000000008], [[0.0903877154523101, 7, 13, 315], 0.0], [[0.05012342514910495, 19, 27, 96], 0.0922539999999999], [[0.043306600143784874, 6, 4, 305], 1.6690630000000004], [[0.05012342514910495, 21, 5, 65], 0.18339899999999998], [[0.022242045091904496, 19, 7, 423], 0.14597299999999977], [[0.046521471977928, 5, 28, 473], 0.08834399999999987], [[0.043306600143784874, 6, 22, 170], 0.14569299999999966], [[0.043306600143784874, 30, 27, 115], 0.14217199999999974], [[0.050207624767356906, 21, 10, 339], 1.2155529999999999], [[0.03082086229808331, 28, 10, 437], 0.35915699999999995], [[0.02964661289526336, 14, 24, 60], 0.2636600000000002], [[0.0062391463051995055, 19, 10, 328], 0.45291399999999976], [[0.024648981510943135, 28, 9, 163], 0.1368630000000003], [[0.022242045091904496, 11, 1, 58], 1.4520790000000008], [[0.00122072665795554, 5, 18, 417], 0.2634499999999993], [[0.054305497987971185, 13, 8, 79], 1.203171], [[0.00204442420619777, 7, 4, 225], 0.25887300000000013], [[0.02964661289526336, 30, 29, 58], 0.21698000000000012], [[0.07861351589880473, 7, 1, 281], 0.7105040000000005], [[0.048995799481743464, 17, 24, 163], 0.2071], [[0.008065151074294464, 24, 3, 304], 0.24347999999999992], [[0.05012342514910495, 30, 21, 115], 0.29411000000000004], [[0.022477117010065675, 30, 21, 344], 0.166541], [[0.043306600143784874, 30, 27, 115], 0.14217199999999974], [[0.03082086229808331, 30, 22, 170], 0.24147000000000007], [[0.028887627004365535, 30, 28, 179], 0.22841000000000003], [[0.00122072665795554, 19, 10, 371], 1.4712399999999994], [[0.023796352473019446, 23, 20, 66], 0.22564000000000034], [[0.05012342514910495, 19, 27, 304], 0.0922539999999999], [[0.027188793659815927, 11, 1, 281], 1.1611730000000002], [[0.048995799481743464, 17, 6, 281], 0.24391000000000004], [[0.005713760919368172, 21, 2, 193], 0.17967999999999956], [[0.028887627004365535, 5, 8, 325], 0.1315439999999995], [[0.022242045091904496, 11, 28, 305], 0.19982000000000008], [[0.027188793659815927, 30, 24, 60], 0.21271000000000004], [[0.00204442420619777, 14, 24, 473], 0.09856899999999987], [[0.05019186610051978, 10, 17, 308], 0.2071], [[0.029757806673384436, 16, 10, 339], 1.6809029999999998], [[0.0062391463051995055, 19, 10, 325], 0.45291399999999976], [[0.06344805404741997, 19, 8, 86], 0.030496000000000096], [[0.062395927307535355, 25, 13, 315], 0.030496000000000096], [[0.01064506817383616, 1, 6, 281], 0.10055900000000019], [[0.0062391463051995055, 19, 28, 328], 0.2868669999999998], [[0.022242045091904496, 11, 23, 420], 0.1631970000000003], [[0.024648981510943135, 28, 29, 248], 0.29595000000000016], [[0.008065151074294464, 3, 24, 108], 0.23379000000000014], [[0.07861351589880473, 7, 8, 325], 0.030496000000000096], [[0.03082086229808331, 24, 14, 219], 0.2518100000000002], [[0.053757291541581605, 25, 13, 351], 0.139975], 
[[0.024648981510943135, 28, 24, 60], 0.11826300000000028], [[0.00122072665795554, 24, 26, 473], 0.26011000000000006], [[0.0062391463051995055, 19, 10, 170], 0.45291399999999976], [[0.005713760919368172, 21, 22, 115], 0.1881], [[0.00122072665795554, 24, 29, 377], 0.21566000000000005], [[0.07949754760247976, 22, 4, 301], 0.9030880000000001], [[0.008065151074294464, 25, 23, 377], 0.2597899999999998], [[0.00122072665795554, 5, 18, 417], 0.2634499999999993], [[0.005323667662522003, 21, 10, 328], 0.22609000000000015], [[0.02964661289526336, 30, 8, 86], 0.26787000000000005], [[0.02964661289526336, 30, 22, 328], 0.2528], [[0.023169600399788794, 4, 6, 324], 0.10477299999999977], [[0.0741645896629186, 24, 6, 65], 0.9030880000000001], [[0.03257940286587034, 20, 27, 220], 0.23043999999999995], [[0.053757291541581605, 25, 13, 351], 0.139975], [[0.043306600143784874, 6, 18, 417], 0.1853729999999998], [[0.028887627004365535, 19, 10, 371], 0.11828999999999978], [[0.005713760919368172, 21, 22, 225], 0.1881], [[0.050207624767356906, 3, 16, 96], 0.30051000000000005], [[0.03082086229808331, 28, 6, 281], 0.23628799999999975], [[0.0062391463051995055, 19, 28, 328], 0.2868669999999998], [[0.02070272658270548, 30, 17, 324], 0.23488999999999996], [[0.04361287396159986, 10, 15, 281], 1.1937330000000002], [[0.012449230192112639, 25, 11, 332], 0.22644999999999982], [[0.048995799481743464, 5, 18, 417], 0.20868000000000012], [[0.048995799481743464, 19, 10, 371], 0.16437599999999983], [[0.08494884463078742, 21, 8, 325], 0.899016], [[0.024648981510943135, 18, 20, 2], 0.17263999999999996], [[0.022477117010065675, 5, 7, 304], 0.02316899999999987], [[0.09162045420112536, 1, 6, 339], 0.0], [[0.024648981510943135, 28, 8, 339], 0.23413600000000023], [[0.008065151074294464, 3, 16, 437], 1.1584339999999993], [[0.027188793659815927, 1, 6, 281], 1.2226369999999995], [[0.02964661289526336, 30, 22, 225], 0.2528], [[0.022242045091904496, 11, 24, 60], 0.325143], [[0.00122072665795554, 16, 10, 86], 0.13244599999999918], [[0.022242045091904496, 11, 1, 446], 1.4520790000000008], [[0.065870974525096, 5, 16, 81], 0.061904000000000084]]
best_chromossomes_1 = sorted(best_chromossomes_1, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_1 = best_chromossomes_1[-1]
best_1
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_1 = best_chromossomes_1[0]
worst_1
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_1 = 0
for x in best_chromossomes_1: mean_1 = mean_1 + x[1]
mean_1 = mean_1/len(best_chromossomes_1)
mean_1
std_1 = 0
for x in best_chromossomes_1: std_1 = std_1 + (x[1] - mean_1) ** 2
std_1 = (std_1 / (len(best_chromossomes_1) - 1)) ** (1/2)
std_1
###Output
_____no_output_____
###Markdown
Second iteration: 1. Population → 100 2. Budget → 10,000 3. Delta → 0.001 4. Crossover → crossover 1. Rate → 0.8 5. Mutation → mutation 1. Rate → 0.3
###Code
# Variable parameters
mutation_rate = 0.3
crossover_rate = 0.8
begin_time = time.time()
best_chromossomes_2 = evolutionary_strategy2(np_array, chromossomes, budget, crossover, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print("Time to converge: ", round(time.time() - begin_time, 2), "seconds")
###Output
_____no_output_____
###Markdown
Results of the second iteration
###Code
tempo_2 = 6348.25
best_chromossomes_2 = [[[0.0013360847408618428, 6, 23, 420], 0.16943399999999983], [[0.09162045420112536, 23, 5, 276], 0.0], [[0.008065151074294464, 3, 16, 276], 1.1584339999999993], [[0.00815051608972378, 5, 2, 198], 0.2728669999999991], [[0.06531599746553525, 14, 18, 60], 0.030496000000000096], [[0.07979558638426126, 8, 26, 72], 0.030496000000000096], [[0.001796107327290031, 28, 21, 24], 0.15089899999999978], [[0.019441452173399265, 2, 2, 237], 0.9905709999999999], [[0.023169600399788794, 4, 6, 243], 0.10477299999999977], [[0.0741645896629186, 24, 6, 478], 0.9030880000000001], [[0.031822289173938055, 20, 24, 358], 0.2003900000000005], [[0.016183825892593453, 18, 4, 12], 0.2027400000000007], [[0.023169600399788794, 4, 22, 372], 0.1802029999999999], [[0.06827775770281684, 25, 15, 4], 0.03673600000000006], [[0.05586159841995257, 5, 13, 81], 0.20279000000000014], [[0.05019186610051978, 10, 17, 59], 0.2071], [[0.08146793546086231, 24, 6, 281], 0.9030880000000001], [[0.030745428298706634, 19, 8, 150], 0.13549099999999997], [[0.031822289173938055, 29, 6, 145], 1.2590599999999998], [[0.03788981821074894, 4, 20, 10], 0.1698289999999999], [[0.07979558638426126, 13, 26, 276], 0.030496000000000096], [[0.065870974525096, 14, 14, 281], 1.223918], [[0.02070272658270548, 24, 20, 276], 0.17374099999999998], [[0.050207624767356906, 24, 17, 410], 0.20868000000000012], [[0.06809507047504157, 4, 17, 410], 0.030496000000000096], [[0.050207624767356906, 24, 12, 177], 0.14217199999999974], [[0.028887627004365535, 9, 7, 26], 0.11487400000000016], [[0.02070272658270548, 26, 20, 205], 0.2112000000000002], [[0.07979558638426126, 13, 26, 303], 0.030496000000000096], [[0.015637631285750787, 21, 8, 78], 0.17098700000000008], [[0.05083793610228253, 26, 16, 328], 1.46065], [[0.06160055473698424, 18, 18, 181], 0.14939899999999998], [[0.01626553660531728, 14, 9, 292], 0.2636], [[0.023169600399788794, 4, 29, 35], 0.20453999999999997], [[0.024648981510943135, 28, 16, 96], 0.1417050000000001], [[0.014095146692794292, 12, 6, 243], 0.20612100000000083], [[0.05329949022617602, 4, 6, 28], 0.2991], [[0.019441452173399265, 2, 2, 41], 0.9905709999999999], [[0.08286426146169484, 12, 6, 121], 0.9030880000000001], [[0.01281962509420627, 30, 17, 81], 0.12539200000000036], [[0.03400350420973082, 16, 5, 328], 1.4780790000000001], [[0.030068640793726856, 9, 20, 287], 0.36167699999999986], [[0.038388455522326705, 24, 7, 407], 0.2852530000000001], [[0.001796107327290031, 28, 9, 96], 0.2621459999999997], [[0.05329949022617602, 25, 10, 243], 0.2602800000000001], [[0.008065151074294464, 3, 16, 243], 1.1584339999999993], [[0.03474486192495021, 27, 28, 47], 0.1500199999999999], [[0.020952662264810143, 13, 23, 377], 0.2661200000000001], [[0.022477117010065675, 5, 7, 59], 0.02316899999999987], [[0.08275908007133093, 9, 7, 198], 0.8730160000000003], [[0.02037321174764334, 5, 8, 273], 0.3009190000000002], [[0.05329949022617602, 21, 4, 169], 0.20976399999999995], [[0.0741645896629186, 24, 6, 83], 0.9030880000000001], [[0.030745428298706634, 13, 6, 333], 0.15399000000000032], [[0.054305497987971185, 8, 16, 289], 0.23831000000000022], [[0.01064506817383616, 10, 17, 410], 0.149601], [[0.00815051608972378, 18, 7, 407], 0.36143200000000014], [[0.022242045091904496, 11, 27, 437], 0.189492], [[0.00204442420619777, 25, 23, 377], 0.2395199999999997], [[0.026068358327222443, 26, 29, 243], 0.22057000000000007], [[0.05019186610051978, 19, 7, 217], 0.1705629999999999], [[0.023796352473019446, 20, 9, 495], 0.15197999999999992], [[0.059482626966795474, 3, 16, 96], 
0.030496000000000096], [[0.022815636137678053, 26, 26, 121], 0.20188999999999996], [[0.07515897626005882, 22, 6, 59], 0.030496000000000096], [[0.008065151074294464, 3, 16, 333], 1.1584339999999993], [[0.030745428298706634, 19, 8, 150], 0.13549099999999997], [[0.0699143850571265, 16, 5, 276], 0.989664], [[0.05012342514910495, 19, 27, 281], 0.0922539999999999], [[0.023169600399788794, 4, 6, 199], 0.10477299999999977], [[0.06901941114548577, 24, 6, 133], 1.019936], [[0.07899991612189242, 6, 6, 281], 0.9030880000000001], [[0.08275908007133093, 9, 29, 58], 0.8730160000000003], [[0.050207624767356906, 13, 6, 333], 0.08276400000000013], [[0.0013360847408618428, 3, 8, 81], 0.18928099999999923], [[0.07509057650478154, 29, 24, 439], 0.9042240000000001], [[0.038388455522326705, 24, 23, 72], 0.14232999999999993], [[0.02037321174764334, 5, 8, 273], 0.3009190000000002], [[0.03474486192495021, 4, 6, 439], 0.18102599999999985], [[0.05572969330746658, 27, 28, 47], 0.20279000000000014], [[0.03400350420973082, 28, 20, 108], 0.15739500000000006], [[0.020952662264810143, 13, 26, 121], 0.17241500000000015], [[0.03082086229808331, 28, 10, 420], 0.35915699999999995], [[0.001796107327290031, 28, 17, 410], 0.17158700000000007], [[0.06901941114548577, 5, 6, 28], 0.061904000000000084], [[0.020952662264810143, 11, 11, 41], 0.1783949999999999], [[0.008065151074294464, 3, 4, 12], 0.28208800000000084], [[0.08275908007133093, 9, 7, 303], 0.8730160000000003], [[0.001796107327290031, 28, 9, 121], 0.2621459999999997], [[0.045634181438061666, 20, 23, 377], 0.03785000000000018], [[0.05876211709403922, 16, 9, 498], 0.989664], [[0.04618827410942147, 5, 23, 94], 0.2071], [[0.05162910642877635, 24, 5, 346], 0.2970299999999999], [[0.02070272658270548, 21, 4, 234], 0.09856399999999994], [[0.05019186610051978, 10, 17, 237], 0.2071], [[0.08275908007133093, 9, 27, 489], 0.8730160000000003], [[0.028887627004365535, 5, 24, 163], 0.17948599999999987], [[0.04771945683427357, 17, 5, 93], 1.199282], [[0.02070272658270548, 17, 5, 228], 0.15089199999999983], [[0.00122072665795554, 11, 1, 281], 0.16817699999999933]]
best_chromossomes_2 = sorted(best_chromossomes_2, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_2 = best_chromossomes_2[-1]
best_2
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_2 = best_chromossomes_2[0]
worst_2
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_2 = 0
for x in best_chromossomes_2: mean_2 = mean_2 + x[1]
mean_2 = mean_2/len(best_chromossomes_2)
mean_2
std_2 = 0
for x in best_chromossomes_2: std_2 = std_2 + (x[1] - mean_2) ** 2
std_2 = (std_2 / (len(best_chromossomes_2) - 1)) ** (1/2)
std_2
###Output
_____no_output_____
###Markdown
Third iteration: 1. Population → 100 2. Budget → 10,000 3. Delta → 0.001 4. Crossover → crossover 1. Rate → 0.3 5. Mutation → mutation 1. Rate → 0.8
###Code
# Variable parameters
mutation_rate = 0.8
crossover_rate = 0.3
begin_time = time.time()
best_chromossomes_3 = evolutionary_strategy2(np_array, chromossomes, budget, crossover, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print("Time to converge: ", round(time.time() - begin_time, 2), "seconds")
###Output
_____no_output_____
###Markdown
Results of the third iteration
###Code
tempo_3 = 1398.42
best_chromossomes_3 = [[[0.030745428298706634, 21, 8, 460], 0.2693119999999997], [[0.0062391463051995055, 19, 6, 262], 0.23909899999999998], [[0.03400350420973082, 16, 3, 122], 0.22446299999999975], [[0.07242754980453811, 6, 13, 209], 0.9042240000000001], [[0.06827775770281684, 6, 25, 74], 0.030496000000000096], [[0.07114217564815761, 14, 27, 64], 0.9042240000000001], [[0.03532226504719589, 3, 14, 439], 1.2136039999999995], [[0.02862435403001, 29, 22, 234], 0.16928100000000013], [[0.05608963794013028, 9, 24, 234], 0.27042000000000027], [[0.03400350420973082, 16, 18, 95], 0.28669199999999984], [[0.06827775770281684, 6, 2, 455], 0.9839979999999999], [[0.02862435403001, 29, 22, 234], 0.16928100000000013], [[0.062395927307535355, 8, 22, 12], 0.02000000000000018], [[0.00122072665795554, 24, 30, 436], 0.21396000000000004], [[0.0062391463051995055, 19, 11, 41], 0.4006660000000002], [[0.00122072665795554, 24, 18, 417], 0.2457], [[0.025357865964253386, 10, 15, 24], 0.173316], [[0.06827775770281684, 6, 3, 213], 0.9472480000000003], [[0.062395927307535355, 21, 22, 471], 0.030496000000000096], [[0.00122072665795554, 24, 5, 356], 0.2588399999999994], [[0.02862435403001, 29, 15, 24], 0.22847999999999993], [[0.07794358309673811, 6, 16, 187], 0.9030880000000001], [[0.0444053133055844, 9, 23, 53], 0.2071], [[0.02070272658270548, 20, 24, 74], 0.22904000000000013], [[0.06715230274391572, 21, 3, 233], 1.223918], [[0.00122072665795554, 24, 23, 377], 0.23187000000000044], [[0.07623110819109605, 12, 1, 377], 0.9042240000000001], [[0.05608963794013028, 22, 5, 438], 0.14490100000000003], [[0.061396784883645805, 27, 20, 175], 0.147375], [[0.001796107327290031, 28, 1, 228], 0.1544559999999994], [[0.02862435403001, 29, 9, 371], 0.20817000000000008], [[0.04771945683427357, 3, 14, 243], 0.29157999999999995], [[0.024322809371076545, 17, 3, 439], 0.2527610000000004], [[0.07979558638426126, 13, 23, 474], 0.030496000000000096], [[0.02070272658270548, 20, 6, 121], 1.4056350000000002], [[0.031822289173938055, 21, 3, 437], 0.23497500000000035], [[0.03532226504719589, 3, 22, 12], 0.18615999999999985], [[0.031822289173938055, 29, 21, 438], 0.051923000000000503], [[0.04771945683427357, 3, 14, 243], 0.29157999999999995], [[0.05019186610051978, 22, 5, 201], 1.199282], [[0.005713760919368172, 29, 14, 10], 0.2352199999999999], [[0.02862435403001, 29, 23, 381], 0.15159999999999982], [[0.06708530725725319, 6, 10, 489], 0.1448539999999999], [[0.0062391463051995055, 22, 7, 280], 1.218266], [[0.06827775770281684, 6, 3, 422], 0.9472480000000003], [[0.02862435403001, 29, 2, 280], 1.4001850000000002], [[0.073048577850296, 29, 10, 175], 0.9042240000000001], [[0.025651115035084195, 1, 22, 211], 0.18765299999999988], [[0.04771945683427357, 9, 24, 353], 0.2071], [[0.06344805404741997, 6, 23, 12], 0.030496000000000096], [[0.02070272658270548, 17, 5, 12], 0.15089199999999983], [[0.00122072665795554, 24, 23, 377], 0.23187000000000044], [[0.06715230274391572, 21, 10, 183], 0.030496000000000096], [[0.06344805404741997, 11, 17, 88], 0.030496000000000096], [[0.054305497987971185, 8, 16, 256], 0.23831000000000022], [[0.005713760919368172, 29, 28, 373], 0.23103999999999977], [[0.005713760919368172, 22, 11, 196], 0.1844549999999992], [[0.030745428298706634, 21, 22, 74], 0.2146600000000002], [[0.03532226504719589, 15, 15, 438], 0.28558999999999996], [[0.062395927307535355, 9, 6, 439], 0.08189500000000008], [[0.040508792650225944, 2, 20, 205], 1.4457200000000001], [[0.06160055473698424, 21, 14, 10], 0.030496000000000096], [[0.062395927307535355, 25, 6, 
489], 1.2264389999999998], [[0.05608963794013028, 22, 5, 438], 0.14490100000000003], [[0.02624877735794562, 25, 5, 410], 0.07190400000000009], [[0.062395927307535355, 19, 6, 175], 1.2264389999999998], [[0.05548198426233548, 17, 22, 12], 0.20279000000000014], [[0.00122072665795554, 24, 2, 121], 0.1476679999999993], [[0.04328905243080558, 19, 5, 175], 0.2699759999999998], [[0.03400350420973082, 5, 11, 414], 0.2501280000000001], [[0.03400350420973082, 8, 23, 53], 0.30494000000000016], [[0.05032460442682943, 6, 23, 74], 0.2071], [[0.00122072665795554, 24, 5, 130], 0.2588399999999994], [[0.054305497987971185, 2, 11, 41], 1.4760900000000001], [[0.061396784883645805, 27, 10, 473], 0.147375], [[0.09159841614509343, 18, 25, 435], 0.0], [[0.02070272658270548, 21, 19, 47], 0.14256900000000022], [[0.05608963794013028, 9, 11, 253], 1.40175], [[0.0062391463051995055, 19, 8, 74], 0.36951699999999926], [[0.001796107327290031, 28, 7, 205], 0.21578200000000014], [[0.08685146377448573, 29, 20, 393], 0.899016], [[0.034906807699308824, 21, 15, 348], 0.22135], [[0.054305497987971185, 8, 3, 59], 1.237945], [[0.054305497987971185, 24, 26, 438], 0.08794400000000005], [[0.025651115035084195, 1, 22, 211], 0.18765299999999988], [[0.030745428298706634, 21, 22, 74], 0.2146600000000002], [[0.05608963794013028, 9, 14, 213], 0.11454400000000005], [[0.03532226504719589, 3, 4, 160], 0.1907159999999998], [[0.09340219861574826, 24, 20, 122], 0.0], [[0.036680889373811136, 28, 28, 200], 0.22135], [[0.021962347094345126, 14, 13, 178], 0.29788700000000007], [[0.05228793720648881, 24, 19, 454], 0.20868000000000012], [[0.03400350420973082, 16, 18, 95], 0.28669199999999984], [[0.025651115035084195, 1, 22, 211], 0.18765299999999988], [[0.02070272658270548, 17, 14, 176], 0.22550999999999968], [[0.053343708839903255, 27, 12, 160], 0.09084399999999987], [[0.0062391463051995055, 19, 11, 41], 0.4006660000000002], [[0.05660047212762885, 13, 1, 479], 0.7601190000000005], [[0.03977804273841444, 3, 17, 176], 0.1647050000000001], [[0.02089770604173856, 3, 13, 350], 0.13081000000000004]]
best_chromossomes_3 = sorted(best_chromossomes_3, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_3 = best_chromossomes_3[-1]
best_3
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_3 = best_chromossomes_3[0]
worst_3
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_3 = 0
for x in best_chromossomes_3: mean_3 = mean_3 + x[1]
mean_3 = mean_3/len(best_chromossomes_3)
mean_3
std_3 = 0
for x in best_chromossomes_3: std_3 = std_3 + (x[1] - mean_3) ** 2
std_3 = (std_3 / (len(best_chromossomes_3) - 1)) ** (1/2)
std_3
###Output
_____no_output_____
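###Markdown
 As a side note, the same mean and sample standard deviation can be obtained directly with numpy. The cell below is an illustrative sketch only (it assumes numpy is already imported as `np`, as it is used later in this notebook, and introduces the helper name `fitness_3`):
###Code
# Illustrative only: summarize the fitness values (second element of each entry) with numpy
fitness_3 = np.array([c[1] for c in best_chromossomes_3])
print(fitness_3.mean(), fitness_3.std(ddof=1)) # ddof=1 matches the sample std computed above
###Output
_____no_output_____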
###Markdown
Fourth iteration:
1. Population → 100
2. Budget → 10,000
3. Delta → 0.001
4. Crossover → crossover2 (rate → 0.3)
5. Mutation → mutation (rate → 0.3)
###Code
# Parameters varied in this iteration
mutation_rate = 0.3
crossover_rate = 0.3
begin_time = time.time()
best_chromossomes_4 = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
###Output
_____no_output_____
###Markdown
Result of the fourth iteration
###Code
tempo_4 = 864.27
best_chromossomes_4 = [[[0.05608963794013028, 9, 7, 406], 0.042571000000000095], [[0.06230404235587087, 29, 1, 72], 1.233583], [[0.05876211709403922, 8, 1, 96], 0.9782880000000005], [[0.033146552557375385, 11, 1, 229], 1.189699], [[0.035879770384169604, 4, 17, 461], 0.2601219999999999], [[0.023169600399788794, 1, 11, 233], 0.17875699999999978], [[0.04740901059544287, 12, 17, 349], 0.2071], [[0.03214555237686838, 18, 13, 198], 1.2617090000000004], [[0.06866961608446967, 17, 1, 413], 1.223918], [[0.00122072665795554, 24, 1, 280], 0.26350000000000057], [[0.03148736342138864, 7, 23, 408], 0.1081130000000001], [[0.050207624767356906, 25, 3, 356], 0.03980300000000007], [[0.005713760919368172, 2, 4, 360], 0.1297659999999998], [[0.033146552557375385, 20, 16, 426], 0.2985800000000001], [[0.033146552557375385, 4, 8, 229], 0.2491779999999995], [[0.06678827163793041, 25, 11, 495], 0.030496000000000096], [[0.07004185447433911, 19, 26, 349], 0.9042240000000001], [[0.02070272658270548, 15, 5, 160], 0.9614830000000005], [[0.015637631285750787, 4, 1, 237], 0.9745040000000008], [[0.023169600399788794, 4, 20, 233], 0.19071799999999967], [[0.043306600143784874, 6, 20, 187], 1.2550580000000002], [[0.09688267090539789, 11, 18, 34], 0.0], [[0.027465233103897445, 21, 2, 181], 1.2244619999999995], [[0.03570389334870818, 30, 13, 313], 0.23869200000000002], [[0.038388455522326705, 21, 23, 346], 0.1153260000000002], [[0.03570389334870818, 9, 23, 313], 0.2579839999999998], [[0.06230404235587087, 3, 23, 72], 0.030496000000000096], [[0.023169600399788794, 14, 1, 233], 1.2272059999999998], [[0.005713760919368172, 29, 16, 372], 0.32249900000000015], [[0.00122072665795554, 18, 11, 280], 0.24427999999999975], [[0.04159704880617712, 20, 9, 66], 0.09031899999999969], [[0.09791613129393337, 2, 17, 41], 0.0], [[0.03401284446555551, 30, 17, 439], 0.22135], [[0.00122072665795554, 24, 26, 280], 0.26011000000000006], [[0.03570389334870818, 21, 16, 313], 0.3989749999999998], [[0.043306600143784874, 6, 23, 187], 0.13538899999999995], [[0.038388455522326705, 20, 2, 346], 0.1721290000000001], [[0.02862435403001, 29, 1, 489], 1.4461980000000003], [[0.02070272658270548, 17, 1, 160], 1.2284230000000003], [[0.021269476003510472, 20, 29, 408], 0.1509760000000004], [[0.027465233103897445, 21, 1, 181], 1.259168], [[0.05876211709403922, 8, 1, 96], 0.9782880000000005], [[0.026641342252550396, 28, 11, 413], 0.1651619999999999], [[0.03570389334870818, 30, 11, 313], 0.16270300000000007], [[0.06678827163793041, 8, 1, 495], 1.007049], [[0.023169600399788794, 4, 20, 462], 0.19071799999999967], [[0.03862314733220682, 8, 8, 67], 0.2338789999999999], [[0.08407457690022326, 18, 26, 375], 0.9001520000000001], [[0.03862314733220682, 4, 26, 67], 0.2071], [[0.04740901059544287, 29, 5, 349], 0.2970299999999999], [[0.05608963794013028, 9, 18, 406], 0.20279000000000014], [[0.033146552557375385, 23, 26, 229], 0.14931199999999972], [[0.03862314733220682, 4, 26, 67], 0.2071], [[0.05608963794013028, 4, 14, 406], 0.12449899999999998], [[0.033146552557375385, 19, 27, 229], 0.30398400000000003], [[0.03862314733220682, 24, 3, 67], 0.3601420000000006], [[0.031066368682962052, 23, 18, 482], 0.2732499999999998], [[0.03862314733220682, 4, 14, 286], 0.2071], [[0.035879770384169604, 11, 9, 461], 0.23043999999999995], [[0.03862314733220682, 6, 23, 67], 0.09703999999999996], [[0.015637631285750787, 4, 17, 237], 0.13992999999999992], [[0.033146552557375385, 8, 17, 229], 0.14836799999999983], [[0.03570389334870818, 30, 6, 313], 0.2553499999999998], [[0.0065567985254041845, 15, 
7, 78], 0.09647900000000009], [[0.023169600399788794, 24, 1, 233], 0.17279199999999983], [[0.038388455522326705, 4, 13, 346], 0.12421600000000017], [[0.038388455522326705, 21, 28, 346], 0.15230100000000002], [[0.014095146692794292, 12, 3, 267], 0.1831360000000006], [[0.015637631285750787, 3, 26, 237], 0.08630599999999977], [[0.033146552557375385, 23, 9, 482], 0.13776899999999986], [[0.04159704880617712, 18, 11, 94], 0.20182800000000006], [[0.04740901059544287, 25, 5, 349], 1.2145410000000003], [[0.0065567985254041845, 20, 9, 78], 1.2766359999999994], [[0.03401284446555551, 3, 8, 228], 0.21708899999999995], [[0.04159704880617712, 6, 3, 94], 0.9669339999999996], [[0.03862314733220682, 18, 11, 213], 0.19009699999999993], [[0.0897301428830124, 25, 12, 445], 0.0], [[0.005713760919368172, 25, 23, 339], 0.2395199999999997], [[0.033146552557375385, 30, 21, 229], 0.2553499999999998], [[0.05876211709403922, 8, 2, 96], 0.947629], [[0.05876211709403922, 8, 1, 96], 0.9782880000000005], [[0.033146552557375385, 8, 17, 229], 0.14836799999999983], [[0.0013360847408618428, 18, 9, 209], 0.2406199999999999], [[0.03401284446555551, 3, 1, 228], 1.2518539999999998], [[0.04159704880617712, 4, 1, 94], 0.7669010000000002], [[0.03365333646925563, 12, 14, 372], 0.2718969999999998], [[0.06678827163793041, 4, 1, 495], 1.007049], [[0.0897301428830124, 8, 2, 445], 0.0], [[0.06230404235587087, 29, 1, 340], 1.233583], [[0.00122072665795554, 24, 11, 280], 0.22517299999999996], [[0.005713760919368172, 29, 6, 200], 0.21634299999999984], [[0.04159704880617712, 6, 3, 94], 0.9669339999999996], [[0.043306600143784874, 6, 13, 187], 0.2149300000000003], [[0.06928079038737694, 18, 7, 406], 0.030496000000000096], [[0.03570389334870818, 26, 15, 313], 1.4780790000000001], [[0.0426122550918255, 24, 3, 342], 0.09529500000000007], [[0.038388455522326705, 5, 2, 346], 0.9400940000000002], [[0.023169600399788794, 4, 20, 462], 0.19071799999999967], [[0.03862314733220682, 1, 10, 67], 1.206466], [[0.022477117010065675, 30, 23, 163], 0.1154229999999996]]
best_chromossomes_4 = sorted(best_chromossomes_4, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_4 = best_chromossomes_4[-1]
best_4
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_4 = best_chromossomes_4[0]
worst_4
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_4 = 0
for x in best_chromossomes_4: mean_4 = mean_4 + x[1]
mean_4 = mean_4/len(best_chromossomes_4)
mean_4
std_4 = 0
for x in best_chromossomes_4: std_4 = std_4 + (x[1] - mean_4) ** 2
std_4 = (std_4 / (len(best_chromossomes_4) - 1)) ** (1/2)
std_4
###Output
_____no_output_____
###Markdown
Fifth iteration:
1. Population → 100
2. Budget → 10,000
3. Delta → 0.001
4. Crossover → crossover2 (rate → 0.8)
5. Mutation → mutation (rate → 0.3)
###Code
# Parameters varied in this iteration
mutation_rate = 0.3
crossover_rate = 0.8
begin_time = time.time()
best_chromossomes_5 = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
###Output
_____no_output_____
###Markdown
Result of the fifth iteration
###Code
tempo_5 = 4823.03
best_chromossomes_5 = [[[0.02070272658270548, 17, 17, 160], 0.2835500000000002], [[0.05329949022617602, 25, 15, 375], 0.1350789999999999], [[0.03082086229808331, 17, 10, 292], 0.05129899999999998], [[0.03401284446555551, 14, 13, 228], 0.22127400000000017], [[0.012126557676406044, 28, 22, 53], 0.13020599999999977], [[0.06715230274391572, 3, 10, 224], 0.030496000000000096], [[0.022477117010065675, 1, 27, 131], 0.12393099999999976], [[0.07794358309673811, 14, 5, 133], 0.9042240000000001], [[0.01731263056609641, 25, 8, 349], 0.1213959999999999], [[0.0013360847408618428, 22, 9, 209], 0.2243599999999995], [[0.04740901059544287, 20, 26, 109], 0.11594300000000003], [[0.07911512819311342, 30, 3, 319], 0.9030880000000001], [[0.03401284446555551, 3, 23, 228], 0.2489499999999998], [[0.0013360847408618428, 20, 20, 209], 0.25627999999999973], [[0.08643051683088326, 5, 18, 198], 0.899016], [[0.02862435403001, 29, 8, 489], 0.11382699999999986], [[0.06795736146008101, 30, 6, 333], 1.223918], [[0.040564192055882284, 24, 22, 222], 0.16444899999999998], [[0.03532226504719589, 25, 26, 389], 0.23599000000000014], [[0.03788981821074894, 4, 8, 13], 0.24897700000000003], [[0.017156471419258447, 3, 25, 96], 0.13548699999999989], [[0.04159704880617712, 17, 8, 148], 0.26476799999999984], [[0.04159704880617712, 6, 5, 148], 0.2650810000000001], [[0.07945830486792235, 8, 27, 231], 0.030496000000000096], [[0.04328905243080558, 5, 13, 202], 0.2382380000000001], [[0.08104514433464621, 25, 15, 364], 0.030496000000000096], [[0.08240171173531165, 14, 26, 469], 0.030496000000000096], [[0.05012342514910495, 19, 11, 387], 0.1404119999999999], [[0.005713760919368172, 30, 26, 372], 0.2653599999999999], [[0.06353016910599807, 5, 11, 179], 0.14939899999999998], [[0.08434940499858055, 2, 8, 41], 0.02642399999999998], [[0.02037321174764334, 1, 15, 400], 1.4467800000000004], [[0.017156471419258447, 7, 12, 96], 0.06297499999999982], [[0.09445583379409503, 13, 5, 333], 0.0], [[0.08643051683088326, 27, 1, 198], 0.9001520000000001], [[0.043306600143784874, 4, 4, 295], 0.0796449999999997], [[0.09688267090539789, 19, 13, 408], 0.0], [[0.06795736146008101, 30, 26, 296], 0.030496000000000096], [[0.04771945683427357, 25, 24, 283], 0.22172000000000008], [[0.05638765758795582, 20, 5, 59], 1.249479], [[0.01064506817383616, 13, 15, 315], 0.2974200000000001], [[0.04553439663748283, 27, 24, 238], 0.2071], [[0.038388455522326705, 21, 16, 346], 0.28015200000000023], [[0.023169600399788794, 4, 29, 233], 0.20453999999999997], [[0.02070272658270548, 17, 4, 145], 0.1418510000000004], [[0.005504390607776133, 20, 24, 368], 0.20871999999999988], [[0.022242045091904496, 11, 3, 482], 1.1767120000000002], [[0.06433993415159887, 22, 22, 338], 0.989664], [[0.04328905243080558, 1, 18, 436], 0.2785699999999999], [[0.017156471419258447, 3, 17, 96], 0.22932000000000025], [[0.024648981510943135, 3, 3, 157], 0.18239900000000034], [[0.06757860599832702, 20, 9, 364], 0.030496000000000096], [[0.023169600399788794, 4, 27, 233], 0.21837999999999994], [[0.02744630312805818, 13, 6, 220], 1.0348359999999996], [[0.005504390607776133, 29, 17, 428], 0.10422299999999995], [[0.01064506817383616, 29, 23, 315], 0.1572029999999997], [[0.06715230274391572, 21, 1, 140], 1.223918], [[0.022477117010065675, 19, 27, 131], 0.18631299999999992], [[0.01064506817383616, 3, 25, 315], 0.28769000000000033], [[0.08791388683228833, 18, 13, 276], 0.9001520000000001], [[0.020931484439531467, 4, 1, 31], 1.204287000000001], [[0.03400350420973082, 29, 5, 326], 0.11466399999999957], [[0.03570389334870818, 
30, 6, 328], 0.2553499999999998], [[0.05329949022617602, 25, 26, 375], 0.09084399999999987], [[0.054305497987971185, 29, 25, 414], 0.20279000000000014], [[0.09986434523205347, 5, 1, 178], 0.0], [[0.00122072665795554, 24, 23, 425], 0.23187000000000044], [[0.08643051683088326, 5, 18, 198], 0.899016], [[0.09340219861574826, 25, 27, 378], 0.0], [[0.023169600399788794, 8, 13, 233], 0.27506400000000014], [[0.0426122550918255, 20, 28, 391], 0.12005999999999986], [[0.03400350420973082, 5, 15, 57], 0.21345599999999995], [[0.09990496744284828, 2, 23, 150], 0.0], [[0.02037321174764334, 7, 15, 217], 0.16601899999999986], [[0.040564192055882284, 8, 18, 83], 0.1547869999999999], [[0.06353016910599807, 21, 1, 179], 1.233583], [[0.04771945683427357, 1, 28, 283], 0.2071], [[0.06866961608446967, 17, 5, 413], 0.14282999999999993], [[0.02037321174764334, 5, 20, 217], 0.08113499999999967], [[0.035879770384169604, 21, 22, 461], 0.2310999999999998], [[0.005713760919368172, 29, 8, 431], 0.31252799999999953], [[0.05329949022617602, 22, 12, 375], 0.2991], [[0.061396784883645805, 27, 27, 183], 0.02000000000000018], [[0.06866961608446967, 30, 27, 413], 0.030496000000000096], [[0.02862435403001, 29, 6, 489], 1.405275], [[0.005713760919368172, 29, 10, 372], 0.1657659999999998], [[0.033146552557375385, 22, 20, 229], 0.30808499999999983], [[0.06433993415159887, 5, 28, 42], 0.030496000000000096], [[0.05448171521773263, 16, 12, 357], 0.20279000000000014], [[0.058650384018878274, 19, 1, 350], 0.9489810000000005], [[0.03401284446555551, 2, 3, 228], 0.9375509999999995], [[0.014095146692794292, 12, 16, 267], 0.28401100000000024], [[0.040564192055882284, 25, 20, 83], 0.1679460000000001], [[0.015637631285750787, 29, 8, 237], 0.2039220000000003], [[0.0062391463051995055, 27, 6, 103], 0.13820199999999933], [[0.02744630312805818, 14, 10, 220], 0.15775699999999998], [[0.015637631285750787, 4, 27, 237], 0.06466199999999972], [[0.06809507047504157, 18, 22, 439], 1.019936], [[0.06353016910599807, 5, 23, 179], 0.030496000000000096], [[0.04740901059544287, 4, 26, 272], 0.2071]]
best_chromossomes_5 = sorted(best_chromossomes_5, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_5 = best_chromossomes_5[-1]
best_5
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_5 = best_chromossomes_5[0]
worst_5
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_5 = 0
for x in best_chromossomes_5: mean_5 = mean_5 + x[1]
mean_5 = mean_5/len(best_chromossomes_5)
mean_5
std_5 = 0
for x in best_chromossomes_5: std_5 = std_5 + (x[1] - mean_5) ** 2
std_5 = (std_5 / (len(best_chromossomes_5) - 1)) ** (1/2)
std_5
###Output
_____no_output_____
###Markdown
Sixth iteration:
1. Population → 100
2. Budget → 10,000
3. Delta → 0.001
4. Crossover → crossover2 (rate → 0.3)
5. Mutation → mutation (rate → 0.8)
###Code
# Parameters varied in this iteration
mutation_rate = 0.8
crossover_rate = 0.3
begin_time = time.time()
best_chromossomes_6 = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
###Output
_____no_output_____
###Markdown
Result of the sixth iteration
###Code
tempo_6 = 1780.06
best_chromossomes_6 = [[[0.07794358309673811, 30, 10, 317], 0.9030880000000001], [[0.06827775770281684, 14, 3, 284], 0.061904000000000084], [[0.011373315881148042, 5, 28, 41], 0.14474399999999987], [[0.09146004158216198, 19, 21, 179], 0.0], [[0.03762319921867761, 23, 26, 474], 0.1857119999999999], [[0.06901941114548577, 2, 11, 495], 0.030496000000000096], [[0.0013360847408618428, 28, 4, 67], 0.20838999999999977], [[0.029462257455380468, 24, 22, 465], 0.09731599999999999], [[0.02107976729256826, 19, 20, 291], 0.19215099999999985], [[0.02967152969356126, 29, 1, 313], 0.17561800000000038], [[0.04771945683427357, 1, 7, 392], 0.9381200000000001], [[0.06827775770281684, 3, 3, 364], 0.9472480000000003], [[0.011826888947826103, 16, 28, 102], 0.21662099999999992], [[0.03788981821074894, 19, 6, 233], 0.20382300000000014], [[0.03857800810307234, 30, 20, 378], 0.03695800000000036], [[0.015446728120846053, 16, 27, 432], 0.14656799999999967], [[0.04740901059544287, 28, 14, 241], 0.3264580000000002], [[0.026530512139258312, 28, 26, 102], 0.1582970000000001], [[0.026530512139258312, 16, 17, 79], 0.18704800000000013], [[0.03082086229808331, 28, 27, 67], 0.3545170000000002], [[0.001796107327290031, 17, 13, 197], 0.043684000000000014], [[0.015446728120846053, 24, 21, 59], 0.1280979999999996], [[0.05012342514910495, 19, 22, 31], 0.13648899999999994], [[0.023963329450606856, 24, 15, 225], 0.17266899999999988], [[0.04728446438009225, 29, 4, 291], 0.003606999999999971], [[0.08745346794646858, 14, 9, 116], 0.899016], [[0.06531599746553525, 28, 30, 247], 0.030496000000000096], [[0.07794358309673811, 24, 26, 355], 0.030496000000000096], [[0.040564192055882284, 4, 24, 176], 1.4060599999999999], [[0.011373315881148042, 15, 15, 41], 0.15513799999999991], [[0.02332053773609267, 4, 23, 317], 0.14265899999999984], [[0.024108661369177553, 19, 22, 333], 1.2583519999999997], [[0.06353016910599807, 7, 6, 294], 1.2264389999999998], [[0.07794358309673811, 30, 10, 317], 0.9030880000000001], [[0.026530512139258312, 13, 17, 102], 0.23668999999999996], [[0.011373315881148042, 28, 4, 41], 0.14090699999999978], [[0.050207624767356906, 6, 3, 356], 0.03373099999999995], [[0.03762319921867761, 22, 5, 171], 0.36643100000000034], [[0.04771945683427357, 25, 30, 392], 0.14721900000000004], [[0.06531599746553525, 6, 20, 309], 0.1448539999999999], [[0.015446728120846053, 24, 21, 59], 0.1280979999999996], [[0.05096130523340367, 8, 26, 341], 0.2014], [[0.026530512139258312, 16, 7, 315], -0.028762999999999376], [[0.05096130523340367, 19, 28, 341], 0.2071], [[0.03762319921867761, 23, 26, 474], 0.1857119999999999], [[0.024108661369177553, 22, 3, 333], 0.061618000000000395], [[0.0019159106074458365, 27, 19, 227], 1.2582539999999993], [[0.024108661369177553, 5, 28, 459], 0.22841000000000003], [[0.09146004158216198, 15, 1, 257], 0.0], [[0.04728446438009225, 28, 26, 291], 0.2071], [[0.04740901059544287, 18, 8, 272], 1.249912], [[0.08015381266919075, 1, 9, 364], 0.851736], [[0.03788981821074894, 4, 19, 48], 0.3563870000000003], [[0.08006097085289227, 16, 19, 319], 0.9042240000000001], [[0.04728446438009225, 28, 26, 291], 0.2071], [[0.03722500719483046, 3, 10, 204], 1.020607], [[0.029462257455380468, 24, 24, 465], 0.18865199999999988], [[0.08643051683088326, 19, 10, 283], 0.899016], [[0.03394472518173914, 20, 11, 393], 0.3532470000000001], [[0.02967152969356126, 29, 1, 313], 0.17561800000000038], [[0.04404701629430993, 24, 26, 229], 0.18628099999999995], [[0.04740901059544287, 4, 28, 272], 0.2071], [[0.03631474775767192, 13, 8, 225], 
0.08631899999999951], [[0.08006097085289227, 24, 6, 327], 0.9030880000000001], [[0.08015381266919075, 30, 3, 154], 0.9030880000000001], [[0.08015381266919075, 30, 4, 188], 0.9030880000000001], [[0.04070381683464959, 24, 8, 342], 0.1529239999999998], [[0.03762319921867761, 23, 4, 398], 0.25163000000000013], [[0.01731263056609641, 13, 30, 67], 0.15228799999999973], [[0.020952662264810143, 13, 2, 225], 0.1457850000000004], [[0.049667871125811044, 4, 27, 176], 0.16329400000000005], [[0.08006097085289227, 24, 6, 327], 0.9030880000000001], [[0.026530512139258312, 24, 11, 102], 0.16137599999999985], [[0.015446728120846053, 20, 5, 187], 0.9976450000000008], [[0.029462257455380468, 19, 27, 370], 0.2630699999999999], [[0.06827775770281684, 3, 3, 364], 0.9472480000000003], [[0.04728446438009225, 24, 26, 182], 0.07837900000000009], [[0.04728446438009225, 6, 30, 151], 0.2071], [[0.03631474775767192, 13, 8, 225], 0.08631899999999951], [[0.011373315881148042, 22, 4, 148], 0.17286800000000022], [[0.01741819591732396, 13, 7, 67], 1.4660830000000005], [[0.07246378790095855, 26, 10, 302], 0.989664], [[0.06531599746553525, 6, 19, 349], 0.030496000000000096], [[0.027465233103897445, 19, 23, 303], 1.1658529999999998], [[0.04740901059544287, 12, 20, 272], 0.03477999999999993], [[0.029462257455380468, 24, 22, 465], 0.09731599999999999], [[0.06901941114548577, 25, 8, 495], 0.030496000000000096], [[0.018088104146713333, 12, 4, 465], 1.0062780000000002], [[0.03082086229808331, 21, 8, 204], 0.2693119999999997], [[0.019643549748907703, 11, 26, 320], 0.214351], [[0.0019159106074458365, 19, 23, 253], 0.23925699999999978], [[0.0019159106074458365, 3, 22, 485], 0.2590589999999998], [[0.08015381266919075, 30, 10, 358], 0.9030880000000001], [[0.07794358309673811, 11, 18, 486], 0.030496000000000096], [[0.029462257455380468, 16, 18, 173], 0.16866900000000024], [[0.011373315881148042, 28, 10, 353], 0.1498550000000003], [[0.01731263056609641, 2, 5, 82], 0.04977600000000038], [[0.06393161063530112, 20, 14, 225], 0.989664], [[0.05012342514910495, 6, 27, 272], 0.16280199999999986], [[0.0013360847408618428, 20, 17, 308], 0.3634830000000007]]
best_chromossomes_6 = sorted(best_chromossomes_6, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_6 = best_chromossomes_6[-1]
best_6
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_6 = best_chromossomes_6[0]
worst_6
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_6 = 0
for x in best_chromossomes_6: mean_6 = mean_6 + x[1]
mean_6 = mean_6/len(best_chromossomes_6)
mean_6
std_6 = 0
for x in best_chromossomes_6: std_6 = std_6 + (x[1] - mean_6) ** 2
std_6 = (std_6 / (len(best_chromossomes_6) - 1)) ** (1/2)
std_6
###Output
_____no_output_____
###Markdown
Seventh iteration:
1. Population → 100
2. Budget → 10,000
3. Delta → 0.001
4. Crossover → crossover (rate → 0.3)
5. Mutation → mutation_v2 (rate → 0.3)
###Code
# Parameters varied in this iteration
mutation_rate = 0.3
crossover_rate = 0.3
begin_time = time.time()
best_chromossomes_7 = evolutionary_strategy2(np_array, chromossomes, budget, crossover, mutation_v2, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
###Output
_____no_output_____
###Markdown
Result of the seventh iteration
###Code
tempo_7 = 2130.24
best_chromossomes_7 = [[[0.014095146692794292, 16, 13, 197], 0.17163299999999998], [[0.05577398200717687, 24, 28, 275], 0.20279000000000014], [[0.02272562060237673, 4, 20, 197], 0.19662199999999974], [[0.05638765758795582, 4, 20, 237], 0.20279000000000014], [[0.04352054988237672, 16, 8, 413], 0.10985499999999974], [[0.01731263056609641, 25, 28, 289], 0.23197999999999994], [[0.030806572110157457, 10, 23, 341], 0.22135], [[0.03082086229808331, 28, 2, 275], 0.27029100000000017], [[0.014095146692794292, 24, 1, 489], 1.15989], [[0.015637631285750787, 4, 20, 225], 0.26856999999999986], [[0.05106509101455589, 1, 24, 213], 0.2071], [[0.01941681834556475, 24, 8, 392], 0.10091600000000017], [[0.0013360847408618428, 4, 9, 275], 0.27778299999999984], [[0.03082086229808331, 30, 8, 413], 0.3015390000000001], [[0.00122072665795554, 6, 27, 349], 0.1887900000000005], [[0.014095146692794292, 20, 9, 323], 1.0133570000000007], [[0.05019186610051978, 10, 8, 413], 1.249912], [[0.04771945683427357, 1, 26, 100], 0.2071], [[0.04771945683427357, 1, 26, 100], 0.2071], [[0.012126557676406044, 28, 19, 168], 0.17769400000000005], [[0.029462257455380468, 8, 12, 341], 0.2203370000000001], [[0.020952662264810143, 13, 15, 377], 0.2382199999999999], [[0.06531599746553525, 20, 9, 323], 0.030496000000000096], [[0.0019159106074458365, 28, 10, 204], 0.14953800000000028], [[0.02721116991599805, 1, 5, 349], 0.22739900000000016], [[0.0569737751897275, 14, 24, 229], 0.08794400000000005], [[0.02721116991599805, 11, 19, 244], 0.11898599999999969], [[0.011689529738893785, 5, 20, 83], 1.2209010000000002], [[0.020952662264810143, 16, 13, 197], 0.13081000000000004], [[0.05638765758795582, 12, 6, 267], 0.20279000000000014], [[0.05608963794013028, 11, 28, 275], 0.20279000000000014], [[0.014095146692794292, 6, 27, 349], 0.21479000000000031], [[0.0019159106074458365, 19, 28, 275], 0.18255], [[0.050207624767356906, 18, 25, 484], 0.2071], [[0.05019186610051978, 10, 8, 67], 1.249912], [[0.015637631285750787, 4, 28, 289], 0.25380999999999987], [[0.015637631285750787, 23, 4, 55], 0.10104599999999955], [[0.0019159106074458365, 16, 13, 142], 0.26353000000000026], [[0.0013360847408618428, 20, 27, 168], 0.1481470000000003], [[0.020952662264810143, 13, 26, 181], 0.17241500000000015], [[0.03532226504719589, 14, 2, 197], 0.14011199999999935], [[0.06531599746553525, 6, 27, 229], 0.030496000000000096], [[0.01731263056609641, 13, 8, 489], 0.20085], [[0.04125036500131737, 24, 6, 267], 0.13920299999999952], [[0.015637631285750787, 10, 20, 96], 0.17073100000000013], [[0.014095146692794292, 6, 27, 349], 0.21479000000000031], [[0.07736594075137825, 16, 13, 341], 0.030496000000000096], [[0.06602698673421992, 20, 28, 384], 0.030496000000000096], [[0.03532226504719589, 13, 15, 225], 1.2622349999999998], [[0.03082086229808331, 30, 8, 413], 0.3015390000000001], [[0.04014749294839869, 23, 6, 87], 0.2653], [[0.02721116991599805, 1, 5, 474], 0.22739900000000016], [[0.014095146692794292, 1, 20, 392], 0.15867999999999974], [[0.015637631285750787, 4, 20, 275], 0.26856999999999986], [[0.03082086229808331, 28, 10, 392], 0.35915699999999995], [[0.0019159106074458365, 19, 15, 225], 0.18363099999999977], [[0.001796107327290031, 13, 8, 67], 1.2241790000000001], [[0.03082086229808331, 4, 20, 375], 0.13396200000000028], [[0.011689529738893785, 5, 20, 83], 1.2209010000000002], [[0.03532226504719589, 14, 2, 31], 0.14011199999999935], [[0.0804570785992857, 24, 7, 119], 0.9030880000000001], [[0.05638765758795582, 24, 6, 67], 0.20279000000000014], [[0.017156471419258447, 19, 22, 245], 
0.1471], [[0.014095146692794292, 12, 19, 53], 0.13584599999999972], [[0.020952662264810143, 1, 28, 289], 0.20335], [[0.03082086229808331, 28, 9, 323], 0.35461699999999985], [[0.06866961608446967, 23, 17, 474], 0.030496000000000096], [[0.017156471419258447, 16, 13, 197], 0.14611099999999988], [[0.0804570785992857, 28, 10, 204], 0.030496000000000096], [[0.04771945683427357, 1, 8, 398], 0.22172000000000008], [[0.010735938475514859, 13, 8, 67], 0.17915999999999968], [[0.020952662264810143, 13, 15, 377], 0.2382199999999999], [[0.015637631285750787, 19, 27, 349], 0.29576], [[0.017156471419258447, 9, 16, 67], 0.23554999999999982], [[0.06353016910599807, 20, 13, 405], 1.158983], [[0.08285846336172957, 4, 29, 32], 0.9042240000000001], [[0.05608963794013028, 11, 28, 275], 0.20279000000000014], [[0.06531599746553525, 6, 27, 229], 0.030496000000000096], [[0.05019186610051978, 24, 7, 119], 1.4000050000000004], [[0.07945830486792235, 1, 28, 205], 0.030496000000000096], [[0.05638765758795582, 24, 13, 197], 0.08794400000000005], [[0.06708530725725319, 6, 27, 349], 0.030496000000000096], [[0.022477117010065675, 27, 6, 87], 0.26725], [[0.05096130523340367, 8, 28, 45], 1.4060599999999999], [[0.010593944793495557, 13, 20, 237], 0.3313050000000001], [[0.0804570785992857, 12, 17, 474], 0.030496000000000096], [[0.020952662264810143, 13, 15, 244], 0.2382199999999999], [[0.0013360847408618428, 20, 24, 213], 0.19925300000000007], [[0.001796107327290031, 16, 27, 349], 0.29339999999999944], [[0.06678827163793041, 18, 27, 349], 0.1448539999999999], [[0.00122072665795554, 24, 24, 267], 0.15537000000000026], [[0.06344805404741997, 1, 1, 179], 1.0155690000000002], [[0.05608963794013028, 11, 23, 377], 0.20279000000000014], [[0.017156471419258447, 9, 24, 213], 0.1552039999999999], [[0.015637631285750787, 8, 12, 341], 1.2765959999999998], [[0.07955838461283993, 4, 20, 237], 0.030496000000000096], [[0.02721116991599805, 28, 10, 204], 0.15446999999999989], [[0.06531599746553525, 6, 27, 398], 0.030496000000000096], [[0.00122072665795554, 10, 23, 145], 0.27805599999999975], [[0.06353016910599807, 20, 1, 484], 1.233583]]
best_chromossomes_7 = sorted(best_chromossomes_7, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_7 = best_chromossomes_7[-1]
best_7
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_7 = best_chromossomes_7[0]
worst_7
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_7 = 0
for x in best_chromossomes_7: mean_7 = mean_7 + x[1]
mean_7 = mean_7/len(best_chromossomes_7)
mean_7
std_7 = 0
for x in best_chromossomes_7: std_7 = std_7 + (x[1] - mean_7) ** 2
std_7 = (std_7 / (len(best_chromossomes_7) - 1)) ** (1/2)
std_7
###Output
_____no_output_____
###Markdown
Eighth iteration:
1. Population → 100
2. Budget → 10,000
3. Delta → 0.001
4. Crossover → crossover (rate → 0.8)
5. Mutation → mutation_v2 (rate → 0.3)
###Code
# Parameters varied in this iteration
mutation_rate = 0.3
crossover_rate = 0.8
begin_time = time.time()
best_chromossomes_8 = evolutionary_strategy2(np_array, chromossomes, budget, crossover, mutation_v2, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
###Output
_____no_output_____
###Markdown
Result of the eighth iteration
###Code
tempo_8 = 3403.16
best_chromossomes_8 = [[[0.08075308354844817, 20, 28, 275], 0.030496000000000096], [[0.05638765758795582, 24, 5, 31], 0.20279000000000014], [[0.015446728120846053, 4, 15, 225], 0.06313600000000023], [[0.08015381266919075, 9, 16, 96], 0.8730160000000003], [[0.012126557676406044, 28, 14, 176], 0.26797000000000043], [[0.011449935867789965, 13, 4, 350], 0.08678699999999935], [[0.05019186610051978, 27, 10, 204], 0.21376100000000006], [[0.02744630312805818, 14, 20, 176], 0.0685960000000001], [[0.035821660132304124, 25, 20, 229], 0.19087600000000002], [[0.06353016910599807, 21, 1, 453], 1.233583], [[0.06353016910599807, 20, 1, 489], 1.233583], [[0.03532226504719589, 24, 9, 489], 0.20992000000000008], [[0.06602698673421992, 1, 24, 317], 0.030496000000000096], [[0.012126557676406044, 30, 18, 498], 0.18563900000000014], [[0.05019186610051978, 27, 5, 295], 1.199282], [[0.014851240687694323, 21, 15, 225], 1.409789], [[0.0019159106074458365, 13, 15, 225], 0.18562999999999957], [[0.06795736146008101, 30, 10, 119], 0.06076800000000003], [[0.06809507047504157, 4, 23, 148], 0.030496000000000096], [[0.024648981510943135, 28, 30, 398], 0.15293999999999977], [[0.027188793659815927, 7, 6, 377], 1.2226369999999995], [[0.011373315881148042, 28, 8, 177], 0.2975670000000005], [[0.014851240687694323, 21, 15, 225], 1.409789], [[0.06708530725725319, 20, 28, 176], 0.030496000000000096], [[0.020952662264810143, 13, 15, 31], 0.2382199999999999], [[0.08006097085289227, 20, 5, 326], 0.9030880000000001], [[0.024108661369177553, 22, 16, 170], 0.24620999999999968], [[0.040564192055882284, 28, 10, 204], 0.18423999999999996], [[0.05019186610051978, 27, 11, 117], 0.2370789999999999], [[0.08286426146169484, 13, 22, 45], 0.030496000000000096], [[0.020952662264810143, 13, 16, 317], 0.29942000000000024], [[0.015637631285750787, 4, 20, 13], 0.26856999999999986], [[0.0769857784497185, 5, 30, 176], 0.030496000000000096], [[0.022477117010065675, 14, 23, 148], 0.24320999999999984], [[0.024108661369177553, 29, 8, 229], 0.21227500000000019], [[0.0019159106074458365, 19, 22, 229], 0.15749900000000017], [[0.038388455522326705, 21, 22, 466], 0.1704130000000001], [[0.0019159106074458365, 9, 16, 362], 0.18587800000000007], [[0.06347815512920123, 5, 20, 356], 1.233583], [[0.05329949022617602, 25, 25, 356], 0.20568999999999996], [[0.011449935867789965, 13, 4, 41], 0.08678699999999935], [[0.04125036500131737, 24, 5, 498], 0.26142600000000005], [[0.011449935867789965, 1, 28, 289], 0.21610999999999986], [[0.08381548115217148, 4, 1, 13], 0.8689440000000003], [[0.04771945683427357, 11, 5, 438], 0.20502999999999993], [[0.08075308354844817, 13, 20, 237], 0.030496000000000096], [[0.019643549748907703, 11, 24, 181], 0.325143], [[0.0019159106074458365, 15, 22, 32], 0.21730999999999986], [[0.04125036500131737, 24, 5, 382], 0.26142600000000005], [[0.040564192055882284, 5, 27, 453], 0.2071], [[0.020952662264810143, 14, 2, 187], 0.9908760000000002], [[0.06017672787411463, 14, 20, 382], 0.030496000000000096], [[0.024648981510943135, 28, 30, 398], 0.15293999999999977], [[0.017156471419258447, 9, 16, 489], 0.23554999999999982], [[0.027188793659815927, 7, 6, 220], 1.2226369999999995], [[0.030806572110157457, 7, 12, 114], 0.07787800000000007], [[0.06795736146008101, 30, 9, 489], 0.030496000000000096], [[0.022704178096130477, 6, 28, 289], 0.23488999999999996], [[0.0569737751897275, 2, 27, 168], 0.20279000000000014], [[0.08285846336172957, 19, 27, 225], 0.030496000000000096], [[0.024624887281042333, 21, 4, 379], 1.1881159999999997], [[0.08006097085289227, 10, 23, 
377], 0.851736], [[0.06678827163793041, 18, 11, 297], 1.019936], [[0.03082086229808331, 1, 8, 392], 0.0983689999999995], [[0.05329949022617602, 5, 13, 405], 1.2262950000000001], [[0.03532226504719589, 14, 2, 350], 0.14011199999999935], [[0.08104514433464621, 13, 8, 67], 0.9042240000000001], [[0.024648981510943135, 21, 22, 346], 0.17147299999999996], [[0.014851240687694323, 30, 10, 333], 0.18711899999999987], [[0.02070272658270548, 17, 5, 179], 0.15089199999999983], [[0.012126557676406044, 28, 16, 170], 0.22001800000000038], [[0.015446728120846053, 20, 1, 179], 1.2297199999999997], [[0.08285846336172957, 27, 6, 114], 0.030496000000000096], [[0.026288838356353122, 21, 8, 350], 0.15912799999999988], [[0.04740901059544287, 6, 26, 217], 0.13787899999999972], [[0.050207624767356906, 27, 6, 87], 0.205724], [[0.08104514433464621, 25, 1, 245], 0.9042240000000001], [[0.059151529991777196, 6, 15, 181], 0.08773999999999996], [[0.035879770384169604, 11, 17, 489], 0.22135], [[0.05019186610051978, 27, 2, 187], 0.009857999999999994], [[0.020952662264810143, 13, 29, 32], 0.22127000000000008], [[0.06955139641501501, 20, 5, 326], 0.9030880000000001], [[0.06809507047504157, 4, 1, 179], 1.007049], [[0.011449935867789965, 14, 1, 482], 1.6498640000000007], [[0.03532226504719589, 14, 2, 238], 0.14011199999999935], [[0.035821660132304124, 25, 2, 377], 1.4411349999999998], [[0.06602698673421992, 1, 6, 297], 0.14282999999999993], [[0.0013360847408618428, 18, 9, 32], 0.2406199999999999], [[0.03400350420973082, 20, 5, 321], 0.2813970000000001], [[0.035821660132304124, 25, 18, 215], 0.2582189999999997], [[0.08015381266919075, 30, 18, 272], 0.030496000000000096], [[0.025377115975037175, 14, 23, 176], 0.12381200000000007], [[0.0019159106074458365, 19, 22, 176], 0.15749900000000017], [[0.027188793659815927, 7, 9, 489], 0.33653700000000025], [[0.001796107327290031, 20, 5, 339], 0.2333779999999999], [[0.02070272658270548, 17, 5, 225], 0.15089199999999983], [[0.08018363919702412, 1, 4, 41], 0.8730160000000003], [[0.025043840547406, 21, 15, 217], 0.11779899999999979], [[0.01731263056609641, 13, 28, 176], 0.14071599999999962], [[0.025377115975037175, 14, 5, 160], 1.0047240000000002]]
best_chromossomes_8 = sorted(best_chromossomes_8, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_8 = best_chromossomes_8[-1]
best_8
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_8 = best_chromossomes_8[0]
worst_8
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_8 = 0
for x in best_chromossomes_8: mean_8 = mean_8 + x[1]
mean_8 = mean_8/len(best_chromossomes_8)
mean_8
std_8 = 0
for x in best_chromossomes_8: std_8 = std_8 + (x[1] - mean_8) ** 2
std_8 = (std_8 / (len(best_chromossomes_8) - 1)) ** (1/2)  # n-1 for the sample standard deviation, as in the other iterations
std_8
###Output
_____no_output_____
###Markdown
Ninth iteration:
1. Population → 100
2. Budget → 10,000
3. Delta → 0.001
4. Crossover → crossover (rate → 0.3)
5. Mutation → mutation_v2 (rate → 0.8)
###Code
# Parameters varied in this iteration
mutation_rate = 0.8
crossover_rate = 0.3
begin_time = time.time()
best_chromossomes_9 = evolutionary_strategy2(np_array, chromossomes, budget, crossover, mutation_v2, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
###Output
_____no_output_____
###Markdown
Result of the ninth iteration
###Code
tempo_9 = 864.27
best_chromossomes_9 = [[[0.019643549748907703, 11, 23, 148], 0.1711930000000002], [[0.023169600399788794, 11, 26, 215], 0.18186399999999975], [[0.036737453405450725, 15, 23, 114], 0.23747000000000007], [[0.019643549748907703, 21, 15, 51], 1.253768], [[0.001796107327290031, 18, 11, 430], 0.24427999999999975], [[0.07623110819109605, 1, 24, 342], 0.030496000000000096], [[0.01731263056609641, 13, 8, 273], 0.20085], [[0.07333060537032887, 11, 13, 342], 0.030496000000000096], [[0.025377115975037175, 30, 4, 498], 0.14651799999999984], [[0.01731263056609641, 6, 26, 45], 0.055366999999999646], [[0.019643549748907703, 11, 9, 273], 1.2061419999999998], [[0.01731263056609641, 13, 17, 215], 0.12125300000000025], [[0.03082086229808331, 20, 10, 347], 0.055583000000000174], [[0.040564192055882284, 21, 9, 398], 0.12870900000000038], [[0.027023146862853703, 1, 24, 119], 0.21573000000000012], [[0.01731263056609641, 13, 16, 133], 0.17128599999999988], [[0.07333060537032887, 27, 1, 245], 0.9042240000000001], [[0.001796107327290031, 17, 18, 368], 0.27698199999999995], [[0.06866961608446967, 30, 4, 498], 0.1448539999999999], [[0.025377115975037175, 30, 4, 498], 0.14651799999999984], [[0.01731263056609641, 13, 4, 391], 0.10515000000000019], [[0.001796107327290031, 25, 23, 391], 0.2395199999999997], [[0.017254950454450965, 11, 24, 32], 0.1820859999999997], [[0.06353016910599807, 20, 27, 197], 0.030496000000000096], [[0.06809507047504157, 16, 13, 197], 0.030496000000000096], [[0.027188793659815927, 7, 26, 288], 0.26924], [[0.023169600399788794, 11, 26, 215], 0.18186399999999975], [[0.019643549748907703, 25, 18, 41], 0.10586099999999969], [[0.07955838461283993, 25, 24, 260], 0.030496000000000096], [[0.07000478973789967, 15, 2, 83], 0.9705440000000002], [[0.015345742902981838, 15, 20, 391], 0.10736099999999969], [[0.026530512139258312, 25, 17, 389], 0.13087200000000013], [[0.06678827163793041, 20, 14, 179], 0.989664], [[0.023169600399788794, 4, 1, 243], 1.1685060000000005], [[0.05638765758795582, 24, 9, 155], 0.27042000000000027], [[0.06508089208397781, 18, 11, 114], 1.019936], [[0.08028600797417695, 24, 7, 359], 0.9030880000000001], [[0.019643549748907703, 25, 18, 41], 0.10586099999999969], [[0.06353016910599807, 20, 23, 391], 0.030496000000000096], [[0.024624887281042333, 13, 20, 389], 0.24759000000000014], [[0.023169600399788794, 4, 17, 209], 0.23567000000000007], [[0.01464608625495483, 15, 23, 13], 0.2063], [[0.024569143717511573, 30, 17, 64], 0.21554000000000031], [[0.012126557676406044, 17, 18, 258], 0.23374299999999984], [[0.0648104935768754, 20, 12, 215], 0.030496000000000096], [[0.025377115975037175, 11, 8, 413], 0.12738399999999983], [[0.08680001054490148, 4, 1, 64], 0.847664], [[0.040564192055882284, 15, 23, 100], 0.1404119999999999], [[0.025377115975037175, 4, 8, 245], 0.09257100000000046], [[0.05638765758795582, 24, 10, 13], 0.20279000000000014], [[0.029462257455380468, 13, 26, 436], 1.2184629999999996], [[0.08680001054490148, 4, 2, 213], 0.9001520000000001], [[0.0341639922425036, 11, 24, 342], 0.1257609999999997], [[0.020952662264810143, 18, 11, 391], 0.12726499999999996], [[0.01479949600958597, 9, 16, 336], 0.22899999999999981], [[0.01281962509420627, 28, 12, 110], 0.15932900000000008], [[0.026530512139258312, 25, 23, 179], 0.22747000000000006], [[0.03031194617520818, 29, 28, 275], 0.22135], [[0.024624887281042333, 20, 16, 396], 0.18000400000000008], [[0.001796107327290031, 1, 24, 281], 0.22699000000000033], [[0.024624887281042333, 21, 5, 326], 1.4536209999999998], [[0.04740901059544287, 20, 26, 392], 
0.11594300000000003], [[0.01281962509420627, 20, 9, 326], 1.269636], [[0.024624887281042333, 11, 10, 342], 0.9662940000000002], [[0.024624887281042333, 19, 15, 391], 0.18160899999999983], [[0.03400350420973082, 20, 25, 436], 0.24297999999999992], [[0.01464608625495483, 15, 23, 351], 0.2063], [[0.04740901059544287, 6, 4, 148], 1.0064819999999999], [[0.001796107327290031, 16, 13, 498], 0.162634], [[0.040564192055882284, 2, 20, 413], 1.4457200000000001], [[0.05012342514910495, 3, 7, 119], 1.4000050000000004], [[0.01731263056609641, 13, 10, 256], 1.1829429999999996], [[0.020952662264810143, 13, 16, 396], 0.29942000000000024], [[0.06866961608446967, 30, 4, 498], 0.1448539999999999], [[0.06866961608446967, 30, 17, 273], 0.061904000000000084], [[0.01464608625495483, 3, 23, 495], 0.1260470000000001], [[0.017156471419258447, 9, 16, 155], 0.23554999999999982], [[0.04771945683427357, 1, 4, 256], 0.981029], [[0.06678827163793041, 21, 13, 155], 0.030496000000000096], [[0.07623110819109605, 1, 19, 392], 0.030496000000000096], [[0.06866961608446967, 4, 1, 245], 1.007049], [[0.024624887281042333, 28, 12, 114], 0.1356960000000001], [[0.04740901059544287, 6, 26, 342], 0.13787899999999972], [[0.07623110819109605, 30, 6, 396], 0.9030880000000001], [[0.06901941114548577, 10, 8, 436], -0.0332], [[0.025377115975037175, 11, 9, 155], 0.17818099999999995], [[0.05096130523340367, 29, 22, 104], 0.2071], [[0.040564192055882284, 15, 30, 319], 0.2594300000000001], [[0.023169600399788794, 30, 4, 498], 0.14928399999999947], [[0.06901941114548577, 18, 11, 391], 1.019936], [[0.08028600797417695, 24, 7, 359], 0.9030880000000001], [[0.019643549748907703, 11, 24, 183], 0.325143], [[0.0648104935768754, 11, 25, 13], 0.989664], [[0.07333060537032887, 27, 1, 245], 0.9042240000000001], [[0.019643549748907703, 11, 5, 13], 0.19307599999999947], [[0.049303459936792596, 21, 9, 273], 0.23281000000000004], [[0.001796107327290031, 16, 13, 148], 0.162634], [[0.029987471760866947, 21, 9, 64], 0.2405], [[0.024624887281042333, 21, 15, 498], 0.11779899999999979], [[0.07333060537032887, 11, 26, 391], 0.030496000000000096]]
best_chromossomes_9 = sorted(best_chromossomes_9, key=lambda x: x[1], reverse = False)
###Output
_____no_output_____
###Markdown
Best chromosome
###Code
best_9 = best_chromossomes_9[-1]
best_9
###Output
_____no_output_____
###Markdown
Worst chromosome
###Code
worst_9 = best_chromossomes_9[0]
worst_9
###Output
_____no_output_____
###Markdown
Mean fitness value
###Code
mean_9 = 0
for x in best_chromossomes_9: mean_9 = mean_9 + x[1]
mean_9 = mean_9/len(best_chromossomes_9)
mean_9
std_9 = 0
for x in best_chromossomes_9: std_9 = std_9 + (x[1] - mean_9) ** 2
std_9 = (std_9 / (len(best_chromossomes_9) - 1)) ** (1/2)
std_9
###Output
_____no_output_____
###Markdown
Second set of tests
First test - Growth of the population
In a second stage, a test is run to evaluate how the time spent by the function behaves as the population size grows. For that, the rates and the mutation/crossover functions that gave the best results in the first set of tests are fixed, and the population size is varied. Parameters set:
1. Budget → 10,000
2. Delta → 0.01
3. Crossover → crossover2 (rate → 0.3)
4. Mutation → mutation (rate → 0.3)
###Code
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 3
np_array = excels[5].values # Will use the first list of stock market values
mutation_rate = 0.3
crossover_rate = 0.3
tempo_gasto = []
for i in range(10,110,10):
print("Testando populacao de tamanho: " + str(i))
chromossomes = create_population(i)
begin_time = time.time()
best_chromossomes_4 = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
tempo_gasto.append(round(time.time() - begin_time, 2))
print("\t Tempo gasto: " + str(round(time.time() - begin_time, 2)))
x = [10,20,30,40,50,60,70,80,90,100]
y = tempo_gasto
fit = np.polyfit(x,y,1)
fit_fn = np.poly1d(fit)
plt.plot(x,fit_fn(x), color='r', )
plt.scatter(x,y)
plt.xlabel("Tamanho da população")
plt.ylabel("Tempo Gasto (s)")
plt.show()
###Output
_____no_output_____
###Markdown
Second test - Growth of the mutation rate
The second test in this section evaluates the performance of the solution as the mutation rate is increased. For that, all parameters of the evolutionary strategy are fixed and the mutation rate is varied between 0 and 1. Note that, in order to assess the influence of mutation alone, the crossover rate is set to 0. Parameters used:
1. Population size → 10
2. Budget → 10,000
3. Delta → 0.01
4. Crossover → crossover2 (rate → 0.0)
5. Mutation → mutation (rate → varies from 0 to 1)
###Code
population_size = 10
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 10
np_array = excels[5].values # Will use the first list of stock market values
crossover_rate = 0.0
chromossomes = create_population(population_size)
tempo_gasto_2 = []
i = 0.00
while i < 1.01:
mutation_rate = i
print("Testando taxa de: " + str(i))
begin_time = time.time()
temp = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
i = i + 0.10
tempo_gasto_2.append(round(time.time() - begin_time, 2))
print("\t Tempo gasto: " + str(round(time.time() - begin_time, 2)))
x_2 = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
y_2 = tempo_gasto_2
fit = np.polyfit(x_2,y_2,0)
fit_fn = np.poly1d(fit)
plt.plot(x_2,fit_fn(x_2), color='r')
plt.scatter(x_2,y_2)
plt.xlabel("Taxa de Mutação")
plt.ylabel("Tempo Gasto (s)")
plt.show()
###Output
_____no_output_____
###Markdown
Third test - Growth of the crossover rate
The third test in this section evaluates the performance of the solution as the crossover rate is increased. For that, all parameters of the evolutionary strategy are fixed and the crossover rate is varied between 0 and 1. Note that, in order to assess the influence of crossover alone, the mutation rate is set to 0. Parameters used:
1. Population size → 10
2. Budget → 10,000
3. Delta → 0.01
4. Crossover → crossover2 (rate → varies from 0 to 1)
5. Mutation → mutation (rate → 0)
###Code
population_size = 10
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 10
np_array = excels[5].values # Will use the first list of stock market values
mutation_rate = 0.0
chromossomes = create_population(population_size)
tempo_gasto_3 = []
i = 0.00
while i < 1.01:
crossover_rate = i
print("Testando taxa de: " + str(i))
begin_time = time.time()
temp = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
i = i + 0.10
tempo_gasto_3.append(round(time.time() - begin_time, 2))
print("\t Tempo gasto: " + str(round(time.time() - begin_time, 2)))
x_2 = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
y_3 = tempo_gasto_3
fit = np.polyfit(x_2,y_3,2)
fit_fn = np.poly1d(fit)
plt.plot(x_2,fit_fn(x_2), color='r')
plt.scatter(x_2,y_3)
plt.xlabel("Taxa de Crossover")
plt.ylabel("Tempo Gasto (s)")
plt.show()
###Output
_____no_output_____
###Markdown
Tests simulating Bitcoin investments
Test using the first selection method
###Code
population_len = 100
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 3
np_array = excels[1].values # Will use the first list of market values
mutation_rate = 0.3
crossover_rate = 0.3
chromossomes = create_population(population_len)
begin_time = time.time()
best_chromossomes_bit = evolutionary_strategy1(np_array, chromossomes, budget, crossover2, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
fitness(np_array, best_chromossomes_bit, budget)
###Output
_____no_output_____
###Markdown
Test using the second selection method
###Code
population_len = 100
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 3
np_array = excels[1].values # Will use the first list of market values
mutation_rate = 0.3
crossover_rate = 0.3
chromossomes = create_population(population_len)
begin_time = time.time()
best_chromossomes_bit2 = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
fitness(np_array, best_chromossomes_bit2, budget)
###Output
_____no_output_____
###Markdown
Test using the second selection method without normalizing the selection
###Code
population_len = 100
budget = 10000
delta_to_converge = 0.01
min_iteration_converget = 3
np_array = excels[1].values # Will use the first list of market values
mutation_rate = 0.3
crossover_rate = 0.3
chromossomes = create_population(population_len)
begin_time = time.time()
best_chromossomes_bit3 = evolutionary_strategy2(np_array, chromossomes, budget, crossover2, mutation, delta_to_converge, mutation_rate, crossover_rate, min_iteration_converget)
print ("Tempo para convergir: ", round(time.time() - begin_time, 2), "segundos")
fitness(np_array, best_chromossomes_bit3, budget)
###Output
_____no_output_____
###Markdown
Optimal profit analysis
We analyze how much would have been earned if all the shares had been bought on the lowest-priced day and sold on the highest-priced one.
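In symbols, with budget B = 10000 and lowest/highest closing prices p_min and p_max, the return computed below is (floor(B / p_min) * p_max + (B mod p_min) - B) / B.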
###Code
np_array = excels[5]
np_array.min()
np_array.max()
compra = int(10000/(np_array["Close"].min()))
compra
sobra = 10000 % (np_array["Close"].min())
sobra
valor_venda = compra * np_array["Close"].max() + sobra
valor_venda
lucro = (valor_venda - 10000)/10000
lucro
###Output
_____no_output_____ |
python/Numpy.ipynb | ###Markdown
NUMPY Create 3D point test
###Code
import numpy as np
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
import cufflinks as cf
init_notebook_mode(connected=True)
cf.go_offline()  # call the function so cufflinks actually switches to offline mode
import pandas as pd
point = 10                                              # number of 3D points per line
x = np.random.randint(-10, 10, size=point)              # random integer x coordinates
y, z = map(np.random.rand, [point for _ in range(2)])   # random y and z in [0, 1)
add = np.linspace(0, 100, point)                        # increasing offset so the curve trends upward
x, y, z = (x, y, z) + add                               # the tuple broadcasts to a (3, point) array and `add` is added to every row
x, y, z = x.round(), y.round() - 100, z.round() + 100   # round and shift y/z apart
xyz = np.vstack((x, y, z))                              # stack back into a (3, point) array
df = pd.DataFrame(data=xyz.T, columns='x y z'.split())  # one row per point
df2 = df + 10                                           # a second line, shifted by 10 along every axis
data1 = dict(
type='scatter3d',
mode='lines',
x=df.x, y=df.y, z=df.z,
line=dict(width=3)
)
data2 = dict(
type='scatter3d',
mode='lines',
x=df2.x, y=df2.y, z=df2.z,
line=dict(width=5)
)
# layout = dict(
# autosize=True,
# title='Test',
# )
iplot([data1,data2])
###Output
_____no_output_____ |
Unknow1.ipynb | ###Markdown
Inspect the data
###Code
historical_data_chapter = historical_data[['begin','cnt']].groupby('begin').sum().reset_index()
historical_data_chapter['time'] = historical_data_chapter.begin.apply(lambda x:x.time())
###Output
_____no_output_____
###Markdown
Look at the data at different times of day
###Code
historical_data_chapter[['time','cnt']].groupby(['time']).describe()
###Output
_____no_output_____
###Markdown
Look at the data across the different weeks of the year
###Code
historical_data[['weekofyear','cnt']].groupby(['weekofyear']).describe()
###Output
_____no_output_____
###Markdown
Compute the total number of people per day
###Code
historical_data_cnt = historical_data[['date', 'cnt']].groupby('date').sum()
fig = plt.figure(0,figsize=(80, 19))
ax = fig.add_subplot(1,1,1)
ax.bar(historical_data_cnt.index, historical_data_cnt.cnt.values)
fig.show()
###Output
_____no_output_____
###Markdown
Model
###Code
# Hold-out split for validation: shuffle the data, then split off a test fraction
def split_data(data,frac = 0.2):
return np.split(data.sample(frac=1), [int(frac*len(data))])
###Output
_____no_output_____
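###Markdown
A quick illustration of how `split_data` behaves (illustrative only, not part of the original analysis): `np.split` returns the first `int(frac*len(data))` shuffled rows as the first chunk, which is why it is unpacked as `test, train = ...` further below.
###Code
# Illustrative only: the first returned chunk is the (smaller) test fraction
test_part, train_part = split_data(historical_data, frac=0.2)
print(len(test_part), len(train_part))
###Output
_____no_output_____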
###Markdown
GradientBoostingRegressor
###Code
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
# Prepare the data: map each time-of-day string to its index in the weekly schedule
week = Week()
time_index = [ i.strftime("%H:%M") for i in week.day_times]
historical_data['time_index'] = historical_data.time.apply(lambda x: time_index.index(x))
test, train = split_data(historical_data)
x_train = train[['chapter', 'time_index', 'day', 'dayofweek', 'dayofyear', 'week','weekofyear']].values
y_train = train.cnt.values
x_test = test[['chapter', 'time_index', 'day', 'dayofweek', 'dayofyear', 'week','weekofyear']].values
y_test = test.cnt.values
est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
max_depth=6, random_state=12, loss='ls').fit(x_train, y_train)
mean_squared_error(y_test, est.predict(x_test))
predicted_data = est.predict(historical_data[['chapter', 'time_index', 'day', 'dayofweek', 'dayofyear', 'week','weekofyear']].values)
fig, ax = plt.subplots(figsize=(80, 19))
sub = 50000
ax.bar(historical_data.index[sub:], historical_data.cnt.values[sub:])
ax.errorbar(historical_data.index[sub:], predicted_data[sub:], alpha=0.4)
plt.show()
###Output
_____no_output_____ |
codes/labs_lecture10/lab01_vrnn/.ipynb_checkpoints/rnn_demo-checkpoint.ipynb | ###Markdown
Lab 01: Vanilla RNN - demo
###Code
import torch
import torch.nn.functional as F
import torch.nn as nn
import math
import time
import utils
###Output
_____no_output_____
###Markdown
With or without GPU?
###Code
#device= torch.device("cuda")
device= torch.device("cpu")
print(device)
###Output
cpu
###Markdown
Download Penn Tree Bank
The tensor train_data consists of 20 columns of 46,479 words. The tensor test_data consists of 20 columns of 4,121 words.
###Code
from utils import check_ptb_dataset_exists
data_path=check_ptb_dataset_exists()
train_data = torch.load(data_path+'ptb/train_data.pt')
test_data = torch.load(data_path+'ptb/test_data.pt')
print( train_data.size() )
print( test_data.size() )
###Output
torch.Size([46479, 20])
torch.Size([4121, 20])
###Markdown
Some constants associated with the data set
###Code
bs = 20
vocab_size = 10000
###Output
_____no_output_____
###Markdown
Make a recurrent net class
###Code
class three_layer_recurrent_net(nn.Module):
def __init__(self, hidden_size):
super(three_layer_recurrent_net, self).__init__()
self.layer1 = nn.Embedding( vocab_size , hidden_size )
self.layer2 = nn.RNN( hidden_size , hidden_size )
self.layer3 = nn.Linear( hidden_size , vocab_size )
    def forward(self, word_seq, h_init ):
        g_seq = self.layer1( word_seq )                  # (seq_length, bs) -> (seq_length, bs, hidden_size)
        h_seq , h_final = self.layer2( g_seq , h_init )  # h_seq: (seq_length, bs, hidden_size), h_final: (1, bs, hidden_size)
        score_seq = self.layer3( h_seq )                 # (seq_length, bs, vocab_size)
        return score_seq, h_final
###Output
_____no_output_____
###Markdown
Build the net. Choose the hidden size to be 150. How many parameters in total?
###Code
hidden_size=150
net = three_layer_recurrent_net( hidden_size )
print(net)
utils.display_num_param(net)
###Output
three_layer_recurrent_net(
(layer1): Embedding(10000, 150)
(layer2): RNN(150, 150)
(layer3): Linear(in_features=150, out_features=10000, bias=True)
)
There are 3055300 (3.06 million) parameters in this neural network
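###Markdown
A quick check of where the 3,055,300 parameters come from (a sanity check added here, not part of the original lab): the embedding layer has 10000 x 150 = 1,500,000 weights, the RNN layer has two 150 x 150 matrices plus two bias vectors of length 150 (45,300 parameters), and the linear layer has 150 x 10000 weights plus 10000 biases (1,510,000), which sums to 3,055,300.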
###Markdown
Send the weights of the network to the chosen device
###Code
net = net.to(device)
###Output
_____no_output_____
###Markdown
Manually initialize the weights of the embedding and linear modules
###Code
net.layer1.weight.data.uniform_(-0.1, 0.1)
net.layer3.weight.data.uniform_(-0.1, 0.1)
print('')
###Output
###Markdown
Choose the criterion, as well as the following important hyperparameters:
* initial learning rate = 1
* sequence length = 35
###Code
criterion = nn.CrossEntropyLoss()
my_lr = 1
seq_length = 35
###Output
_____no_output_____
###Markdown
Function to evaluate the network on the test set
###Code
def eval_on_test_set():
running_loss=0
num_batches=0
h = torch.zeros(1, bs, hidden_size)
h=h.to(device)
for count in range( 0 , 4120-seq_length , seq_length) :
minibatch_data = test_data[ count : count+seq_length ]
minibatch_label = test_data[ count+1 : count+seq_length+1 ]
minibatch_data=minibatch_data.to(device)
minibatch_label=minibatch_label.to(device)
scores, h = net( minibatch_data, h )
minibatch_label = minibatch_label.view( bs*seq_length )
scores = scores.view( bs*seq_length , vocab_size)
loss = criterion( scores , minibatch_label )
h=h.detach()
running_loss += loss.item()
num_batches += 1
total_loss = running_loss/num_batches
print('test: exp(loss) = ', math.exp(total_loss) )
###Output
_____no_output_____
###Markdown
 Do 10 passes through the training set (100 passes would reach 135 on the test set)
###Code
start=time.time()
for epoch in range(10):
# keep the learning rate to 1 during the first 4 epochs, then divide by 1.1 at every epoch
if epoch >= 4:
my_lr = my_lr / 1.1
# create a new optimizer and give the current learning rate.
optimizer=torch.optim.SGD( net.parameters() , lr=my_lr )
# set the running quantities to zero at the beginning of the epoch
running_loss=0
num_batches=0
# set the initial h to be the zero vector
h = torch.zeros(1, bs, hidden_size)
# send it to the gpu
h=h.to(device)
for count in range( 0 , 46478-seq_length , seq_length):
# Set the gradients to zeros
optimizer.zero_grad()
# create a minibatch
minibatch_data = train_data[ count : count+seq_length ]
minibatch_label = train_data[ count+1 : count+seq_length+1 ]
# send them to the gpu
minibatch_data=minibatch_data.to(device)
minibatch_label=minibatch_label.to(device)
# Detach to prevent from backpropagating all the way to the beginning
# Then tell Pytorch to start tracking all operations that will be done on h and c
h=h.detach()
h=h.requires_grad_()
# forward the minibatch through the net
scores, h = net( minibatch_data, h )
# reshape the scores and labels to huge batch of size bs*seq_length
scores = scores.view( bs*seq_length , vocab_size)
minibatch_label = minibatch_label.view( bs*seq_length )
# Compute the average of the losses of the data points in this huge batch
loss = criterion( scores , minibatch_label )
# backward pass to compute dL/dR, dL/dV and dL/dW
loss.backward()
# do one step of stochastic gradient descent: R=R-lr(dL/dR), V=V-lr(dL/dV), ...
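        # (the helper below rescales the gradient before the parameter update)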
utils.normalize_gradient(net)
optimizer.step()
# update the running loss
running_loss += loss.item()
num_batches += 1
# compute stats for the full training set
total_loss = running_loss/num_batches
elapsed = time.time()-start
print('')
print('epoch=',epoch, '\t time=', elapsed,'\t lr=', my_lr, '\t exp(loss)=', math.exp(total_loss))
eval_on_test_set()
###Output
epoch= 0 time= 153.87093091011047 lr= 1 exp(loss)= 521.7072143400301
test: exp(loss) = 323.1467831029934
###Markdown
Choose one sentence (taken from the test set)
###Code
sentence1 = "some analysts expect oil prices to remain relatively"
sentence2 = "over the next days and weeks they say investors should look for stocks to"
sentence3 = "prices averaging roughly $ N a barrel higher in the third"
sentence4 = "i think my line has been very consistent mrs. hills said at a news"
sentence5 = "this appears particularly true at gm which had strong sales in"
# or make your own sentence. No capital letter or punctuation allowed. Each word must be in the allowed vocabulary.
sentence6= "he was very"
# SELECT THE SENTENCE HERE
mysentence = sentence1
###Output
_____no_output_____
###Markdown
Convert the sentence into a vector, then send to GPU
###Code
minibatch_data=utils.sentence2vector(mysentence)
minibatch_data=minibatch_data.to(device)
print(minibatch_data)
###Output
tensor([[ 307],
[1140],
[ 334],
[1486],
[1786],
[ 64],
[ 719],
[ 377]])
###Markdown
Set the initial hidden state to zero, then run the RNN.
###Code
h = torch.zeros(1, 1, hidden_size)
h=h.to(device)
scores , h = net( minibatch_data , h )
###Output
_____no_output_____
###Markdown
Display the network prediction for the next word
###Code
print(mysentence, '... \n')
utils.show_next_word(scores)
###Output
some analysts expect oil prices to remain relatively ...
13.1% <unk>
3.4% a
2.0% more
1.6% the
1.0% damage
0.9% big
0.9% <eos>
0.8% $
0.8% good
0.8% other
0.6% interest
0.6% less
0.6% better
0.6% cash
0.6% on
0.6% only
0.5% as
0.5% money
0.5% low
0.5% an
0.5% much
0.5% in
0.4% shares
0.4% major
0.4% and
0.4% high
0.4% most
0.4% results
0.4% growth
0.4% higher
|
0.15/_downloads/plot_visualize_epochs.ipynb | ###Markdown
 Visualize Epochs data
###Code
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(
op.join(data_path, 'sample_audvis_raw.fif'), preload=True)
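# low-pass filter the raw data at 9 Hz (no high-pass) before constructing the epochs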
raw.load_data().filter(None, 9, fir_design='firwin')
raw.set_eeg_reference('average', projection=True) # set EEG average reference
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5)
###Output
_____no_output_____
###Markdown
 This tutorial focuses on visualization of epoched data. All of the functions introduced here are basically high level matplotlib functions with built in intelligence to work with epoched data. All the methods return a handle to matplotlib figure instance. Events used for constructing the epochs here are the triggers for subject being presented a smiley face at the center of the visual field. More of the paradigm at `BABDHIFJ`. All plotting functions start with ``plot``. Let's start with the most obvious. :func:`mne.Epochs.plot` offers an interactive browser that allows rejection by hand when called in combination with a keyword ``block=True``. This blocks the execution of the script until the browser window is closed.
###Code
epochs.plot(block=True)
###Output
_____no_output_____
###Markdown
 The numbers at the top refer to the event id of the epoch. The number at the bottom is the running numbering for the epochs. Since we did no artifact correction or rejection, there are epochs contaminated with blinks and saccades. For instance, epoch number 1 seems to be contaminated by a blink (scroll to the bottom to view the EOG channel). This epoch can be marked for rejection by clicking on top of the browser window. The epoch should turn red when you click it. This means that it will be dropped as the browser window is closed. It is possible to plot event markers on epoched data by passing ``events`` keyword to the epochs plotter. The events are plotted as vertical lines and they follow the same coloring scheme as :func:`mne.viz.plot_events`. The events plotter gives you all the events with a rough idea of the timing. Since the colors are the same, the event plotter can also function as a legend for the epochs plotter events. It is also possible to pass your own colors via ``event_colors`` keyword. Here we can plot the reaction times between seeing the smiley face and the button press (event 32). When events are passed, the epoch numbering at the bottom is switched off by default to avoid overlaps. You can turn it back on via settings dialog by pressing `o` key. You should check out `help` at the lower left corner of the window for more information about the interactive features.
###Code
events = mne.pick_events(events, include=[5, 32])
mne.viz.plot_events(events)
epochs['smiley'].plot(events=events)
###Output
_____no_output_____
###Markdown
 To plot individual channels as an image, where you see all the epochs at one glance, you can use function :func:`mne.Epochs.plot_image`. It shows the amplitude of the signal over all the epochs plus an average (evoked response) of the activation. We explicitly set interactive colorbar on (it is also on by default for plotting functions with a colorbar except the topo plots). In interactive mode you can scale and change the colormap with mouse scroll and up/down arrow keys. You can also drag the colorbar with left/right mouse button. Hitting space bar resets the scale.
###Code
epochs.plot_image(278, cmap='interactive', sigma=1., vmin=-250, vmax=250)
###Output
_____no_output_____
###Markdown
 We can also give an overview of all channels by calculating the global field power (or other aggregation methods). However, combining multiple channel types (e.g., MEG and EEG) in this way is not sensible. Instead, we can use the ``group_by`` parameter. Setting ``group_by`` to 'type' combines channels by type. ``group_by`` can also be used to group channels into arbitrary groups, e.g. regions of interest, by providing a dictionary containing group name -> channel indices mappings.
###Code
epochs.plot_image(combine='gfp', group_by='type', sigma=2., cmap="YlGnBu_r")
###Output
_____no_output_____
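###Markdown
 As a small sketch of the dictionary form described above (the group names and channel picks here are illustrative, not part of the original tutorial), arbitrary channel groups can be passed like this:
###Code
# build two hypothetical groups of gradiometer channels and image each group separately
picks_grad = mne.pick_types(epochs.info, meg='grad')
rois = dict(grad_first=list(picks_grad[:10]), grad_next=list(picks_grad[10:20]))
epochs.plot_image(combine='gfp', group_by=rois, sigma=2., cmap="YlGnBu_r")
###Output
_____no_output_____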
###Markdown
 You also have functions for plotting channelwise information arranged into a shape of the channel array. The image plotting uses automatic scaling by default, but noisy channels and different channel types can cause the scaling to be a bit off. Here we define the limits by hand.
###Code
epochs.plot_topo_image(vmin=-250, vmax=250, title='ERF images', sigma=2.)
###Output
_____no_output_____ |
notebooks/Tools/Frequently_used_code/Using_load_ard.ipynb | ###Markdown
 Using load_ard to load and cloud mask multiple satellite sensors * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser * **Compatibility:** Notebook currently compatible with both the `NCI` and `DEA Sandbox` environments * **Products used:** [ga_ls5t_ard_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls5t_ard_3), [ga_ls7e_ard_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls7e_ard_3), [ga_ls8c_ard_3](https://explorer.sandbox.dea.ga.gov.au/ga_ls8c_ard_3), [s2a_ard_granule](https://explorer.sandbox.dea.ga.gov.au/s2a_ard_granule), [s2b_ard_granule](https://explorer.sandbox.dea.ga.gov.au/s2b_ard_granule) Description. This notebook demonstrates how to use the `load_ard` function to import a time series of cloud-free observations from either multiple Landsat (i.e. Landsat 5, 7 and 8) or Sentinel-2 satellites (i.e. Sentinel-2A and 2B). The function will automatically apply pixel quality masking (e.g. cloud masking) or contiguity masking to the input data and returns all available data from multiple sensors as a single combined `xarray.Dataset`. Optionally, the function can be used to return only observations that contain a minimum proportion of good quality, non-cloudy or shadowed pixels. This can be used to extract visually appealing time series of observations that are not affected by cloud. The function supports the following products: Landsat (GA Collection 3): * `ga_ls5t_ard_3`, `ga_ls7e_ard_3`, `ga_ls8c_ard_3` Sentinel-2 Definitive: * `s2a_ard_granule`, `s2b_ard_granule` Sentinel-2 Near Real Time: * `s2a_nrt_granule`, `s2b_nrt_granule` This notebook demonstrates how to use `load_ard` to: 1. Load and combine Landsat 5, 7 and 8 data into a single `xarray.Dataset` 2. Optionally apply a cloud mask to the resulting data 3. Filter resulting data to keep only cloud-free observations 4. Discard Landsat 7 SLC-off failure data 5. Filter data before loading using a custom function 6. Load and combine Sentinel-2A and Sentinel-2B data into a single `xarray.Dataset` 7. Lazily load data using Dask *** Getting started. To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell. Load packages
###Code
%matplotlib inline
import datacube
import matplotlib.pyplot as plt
import sys
sys.path.insert(1, '../Tools/')
from dea_tools.datahandling import load_ard
###Output
_____no_output_____
###Markdown
Connect to the datacube
###Code
dc = datacube.Datacube(app='Using_load_ard')
###Output
_____no_output_____
###Markdown
 Loading multiple Landsat sensors. The `load_ard` function can be used to load a single, combined timeseries of cloud-masked data from multiple `DEA` products or satellite sensors. At its simplest, you can use the function similarly to `dc.load` by passing a set of spatiotemporal query parameters (e.g. `x`, `y`, `time`, `measurements`, `output_crs`, `resolution`, `group_by` etc) directly into the function ([see the dc.load documentation for all possible options](https://datacube-core.readthedocs.io/en/latest/dev/api/generate/datacube.Datacube.load.html)). The key difference from `dc.load` is that `load_ard` also requires an existing `Datacube` object, which is passed using the `dc` parameter. This gives us the flexibility to load data from development or experimental datacubes. In the examples below, we load a single band of data (`nbart_green`) from the three Landsat Collection 3 products (Landsat 5, 7 and 8) by specifying: `products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3']`. The function always outputs the number of observations for each product, and the total number loaded. For the following examples, the function output shows that 0 Landsat 5 observations, 11 Landsat 7 observations, and 12 Landsat 8 observations were loaded, for a combined total of 23 observations. Explicit syntax. The following example demonstrates how key parameters can be passed directly to `load_ard`.
###Code
# Load available data from all three Landsat satellites
ds = load_ard(dc=dc,
products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3'],
x=(153.38, 153.47),
y=(-28.83, -28.92),
time=('2018-04', '2018-06'),
measurements=['nbart_green'],
output_crs='EPSG:3577',
resolution=(-30, 30),
group_by='solar_day')
# Print output data
ds
###Output
Finding datasets
ga_ls5t_ard_3
ga_ls7e_ard_3
ga_ls8c_ard_3
Applying pixel quality/cloud mask
Loading 16 time steps
###Markdown
 Query syntax. The following example demonstrates how key parameters can be stored in a `query` dictionary, to be passed as a single parameter to `load_ard`. The `query` can then be reused in other `load_ard` calls.
###Code
# Create a reusable query
query = {
'x': (153.38, 153.47),
'y': (-28.83, -28.92),
'time': ('2019-01', '2019-05'),
'measurements': ['nbart_green'],
'output_crs': 'EPSG:3577',
'resolution': (-30, 30),
'group_by': 'solar_day'
}
# Load available data from all three Landsat satellites
ds = load_ard(dc=dc,
products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3'],
**query)
# Print output data
ds
###Output
Finding datasets
ga_ls5t_ard_3
ga_ls7e_ard_3
ga_ls8c_ard_3
Applying pixel quality/cloud mask
Loading 22 time steps
###Markdown
 Working with cloud masking. By plotting a time slice from the data we loaded above, you can see an area of white pixels where clouds have been masked out and set to `NaN`:
###Code
# Plot single observation
ds.isel(time=5).nbart_green.plot()
plt.show()
###Output
_____no_output_____
###Markdown
 By default, `load_ard` applies a pixel quality mask to loaded data using the `fmask` band. The default mask is created based on the `fmask` categories `['valid', 'snow', 'water']`, which will preserve land, snow and water pixels that are not cloudy or shadowed, and set all invalid, cloudy or shadowy pixels to `NaN`. This can be customised using the `fmask_categories` parameter. To deactivate cloud masking completely, set `mask_pixel_quality=False`:
###Code
# Load available data with cloud masking deactivated
ds = load_ard(dc=dc,
products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3'],
mask_pixel_quality=False,
**query)
# Plot single observation
ds.isel(time=5).nbart_green.plot()
plt.show()
###Output
Finding datasets
ga_ls5t_ard_3
ga_ls7e_ard_3
ga_ls8c_ard_3
Loading 22 time steps
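###Markdown
 As a sketch of the customisation mentioned above (the chosen categories are illustrative, and this assumes `fmask_categories` accepts a list of the category names listed earlier), the mask could be restricted to clear land and water pixels only, treating snow as invalid:
###Code
# Load data with a custom pixel quality mask: keep 'valid' and 'water', mask out snow as well
ds_custom = load_ard(dc=dc,
                     products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3'],
                     fmask_categories=['valid', 'water'],
                     **query)
###Output
_____no_output_____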
###Markdown
 Filtering to non-cloudy observations. In addition to masking out cloud, `load_ard` allows you to discard any satellite observation that contains less than a minimum proportion of good quality (e.g. non-cloudy) pixels. This can be used to obtain a time series of only clear, cloud-free observations. To discard all observations with less than `X`% good quality pixels, use the `min_gooddata` parameter. For example, `min_gooddata=0.99` will return only observations where less than 1% of pixels contain cloud, cloud shadow or other invalid data, resulting in a smaller number of clear, cloud free images being returned by the function:
###Code
# Load available data filtered to 99% clear observations
ds_noclouds = load_ard(dc=dc,
products=['ga_ls5t_ard_3',
'ga_ls7e_ard_3',
'ga_ls8c_ard_3'],
min_gooddata=0.99,
**query)
# Plot single observation
ds_noclouds.isel(time=0).nbart_green.plot()
plt.show()
###Output
Finding datasets
ga_ls5t_ard_3
ga_ls7e_ard_3
ga_ls8c_ard_3
Counting good quality pixels for each time step
Filtering to 2 out of 22 time steps with at least 99.0% good quality pixels
Applying pixel quality/cloud mask
Loading 2 time steps
###Markdown
 Discarding Landsat 7 SLC-off failure data. On [May 31 2003, Landsat 7's Scan Line Corrector (SLC) that compensated for the satellite's forward motion failed](http://usgs.gov/land-resources/nli/landsat/landsat-7), introducing linear data gaps in all subsequent Landsat 7 observations. For example, the following Landsat 7 image contains visible striping:
###Code
# Plot Landsat data
ds.isel(time=1).nbart_green.plot()
###Output
_____no_output_____
###Markdown
 Although this data still contains valuable information, for some applications (e.g. generating clean composites from multiple images) it can be useful to discard Landsat 7 imagery acquired after the SLC failure. This data is known as "SLC-off" data. Discarding it can be achieved in `load_ard` using the `ls7_slc_off` parameter. By default this is set to `ls7_slc_off=True`, which will include all SLC-off data. Set `ls7_slc_off=False` to discard this data instead; observe that the function now reports that it is ignoring SLC-off observations:```Finding datasets ga_ls5t_ard_3 ga_ls7e_ard_3 (ignoring SLC-off observations) ga_ls8c_ard_3 ```
###Code
# Load available data after discarding Landsat 7 SLC-off data
ds = load_ard(dc=dc,
products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3'],
ls7_slc_off=False,
**query)
# Print output data
ds
###Output
Finding datasets
ga_ls5t_ard_3
ga_ls7e_ard_3 (ignoring SLC-off observations)
ga_ls8c_ard_3
Applying pixel quality/cloud mask
Loading 14 time steps
###Markdown
 Filtering data before loading using a custom function. The `load_ard` function has a powerful `predicate` parameter that allows you to filter out satellite observations before they are actually loaded using a custom function. Some examples of where this may be useful include: * Filtering to return data from a specific season (e.g. summer, winter) * Filtering to return data acquired on a particular day of the year * Filtering to return data based on an external dataset (e.g. data acquired during specific climatic conditions such as drought or flood) A predicate function should take a `datacube.model.Dataset` object as an input (e.g. as returned from `dc_landsat.find_datasets(product='ga_ls8c_ard_3', **query)[0]`), and return either `True` or `False`. For example, a predicate function could be used to return `True` for only datasets acquired in April: `dataset.time.begin.month == 4`. In the example below, we create a simple predicate function that will filter our data to return only satellite data acquired in April:
###Code
# Simple function that returns True if month is April
def filter_april(dataset):
return dataset.time.begin.month == 4
# Load data that passes the `filter_april` function
ds = load_ard(dc=dc,
products=['ga_ls5t_ard_3', 'ga_ls7e_ard_3', 'ga_ls8c_ard_3'],
predicate=filter_april,
**query)
# Print output data
ds
###Output
Finding datasets
ga_ls5t_ard_3
ga_ls7e_ard_3
ga_ls8c_ard_3
Filtering datasets using predicate function
Applying pixel quality/cloud mask
Loading 6 time steps
###Markdown
We can print the time steps returned by `load_ard` to verify that they now include only April observations (e.g. `2018-04-...`):
###Code
ds.time.values
###Output
_____no_output_____
###Markdown
 Filter to a single season. An example of a predicate function that will return data from a season of interest would look as follows: def seasonal_filter(dataset, season=[12, 1, 2]): # return True if month is in the defined season return dataset.time.begin.month in season. After applying this predicate function, running the following command demonstrates that our dataset only contains months during the Dec, Jan, Feb period: ds.time.dt.season : array(['DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF', 'DJF'], dtype='<U3') Coordinates: * time (time) datetime64[ns] 2016-01-05T10:27:44.213284 ... 2017-12-26T10:23:43.129624 Loading Sentinel-2 data. Data from the Sentinel-2A and Sentinel-2B satellites can also be loaded using `load_ard`. To do this, we need to specify Sentinel-2 products in place of the Landsat products above. The `query` parameter can be reused to load Sentinel-2 data for the same specifications used for the Landsat data above:
###Code
# Load available data from both Sentinel 2 satellites
ds = load_ard(dc=dc,
products=['s2a_ard_granule', 's2b_ard_granule'],
**query)
# Print output data
ds
###Output
Finding datasets
s2a_ard_granule
s2b_ard_granule
Applying pixel quality/cloud mask
Loading 31 time steps
###Markdown
Cloudy pixels are masked out by default from the resulting observations similarly to Landsat:
###Code
# Plot single observation
ds.isel(time=2).nbart_green.plot()
plt.show()
###Output
_____no_output_____
###Markdown
 Lazy loading with Dask. Rather than load data directly - which can take a long time and large amounts of memory - all datacube data can be lazy loaded using `Dask`. This can be a very useful approach for when you need to load large amounts of data without crashing your analysis, or if you want to subsequently scale your analysis by distributing tasks in parallel across multiple workers. The `load_ard` function can be easily adapted to lazily load data rather than loading it into memory by providing a `dask_chunks` parameter using either the [explicit](Explicit-syntax) or [query](Query-syntax) syntax. The minimum required to lazily load data is `dask_chunks={}`, but chunking can also be performed spatially (e.g. `dask_chunks={'x': 1000, 'y': 1000}`) or by time (e.g. `dask_chunks={'time': 1}`) depending on the analysis being conducted. > **Note:** For more information about using Dask, refer to the [Parallel processing with Dask](07_Parallel_processing_with_Dask.ipynb) notebook.
###Code
# Lazily load available Sentinel 2 data
ds = load_ard(dc=dc,
products=['s2a_ard_granule', 's2b_ard_granule'],
dask_chunks={},
**query)
# Print output data
ds
###Output
Finding datasets
s2a_ard_granule
s2b_ard_granule
Applying pixel quality/cloud mask
Returning 31 time steps as a dask array
###Markdown
 Note that the data loads almost instantaneously, and that each of the arrays listed under `Data variables` is now described as a `dask.array`. To load the data into memory, you can run:
###Code
ds.compute()
###Output
_____no_output_____
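###Markdown
 As a sketch of the spatial chunking option mentioned above (the chunk sizes are illustrative), the same lazy load can be requested with explicit x/y chunks so that each Dask task operates on a 1000 x 1000 pixel tile:
###Code
# Lazily load Sentinel 2 data using spatial chunking
ds_chunked = load_ard(dc=dc,
                      products=['s2a_ard_granule', 's2b_ard_granule'],
                      dask_chunks={'x': 1000, 'y': 1000},
                      **query)
###Output
_____no_output_____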
###Markdown
 --- Additional information **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license. **Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)). If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks). **Last modified:** September 2021 **Compatible datacube version:**
###Code
print(datacube.__version__)
###Output
1.8.5
###Markdown
 Tags. Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
###Code
**Tags**: :index:`sandbox compatible`, :index:`NCI compatible`, :index:`load_ard`, :index:`time series analysis`, :index:`landsat 5`, :index:`landsat 7`, :index:`landsat 8`, :index:`sentinel 2`, :index:`cloud masking`, :index:`cloud filtering`, :index:`pixel quality`, :index:`SLC-off`, :index:`predicate function`, :index:`Dask`, :index:`lazy loading`
###Output
_____no_output_____ |
examples/01_forecasting.ipynb | ###Markdown
 Forecasting with sktime. In forecasting, we're interested in using past data to make temporal forward predictions. `sktime` provides common statistical forecasting algorithms and tools for building composite machine learning models. For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import ARIMA, AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformations.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
 Data. To start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
 A time series consists of a sequence of timepoint-value pairs, where the value is what we observed and the timepoint is when we observed it. We represent time series as a `pd.Series` where the index represents the timepoints. `sktime` supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
 Specifying the forecasting task. Next we will define a forecasting task. * We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance. * We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy. We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
 When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizon. One of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
 So here we're interested in predicting from the first to the 36th step ahead. Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write: ```python import numpy as np; fh = np.array([2, 5])  # 2nd and 5th step ahead ``` Absolute forecasting horizon. Alternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use `sktime`'s `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
 Generating forecasts. Like in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon. `sktime` comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselines. Let's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
 Why not just use scikit-learn? You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem? In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
 This leads to leakage: > The data you are using to train a machine learning algorithm happens to have the information you are trying to predict. But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in `sktime`:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
 Pitfall 2: How exactly do we apply regression algorithms to a forecasting task? In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: Reduction. Forecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem. Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window. We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
 What are potential pitfalls here? > This requires a lot of hand-written code which is often error-prone, not modular and not tuneable. > Note also that these steps involve a number of implicit hyper-parameters: > * the way you slice the time series into windows (e.g. the window length) > * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
 But what's the problem here? > We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task! To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
 Forecasting with sktime Reduction: from forecasting to regression. `sktime` provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
 Statistical forecasters. `sktime` has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following. Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
 The exponential smoothing state space model (ETS) can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
 Another common model is the ARIMA model. In `sktime`, we interface [`pmdarima`](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
A single ARIMA model can also be manually configured.
###Code
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
BATS and TBATS are two other time series forecasting algorithms that are contained in `sktime` by means of wrapping the package [`tbats`](https://github.com/intive-DataScience/tbats).
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
 `sktime` also provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook. Please note that `fbprophet` expects data with a time stamp of type `pd.DatetimeIndex`, so we have to convert the index type first:
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z.index = y.index.to_timestamp()
z_train, z_test = temporal_train_test_split(z, test_size=36)
from sktime.forecasting.fbprophet import Prophet
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.
###Markdown
 Composite model building. `sktime` provides a modular API for composite model building for forecasting. Ensembling. Like `scikit-learn`, `sktime` provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped_trend=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped_trend=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
 Tuning. In the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimator' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunnable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 200}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
 Detrending. Note that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data. `sktime` provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
 Pipelining. Let's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
 Of course, we could try again to optimise the hyper-parameters of components of the pipeline. Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Online Forecasting. For model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters. Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sklearn.metrics import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped_trend=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped_trend=True, seasonal="multiplicative", sp=12
),
),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
 For a single update, you can use the `update` method. Prediction intervals. So far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. `sktime`'s interface supports prediction intervals, but we haven't implemented them for all algorithms yet. Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
 Forecasting with sktime. In forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models. For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformers.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
 Data. To start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
 A time series consists of a sequence of timepoint-value pairs, where the value is what we observed and the timepoint is when we observed it. We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
 Specifying the forecasting task. Next we will define a forecasting task. * We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance. * We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy. We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
 When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizon. One of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
 So here we're interested in predicting from the first to the 36th step ahead. Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write: ```python fh = np.array([2, 5])  # 2nd and 5th step ahead ``` Absolute forecasting horizon. Alternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
 Generating forecasts. Like in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon. sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselines. Let's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
 Why not just use scikit-learn? You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem? In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
 This leads to leakage: > The data you are using to train a machine learning algorithm happens to have the information you are trying to predict. But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
 Pitfall 2: How exactly do we apply regression algorithms to a forecasting task? In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: Reduction. Forecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem. Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window. We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or the strategy used to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_series(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformers.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where the value represents the value we observed and the timepoint the point in time at which we observed that value.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course, you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:
```python
fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or the strategy used to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
BATS and TBATS are two other time series forecasting algorithms that are available in sktime via a wrapper around the [tbats](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Online ForecastingFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite ensembler, `PredictionWeightedEnsembler`, to keep track of the loss accumulated by each forecaster and produce a prediction weighted towards the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since the first prediction is needed to update the weights; we do not predict 36 steps ahead.
###Code
from sklearn.metrics import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler`, which will keep track of the loss accumulated by each forecaster, and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method; a minimal sketch follows below. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
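As an aside, here is a minimal sketch of such a single `update` step; it is an illustration only (not part of the original example), reusing the airline train/test split from above and pretending that the first test observation has just arrived.
```python
from sktime.forecasting.naive import NaiveForecaster

forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)

y_new = y_test[:1]        # one newly observed value
forecaster.update(y_new)  # feed the new observation into the fitted forecaster
forecaster.predict(fh=1)  # one-step-ahead forecast from the updated cutoff
```
Now, back to prediction intervals with the Theta forecaster: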
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, an editable developer installation is recommended, see the [sktime developer install guide](https://www.sktime.org/en/stable/installation.html#development-versions) for instructions. Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Multivariate Forecasters](section_1_2_4) * [1.2.5 Prediction intervals and quantile forecasts](section_1_2_5) * [1.2.6 Panel forecasts and hierarchical forecasts](section_1_2_6) * [1.3 Basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 Advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 Updating a forecaster with the update method](section_1_4_1) * [1.4.2 Moving the "now" state without updating the model](section_1_4_2) * [1.4.3 Walk-forward predictions on a batch of data](section_1_4_3) * [1.5 Advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. 
Forecasters in sktime - searching, tags, common families](chapter2) * [2.1 Forecaster lookup - the registry](section_2_1) * [2.2 Forecaster tags](section_2_2) * [2.2.1 Capability tags: multivariate, probabilistic, hierarchical](section_2_2_1) * [2.2.2 Finding and listing forecasters by tag](section_2_2_2) * [2.2.3 Listing all forecaster tags](section_2_2_3) * [2.3 Common forecaster types](section_2_3) * [2.3.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_3_1) * [2.3.2 ARIMA and autoARIMA](section_2_3_2) * [2.3.3 BATS and TBATS](section_2_3_3) * [2.3.4 Facebook prophet](section_2_3_4) * [2.3.5 State Space Model (Structural Time Series)](section_2_3_5) * [2.3.6 AutoArima from StatsForecast](section_2_3_6) * [3. Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2) * [3.4.3 simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for it.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.DataFrame` for time series and sequences, primarily. Rows represent time indices, columns represent variables.* `pd.Series` can also be used for univariate time series and sequences* `numpy` arrays (1D and 2D) can also be passed, but `pandas` use is encouraged.The `Series.index` and `DataFrame.index` are used for representing the time series or sequence index.`sktime` supports pandas integer, period and timestamp indices for simple time series.`sktime` supports further, additional container formats for panel and hierarchical time series; these are discussed in Section 1.2.6.**Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
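To make these container conventions concrete, the following is a small sketch with made-up values: a univariate monthly series held as a `pd.Series` with a `PeriodIndex`, and the same data as a single-column `pd.DataFrame`.
```python
import pandas as pd

# toy univariate series: monthly values indexed by a pandas period index
toy_index = pd.period_range("2000-01", periods=4, freq="M")
y_toy = pd.Series([1.0, 2.0, 3.0, 4.0], index=toy_index)

# equivalent single-column DataFrame: rows are time points, columns are variables
y_toy_df = pd.DataFrame({"value": y_toy})
```
The airline data set described above is loaded below in exactly this `pd.Series`-with-`PeriodIndex` form.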
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the dataas discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
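For illustration, loading such a series from a CSV file might look like the following sketch; the file name and column names are hypothetical and would need to be adapted to your data.
```python
import pandas as pd

# hypothetical file with a "date" column and a "value" column
df = pd.read_csv("my_time_series.csv", parse_dates=["date"])
y_csv = df.set_index("date")["value"].sort_index()
# optionally, convert the timestamp index to a monthly period index
y_csv.index = y_csv.index.to_period("M")
```
In this tutorial, we simply use the built-in airline data set instead: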
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:
```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember the horizon when it is already passed in `fit`, and use it for prediction. The modified workflow that also accommodates such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being
```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```
1.2.4. multivariate forecasting Some forecasters in sktime support multivariate forecasts. Some examples of multivariate forecasters are: `MultiplexForecaster`, `EnsembleForecaster`, `TransformedTargetForecaster`, etc. In order to determine whether a forecaster can be multivariate, one can look at the `scitype:y` entry in its `tags`, which should be set to `multivariate` or `both`. To display the complete list of multivariate forecasters, search for forecasters with the 'multivariate' or 'both' value for the tag 'scitype:y', as follows:
###Code
from sktime.registry import all_estimators
for forecaster in all_estimators(filter_tags={"scitype:y": ["multivariate", "both"]}):
print(forecaster[0])
###Output
_____no_output_____
###Markdown
Below is an example of the general workflow of the multivariate `ColumnEnsembleForecaster` using the longley dataset from `sktime.datasets`. The workflow is the same as for univariate forecasters, but the input has more than one variable (column).
###Code
from sktime.datasets import load_longley
from sktime.forecasting.compose import ColumnEnsembleForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.trend import PolynomialTrendForecaster
_, y = load_longley()
y = y.drop(columns=["UNEMP", "ARMED", "POP"])
forecasters = [
("trend", PolynomialTrendForecaster(), 0),
("ses", ExponentialSmoothing(trend="add"), 1),
]
forecaster = ColumnEnsembleForecaster(forecasters=forecasters)
forecaster.fit(y, fh=[1, 2, 3])
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
The input to the multivariate forecaster `y` is a `pandas.DataFrame` where each column is a variable.
###Code
y
###Output
_____no_output_____
###Markdown
The result of the multivariate forecaster `y_pred` is a `pandas.DataFrame` where columns are the predicted values for each variable. The variables in `y_pred` are the same as in `y`, the input to the multivariate forecaster.
###Code
y_pred
###Output
_____no_output_____
###Markdown
1.2.5 probabilistic forecasting: prediction intervals, quantile, variance, and distributional forecasts `sktime` provides a unified interface to make probabilistic forecasts.The following methods are possibly available for probabilistic forecasts:* `predict_interval` produces interval forecasts. In addition to any `predict` arguments, an argument `coverage` (nominal interval coverage) must be provided.* `predict_quantiles` produces quantile forecasts. In addition to any `predict` arguments, an argument `alpha` (quantile values) must be provided.* `predict_var` produces variance forecasts. This has the same arguments as `predict`.* `predict_proba` produces full distributional forecasts. This has the same arguments as `predict`.Not all forecasters are capable of returning probabilistic forecasts, but if a forecaster provides one kind of probabilistic forecast, it is also capable of returning the others. The list of forecasters with such capability can be queried by `registry.all_estimators`, searching for those where the `capability:pred_int` tag has the value `True`; a small sketch of such a query is given below.The basic workflow for probabilistic forecasts is similar to the basic forecasting workflow, with the difference that instead of `predict`, one of the probabilistic forecasting methods is used:
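For instance, such a registry query might look like the following sketch, mirroring the `all_estimators` search used for multivariate forecasters earlier.
```python
from sktime.registry import all_estimators

# list forecasters that can return probabilistic forecasts
for name, _ in all_estimators("forecaster", filter_tags={"capability:pred_int": True}):
    print(name)
```
Returning to the basic probabilistic workflow: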
###Code
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.theta import ThetaForecaster
# until fit, identical with the simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y, fh=fh)
###Output
_____no_output_____
###Markdown
Now we present the different probabilistic forecasting methods. `predict_interval` - interval predictions `predict_interval` takes an argument `coverage`, which is a float (or list of floats), the nominal coverage of the prediction interval(s) queried. `predict_interval` produces symmetric prediction intervals, for example, a coverage of `0.9` returns a "lower" forecast at quantile `0.5 - coverage/2 = 0.05`, and an "upper" forecast at quantile `0.5 + coverage/2 = 0.95`.
###Code
coverage = 0.9
y_pred_ints = forecaster.predict_interval(coverage=coverage)
y_pred_ints
###Output
_____no_output_____
###Markdown
The returned `y_pred_ints` is a `pandas.DataFrame` with a column multi-index: the first level is the variable name from `y` in `fit` (or `Coverage` if no variable names were present); the second level contains the coverage fractions for which intervals were computed, in the same order as in the input `coverage`; the third level contains the columns `lower` and `upper`. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the lower/upper (as per the column name) bound of the nominal-coverage predictive interval for the index in the same row. pretty-plotting the predictive interval forecasts:
###Code
from sktime.utils import plotting
# also requires predictions
y_pred = forecaster.predict()
fig, ax = plotting.plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["Coverage"][coverage]["lower"],
y_pred_ints["Coverage"][coverage]["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{coverage}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
`predict_quantiles` - quantile forecasts sktime offers `predict_quantiles` as a unified interface to return quantile values of predictions, similar to `predict_interval`.`predict_quantiles` has an argument `alpha`, containing the quantile values being queried. As in the case of `predict_interval`, `alpha` can be a `float` or a `list of floats`.
###Code
y_pred_quantiles = forecaster.predict_quantiles(alpha=[0.275, 0.975])
y_pred_quantiles
###Output
_____no_output_____
###Markdown
`y_pred_quantiles`, the output of predict_quantiles, is a `pandas.DataFrame` with a two-level column multiindex. The first level is the variable name from `y` in fit (or `Quantiles` if no variable names were present), the second level are the quantile values (from `alpha`) for which quantile predictions were queried. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the quantile predictions for that variable and that quantile value, for the time index in the same row. Remark, for clarity: quantile and (symmetric) interval forecasts can be translated into each other as follows. **alpha < 0.5:** the alpha-quantile prediction is equal to the lower bound of a predictive interval with coverage = (0.5 - alpha) * 2. **alpha > 0.5:** the alpha-quantile prediction is equal to the upper bound of a predictive interval with coverage = (alpha - 0.5) * 2. For example, the 0.05-quantile forecast equals the lower bound of the 90% prediction interval, since (0.5 - 0.05) * 2 = 0.9. `predict_var` - variance predictions `predict_var` produces variance predictions:
###Code
y_pred_var = forecaster.predict_var()
y_pred_var
###Output
_____no_output_____
###Markdown
The format of the output `y_pred_var` is the same as for `predict`, except that this is always coerced to a `pandas.DataFrame`, and entries are not point predictions but variance predictions. `predict_proba` - distribution predictions To predict full predictive distributions, `predict_proba` can be used.As this returns `tensorflow` `Distribution` objects, the deep learning dependency set `dl` of `sktime` (which includes `tensorflow` and `tensorflow-probability` dependencies) must be installed.
###Code
y_pred_proba = forecaster.predict_proba()
y_pred_proba
###Output
_____no_output_____
###Markdown
Distributions returned by `predict_proba` are by default marginal at time points, not joint over time points.More precisely, the returned `Distribution` object is formatted and to be interpreted as follows:* batch shape is 1D and same length as fh* event shape is 1D, with length equal to number of variables being forecast* i-th (batch) distribution is forecast for i-th entry of fh* j-th (event) component is j-th variable, same order as y in `fit`/`update`To return joint forecast distributions, the `marginal` parameter can be set to `False` (currently work in progress). In this case, a `Distribution` with 2D event shape `(len(fh), len(y))` is returned. 1.2.6 Panel forecasts and hierarchical forecasts `sktime` provides a unified interface to make panel and hierarchical forecasts.All `sktime` forecasters can be applied to panel and hierarchical data, which needs to be presented in specific input formats.Forecasters that are not genuinely panel or hierarchical forecasters will be applied per instance, i.e., fitted to each series separately.The recommended (not the only) format to pass panel and hierarchical data is a `pandas.DataFrame` with a `MultiIndex` row index. In this `MultiIndex`, the last level must be in an `sktime` compatible time index format, and the remaining levels are panel or hierarchy nodes.Example data:
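As a small, hand-made illustration of this format (the hierarchy levels and values below are made up), such a frame could be constructed as follows.
```python
import pandas as pd

# hypothetical two-level hierarchy (store -> product), monthly period index as last level
hier_index = pd.MultiIndex.from_product(
    [
        ["store_A", "store_B"],
        ["product_1", "product_2"],
        pd.period_range("2000-01", periods=3, freq="M"),
    ],
    names=["store", "product", "time"],
)
y_hier = pd.DataFrame({"sales": range(len(hier_index))}, index=hier_index)
```
The cell below instead generates example data with sktime's built-in hierarchical test-data generator: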
###Code
from sktime.utils._testing.hierarchical import _bottom_hier_datagen
y = _bottom_hier_datagen(no_levels=2)
y
###Output
_____no_output_____
###Markdown
As stated, all forecasters, genuinely hierarchical or not, can be applied, with all workflows described in this section, to produce hierarchical forecasts.The syntax is exactly the same as for plain time series, except for the hierarchy levels in input and output data:
###Code
from sktime.forecasting.arima import ARIMA
fh = [1, 2, 3]
forecaster = ARIMA()
forecaster.fit(y, fh=fh)
forecaster.predict()
###Output
_____no_output_____
###Markdown
Further details on hierarchical forecasting, including reduction, aggregation, reconciliation, are presented in the "hierarchical forecasting" tutorial.

1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations

It is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.

The basic evaluation workflow is as follows:

1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.
2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set
3. specifying a quantitative performance metric to compare the actual test set against predictions
4. computing the quantitative performance on the test set
5. testing whether this performance is statistically better than a chosen baseline performance

NOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).

NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are only insofar representative as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times.

**Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "How" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance).

step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_train

This is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test set

The next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s.

Forecasting metrics can be invoked in two ways:

* using the lean function interface, e.g., `mean_absolute_percentage_error`, which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`
* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signature

Casual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, and tuning over metric parameters (not covered in this tutorial).
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# note: the FIRST argument is the ground truth, the SECOND argument are the forecasts
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.

NOTE: the workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics.

step 5 - testing performance against benchmarks

In general, forecast performances should be quantitatively tested against benchmark performances. Currently (`sktime` v0.12.x), this is a roadmap development item. Contributions are very welcome.

1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interface

For convenience, we present the basic batch forecast evaluation workflow in one cell. This cell is using the lean function metric interface.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interface

For convenience, we present the basic batch forecast evaluation workflow in one cell. This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecasts

A common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods.

1.4.1 updating a forecaster with the `update` method

The `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".

After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).

The general pattern is as follows:

1. specify a forecasting strategy
2. specify a relative forecasting horizon
3. fit the forecaster to an initial batch of data using `fit`
4. make forecasts for the relative forecasting horizon, using `predict`
5. obtain new data; use `update` to ingest new data
6. make forecasts using `predict` for the updated data
7. repeat 5 and 6 as often as required

**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.

A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the model

In rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) forward, for example if no new data was observed but time has progressed, or if model update computations take too long and forecasts have to be queried before they complete.

The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions. If `update_params` is set to `False`, no model update computations are performed; only the data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data

`sktime` can also simulate the update/predict deployment mode with a full batch of data. This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation. The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PRs are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testing

To evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch.

The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function. `evaluate` takes as arguments:

- a `forecaster` to be evaluated
- a `scikit-learn`-like re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`
- a `strategy` (string): whether the forecaster should always be refitted, or fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
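# aggregating per-fold errors from the evaluate output (illustrative sketch; assumes
# return_data=True above, so that df["y_pred"] holds the per-fold forecast series)
fold_errors = [
    mean_absolute_percentage_error(y.loc[yp.index], yp, symmetric=False)
    for yp in df["y_pred"]
]
sum(fold_errors) / len(fold_errors)  # average MAPE across backtest folds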
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome.

2. Forecasters in `sktime` - lookup, properties, main families

This section summarizes:

* how to search for forecasters in `sktime`
* the properties of forecasters, corresponding search options and tags
* commonly used types of forecasters in `sktime`

2.1 Listing all forecasters in `sktime`

Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command. This will list all forecasters in `sktime`, even those whose soft dependencies are not installed.
###Code
from sktime.registry import all_estimators
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
The entries of the last column of the resulting dataframe are classes which could be directly used for construction, or simply inspected for the correct import path.

For logic that loops over forecasters, the default output format may be more convenient:
###Code
forecaster_list = all_estimators("forecaster", as_dataframe=False)
# this returns a list of (name, estimator) tuples
forecaster_list[0]
###Output
_____no_output_____
###Markdown
2.2 Forecaster tags

All forecasters in `sktime` have so-called tags which describe properties of the estimator, e.g., whether it is multivariate, probabilistic, or not. Use of tags, inspection, and retrieval will be described in this section.

2.2.1 Capability tags: multivariate, probabilistic, hierarchical

Every forecaster has tags, which are key-value pairs that can describe capabilities or internal implementation details. The most important "capability" style tags are the following:

`requires-fh-in-fit` - a boolean. Whether the forecaster requires the forecasting horizon `fh` already in `fit` (`True`), or whether it can be passed late in `predict` (`False`).

`scitype:y` - a string. Whether the forecaster is univariate (`"univariate"`), strictly multivariate (`"multivariate"`), or can deal with any number of variables (`"both"`).

`capability:pred_int` - a boolean. Whether the forecaster can return probabilistic predictions via `predict_interval` etc., see Section 1.5.

`ignores-exogeneous-X` - a boolean. Whether the forecaster makes use of exogeneous variables `X` (`False`) or not (`True`). If the forecaster does not use `X`, it can still be passed for interface uniformity, and will be ignored.

`handles-missing-data` - a boolean. Whether the forecaster can deal with missing data in the inputs `X` or `y`.

Tags of a forecaster instance can be inspected via the `get_tags` (lists all tags) and `get_tag` (gets value for one tag) methods. Tag values may depend on hyper-parameter choices.
###Code
from sktime.forecasting.arima import ARIMA
ARIMA().get_tags()
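# a single tag value can be queried with get_tag, e.g. (illustrative):
# ARIMA().get_tag("capability:pred_int")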
###Output
_____no_output_____
###Markdown
The `y_inner_mtype` and `X_inner_mtype` tags indicate whether the forecaster can deal with panel or hierarchical data natively - if a panel or hierarchical mtype occurs here, it does (see the data types tutorial).

An explanation for all tags can be obtained using the `all_tags` utility, see Section 2.2.3.

2.2.2 Finding and listing forecasters by tag

To list forecasters with their tags, the `all_estimators` utility can be used with its `return_tags` argument. The resulting data frame can then be used for table queries or sub-setting.
###Code
from sktime.registry import all_estimators
all_estimators(
"forecaster", as_dataframe=True, return_tags=["scitype:y", "requires-fh-in-fit"]
)
###Output
_____no_output_____
###Markdown
To filter beforehand on certain tags and tag values, the `filter_tags` argument can be used:
###Code
# this lists all forecasters that can deal with multivariate data
all_estimators(
"forecaster", as_dataframe=True, filter_tags={"scitype:y": ["multivariate", "both"]}
)
###Output
_____no_output_____
###Markdown
Important note: as said above, tag values can depend on hyper-parameter settings, e.g., a `ForecastingPipeline` can handle multivariate data only if the forecaster in it can handle multivariate data. In retrieval as above, the tags for a class are usually set to indicate the most general potential value, e.g., if for some parameter choice the estimator can handle multivariate data, it will appear on the list.

2.2.3 Listing all forecaster tags

To list all forecaster tags with an explanation of the tag, the `all_tags` utility can be used:
###Code
import pandas as pd
from sktime.registry import all_tags
# wrapping this in a pandas DataFrame for pretty display
pd.DataFrame(all_tags(estimator_types="forecaster"))[[0, 3]]
###Output
_____no_output_____
###Markdown
2.3 Common forecasters in `sktime`

`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.

Some classes that are currently stably supported are:

* `ExponentialSmoothing`, `ThetaForecaster`, and `AutoETS` from `statsmodels`
* `ARIMA` and `AutoARIMA` from `pmdarima`
* `AutoARIMA` from `statsforecast`
* `BATS` and `TBATS` from `tbats`
* `PolynomialTrendForecaster` for forecasting polynomial trends
* `Prophet`, which interfaces Facebook `prophet`

This is not the full list; use `all_estimators` as demonstrated in Sections 2.1 and 2.2 for that.

For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).

For use in the other workflows, simply replace the forecaster specification block ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.3.1 exponential smoothing, theta forecaster, autoETS from `statsmodels`

`sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS.

For example, to use exponential smoothing with an additive trend component and additive seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for the seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="additive", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model (ETS) can also be fit automatically, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.3.2 ARIMA and autoARIMA

`sktime` interfaces `pmdarima` for its ARIMA class models. For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an `ARIMA` variant that determines the optimal (p, d, q) parameters automatically:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3.3 BATS and TBATS

`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.3.4 Facebook prophet

`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only `pd.DatetimeIndex`. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.3.5 State Space Model (Structural Time Series)

We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.3.6 AutoARIMA from [StatsForecast](https://github.com/Nixtla/statsforecast)

`sktime` interfaces `StatsForecast` for its `AutoARIMA` class models. `AutoARIMA` is an `ARIMA` variant that determines the optimal (p, d, q) parameters automatically:
###Code
from sktime.forecasting.statsforecast import StatsForecastAutoARIMA
forecaster = StatsForecastAutoARIMA(sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more

`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:

* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".
* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.
* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting; an instance of this is the common "STL forecaster".
* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.

For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).

For use in the other workflows, simply replace the forecaster specification block ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression

`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting. It is:

* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,
* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts,
* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model.

**Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fit the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).

Below, the composite is constructed using the shorthand function `make_reduction`, which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are:

* "direct",
* "dirrec",
* "multioutput".

Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (as `estimator`), and to the `window_length` of the reduction strategy. Note that the `strategy` is not accessible via `get_params`, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
###Code
forecaster.get_params()
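# other reduction strategies can be constructed the same way, e.g. (illustrative sketch):
#   make_reduction(regressor, window_length=15, strategy="direct")
#   make_reduction(regressor, window_length=15, strategy="multioutput")
# note: the direct/multioutput variants typically require fh to be passed already in fit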
###Output
_____no_output_____
###Markdown
3.2 Pipelining, detrending and deseasonalization

A common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data.

3.2.1 The basic forecasting pipeline

`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order.

The same pipeline, as above, can also be constructed with the multiplication dunder method `*`. This creates a `TransformedTargetForecaster` as above, with components given default names.
###Code
forecaster = Deseasonalizer(model="multiplicative", sp=12) * ARIMA()
forecaster
###Output
_____no_output_____
###Markdown
The names in a dunder constructed pipeline are made unique in case the same estimator type appears more than once, e.g., if two deseasonalizers are used.

Example of a multiple seasonality model:
###Code
forecaster = (
Deseasonalizer(model="multiplicative", sp=12)
* Deseasonalizer(model="multiplicative", sp=3)
* ARIMA()
)
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.2.2 The `Detrender` as pipeline component

For detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.

To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning

`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`.

3.3.1 Basic tuning using `ForecastingGridSearchCV`

The compositor `ForecastingGridSearchCV` (and other tuners) is constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.

As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc. require no manual effort and are done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex composites

As in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]`, where `[estimatorname]` is the name of the component and `[parametername]` the name of a parameter within the estimator `[estimatorname]`.

For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`.

3.3.3 Selecting the metric and retrieving scores

All tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is the mean absolute percentage error. The score can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.

Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`.

In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging

`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.

The strategies discussed in this section are:

* autoML aka automated model selection
* simple ensembling
* prediction weighted ensembles with weight updates, and hedging strategies

3.4.1 autoML aka automatic model selection, using tuning plus multiplexer

The most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.

In isolation, `MultiplexForecaster` is constructed with a named list of forecasters, `forecasters`. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough`

`sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., of the pipeline structure. This is achieved with the `OptionalPassthrough` transformer.

The `OptionalPassthrough` transformer allows to tune whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.

To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows to access and tune attributes of nested objects like `TabularToSeriesAdaptor(StandardScaler())`. We can use `__` multiple times if we have more than two levels of nesting.

In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4), as well as over the forecaster's and the scaler's parameters.

Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.

Note: `scikit-learn` and `sktime` do not support conditional parameter sets at current (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.3 simple ensembling strategies

TODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensembles

For model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite forecaster, `PredictionWeightedEnsembler`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.

Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
4. Extension guide - implementing your own forecaster

`sktime` is meant to be easily extensible, for direct contribution to `sktime` as well as for local/private extension with custom methods.

To get started:

* follow the ["implementing estimator" developer guide](https://www.sktime.org/en/stable/developer_guide/add_estimators.html)
* use the [simple forecasting extension template](https://github.com/alan-turing-institute/sktime/blob/main/extension_templates/forecasting_simple.py) for forecasters without stream, probabilistic, or hierarchical functionality
* use the [advanced forecasting extension template](https://github.com/alan-turing-institute/sktime/blob/main/extension_templates/forecasting.py) for forecasters with stream, probabilistic or hierarchical functionality
* for probabilistic and hierarchical forecasters, it is recommended to familiarize yourself with the interfaces via the tutorials

An extension template is a python "fill-in" template with to-do blocks that allow you to implement your own, sktime-compatible forecasting algorithm.

Implemented estimators can be easily checked via the `check_estimator` utility. `check_estimator` collects all tests specific to the estimator and runs them:
###Code
# suppose we just implemented ARIMA
from sktime.forecasting.arima import ARIMA
from sktime.utils.estimator_checks import check_estimator
check_estimator(ARIMA)
###Output
_____no_output_____
###Markdown
Forecasting with sktime

In forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.

For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067), in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study.

Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformers.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where the value represents the value we observed and the timepoint the point in time at which we observed that value.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy (a short sketch of the computation follows below).We can split the data as follows:
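As a brief aside on this metric before splitting: below is a minimal sketch of the sMAPE computation with made-up numbers (not the airline data); sktime's own `smape_loss`, used throughout this notebook, implements the same idea for indexed series.

```python
import numpy as np

# toy actuals and forecasts, chosen only for illustration
y_true = np.array([100.0, 120.0, 130.0])
y_hat = np.array([110.0, 115.0, 128.0])

# one common definition of sMAPE: mean of 2 * |error| / (|actual| + |forecast|)
smape = np.mean(2 * np.abs(y_true - y_hat) / (np.abs(y_true) + np.abs(y_hat)))
print(smape)  # lower values indicate more accurate forecasts
```

The next cell performs the train/test split announced above.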
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course, you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:

```python
fh = np.array([2, 5])  # 2nd and 5th step ahead
```

Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
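Before the full M4-style helper in the next cell, the core windowing idea can be sketched in a few lines of plain numpy; the toy series and window length below are illustrative assumptions, not part of the competition code.

```python
import numpy as np

# toy series and window length, chosen only for illustration
toy_series = np.arange(10)
window_length = 3

# each row of X_windows is a window of past values;
# y_targets holds the observation that follows each window
X_windows = np.stack(
    [toy_series[i : i + window_length] for i in range(len(toy_series) - window_length)]
)
y_targets = toy_series[window_length:]
print(X_windows[:3])  # [[0 1 2], [1 2 3], [2 3 4]]
print(y_targets[:3])  # [3 4 5]
```

The next cell shows the (slightly modified) M4 competition code that performs this transformation for the airline series.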
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model (ETS) can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimator' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Online ForecastingFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sklearn.metrics import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from sktime.forecasting.all import *
from warnings import simplefilter
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where the value represents the value we observed and the timepoint the point in time at which we observed that value.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course, you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:

```python
fh = np.array([2, 5])  # 2nd and 5th step ahead
```

Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(np.arange(len(y)), 10, len(fh))
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model (ETS) can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimator' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5,10,15,20,25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(regressor, window_length=15, strategy="recursive")
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE())
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_series(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(ax.get_lines()[-1].get_xdata(), pred_ints["lower"], pred_ints["upper"],
alpha=0.2, color=ax.get_lines()[-1].get_c(), label=f"{1 - alpha}% prediction intervals")
ax.legend();
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Prediction intervals](section_1_2_4) * [1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 updating a forecaster with the update method](section_1_4_1) * [1.4.2 moving the "now" state without updating the model](section_1_4_2) * [1.4.3 walk-forward predictions on a batch of data](section_1_4_3) * [1.5 advanced evaluation worfklow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. Forecasters in sktime - main families](chapter2) * [2.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_1) * [2.2 ARIMA and autoARIMA](section_2_2) * [2.3 BATS and TBATS](section_2_3) * [2.4 Facebook prophet](section_2_4) * [2.5 State Space Model (Structural Time Series)](section_2_5) * [3. 
Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2) * [3.4.3 simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for it.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.9x), forecasting of multivariate time series is a stable functionality, but not covered in this tutorial. Contributions to extend the tutorial are welcome.**Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
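As a small, self-contained illustration of these container conventions, a toy univariate series and a toy multivariate frame with a monthly period index could look as follows; the values and the `promo` column are made up for illustration and are not part of the airline data set.

```python
import pandas as pd

# univariate series: values indexed by a monthly PeriodIndex
toy_index = pd.period_range("1949-01", periods=4, freq="M")
y_toy = pd.Series([112.0, 118.0, 132.0, 129.0], index=toy_index)

# multivariate series/sequences are represented as a DataFrame with the same kind of index
X_toy = pd.DataFrame({"passengers": y_toy.values, "promo": [0, 1, 0, 0]}, index=toy_index)
print(y_toy.index)
```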
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the dataas discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
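If you are loading your own data rather than a bundled data set, the conversion into this format might look roughly like the sketch below; the file name and column names are hypothetical and only illustrate the idea.

```python
import pandas as pd

# hypothetical CSV file with columns "month" and "passengers"
df = pd.read_csv("my_series.csv", parse_dates=["month"])

# build a pd.Series with a monthly period index, as expected by sktime forecasters
y_own = pd.Series(
    df["passengers"].values,
    index=pd.DatetimeIndex(df["month"]).to_period("M"),
)
```

In this tutorial, we simply load the bundled airline data set in the next cell.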
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:

```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```

Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember the horizon for prediction when it has already been passed in `fit`. The modified workflow that also accommodates such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being

```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```

1.2.4 prediction intervals`sktime` provides a unified interface to return prediction intervals when forecasting. This is possible directly in the `predict` function, by setting the `return_pred_int` argument to `True`. The `predict` method then returns a second output: the prediction intervals. Not all forecasters are capable of returning prediction intervals, in which case an error will be raised.Obtaining prediction intervals can be done as part of any workflow involving `predict`, by adding the argument `return_pred_int` - below, we illustrate this by modifying the basic workflow in Section 1.2:
###Code
from sktime.forecasting.theta import ThetaForecaster
# simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y)
# setting return_pred_int argument to True; alpha determines percentiles
# intervals are lower = alpha/2-percentile, upper = (1-alpha/2)-percentile
alpha = 0.05 # 2.5%/97.5% prediction intervals
y_pred, y_pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
###Output
_____no_output_____
###Markdown
`y_pred_ints` is a `pandas.DataFrame` with columns `lower` and `upper`, and rows the indices for which forecasts were made (same as in `y_pred`). Entries are lower/upper (as column name) bound of the nominal alpha predictive interval for the index in the same row.
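Once computed, the interval bounds behave like any other `pandas` columns; for instance, a small sketch of inspecting the interval widths (reusing the `y_pred_ints` object from above):

```python
# width of each nominal (1 - alpha) prediction interval, indexed like y_pred
interval_width = y_pred_ints["upper"] - y_pred_ints["lower"]
interval_width.head()
```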
###Code
y_pred_ints
###Output
_____no_output_____
###Markdown
pretty-plotting the predictive interval forecasts:
###Code
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
NOTE: this should be turned into a one-liner, by moving this to `utils.plotting` - contributions are appreciated. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are representative only insofar as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred)
# note: the FIRST argument is the ground truth, the SECOND argument is the forecast
# the order matters for most metrics in general
###Output
_____no_output_____
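###Markdown
For reference, the plain (non-symmetric) MAPE is, up to an optional percentage scaling, $\mathrm{MAPE}(y, \hat{y}) = \frac{1}{n}\sum_{t=1}^{n}\frac{|y_t - \hat{y}_t|}{|y_t|}$. Depending on the `sktime` version, the function above may default to the symmetric variant, which replaces the denominator $|y_t|$ by $(|y_t| + |\hat{y}_t|)/2$.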
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface makes it easy to construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogenous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.9x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
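###Markdown
Since step 5 is not yet supported in `sktime`, the following is only a rough, self-contained sketch of what such a test could look like: it compares the seasonal naive forecaster above against a non-seasonal naive baseline using a simple Diebold-Mariano-style statistic on absolute errors. The helper names (`y_pred_baseline`, `dm_stat`) are ours, and the statistic ignores autocorrelation of the loss differential, so treat it as an illustration rather than a rigorous test.
###Code
import numpy as np
from scipy import stats

# baseline: non-seasonal "last value" forecasts for the same horizon
baseline = NaiveForecaster(strategy="last")
baseline.fit(y_train)
y_pred_baseline = baseline.predict(fh)

# loss differential between the two forecasters (absolute error loss)
d = np.abs(y_test - y_pred) - np.abs(y_test - y_pred_baseline)

# simple Diebold-Mariano-style statistic, ignoring autocorrelation of d
dm_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
p_value = 2 * (1 - stats.norm.cdf(np.abs(dm_stat)))
dm_stat, p_value
###Output
_____no_output_____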
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed but time has progressed, or if computations take too long and forecasts have to be queried in the meantime.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PRs are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
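###Markdown
The data frame returned by `evaluate` contains one row per re-sample fold, so aggregate rolling performance can be computed directly from it. A minimal sketch, assuming the default MAPE scorer whose score column is named `test_MeanAbsolutePercentageError` (column names may differ across `sktime` versions):
###Code
# aggregate per-fold scores and runtimes returned by `evaluate`
# (column names below are assumptions based on the default scorer)
df[["test_MeanAbsolutePercentageError", "fit_time", "pred_time"]].agg(["mean", "std"])
###Output
_____no_output_____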
###Markdown
todo: performance metrics, averages, and testing (a simple manual aggregation of per-fold scores is sketched above) - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `AutoETS` from `statsmodels`* `ARIMA` and `AutoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrendForecaster` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.registry import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1.We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, AutoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS.For example, to use exponential smoothing with an additive trend component and multiplicative seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for the seasonal periodicity (sp) is 12 (= the hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model (ETS) can also be fitted with automated model selection, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal pdq parameters automatically:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only `pd.DatetimeIndex`. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.5 State Space Model (Structural Time Series)We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting. The reduction meta-estimator is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction`, which produces an `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are "direct", "dirrec", and "multioutput" (a sketch using the "direct" strategy follows below). Parameters can be inspected using `scikit-learn`-compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to the parameters of the `KNeighborsRegressor` (as `estimator`), and to the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
###Code
forecaster.get_params()
###Output
_____no_output_____
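###Markdown
For comparison, here is a sketch (our own illustrative variant, not part of the example above) of the same reduction with the "direct" strategy. The direct strategy fits one regressor per step ahead, so the forecasting horizon has to be passed to `fit` already:
###Code
# hedged sketch: the same regressor with the "direct" reduction strategy
forecaster_direct = make_reduction(regressor, window_length=15, strategy="direct")
forecaster_direct.fit(y_train, fh=fh)  # the direct strategy needs fh at fit time
y_pred_direct = forecaster_direct.predict(fh)
mean_absolute_percentage_error(y_test, y_pred_direct)
###Output
_____no_output_____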
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transforms` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and other tuners) are constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc requires no manual effort and is done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is the mean absolute percentage error. The score can be set via the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fitted via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list of forecasters, `forecasters`. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`; it then behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12)
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows tuning whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows access to and tuning of attributes of nested objects like `TabularToSeriesAdaptor(StandardScaler())`. We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4), as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.Note: `scikit-learn` and `sktime` do not currently support conditional parameter sets (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: instead of predicting 36 steps ahead, we make 35 rolling predictions, since the first prediction is needed to start updating the weights.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformations.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 to 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where the value represents the value we observed and the timepoint the point in time at which we observed that value.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step-ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error, defined after the split below) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
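###Markdown
For reference, the sMAPE mentioned above is commonly defined, up to a percentage scaling factor, as $\mathrm{sMAPE}(y, \hat{y}) = \frac{1}{n}\sum_{t=1}^{n}\frac{2\,|y_t - \hat{y}_t|}{|y_t| + |\hat{y}_t|}$.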
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:
```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
BATS and TBATS are two other time series forecasting algorithms that are contained in sktime by means of wrapping the package [tbats](https://github.com/intive-DataScience/tbats).
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped_trend=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped_trend=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 100}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
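As a quick check, the inverse transformation should add the fitted linear trend back onto the residuals. This is a sketch assuming the fitted `transformer` from above exposes `inverse_transform`, as sktime's single-series transformers typically do:
```python
# adding the fitted linear trend back onto the residuals should
# (approximately) recover the original training series
y_back = transformer.inverse_transform(yt)
plot_series(y_train, y_back, labels=["y_train", "inverse-transformed residuals"]);
```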
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Online ForecastingFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make only 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sklearn.metrics import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped_trend=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped_trend=True, seasonal="multiplicative", sp=12
),
),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha:.0%} prediction intervals",
)
ax.legend();
###Output
_____no_output_____
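As a rough sanity check (a sketch reusing `y_test` and `pred_ints` from above), we can compute the empirical coverage of these intervals on the test set; for well-calibrated 95% intervals we would expect a value close to 0.95:
```python
# fraction of test points that fall inside the prediction intervals
inside = (y_test >= pred_ints["lower"]) & (y_test <= pred_ints["upper"])
print(inside.mean())
```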
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataFor this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airlinepassengers per month from 1949-1960.As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data, so we may compare forecasters against both types of model.
###Code
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_ys(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
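For reference, the sMAPE used throughout this notebook can be written as a short numpy function. This is a sketch of the usual definition; sktime's `smape_loss` may differ in small details such as the handling of zeros:
```python
import numpy as np

def smape(y_true, y_pred):
    # symmetric mean absolute percentage error, bounded between 0 and 2
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))
```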
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead.Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:```pythonfh = np.array([2, 5])  # 2nd and 5th step ahead``` ForecastingLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches.1. We always predict the last value observed (in the training series),2. We predict the last value observed in the same season.
###Code
# we can do that with a few lines of code
y_pred = np.repeat(y_train.iloc[-1], len(fh))
y_pred = pd.Series(y_pred, index=y_train.index[-1] + fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
# using sktime
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
plot_ys(y_train, y_test, y_last, labels=["y_train", "y_test", "y_last"]);
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
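For reference, the seasonal strategy above amounts to repeating the last observed season over the forecasting horizon. Here is a minimal pandas sketch, reusing `y_train` and `fh` from above (illustrative only, not sktime internals):
```python
import numpy as np
import pandas as pd

# repeat the last 12 observed monthly values to cover the 36-step horizon
last_season = y_train.iloc[-12:].to_numpy()
y_naive_seasonal = pd.Series(np.tile(last_season, 3), index=y_train.index[-1] + fh)
```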
###Markdown
But can I not just use scikit-learn?In principle, yes, but there are many pitfalls ... Pitfall 1: model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_ys(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage: > The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_ys(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: how to apply regression algorithms?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Reduction: from forecasting to regressionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
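Before looking at the M4 code below, here is a toy sketch of the windowing idea on a short array (all names are illustrative):
```python
import numpy as np

series = np.arange(6)  # [0, 1, 2, 3, 4, 5]
window_length = 3
# stack sliding windows as rows; the target is the value right after each window
X = np.stack([series[i : i + window_length] for i in range(len(series) - window_length)])
y_target = series[window_length:]
# X rows: [0 1 2], [1 2 3], [2 3 4]; targets: 3, 4, 5
```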
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(y.index.values, 10, len(fh))
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable. > Note also that these steps involve a number of implicit hyper-parameters:* the way you slice the time series into windows (e.g. the window length)* the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: how to generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task! To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is: * **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
###Code
from sktime.forecasting.model_selection import SlidingWindowSplitter
cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
print(input_window, output_window)
###Output
[0 1 2 3 4 5 6 7 8 9] [10]
[ 1 2 3 4 5 6 7 8 9 10] [11]
[ 2 3 4 5 6 7 8 9 10 11] [12]
[ 3 4 5 6 7 8 9 10 11 12] [13]
[ 4 5 6 7 8 9 10 11 12 13] [14]
[ 5 6 7 8 9 10 11 12 13 14] [15]
[ 6 7 8 9 10 11 12 13 14 15] [16]
[ 7 8 9 10 11 12 13 14 15 16] [17]
[ 8 9 10 11 12 13 14 15 16 17] [18]
[ 9 10 11 12 13 14 15 16 17 18] [19]
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
from sktime.forecasting.compose import RecursiveRegressionForecaster
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5,10,15,20,25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = RecursiveRegressionForecaster(regressor, window_length=15)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 15} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
from sktime.performance_metrics.forecasting import sMAPE
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE())
gscv.fit(y_train)
print(gscv.cv_results_)
###Output
{'mean_fit_time': array([4.7923193 , 7.03527498, 4.6533103 , 4.84656096, 4.54636288]), 'mean_score_time': array([1.39692378, 1.46415329, 1.09011745, 1.38976789, 0.58445573]), 'param_window_length': masked_array(data=[5, 10, 15, 20, 25],
mask=[False, False, False, False, False],
fill_value='?',
dtype=object), 'params': [{'window_length': 5}, {'window_length': 10}, {'window_length': 15}, {'window_length': 20}, {'window_length': 25}], 'mean_test_sMAPE': array([0.29032851, 0.261543 , 0.24161449, 0.24749638, 0.2379254 ]), 'rank_test_sMAPE': array([5, 4, 2, 3, 1])}
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green", label=f"{1 - alpha:.0%} prediction intervals")
plt.legend();
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from sktime.forecasting.all import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airlinepassengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where each value is the observation we made and each timepoint is the point in time at which we made it.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
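If needed, the same series can also be represented with a timestamp index; this is plain pandas, not sktime-specific:
```python
# convert the PeriodIndex to a DatetimeIndex
y_ts = y.to_timestamp()
y_ts.index
```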
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course, you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:```pythonfh = np.array([2, 5])  # 2nd and 5th step ahead``` Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points of the test set:
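Before that, note that the same 36-step horizon can also be written as an explicitly relative `ForecastingHorizon` (a small sketch; the absolute form follows in the next cell):
```python
import numpy as np

# the same horizon, specified relative to the end of the training series
# (ForecastingHorizon comes from the sktime.forecasting.all import above)
fh_relative = ForecastingHorizon(np.arange(1, 37), is_relative=True)
```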
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
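For instance, a minimal sketch of the chronological scikit-learn call; the sktime helper is then used in the next cell:
```python
from sklearn.model_selection import train_test_split

# no shuffling: the last 36 observations become the test set
y_train, y_test = train_test_split(y, test_size=36, shuffle=False)
```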
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(np.arange(len(y)), 10, len(fh))
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
/Users/mloning/Documents/Research/software/sktime/sktime/sktime/forecasting/exp_smoothing.py:100: FutureWarning: the 'damped'' keyword is deprecated, use 'damped_trend' instead
seasonal_periods=self.sp,
/Users/mloning/.conda/envs/sktime-dev/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:429: FutureWarning: After 0.13 initialization must be handled at model creation
FutureWarning,
/Users/mloning/.conda/envs/sktime-dev/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1116: FutureWarning: Setting use_boxcox during fit has been deprecated and will be removed after 0.13. It must be set during model initialization.
FutureWarning
/Users/mloning/.conda/envs/sktime-dev/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1136: FutureWarning: use_basinhopping is deprecated. Set optimization method using 'method'. This option will be removed after 0.13 is released.
FutureWarning,
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
/Users/mloning/Documents/Research/software/sktime/sktime/sktime/forecasting/exp_smoothing.py:100: FutureWarning: the 'damped'' keyword is deprecated, use 'damped_trend' instead
seasonal_periods=self.sp,
/Users/mloning/.conda/envs/sktime-dev/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:429: FutureWarning: After 0.13 initialization must be handled at model creation
FutureWarning,
/Users/mloning/.conda/envs/sktime-dev/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1116: FutureWarning: Setting use_boxcox during fit has been deprecated and will be removed after 0.13. It must be set during model initialization.
FutureWarning
/Users/mloning/.conda/envs/sktime-dev/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1136: FutureWarning: use_basinhopping is deprecated. Set optimization method using 'method'. This option will be removed after 0.13 is released.
FutureWarning,
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
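In principle, both could be placed in a single search grid. The sketch below assumes that the installed sktime version accepts a "direct" strategy, which may not hold; the cells below tune only `window_length`:
```python
# hypothetical grid over both reduction hyper-parameters
param_grid = {
    "window_length": [5, 10, 15],
    "strategy": ["recursive", "direct"],  # assumption: "direct" is supported
}
```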
###Code
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5,10,15,20,25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(regressor, window_length=15, strategy="recursive")
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 100}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE())
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_series(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
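Picking up the `update` mention above, here is a minimal sketch of a single update step, using the naive forecaster fitted in the previous cell; treat the exact arguments as an assumption. The Theta example announced above follows in the next cell:
```python
# pretend the first six test observations have just arrived
y_new = y_test.iloc[:6]
forecaster.update(y_new)             # move the forecaster's cutoff forward
y_next = forecaster.predict(fh=1)    # one-step-ahead forecast from the new cutoff
```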
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(ax.get_lines()[-1].get_xdata(), pred_ints["lower"], pred_ints["upper"],
alpha=0.2, color=ax.get_lines()[-1].get_c(), label=f"{1 - alpha:.0%} prediction intervals")
ax.legend();
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from sktime.forecasting.all import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airlinepassengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where each value is the observation we made and each timepoint is the point in time at which we made it.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course, you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:```pythonfh = np.array([2, 5])  # 2nd and 5th step ahead``` Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points of the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(np.arange(len(y)), 10, len(fh))
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be fitted with automated model selection, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimator' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5,10,15,20,25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(regressor, window_length=15, strategy="recursive")
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 100}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE())
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any trend or seasonal components into account, but we can easily specify a pipeline which first detrends the data. sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline. Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set. Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_series(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet. Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(ax.get_lines()[-1].get_xdata(), pred_ints["lower"], pred_ints["upper"],
alpha=0.2, color=ax.get_lines()[-1].get_c(), label=f"{(1 - alpha) * 100}% prediction intervals")
ax.legend();
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`. On binder, this should run out-of-the-box. To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment. To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for them. We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.6x), forecasting of multivariate time series is not a stable functionality, this is a priority roadmap item. Multivariate exogeneous time series are part of stable functionality. **Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
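As a minimal sketch of the expected input format (with made-up values, not the airline data), a univariate monthly series with a pandas `PeriodIndex` could be constructed as follows:

```python
import pandas as pd

# hypothetical univariate series: four monthly observations on a PeriodIndex
y_example = pd.Series(
    [112.0, 118.0, 132.0, 129.0],
    index=pd.period_range("2000-01", periods=4, freq="M"),
)
y_example.index  # PeriodIndex(['2000-01', ..., '2000-04'], dtype='period[M]')
```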
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future. The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step. At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the data: as discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
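For example, if the data lived in a flat file rather than in a bundled loader, one might construct the expected `pd.Series` roughly as follows (a sketch with a hypothetical file name and column names):

```python
import pandas as pd

# hypothetical CSV with columns "month" (e.g. "1949-01") and "passengers"
df = pd.read_csv("passengers.csv")
y = pd.Series(
    df["passengers"].values,
    index=pd.PeriodIndex(df["month"], freq="M"),
)
```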
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month. In another example, to predict only the second and fifth month ahead, one could write:
```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Using a `ForecastingHorizon` based forecasting horizon
The `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon. To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit`Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember a horizon that was already passed in `fit` and use it for prediction. The modified workflow to accommodate such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being
```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```
1.2.4 prediction intervals
`sktime` provides a unified interface to return prediction intervals when forecasting. This is possible directly in the `predict` function, by setting the `return_pred_int` argument to `True`. The `predict` method then returns a second output containing the prediction intervals. Not all forecasters are capable of returning prediction intervals, in which case an error will be raised. Obtaining prediction intervals can be done as part of any workflow involving `predict`, by adding the argument `return_pred_int` - below, we illustrate this by modifying the basic workflow in Section 1.2:
###Code
from sktime.forecasting.theta import ThetaForecaster
# simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y)
# setting return_pred_int argument to True; alpha determines percentiles
# intervals are lower = alpha/2-percentile, upper = (1-alpha/2)-percentile
alpha = 0.05 # 2.5%/97.5% prediction intervals
y_pred, y_pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
###Output
_____no_output_____
###Markdown
`y_pred_ints` is a `pandas.DataFrame` with columns `lower` and `upper`, and rows the indices for which forecasts were made (same as in `y_pred`). Each entry is the lower/upper (as per the column name) bound of the nominal alpha prediction interval for the index in the same row.
###Code
y_pred_ints
###Output
_____no_output_____
###Markdown
pretty-plotting the predictive interval forecasts:
###Code
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
NOTE: this should be turned into a one-liner, by moving this to `utils.plotting` - contributions are appreciated. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: note that this evaluation set-up determines how well a given algorithm would have performed on past data. Results are only insofar representative as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred)
# note: the FIRST argument is the ground truth, the SECOND argument are the forecasts
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
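For instance, one way to put the number above in context is to compute the same metric for an even simpler baseline on the same split - a sketch reusing the objects already defined in this section:

```python
# an even simpler baseline: repeat the last observed value for all horizons
naive_last = NaiveForecaster(strategy="last")
naive_last.fit(y_train)
y_pred_naive = naive_last.predict(fh)

# lower is better for MAPE; the seasonal naive forecast above should beat this
mean_absolute_percentage_error(y_test, y_pred_naive)
```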
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.6x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
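Relating to the note above on metrics that require the training set: a minimal sketch of passing `y_train` to such a metric (assuming `mean_absolute_scaled_error` is importable from `sktime.performance_metrics.forecasting`, as per the API reference):

```python
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error

# MASE scales the error by the in-sample naive forecast error,
# so the training series has to be passed explicitly
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)
```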
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed, but time has progressed; or, if computations take too long, and forecasts have to be queried.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update funtions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PR are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `autoETS` from `statsmodels`* `ARIMA` and `autoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrend` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.utils import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
pd.DataFrame(all_estimators("forecaster"), columns=["name", "class"])
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1. We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, autoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and autoETS. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="additive", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be fitted with automated model selection, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal (p, d, q) parameters:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only `pd.DatetimeIndex`. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting.* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction` which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (as `estimator_etc`), and the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped on separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
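As a variant of the example above, a sketch of the "direct" strategy; the assumption here (based on the horizon-in-`fit` pattern from Section 1.2.2) is that direct reduction needs the forecasting horizon already in `fit`, since it fits one regressor per horizon step:

```python
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction

regressor = KNeighborsRegressor(n_neighbors=1)
forecaster_direct = make_reduction(regressor, window_length=15, strategy="direct")
forecaster_direct.fit(y_train, fh=fh)  # fh passed in fit for the direct strategy
y_pred_direct = forecaster_direct.predict(fh)
```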
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transforms` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and other tuners) are constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
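As noted above, cross-validation constructors can be slotted in exchangeably. A minimal sketch (an addition, assuming `forecaster`, `param_grid` and `y_train` from the cell above) that uses an expanding window instead of a sliding window:
```python
from sktime.forecasting.model_selection import (
    ExpandingWindowSplitter,
    ForecastingGridSearchCV,
)

cv_expanding = ExpandingWindowSplitter(initial_window=int(len(y_train) * 0.8))
gscv_expanding = ForecastingGridSearchCV(
    forecaster, strategy="refit", cv=cv_expanding, param_grid=param_grid
)
# gscv_expanding.fit(y_train); gscv_expanding.best_params_
```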
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc requires no manual effort and is done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The score can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list of forecasters, `forecasters`. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`; the composite then behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
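As mentioned above, the tuned multiplexer can also be used in a rolling fashion via `update`. A minimal sketch (an addition; shown with a plain `NaiveForecaster` for brevity, but the same `update`/`predict` calls apply to `gscv`), assuming `y_train` and `y_test` from earlier cells:
```python
import numpy as np

from sktime.forecasting.naive import NaiveForecaster

f = NaiveForecaster(strategy="last", sp=12)
f.fit(y_train, fh=np.arange(1, 13))          # 12-step-ahead relative horizon
f.update(y_test[:12], update_params=False)   # roll the cutoff forward with new observations
y_pred_next = f.predict()                    # forecasts now start after the updated cutoff
```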
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows us to tune whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows us to access and tune attributes of nested objects like TabularToSeriesAdaptor(StandardScaler()). We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4); as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure (a brief sketch of this combination follows the code cell below).Note: `scikit-learn` and `sktime` do not support conditional parameter sets at present (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
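As a hedged sketch of the note above (an addition, assuming `pipe`, `cv` and `y_train` from the previous cell), the pipeline could itself be put up against another forecaster via `MultiplexForecaster`, so that the grid search selects over architecture as well:
```python
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.model_selection import ForecastingGridSearchCV
from sktime.forecasting.naive import NaiveForecaster

mux = MultiplexForecaster(forecasters=[("pipe", pipe), ("naive", NaiveForecaster())])
mux_param_grid = {"selected_forecaster": ["pipe", "naive"]}
gscv_mux = ForecastingGridSearchCV(forecaster=mux, param_grid=mux_param_grid, cv=cv)
# gscv_mux.fit(y_train); gscv_mux.best_params_
```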
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions since we need the first prediction to help update the weights, we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataFor this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airline passengers per month from 1949-1960.As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data, so we may compare forecasters against both types of model (a short sketch of the log-transform follows the next cell).
###Code
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
###Output
_____no_output_____
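A short sketch of the log-transform mentioned above (an addition, assuming `y` from the previous cell); the multiplicative seasonality of the raw series becomes approximately additive on the log scale:
```python
import numpy as np

y_log = np.log(y)  # additive version of the airline series
plot_ys(y_log);
```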
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_ys(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead.Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:```pythonfh = np.array([2, 5]) # 2nd and 5th step ahead``` ForecastingLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches.1. We always predict the last value observed (in the training series),2. We predict the last value observed in the same season.
###Code
# we can do that with a few lines of code
y_pred = np.repeat(y_train.iloc[-1], len(fh))
y_pred = pd.Series(y_pred, index=y_train.index[-1] + fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
# using sktime
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
plot_ys(y_train, y_test, y_last, labels=["y_train", "y_test", "y_last"]);
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
But can I not just use scikit-learn?In principle, yes, but many pitfalls ... Pitfall 1: model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_ys(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage: > The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_ys(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
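For comparison, a minimal sketch of the plain scikit-learn call mentioned above (an addition; `shuffle=False` keeps the temporal order):
```python
from sklearn.model_selection import train_test_split

y_train_sk, y_test_sk = train_test_split(y, test_size=36, shuffle=False)
plot_ys(y_train_sk, y_test_sk, labels=["y_train", "y_test"]);
```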
###Markdown
Pitfall 2: how to apply regression algorithms?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Reduction: from forecasting to regressionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(y.index.values, 10, len(fh))
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable. > Note also that these steps involve a number of implicit hyper-parameters:* the way you slice the time series into windows (e.g. the window length)* the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: how to generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task! To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is: * **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
###Code
from sktime.forecasting.model_selection import SlidingWindowSplitter
cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
print(input_window, output_window)
###Output
[0 1 2 3 4 5 6 7 8 9] [10]
[ 1 2 3 4 5 6 7 8 9 10] [11]
[ 2 3 4 5 6 7 8 9 10 11] [12]
[ 3 4 5 6 7 8 9 10 11 12] [13]
[ 4 5 6 7 8 9 10 11 12 13] [14]
[ 5 6 7 8 9 10 11 12 13 14] [15]
[ 6 7 8 9 10 11 12 13 14 15] [16]
[ 7 8 9 10 11 12 13 14 15 16] [17]
[ 8 9 10 11 12 13 14 15 16 17] [18]
[ 9 10 11 12 13 14 15 16 17 18] [19]
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
/Users/Hongyi/Documents/GitHub/sktime/sktime/forecasting/exp_smoothing.py:99: FutureWarning: the 'damped'' keyword is deprecated, use 'damped_trend' instead
seasonal_periods=self.sp,
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:429: FutureWarning: After 0.13 initialization must be handled at model creation
FutureWarning,
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1116: FutureWarning: Setting use_boxcox during fit has been deprecated and will be removed after 0.13. It must be set during model initialization.
FutureWarning
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1136: FutureWarning: use_basinhopping is deprecated. Set optimization method using 'method'. This option will be removed after 0.13 is released.
FutureWarning,
###Markdown
The exponential smoothing state space model can also be fitted automatically, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True,sp=12,n_jobs=-1,allow_multiplicative_trend=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
/Users/Hongyi/Documents/GitHub/sktime/sktime/forecasting/exp_smoothing.py:99: FutureWarning: the 'damped'' keyword is deprecated, use 'damped_trend' instead
seasonal_periods=self.sp,
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:429: FutureWarning: After 0.13 initialization must be handled at model creation
FutureWarning,
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1116: FutureWarning: Setting use_boxcox during fit has been deprecated and will be removed after 0.13. It must be set during model initialization.
FutureWarning
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1136: FutureWarning: use_basinhopping is deprecated. Set optimization method using 'method'. This option will be removed after 0.13 is released.
FutureWarning,
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
from sktime.forecasting.compose import RecursiveRegressionForecaster
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = RecursiveRegressionForecaster(regressor, window_length=15)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
from sktime.performance_metrics.forecasting import sMAPE
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE())
gscv.fit(y_train)
print(gscv.cv_results_)
###Output
{'mean_fit_time': array([5.38487506, 5.85001922, 5.85350299, 7.46346712, 6.55099392]), 'mean_score_time': array([0.62610698, 0.94912004, 1.32290769, 0.92151284, 1.7113502 ]), 'param_window_length': masked_array(data=[5, 10, 15, 20, 25],
mask=[False, False, False, False, False],
fill_value='?',
dtype=object), 'params': [{'window_length': 5}, {'window_length': 10}, {'window_length': 15}, {'window_length': 20}, {'window_length': 25}], 'mean_test_sMAPE': array([0.29819397, 0.26025257, 0.24936505, 0.25164789, 0.23979438]), 'rank_test_sMAPE': array([5, 4, 2, 3, 1], dtype=int32)}
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step-ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green", label=f"{100 * (1 - alpha)}% prediction intervals")
plt.legend();
###Output
/Users/Hongyi/Documents/GitHub/sktime/sktime/forecasting/exp_smoothing.py:99: FutureWarning: the 'damped'' keyword is deprecated, use 'damped_trend' instead
seasonal_periods=self.sp,
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:429: FutureWarning: After 0.13 initialization must be handled at model creation
FutureWarning,
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1116: FutureWarning: Setting use_boxcox during fit has been deprecated and will be removed after 0.13. It must be set during model initialization.
FutureWarning
/usr/local/lib/python3.7/site-packages/statsmodels/tsa/holtwinters/model.py:1136: FutureWarning: use_basinhopping is deprecated. Set optimization method using 'method'. This option will be removed after 0.13 is released.
FutureWarning,
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. `sktime` provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import ARIMA, AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformations.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where the value represents the value we observed and the timepoint the point in time at which we observed that value.We represent time series as a `pd.Series` where the index represents the timepoints. `sktime` supports pandas integer, period and timestamp indices. In this example, we have a period index:
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:```pythonimport numpy as npfh = np.array([2, 5]) # 2nd and 5th step ahead``` Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use `sktime`'s `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.`sktime` comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in `sktime`:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regression`sktime` provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasters`sktime` has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be fitted automatically, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In `sktime`, we interface [`pmdarima`](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
A single ARIMA model can also be manually configured.
###Code
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
BATS and TBATS are two other time series forecasting algorithms that are contained in `sktime` by means of wrapping the package [`tbats`](https://github.com/intive-DataScience/tbats).
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
`sktime` also provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook. Please note that `fbprophet` expects data with a time stamp of type `pd.DatetimeIndex`, so we have to convert the index type first:
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
from sktime.forecasting.fbprophet import Prophet
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
INFO:fbprophet:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.
INFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.
###Markdown
Composite model building`sktime` provides a modular API for composite model building for forecasting. EnsemblingLike `scikit-learn`, `sktime` provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped_trend=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped_trend=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 200}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.`sktime` provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Online ForecastingFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module, which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction that gives more weight to the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sklearn.metrics import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped_trend=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped_trend=True, seasonal="multiplicative", sp=12
),
),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. `sktime`'s interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Prediction intervals](section_1_2_4) * [1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 updating a forecaster with the update method](section_1_4_1) * [1.4.2 moving the "now" state without updating the model](section_1_4_2) * [1.4.3 walk-forward predictions on a batch of data](section_1_4_3) * [1.5 advanced evaluation worfklow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. Forecasters in sktime - main families](chapter2) * [2.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_1) * [2.2 ARIMA and autoARIMA](section_2_2) * [2.3 BATS and TBATS](section_2_3) * [2.4 Facebook prophet](section_2_4) * [2.5 State Space Model (Structural Time Series)](section_2_5) * [2.6 AutoArima from StatsForecast](section_2_6) * [3. 
Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2) * [3.4.3 simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for them.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.9x), forecasting of multivariate time series is a stable functionality, but not covered in this tutorial. Contributions to extend the tutorial are welcome.**Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the dataAs discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
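As a minimal sketch (not part of the original example; the values below are made up for illustration), a univariate series with a monthly `PeriodIndex` could be constructed from an in-memory `numpy.array` like this:

```python
import numpy as np
import pandas as pd

# hypothetical raw values, e.g., six monthly observations
values = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])

# wrap them in a pd.Series with a monthly PeriodIndex, the format sktime forecasters expect
y_custom = pd.Series(
    values,
    index=pd.period_range(start="1949-01", periods=len(values), freq="M"),
)
```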
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:

```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```

Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellFor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember a horizon that is already passed in `fit` and use it for prediction. The modified workflow that also accommodates such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also accept the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being

```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```

1.2.4 prediction intervals`sktime` provides a unified interface to return prediction intervals when forecasting. This is possible directly in the `predict` function, by setting the `return_pred_int` argument to `True`. The `predict` method then returns a second output containing the prediction intervals. Not all forecasters are capable of returning prediction intervals, in which case an error will be raised.Obtaining prediction intervals can be done as part of any workflow involving `predict`, by adding the argument `return_pred_int` - below, we illustrate this by modifying the basic workflow in Section 1.2:
###Code
from sktime.forecasting.theta import ThetaForecaster
# simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y)
# setting return_pred_int argument to True; alpha determines percentiles
# intervals are lower = alpha/2-percentile, upper = (1-alpha/2)-percentile
alpha = 0.05 # 2.5%/97.5% prediction intervals
y_pred, y_pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
###Output
_____no_output_____
###Markdown
`y_pred_ints` is a `pandas.DataFrame` with columns `lower` and `upper`, and rows the indices for which forecasts were made (same as in `y_pred`). Entries are lower/upper (as column name) bound of the nominal alpha predictive interval for the index in the same row.
###Code
y_pred_ints
###Output
_____no_output_____
###Markdown
pretty-plotting the predictive interval forecasts:
###Code
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
NOTE: this should be turned into a one-liner, by moving this to `utils.plotting` - contributions are appreciated. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: note that this evaluation set-up determines how well a given algorithm would have performed on past data. Results are representative only insofar as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred)
# note: the FIRST argument is the ground truth, the SECOND argument is the forecast
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
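For instance, an informal baseline comparison could look as follows. This is a sketch only, re-using `y_train`, `y_test`, `fh` and `y_pred` from the cells above; it does not replace the statistical testing mentioned in step 5:

```python
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error

# a simpler, non-seasonal "predict the last value" baseline
baseline = NaiveForecaster(strategy="last")
baseline.fit(y_train)
y_pred_baseline = baseline.predict(fh)

# lower is better: the seasonal naive forecast from above should beat this baseline
print(mean_absolute_percentage_error(y_test, y_pred))
print(mean_absolute_percentage_error(y_test, y_pred_baseline))
```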
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows us to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.9x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
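Before the one-cell summary, a brief illustration of the first note above, on metrics that require the training set. This is a hedged sketch; it assumes `mean_absolute_scaled_error` accepts the training series via a `y_train` keyword as described, so check the API reference for the exact signature:

```python
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error

# MASE scales the forecast error by the in-sample naive forecast error,
# which is why the training series is needed
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)
```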
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed, but time has progressed; or, if computations take too long, and forecasts have to be queried.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PR are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `autoETS` from `statsmodels`* `ARIMA` and `autoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrend` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.registry import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1.We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, autoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS.For example, to use exponential smoothing with an additive trend component and additive seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="additive", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Selection of the exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal pdq parameters automatically:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only `pd.DatetimeIndex`. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.5 State Space Model (Structural Time Series)We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.6 AutoARIMA from [StatsForecast](https://github.com/Nixtla/statsforecast)`sktime` interfaces `StatsForecast` for its `AutoARIMA` class models. `AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal pdq parameters automatically:
###Code
from sktime.forecasting.statsforecast import AutoARIMA
forecaster = AutoARIMA(period=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting.* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction` which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to the parameters of the `KNeighborsRegressor` (nested under the `estimator` key), and the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
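As a minimal sketch of one of the other strategies listed above (re-using the same `regressor` as before; note that the "direct" strategy fits one tabular model per step of the horizon, so the forecasting horizon must already be passed in `fit`):

```python
from sktime.forecasting.compose import make_reduction

# same regressor as above, but with the "direct" reduction strategy
forecaster_direct = make_reduction(regressor, window_length=15, strategy="direct")
forecaster_direct.fit(y_train, fh=fh)
y_pred_direct = forecaster_direct.predict(fh)
```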
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
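# nested parameters can be set analogously via set_params and the double-underscore notation,
# e.g. (a sketch - the exact keys depend on the step names chosen above):
# forecaster.set_params(**{"forecast__window_length": 12})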
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and the other tuners) is constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc. require no manual effort and are done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a scoring metric; for forecasting, the default is the mean absolute percentage error. The metric can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fitted via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list of forecasters, passed as the `forecasters` argument. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12)
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
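# combined with rolling updates via update (see Section 1.4), the tuned multiplexer could switch
# forecasters as new data arrives, e.g. (a sketch, where y_new denotes hypothetical new observations):
# gscv.update(y_new)
# y_pred_new = gscv.predict(fh)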
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows tuning whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` brings an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows us to access and tune attributes of nested objects like TabularToSeriesAdaptor(StandardScaler()). We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4); as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.Note: `scikit-learn` and `sktime` do not support conditional parameter sets at present (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite forecaster, `PredictionWeightedEnsembler`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict_single` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataFor this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airline passengers per month from 1949-1960.As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data, so we may compare forecasters against both types of model.
###Code
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
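# the additive series mentioned above could be obtained via a log-transform, e.g. (a sketch):
# y_log = np.log(y)
# note: the examples below continue to work with the original series y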
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_ys(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead.Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:
```python
fh = np.array([2, 5])  # 2nd and 5th step ahead
```
ForecastingLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches.1. We always predict the last value observed (in the training series),2. We predict the last value observed in the same season.
###Code
# we can do that with a few lines of code
y_pred = np.repeat(y_train.iloc[-1], len(fh))
y_pred = pd.Series(y_pred, index=y_train.index[-1] + fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
# using sktime
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
plot_ys(y_train, y_test, y_last, labels=["y_train", "y_test", "y_last"]);
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
But can I not just use scikit-learn?In principle, yes, but many pitfalls ... Pitfall 1: model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_ys(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage: > The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_ys(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: how to apply regression algorithms?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Reduction: from forecasting to regressionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(y.index.values, 10, len(fh))
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable. > Note also that these steps involve a number of implicit hyper-parameters:* the way you slice the time series into windows (e.g. the window length)* the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: how to generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task! To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is: * **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
###Code
from sktime.forecasting.model_selection import SlidingWindowSplitter
cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
print(input_window, output_window)
###Output
[0 1 2 3 4 5 6 7 8 9] [10]
[ 1 2 3 4 5 6 7 8 9 10] [11]
[ 2 3 4 5 6 7 8 9 10 11] [12]
[ 3 4 5 6 7 8 9 10 11 12] [13]
[ 4 5 6 7 8 9 10 11 12 13] [14]
[ 5 6 7 8 9 10 11 12 13 14] [15]
[ 6 7 8 9 10 11 12 13 14 15] [16]
[ 7 8 9 10 11 12 13 14 15 16] [17]
[ 8 9 10 11 12 13 14 15 16 17] [18]
[ 9 10 11 12 13 14 15 16 17 18] [19]
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
from sktime.forecasting.compose import RecursiveRegressionForecaster
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5,10,15,20,25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = RecursiveRegressionForecaster(regressor, window_length=15)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 15} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
from sktime.performance_metrics.forecasting import sMAPE
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE())
gscv.fit(y_train)
print(gscv.cv_results_)
###Output
{'mean_fit_time': array([4.7923193 , 7.03527498, 4.6533103 , 4.84656096, 4.54636288]), 'mean_score_time': array([1.39692378, 1.46415329, 1.09011745, 1.38976789, 0.58445573]), 'param_window_length': masked_array(data=[5, 10, 15, 20, 25],
mask=[False, False, False, False, False],
fill_value='?',
dtype=object), 'params': [{'window_length': 5}, {'window_length': 10}, {'window_length': 15}, {'window_length': 20}, {'window_length': 25}], 'mean_test_sMAPE': array([0.29032851, 0.261543 , 0.24161449, 0.24749638, 0.2379254 ]), 'rank_test_sMAPE': array([5, 4, 2, 3, 1])}
###Markdown
DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
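# for a single update step (rather than a full walk-forward pass), the update method could be used,
# e.g. (a sketch, where y_new denotes hypothetical newly observed data):
# forecaster.update(y_new)
# forecaster.predict(fh=1)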
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green", label=f"{100 * (1 - alpha):.0f}% prediction intervals")
plt.legend();
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Prediction intervals](section_1_2_4) * [1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 updating a forecaster with the update method](section_1_4_1) * [1.4.2 moving the "now" state without updating the model](section_1_4_2) * [1.4.3 walk-forward predictions on a batch of data](section_1_4_3) * [1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. Forecasters in sktime - main families](chapter2) * [2.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_1) * [2.2 ARIMA and autoARIMA](section_2_2) * [2.3 BATS and TBATS](section_2_3) * [2.4 Facebook prophet](section_2_4) * [2.5 State Space Model (Structural Time Series)](section_2_5) * [3. 
Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptionalPassthrough](section_3_4_2) * [3.4.3 simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for them.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.6x), forecasting of multivariate time series is not a stable functionality, this is a priority roadmap item. Multivariate exogeneous time series are part of stable functionality. **Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
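# more generally, a univariate series with a monthly period index could be constructed directly,
# e.g. (a small illustrative sketch with made-up values):
# y_example = pd.Series(
#     [266.0, 145.9, 183.1],
#     index=pd.PeriodIndex(["1991-01", "1991-02", "1991-03"], freq="M"),
# )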
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the dataas discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:
```python
import numpy as np
fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 3.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember the horizon when it is already passed in `fit`, and use it for prediction. The modified workflow that also allows for such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being
```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```
1.2.4 prediction intervals`sktime` provides a unified interface to return prediction intervals when forecasting. This is possible directly in the `predict` function, by setting the `return_pred_int` argument to `True`. The `predict` method then returns a second output containing the prediction intervals. Not all forecasters are capable of returning prediction intervals, in which case an error will be raised.Obtaining prediction intervals can be done as part of any workflow involving `predict`, by adding the argument `return_pred_int` - below, we illustrate this by modifying the basic workflow in Section 1.2:
###Code
from sktime.forecasting.theta import ThetaForecaster
# simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y)
# setting return_pred_int argument to True; alpha determines percentiles
# intervals are lower = alpha/2-percentile, upper = (1-alpha/2)-percentile
alpha = 0.05 # 2.5%/97.5% prediction intervals
y_pred, y_pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
###Output
_____no_output_____
###Markdown
`y_pred_ints` is a `pandas.DataFrame` with columns `lower` and `upper`, and rows the indices for which forecasts were made (same as in `y_pred`). Entries are lower/upper (as column name) bound of the nominal alpha predictive interval for the index in the same row.
###Code
y_pred_ints
###Output
_____no_output_____
###Markdown
pretty-plotting the predictive interval forecasts:
###Code
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
    label=f"{100 * (1 - alpha):.0f}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
NOTE: this should be turned into a one-liner, by moving this to `utils.plotting` - contributions are appreciated. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are representative only insofar as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred)
# note: the FIRST argument is the ground truth, the SECOND argument is the forecast
# the order matters for most metrics in general
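# (aside, sketch) some metrics, e.g. mean_absolute_scaled_error, additionally
# require the training series for evaluation; it is passed via the y_train keyword
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)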
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
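For illustration, a minimal sketch of such a comparison, reusing `y_train`, `y_test`, `y_pred` and `fh` from the cells above; the contender chosen here (a non-seasonal naive forecast) is an assumption for illustration only:

```python
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error

# a simple contender: last-value forecast without seasonal periodicity
contender = NaiveForecaster(strategy="last")
contender.fit(y_train)
y_pred_contender = contender.predict(fh)

# lower is better for MAPE, so the forecaster with the smaller value is preferred
mean_absolute_percentage_error(y_test, y_pred), mean_absolute_percentage_error(y_test, y_pred_contender)
```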
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.6x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed but time has progressed, or if computations take too long and forecasts have to be queried.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
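# (sketch) the forecaster's internal "now" state has still moved to the latest observed time point:
forecaster.cutoff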
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PRs are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: playback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
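# (sketch) aggregate statistical performance over the backtest folds can be summarised
# directly from the data frame returned by `evaluate`; filtering for the "test_" prefix
# avoids assuming the exact metric column name
df.filter(like="test_").mean()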
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `AutoETS` from `statsmodels`* `ARIMA` and `AutoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrend` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.registry import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1.We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, autoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality on the airline data set, we can write the following.Note that since this is monthly data, a good choice for seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal pdq parameters automatically:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only pd.DatetimeIndex.Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
2.5 State Space Model (Structural Time Series)We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting.* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fit the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction` which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (as `estimator_etc`), and the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped on separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
###Code
forecaster.get_params()
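# (sketch) parameters can be set analogously via set_params,
# e.g. changing the window length of the reduction strategy:
forecaster.set_params(window_length=12)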
###Output
_____no_output_____
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and other tuners) are constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc requires no manual effort and is done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The score can be set using the `score` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` argument after the forecaster has been fit via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list `forecasters`, of forecasters. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows tuning whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows us to access and tune attributes of nested objects like TabularToSeriesAdaptor(StandardScaler()). We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4); as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.Note: `scikit-learn` and `sktime` do not support conditional parameter sets currently (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`. On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for it.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.6x), forecasting of multivariate time series is not stable functionality; this is a priority roadmap item. Multivariate exogeneous time series are part of stable functionality. **Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
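# (sketch) series for forecasting can also be constructed directly, e.g. as a
# pandas Series with a monthly PeriodIndex (any supported pandas index works):
y_toy = pd.Series([1.0, 2.0, 3.0], index=pd.period_range("2000-01", periods=3, freq="M"))
y_toy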
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the dataAs discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:

```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```

Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
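# (sketch) a relative horizon can also be constructed directly from integer steps ahead:
fh_relative = ForecastingHorizon([1, 2, 3], is_relative=True)
fh_relative.to_absolute(cutoff)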
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 3.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit`Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember the horizon when already passed in `fit` for prediction. The modified workflow to allow for such forecasters in addition is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also accept the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being

```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```

1.2.4 prediction intervals`sktime` provides a unified interface to return prediction intervals when forecasting. This is possible directly in the `predict` function, by setting the `return_pred_int` argument to `True`. The `predict` method then returns a second output, the prediction intervals. Not all forecasters are capable of returning prediction intervals, in which case an error will be raised.Obtaining prediction intervals can be done as part of any workflow involving `predict`, by adding the argument `return_pred_int` - below, we illustrate this by modifying the basic workflow in Section 1.2:
###Code
from sktime.forecasting.theta import ThetaForecaster
# simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y)
# setting return_pred_int argument to True; alpha determines percentiles
# intervals are lower = alpha/2-percentile, upper = (1-alpha/2)-percentile
alpha = 0.05 # 2.5%/97.5% prediction intervals
y_pred, y_pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
###Output
_____no_output_____
###Markdown
`y_pred_ints` is a `pandas.DataFrame` with columns `lower` and `upper`, and rows the indices for which forecasts were made (same as in `y_pred`). Entries are lower/upper (as column name) bound of the nominal alpha predictive interval for the index in the same row.
###Code
y_pred_ints
###Output
_____no_output_____
###Markdown
pretty-plotting the predictive interval forecasts:
###Code
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
NOTE: this should be turned into a one-liner, by moving this to `utils.plotting` - contributions are appreciated. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are only representative insofar as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred)
# note: the FIRST argument is the ground truth, the SECOND argument is the forecast
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
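For reference, a common definition of the (non-symmetric) MAPE used above, for ground truth $y_t$ and forecasts $\hat{y}_t$ over $n$ forecast points, is the following (conventions differ on whether the result is additionally multiplied by 100):

$$\mathrm{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$$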
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogenous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.6x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
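As a quick illustration of the first NOTE above, a scale-dependent metric such as MASE additionally takes the training series via the `y_train` argument - a minimal sketch, reusing `y_train`, `y_test` and `y_pred` from the cells above:
```python
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error

# MASE scales the forecast errors by the in-sample error of a naive forecast,
# which is why the training series has to be supplied as well
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)
```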
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed but time has progressed, or if computations take too long and forecasts have to be queried.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PRs are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: playback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `AutoETS` from `statsmodels`* `ARIMA` and `AutoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrendForecaster` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.utils import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
pd.DataFrame(all_estimators("forecaster"), columns=["name", "class"])
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1.We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, AutoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS. For example, to use exponential smoothing with an additive trend component and additive seasonality on the airline data set, we can write the following.Note that since this is monthly data, a good choice for seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="additive", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Exponential smoothing in state space form can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and AutoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that selects the optimal (p, d, q) parameters:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only `pd.DatetimeIndex`.Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting.* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction` which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (via nested `estimator__` parameters), and the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
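For example, a reducer using the "direct" strategy can be constructed analogously - a minimal sketch, reusing `regressor`, `y_train` and `fh` from above (note, as an assumption here, that direct reducers need the forecasting horizon already in `fit`, since one model is fitted per forecast step):
```python
# sketch: one regressor per step of the forecasting horizon ("direct" strategy)
forecaster_direct = make_reduction(regressor, window_length=15, strategy="direct")
forecaster_direct.fit(y_train, fh=fh)
y_pred_direct = forecaster_direct.predict()
```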
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and other tuners) are constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc requires no manual effort and is done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The score can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list `forecasters`, of forecasters. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.3 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we do not predict 36 steps ahead; instead, we make 35 predictions, since the first observation of the test set is needed to update the weights.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
from warnings import simplefilter
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import (
EnsembleForecaster,
ReducedRegressionForecaster,
TransformedTargetForecaster,
)
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
temporal_train_test_split,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.forecasting.theta import ThetaForecaster
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
from sktime.transformers.series.detrend import Deseasonalizer, Detrender
from sktime.utils.plotting import plot_series
simplefilter("ignore", FutureWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataTo start, we use the Box-Jenkins univariate airline data set, which shows the number of international airline passengers per month from 1949 - 1960.
###Code
y = load_airline()
plot_series(y);
###Output
_____no_output_____
###Markdown
A time series consists of a sequence of timepoint-value pairs, where the value represents the value we observed and the timepoint the point in time at which we observed that value.We represent time series as a `pd.Series` where the index represents the timepoints. sktime supports pandas integer, period and timestamp indices. In this example, we have a period index:
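As a minimal sketch of such an index, a monthly series with a pandas `PeriodIndex` can be constructed by hand (the values below are the first few airline observations, used purely for illustration):
```python
import pandas as pd

# a tiny monthly series with a PeriodIndex, the index type used by the airline data
idx = pd.period_range("1949-01", periods=3, freq="M")
pd.Series([112.0, 118.0, 132.0], index=idx)
```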
###Code
y.index
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy; a common definition is given below.We can split the data as follows:
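As a side note before splitting: a common definition of the sMAPE, for ground truth $y_t$ and forecasts $\hat{y}_t$ over $n$ test points, is the following (conventions differ on whether the result is multiplied by 100):

$$\mathrm{sMAPE} = \frac{1}{n}\sum_{t=1}^{n}\frac{2\,\lvert y_t - \hat{y}_t\rvert}{\lvert y_t\rvert + \lvert\hat{y}_t\rvert}$$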
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. Relative forecasting horizonOne of the simplest ways is to define a `np.array` with the steps ahead that you want to predict relative to the end of the training series.
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead. Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:
```python
fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Absolute forecasting horizonAlternatively, we can specify the forecasting horizon using the absolute time points we want to predict. In order to do that, we need to use sktime's `ForecastingHorizon` class. This way, we can simply create the forecasting horizon from the time points from the test set:
###Code
fh = ForecastingHorizon(y_test.index, is_relative=False)
fh
###Output
_____no_output_____
###Markdown
Generating forecastsLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches. Predicting the last value
###Code
# using sktime
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Predicting the last value of the same season
###Code
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Why not just use scikit-learn?You may wonder why we do not simply use scikit-learn for forecasting. Isn't forecasting in the end just a regression problem?In principle, yes. But scikit-learn is not designed for solving forecasting tasks, so beware of the pitfalls! Pitfall 1: Model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_series(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage:> The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
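A minimal sketch of that non-shuffling split (note that the default test proportions of the two functions may differ):
```python
from sklearn.model_selection import train_test_split

# keep the temporal order of the series by disabling shuffling
y_train, y_test = train_test_split(y, shuffle=False)
```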
###Code
y_train, y_test = temporal_train_test_split(y)
plot_series(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: How exactly do we apply regression algorithms to a forecasting task?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Key idea: ReductionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.We could write some code to do that, as for example in the [M4 competition](https://github.com/Mcompetitions/M4-methods):
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets.
Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num) :]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features]
# (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values,
# to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(
np.arange(len(y)), 10, len(fh)
)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Here we show the generated windows expressed as integer indices:
###Code
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable.> Note also that these steps involve a number of implicit hyper-parameters:> * the way you slice the time series into windows (e.g. the window length)> * the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: Given a fitted regression algorithm, how can we generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh) :])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task!To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model building
sktime provides a modular API for composite model building for forecasting.
Ensembling
Like scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
forecaster = EnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Tuning
In the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
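For example, both reduction hyper-parameters could in principle be placed in the same grid (a sketch only; whether a "direct" strategy is available depends on the installed sktime version):
```python
# a hypothetical grid over both reduction hyper-parameters
param_grid = {
    "window_length": [5, 10, 15],
    "strategy": ["recursive", "direct"],
}
```
Below, we tune the window length only: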
###Code
forecaster = ReducedRegressionForecaster(
regressor=regressor, window_length=15, strategy="recursive"
)
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window,
# and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
Using scikit-learn's `GridSearchCV`, we can tune regressors imported from scikit-learn, in addition to tuning `window_length`.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_estimators' hyperparameter of RandomForestRegressor from scikit-learn
regressor_param_grid = {"n_estimators": [100, 200, 300]}
forecaster_param_grid = {"window_length": [5, 10, 15, 20, 25]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(RandomForestRegressor(), param_grid=regressor_param_grid)
forecaster = ReducedRegressionForecaster(
regressor, window_length=15, strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
print(gscv.best_params_, gscv.best_forecaster_.regressor_.best_params_)
###Output
{'window_length': 25} {'n_estimators': 300}
###Markdown
To access performance on a particular metric during tuning, we can use the `scoring` argument of `ForecastingGridSearchCV`.
###Code
gscv = ForecastingGridSearchCV(
forecaster, cv=cv, param_grid=forecaster_param_grid, scoring=sMAPE()
)
gscv.fit(y_train)
pd.DataFrame(gscv.cv_results_)
###Output
_____no_output_____
###Markdown
Detrending
Note that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.
sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Pipelining
Let's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
ReducedRegressionForecaster(
regressor=regressor, window_length=12, strategy="recursive"
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.
Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts.
Online Forecasting
For model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module, which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.
Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sklearn.metrics import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
(
"holt",
ExponentialSmoothing(
trend="add", damped=False, seasonal="multiplicative", sp=12
),
),
(
"damped",
ExponentialSmoothing(
trend="add", damped=True, seasonal="multiplicative", sp=12
),
),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method.
Prediction intervals
So far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.
Here, we use the Theta forecasting algorithm:
###Code
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
pred_ints["lower"],
pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
Summary
As we have seen, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.
* sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon.
* sktime has a number of statistical forecasting algorithms, based on implementations in statsmodels, for example exponential smoothing with trend and seasonality components.
Useful resources
* For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study.
* For a good introduction to forecasting, see [Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and practice. OTexts, 2018](https://otexts.com/fpp2/).
* For comparative benchmarking studies/forecasting competitions, see the [M4 competition](https://www.sciencedirect.com/science/article/pii/S0169207019301128) and the currently running [M5 competition](https://www.kaggle.com/c/m5-forecasting-accuracy/overview).
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.
On binder, this should run out-of-the-box.
To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.
To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktime
In forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.
`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.
**Section 1** provides an overview of common forecasting workflows supported by `sktime`.
**Section 2** discusses the families of forecasters available in `sktime`.
**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.
**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.
Further references:
* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)
* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study.
Table of Contents
* [1. Basic forecasting workflows](chapter1)
  * [1.1 Data container format](section_1_1)
  * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2)
    * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1)
    * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2)
    * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3)
    * [1.2.4 Multivariate Forecasters](section_1_2_4)
    * [1.2.5 Prediction intervals and quantile forecasts](section_1_2_5)
  * [1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3)
    * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1)
    * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2)
  * [1.4 advanced deployment workflow: rolling updates & forecasts](section_1_4)
    * [1.4.1 updating a forecaster with the update method](section_1_4_1)
    * [1.4.2 moving the "now" state without updating the model](section_1_4_2)
    * [1.4.3 walk-forward predictions on a batch of data](section_1_4_3)
  * [1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5)
* [2. Forecasters in sktime - main families](chapter2)
  * [2.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_1)
  * [2.2 ARIMA and autoARIMA](section_2_2)
  * [2.3 BATS and TBATS](section_2_3)
  * [2.4 Facebook prophet](section_2_4)
  * [2.5 State Space Model (Structural Time Series)](section_2_5)
  * [2.6 AutoArima from StatsForecast](section_2_6)
* [3. Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3)
  * [3.1 Reduction: from forecasting to regression](section_3_1)
  * [3.2 Pipelining, detrending and deseasonalization](section_3_2)
    * [3.2.1 The basic forecasting pipeline](section_3_2_1)
    * [3.2.2 The Detrender as pipeline component](section_3_2_2)
    * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3)
  * [3.3 Parameter tuning](section_3_3)
    * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1)
    * [3.3.2 Tuning of complex composites](section_3_3_2)
    * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3)
  * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4)
    * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1)
    * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2)
    * [3.4.3 simple ensembling strategies](section_3_4_3)
    * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4)
* [4. Extension guide - implementing your own forecaster](chapter4)
* [5. Summary](chapter5)
package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows
This section explains the basic forecasting workflows, and key interface points for them. We cover the following four workflows:
* basic deployment workflow: batch fitting and forecasting
* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations
* advanced deployment workflow: fitting and rolling updates/forecasts
* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes
1.1 Data container format
All workflows make common assumptions on the input data format. `sktime` uses `pandas` for representing time series:
* `pd.Series` for univariate time series and sequences
* `pd.DataFrame` for multivariate time series and sequences
The `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.
NOTE: at current time (v0.9x), forecasting of multivariate time series is a stable functionality, but not covered in this tutorial. Contributions to extend the tutorial are welcome.
**Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
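For illustration, a univariate series with a monthly `PeriodIndex` can be constructed directly (a minimal sketch with made-up values):
```python
import pandas as pd

# a univariate series with a monthly period index (values are made up)
y_example = pd.Series(
    [112.0, 118.0, 132.0, 129.0],
    index=pd.period_range("1949-01", periods=4, freq="M"),
)
```
The airline data set used below comes in exactly this form, with a monthly `PeriodIndex`.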
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`. `sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.
NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`.
1.2 Basic deployment workflow - batch fitting and forecasting
The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future. The steps in this workflow are as follows:
1. preparation of the data
2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.
3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.
4. fitting the forecaster to the data, using the forecaster's `fit` method
5. making a forecast, using the forecaster's `predict` method
The below first outlines the vanilla variant of the basic deployment workflow, step-by-step. At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following).
step 1 - preparation of the data
As discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
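For example, loading a univariate series from a CSV file into this format might look as follows (file name and column names are hypothetical):
```python
import pandas as pd

# hypothetical CSV with columns "date" and "passengers"
df = pd.read_csv("airline.csv", parse_dates=["date"])
y_csv = df.set_index("date")["passengers"].to_period("M")
```
In this tutorial, we simply use the bundled airline data set instead: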
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon
Now we need to specify the forecasting horizon and pass that to our forecasting algorithm. There are two main ways:
* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.
* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.
Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.
`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`.
using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month. In another example, to predict only the second and fifth month ahead, one could write:
```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```
Using a `ForecastingHorizon` based forecasting horizon
The `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.
`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.
To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithm
To make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.
For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen data
Now the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecasts
Finally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshell
For convenience, we present the basic deployment workflow in one cell. This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit`
Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember the horizon when already passed in `fit` for prediction. The modified workflow to accommodate such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous data
Many forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally). The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being
```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```
1.2.4. multivariate forecasting
Some forecasters in sktime support multivariate forecasts. Some examples of multivariate forecasters are: `MultiplexForecaster`, `EnsembleForecaster`, `TransformedTargetForecaster` etc. In order to determine whether a forecaster can be multivariate, one can look at the `scitype:y` in `tags`, which should be set to `multivariate` or `both`. To display the complete list of multivariate forecasters, search for forecasters with 'multivariate' or 'both' tag value for the tag 'scitype:y', as follows:
###Code
from sktime.registry import all_estimators
for forecaster in all_estimators(filter_tags={"scitype:y": ["multivariate", "both"]}):
print(forecaster[0])
###Output
_____no_output_____
###Markdown
Below is an example of the general workflow of the multivariate `ColumnEnsembleForecaster`, using the longley dataset from `sktime.datasets`. The workflow is the same as for univariate forecasters, but the input has more than one variable (column).
###Code
from sktime.datasets import load_longley
from sktime.forecasting.compose import ColumnEnsembleForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.trend import PolynomialTrendForecaster
_, y = load_longley()
y = y.drop(columns=["UNEMP", "ARMED", "POP"])
forecasters = [
("trend", PolynomialTrendForecaster(), 0),
("ses", ExponentialSmoothing(trend="add"), 1),
]
forecaster = ColumnEnsembleForecaster(forecasters=forecasters)
forecaster.fit(y, fh=[1, 2, 3])
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
The input to the multivariate forecaster `y` is a `pandas.DataFrame` where each column is a variable.
###Code
y
###Output
_____no_output_____
###Markdown
The result of the multivariate forecaster `y_pred` is a `pandas.DataFrame` where columns are the predicted values for each variable. The variables in `y_pred` are the same as in `y`, the input to the multivariate forecaster.
###Code
y_pred
###Output
_____no_output_____
###Markdown
1.2.5 probabilistic forecasting: prediction intervals, quantile, variance, and distributional forecasts
`sktime` provides a unified interface to make probabilistic forecasts. The following methods are possibly available for probabilistic forecasts:
* `predict_interval` produces interval forecasts. Additionally to any `predict` arguments, an argument `coverage` (nominal interval coverage) must be provided.
* `predict_quantiles` produces quantile forecasts. Additionally to any `predict` arguments, an argument `alpha` (quantile values) must be provided.
* `predict_var` produces variance forecasts. This has the same arguments as `predict`.
* `predict_proba` produces full distributional forecasts. This has the same arguments as `predict`.
Not all forecasters are capable of returning probabilistic forecasts, but if a forecaster provides one kind of probabilistic forecast, it is also capable of returning the others. The list of forecasters with such capability can be queried by `registry.all_estimators`, searching for those where the `capability:pred_int` tag has value `True` (see the sketch below).
The basic workflow for probabilistic forecasts is similar to the basic forecasting workflow, with the difference that instead of `predict`, one of the probabilistic forecasting methods is used.
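As referenced above, a minimal sketch for listing forecasters with probabilistic forecast capability via the registry (the tag name follows the text above; the output depends on the installed version):
```python
from sktime.registry import all_estimators

# list all forecasters that advertise probabilistic forecast capability
for name, _ in all_estimators("forecaster", filter_tags={"capability:pred_int": True}):
    print(name)
```
With such a forecaster, the basic probabilistic workflow looks as follows: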
###Code
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.theta import ThetaForecaster
# until fit, identical with the simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y, fh=fh)
###Output
_____no_output_____
###Markdown
Now we present the different probabilistic forecasting methods. `predict_interval` - interval predictions `predict_interval` takes an argument `coverage`, which is a float (or list of floats), the nominal coverage of the prediction interval(s) queried. `predict_interval` produces symmetric prediction intervals, for example, a coverage of `0.9` returns a "lower" forecast at quantile `0.5 - coverage/2 = 0.05`, and an "upper" forecast at quantile `0.5 + coverage/2 = 0.95`.
###Code
coverage = 0.9
y_pred_ints = forecaster.predict_interval(coverage=coverage)
y_pred_ints
###Output
_____no_output_____
###Markdown
The returned `y_pred_ints` is a `pandas.DataFrame` with a column multi-index: the first level is the variable name from `y` in fit (or `Coverage` if no variable names were present), the second level is the coverage fraction for which intervals were computed, in the same order as in the input `coverage`; the third level contains the columns `lower` and `upper`. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the lower/upper (as per column name) bound of the nominal coverage predictive interval for the index in the same row.
pretty-plotting the predictive interval forecasts:
###Code
from sktime.utils import plotting
# also requires predictions
y_pred = forecaster.predict()
fig, ax = plotting.plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["Coverage"][coverage]["lower"],
y_pred_ints["Coverage"][coverage]["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{coverage}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
`predict_quantiles` - quantile forecasts
sktime offers `predict_quantiles` as a unified interface to return quantile values of predictions. Similar to `predict_interval`, `predict_quantiles` has an argument `alpha`, containing the quantile values being queried; `alpha` can be a `float`, or a `list of floats`.
###Code
y_pred_quantiles = forecaster.predict_quantiles(alpha=[0.275, 0.975])
y_pred_quantiles
###Output
_____no_output_____
###Markdown
`y_pred_quantiles`, the output of `predict_quantiles`, is a `pandas.DataFrame` with a two-level column multiindex. The first level is the variable name from `y` in fit (or `Quantiles` if no variable names were present), the second level are the quantile values (from `alpha`) for which quantile predictions were queried. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the quantile predictions for that variable, that quantile value, for the time index in the same row.
Remark, for clarity: quantile and (symmetric) interval forecasts can be translated into each other as follows.
**alpha < 0.5:** the alpha-quantile prediction is equal to the lower bound of a predictive interval with coverage = (0.5 - alpha) * 2
**alpha > 0.5:** the alpha-quantile prediction is equal to the upper bound of a predictive interval with coverage = (alpha - 0.5) * 2
For example, the 0.05-quantile forecast coincides with the lower bound of the interval with coverage (0.5 - 0.05) * 2 = 0.9.
`predict_var` - variance predictions
`predict_var` produces variance predictions:
###Code
y_pred_var = forecaster.predict_var()
y_pred_var
###Output
_____no_output_____
###Markdown
The format of the output `y_pred_var` is the same as for `predict`, except that this is always coerced to a `pandas.DataFrame`, and entries are not point predictions but variance predictions.
`predict_proba` - distribution predictions
To predict full predictive distributions, `predict_proba` can be used. As this returns `tensorflow` `Distribution` objects, the deep learning dependency set `dl` of `sktime` (which includes `tensorflow` and `tensorflow-probability` dependencies) must be installed.
###Code
y_pred_proba = forecaster.predict_proba()
y_pred_proba
###Output
_____no_output_____
###Markdown
Distributions returned by `predict_proba` are by default marginal at time points, not joint over time points. More precisely, the returned `Distribution` object is formatted and to be interpreted as follows:
* batch shape is 1D and same length as fh
* event shape is 1D, with length equal to number of variables being forecast
* i-th (batch) distribution is forecast for i-th entry of fh
* j-th (event) component is j-th variable, same order as y in `fit`/`update`
To return joint forecast distributions, the `marginal` parameter can be set to `False` (currently work in progress). In this case, a `Distribution` with 2D event shape `(len(fh), len(y))` is returned.
1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations
It is good practice to evaluate the statistical performance of a forecaster before deploying it, and to regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting. The basic evaluation workflow is as follows:
1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.
2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set
3. specifying a quantitative performance metric to compare the actual test set against predictions
4. computing the quantitative performance on the test set
5. testing whether this performance is statistically better than a chosen baseline performance
NOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).
NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are representative only insofar as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times.
**Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "How" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance).
step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_train
This is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test set
The next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:
* using the lean function interface, e.g., `mean_absolute_percentage_error`, which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`
* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signature
Casual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, and tuning over metric parameters (not covered in this tutorial).
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# note: the FIRST argument is the ground truth, the SECOND argument are the forecasts
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
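For example, a simple point of comparison on the same split is a non-seasonal naive baseline (a minimal sketch, reusing `y_train`, `y_test`, and `fh` from above):
```python
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error

# a non-seasonal "last value" baseline on the same train/test split
baseline = NaiveForecaster(strategy="last")
baseline.fit(y_train)
y_pred_baseline = baseline.predict(fh)
mean_absolute_percentage_error(y_test, y_pred_baseline, symmetric=False)
```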
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.
NOTE: the workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics.
step 5 - testing performance against benchmarks
In general, forecast performances should be quantitatively tested against benchmark performances. Currently (`sktime` v0.9x), this is a roadmap development item. Contributions are very welcome.
1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interface
For convenience, we present the basic batch forecast evaluation workflow in one cell. This cell uses the lean function metric interface.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interface
For convenience, we present the basic batch forecast evaluation workflow in one cell. This cell uses the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecasts
A common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods.
1.4.1 updating a forecaster with the `update` method
The `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step". After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data). The general pattern is as follows:
1. specify a forecasting strategy
2. specify a relative forecasting horizon
3. fit the forecaster to an initial batch of data using `fit`
4. make forecasts for the relative forecasting horizon, using `predict`
5. obtain new data; use `update` to ingest new data
6. make forecasts using `predict` for the updated data
7. repeat 5 and 6 as often as required
**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.
A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the model
In the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to a later point, for example if no new data was observed, but time has progressed; or, if computations take too long, and forecasts have to be queried.
The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions. If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data
`sktime` can also simulate the update/predict deployment mode with a full batch of data. This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation. The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PRs are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testing
To evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function. `evaluate` takes as arguments:
- a `forecaster` to be evaluated
- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`
- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome.
2. Forecasters in `sktime` - main families
`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface. The main classes that are currently stably supported are:
* `ExponentialSmoothing`, `ThetaForecaster`, and `autoETS` from `statsmodels`
* `ARIMA` and `autoARIMA` from `pmdarima`
* `BATS` and `TBATS` from `tbats`
* `PolynomialTrend` for forecasting polynomial trends
* `Prophet` which interfaces Facebook `prophet`
For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1). For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below. Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.registry import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1. We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, autoETS from `statsmodels`
`sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS. For example, to use exponential smoothing with an additive trend component and additive seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for the seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="additive", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA
`sktime` interfaces `pmdarima` for its ARIMA class models. For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an `ARIMA` variant that automatically selects the optimal (p, d, q) parameters:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only `pd.DatetimeIndex`. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.5 State Space Model (Structural Time Series)We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.6 AutoARIMA from [StatsForecast](https://github.com/Nixtla/statsforecast)`sktime` interfaces `StatsForecast` for its `AutoARIMA` class models. `AutoARIMA` is an automatically tuned `ARIMA` variant that selects the optimal (p, d, q) orders in a data-driven way:
###Code
from sktime.forecasting.statsforecast import StatsForecastAutoARIMA
forecaster = StatsForecastAutoARIMA(sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting. The meta-estimator is:* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts,* **adaptive**, in the sense that it adapts the scikit-learn estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model. **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction`, which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters. A small sketch of the rolling-window tabulation follows the next cell.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
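###Markdown
To make the rolling-window tabulation described above more tangible, one way to inspect the windows is via `sktime`'s `SlidingWindowSplitter` (introduced formally in Section 3.3). This is an illustration of the tabulation idea, not of the exact internals of `make_reduction`:
###Code
from sktime.forecasting.model_selection import SlidingWindowSplitter

# each printed pair is (indices of the input window, index of the target observation)
cv = SlidingWindowSplitter(window_length=10, fh=1)
for input_window, output_window in cv.split(y_train.iloc[:20]):
    print(input_window, output_window)
###Output
_____no_output_____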
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`; a short sketch follows the next cell). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (as `estimator__` etc.), and to the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
###Code
forecaster.get_params()
###Output
_____no_output_____
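###Markdown
As a brief counterpart sketch for `set_params`, nested parameters are set with the same double-underscore keys that `get_params` reports; the values below are chosen purely for illustration:
###Code
# set the window length of the reducer and the number of neighbours
# of the wrapped regressor via nested parameter keys
forecaster.set_params(window_length=12, estimator__n_neighbors=3)
forecaster.get_params()["window_length"]
###Output
_____no_output_____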
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit` is called; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and other tuners) is constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc requires no manual effort and is done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The metric can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list `forecasters`, of forecasters. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows tuning whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows us to access and tune attributes of nested objects like `TabularToSeriesAdaptor(StandardScaler())`. We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4); as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.Note: `scikit-learn` and `sktime` do not support conditional parameter sets at present (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite forecaster, `PredictionWeightedEnsembler`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 single-step-ahead predictions rather than one 36-step-ahead forecast, since the first prediction is needed to update the weights.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then, by fitting our forecasters and performing updates and prediction with the `update_predict_single` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study.In particular, you'll learn how to* use statistical models to make forecasts,* build composite machine learning models, including common techniques like reduction to regression, ensembling and pipelining. Preliminaries
###Code
import matplotlib.pyplot as plt
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataFor this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airline passengers per month from 1949-1960.As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data (a short sketch follows below), so we may compare forecasters against both types of model.
###Code
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
###Output
_____no_output_____
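###Markdown
As mentioned above, a log-transformed (approximately additive) version of the series can be created as follows - a minimal sketch, where the name `y_log` is ours:
###Code
# log-transform: the multiplicative seasonal pattern becomes approximately additive
y_log = np.log(y)
fig, ax = plot_ys(y_log)
ax.set(xlabel="Time", ylabel="Log of number of airline passengers");
###Output
_____no_output_____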
###Markdown
Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
ForecastingLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches.1. We always predict the last value observed (in the training series),2. We predict the last value observed in the same season.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="seasonal_last", sp=12)
forecaster.fit(y_train)
y_last_seasonal = forecaster.predict(fh)
smape_loss(y_last_seasonal, y_test)
plot_ys(y_train, y_test, y_last, y_last_seasonal,
labels=["y_train", "y_test", "last", "seasonal_last"]);
###Output
_____no_output_____
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Applying machine learning: reduction to regressionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: We first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window.sktime provides a meta-estimator for this approach, which is compatible with scikit-learn, so that we can use any scikit-learn regressor to solve our forecasting problem.
###Code
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=10, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
###Code
from sktime.forecasting.model_selection import SlidingWindowSplitter
cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
print(input_window, output_window)
###Output
[0 1 2 3 4 5 6 7 8 9] [10]
[ 1 2 3 4 5 6 7 8 9 10] [11]
[ 2 3 4 5 6 7 8 9 10 11] [12]
[ 3 4 5 6 7 8 9 10 11 12] [13]
[ 4 5 6 7 8 9 10 11 12 13] [14]
[ 5 6 7 8 9 10 11 12 13 14] [15]
[ 6 7 8 9 10 11 12 13 14 15] [16]
[ 7 8 9 10 11 12 13 14 15 16] [17]
[ 8 9 10 11 12 13 14 15 16 17] [18]
[ 9 10 11 12 13 14 15 16 17 18] [19]
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
You could of course also try to tune the regressor inside `ReducedRegressionForecaster` using scikit-learn's `GridSearchCV`. DetrendingNote that so far the reduction approach above does not take any seasonal or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have a `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green", label=f"{1 - alpha:.0%} prediction intervals")
plt.legend();
###Output
sMAPE: 0.09
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, either uncomment and run the below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Prediction intervals](section_1_2_4) * [1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 updating a forecaster with the update method](section_1_4_1) * [1.4.2 moving the "now" state without updating the model](section_1_4_2) * [1.4.3 walk-forward predictions on a batch of data](section_1_4_3) * [1.5 advanced evaluation worfklow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. Forecasters in sktime - main families](chapter2) * [2.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_1) * [2.2 ARIMA and autoARIMA](section_2_2) * [2.3 BATS and TBATS](section_2_3) * [2.4 Facebook prophet](section_2_4) * [3. 
Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2) * [3.4.3 simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for them.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container format All workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.6x), forecasting of multivariate time series is not a stable functionality; this is a priority roadmap item. Multivariate exogeneous time series are part of stable functionality. **Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
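###Markdown
If a data set is available in another in-memory format rather than being loaded from a file, a series with a supported index can be constructed directly with `pandas`. A minimal sketch with made-up values and a monthly period index (the variable names below are ours):
###Code
# constructing a univariate series with a monthly PeriodIndex from a numpy array
values = np.array([112.0, 118.0, 132.0, 129.0])  # made-up observations for illustration
index = pd.period_range("1949-01", periods=len(values), freq="M")
y_example = pd.Series(values, index=index)
y_example
###Output
_____no_output_____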
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the data As discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is the latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:
```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember a horizon that was already passed in `fit` and use it for prediction. The modified workflow that also accommodates such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
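###Markdown
The dummy `X` above is empty and only illustrates the interface. As a slightly more concrete sketch, exogeneous features could for instance be calendar indicators derived from the index - the construction and names below are ours, and the `NaiveForecaster` would ignore them anyway:
###Code
# hypothetical exogeneous features: month-of-year indicator columns built from the period index
X_example = pd.DataFrame({"month": y.index.month}, index=y.index)
X_example = pd.get_dummies(X_example, columns=["month"])
X_example.head()
###Output
_____no_output_____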
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being
```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```
1.2.4 prediction intervals`sktime` provides a unified interface to return prediction intervals when forecasting. This is possible directly in the `predict` function, by setting the `return_pred_int` argument to `True`. The `predict` method then returns a second output: the prediction intervals. Not all forecasters are capable of returning prediction intervals, in which case an error will be raised.Obtaining prediction intervals can be done as part of any workflow involving `predict`, by adding the argument `return_pred_int` - below, we illustrate this by modifying the basic workflow in Section 1.2:
###Code
from sktime.forecasting.theta import ThetaForecaster
# simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y)
# setting return_pred_int argument to True; alpha determines percentiles
# intervals are lower = alpha/2-percentile, upper = (1-alpha/2)-percentile
alpha = 0.05 # 2.5%/97.5% prediction intervals
y_pred, y_pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
###Output
_____no_output_____
###Markdown
`y_pred_ints` is a `pandas.DataFrame` with columns `lower` and `upper`, and rows the indices for which forecasts were made (same as in `y_pred`). Entries are lower/upper (as column name) bound of the nominal alpha predictive interval for the index in the same row.
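For example (a quick sketch using only the columns described above), the average width of the intervals can be computed directly:

```python
interval_width = y_pred_ints["upper"] - y_pred_ints["lower"]
interval_width.mean()
```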
###Code
y_pred_ints
###Output
_____no_output_____
###Markdown
pretty-plotting the predictive interval forecasts:
###Code
fig, ax = plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["lower"],
y_pred_ints["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{1 - alpha}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
NOTE: this should be turned into a one-liner, by moving this to `utils.plotting` - contributions are appreciated. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate the statistical performance of a forecaster before deploying it, and to regularly re-evaluate performance if it is in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are representative only insofar as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that take predicted and actual series and return a number. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error`, which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, and tuning over metric parameters (not covered in this tutorial).
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred)
# note: the FIRST argument is the ground truth, the SECOND argument are the forecasts
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
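To illustrate the baseline comparison, here is a minimal, hedged sketch of a Diebold-Mariano-style test against a plain naive baseline (this is not part of `sktime`; it uses squared-error loss differentials and ignores autocorrelation corrections for multi-step horizons):

```python
import numpy as np
from scipy import stats

from sktime.forecasting.naive import NaiveForecaster

# baseline forecasts from a plain (non-seasonal) naive forecaster
baseline = NaiveForecaster(strategy="last")
baseline.fit(y_train)
y_pred_base = baseline.predict(fh)

# loss differentials between the candidate forecasts and the baseline
d = (y_test - y_pred) ** 2 - (y_test - y_pred_base) ** 2
dm_stat = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
p_value = 2 * (1 - stats.norm.cdf(abs(dm_stat)))
dm_stat, p_value
```

A small p-value would indicate that the loss difference between the two forecasters is unlikely to be due to chance alone.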
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogenous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.6x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to a later point, for example if no new data was observed but time has progressed, or if computations take too long and forecasts have to be queried anyway.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
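# editor's sketch: the forecaster's "now" state (the cutoff described above) has
# moved forward, even though no model update computations were performed
forecaster.cutoff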
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PR are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn`-like re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
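# editor's sketch (not part of the original tutorial): aggregate the fold-wise error
# columns across backtest folds; column names vary by sktime version, so they are
# selected generically here
df.filter(like="test_").agg(["mean", "std"])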
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `AutoETS` from `statsmodels`* `ARIMA` and `AutoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrendForecaster` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.registry import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1.We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, AutoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality on the airline data set, we can write the following.Note that since this is monthly data, a good choice for the seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that determines the optimal (p, d, q) parameters for us:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only pd.DatetimeIndex.Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting. The meta-estimator is* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or the strategy used to generate forecasts,* **adaptive**, in the sense that it adapts the scikit-learn estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model. **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface, and contains the regressor as a parameter-accessible component. In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data. In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions.Below, the composite is constructed using the shorthand function `make_reduction`, which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (via keys of the form `estimator__<parameter>`), and to the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
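For illustration, a hedged sketch of the "direct" strategy (unlike "recursive", it requires the forecasting horizon already in `fit`):

```python
forecaster_direct = make_reduction(regressor, window_length=15, strategy="direct")
forecaster_direct.fit(y_train, fh=fh)
y_pred_direct = forecaster_direct.predict(fh)
```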
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
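# editor's sketch of the manual equivalent of the pipeline above: fit_transform the
# transformer, fit the forecaster on the transformed series, then inverse_transform
# the forecaster's predictions
deseasonalizer = Deseasonalizer(model="multiplicative", sp=12)
y_deseason = deseasonalizer.fit_transform(y_train)
y_pred_manual = deseasonalizer.inverse_transform(ARIMA().fit(y_deseason).predict(fh))
mean_absolute_percentage_error(y_test, y_pred_manual)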
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface-defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
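For example, a hedged sketch of setting a nested parameter via `set_params` (the exact key, assumed here to be `forecast__estimator__n_neighbors`, can be read off the `get_params()` output below):

```python
forecaster.set_params(**{"forecast__estimator__n_neighbors": 3})
```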
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (and other tuners) are constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc. require no manual effort and are done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The score can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list `forecasters`, of forecasters. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows us to tune whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows us to access and tune attributes of nested objects like `TabularToSeriesAdaptor(StandardScaler())`. We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4), as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.Note: `scikit-learn` and `sktime` do not currently support conditional parameter sets (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite forecaster, `PredictionWeightedEnsembler`, to keep track of the loss accumulated by each forecaster and create a prediction weighted towards the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since the first prediction is needed to update the weights; we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then by fitting our forecasters and performing updates and prediction with the `update_predict` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y_train)
y_pred = forecaster.update_predict(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test[1:], y_pred)
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, we're interested in using past data to make temporal forward predictions. sktime provides common statistical forecasting algorithms and tools for building composite machine learning models.For more details, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss the forecasting API in more detail and use it to replicate and extend the M4 study. Preliminaries
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.utils.plotting.forecasting import plot_ys
%matplotlib inline
###Output
_____no_output_____
###Markdown
DataFor this tutorial, we will use the famous Box-Jenkins airline data set, which shows the number of international airline passengers per month from 1949-1960.As well as using the original time series (which is a classic example of a *multiplicative* time series), we will create an *additive* time series by performing a log-transform on the original data, so we may compare forecasters against both types of model.
###Code
y = load_airline()
fig, ax = plot_ys(y)
ax.set(xlabel="Time", ylabel="Number of airline passengers");
###Output
_____no_output_____
###Markdown
Specifying the forecasting task Next we will define a forecasting task.* We will try to predict the last 3 years of data, using the previous years as training data. Each point in the series represents a month, so we should hold out the last 36 points as test data, and use a 36-step ahead forecasting horizon to evaluate forecasting performance.* We will use the sMAPE (symmetric mean absolute percentage error) to quantify the accuracy of our forecasts. A lower sMAPE means higher accuracy.We can split the data as follows:
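Before splitting, for reference, a minimal sketch of the sMAPE variant used here (one common definition; `sktime`'s `smape_loss` returns a fraction rather than a percentage, and exact scalings differ between sources):

```python
import numpy as np

def smape(y_true, y_pred):
    # symmetric MAPE as a fraction; multiply by 100 for a percentage
    return np.mean(2.0 * np.abs(y_true - y_pred) / (np.abs(y_true) + np.abs(y_pred)))
```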
###Code
y_train, y_test = temporal_train_test_split(y, test_size=36)
plot_ys(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
108 36
###Markdown
When we want to generate forecasts, we need to specify the forecasting horizon and pass that to our forecasting algorithm. We can specify the forecasting horizon as a simple numpy array of the steps ahead, relative to the end of the training series:
###Code
fh = np.arange(len(y_test)) + 1
fh
###Output
_____no_output_____
###Markdown
So here we're interested in predicting from the first to the 36th step ahead.Of course you could use other forecasting horizons. For example, to predict only the second and fifth step ahead, you could write:

```python
fh = np.array([2, 5])  # 2nd and 5th step ahead
```

ForecastingLike in scikit-learn, in order to make forecasts, we need to first specify (or build) a model, then fit it to the training data, and finally call predict to generate forecasts for the given forecasting horizon.sktime comes with several forecasting algorithms (or forecasters) and tools for composite model building. All forecasters share a common interface. Forecasters are trained on a single series of data and make forecasts for the provided forecasting horizon. Naïve baselinesLet's start with two naïve forecasting strategies which can serve as references for comparison of more sophisticated approaches:1. We always predict the last value observed (in the training series),2. We predict the last value observed in the same season.
###Code
# we can do that with a few lines of code
y_pred = np.repeat(y_train.iloc[-1], len(fh))
y_pred = pd.Series(y_pred, index=y_train.index[-1] + fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
# using sktime
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
y_last = forecaster.predict(fh)
plot_ys(y_train, y_test, y_last, labels=["y_train", "y_test", "y_last"]);
smape_loss(y_last, y_test)
forecaster = NaiveForecaster(strategy="seasonal_last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
smape_loss(y_pred, y_test)
###Output
_____no_output_____
###Markdown
But can I not just use scikit-learn?In principle, yes, but many pitfalls ... Pitfall 1: model validation
###Code
from sklearn.model_selection import train_test_split
y_train, y_test = train_test_split(y)
plot_ys(y_train.sort_index(), y_test.sort_index(), labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
This leads to leakage: > The data you are using to train a machine learning algorithm happens to have the information you are trying to predict.But `train_test_split(y, shuffle=False)` works, which is what `temporal_train_test_split(y)` does in sktime:
###Code
y_train, y_test = temporal_train_test_split(y)
plot_ys(y_train, y_test, labels=["y_train", "y_test"]);
###Output
_____no_output_____
###Markdown
Pitfall 2: how to apply regression algorithms?In order to use scikit-learn, we have to first transform the data into the required tabular format, then fit a regressor and finally generate forecasts. Reduction: from forecasting to regressionForecasting is often solved via regression. This approach is sometimes called reduction, because we reduce the forecasting task to the simpler but related task of tabular regression. This allows us to apply any regression algorithm to the forecasting problem.Reduction to regression works as follows: we first need to transform the data into the required tabular format. We can do this by cutting the training series into windows of a fixed length and stacking them on top of each other. Our target variable consists of the subsequent observation for each window (a toy illustration follows below).We could write some code to do that, as for example in the M4 competition:
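As a toy illustration of this windowing (an editor's sketch, independent of the M4 code in the next cell):

```python
import numpy as np

toy = np.array([1, 2, 3, 4, 5])
window = 3
X = np.stack([toy[i : i + window] for i in range(len(toy) - window)])
y_target = toy[window:]
# X is [[1, 2, 3], [2, 3, 4]] and y_target is [4, 5]
```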
###Code
# slightly modified code from the M4 competition
def split_into_train_test(data, in_num, fh):
"""
Splits the series into train and test sets. Each step takes multiple points as inputs
:param data: an individual TS
:param fh: number of out of sample points
:param in_num: number of input points for the forecast
:return:
"""
train, test = data[:-fh], data[-(fh + in_num):]
x_train, y_train = train[:-1], np.roll(train, -in_num)[:-in_num]
x_test, y_test = test[:-1], np.roll(test, -in_num)[:-in_num]
# x_test, y_test = train[-in_num:], np.roll(test, -in_num)[:-in_num]
# reshape input to be [samples, time steps, features] (N-NF samples, 1 time step, 1 feature)
x_train = np.reshape(x_train, (-1, 1))
x_test = np.reshape(x_test, (-1, 1))
temp_test = np.roll(x_test, -1)
temp_train = np.roll(x_train, -1)
for x in range(1, in_num):
x_train = np.concatenate((x_train[:-1], temp_train[:-1]), 1)
x_test = np.concatenate((x_test[:-1], temp_test[:-1]), 1)
temp_test = np.roll(temp_test, -1)[:-1]
temp_train = np.roll(temp_train, -1)[:-1]
return x_train, y_train, x_test, y_test
# here we split the time index, rather than the actual values, to show how we split the windows
feature_window, target_window, _, _ = split_into_train_test(y.index.values, 10, len(fh))
feature_window[:5, :]
target_window[:5]
# now we can split the actual values of the time series
x_train, y_train, x_test, y_test = split_into_train_test(y.values, 10, len(fh))
print(x_train.shape, y_train.shape)
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
model.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
What are potential pitfalls here?> This requires a lot of hand-written code which is often error-prone, not modular and not tuneable. > Note also that these steps involve a number of implicit hyper-parameters:* the way you slice the time series into windows (e.g. the window length)* the way you generate forecasts (recursive strategy, direct strategy, other hybrid strategies) Pitfall 3: how to generate forecasts?
###Code
print(x_test.shape, y_test.shape)
# add back time index to y_test
y_test = pd.Series(y_test, index=y.index[-len(fh):])
y_pred = model.predict(x_test)
smape_loss(pd.Series(y_pred, index=y_test.index), y_test)
###Output
_____no_output_____
###Markdown
But what's the problem here?> We actually don't make a multi-step-ahead forecast up to the 36th step ahead. Instead, we make 36 single-step-ahead forecasts always using the most recent data. But that's a solution to a different learning task! To fix this problem, we could write some code to do this recursively as in the M4 competition:
###Code
# slightly modified code from the M4 study
predictions = []
last_window = x_train[-1, :].reshape(1, -1) # make it into 2d array
last_prediction = model.predict(last_window)[0] # take value from array
for i in range(len(fh)):
# append prediction
predictions.append(last_prediction)
# update last window using previously predicted value
last_window[0] = np.roll(last_window[0], -1)
last_window[0, (len(last_window[0]) - 1)] = last_prediction
# predict next step ahead
last_prediction = model.predict(last_window)[0]
y_pred_rec = pd.Series(predictions, index=y_test.index)
smape_loss(y_pred_rec, y_test)
###Output
_____no_output_____
###Markdown
Forecasting with sktime Reduction: from forecasting to regressionsktime provides a meta-estimator for this approach, which is: * **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **tuneable**, allowing us to tune hyper-parameters like the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model
###Code
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
print(y_train.shape[0], y_test.shape[0])
from sktime.forecasting.compose import ReducedRegressionForecaster
from sklearn.neighbors import KNeighborsRegressor
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
To better understand the prior data transformation, we can look at how we can split the training series into windows. Internally, sktime uses a temporal time series splitter, similar to the cross-validation splitter in scikit-learn. Here we show how this works for the first 20 observations of the training series:
###Code
from sktime.forecasting.model_selection import SlidingWindowSplitter
cv = SlidingWindowSplitter(window_length=10, start_with_window=True)
for input_window, output_window in cv.split(y_train.iloc[:20]):
print(input_window, output_window)
###Output
[0 1 2 3 4 5 6 7 8 9] [10]
[ 1 2 3 4 5 6 7 8 9 10] [11]
[ 2 3 4 5 6 7 8 9 10 11] [12]
[ 3 4 5 6 7 8 9 10 11 12] [13]
[ 4 5 6 7 8 9 10 11 12 13] [14]
[ 5 6 7 8 9 10 11 12 13 14] [15]
[ 6 7 8 9 10 11 12 13 14 15] [16]
[ 7 8 9 10 11 12 13 14 15 16] [17]
[ 8 9 10 11 12 13 14 15 16 17] [18]
[ 9 10 11 12 13 14 15 16 17 18] [19]
###Markdown
Statistical forecasterssktime has a number of statistical forecasting algorithms, based on implementations in statsmodels. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality, we can write the following.Note that since this is monthly data, the seasonal periodicity (sp), or the number of periods per year, is 12.
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Another common model is the ARIMA model. In sktime, we interface [pmdarima](https://github.com/alkaline-ml/pmdarima), a package for automatically selecting the best ARIMA model. Since this searches over a number of possible model parametrisations, it may take a bit longer.
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Composite model buildingsktime provides a modular API for composite model building for forecasting. EnsemblingLike scikit-learn, sktime provides a meta-forecaster to ensemble multiple forecasting algorithms. For example, we can combine different variants of exponential smoothing as follows:
###Code
from sktime.forecasting.compose import EnsembleForecaster
forecaster = EnsembleForecaster([
("ses", ExponentialSmoothing(seasonal="multiplicative", sp=12)),
("holt", ExponentialSmoothing(trend="add", damped=False, seasonal="multiplicative", sp=12)),
("damped", ExponentialSmoothing(trend="add", damped=True, seasonal="multiplicative", sp=12))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
TuningIn the `ReducedRegressionForecaster`, both the `window_length` and `strategy` arguments are hyper-parameters which we may want to optimise.
###Code
from sktime.forecasting.model_selection import ForecastingGridSearchCV
forecaster = ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [5, 10, 15]}
# we fit the forecaster on the initial window, and then use temporal cross-validation to find the optimal parameter
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5))
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
gscv.best_params_
###Output
_____no_output_____
###Markdown
You could of course also try to tune the regressor inside `ReducedRegressionForecaster` using scikit-learn's `GridSearchCV` (a minimal sketch of this follows below). DetrendingNote that so far the reduction approach above does not take any seasonality or trend into account, but we can easily specify a pipeline which first detrends the data.sktime provides a generic detrender, a transformer which uses any forecaster and returns the in-sample residuals of the forecaster's predicted values. For example, to remove the linear trend of a time series, we can write:
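As an aside to the first sentence of this cell, here is a minimal sketch of tuning the inner regressor with scikit-learn's `GridSearchCV`. The parameter grid is hypothetical, and note that `GridSearchCV`'s default K-fold split ignores the temporal ordering of the tabulated windows:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import ReducedRegressionForecaster

# hypothetical grid for the inner regressor
tuned_regressor = GridSearchCV(KNeighborsRegressor(), param_grid={"n_neighbors": [1, 3, 5]})
forecaster = ReducedRegressionForecaster(
    regressor=tuned_regressor, window_length=15, strategy="recursive"
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
```

The detrending example described in this cell follows in the next code cell.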
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformers.single_series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_ys(y_train, y_pred, yt, labels=["y_train", "Fitted linear trend", "Residuals"]);
###Output
_____no_output_____
###Markdown
PipeliningLet's use the detrender in a pipeline together with de-seasonalisation. Note that in forecasting, when we apply data transformations before fitting, we need to apply the inverse transformation to the predicted values. For this purpose, we provide the following pipeline class:
###Code
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformers.single_series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster([
("deseasonalise", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ReducedRegressionForecaster(regressor=regressor, window_length=15, strategy="recursive"))
])
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"]);
smape_loss(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Of course, we could try again to optimise the hyper-parameters of components of the pipeline.Below we discuss two other aspects of forecasting: online learning, where we want to dynamically update forecasts as new data comes in, and prediction intervals, which allow us to quantify the uncertainty of our forecasts. Dynamic forecastsFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, all forecasters in sktime have an `update_predict` method. Here we make repeated single-step ahead forecasts over the test set.Note that the forecasting task is changed: while we still make 36 predictions, we do not predict 36 steps ahead, but instead make 36 single-step-ahead predictions.
###Code
forecaster = NaiveForecaster(strategy="last")
forecaster.fit(y_train)
cv = SlidingWindowSplitter(fh=1)
y_pred = forecaster.update_predict(y_test, cv)
smape_loss(y_test, y_pred)
plot_ys(y_train, y_test, y_pred);
###Output
_____no_output_____
###Markdown
For a single update, you can use the `update` method. Prediction intervalsSo far, we've only looked at point forecasts. In many cases, we're also interested in prediction intervals. sktime's interface supports prediction intervals, but we haven't implemented them for all algorithms yet.Here, we use the Theta forecasting algorithm:
###Code
from sktime.forecasting.theta import ThetaForecaster
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y_train)
alpha = 0.05 # 95% prediction intervals
y_pred, pred_ints = forecaster.predict(fh, return_pred_int=True, alpha=alpha)
smape_loss(y_test, y_pred)
fig, ax = plot_ys(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
ax.fill_between(y_pred.index, pred_ints["lower"], pred_ints["upper"], alpha=0.2, color="green", label=f"{1 - alpha:.0%} prediction intervals")
plt.legend();
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook gives a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, either uncomment and run the cell below, or `pip install -e` a local clone of the `sktime` `main` branch.
###Code
# from os import sys
# sys.path.append("..")
###Output
_____no_output_____
###Markdown
Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* for further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* for a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Multivariate Forecasters](section_1_2_4) * [1.2.5 Prediction intervals and quantile forecasts](section_1_2_5) * [1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 updating a forecaster with the update method](section_1_4_1) * [1.4.2 moving the "now" state without updating the model](section_1_4_2) * [1.4.3 walk-forward predictions on a batch of data](section_1_4_3) * [1.5 advanced evaluation worfklow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. Forecasters in sktime - main families](chapter2) * [2.1 exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_1) * [2.2 ARIMA and autoARIMA](section_2_2) * [2.3 BATS and TBATS](section_2_3) * [2.4 Facebook prophet](section_2_4) * [2.5 State Space Model (Structural Time Series)](section_2_5) * [2.6 AutoArima from StatsForecast](section_2_6) * [3. 
Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2) * [3.4.3 simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for them.We cover the following four workflows:* basic deployment workflow: batch fitting and forecasting* basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* advanced deployment workflow: fitting and rolling updates/forecasts* advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.Series` for univariate time series and sequences* `pd.DataFrame` for multivariate time series and sequencesThe `Series.index` and `DataFrame.index` are used for representing the time series or sequence index. `sktime` supports pandas integer, period and timestamp indices.NOTE: at current time (v0.9x), forecasting of multivariate time series is a stable functionality, but not covered in this tutorial. Contributions to extend the tutorial are welcome.**Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array` (a minimal sketch of this is given below).`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. preparation of the data2. specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. fitting the forecaster to the data, using the forecaster's `fit` method5. making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). step 1 - preparation of the dataas discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
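As referenced above, a minimal sketch (with made-up values only) of wrapping a plain `numpy.array` into a `pd.Series` with a monthly `PeriodIndex`; the example data set used in the next cell is instead loaded directly from `sktime`:

```python
import numpy as np
import pandas as pd

# made-up monthly values - wrap them into the pandas format expected by sktime forecasters
values = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
y_from_array = pd.Series(values, index=pd.period_range("1949-01", periods=len(values), freq="M"))
y_from_array
```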
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
step 2 - specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is that latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. using a numpy forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:

```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
```

Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
step 3 - specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 3.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
step 4 - fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
step 5 - requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 the basic deployment workflow in a nutshellfor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember a horizon that was already passed in `fit` and use it for prediction. The modified workflow that also accommodates such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may also require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being

```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
```

1.2.4. multivariate forecasting Some forecasters in sktime support multivariate forecasts. Some examples of multivariate forecasters are: `MultiplexForecaster`, `EnsembleForecaster`, `TransformedTargetForecaster` etc. In order to determine whether a forecaster can be multivariate, one can look at the `scitype:y` tag, which should be set to `multivariate` or `both`. To display the complete list of multivariate forecasters, search for forecasters with a 'multivariate' or 'both' value for the tag 'scitype:y', as follows:
###Code
from sktime.registry import all_estimators
for forecaster in all_estimators(filter_tags={"scitype:y": ["multivariate", "both"]}):
print(forecaster[0])
###Output
_____no_output_____
###Markdown
Below is an example of the general workflow of the multivariate `ColumnEnsembleForecaster` using the longley dataset from `sktime.datasets`. The workflow is the same as for univariate forecasters, but the input has more than one variable (column).
###Code
from sktime.datasets import load_longley
from sktime.forecasting.compose import ColumnEnsembleForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.trend import PolynomialTrendForecaster
_, y = load_longley()
y = y.drop(columns=["UNEMP", "ARMED", "POP"])
forecasters = [
("trend", PolynomialTrendForecaster(), 0),
("ses", ExponentialSmoothing(trend="add"), 1),
]
forecaster = ColumnEnsembleForecaster(forecasters=forecasters)
forecaster.fit(y, fh=[1, 2, 3])
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
The input to the multivariate forecaster `y` is a `pandas.DataFrame` where each column is a variable.
###Code
y
###Output
_____no_output_____
###Markdown
The result of the multivariate forecaster `y_pred` is a `pandas.DataFrame` where columns are the predicted values for each variable. The variables in `y_pred` are the same as in `y`, the input to the multivariate forecaster.
###Code
y_pred
###Output
_____no_output_____
###Markdown
1.2.5 probabilistic forecasting: prediction intervals, quantile, variance, and distributional forecasts `sktime` provides a unified interface to make probabilistic forecasts.The following methods are possibly available for probabilistic forecasts:* `predict_interval` produces interval forecasts. In addition to any `predict` arguments, an argument `coverage` (nominal interval coverage) must be provided.* `predict_quantiles` produces quantile forecasts. In addition to any `predict` arguments, an argument `alpha` (quantile values) must be provided.* `predict_var` produces variance forecasts. This has the same arguments as `predict`.* `predict_proba` produces full distributional forecasts. This has the same arguments as `predict`.Not all forecasters are capable of returning probabilistic forecasts, but if a forecaster provides one kind of probabilistic forecast, it is also capable of returning the others. The list of forecasters with such capability can be queried by `registry.all_estimators`, searching for those where the `capability:pred_int` tag has value `True` (a sketch of this query is given below).The basic workflow for probabilistic forecasts is similar to the basic forecasting workflow, with the difference that instead of `predict`, one of the probabilistic forecasting methods is used:
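As referenced above, a sketch of the capability query, mirroring the `scitype:y` tag query shown in Section 1.2.4 (assuming the same `filter_tags` mechanism applies to the boolean `capability:pred_int` tag); the probabilistic forecasting workflow itself follows in the next cell:

```python
from sktime.registry import all_estimators

# list forecasters that advertise probabilistic forecast support
for forecaster in all_estimators("forecaster", filter_tags={"capability:pred_int": True}):
    print(forecaster[0])
```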
###Code
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.theta import ThetaForecaster
# until fit, identical with the simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y, fh=fh)
###Output
_____no_output_____
###Markdown
Now we present the different probabilistic forecasting methods. `predict_interval` - interval predictions `predict_interval` takes an argument `coverage`, which is a float (or list of floats), the nominal coverage of the prediction interval(s) queried. `predict_interval` produces symmetric prediction intervals, for example, a coverage of `0.9` returns a "lower" forecast at quantile `0.5 - coverage/2 = 0.05`, and an "upper" forecast at quantile `0.5 + coverage/2 = 0.95`.
###Code
coverage = 0.9
y_pred_ints = forecaster.predict_interval(coverage=coverage)
y_pred_ints
###Output
_____no_output_____
###Markdown
The returned `y_pred_ints` is a `pandas.DataFrame` with a three-level column multi-index: the first level is the variable name from `y` in fit (or `Coverage` if no variable names were present); the second level contains the coverage fractions for which intervals were computed, in the same order as in the input `coverage`; the third level contains the columns `lower` and `upper`. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the lower/upper (as per column name) bound of the nominal coverage predictive interval for the index in the same row. pretty-plotting the predictive interval forecasts:
###Code
from sktime.utils import plotting
# also requires predictions
y_pred = forecaster.predict()
fig, ax = plotting.plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["Coverage"][coverage]["lower"],
y_pred_ints["Coverage"][coverage]["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
    label=f"{coverage:.0%} prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
`predict_quantiles` - quantile forecasts sktime offers `predict_quantiles` as a unified interface to return quantile values of predictions, similar to `predict_interval`. `predict_quantiles` has an argument `alpha`, containing the quantile values being queried. As in the case of `predict_interval`, `alpha` can be a `float` or a `list of floats`.
###Code
y_pred_quantiles = forecaster.predict_quantiles(alpha=[0.275, 0.975])
y_pred_quantiles
###Output
_____no_output_____
###Markdown
`y_pred_quantiles`, the output of predict_quantiles, is a `pandas.DataFrame` with a two-level column multiindex. The first level is the variable name from `y` in fit (or `Quantiles` if no variable names were present); the second level contains the quantile values (from `alpha`) for which quantile predictions were queried. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the quantile predictions for that variable and quantile value, for the time index in the same row. Remark, for clarity: quantile and (symmetric) interval forecasts can be translated into each other as follows. For **alpha < 0.5**, the alpha-quantile prediction is equal to the lower bound of a predictive interval with coverage = (0.5 - alpha) * 2. For **alpha > 0.5**, the alpha-quantile prediction is equal to the upper bound of a predictive interval with coverage = (alpha - 0.5) * 2. For example, the 0.05-quantile forecast equals the lower bound of the interval with coverage 0.9. `predict_var` - variance predictions `predict_var` produces variance predictions:
###Code
y_pred_var = forecaster.predict_var()
y_pred_var
###Output
_____no_output_____
###Markdown
The format of the output `y_pred_var` is the same as for `predict`, except that this is always coerced to a `pandas.DataFrame`, and entries are not point predictions but variance predictions. `predict_proba` - distribution predictions To predict full predictive distributions, `predict_proba` can be used.As this returns `tensorflow` `Distribution` objects, the deep learning dependency set `dl` of `sktime` (which includes `tensorflow` and `tensorflow-probability` dependencies) must be installed.
###Code
y_pred_proba = forecaster.predict_proba()
y_pred_proba
###Output
_____no_output_____
###Markdown
Distributions returned by `predict_proba` are by default marginal at time points, not joint over time points.More precisely, the returned `Distribution` object is formatted and to be interpreted as follows:* batch shape is 1D and same length as fh* event shape is 1D, with length equal to number of variables being forecast* i-th (batch) distribution is forecast for i-th entry of fh* j-th (event) component is j-th variable, same order as y in `fit`/`update`To return joint forecast distributions, the `marginal` parameter can be set to `False` (currently work in progress). In this case, a `Distribution` with 2D event shape `(len(fh), len(y))` is returned. 1.3 basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. specifying a quantitative performance metric to compare the actual test set against predictions4. computing the quantitative performance on the test set5. testing whether this performance is statistically better than a chosen baseline performanceNOTE: step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals); a rough sketch of a Diebold-Mariano test is given below.NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are only insofar representative as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "how" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). step 1 - splitting a historical data set into a temporal train and test batch
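As an aside before the step-by-step walkthrough: the NOTE above names the Diebold-Mariano test as one custom option for step 5. The following is only a rough sketch of such a test (assuming one-step-ahead errors, squared-error loss and a normal approximation; `y_pred_a` and `y_pred_b` are hypothetical forecasts from two competing models) and not an `sktime` feature:

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2):
    """Rough Diebold-Mariano statistic for equal predictive accuracy (squared-error loss)."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2  # loss differential
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))  # two-sided, normal approximation
    return dm, p_value

# usage sketch: diebold_mariano(y_test - y_pred_a, y_test - y_pred_b)
```

The temporal train/test split of step 1 now follows.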
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
step 2 - making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
steps 3 and 4 - specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# note: the FIRST argument is the ground truth, the SECOND argument are the forecasts
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows us to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
###Markdown
NOTE: some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument (a minimal sketch of this is given below). Refer to the API reference on individual metrics.NOTE: the workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics. step 5 - testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.9x), this is a roadmap development item. Contributions are very welcome. 1.3.1 the basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell uses the lean function metric interface.
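As a minimal sketch of the note above (assuming that `mean_absolute_scaled_error` accepts the training series via the `y_train` keyword, as described); the one-cell workflow announced above follows in the next cell:

```python
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error

# MASE scales the forecast error by the in-sample naive forecast error, hence it needs y_train
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)
```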
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.3.2 the basic batch forecast evaluation workflow in a nutshell - metric class interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 advanced deployment workflow: rolling updates & forecastsA common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 updating a forecaster with the `update` methodThe `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step".After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data).The general pattern is as follows:1. specify a forecasting strategy2. specify a relative forecasting horizon3. fit the forecaster to an initial batch of data using `fit`4. make forecasts for the relative forecasting horizon, using `predict`5. obtain new data; use `update` to ingest new data6. make forecasts using `predict` for the updated data7. repeat 5 and 6 as often as required**Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on.A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 moving the "now" state without updating the modelIn the rolling deployment mode, it may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed, but time has progressed; or, if computations take too long, and forecasts have to be queried.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update functions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 walk-forward predictions on a batch of data`sktime` can also simulate the update/predict deployment mode with a full batch of data.This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation.The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PR are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: rollback
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testingTo evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function.`evaluate` takes as arguments:- a `forecaster` to be evaluated- a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter`- a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
###Markdown
todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - main families`sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-art forecasting packages. All forecasters are available under the unified `sktime` interface.The main classes that are currently stably supported are:* `ExponentialSmoothing`, `ThetaForecaster`, and `autoETS` from `statsmodels`* `ARIMA` and `autoARIMA` from `pmdarima`* `BATS` and `TBATS` from `tbats`* `PolynomialTrend` for forecasting polynomial trends* `Prophet` which interfaces Facebook `prophet`For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command:
###Code
from sktime.registry import all_estimators
import pandas as pd
# all_estimators returns list of pairs - data frame conversion for pretty printing
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
All forecasters follow the same interface, and can be used in the workflows presented in Section 1.We proceed by showcasing some commonly used classes of forecasters.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.1 exponential smoothing, theta forecaster, autoETS from `statsmodels``sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS.For example, to use exponential smoothing with an additive trend component and multiplicative seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The selection of the exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.2 ARIMA and autoARIMA`sktime` interfaces `pmdarima` for its ARIMA class models.For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal pdq parameters automatically:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3 BATS and TBATS`sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.4 Facebook prophet`sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only pd.DatetimeIndex. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.5 State Space Model (Structural Time Series)We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.6 AutoARIMA from [StatsForecast](https://github.com/Nixtla/statsforecast)`sktime` interfaces `StatsForecast` for its `AutoARIMA` class models. `AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal pdq parameters automatically:
###Code
from sktime.forecasting.statsforecast import StatsForecastAutoARIMA
forecaster = StatsForecastAutoARIMA(sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more`sktime` supports a number of advanced composition patterns to create forecasters out of simpler components:* reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy".* tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits.* pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting, an instance of this is the common "STL forecaster".* autoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning.For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1).For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression`sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting.* **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem,* **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or strategy to generate forecasts* **adaptive**, in the sense that it adapts the scikit-learn's estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fit the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half).Below, the composite is constructed using the shorthand function `make_reduction` which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameter which can be later tuned as hyper-parameters
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput". Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (exposed as `estimator__`-prefixed parameters, e.g., `estimator__n_neighbors`), and the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
###Code
forecaster.get_params()
###Output
_____no_output_____
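###Markdown
Parameters can also be set in the same nested fashion via `set_params`. A minimal sketch (the nested parameter names are the ones exposed by `get_params` above, e.g. `estimator__n_neighbors` for the wrapped regressor):
###Code
# hedged sketch: setting nested parameters on the reduction composite
# "window_length" belongs to the reduction wrapper,
# "estimator__n_neighbors" addresses the wrapped KNeighborsRegressor
forecaster.set_params(window_length=12, estimator__n_neighbors=3)
forecaster.get_params()["window_length"]
###Output
_____no_output_____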
###Markdown
3.2 Pipelining, detrending and deseasonalizationA common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline`sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
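###Markdown
Conceptually, the pipeline above is equivalent to deseasonalizing manually, forecasting the deseasonalized series, and re-seasonalizing the forecast. The following is a simplified sketch of that logic for illustration, not the exact internal implementation:
###Code
# hedged sketch: manual equivalent of the pipeline above (simplified)
deseasonalizer = Deseasonalizer(model="multiplicative", sp=12)
y_deseason = deseasonalizer.fit_transform(y_train)  # remove the seasonal component
arima = ARIMA().fit(y_deseason)  # forecast the deseasonalized series
y_pred_deseason = arima.predict(fh)
y_pred_manual = deseasonalizer.inverse_transform(y_pred_deseason)  # add seasonality back
plot_series(y_test, y_pred_manual, labels=["y_test", "y_pred_manual"])
###Output
_____no_output_____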
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface-defining methods. In `fit`, all transformers apply `fit_transform` to the data, followed by the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. 3.2.2 The `Detrender` as pipeline componentFor detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`.To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection`sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.3 Parameter tuning`sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`The compositor `ForecastingGridSearchCV` (like the other tuners) is constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably.As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc. require no manual effort and are handled behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex compositesAs in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scoresAll tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The score can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3.Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`.In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
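###Markdown
`cv_results_` is a table-like object; a quick way to see which per-fold and aggregate quantities are recorded is to list its columns (exact column names may differ between `sktime` versions):
###Code
import pandas as pd
# wrapping in pd.DataFrame makes this robust to the exact return type
pd.DataFrame(gscv.cv_results_).columns.tolist()
###Output
_____no_output_____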
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging`sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from.The strategies discussed in this section are:* autoML aka automated model selection* simple ensembling* prediction weighted ensembles with weight updates, and hedging strategies 3.4.1 autoML aka automatic model selection, using tuning plus multiplexerThe most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3.In isolation, `MultiplexForecaster` is constructed with a named list `forecasters`, of forecasters. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3.Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough``sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., pipeline structure. This is achieved with the `OptionalPassthrough` transformer.The `OptionalPassthrough` transformer allows tuning whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True`, it ignores the transformer within.To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows access to and tuning of attributes of nested objects like TabularToSeriesAdaptor(StandardScaler()). We can use `__` multiple times if we have more than two levels of nesting.In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4), as well as over the forecaster's and the scaler's parameters.Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure.Note: `scikit-learn` and `sktime` do not support conditional parameter sets at present (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
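###Markdown
After fitting, the selected pipeline structure can be read off the tuned `passthrough` flags, together with the other tuned parameters:
###Code
# which transformers were kept, and which forecaster strategy was selected
gscv.best_params_
###Output
_____no_output_____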
###Markdown
3.4.3 simple ensembling strategiesTODO - contributions in this section are appreciated
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensemblesFor model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_learning` module, which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters.Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then, by fitting our forecasters and performing updates and prediction with the `update_predict_single` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
**Set-up instructions:** this notebook give a tutorial on the forecasting learning task supported by `sktime`.On binder, this should run out-of-the-box.To run this notebook as intended, ensure that `sktime` with basic dependency requirements is installed in your python environment.To run this notebook with a local development version of sktime, an editable developer installation is recommended, see the [sktime developer install guide](https://www.sktime.org/en/stable/installation.htmldevelopment-versions) for instructions. Forecasting with sktimeIn forecasting, past data is used to make temporal forward predictions of a time series. This is notably different from tabular prediction tasks supported by `scikit-learn` and similar libraries.`sktime` provides a common, `scikit-learn`-like interface to a variety of classical and ML-style forecasting algorithms, together with tools for building pipelines and composite machine learning models, including temporal tuning schemes, or reductions such as walk-forward application of `scikit-learn` regressors.**Section 1** provides an overview of common forecasting workflows supported by `sktime`.**Section 2** discusses the families of forecasters available in `sktime`.**Section 3** discusses advanced composition patterns, including pipeline building, reduction, tuning, ensembling, and autoML.**Section 4** gives an introduction to how to write custom estimators compliant with the `sktime` interface.Further references:* For further details on how forecasting is different from supervised prediction à la `scikit-learn`, and pitfalls of misdiagnosing forecasting as supervised prediction, have a look at [this notebook](./01a_forecasting_sklearn.ipynb)* For a scientific reference, take a look at [our paper on forecasting with sktime](https://arxiv.org/abs/2005.08067) in which we discuss `sktime`'s forecasting module in more detail and use it to replicate and extend the M4 study. Table of Contents* [1. Basic forecasting workflows](chapter1) * [1.1 Data container format](section_1_1) * [1.2 Basic deployment workflow - batch fitting and forecasting](section_1_2) * [1.2.1 Basic deployment workflow in a nutshell](section_1_2_1) * [1.2.2 Forecasters that require the horizon already in `fit`](section_1_2_2) * [1.2.3 Forecasters that can make use of exogeneous data](section_1_2_3) * [1.2.4 Multivariate Forecasters](section_1_2_4) * [1.2.5 Prediction intervals and quantile forecasts](section_1_2_5) * [1.2.6 Panel forecasts and hierarchical forecasts](section_1_2_6) * [1.3 Basic evaluation workflow - evaluating a batch of forecasts against ground truth observations](section_1_3) * [1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interface](section_1_3_1) * [1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface](section_1_3_2) * [1.4 Advanced deployment workflow: rolling updates & forecasts](section_1_4) * [1.4.1 Updating a forecaster with the update method](section_1_4_1) * [1.4.2 Moving the "now" state without updating the model](section_1_4_2) * [1.4.3 Walk-forward predictions on a batch of data](section_1_4_3) * [1.5 Advanced evaluation worfklow: rolling re-sampling and aggregate errors, rolling back-testing](section_1_5) * [2. 
Forecasters in sktime - searching, tags, common families](chapter2) * [2.1 Forecaster lookup - the registry](section_2_1) * [2.2 Forecaster tags](section_2_2) * [2.2.1 Capability tags: multivariate, probabilistic, hierarchical](section_2_2_1) * [2.2.2 Finding and listing forecasters by tag](section_2_2_2) * [2.2.3 Listing all forecaster tags](section_2_2_3) * [2.3 Common forecaster types](section_2_3) * [2.3.1 Exponential smoothing, theta forecaster, autoETS from statsmodels](section_2_3_1) * [2.3.2 ARIMA and autoARIMA](section_2_3_2) * [2.3.3 BATS and TBATS](section_2_3_3) * [2.3.4 Facebook prophet](section_2_3_4) * [2.3.5 State Space Model (Structural Time Series)](section_2_3_5) * [2.3.6 AutoArima from StatsForecast](section_2_3_6) * [3. Advanced composition patterns - pipelines, reduction, autoML, and more](chapter3) * [3.1 Reduction: from forecasting to regression](section_3_1) * [3.2 Pipelining, detrending and deseasonalization](section_3_2) * [3.2.1 The basic forecasting pipeline](section_3_2_1) * [3.2.2 The Detrender as pipeline component](section_3_2_2) * [3.2.3 Complex pipeline composites and parameter inspection](section_3_2_3) * [3.3 Parameter tuning](section_3_3) * [3.3.1 Basic tuning using ForecastingGridSearchCV](section_3_3_1) * [3.3.2 Tuning of complex composites](section_3_3_2) * [3.3.3 Selecting the metric and retrieving scores](section_3_3_3) * [3.4 autoML aka automated model selection, ensembling and hedging](section_3_4) * [3.4.1 autoML aka automatic model selection, using tuning plus multiplexer](section_3_4_1) * [3.4.2 autoML: selecting transformer combinations via OptimalPassthrough](section_3_4_2) * [3.4.3 Simple ensembling strategies](section_3_4_3) * [3.4.4 Prediction weighted ensembles and hedge ensembles](section_3_4_4) * [4. Extension guide - implementing your own forecaster](chapter4) * [5. Summary](chapter5) Package imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
1. Basic forecasting workflows This section explains the basic forecasting workflows, and key interface points for them.We cover the following four workflows:* Basic deployment workflow: batch fitting and forecasting* Basic evaluation workflow: evaluating a batch of forecasts against ground truth observations* Advanced deployment workflow: fitting and rolling updates/forecasts* Advanced evaluation workflow: using rolling forecast splits and computing split-wise and aggregate errors, including common back-testing schemes 1.1 Data container formatAll workflows make common assumptions on the input data format.`sktime` uses `pandas` for representing time series:* `pd.DataFrame` for time series and sequences, primarily. Rows represent time indices, columns represent variables.* `pd.Series` can also be used for univariate time series and sequences* `numpy` arrays (1D and 2D) can also be passed, but `pandas` use is encouraged.The `Series.index` and `DataFrame.index` are used for representing the time series or sequence index.`sktime` supports pandas integer, period and timestamp indices for simple time series.`sktime` supports further, additional container formats for panel and hierarchical time series; these are discussed in Section 1.2.6.**Example:** as the running example in this tutorial, we use a textbook data set, the Box-Jenkins airline data set, which consists of the number of monthly totals of international airline passengers, from 1949 - 1960. Values are in thousands. See "Makridakis, Wheelwright and Hyndman (1998) Forecasting: methods and applications", exercises sections 2 and 3.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
y = load_airline()
# plotting for visualization
plot_series(y)
y.index
###Output
_____no_output_____
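###Markdown
A series in the expected format can also be constructed directly from in-memory values. A minimal sketch with made-up numbers, purely for illustration:
###Code
# hedged sketch: building an sktime-compatible series from a numpy array,
# using a monthly period index (values are illustrative only)
values = np.random.default_rng(42).normal(loc=100, scale=10, size=24)
index = pd.period_range(start="2000-01", periods=24, freq="M")
y_custom = pd.Series(values, index=index)
y_custom.head()
###Output
_____no_output_____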
###Markdown
Generally, users are expected to use the in-built loading functionality of `pandas` and `pandas`-compatible packages to load data sets for forecasting, such as `read_csv` or the `Series` or `DataFrame` constructors if data is available in another in-memory format, e.g., `numpy.array`.`sktime` forecasters may accept input in `pandas`-adjacent formats, but will produce outputs in, and attempt to coerce inputs to, `pandas` formats.NOTE: if your favourite format is not properly converted or coerced, kindly consider contributing that functionality to `sktime`. 1.2 Basic deployment workflow - batch fitting and forecasting The simplest use case workflow is batch fitting and forecasting, i.e., fitting a forecasting model to one batch of past data, then asking for forecasts at time points in the future.The steps in this workflow are as follows:1. Preparation of the data2. Specification of the time points for which forecasts are requested. This uses a `numpy.array` or the `ForecastingHorizon` object.3. Specification and instantiation of the forecaster. This follows a `scikit-learn`-like syntax; forecaster objects follow the familiar `scikit-learn` `BaseEstimator` interface.4. Fitting the forecaster to the data, using the forecaster's `fit` method5. Making a forecast, using the forecaster's `predict` methodThe below first outlines the vanilla variant of the basic deployment workflow, step-by-step.At the end, one-cell workflows are provided, with common deviations from the pattern (Sections 1.2.1 and following). Step 1 - Preparation of the dataAs discussed in Section 1.1, the data is assumed to be in `pd.Series` or `pd.DataFrame` format.
###Code
from sktime.datasets import load_airline
from sktime.utils.plotting import plot_series
# in the example, we use the airline data set.
y = load_airline()
plot_series(y)
###Output
_____no_output_____
###Markdown
Step 2 - Specifying the forecasting horizon Now we need to specify the forecasting horizon and pass that to our forecasting algorithm.There are two main ways:* Using a `numpy.array` of integers. This assumes either integer index or periodic index (`PeriodIndex`) in the time series; the integer indicates the number of time points or periods ahead we want to make a forecast for. E.g., `1` means forecast the next period, `2` the second next period, and so on.* Using a `ForecastingHorizon` object. This can be used to define forecast horizons, using any supported index type as an argument. No periodic index is assumed.Forecasting horizons can be absolute, i.e., referencing specific time points in the future, or relative, i.e., referencing time differences to the present. As a default, the present is that latest time point seen in any `y` passed to the forecaster.`numpy.array` based forecasting horizons are always relative; `ForecastingHorizon` objects can be both relative and absolute. In particular, absolute forecasting horizons can only be specified using `ForecastingHorizon`. Using a `numpy` forecasting horizon
###Code
fh = np.arange(1, 37)
fh
###Output
_____no_output_____
###Markdown
This will ask for monthly predictions for the next three years, since the original series period is 1 month.In another example, to predict only the second and fifth month ahead, one could write:```python
import numpy as np

fh = np.array([2, 5])  # 2nd and 5th step ahead
``` Using a `ForecastingHorizon` based forecasting horizonThe `ForecastingHorizon` object takes absolute indices as input, but considers the input absolute or relative depending on the `is_relative` flag.`ForecastingHorizon` will automatically assume a relative horizon if temporal difference types from `pandas` are passed; if value types from `pandas` are passed, it will assume an absolute horizon.To define an absolute `ForecastingHorizon` in our example:
###Code
from sktime.forecasting.base import ForecastingHorizon
fh = ForecastingHorizon(
pd.PeriodIndex(pd.date_range("1961-01", periods=36, freq="M")), is_relative=False
)
fh
###Output
_____no_output_____
###Markdown
`ForecastingHorizon`-s can be converted from relative to absolute and back via the `to_relative` and `to_absolute` methods. Both of these conversions require a compatible `cutoff` to be passed:
###Code
cutoff = pd.Period("1960-12", freq="M")
fh.to_relative(cutoff)
fh.to_absolute(cutoff)
###Output
_____no_output_____
###Markdown
Step 3 - Specifying the forecasting algorithmTo make forecasts, a forecasting algorithm needs to be specified. This is done using a `scikit-learn`-like interface. Most importantly, all `sktime` forecasters follow the same interface, so the preceding and remaining steps are the same, no matter which forecaster is being chosen.For this example, we choose the naive forecasting method of predicting the last seen value. More complex specifications are possible, using pipeline and reduction construction syntax; this will be covered later in Section 2.
###Code
from sktime.forecasting.naive import NaiveForecaster
forecaster = NaiveForecaster(strategy="last")
###Output
_____no_output_____
###Markdown
Step 4 - Fitting the forecaster to the seen dataNow the forecaster needs to be fitted to the seen data:
###Code
forecaster.fit(y)
###Output
_____no_output_____
###Markdown
Step 5 - Requesting forecastsFinally, we request forecasts for the specified forecasting horizon. This needs to be done after fitting the forecaster:
###Code
y_pred = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.1 The basic deployment workflow in a nutshellFor convenience, we present the basic deployment workflow in one cell.This uses the same data, but a different forecaster: predicting the latest value observed in the same month.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.naive import NaiveForecaster
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y)
# step 5: querying predictions
y_pred = forecaster.predict(fh)
# optional: plotting predictions and past data
plot_series(y, y_pred, labels=["y", "y_pred"])
###Output
_____no_output_____
###Markdown
1.2.2 Forecasters that require the horizon already in `fit` Some forecasters need the forecasting horizon provided already in `fit`. Such forecasters will produce informative error messages when it is not passed in `fit`. All forecasters will remember a horizon that was already passed in `fit` for prediction. The modified workflow that also accommodates such forecasters is as follows:
###Code
# step 1: data specification
y = load_airline()
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
1.2.3 Forecasters that can make use of exogeneous dataMany forecasters can make use of exogeneous time series, i.e., other time series that are not forecast, but are useful for forecasting `y`. Exogeneous time series are always passed as an `X` argument, in `fit`, `predict`, and other methods (see below). Exogeneous time series should always be passed as `pandas.DataFrames`. Most forecasters that can deal with exogeneous time series will assume that the time indices of `X` passed to `fit` are a super-set of the time indices in `y` passed to `fit`; and that the time indices of `X` passed to `predict` are a super-set of time indices in `fh`, although this is not a general interface restriction. Forecasters that do not make use of exogeneous time series still accept the argument (and do not use it internally).The general workflow for passing exogeneous data is as follows:
###Code
# step 1: data specification
y = load_airline()
# we create some dummy exogeneous data
X = pd.DataFrame(index=y.index)
# step 2: specifying forecasting horizon
fh = np.arange(1, 37)
# step 3: specifying the forecasting algorithm
forecaster = NaiveForecaster(strategy="last", sp=12)
# step 4: fitting the forecaster
forecaster.fit(y, X=X, fh=fh)
# step 5: querying predictions
y_pred = forecaster.predict(X=X)
###Output
_____no_output_____
###Markdown
NOTE: as in workflows 1.2.1 and 1.2.2, some forecasters that use exogeneous variables may require the forecasting horizon only in `predict`. Such forecasters may also be called with steps 4 and 5 being```python
forecaster.fit(y, X=X)
y_pred = forecaster.predict(fh=fh, X=X)
``` 1.2.4 Multivariate forecasting Some forecasters in sktime support multivariate forecasts. Some examples of multivariate forecasters are: `MultiplexForecaster`, `EnsembleForecaster`, `TransformedTargetForecaster` etc. In order to determine whether a forecaster can be multivariate, one can look at the `scitype:y` in `tags`, which should be set to `multivariate` or `both`. To display the complete list of multivariate forecasters, search for forecasters with 'multivariate' or 'both' tag value for the tag 'scitype:y', as follows:
###Code
from sktime.registry import all_estimators
for forecaster in all_estimators(filter_tags={"scitype:y": ["multivariate", "both"]}):
print(forecaster[0])
###Output
_____no_output_____
###Markdown
Below is an example of the general workflow of the multivariate `ColumnEnsembleForecaster`, using the Longley dataset from `sktime.datasets`. The workflow is the same as for univariate forecasters, but the input has more than one variable (column).
###Code
from sktime.datasets import load_longley
from sktime.forecasting.compose import ColumnEnsembleForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.trend import PolynomialTrendForecaster
_, y = load_longley()
y = y.drop(columns=["UNEMP", "ARMED", "POP"])
forecasters = [
("trend", PolynomialTrendForecaster(), 0),
("ses", ExponentialSmoothing(trend="add"), 1),
]
forecaster = ColumnEnsembleForecaster(forecasters=forecasters)
forecaster.fit(y, fh=[1, 2, 3])
y_pred = forecaster.predict()
###Output
_____no_output_____
###Markdown
The input to the multivariate forecaster `y` is a `pandas.DataFrame` where each column is a variable.
###Code
y
###Output
_____no_output_____
###Markdown
The result of the multivariate forecaster `y_pred` is a `pandas.DataFrame` where columns are the predicted values for each variable. The variables in `y_pred` are the same as in `y`, the input to the multivariate forecaster.
###Code
y_pred
###Output
_____no_output_____
###Markdown
1.2.5 Probabilistic forecasting: prediction intervals, quantile, variance, and distributional forecasts `sktime` provides a unified interface to make probabilistic forecasts.The following methods are possibly available for probabilistic forecasts:* `predict_interval` produces interval forecasts. In addition to any `predict` arguments, an argument `coverage` (nominal interval coverage) must be provided.* `predict_quantiles` produces quantile forecasts. In addition to any `predict` arguments, an argument `alpha` (quantile values) must be provided.* `predict_var` produces variance forecasts. This has the same arguments as `predict`.* `predict_proba` produces full distributional forecasts. This has the same arguments as `predict`.Not all forecasters are capable of returning probabilistic forecasts, but if a forecaster provides one kind of probabilistic forecast, it is also capable of returning the others. The list of forecasters with such capability can be queried by `registry.all_estimators`, searching for those where the `capability:pred_int` tag has value `True`.The basic workflow for probabilistic forecasts is similar to the basic forecasting workflow, with the difference that instead of `predict`, one of the probabilistic forecasting methods is used:
###Code
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.theta import ThetaForecaster
# until fit, identical with the simple workflow
y = load_airline()
fh = np.arange(1, 13)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(y, fh=fh)
###Output
_____no_output_____
###Markdown
Now we present the different probabilistic forecasting methods. `predict_interval` - interval predictions `predict_interval` takes an argument `coverage`, which is a float (or list of floats), the nominal coverage of the prediction interval(s) queried. `predict_interval` produces symmetric prediction intervals, for example, a coverage of `0.9` returns a "lower" forecast at quantile `0.5 - coverage/2 = 0.05`, and an "upper" forecast at quantile `0.5 + coverage/2 = 0.95`.
###Code
coverage = 0.9
y_pred_ints = forecaster.predict_interval(coverage=coverage)
y_pred_ints
###Output
_____no_output_____
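###Markdown
Since `coverage` also accepts a list of floats, several nominal coverages can be requested in a single call:
###Code
# multiple nominal coverages at once
forecaster.predict_interval(coverage=[0.5, 0.9, 0.99])
###Output
_____no_output_____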
###Markdown
The return `y_pred_ints` is a `pandas.DataFrame` with a column multi-index: The first level is variable name from `y` in fit (or `Coverage` if no variable names were present), second level coverage fractions for which intervals were computed, in the same order as in input `coverage`; third level columns `lower` and `upper`. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are lower/upper (as column name) bound of the nominal coverage predictive interval for the index in the same row. Pretty-plotting the predictive interval forecasts:
###Code
from sktime.utils import plotting
# also requires predictions
y_pred = forecaster.predict()
fig, ax = plotting.plot_series(y, y_pred, labels=["y", "y_pred"])
ax.fill_between(
ax.get_lines()[-1].get_xdata(),
y_pred_ints["Coverage"][coverage]["lower"],
y_pred_ints["Coverage"][coverage]["upper"],
alpha=0.2,
color=ax.get_lines()[-1].get_c(),
label=f"{coverage}% prediction intervals",
)
ax.legend();
###Output
_____no_output_____
###Markdown
`predict_quantiles` - quantile forecasts sktime offers `predict_quantiles` as a unified interface to return quantile values of predictions, similar to `predict_interval`.`predict_quantiles` has an argument `alpha`, containing the quantile values being queried. As in the case of `predict_interval`, `alpha` can be a `float` or a `list of floats`.
###Code
y_pred_quantiles = forecaster.predict_quantiles(alpha=[0.275, 0.975])
y_pred_quantiles
###Output
_____no_output_____
###Markdown
`y_pred_quantiles`, the output of predict_quantiles, is a `pandas.DataFrame` with a two-level column multiindex. The first level is variable name from `y` in fit (or `Quantiles` if no variable names were present), second level are the quantile values (from `alpha`) for which quantile predictions were queried. Rows are the indices for which forecasts were made (same as in `y_pred` or `fh`). Entries are the quantile predictions for that variable, that quantile value, for the time index in the same row. Remark: for clarity, quantile and (symmetric) interval forecasts can be translated into each other as follows: for **alpha < 0.5**, the alpha-quantile prediction is equal to the lower bound of a predictive interval with coverage = (0.5 - alpha) * 2; for **alpha > 0.5**, the alpha-quantile prediction is equal to the upper bound of a predictive interval with coverage = (alpha - 0.5) * 2. `predict_var` - variance predictions `predict_var` produces variance predictions:
###Code
y_pred_var = forecaster.predict_var()
y_pred_var
###Output
_____no_output_____
###Markdown
The format of the output `y_pred_var` is the same as for `predict`, except that this is always coerced to a `pandas.DataFrame`, and entries are not point predictions but variance predictions. `predict_proba` - distribution predictions To predict full predictive distributions, `predict_proba` can be used.As this returns `tensorflow` `Distribution` objects, the deep learning dependency set `dl` of `sktime` (which includes `tensorflow` and `tensorflow-probability` dependencies) must be installed.
###Code
y_pred_proba = forecaster.predict_proba()
y_pred_proba
###Output
_____no_output_____
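###Markdown
Treating the return value as a `tensorflow-probability` `Distribution` object (as stated above), it can be queried for samples and summary statistics. A minimal sketch, assuming the `dl` dependency set is installed:
###Code
# hedged sketch: querying the distributional forecast
samples = y_pred_proba.sample(3)  # three sampled forecast paths (marginal per time point)
means = y_pred_proba.mean()  # point forecasts implied by the distribution
means
###Output
_____no_output_____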
###Markdown
Distributions returned by `predict_proba` are by default marginal at time points, not joint over time points.More precisely, the returned `Distribution` object is formatted and to be interpreted as follows:* Batch shape is 1D and same length as fh* Event shape is 1D, with length equal to number of variables being forecast* i-th (batch) distribution is forecast for i-th entry of fh* j-th (event) component is j-th variable, same order as y in `fit`/`update`To return joint forecast distributions, the `marginal` parameter can be set to `False` (currently work in progress). In this case, a `Distribution` with 2D event shape `(len(fh), len(y))` is returned. 1.2.6 Panel forecasts and hierarchical forecasts `sktime` provides a unified interface to make panel and hierarchical forecasts.All `sktime` forecasters can be applied to panel and hierarchical data, which needs to be presented in specific input formats.Forecasters that are not genuinely panel or hierarchical forecasters will be applied by instance.The recommended (not the only) format to pass panel and hierarchical data is a `pandas.DataFrame` with `MultiIndex` row. In this `MultiIndex`, the last level must be in an `sktime` compatible time index format, the remaining levels are panel or hierarchy nodes.Example data:
###Code
from sktime.utils._testing.hierarchical import _bottom_hier_datagen
y = _bottom_hier_datagen(no_levels=2)
y
###Output
_____no_output_____
###Markdown
As stated, all forecasters, genuinely hierarchical or not, can be applied, with all workflows described in this section, to produce hierarchical forecasts.The syntax is exactly the same as for plain time series, except for the hierarchy levels in input and output data:
###Code
from sktime.forecasting.arima import ARIMA
fh = [1, 2, 3]
forecaster = ARIMA()
forecaster.fit(y, fh=fh)
forecaster.predict()
###Output
_____no_output_____
###Markdown
Further details on hierarchical forecasting, including reduction, aggregation, reconciliation, are presented in the "hierarchical forecasting" tutorial. 1.3 Basic evaluation workflow - evaluating a batch of forecasts against ground truth observationsIt is good practice to evaluate statistical performance of a forecaster before deploying it, and regularly re-evaluate performance if in continuous deployment. The evaluation workflow for the basic batch forecasting task, as solved by the workflow in Section 1.2, consists of comparing batch forecasts with actuals. This is sometimes called (batch-wise) backtesting.The basic evaluation workflow is as follows:1. Splitting a representatively chosen historical series into a temporal training and test set. The test set should be temporally in the future of the training set.2. Obtaining batch forecasts, as in Section 1.2, by fitting a forecaster to the training set, and querying predictions for the test set3. Specifying a quantitative performance metric to compare the actual test set against predictions4. Computing the quantitative performance on the test set5. Testing whether this performance is statistically better than a chosen baseline performanceNOTE: Step 5 (testing) is currently not supported in `sktime`, but is on the development roadmap. For the time being, it is advised to use custom implementations of appropriate methods (e.g., Diebold-Mariano test; stationary confidence intervals).NOTE: this evaluation set-up determines how well a given algorithm would have performed on past data. Results are only insofar representative as future performance can be assumed to mirror past performance. This can be argued under certain assumptions (e.g., stationarity), but will in general be false. Monitoring of forecasting performance is hence advised in case an algorithm is applied multiple times. **Example:** In the example, we will use the same airline data as in Section 1.2. But, instead of predicting the next 3 years, we hold out the last 3 years of the airline data (below: `y_test`), and see how the forecaster would have performed three years ago, when asked to forecast the most recent 3 years (below: `y_pred`), from the years before (below: `y_train`). "How" is measured by a quantitative performance metric (below: `mean_absolute_percentage_error`). This is then considered as an indication of how well the forecaster would perform in the coming 3 years (what was done in Section 1.2). This may or may not be a stretch depending on statistical assumptions and data properties (caution: it often is a stretch - past performance is in general not indicative of future performance). Step 1 - Splitting a historical data set into a temporal train and test batch
###Code
from sktime.forecasting.model_selection import temporal_train_test_split
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# we will try to forecast y_test from y_train
# plotting for illustration
plot_series(y_train, y_test, labels=["y_train", "y_test"])
print(y_train.shape[0], y_test.shape[0])
###Output
_____no_output_____
###Markdown
Step 2 - Making forecasts for y_test from y_trainThis is almost verbatim the workflow in Section 1.2, using `y_train` to predict the indices of `y_test`.
###Code
# we can simply take the indices from `y_test` where they already are stored
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
# y_pred will contain the predictions
y_pred = forecaster.predict(fh)
# plotting for illustration
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
###Output
_____no_output_____
###Markdown
Steps 3 and 4 - Specifying a forecasting metric, evaluating on the test setThe next step is to specify a forecasting metric. These are functions that return a number when input with prediction and actual series. They are different from `sklearn` metrics in that they accept series with indices rather than `np.array`s. Forecasting metrics can be invoked in two ways:* using the lean function interface, e.g., `mean_absolute_percentage_error` which is a python function `(y_true : pd.Series, y_pred : pd.Series) -> float`* using the composable class interface, e.g., `MeanAbsolutePercentageError`, which is a python class, callable with the same signatureCasual users may opt to use the function interface. The class interface supports advanced use cases, such as parameter modification, custom metric composition, tuning over metric parameters (not covered in this tutorial)
###Code
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# option 1: using the lean function interface
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# note: the FIRST argument is the ground truth, the SECOND argument are the forecasts
# the order matters for most metrics in general
###Output
_____no_output_____
###Markdown
To properly interpret numbers like this, it is useful to understand properties of the metric in question (e.g., lower is better), and to compare against suitable baselines and contender algorithms (see step 5).
###Code
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# option 2: using the composable class interface
mape = MeanAbsolutePercentageError(symmetric=False)
# the class interface allows us to easily construct variants of the MAPE
# e.g., the non-symmetric version
# it also allows for inspection of metric properties
# e.g., are higher values better (answer: no)?
mape.greater_is_better
# evaluation works exactly like in option 1, but with the instantiated object
mape(y_test, y_pred)
###Output
_____no_output_____
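###Markdown
Some metrics additionally need the training series to compute a scale, for instance the mean absolute scaled error (see also the note below). A minimal sketch, passing the training series via the `y_train` argument:
###Code
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error
# MASE scales the error by the in-sample naive forecast error on y_train
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)
###Output
_____no_output_____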
###Markdown
NOTE: Some metrics, such as `mean_absolute_scaled_error`, also require the training set for evaluation. In this case, the training set should be passed as a `y_train` argument. Refer to the API reference on individual metrics.NOTE: The workflow is the same for forecasters that make use of exogeneous data - no `X` is passed to the metrics. Step 5 - Testing performance against benchmarksIn general, forecast performances should be quantitatively tested against benchmark performances.Currently (`sktime` v0.12.x), this is a roadmap development item. Contributions are very welcome. 1.3.1 The basic batch forecast evaluation workflow in a nutshell - function metric interfaceFor convenience, we present the basic batch forecast evaluation workflow in one cell.This cell is using the lean function metric interface.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric and
# step 4: computing the forecast performance
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
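###Markdown
As noted earlier, some metrics additionally require the training series. A minimal sketch for `mean_absolute_scaled_error`, assuming `y_train`, `y_test` and `y_pred` from the cell above are in scope:
###Code
from sktime.performance_metrics.forecasting import mean_absolute_scaled_error
# MASE scales the forecast error by the in-sample naive forecast error, hence y_train is required
mean_absolute_scaled_error(y_test, y_pred, y_train=y_train)
###Output
_____no_output_____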
###Markdown
1.3.2 The basic batch forecast evaluation workflow in a nutshell - metric class interface. For convenience, we present the basic batch forecast evaluation workflow in one cell. This cell is using the advanced class specification interface for metrics.
###Code
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError
# step 1: splitting historical data
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
# step 2: running the basic forecasting workflow
fh = ForecastingHorizon(y_test.index, is_relative=False)
forecaster = NaiveForecaster(strategy="last", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
# step 3: specifying the evaluation metric
mape = MeanAbsolutePercentageError(symmetric=False)
# if function interface is used, just use the function directly in step 4
# step 4: computing the forecast performance
mape(y_test, y_pred)
# step 5: testing forecast performance against baseline
# under development
###Output
_____no_output_____
###Markdown
1.4 Advanced deployment workflow: rolling updates & forecasts. A common use case requires the forecaster to regularly update with new data and make forecasts on a rolling basis. This is especially useful if the same kind of forecast has to be made at regular time points, e.g., daily or weekly. `sktime` forecasters support this type of deployment workflow via the `update` and `update_predict` methods. 1.4.1 Updating a forecaster with the `update` method. The `update` method can be called when a forecaster is already fitted, to ingest new data and make updated forecasts - this is referred to as an "update step". After the update, the forecaster's internal "now" state (the `cutoff`) is set to the latest time stamp seen in the update batch (assumed to be later than previously seen data). The general pattern is as follows: 1. Specify a forecasting strategy. 2. Specify a relative forecasting horizon. 3. Fit the forecaster to an initial batch of data using `fit`. 4. Make forecasts for the relative forecasting horizon, using `predict`. 5. Obtain new data; use `update` to ingest new data. 6. Make forecasts using `predict` for the updated data. 7. Repeat 5 and 6 as often as required. **Example**: suppose that, in the airline example, we want to make forecasts a year ahead, but every month, starting December 1957. The first few months, forecasts would be made as follows:
###Code
from sktime.datasets import load_airline
from sktime.forecasting.ets import AutoETS
from sktime.utils.plotting import plot_series
# we prepare the full data set for convenience
# note that in the scenario we will "know" only part of this at certain time points
y = load_airline()
# December 1957
# this is the data known in December 1957
y_1957Dec = y[:-36]
# step 1: specifying the forecasting strategy
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon: one year ahead, all months
fh = np.arange(1, 13)
# step 3: this is the first time we use the model, so we fit it
forecaster.fit(y_1957Dec)
# step 4: obtaining the first batch of forecasts for Jan 1958 - Dec 1958
y_pred_1957Dec = forecaster.predict(fh)
# plotting predictions and past data
plot_series(y_1957Dec, y_pred_1957Dec, labels=["y_1957Dec", "y_pred_1957Dec"])
# January 1958
# new data is observed:
y_1958Jan = y[[-36]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Jan)
# step 6: making forecasts with the updated data
y_pred_1958Jan = forecaster.predict(fh)
# note that the fh is relative, so forecasts are automatically for 1 month later
# i.e., from Feb 1958 to Jan 1959
y_pred_1958Jan
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan"],
)
# February 1958
# new data is observed:
y_1958Feb = y[[-35]]
# step 5: we update the forecaster with the new data
forecaster.update(y_1958Feb)
# step 6: making forecasts with the updated data
y_pred_1958Feb = forecaster.predict(fh)
# plotting predictions and past data
plot_series(
y[:-35],
y_pred_1957Dec,
y_pred_1958Jan,
y_pred_1958Feb,
labels=["y_1957Dec", "y_pred_1957Dec", "y_pred_1958Jan", "y_pred_1958Feb"],
)
###Output
_____no_output_____
###Markdown
... and so on. A shorthand for running first `update` and then `predict` is `update_predict_single` - for some algorithms, this may be more efficient than the separate calls to `update` and `predict`:
###Code
# March 1958
# new data is observed:
y_1958Mar = y[[-34]]
# step 5&6: update/predict in one step
forecaster.update_predict_single(y_1958Mar, fh=fh)
###Output
_____no_output_____
###Markdown
1.4.2 Moving the "now" state without updating the modelIn the rolling deployment mode, may be useful to move the estimator's "now" state (the `cutoff`) to later, for example if no new data was observed, but time has progressed; or, if computations take too long, and forecasts have to be queried.The `update` interface provides an option for this, via the `update_params` argument of `update` and other update funtions.If `update_params` is set to `False`, no model update computations are performed; only data is stored, and the internal "now" state (the `cutoff`) is set to the most recent date.
###Code
# April 1958
# new data is observed:
y_1958Apr = y[[-33]]
# step 5: perform an update without re-computing the model parameters
forecaster.update(y_1958Apr, update_params=False)
###Output
_____no_output_____
###Markdown
1.4.3 Walk-forward predictions on a batch of data. `sktime` can also simulate the update/predict deployment mode with a full batch of data. This is not useful in deployment, as it requires all data to be available in advance; however, it is useful in playback, such as for simulations or model evaluation. The update/predict playback mode can be called using `update_predict` and a re-sampling constructor which encodes the precise walk-forward scheme.
###Code
# from sktime.datasets import load_airline
# from sktime.forecasting.ets import AutoETS
# from sktime.forecasting.model_selection import ExpandingWindowSplitter
# from sktime.utils.plotting import plot_series
###Output
_____no_output_____
###Markdown
NOTE: commented out - this part of the interface is currently undergoing a re-work. Contributions and PR are appreciated.
###Code
# for playback, the full data needs to be loaded in advance
# y = load_airline()
# step 1: specifying the forecasting strategy
# forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
# step 2: specifying the forecasting horizon
# fh = np.arange(1, 13)
# step 3: specifying the cross-validation scheme
# cv = ExpandingWindowSplitter()
# step 4: fitting the forecaster - fh should be passed here
# forecaster.fit(y[:-36], fh=fh)
# step 5: playback of the walk-forward update/predict scheme
# y_preds = forecaster.update_predict(y, cv)
###Output
_____no_output_____
###Markdown
1.5 Advanced evaluation workflow: rolling re-sampling and aggregate errors, rolling back-testing. To evaluate forecasters with respect to their performance in rolling forecasting, the forecaster needs to be tested in a set-up mimicking rolling forecasting, usually on past data. Note that the batch back-testing as in Section 1.3 would not be an appropriate evaluation set-up for rolling deployment, as that tests only a single forecast batch. The advanced evaluation workflow can be carried out using the `evaluate` benchmarking function. `evaluate` takes as arguments: - a `forecaster` to be evaluated - a `scikit-learn` re-sampling strategy for temporal splitting (`cv` below), e.g., `ExpandingWindowSplitter` or `SlidingWindowSplitter` - a `strategy` (string): whether the forecaster should always be refitted or just fitted once and then updated
###Code
from sktime.forecasting.arima import AutoARIMA
from sktime.forecasting.model_evaluation import evaluate
from sktime.forecasting.model_selection import ExpandingWindowSplitter
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
cv = ExpandingWindowSplitter(
step_length=12, fh=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], initial_window=72
)
df = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="refit", return_data=True)
df.iloc[:, :5]
# visualization of a forecaster evaluation
fig, ax = plot_series(
y,
df["y_pred"].iloc[0],
df["y_pred"].iloc[1],
df["y_pred"].iloc[2],
df["y_pred"].iloc[3],
df["y_pred"].iloc[4],
df["y_pred"].iloc[5],
markers=["o", "", "", "", "", "", ""],
labels=["y_true"] + ["y_pred (Backtest " + str(x) + ")" for x in range(6)],
)
ax.legend();
###Output
_____no_output_____
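###Markdown
For comparison, a sketch of the same benchmark with the forecaster fitted once and then updated on each fold (assumption: `"update"` is the strategy string for this behaviour, as described above):
###Code
# "update" avoids a full refit on every fold and is usually much faster than "refit"
df_update = evaluate(forecaster=forecaster, y=y, cv=cv, strategy="update", return_data=True)
df_update.iloc[:, :5]
###Output
_____no_output_____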
###Markdown
Todo: performance metrics, averages, and testing - contributions to `sktime` and the tutorial are welcome. 2. Forecasters in `sktime` - lookup, properties, main families. This section summarizes: how to search for forecasters in `sktime`; properties of forecasters, with corresponding search options and tags; and commonly used types of forecasters in `sktime`. 2.1 Listing all forecasters in `sktime`. Generally, all forecasters available in `sktime` can be listed with the `all_estimators` command. This will list all forecasters in `sktime`, even those whose soft dependencies are not installed.
###Code
from sktime.registry import all_estimators
all_estimators("forecaster", as_dataframe=True)
###Output
_____no_output_____
###Markdown
The entries of the last column of the resulting dataframe are classes which could be directly used for construction, or simply inspected for the correct import path. For logic that loops over forecasters, the default output format may be more convenient:
###Code
forecaster_list = all_estimators("forecaster", as_dataframe=False)
# this returns a list of (name, estimator) tuples
forecaster_list[0]
###Output
_____no_output_____
###Markdown
2.2 Forecaster tags. All forecasters in `sktime` have so-called tags which describe properties of the estimator, e.g., whether it is multivariate, probabilistic, or not. Use of tags, inspection, and retrieval will be described in this section. 2.2.1 Capability tags: multivariate, probabilistic, hierarchical. Every forecaster has tags, which are key-value pairs that can describe capabilities or internal implementation details. The most important "capability" style tags are the following: `requires-fh-in-fit` - a boolean. Whether the forecaster requires the forecasting horizon `fh` already in `fit` (`True`), or whether it can be passed late in `predict` (`False`). `scitype:y` - a string. Whether the forecaster is univariate (`"univariate"`), strictly multivariate (`"multivariate"`), or can deal with any number of variables (`"both"`). `capability:pred_int` - a boolean. Whether the forecaster can return probabilistic predictions via `predict_interval` etc, see Section 1.5. `ignores-exogeneous-X` - a boolean. Whether the forecaster makes use of exogeneous variables `X` (`False`) or not (`True`). If the forecaster does not use `X`, it can still be passed for interface uniformity, and will be ignored. `handles-missing-data` - a boolean. Whether the forecaster can deal with missing data in the inputs `X` or `y`. Tags of a forecaster instance can be inspected via the `get_tags` (lists all tags) and `get_tag` (gets the value for one tag) methods. Tag values may depend on hyper-parameter choices.
###Code
from sktime.forecasting.arima import ARIMA
ARIMA().get_tags()
###Output
_____no_output_____
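###Markdown
Individual tag values can be retrieved with `get_tag`; a minimal sketch, reusing the `ARIMA` import from the cell above:
###Code
# query a single capability tag, e.g., whether probabilistic predictions are available
ARIMA().get_tag("capability:pred_int")
###Output
_____no_output_____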
###Markdown
The `y_inner_mtype` and `X_inner_mtype` tags indicate whether the forecaster can deal with panel or hierarchical data natively - if a panel or hierarchical mtype occurs here, it does (see the data types tutorial). An explanation for all tags can be obtained using the `all_tags` utility, see Section 2.2.3. 2.2.2 Finding and listing forecasters by tag. To list forecasters with their tags, the `all_estimators` utility can be used with its `return_tags` argument. The resulting data frame can then be used for table queries or sub-setting.
###Code
from sktime.registry import all_estimators
all_estimators(
"forecaster", as_dataframe=True, return_tags=["scitype:y", "requires-fh-in-fit"]
)
###Output
_____no_output_____
###Markdown
To filter beforehand on certain tags and tag values, the `filter_tags` argument can be used:
###Code
# this lists all forecasters that can deal with multivariate data
all_estimators(
"forecaster", as_dataframe=True, filter_tags={"scitype:y": ["multivariate", "both"]}
)
###Output
_____no_output_____
###Markdown
Important note: as said above, tag values can depend on hyper-parameter settings, e.g., a `ForecastingPipeline` can handle multivariate data only if the forecaster in it can handle multivariate data. In retrieval as above, the tags for a class are usually set to indicate the most general potential value, e.g., if for some parameter choice the estimator can handle multivariate data, it will appear on the list. 2.2.3 Listing all forecaster tags. To list all forecaster tags with an explanation of the tag, the `all_tags` utility can be used:
###Code
import pandas as pd
from sktime.registry import all_tags
# wrapping this in a pandas DataFrame for pretty display
pd.DataFrame(all_tags(estimator_types="forecaster"))[[0, 3]]
###Output
_____no_output_____
###Markdown
2.3 Common forecasters in `sktime`. `sktime` supports a number of commonly used forecasters, many of them interfaced from state-of-the-art forecasting packages. All forecasters are available under the unified `sktime` interface. Some classes that are currently stably supported are: * `ExponentialSmoothing`, `ThetaForecaster`, and `AutoETS` from `statsmodels` * `ARIMA` and `AutoARIMA` from `pmdarima` * `AutoARIMA` from `statsforecast` * `BATS` and `TBATS` from `tbats` * `PolynomialTrend` for forecasting polynomial trends * `Prophet` which interfaces Facebook `prophet`. This is not the full list; use `all_estimators` as demonstrated in Sections 2.1 and 2.2 for that. For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1). For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
2.3.1 Exponential smoothing, theta forecaster, AutoETS from `statsmodels`. `sktime` interfaces a number of statistical forecasting algorithms from `statsmodels`: exponential smoothing, theta, and auto-ETS. For example, to use exponential smoothing with an additive trend component and multiplicative seasonality on the airline data set, we can write the following. Note that since this is monthly data, a good choice for seasonal periodicity (sp) is 12 (= hypothesized periodicity of a year).
###Code
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
forecaster = ExponentialSmoothing(trend="add", seasonal="multiplicative", sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The exponential smoothing state space model can also be automated, similar to the [ets](https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/ets) function in R. This is implemented in the `AutoETS` forecaster.
###Code
from sktime.forecasting.ets import AutoETS
forecaster = AutoETS(auto=True, sp=12, n_jobs=-1)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# todo: explain Theta; explain how to get theta-lines
###Output
_____no_output_____
###Markdown
2.3.2 ARIMA and autoARIMA. `sktime` interfaces `pmdarima` for its ARIMA class models. For a classical ARIMA model with set parameters, use the `ARIMA` forecaster:
###Code
from sktime.forecasting.arima import ARIMA
forecaster = ARIMA(
order=(1, 1, 0), seasonal_order=(0, 1, 0, 12), suppress_warnings=True
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
`AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal (p, d, q) parameters:
###Code
from sktime.forecasting.arima import AutoARIMA
forecaster = AutoARIMA(sp=12, suppress_warnings=True)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
# to obtain the fitted parameters, run
forecaster.get_fitted_params()
# should these not include pdq?
###Output
_____no_output_____
###Markdown
2.3.3 BATS and TBATS. `sktime` interfaces BATS and TBATS from the [`tbats`](https://github.com/intive-DataScience/tbats) package.
###Code
from sktime.forecasting.bats import BATS
forecaster = BATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
from sktime.forecasting.tbats import TBATS
forecaster = TBATS(sp=12, use_trend=True, use_box_cox=False)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.3.4 Facebook prophet. `sktime` provides an interface to [`fbprophet`](https://github.com/facebook/prophet) by Facebook.
###Code
from sktime.forecasting.fbprophet import Prophet
###Output
_____no_output_____
###Markdown
The current interface does not support period indices, only pd.DatetimeIndex. Consider improving this by contributing to `sktime`.
###Code
# Convert index to pd.DatetimeIndex
z = y.copy()
z = z.to_timestamp(freq="M")
z_train, z_test = temporal_train_test_split(z, test_size=36)
forecaster = Prophet(
seasonality_mode="multiplicative",
n_changepoints=int(len(y_train) / 12),
add_country_holidays={"country_name": "Germany"},
yearly_seasonality=True,
weekly_seasonality=False,
daily_seasonality=False,
)
forecaster.fit(z_train)
y_pred = forecaster.predict(fh.to_relative(cutoff=y_train.index[-1]))
y_pred.index = y_test.index
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.3.5 State Space Model (Structural Time Series). We can also use the [`UnobservedComponents`](https://www.statsmodels.org/stable/generated/statsmodels.tsa.statespace.structural.UnobservedComponents.html) class from [`statsmodels`](https://www.statsmodels.org/stable/index.html) to generate predictions using a state space model.
###Code
from sktime.forecasting.structural import UnobservedComponents
# We can model seasonality using Fourier modes as in the Prophet model.
forecaster = UnobservedComponents(
level="local linear trend", freq_seasonal=[{"period": 12, "harmonics": 10}]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
2.3.6 AutoARIMA from [StatsForecast](https://github.com/Nixtla/statsforecast). `sktime` interfaces `StatsForecast` for its `AutoARIMA` class models. `AutoARIMA` is an automatically tuned `ARIMA` variant that obtains the optimal (p, d, q) parameters:
###Code
from sktime.forecasting.statsforecast import StatsForecastAutoARIMA
forecaster = StatsForecastAutoARIMA(sp=12)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred)
###Output
_____no_output_____
###Markdown
3. Advanced composition patterns - pipelines, reduction, autoML, and more. `sktime` supports a number of advanced composition patterns to create forecasters out of simpler components: * Reduction - building a forecaster from estimators of "simpler" scientific types, like `scikit-learn` regressors. A common example is feature/label tabulation by rolling window, aka the "direct reduction strategy". * Tuning - determining values for hyper-parameters of a forecaster in a data-driven manner. A common example is grid search on temporally rolling re-sampling of train/test splits. * Pipelining - concatenating transformers with a forecaster to obtain one forecaster. A common example is detrending and deseasonalizing then forecasting; an instance of this is the common "STL forecaster". * AutoML, also known as automated model selection - using automated tuning strategies to select not only hyper-parameters but entire forecasting strategies. A common example is on-line multiplexer tuning. For illustration, all estimators below will be presented on the basic forecasting workflow - though they also support the advanced forecasting and evaluation workflows under the unified `sktime` interface (see Section 1). For use in the other workflows, simply replace the "forecaster specification block" ("`forecaster=`") by the forecaster specification block in the examples presented below.
###Code
# imports necessary for this chapter
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import mean_absolute_percentage_error
from sktime.utils.plotting import plot_series
# data loading for illustration (see section 1 for explanation)
y = load_airline()
y_train, y_test = temporal_train_test_split(y, test_size=36)
fh = ForecastingHorizon(y_test.index, is_relative=False)
###Output
_____no_output_____
###Markdown
3.1 Reduction: from forecasting to regression. `sktime` provides a meta-estimator that allows the use of any `scikit-learn` estimator for forecasting. It is * **modular** and **compatible with scikit-learn**, so that we can easily apply any scikit-learn regressor to solve our forecasting problem, * **parametric** and **tuneable**, allowing us to tune hyper-parameters such as the window length or the strategy to generate forecasts, * **adaptive**, in the sense that it adapts the scikit-learn estimator interface to that of a forecaster, making sure that we can tune and properly evaluate our model. **Example**: we will define a tabulation reduction strategy to convert a k-nearest neighbors regressor (`sklearn` `KNeighborsRegressor`) into a forecaster. The composite algorithm is an object compliant with the `sktime` forecaster interface (picture: big robot), and contains the regressor as a parameter accessible component (picture: little robot). In `fit`, the composite algorithm uses a sliding window strategy to tabulate the data, and fits the regressor to the tabulated data (picture: left half). In `predict`, the composite algorithm presents the regressor with the last observed window to obtain predictions (picture: right half). Below, the composite is constructed using the shorthand function `make_reduction`, which produces a `sktime` estimator of forecaster scitype. It is called with a constructed `scikit-learn` regressor, `regressor`, and additional parameters which can later be tuned as hyper-parameters.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
regressor = KNeighborsRegressor(n_neighbors=1)
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
In the above example we use the "recursive" reduction strategy. Other implemented strategies are: * "direct", * "dirrec", * "multioutput" (a sketch of the "direct" variant follows after the parameter inspection below). Parameters can be inspected using `scikit-learn` compatible `get_params` functionality (and set using `set_params`). This provides tunable and nested access to parameters of the `KNeighborsRegressor` (as `estimator` etc.), and the `window_length` of the reduction strategy. Note that the `strategy` is not accessible, as underneath the utility function this is mapped onto separate algorithm classes. For tuning over algorithms, see the "autoML" section below.
###Code
forecaster.get_params()
###Output
_____no_output_____
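###Markdown
For illustration, a sketch of the "direct" variant mentioned above (assumption: `regressor`, `y_train`, `y_test` and `fh` from the cells above are in scope):
###Code
# the direct strategy fits one regressor per step ahead, so fh is required in fit
forecaster_direct = make_reduction(regressor, window_length=15, strategy="direct")
forecaster_direct.fit(y_train, fh=fh)
y_pred_direct = forecaster_direct.predict(fh)
mean_absolute_percentage_error(y_test, y_pred_direct, symmetric=False)
###Output
_____no_output_____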
###Markdown
3.2 Pipelining, detrending and deseasonalization. A common composition motif is pipelining: for example, first deseasonalizing or detrending the data, then forecasting the detrended/deseasonalized series. When forecasting, one needs to add the trend and seasonal component back to the data. 3.2.1 The basic forecasting pipeline. `sktime` provides a generic pipeline object for this kind of composite modelling, the `TransformedTargetForecaster`. It chains an arbitrary number of transformations with a forecaster. The transformations should be instances of estimators with series-to-series-transformer scitype. An example of the syntax is below:
###Code
from sktime.forecasting.arima import ARIMA
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.transformations.series.detrend import Deseasonalizer
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
The `TransformedTargetForecaster` is constructed with a list of steps, each a pair of name and estimator. The last estimator should be of forecaster scitype, the other estimators should be series-to-series transformers which possess both a `transform` and `inverse_transform` method. The resulting estimator is of forecaster scitype and has all interface defining methods. In `fit`, all transformers apply `fit_transform` to the data, then the forecaster's `fit`; in `predict`, first the forecaster's `predict` is applied, then the transformers' `inverse_transform` in reverse order. The same pipeline, as above, can also be constructed with the multiplication dunder method `*`. This creates a `TransformedTargetForecaster` as above, with components given default names.
###Code
forecaster = Deseasonalizer(model="multiplicative", sp=12) * ARIMA()
forecaster
###Output
_____no_output_____
###Markdown
The names in a dunder constructed pipeline are made unique in case, e.g., two deseasonalizers are used. Example of a multiple seasonality model:
###Code
forecaster = (
Deseasonalizer(model="multiplicative", sp=12)
* Deseasonalizer(model="multiplicative", sp=3)
* ARIMA()
)
forecaster.get_params()
###Output
_____no_output_____
###Markdown
3.2.2 The `Detrender` as pipeline component. For detrending, we can use the `Detrender`. This is an estimator of series-to-series-transformer scitype that wraps an arbitrary forecaster. For example, for linear detrending, we can use `PolynomialTrendForecaster` to fit a linear trend, and then subtract/add it using the `Detrender` transformer inside `TransformedTargetForecaster`. To understand better what happens, we first examine the detrender separately:
###Code
from sktime.forecasting.trend import PolynomialTrendForecaster
from sktime.transformations.series.detrend import Detrender
# linear detrending
forecaster = PolynomialTrendForecaster(degree=1)
transformer = Detrender(forecaster=forecaster)
yt = transformer.fit_transform(y_train)
# internally, the Detrender uses the in-sample predictions
# of the PolynomialTrendForecaster
forecaster = PolynomialTrendForecaster(degree=1)
fh_ins = -np.arange(len(y_train)) # in-sample forecasting horizon
y_pred = forecaster.fit(y_train).predict(fh=fh_ins)
plot_series(y_train, y_pred, yt, labels=["y_train", "fitted linear trend", "residuals"]);
###Output
_____no_output_____
###Markdown
Since the `Detrender` is of scitype series-to-series-transformer, it can be used in the `TransformedTargetForecaster` for detrending any forecaster:
###Code
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
("forecast", ARIMA()),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.2.3 Complex pipeline composites and parameter inspection. `sktime` follows the `scikit-learn` philosophy of composability and nested parameter inspection. As long as an estimator has the right scitype, it can be used as part of any composition principle requiring that scitype. Above, we have already seen the example of a forecaster inside a `Detrender`, which is an estimator of scitype series-to-series-transformer, with one component of forecaster scitype. Similarly, in a `TransformedTargetForecaster`, we can use the reduction composite from Section 3.1 as the last forecaster element in the pipeline, which inside has an estimator of tabular regressor scitype, the `KNeighborsRegressor`:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
forecaster = TransformedTargetForecaster(
[
("deseasonalize", Deseasonalizer(model="multiplicative", sp=12)),
("detrend", Detrender(forecaster=PolynomialTrendForecaster(degree=1))),
(
"forecast",
make_reduction(
KNeighborsRegressor(),
scitype="tabular-regressor",
window_length=15,
strategy="recursive",
),
),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with `scikit-learn` models, we can inspect and access parameters of any component via `get_params` and `set_params`:
###Code
forecaster.get_params()
###Output
_____no_output_____
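###Markdown
Correspondingly, nested parameters can be set with `set_params`; a minimal sketch (assumption: the `forecast__window_length` key, named after the "forecast" step as listed by `get_params` above):
###Code
# change the window length of the reduction forecaster inside the pipeline
forecaster.set_params(**{"forecast__window_length": 12})
###Output
_____no_output_____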
###Markdown
3.3 Parameter tuning. `sktime` provides parameter tuning strategies as compositors of forecaster scitype, similar to `scikit-learn`'s `GridSearchCV`. 3.3.1 Basic tuning using `ForecastingGridSearchCV`. The compositor `ForecastingGridSearchCV` (and other tuners) are constructed with a forecaster to tune, a cross-validation constructor, a `scikit-learn` parameter grid, and parameters specific to the tuning strategy. Cross-validation constructors follow the `scikit-learn` interface for re-samplers, and can be slotted in exchangeably. As an example, we show tuning of the window length in the reduction compositor from Section 3.1, using temporal sliding window tuning:
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
regressor = KNeighborsRegressor()
forecaster = make_reduction(regressor, window_length=15, strategy="recursive")
param_grid = {"window_length": [7, 12, 15]}
# We fit the forecaster on an initial window which is 80% of the historical data
# then use temporal sliding window cross-validation to find the optimal hyper-parameters
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=20)
gscv = ForecastingGridSearchCV(
forecaster, strategy="refit", cv=cv, param_grid=param_grid
)
###Output
_____no_output_____
###Markdown
As with other composites, the resulting forecaster provides the unified interface of `sktime` forecasters - window splitting, tuning, etc. require no manual effort and are done behind the unified interface:
###Code
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
Tuned parameters can be accessed in the `best_params_` attribute:
###Code
gscv.best_params_
###Output
_____no_output_____
###Markdown
An instance of the best forecaster, with hyper-parameters set, can be retrieved by accessing the `best_forecaster_` attribute:
###Code
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.3.2 Tuning of complex composites. As in `scikit-learn`, parameters of nested components can be tuned by accessing their `get_params` key - by default this is `[estimatorname]__[parametername]` if `[estimatorname]` is the name of the component, and `[parametername]` the name of a parameter within the estimator `[estimatorname]`. For example, below we tune the `KNeighborsRegressor` component's `n_neighbors`, in addition to tuning `window_length`. The tuneable parameters can easily be queried using `forecaster.get_params()`.
###Code
from sklearn.neighbors import KNeighborsRegressor
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
param_grid = {"window_length": [7, 12, 15], "estimator__n_neighbors": np.arange(1, 10)}
regressor = KNeighborsRegressor()
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
gscv.best_params_
###Output
_____no_output_____
###Markdown
An alternative to the above is tuning the regressor separately, using `scikit-learn`'s `GridSearchCV` and a separate parameter grid. As this does not use the "overall" performance metric to tune the inner regressor, performance of the composite forecaster may vary.
###Code
from sklearn.model_selection import GridSearchCV
# tuning the 'n_neighbors' hyperparameter of KNeighborsRegressor from scikit-learn
regressor_param_grid = {"n_neighbors": np.arange(1, 10)}
forecaster_param_grid = {"window_length": [7, 12, 15]}
# create a tunable regressor with GridSearchCV
regressor = GridSearchCV(KNeighborsRegressor(), param_grid=regressor_param_grid)
forecaster = make_reduction(
regressor, scitype="tabular-regressor", strategy="recursive"
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
NOTE: a smart implementation of this would use caching to save partial results from the inner tuning and reduce runtime substantially - currently `sktime` does not support this. Consider helping to improve `sktime`. 3.3.3 Selecting the metric and retrieving scores. All tuning algorithms in `sktime` allow the user to set a score; for forecasting the default is mean absolute percentage error. The score can be set using the `scoring` argument, to any scorer function or class, as in Section 1.3. Re-sampling tuners retain performances on individual forecast re-sample folds, which can be retrieved from the `cv_results_` attribute after the forecaster has been fit via a call to `fit`. In the above example, using the mean squared error instead of the mean absolute percentage error for tuning would be done by defining the forecaster as follows:
###Code
from sktime.performance_metrics.forecasting import MeanSquaredError
mse = MeanSquaredError()
param_grid = {"window_length": [7, 12, 15]}
regressor = KNeighborsRegressor()
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.8), window_length=30)
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=param_grid, scoring=mse)
###Output
_____no_output_____
###Markdown
The performances on individual folds can be accessed as follows, after fitting:
###Code
gscv.fit(y_train)
gscv.cv_results_
###Output
_____no_output_____
###Markdown
3.4 autoML aka automated model selection, ensembling and hedging. `sktime` provides a number of compositors for ensembling and automated model selection. In contrast to tuning, which uses data-driven strategies to find optimal hyper-parameters for a fixed forecaster, the strategies in this section combine or select on the level of estimators, using a collection of forecasters to combine or select from. The strategies discussed in this section are: * autoML aka automated model selection * simple ensembling * prediction weighted ensembles with weight updates, and hedging strategies. 3.4.1 autoML aka automatic model selection, using tuning plus multiplexer. The most flexible way to perform model selection over forecasters is by using the `MultiplexForecaster`, which exposes the choice of a forecaster from a list as a hyper-parameter that is tunable by generic hyper-parameter tuning strategies such as in Section 3.3. In isolation, `MultiplexForecaster` is constructed with a named list, `forecasters`, of forecasters. It has a single hyper-parameter, `selected_forecaster`, which can be set to the name of any forecaster in `forecasters`, and behaves exactly like the forecaster keyed in `forecasters` by `selected_forecaster`.
###Code
from sktime.forecasting.compose import MultiplexForecaster
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.forecasting.naive import NaiveForecaster
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
],
)
forecaster.set_params(**{"selected_forecaster": "naive"})
# now forecaster behaves like NaiveForecaster(strategy="last")
forecaster.set_params(**{"selected_forecaster": "ets"})
# now forecaster behaves like ExponentialSmoothing(trend="add", sp=12))
###Output
_____no_output_____
###Markdown
The `MultiplexForecaster` is not too useful in isolation, but allows for flexible autoML when combined with a tuning wrapper. The below defines a forecaster that selects one of `NaiveForecaster` and `ExponentialSmoothing` by sliding window tuning as in Section 3.3. Combined with rolling use of the forecaster via the `update` functionality (see Section 1.4), the tuned multiplexer can switch back and forth between `NaiveForecaster` and `ExponentialSmoothing`, depending on performance, as time progresses.
###Code
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
forecaster = MultiplexForecaster(
forecasters=[
("naive", NaiveForecaster(strategy="last")),
("ets", ExponentialSmoothing(trend="add", sp=12)),
]
)
cv = SlidingWindowSplitter(initial_window=int(len(y_train) * 0.5), window_length=30)
forecaster_param_grid = {"selected_forecaster": ["ets", "naive"]}
gscv = ForecastingGridSearchCV(forecaster, cv=cv, param_grid=forecaster_param_grid)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
As with any tuned forecaster, best parameters and an instance of the tuned forecaster can be retrieved using `best_params_` and `best_forecaster_`:
###Code
gscv.best_params_
gscv.best_forecaster_
###Output
_____no_output_____
###Markdown
3.4.2 autoML: selecting transformer combinations via `OptionalPassthrough`. `sktime` also provides capabilities for automated selection of pipeline components *inside* a pipeline, i.e., of the pipeline structure. This is achieved with the `OptionalPassthrough` transformer. The `OptionalPassthrough` transformer allows tuning whether a transformer inside a pipeline is applied to the data or not. For example, if we want to tune whether `sklearn.StandardScaler` is bringing an advantage to the forecast or not, we wrap it in `OptionalPassthrough`. Internally, `OptionalPassthrough` has a hyperparameter `passthrough: bool` that is tuneable; when `False` the composite behaves like the wrapped transformer, when `True` it ignores the transformer within. To make effective use of `OptionalPassthrough`, define a suitable parameter set using the `__` (double underscore) notation familiar from `scikit-learn`. This allows accessing and tuning attributes of nested objects like TabularToSeriesAdaptor(StandardScaler()). We can use `__` multiple times if we have more than two levels of nesting. In the following example, we take a deseasonalize/scale pipeline and tune over the four possible combinations of deseasonalizer and scaler being included in the pipeline yes/no (2 times 2 = 4), as well as over the forecaster's and the scaler's parameters. Note: this could be arbitrarily combined with `MultiplexForecaster`, as in Section 3.4.1, to select over pipeline architecture as well as over pipeline structure. Note: `scikit-learn` and `sktime` do not support conditional parameter sets at current (unlike, e.g., the `mlr3` package). This means that the grid search will optimize over the `scaler`'s parameters even when it is skipped. Designing/implementing this capability would be an interesting area for contributions or research.
###Code
from sklearn.preprocessing import StandardScaler
from sktime.datasets import load_airline
from sktime.forecasting.compose import TransformedTargetForecaster
from sktime.forecasting.model_selection import (
ForecastingGridSearchCV,
SlidingWindowSplitter,
)
from sktime.forecasting.naive import NaiveForecaster
from sktime.transformations.series.adapt import TabularToSeriesAdaptor
from sktime.transformations.series.compose import OptionalPassthrough
from sktime.transformations.series.detrend import Deseasonalizer
# create pipeline
pipe = TransformedTargetForecaster(
steps=[
("deseasonalizer", OptionalPassthrough(Deseasonalizer())),
("scaler", OptionalPassthrough(TabularToSeriesAdaptor(StandardScaler()))),
("forecaster", NaiveForecaster()),
]
)
# putting it all together in a grid search
cv = SlidingWindowSplitter(
initial_window=60, window_length=24, start_with_window=True, step_length=24
)
param_grid = {
"deseasonalizer__passthrough": [True, False],
"scaler__transformer__transformer__with_mean": [True, False],
"scaler__passthrough": [True, False],
"forecaster__strategy": ["drift", "mean", "last"],
}
gscv = ForecastingGridSearchCV(forecaster=pipe, param_grid=param_grid, cv=cv, n_jobs=-1)
gscv.fit(y_train)
y_pred = gscv.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.3 Simple ensembling strategies. TODO - contributions in this section are appreciated.
###Code
from sktime.forecasting.compose import EnsembleForecaster
ses = ExponentialSmoothing(sp=12)
holt = ExponentialSmoothing(trend="add", damped_trend=False, sp=12)
damped = ExponentialSmoothing(trend="add", damped_trend=True, sp=12)
forecaster = EnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
]
)
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____
###Markdown
3.4.4 Prediction weighted ensembles and hedge ensembles. For model evaluation, we sometimes want to evaluate multiple forecasts, using temporal cross-validation with a sliding window over the test data. For this purpose, we can leverage the forecasters from the `online_forecasting` module, which use a composite forecaster, `PredictionWeightedEnsemble`, to keep track of the loss accumulated by each forecaster and create a prediction weighted by the predictions of the most "accurate" forecasters. Note that the forecasting task is changed: we make 35 predictions, since we need the first prediction to help update the weights; we do not predict 36 steps ahead.
###Code
from sktime.forecasting.all import mean_squared_error
from sktime.forecasting.online_learning import (
NormalHedgeEnsemble,
OnlineEnsembleForecaster,
)
###Output
_____no_output_____
###Markdown
First we need to initialize a `PredictionWeightedEnsembler` that will keep track of the loss accumulated by each forecaster and define which loss function we would like to use.
###Code
hedge_expert = NormalHedgeEnsemble(n_estimators=3, loss_func=mean_squared_error)
###Output
_____no_output_____
###Markdown
We can then create the forecaster by defining the individual forecasters and specifying the `PredictionWeightedEnsembler` we are using. Then, by fitting our forecasters and performing updates and prediction with the `update_predict_single` function, we get:
###Code
forecaster = OnlineEnsembleForecaster(
[
("ses", ses),
("holt", holt),
("damped", damped),
],
ensemble_algorithm=hedge_expert,
)
forecaster.fit(y=y_train, fh=fh)
y_pred = forecaster.update_predict_single(y_test)
plot_series(y_train, y_test, y_pred, labels=["y_train", "y_test", "y_pred"])
mean_absolute_percentage_error(y_test, y_pred, symmetric=False)
###Output
_____no_output_____ |
notebooks/generators/02-baseline-generation.ipynb | ###Markdown
Simulated Baseline Generation The simulated baseline curve will be constructed using a simple polynomial curve that will be randomly adjusted between a minimum and maximum exponent value. The varying exponent in the baseline curve is used to simulate samples having different baselines independent of the concentration level.
###Code
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from scipy.stats import norm
mpl.style.use('seaborn-notebook')
plt.rcParams["figure.figsize"] = (12, 5)
###Output
_____no_output_____
###Markdown
The following plot shows the baseline curves using both the minimum and maximum exponent values.
###Code
xnum = 600
np.random.seed(42)
x = np.arange(0, xnum, 1.0)
E1_ = (-1e-7*x**2.1)
E1F_ = E1_ + np.min(E1_)*-1.0
E2_ = (-1e-7*x**2.2)
E2F_ = E2_ + np.min(E2_)*-1.0
fig, axs = plt.subplots()
axs.plot(x, E1F_, label='min')
axs.plot(x, E2F_, label='max')
fig.suptitle('Baseline Curve Base Shapes')
plt.legend()
###Output
_____no_output_____
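###Markdown
A minimal sketch (an assumption about the generation step, not shown in the code above) of drawing one random exponent per sample between the minimum (2.1) and maximum (2.2) values, using the same shift convention as above:
###Code
# draw one exponent per simulated sample and build the shifted baseline curves
exponents = np.random.uniform(2.1, 2.2, size=5)
baselines = np.array([-1e-7 * x**e for e in exponents])
baselines += -1.0 * baselines.min(axis=1, keepdims=True) # shift each curve so its minimum is zero, as for E1F_ and E2F_
###Output
_____no_output_____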
###Markdown
Baseline Curves for Dataset Generation This project will vary the baseline curve exponent to generate the simulated datasets. The goal of this project is to determine how well each algorithm can handle signals with varying baselines and concentration levels. The following plots show the pure signal combined with each of the minimum and maximum baseline curves at 25% and 75% concentration level.
###Code
S_1 = norm.pdf(x, loc=310.0, scale=40.0)
S_2 = norm.pdf(x, loc=390.0, scale=20.0)
S_true = np.vstack((S_1, S_2))
C_true = np.array([[0.25, 0.75], [0.75, 0.25]])
signal = np.dot(C_true, S_true)
fig, axs = plt.subplots()
for i, level in enumerate(C_true):
axs.plot(x, signal[i], label='{0:.2f}-signal'.format(C_true[i, 0]))
axs.plot(x, signal[i]+E1F_, label='{0:.2f}-min'.format(C_true[i, 0]))
axs.plot(x, signal[i]+E2F_, label='{0:.2f}-max'.format(C_true[i, 0]))
fig.suptitle('Baseline Range')
plt.legend()
###Output
_____no_output_____ |
5_5/analyze.ipynb | ###Markdown
The plots of the scores. I changed the epsilon from 0.01 to 0.001 at the 6035th game.
###Code
import matplotlib.pyplot as plt
import pickle
import numpy as np
with open('mem.pickle','rb') as f:
(_,_,_,scores)=pickle.load(f)
def moving_avg(x,p=10):
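# simple moving average with window size p; returns len(x) - p + 1 averaged values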
y = []
for i in range(len(x)-p+1):
y.append(np.mean(x[i:i+p]))
return y
np.max(scores)
plt.plot(scores)
plt.plot(moving_avg(scores,3))
plt.plot(moving_avg(scores,5))
plt.plot(moving_avg(scores,10))
plt.xlabel('games')
plt.ylabel('score')
plt.savefig('plot.png')
###Output
_____no_output_____ |
No Show Appointments Project.ipynb | ###Markdown
NoShowAppointments-kagglev. I have selected the [NoShowAppointment](https://www.kaggle.com/joniarroba/noshowappointments/home) data, which contains patients' details. The data has ScheduledDay, AppointmentDay, Gender, Age, and other related columns, including ***NoShow***. Here, ***NoShow*** tells whether the patient came on the scheduled day or not. I am going to do an analysis to find out ***what the various reasons for missing the scheduled day could be***. ***1) Reading data from csv files***
###Code
# importing libraries
import unicodecsv
import numpy as np
import pandas as pds
import matplotlib.pyplot as plt
from matplotlib import pylab
import seaborn as sns
sns.set_style("whitegrid")
#reading csv file using pandas
noshow_appointments = pds.read_csv('C:/Users/Administrator/ML/Project/noshowappointments_kaggle.csv')
#print first 5 rows from tables
print(noshow_appointments.head())
len(noshow_appointments)
noshow_appointments.info()
noshow_appointments.describe()
print(noshow_appointments.columns)
###Output
Index(['PatientId', 'AppointmentID', 'Gender', 'ScheduledDay',
'AppointmentDay', 'Age', 'Neighbourhood', 'Scholarship', 'Hipertension',
'Diabetes', 'Alcoholism', 'Handcap', 'SMS_received', 'No-show'],
dtype='object')
###Markdown
***2) Correcting typos in column names, so they show consistency in name format***
###Code
noshow_appointments.rename(columns = {'AppointmentID' : 'AppointmentId','Hipertension' : 'Hypertension','Alcoholism':'Alchoholism', 'Handcap': 'Handicap', 'No-show' : 'No_show'}, inplace = True)
print(noshow_appointments.columns)
###Output
Index(['PatientId', 'AppointmentId', 'Gender', 'ScheduledDay',
'AppointmentDay', 'Age', 'Neighbourhood', 'Scholarship', 'Hypertension',
'Diabetes', 'Alchoholism', 'Handicap', 'SMS_received', 'No_show'],
dtype='object')
###Markdown
**3) Correcting data types of values** Date and time are mixed together and it looks messy.
###Code
print(noshow_appointments.ScheduledDay.head())
print(noshow_appointments.AppointmentDay.head())
###Output
0 2016-04-29T18:38:08Z
1 2016-04-29T16:08:27Z
2 2016-04-29T16:19:04Z
3 2016-04-29T17:29:31Z
4 2016-04-29T16:07:23Z
Name: ScheduledDay, dtype: object
0 2016-04-29T00:00:00Z
1 2016-04-29T00:00:00Z
2 2016-04-29T00:00:00Z
3 2016-04-29T00:00:00Z
4 2016-04-29T00:00:00Z
Name: AppointmentDay, dtype: object
###Markdown
***3.1) For convenience, I am going to convert the ScheduledDay and AppointmentDay columns into datetime64 format***
###Code
noshow_appointments.ScheduledDay = noshow_appointments.ScheduledDay.apply(np.datetime64)
noshow_appointments.AppointmentDay = noshow_appointments.AppointmentDay.apply(np.datetime64)
print(noshow_appointments.ScheduledDay.head())
print(noshow_appointments.AppointmentDay.head())
###Output
0 2016-04-29 18:38:08
1 2016-04-29 16:08:27
2 2016-04-29 16:19:04
3 2016-04-29 17:29:31
4 2016-04-29 16:07:23
Name: ScheduledDay, dtype: datetime64[ns]
0 2016-04-29
1 2016-04-29
2 2016-04-29
3 2016-04-29
4 2016-04-29
Name: AppointmentDay, dtype: datetime64[ns]
###Markdown
**3.2) Number of waiting days**
###Code
#writing a function to get date object
def converting_date_format(file):
date_format = file.date()
return date_format
#converting all data in each row of 'ScheduledDay' and 'AppointmentDay'
noshow_appointments['Scheduled_Date'] = noshow_appointments['ScheduledDay'].apply(converting_date_format)
noshow_appointments['Appointment_Date'] = noshow_appointments['AppointmentDay'].apply(converting_date_format)
noshow_appointments['NumberWaitingDays'] = noshow_appointments['Appointment_Date'] - noshow_appointments['Scheduled_Date']
#Converting timedelta type to int type
noshow_appointments.NumberWaitingDays = (noshow_appointments.NumberWaitingDays / np.timedelta64(1, 'D')).astype(int)
###Output
_____no_output_____
###Markdown
**3.3) Scheduled Day and Month** Adding a column for the scheduled day and month.
###Code
from datetime import datetime
#function to get weekday and month of scheduled day
def get_day_name(datefile):
date_trip = datefile.date()
day = date_trip.strftime("%a")
return day
def get_month_name(datefile):
date_trip = datefile.date()
month = date_trip.strftime("%b")
return month
noshow_appointments['Scheduled_Day_Name'] = noshow_appointments.ScheduledDay.apply(get_day_name)
noshow_appointments['Scheduled_Month_Name'] = noshow_appointments.ScheduledDay.apply(get_month_name)
###Output
_____no_output_____
###Markdown
We also create a new feature called HourOfTheDay, which will indicate the hour of the day at which the appointment was booked.
###Code
def calculateHour(timestamp):
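# parse the hour, minute and second from the timestamp string and round to the nearest hour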
timestamp = str(timestamp)
hour = int(timestamp[11:13])
minute = int(timestamp[14:16])
second = int(timestamp[17:])
return round(hour + minute/60 + second/3600)
noshow_appointments['HourOfTheDay'] = noshow_appointments.ScheduledDay.apply(calculateHour)
###Output
_____no_output_____
###Markdown
**Reviewing data for erroneous values and NaNs** Check for any erroneous values and NaNs in the data.
###Code
print('Age:',sorted(noshow_appointments.Age.unique()))
print('Gender:',noshow_appointments.Gender.unique())
print('Diabetes:',noshow_appointments.Diabetes.unique())
print('Alchoholism:',noshow_appointments.Alchoholism.unique())
print('Hypertension:',noshow_appointments.Hypertension.unique())
print('Handicap:',noshow_appointments.Handicap.unique())
print('Scholarship:',noshow_appointments.Scholarship.unique())
print('SMS_received:',noshow_appointments.SMS_received.unique())
print('NumberWaitingDays: ',noshow_appointments.NumberWaitingDays.unique())
print('Scheduled_Day_Name: ',noshow_appointments.Scheduled_Day_Name.unique())
print('Scheduled_Month_Name: ',noshow_appointments.Scheduled_Month_Name.unique())
sns.stripplot(data = noshow_appointments.Age, jitter = True)
plt.ylim(0, 200)
plt.show()
###Output
_____no_output_____
###Markdown
We can see some impossible ages, such as -1, and extraordinary ages greater than 100. Ages greater than 100 appear to be very rare cases, so I am going to exclude them from the data. I will treat ages greater than 95 as outliers.
###Code
noshow_appointments = noshow_appointments[(noshow_appointments.Age >= 0) & (noshow_appointments.Age <= 95)]
noshow_appointments.Age.unique()
sns.stripplot(data = noshow_appointments.Age, jitter = True)
plt.ylim(0, 100)
plt.show()
###Output
_____no_output_____
###Markdown
**Checking for outliers in NumberWaitingDays**
###Code
#noshow_appointments['NumberWaitingDays'] = noshow_appointments.index
sns.stripplot(data = noshow_appointments, y = 'NumberWaitingDays', jitter = True)
plt.ylim(0, 500)
plt.show()
#type(noshow_appointments.NumberWaitingDays[0])
###Output
_____no_output_____
###Markdown
Looks like NumberWaitingDays does not have outliers. Good to go. **Checking values of the No_show column**
###Code
target = sns.countplot(x="No_show", data=noshow_appointments)
###Output
_____no_output_____
###Markdown
According to the above diagram, the 'No' column tells the number of people who showed up on the appointment date, and the 'Yes' column tells the number of people who did not show up on the appointment date. These labels can be confusing.
###Code
#adding a new column as 'Noshow' and adding 0 value inplace of 'No' and 1 value inplace of 'Yes'
noshow_appointments['Noshow'] = [0 if i == "No" else 1 for i in noshow_appointments['No_show']]
###Output
_____no_output_____
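###Markdown
An equivalent encoding, shown here only as a sketch of the same transformation, uses `Series.map` with an explicit dictionary; any value other than 'No'/'Yes' would become NaN rather than being silently mapped to 1.
###Code
# Equivalent 0/1 encoding of No_show via an explicit mapping (sketch)
noshow_appointments['Noshow'] = noshow_appointments['No_show'].map({'No': 0, 'Yes': 1})
###Output
_____no_output_____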
###Markdown
***4) Data Investigation*** **4.1) Investigation based on Age** **4.1.1) Age below 18**In this analysis, I assume that since 18 is the minimum age of responsibility, anyone younger is still a teenager and not yet economically responsible for their own actions.
###Code
teens_start_age = 18
teenagers = noshow_appointments[noshow_appointments.Age < teens_start_age] # no of people below the age of 18
print ("Here is the number of people below the age of 18 : " + str(len(teenagers)))
age_hist = sns.boxplot(x="Noshow", y="Age", data=teenagers)
###Output
_____no_output_____
###Markdown
**4.1.2) Age range from 18 to 40**People in this range are legally responsible and can take care of themselves. A cutoff of 40 is used as the upper bound for people who have settled down.
###Code
settled_down_age = 40  # upper bound of the 'settled down' age group
settling_down = noshow_appointments[(noshow_appointments['Age'] >= teens_start_age) & (noshow_appointments['Age'] <= settled_down_age)]
print ("This is the number of people from 18 to 40 of the data set : " + str(len(settling_down)))
age_hist = sns.boxplot(x="Noshow", y="Age", data=settling_down)
###Output
_____no_output_____
###Markdown
Here I want to see how many people in this category attended their appointment or not. **4.1.3) Age range from 40 to 60**
###Code
old_age = 60  # upper bound of the 40-60 age group
old_age_data = noshow_appointments[(noshow_appointments['Age'] > settled_down_age) & (noshow_appointments['Age'] <= old_age)]
print ("This is the number of people from 40 to 60 of the data set : " + str(len(old_age_data)))
###Output
This is the number of people from 40 to 60 of the data set : 30081
###Markdown
**4.1.4) Age range from 60 to 95**
###Code
retired_old_age = 95  # upper bound of the oldest age group (ages above 95 were removed as outliers)
retired_old_age_data = noshow_appointments[(noshow_appointments['Age'] > old_age) & (noshow_appointments['Age'] <= retired_old_age)]
print ("This is the number of people from 40 to 60 of the data set : " + str(len(retired_old_age_data)))
retired_old_age_data.head()
###Output
This is the number of people from 60 to 95 of the data set : 19716
###Markdown
**Removing rows which have an age of 0 or less**
###Code
#keeping only rows with an age greater than 0
noshow_appointments = noshow_appointments[noshow_appointments.Age > 0]
age_hist = sns.boxplot(x="Noshow", y="Age", data=noshow_appointments)
noshow_appointments.groupby('Noshow').Age.plot(kind='kde')
#dividing age into 5 quantile-based categories with qcut (equal-sized bins, not fixed 20-year ranges)
noshow_appointments['Age_Category'] = pds.qcut(noshow_appointments['Age'], 5, labels=False)
sns.countplot(x="Age_Category", hue = "Noshow", data=noshow_appointments)
###Output
_____no_output_____
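###Markdown
The countplot shows absolute counts per quantile bin; a small sketch of the no-show *rate* per bin (mean of the 0/1 `Noshow` flag), together with each bin's age range, helps back the age-related observation below.
###Code
# No-show rate per age quantile bin (mean of the 0/1 Noshow flag) -- sketch
print(noshow_appointments.groupby('Age_Category').Noshow.mean())
# Age boundaries of each quantile bin, to interpret the categories
print(noshow_appointments.groupby('Age_Category').Age.agg(['min', 'max']))
###Output
_____no_output_____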
###Markdown
According to the diagram, younger people (age < 48 years) are better at showing up on their scheduled days, but the trend reverses after the age of 48. This is one of the reasons to categorize this variable. **4.2) Schedules and Appointment dates**
###Code
sns.boxplot(x="Noshow", y="NumberWaitingDays", data=noshow_appointments)
###Output
_____no_output_____
###Markdown
Clearly, people who miss appointments have a larger gap between the scheduled day and the appointment day.
###Code
noshow_appointments['WaitingDays_cat'] = [1 if i < 1 else i for i in noshow_appointments['NumberWaitingDays']]
noshow_appointments['WaitingDays_cat'] = [2 if i > 1 and i <= 7 else i for i in noshow_appointments['WaitingDays_cat']]
noshow_appointments['WaitingDays_cat'] = [3 if i > 7 and i <= 30 else i for i in noshow_appointments['WaitingDays_cat']]
noshow_appointments['WaitingDays_cat'] = [4 if i > 30 else i for i in noshow_appointments['WaitingDays_cat']]
sns.countplot(x="WaitingDays_cat", hue = "Noshow", data=noshow_appointments)
###Output
_____no_output_____
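###Markdown
To quantify the statement below, a sketch of the no-show rate per waiting-time category (mean of the 0/1 `Noshow` flag):
###Code
# No-show rate per waiting-time category (sketch)
print(noshow_appointments.groupby('WaitingDays_cat').Noshow.mean())
###Output
_____no_output_____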
###Markdown
The longer the wait between scheduling and the appointment, the higher the probability of missing the appointment.
###Code
noshow_appointments.head()
###Output
_____no_output_____
###Markdown
**weekday**
###Code
sns.countplot(x="Scheduled_Day_Name", hue = "Noshow", data=noshow_appointments)
###Output
_____no_output_____
###Markdown
According to this plot, there is no clear weekday pattern. **Monthwise**
###Code
sns.countplot(x="Scheduled_Month_Name", hue = "Noshow", data=noshow_appointments)
###Output
_____no_output_____
###Markdown
According to the plot, May has the largest number of scheduled appointments and also the most no-shows. **Hours**
###Code
sns.countplot(x="HourOfTheDay", hue = "Noshow", data=noshow_appointments)
###Output
_____no_output_____
###Markdown
According to the above diagram, patients tended to miss appointments when there was more time between the day the appointment was scheduled and the actual appointment time. **4.3) Gender**
###Code
sns.countplot(x="Gender", hue = "Noshow", data=noshow_appointments)
###Output
_____no_output_____
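###Markdown
Because women book far more appointments overall, raw counts can be misleading; a sketch of the per-gender no-show rate puts the comparison on an equal footing.
###Code
# No-show rate by gender (mean of the 0/1 Noshow flag) -- sketch
print(noshow_appointments.groupby('Gender').Noshow.mean())
###Output
_____no_output_____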
###Markdown
Compared to men, women appear more concerned about their health and book more appointments, but they also account for a larger number of missed appointments. **4.4) Area**
###Code
# Let's analyse the hospital locations (neighbourhoods).
# In the figure, the no-show proportion is roughly the same across neighbourhoods,
# but the appointment count is far higher in two of them: Jardim Camburi and Maria Ortiz.
location=noshow_appointments.groupby(['Neighbourhood'], sort=True).size()
fig, ax = plt.subplots()
fig.set_size_inches(32, 16)
sns.countplot(x='Neighbourhood',data=noshow_appointments, hue='Noshow')
plt.xticks(rotation=90,size=20)
plt.yticks(size=20)
plt.title("All neighbourhoods and count of patients ",fontsize=40)
plt.setp(ax.get_legend().get_texts(), fontsize='22')
plt.setp(ax.get_legend().get_title(), fontsize='32')
plt.xlabel("Name of neighbourhood ",fontsize=40)
plt.ylabel("Number of patients ",fontsize=40)
###Output
_____no_output_____
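###Markdown
A small sketch to back the plot numerically: the neighbourhoods with the most appointments, and those with the highest no-show rate.
###Code
# Largest neighbourhoods by appointment count, and highest no-show rates (sketch)
print(noshow_appointments.Neighbourhood.value_counts().head())
print(noshow_appointments.groupby('Neighbourhood').Noshow.mean()
      .sort_values(ascending=False).head())
###Output
_____no_output_____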