markdown (stringlengths 0-1.02M) | code (stringlengths 0-832k) | output (stringlengths 0-1.02M) | license (stringlengths 3-36) | path (stringlengths 6-265) | repo_name (stringlengths 6-127)
---|---|---|---|---|---
LSTM cell

Next, we'll create our LSTM cells to use in the recurrent network ([TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn)). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.

To create a basic LSTM cell for the graph, you'll want to use `tf.contrib.rnn.BasicLSTMCell`. Looking at the function documentation

```tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=tanh)```

you can see it takes a parameter called `num_units`, the number of units in the cell, called `lstm_size` in this code. So then, you can write something like

```lstm = tf.contrib.rnn.BasicLSTMCell(num_units)```

to create an LSTM cell with `num_units` units. Next, you can add dropout to the cell with `tf.contrib.rnn.DropoutWrapper`. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like

```drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)```

Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with `tf.contrib.rnn.MultiRNNCell`:

```cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)```

Here, `[drop] * lstm_layers` creates a list of cells (`drop`) that is `lstm_layers` long. The `MultiRNNCell` wrapper builds this into multiple layers of RNN cells, one for each cell in the list.

So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.

> **Exercise:** Below, use `tf.contrib.rnn.BasicLSTMCell` to create an LSTM cell. Then, add dropout to it with `tf.contrib.rnn.DropoutWrapper`. Finally, create multiple LSTM layers with `tf.contrib.rnn.MultiRNNCell`.

Here is [a tutorial on building RNNs](https://www.tensorflow.org/tutorials/recurrent) that will help you out. | with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32) | _____no_output_____ | MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
RNN forward pass

Now we need to actually run the data through the RNN nodes. You can use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) to do this. You'd pass in the RNN cell you created (our multi-layered LSTM `cell`, for instance), and the inputs to the network.

```outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)```

Above I created an initial state, `initial_state`, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. `tf.nn.dynamic_rnn` takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns the outputs for each time step and the `final_state` of the hidden layer.

> **Exercise:** Use `tf.nn.dynamic_rnn` to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, `embed`. | with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state) | _____no_output_____ | MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
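To get a feel for the shapes involved, here is a small self-contained sketch (assuming TensorFlow 1.x, where `tf.contrib` is available; the sizes and the `demo_*` names are made-up toy values, not the notebook's data):

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x (tf.contrib is not in TF 2.x)

batch_size, seq_len, embed_dim, lstm_size = 4, 10, 8, 16
# Stand-in for the embedded inputs; in the notebook this role is played by `embed`.
demo_inputs = tf.constant(np.random.randn(batch_size, seq_len, embed_dim), dtype=tf.float32)

demo_cell = tf.contrib.rnn.BasicLSTMCell(lstm_size)
demo_init = demo_cell.zero_state(batch_size, tf.float32)
demo_outputs, demo_final_state = tf.nn.dynamic_rnn(demo_cell, demo_inputs, initial_state=demo_init)

print(demo_outputs.shape)         # (4, 10, 16): one output per example per time step
print(demo_outputs[:, -1].shape)  # (4, 16): the last time step, used below for the prediction
```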
Output

We only care about the final output, since we'll be using that as our sentiment prediction. So we need to grab the last output with `outputs[:, -1]`, then calculate the cost from that and `labels_`. | with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) | _____no_output_____ | MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
Validation accuracy

Here we can add a few nodes to calculate the accuracy, which we'll use in the validation pass. | with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) | _____no_output_____ | MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
Batching

This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the `x` and `y` arrays and returns slices out of those arrays with size `[batch_size]`. | def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size] | _____no_output_____ | MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
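As a quick sanity check of `get_batches` (a minimal sketch on made-up toy arrays `x_demo`/`y_demo`, not the actual review data):

```python
import numpy as np

x_demo = np.arange(100).reshape(25, 4)  # 25 toy "reviews", 4 tokens each
y_demo = np.arange(25)

for bx, by in get_batches(x_demo, y_demo, batch_size=10):
    print(bx.shape, by.shape)  # (10, 4) (10,) printed twice; the 5 leftover rows are dropped
```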
Training

Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the `checkpoints` directory exists. | epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt") | Epoch: 0/10 Iteration: 5 Train loss: 0.240
Epoch: 0/10 Iteration: 10 Train loss: 0.240
Epoch: 0/10 Iteration: 15 Train loss: 0.221
Epoch: 0/10 Iteration: 20 Train loss: 0.184
Epoch: 0/10 Iteration: 25 Train loss: 0.230
Val acc: 0.629
Epoch: 0/10 Iteration: 30 Train loss: 0.235
Epoch: 0/10 Iteration: 35 Train loss: 0.238
Epoch: 0/10 Iteration: 40 Train loss: 0.231
Epoch: 1/10 Iteration: 45 Train loss: 0.201
Epoch: 1/10 Iteration: 50 Train loss: 0.205
Val acc: 0.673
Epoch: 1/10 Iteration: 55 Train loss: 0.171
Epoch: 1/10 Iteration: 60 Train loss: 0.185
Epoch: 1/10 Iteration: 65 Train loss: 0.211
Epoch: 1/10 Iteration: 70 Train loss: 0.213
Epoch: 1/10 Iteration: 75 Train loss: 0.214
Val acc: 0.640
Epoch: 1/10 Iteration: 80 Train loss: 0.210
Epoch: 2/10 Iteration: 85 Train loss: 0.174
Epoch: 2/10 Iteration: 90 Train loss: 0.164
Epoch: 2/10 Iteration: 95 Train loss: 0.133
Epoch: 2/10 Iteration: 100 Train loss: 0.129
Val acc: 0.745
Epoch: 2/10 Iteration: 105 Train loss: 0.119
Epoch: 2/10 Iteration: 110 Train loss: 0.197
Epoch: 2/10 Iteration: 115 Train loss: 0.174
Epoch: 2/10 Iteration: 120 Train loss: 0.174
Epoch: 3/10 Iteration: 125 Train loss: 0.119
Val acc: 0.786
Epoch: 3/10 Iteration: 130 Train loss: 0.132
Epoch: 3/10 Iteration: 135 Train loss: 0.120
Epoch: 3/10 Iteration: 140 Train loss: 0.102
Epoch: 3/10 Iteration: 145 Train loss: 0.121
Epoch: 3/10 Iteration: 150 Train loss: 0.119
Val acc: 0.790
Epoch: 3/10 Iteration: 155 Train loss: 0.133
Epoch: 3/10 Iteration: 160 Train loss: 0.158
Epoch: 4/10 Iteration: 165 Train loss: 0.116
Epoch: 4/10 Iteration: 170 Train loss: 0.119
Epoch: 4/10 Iteration: 175 Train loss: 0.119
Val acc: 0.781
Epoch: 4/10 Iteration: 180 Train loss: 0.104
Epoch: 4/10 Iteration: 185 Train loss: 0.114
Epoch: 4/10 Iteration: 190 Train loss: 0.103
Epoch: 4/10 Iteration: 195 Train loss: 0.138
Epoch: 4/10 Iteration: 200 Train loss: 0.126
Val acc: 0.800
Epoch: 5/10 Iteration: 205 Train loss: 0.087
Epoch: 5/10 Iteration: 210 Train loss: 0.108
Epoch: 5/10 Iteration: 215 Train loss: 0.119
Epoch: 5/10 Iteration: 220 Train loss: 0.093
Epoch: 5/10 Iteration: 225 Train loss: 0.088
Val acc: 0.749
Epoch: 5/10 Iteration: 230 Train loss: 0.086
Epoch: 5/10 Iteration: 235 Train loss: 0.092
Epoch: 5/10 Iteration: 240 Train loss: 0.098
Epoch: 6/10 Iteration: 245 Train loss: 0.076
Epoch: 6/10 Iteration: 250 Train loss: 0.092
Val acc: 0.812
Epoch: 6/10 Iteration: 255 Train loss: 0.067
Epoch: 6/10 Iteration: 260 Train loss: 0.132
Epoch: 6/10 Iteration: 265 Train loss: 0.119
Epoch: 6/10 Iteration: 270 Train loss: 0.074
Epoch: 6/10 Iteration: 275 Train loss: 0.099
Val acc: 0.796
Epoch: 6/10 Iteration: 280 Train loss: 0.434
Epoch: 7/10 Iteration: 285 Train loss: 0.494
Epoch: 7/10 Iteration: 290 Train loss: 0.497
Epoch: 7/10 Iteration: 295 Train loss: 0.491
Epoch: 7/10 Iteration: 300 Train loss: 0.353
Val acc: 0.526
Epoch: 7/10 Iteration: 305 Train loss: 0.317
Epoch: 7/10 Iteration: 310 Train loss: 0.285
Epoch: 7/10 Iteration: 315 Train loss: 0.266
Epoch: 7/10 Iteration: 320 Train loss: 0.254
Epoch: 8/10 Iteration: 325 Train loss: 0.244
Val acc: 0.594
Epoch: 8/10 Iteration: 330 Train loss: 0.229
Epoch: 8/10 Iteration: 335 Train loss: 0.242
Epoch: 8/10 Iteration: 340 Train loss: 0.203
Epoch: 8/10 Iteration: 345 Train loss: 0.170
Epoch: 8/10 Iteration: 350 Train loss: 0.246
Val acc: 0.675
Epoch: 8/10 Iteration: 355 Train loss: 0.184
Epoch: 8/10 Iteration: 360 Train loss: 0.176
Epoch: 9/10 Iteration: 365 Train loss: 0.293
Epoch: 9/10 Iteration: 370 Train loss: 0.165
Epoch: 9/10 Iteration: 375 Train loss: 0.160
Val acc: 0.748
Epoch: 9/10 Iteration: 380 Train loss: 0.117
Epoch: 9/10 Iteration: 385 Train loss: 0.092
Epoch: 9/10 Iteration: 390 Train loss: 0.101
Epoch: 9/10 Iteration: 395 Train loss: 0.121
Epoch: 9/10 Iteration: 400 Train loss: 0.116
Val acc: 0.768
| MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
Testing | test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc))) | INFO:tensorflow:Restoring parameters from checkpoints/sentiment.ckpt
Test accuracy: 0.776
| MIT | term-2-concentrations/lab-5-nlp-sentiment-analysis/Sentiment_RNN.ipynb | rstraker/ai-nanodegree-udacity |
Copyright 2018 Google LLC. | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Recommendation Systems with TensorFlow

This Colab notebook complements the course on [Recommendation Systems](https://developers.google.com/machine-learning/recommendation/). Specifically, we'll be using matrix factorization to learn user and movie embeddings.

Introduction

We will create a movie recommendation system based on the [MovieLens](https://movielens.org/) dataset available [here](http://grouplens.org/datasets/movielens/). The data consists of movie ratings (on a scale of 1 to 5).

Outline
1. Exploring the MovieLens Data (10 minutes)
1. Preliminaries (25 minutes)
1. Training a matrix factorization model (15 minutes)
1. Inspecting the Embeddings (15 minutes)
1. Regularization in matrix factorization (15 minutes)
1. Softmax model training (30 minutes)

Setup

Let's get started by importing the required packages. | # @title Imports (run this cell)
from __future__ import print_function
import numpy as np
import pandas as pd
import collections
from mpl_toolkits.mplot3d import Axes3D
from IPython import display
from matplotlib import pyplot as plt
import sklearn
import sklearn.manifold
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
tf.logging.set_verbosity(tf.logging.ERROR)
# Add some convenience functions to Pandas DataFrame.
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.3f}'.format
def mask(df, key, function):
"""Returns a filtered dataframe, by applying function to key"""
return df[function(df[key])]
def flatten_cols(df):
df.columns = [' '.join(col).strip() for col in df.columns.values]
return df
pd.DataFrame.mask = mask
pd.DataFrame.flatten_cols = flatten_cols
# Install Altair and activate its colab renderer.
print("Installing Altair...")
!pip install git+git://github.com/altair-viz/altair.git
import altair as alt
alt.data_transformers.enable('default', max_rows=None)
alt.renderers.enable('colab')
print("Done installing Altair.")
# Install spreadsheets and import authentication module.
USER_RATINGS = False
!pip install --upgrade -q gspread
from google.colab import auth
import gspread
from oauth2client.client import GoogleCredentials | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
We then download the MovieLens Data, and create DataFrames containing movies, users, and ratings. | # @title Load the MovieLens data (run this cell).
# Download MovieLens data.
print("Downloading movielens data...")
from urllib.request import urlretrieve
import zipfile
urlretrieve("http://files.grouplens.org/datasets/movielens/ml-100k.zip", "movielens.zip")
zip_ref = zipfile.ZipFile('movielens.zip', "r")
zip_ref.extractall()
print("Done. Dataset contains:")
print(zip_ref.read('ml-100k/u.info'))
# Load each data set (users, movies, and ratings).
users_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_csv(
'ml-100k/u.user', sep='|', names=users_cols, encoding='latin-1')
ratings_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
ratings = pd.read_csv(
'ml-100k/u.data', sep='\t', names=ratings_cols, encoding='latin-1')
# The movies file contains a binary feature for each genre.
genre_cols = [
"genre_unknown", "Action", "Adventure", "Animation", "Children", "Comedy",
"Crime", "Documentary", "Drama", "Fantasy", "Film-Noir", "Horror",
"Musical", "Mystery", "Romance", "Sci-Fi", "Thriller", "War", "Western"
]
movies_cols = [
'movie_id', 'title', 'release_date', "video_release_date", "imdb_url"
] + genre_cols
movies = pd.read_csv(
'ml-100k/u.item', sep='|', names=movies_cols, encoding='latin-1')
# Since the ids start at 1, we shift them to start at 0.
users["user_id"] = users["user_id"].apply(lambda x: str(x-1))
movies["movie_id"] = movies["movie_id"].apply(lambda x: str(x-1))
movies["year"] = movies['release_date'].apply(lambda x: str(x).split('-')[-1])
ratings["movie_id"] = ratings["movie_id"].apply(lambda x: str(x-1))
ratings["user_id"] = ratings["user_id"].apply(lambda x: str(x-1))
ratings["rating"] = ratings["rating"].apply(lambda x: float(x))
# Compute the number of movies to which a genre is assigned.
genre_occurences = movies[genre_cols].sum().to_dict()
# Since some movies can belong to more than one genre, we create different
# 'genre' columns as follows:
# - all_genres: all the active genres of the movie.
# - genre: randomly sampled from the active genres.
def mark_genres(movies, genres):
def get_random_genre(gs):
active = [genre for genre, g in zip(genres, gs) if g==1]
if len(active) == 0:
return 'Other'
return np.random.choice(active)
def get_all_genres(gs):
active = [genre for genre, g in zip(genres, gs) if g==1]
if len(active) == 0:
return 'Other'
return '-'.join(active)
movies['genre'] = [
get_random_genre(gs) for gs in zip(*[movies[genre] for genre in genres])]
movies['all_genres'] = [
get_all_genres(gs) for gs in zip(*[movies[genre] for genre in genres])]
mark_genres(movies, genre_cols)
# Create one merged DataFrame containing all the movielens data.
movielens = ratings.merge(movies, on='movie_id').merge(users, on='user_id')
# Utility to split the data into training and test sets.
def split_dataframe(df, holdout_fraction=0.1):
"""Splits a DataFrame into training and test sets.
Args:
df: a dataframe.
holdout_fraction: fraction of dataframe rows to use in the test set.
Returns:
train: dataframe for training
test: dataframe for testing
"""
test = df.sample(frac=holdout_fraction, replace=False)
train = df[~df.index.isin(test.index)]
return train, test | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
I. Exploring the MovieLens Data

Before we dive into model building, let's inspect our MovieLens dataset. It is usually helpful to understand the statistics of the dataset.

Users

We start by printing some basic statistics describing the numeric user features. | users.describe() | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
We can also print some basic statistics describing the categorical user features | users.describe(include=[np.object]) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
We can also create histograms to further understand the distribution of the users. We use Altair to create an interactive chart. | # @title Altair visualization code (run this cell)
# The following functions are used to generate interactive Altair charts.
# We will display histograms of the data, sliced by a given attribute.
# Create filters to be used to slice the data.
occupation_filter = alt.selection_multi(fields=["occupation"])
occupation_chart = alt.Chart().mark_bar().encode(
x="count()",
y=alt.Y("occupation:N"),
color=alt.condition(
occupation_filter,
alt.Color("occupation:N", scale=alt.Scale(scheme='category20')),
alt.value("lightgray")),
).properties(width=300, height=300, selection=occupation_filter)
# A function that generates a histogram of filtered data.
def filtered_hist(field, label, filter):
"""Creates a layered chart of histograms.
The first layer (light gray) contains the histogram of the full data, and the
second contains the histogram of the filtered data.
Args:
field: the field for which to generate the histogram.
label: String label of the histogram.
filter: an alt.Selection object to be used to filter the data.
"""
base = alt.Chart().mark_bar().encode(
x=alt.X(field, bin=alt.Bin(maxbins=10), title=label),
y="count()",
).properties(
width=300,
)
return alt.layer(
base.transform_filter(filter),
base.encode(color=alt.value('lightgray'), opacity=alt.value(.7)),
).resolve_scale(y='independent')
| _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Next, we look at the distribution of ratings per user. Clicking on an occupation in the right chart will filter the data by that occupation. The corresponding histogram is shown in blue, and superimposed with the histogram for the whole data (in light gray). You can use SHIFT+click to select multiple subsets.What do you observe, and how might this affect the recommendations? | users_ratings = (
ratings
.groupby('user_id', as_index=False)
.agg({'rating': ['count', 'mean']})
.flatten_cols()
.merge(users, on='user_id')
)
# Create a chart for the count, and one for the mean.
alt.hconcat(
filtered_hist('rating count', '# ratings / user', occupation_filter),
filtered_hist('rating mean', 'mean user rating', occupation_filter),
occupation_chart,
data=users_ratings) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Movies

It is also useful to look at information about the movies and their ratings. | movies_ratings = movies.merge(
ratings
.groupby('movie_id', as_index=False)
.agg({'rating': ['count', 'mean']})
.flatten_cols(),
on='movie_id')
genre_filter = alt.selection_multi(fields=['genre'])
genre_chart = alt.Chart().mark_bar().encode(
x="count()",
y=alt.Y('genre'),
color=alt.condition(
genre_filter,
alt.Color("genre:N"),
alt.value('lightgray'))
).properties(height=300, selection=genre_filter)
(movies_ratings[['title', 'rating count', 'rating mean']]
.sort_values('rating count', ascending=False)
.head(10))
(movies_ratings[['title', 'rating count', 'rating mean']]
.mask('rating count', lambda x: x > 20)
.sort_values('rating mean', ascending=False)
.head(10)) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Finally, the last chart shows the distribution of the number of ratings and average rating. | # Display the number of ratings and average rating per movie.
alt.hconcat(
filtered_hist('rating count', '# ratings / movie', genre_filter),
filtered_hist('rating mean', 'mean movie rating', genre_filter),
genre_chart,
data=movies_ratings) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
II. Preliminaries

Our goal is to factorize the ratings matrix $A$ into the product of a user embedding matrix $U$ and movie embedding matrix $V$, such that $A \approx UV^\top$ with
$U = \begin{bmatrix} u_{1} \\ \hline \vdots \\ \hline u_{N} \end{bmatrix}$ and
$V = \begin{bmatrix} v_{1} \\ \hline \vdots \\ \hline v_{M} \end{bmatrix}$.

Here
- $N$ is the number of users,
- $M$ is the number of movies,
- $A_{ij}$ is the rating of the $j$th movie by the $i$th user,
- each row $U_i$ is a $d$-dimensional vector (embedding) representing user $i$,
- each row $V_j$ is a $d$-dimensional vector (embedding) representing movie $j$,
- the prediction of the model for the $(i, j)$ pair is the dot product $\langle U_i, V_j \rangle$.

Sparse Representation of the Rating Matrix

The rating matrix could be very large and, in general, most of the entries are unobserved, since a given user will only rate a small subset of movies. For efficient representation, we will use a [tf.SparseTensor](https://www.tensorflow.org/api_docs/python/tf/SparseTensor). A `SparseTensor` uses three tensors to represent the matrix: `tf.SparseTensor(indices, values, dense_shape)` represents a tensor, where a value $A_{ij} = a$ is encoded by setting `indices[k] = [i, j]` and `values[k] = a`. The last tensor `dense_shape` is used to specify the shape of the full underlying matrix.

Toy example

Assume we have $2$ users and $4$ movies. Our toy ratings dataframe has three ratings,

user\_id | movie\_id | rating
--:|--:|--:
0 | 0 | 5.0
0 | 1 | 3.0
1 | 3 | 1.0

The corresponding rating matrix is

$$A = \begin{bmatrix} 5.0 & 3.0 & 0 & 0 \\ 0 & 0 & 0 & 1.0 \end{bmatrix}$$

And the SparseTensor representation is,

```python
SparseTensor(
  indices=[[0, 0], [0, 1], [1, 3]],
  values=[5.0, 3.0, 1.0],
  dense_shape=[2, 4])
```

Exercise 1: Build a tf.SparseTensor representation of the Rating Matrix.

In this exercise, we'll write a function that maps from our `ratings` DataFrame to a `tf.SparseTensor`.

Hint: you can select the values of a given column of a DataFrame `df` using `df['column_name'].values`. | def build_rating_sparse_tensor(ratings_df):
"""
Args:
ratings_df: a pd.DataFrame with `user_id`, `movie_id` and `rating` columns.
Returns:
A tf.SparseTensor representing the ratings matrix.
"""
# ========================= Complete this section ============================
# indices =
# values =
# ============================================================================
return tf.SparseTensor(
indices=indices,
values=values,
dense_shape=[users.shape[0], movies.shape[0]])
#@title Solution
def build_rating_sparse_tensor(ratings_df):
"""
Args:
ratings_df: a pd.DataFrame with `user_id`, `movie_id` and `rating` columns.
Returns:
a tf.SparseTensor representing the ratings matrix.
"""
indices = ratings_df[['user_id', 'movie_id']].values
values = ratings_df['rating'].values
return tf.SparseTensor(
indices=indices,
values=values,
dense_shape=[users.shape[0], movies.shape[0]]) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Calculating the error

The model approximates the ratings matrix $A$ by a low-rank product $UV^\top$. We need a way to measure the approximation error. We'll start by using the Mean Squared Error of observed entries only (we will revisit this later). It is defined as

$$\begin{align*}\text{MSE}(A, UV^\top) &= \frac{1}{|\Omega|}\sum_{(i, j) \in \Omega}{(A_{ij} - (UV^\top)_{ij})^2} \\ &= \frac{1}{|\Omega|}\sum_{(i, j) \in \Omega}{(A_{ij} - \langle U_i, V_j\rangle)^2}\end{align*}$$

where $\Omega$ is the set of observed ratings, and $|\Omega|$ is the cardinality of $\Omega$.

Exercise 2: Mean Squared Error

Write a TensorFlow function that takes a sparse rating matrix $A$ and the two embedding matrices $U, V$ and returns the mean squared error $\text{MSE}(A, UV^\top)$.

Hints:
* in this section, we only consider observed entries when calculating the loss.
* a `SparseTensor` `sp_x` is a tuple of three Tensors: `sp_x.indices`, `sp_x.values` and `sp_x.dense_shape`.
* you may find [`tf.gather_nd`](https://www.tensorflow.org/api_docs/python/tf/gather_nd) and [`tf.losses.mean_squared_error`](https://www.tensorflow.org/api_docs/python/tf/losses/mean_squared_error) helpful. | def sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):
"""
Args:
sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]
user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding
dimension, such that U_i is the embedding of user i.
movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding
dimension, such that V_j is the embedding of movie j.
Returns:
A scalar Tensor representing the MSE between the true ratings and the
model's predictions.
"""
# ========================= Complete this section ============================
# loss =
# ============================================================================
return loss
#@title Solution
def sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):
"""
Args:
sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]
user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding
dimension, such that U_i is the embedding of user i.
movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding
dimension, such that V_j is the embedding of movie j.
Returns:
A scalar Tensor representing the MSE between the true ratings and the
model's predictions.
"""
predictions = tf.gather_nd(
tf.matmul(user_embeddings, movie_embeddings, transpose_b=True),
sparse_ratings.indices)
loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)
return loss | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
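To make the "observed entries only" idea concrete, here is a small numpy sketch of the same computation (toy values that mirror the toy example above, not the MovieLens data): gather the entries of the dense product $UV^\top$ at the observed $(i, j)$ positions and average the squared errors.

```python
import numpy as np

U = np.array([[0.1, 0.2, 0.3],                    # toy user embeddings (2 users, d=3)
              [0.4, 0.5, 0.6]])
V = np.random.RandomState(0).normal(size=(4, 3))  # toy movie embeddings (4 movies)
indices = np.array([[0, 0], [0, 1], [1, 3]])      # observed (user, movie) pairs
values = np.array([5.0, 3.0, 1.0])                # observed ratings

predictions = U.dot(V.T)[indices[:, 0], indices[:, 1]]
mse = np.mean((values - predictions) ** 2)
print(mse)
```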
Note: One approach is to compute the full prediction matrix $UV^\top$, then gather the entries corresponding to the observed pairs. The memory cost of this approach is $O(NM)$. For the MovieLens dataset, this is fine, as the dense $N \times M$ matrix is small enough to fit in memory ($N = 943$, $M = 1682$).Another approach (given in the alternate solution below) is to only gather the embeddings of the observed pairs, then compute their dot products. The memory cost is $O(|\Omega| d)$ where $d$ is the embedding dimension. In our case, $|\Omega| = 10^5$, and the embedding dimension is on the order of $10$, so the memory cost of both methods is comparable. But when the number of users or movies is much larger, the first approach becomes infeasible. | #@title Alternate Solution
def sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):
"""
Args:
sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]
user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding
dimension, such that U_i is the embedding of user i.
movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding
dimension, such that V_j is the embedding of movie j.
Returns:
A scalar Tensor representing the MSE between the true ratings and the
model's predictions.
"""
predictions = tf.reduce_sum(
tf.gather(user_embeddings, sparse_ratings.indices[:, 0]) *
tf.gather(movie_embeddings, sparse_ratings.indices[:, 1]),
axis=1)
loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)
return loss | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Exercise 3 (Optional): adding your own ratings to the data set

You have the option to add your own ratings to the data set. If you choose to do so, you will be able to see recommendations for yourself.

Start by checking the box below. Running the next cell will authenticate you to your Google Drive account and create a spreadsheet that contains all movie titles in column 'A'. Follow the link to the spreadsheet and take 3 minutes to rate some of the movies. Your ratings should be entered in column 'B'. | USER_RATINGS = True #@param {type:"boolean"}
# @title Run to create a spreadsheet, then use it to enter your ratings.
# Authenticate user.
if USER_RATINGS:
auth.authenticate_user()
gc = gspread.authorize(GoogleCredentials.get_application_default())
# Create the spreadsheet and print a link to it.
try:
sh = gc.open('MovieLens-test')
except(gspread.SpreadsheetNotFound):
sh = gc.create('MovieLens-test')
worksheet = sh.sheet1
titles = movies['title'].values
cell_list = worksheet.range(1, 1, len(titles), 1)
for cell, title in zip(cell_list, titles):
cell.value = title
worksheet.update_cells(cell_list)
print("Link to the spreadsheet: "
"https://docs.google.com/spreadsheets/d/{}/edit".format(sh.id)) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Run the next cell to load your ratings and add them to the main `ratings` DataFrame. | # @title Run to load your ratings.
# Load the ratings from the spreadsheet and create a DataFrame.
if USER_RATINGS:
my_ratings = pd.DataFrame.from_records(worksheet.get_all_values()).reset_index()
my_ratings = my_ratings[my_ratings[1] != '']
my_ratings = pd.DataFrame({
'user_id': "943",
'movie_id': list(map(str, my_ratings['index'])),
'rating': list(map(float, my_ratings[1])),
})
# Remove previous ratings.
ratings = ratings[ratings.user_id != "943"]
# Add new ratings.
ratings = ratings.append(my_ratings, ignore_index=True)
# Add new user to the users DataFrame.
if users.shape[0] == 943:
users = users.append(users.iloc[942], ignore_index=True)
users["user_id"][943] = "943"
print("Added your %d ratings; you have great taste!" % len(my_ratings))
ratings[ratings.user_id=="943"].merge(movies[['movie_id', 'title']]) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
III. Training a Matrix Factorization model

CFModel (Collaborative Filtering Model) helper class

This is a simple class to train a matrix factorization model using stochastic gradient descent.

The class constructor takes
- a dictionary of the embedding variables, e.g. the user embeddings U and the movie embeddings V (each a `tf.Variable`),
- a loss to optimize (a `tf.Tensor`),
- an optional list of metrics dictionaries, each mapping a string (the name of the metric) to a tensor. These are evaluated and plotted during training (e.g. training error and test error).

After training, one can access the trained embeddings using the `model.embeddings` dictionary.

Example usage:
```
U_var = ...
V_var = ...
loss = ...
model = CFModel({'user_id': U_var, 'movie_id': V_var}, loss)
model.train(num_iterations=100, learning_rate=1.0)
user_embeddings = model.embeddings['user_id']
movie_embeddings = model.embeddings['movie_id']
``` | # @title CFModel helper class (run this cell)
class CFModel(object):
"""Simple class that represents a collaborative filtering model"""
def __init__(self, embedding_vars, loss, metrics=None):
"""Initializes a CFModel.
Args:
embedding_vars: A dictionary of tf.Variables.
loss: A float Tensor. The loss to optimize.
metrics: optional list of dictionaries of Tensors. The metrics in each
dictionary will be plotted in a separate figure during training.
"""
self._embedding_vars = embedding_vars
self._loss = loss
self._metrics = metrics
self._embeddings = {k: None for k in embedding_vars}
self._session = None
@property
def embeddings(self):
"""The embeddings dictionary."""
return self._embeddings
def train(self, num_iterations=100, learning_rate=1.0, plot_results=True,
optimizer=tf.train.GradientDescentOptimizer):
"""Trains the model.
Args:
      num_iterations: number of iterations to run.
learning_rate: optimizer learning rate.
plot_results: whether to plot the results at the end of training.
optimizer: the optimizer to use. Default to GradientDescentOptimizer.
Returns:
The metrics dictionary evaluated at the last iteration.
"""
with self._loss.graph.as_default():
opt = optimizer(learning_rate)
train_op = opt.minimize(self._loss)
local_init_op = tf.group(
tf.variables_initializer(opt.variables()),
tf.local_variables_initializer())
if self._session is None:
self._session = tf.Session()
with self._session.as_default():
self._session.run(tf.global_variables_initializer())
self._session.run(tf.tables_initializer())
tf.train.start_queue_runners()
with self._session.as_default():
local_init_op.run()
iterations = []
metrics = self._metrics or ({},)
metrics_vals = [collections.defaultdict(list) for _ in self._metrics]
# Train and append results.
for i in range(num_iterations + 1):
_, results = self._session.run((train_op, metrics))
if (i % 10 == 0) or i == num_iterations:
print("\r iteration %d: " % i + ", ".join(
["%s=%f" % (k, v) for r in results for k, v in r.items()]),
end='')
iterations.append(i)
for metric_val, result in zip(metrics_vals, results):
for k, v in result.items():
metric_val[k].append(v)
for k, v in self._embedding_vars.items():
self._embeddings[k] = v.eval()
if plot_results:
# Plot the metrics.
num_subplots = len(metrics)+1
fig = plt.figure()
fig.set_size_inches(num_subplots*10, 8)
for i, metric_vals in enumerate(metrics_vals):
ax = fig.add_subplot(1, num_subplots, i+1)
for k, v in metric_vals.items():
ax.plot(iterations, v, label=k)
ax.set_xlim([1, num_iterations])
ax.legend()
return results | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Exercise 4: Build a Matrix Factorization model and train it

Using your `sparse_mean_square_error` function, write a function that builds a `CFModel` by creating the embedding variables and the train and test losses. | def build_model(ratings, embedding_dim=3, init_stddev=1.):
"""
Args:
ratings: a DataFrame of the ratings
embedding_dim: the dimension of the embedding vectors.
init_stddev: float, the standard deviation of the random initial embeddings.
Returns:
model: a CFModel.
"""
# Split the ratings DataFrame into train and test.
train_ratings, test_ratings = split_dataframe(ratings)
# SparseTensor representation of the train and test datasets.
# ========================= Complete this section ============================
# A_train =
# A_test =
# ============================================================================
# Initialize the embeddings using a normal distribution.
U = tf.Variable(tf.random_normal(
[A_train.dense_shape[0], embedding_dim], stddev=init_stddev))
V = tf.Variable(tf.random_normal(
[A_train.dense_shape[1], embedding_dim], stddev=init_stddev))
# ========================= Complete this section ============================
# train_loss =
# test_loss =
# ============================================================================
metrics = {
'train_error': train_loss,
'test_error': test_loss
}
embeddings = {
"user_id": U,
"movie_id": V
}
return CFModel(embeddings, train_loss, [metrics])
#@title Solution
def build_model(ratings, embedding_dim=3, init_stddev=1.):
"""
Args:
ratings: a DataFrame of the ratings
embedding_dim: the dimension of the embedding vectors.
init_stddev: float, the standard deviation of the random initial embeddings.
Returns:
model: a CFModel.
"""
# Split the ratings DataFrame into train and test.
train_ratings, test_ratings = split_dataframe(ratings)
# SparseTensor representation of the train and test datasets.
A_train = build_rating_sparse_tensor(train_ratings)
A_test = build_rating_sparse_tensor(test_ratings)
# Initialize the embeddings using a normal distribution.
U = tf.Variable(tf.random_normal(
[A_train.dense_shape[0], embedding_dim], stddev=init_stddev))
V = tf.Variable(tf.random_normal(
[A_train.dense_shape[1], embedding_dim], stddev=init_stddev))
train_loss = sparse_mean_square_error(A_train, U, V)
test_loss = sparse_mean_square_error(A_test, U, V)
metrics = {
'train_error': train_loss,
'test_error': test_loss
}
embeddings = {
"user_id": U,
"movie_id": V
}
return CFModel(embeddings, train_loss, [metrics]) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Great, now it's time to train the model! Go ahead and run the next cell, trying different parameters (embedding dimension, learning rate, iterations). The training and test errors are plotted at the end of training. You can inspect these values to validate the hyper-parameters.

Note: by calling `model.train` again, the model will continue training starting from the current values of the embeddings. | # Build the CF model and train it.
model = build_model(ratings, embedding_dim=30, init_stddev=0.5)
model.train(num_iterations=1000, learning_rate=10.) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
The movie and user embeddings are also displayed in the right figure. When the embedding dimension is greater than 3, the embeddings are projected on the first 3 dimensions. The next section will have a more detailed look at the embeddings.

IV. Inspecting the Embeddings

In this section, we take a closer look at the learned embeddings, by
- computing your recommendations,
- looking at the nearest neighbors of some movies,
- looking at the norms of the movie embeddings,
- visualizing the embedding in a projected embedding space.

Exercise 5: Write a function that computes the scores of the candidates

We start by writing a function that, given a query embedding $u \in \mathbb R^d$ and item embeddings $V \in \mathbb R^{N \times d}$, computes the item scores.

As discussed in the lecture, there are different similarity measures we can use, and these can yield different results. We will compare the following:
- dot product: the score of item j is $\langle u, V_j \rangle$.
- cosine: the score of item j is $\frac{\langle u, V_j \rangle}{\|u\|\|V_j\|}$.

Hints:
- you can use [`np.dot`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) to compute the product of two np.Arrays.
- you can use [`np.linalg.norm`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.norm.html) to compute the norm of a np.Array. | DOT = 'dot'
COSINE = 'cosine'
def compute_scores(query_embedding, item_embeddings, measure=DOT):
"""Computes the scores of the candidates given a query.
Args:
query_embedding: a vector of shape [k], representing the query embedding.
item_embeddings: a matrix of shape [N, k], such that row i is the embedding
of item i.
measure: a string specifying the similarity measure to be used. Can be
either DOT or COSINE.
Returns:
scores: a vector of shape [N], such that scores[i] is the score of item i.
"""
# ========================= Complete this section ============================
# scores =
# ============================================================================
return scores
#@title Solution
DOT = 'dot'
COSINE = 'cosine'
def compute_scores(query_embedding, item_embeddings, measure=DOT):
"""Computes the scores of the candidates given a query.
Args:
query_embedding: a vector of shape [k], representing the query embedding.
item_embeddings: a matrix of shape [N, k], such that row i is the embedding
of item i.
measure: a string specifying the similarity measure to be used. Can be
either DOT or COSINE.
Returns:
scores: a vector of shape [N], such that scores[i] is the score of item i.
"""
u = query_embedding
V = item_embeddings
if measure == COSINE:
V = V / np.linalg.norm(V, axis=1, keepdims=True)
u = u / np.linalg.norm(u)
scores = u.dot(V.T)
return scores | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
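As a quick illustration of why the two measures can rank candidates differently (made-up toy vectors, not the learned embeddings): the dot product favors the large-norm item, while cosine favors the better-aligned one.

```python
import numpy as np

query = np.array([1.0, 0.0])
items = np.array([[3.0, 1.0],   # large norm, less aligned with the query
                  [0.9, 0.1]])  # small norm, well aligned with the query

print(compute_scores(query, items, DOT))     # [3.    0.9  ] -> item 0 ranked first
print(compute_scores(query, items, COSINE))  # [0.949 0.994] -> item 1 ranked first
```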
Equipped with this function, we can compute recommendations, where the query embedding can be either a user embedding or a movie embedding. | # @title User recommendations and nearest neighbors (run this cell)
def user_recommendations(model, measure=DOT, exclude_rated=False, k=6):
if USER_RATINGS:
scores = compute_scores(
model.embeddings["user_id"][943], model.embeddings["movie_id"], measure)
score_key = measure + ' score'
df = pd.DataFrame({
score_key: list(scores),
'movie_id': movies['movie_id'],
'titles': movies['title'],
'genres': movies['all_genres'],
})
if exclude_rated:
# remove movies that are already rated
rated_movies = ratings[ratings.user_id == "943"]["movie_id"].values
df = df[df.movie_id.apply(lambda movie_id: movie_id not in rated_movies)]
display.display(df.sort_values([score_key], ascending=False).head(k))
def movie_neighbors(model, title_substring, measure=DOT, k=6):
# Search for movie ids that match the given substring.
ids = movies[movies['title'].str.contains(title_substring)].index.values
titles = movies.iloc[ids]['title'].values
if len(titles) == 0:
raise ValueError("Found no movies with title %s" % title_substring)
print("Nearest neighbors of : %s." % titles[0])
if len(titles) > 1:
print("[Found more than one matching movie. Other candidates: {}]".format(
", ".join(titles[1:])))
movie_id = ids[0]
scores = compute_scores(
model.embeddings["movie_id"][movie_id], model.embeddings["movie_id"],
measure)
score_key = measure + ' score'
df = pd.DataFrame({
score_key: list(scores),
'titles': movies['title'],
'genres': movies['all_genres']
})
display.display(df.sort_values([score_key], ascending=False).head(k)) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Your recommendations

If you chose to input your own ratings, you can run the next cell to generate recommendations for you. | user_recommendations(model, measure=COSINE, k=5) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
How do the recommendations look?

Movie Nearest neighbors

Let's look at the nearest neighbors for some of the movies. | movie_neighbors(model, "Aladdin", DOT)
movie_neighbors(model, "Aladdin", COSINE) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
It seems that the quality of the learned embeddings may not be very good. This will be addressed in Section V by adding several regularization techniques. First, we will further inspect the embeddings.

Movie Embedding Norm

We can also observe that the recommendations with dot-product and cosine are different: with dot-product, the model tends to recommend popular movies. This can be explained by the fact that in matrix factorization models, the norm of the embedding is often correlated with popularity (popular movies have a larger norm), which makes it more likely to recommend more popular items. We can confirm this hypothesis by sorting the movies by their embedding norm, as done in the next cell. | # @title Embedding Visualization code (run this cell)
def movie_embedding_norm(models):
"""Visualizes the norm and number of ratings of the movie embeddings.
Args:
model: A MFModel object.
"""
if not isinstance(models, list):
models = [models]
df = pd.DataFrame({
'title': movies['title'],
'genre': movies['genre'],
'num_ratings': movies_ratings['rating count'],
})
charts = []
brush = alt.selection_interval()
for i, model in enumerate(models):
norm_key = 'norm'+str(i)
df[norm_key] = np.linalg.norm(model.embeddings["movie_id"], axis=1)
nearest = alt.selection(
type='single', encodings=['x', 'y'], on='mouseover', nearest=True,
empty='none')
base = alt.Chart().mark_circle().encode(
x='num_ratings',
y=norm_key,
color=alt.condition(brush, alt.value('#4c78a8'), alt.value('lightgray'))
).properties(
selection=nearest).add_selection(brush)
text = alt.Chart().mark_text(align='center', dx=5, dy=-5).encode(
x='num_ratings', y=norm_key,
text=alt.condition(nearest, 'title', alt.value('')))
charts.append(alt.layer(base, text))
return alt.hconcat(*charts, data=df)
def visualize_movie_embeddings(data, x, y):
nearest = alt.selection(
type='single', encodings=['x', 'y'], on='mouseover', nearest=True,
empty='none')
base = alt.Chart().mark_circle().encode(
x=x,
y=y,
color=alt.condition(genre_filter, "genre", alt.value("whitesmoke")),
).properties(
width=600,
height=600,
selection=nearest)
text = alt.Chart().mark_text(align='left', dx=5, dy=-5).encode(
x=x,
y=y,
text=alt.condition(nearest, 'title', alt.value('')))
return alt.hconcat(alt.layer(base, text), genre_chart, data=data)
def tsne_movie_embeddings(model):
"""Visualizes the movie embeddings, projected using t-SNE with Cosine measure.
Args:
model: A MFModel object.
"""
tsne = sklearn.manifold.TSNE(
n_components=2, perplexity=40, metric='cosine', early_exaggeration=10.0,
init='pca', verbose=True, n_iter=400)
print('Running t-SNE...')
V_proj = tsne.fit_transform(model.embeddings["movie_id"])
movies.loc[:,'x'] = V_proj[:, 0]
movies.loc[:,'y'] = V_proj[:, 1]
return visualize_movie_embeddings(movies, 'x', 'y')
movie_embedding_norm(model) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Note: Depending on how the model is initialized, you may observe that some niche movies (ones with few ratings) have a high norm, leading to spurious recommendations. This can happen if the embedding of that movie happens to be initialized with a high norm. Then, because the movie has few ratings, it is infrequently updated, and can keep its high norm. This will be alleviated by using regularization.

Try changing the value of the hyper-parameter `init_stddev`. One quantity that can be helpful is that the expected norm of a $d$-dimensional vector with entries $\sim \mathcal N(0, \sigma^2)$ is approximately $\sigma \sqrt d$. How does this affect the embedding norm distribution, and the ranking of the top-norm movies? | #@title Solution
model_lowinit = build_model(ratings, embedding_dim=30, init_stddev=0.05)
model_lowinit.train(num_iterations=1000, learning_rate=10.)
movie_neighbors(model_lowinit, "Aladdin", DOT)
movie_neighbors(model_lowinit, "Aladdin", COSINE)
movie_embedding_norm([model, model_lowinit]) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
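A quick numeric check of that $\sigma \sqrt d$ rule of thumb (a standalone numpy sketch with made-up values of $d$ and $\sigma$, independent of the model):

```python
import numpy as np

d, sigma = 30, 0.5
samples = np.random.normal(0, sigma, size=(10000, d))
print(np.linalg.norm(samples, axis=1).mean())  # ~2.72 on average
print(sigma * np.sqrt(d))                      # ~2.74, the rule-of-thumb value
```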
Embedding visualization

Since it is hard to visualize embeddings in a higher-dimensional space (when the embedding dimension $k > 3$), one approach is to project the embeddings to a lower-dimensional space. t-SNE (t-distributed Stochastic Neighbor Embedding) is an algorithm that projects the embeddings while attempting to preserve their pairwise distances. It can be useful for visualization, but one should use it with care. For more information on using t-SNE, see [How to Use t-SNE Effectively](https://distill.pub/2016/misread-tsne/). | tsne_movie_embeddings(model_lowinit) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
You can highlight the embeddings of a given genre by clicking on the genres panel (SHIFT+click to select multiple genres).

We can observe that the embeddings do not seem to have any notable structure, and the embeddings of a given genre are located all over the embedding space. This confirms the poor quality of the learned embeddings. One of the main reasons, which we will address in the next section, is that we only trained the model on observed pairs, and without regularization.

V. Regularization In Matrix Factorization

In the previous section, our loss was defined as the mean squared error on the observed part of the rating matrix. As discussed in the lecture, this can be problematic as the model does not learn how to place the embeddings of irrelevant movies. This phenomenon is known as *folding*.

We will add regularization terms that will address this issue. We will use two types of regularization:
- Regularization of the model parameters. This is a common $\ell_2$ regularization term on the embedding matrices, given by $r(U, V) = \frac{1}{N} \sum_i \|U_i\|^2 + \frac{1}{M}\sum_j \|V_j\|^2$.
- A global prior that pushes the prediction of any pair towards zero, called the *gravity* term. This is given by $g(U, V) = \frac{1}{MN} \sum_{i = 1}^N \sum_{j = 1}^M \langle U_i, V_j \rangle^2$.

The total loss is then given by

$$\frac{1}{|\Omega|}\sum_{(i, j) \in \Omega} (A_{ij} - \langle U_i, V_j\rangle)^2 + \lambda_r r(U, V) + \lambda_g g(U, V)$$

where $\lambda_r$ and $\lambda_g$ are two regularization coefficients (hyper-parameters).

Exercise 6: Build a regularized Matrix Factorization model and train it

Write a function that builds a regularized model. You are given a function `gravity(U, V)` that computes the gravity term given the two embedding matrices $U$ and $V$. | def gravity(U, V):
"""Creates a gravity loss given two embedding matrices."""
return 1. / (U.shape[0].value*V.shape[0].value) * tf.reduce_sum(
tf.matmul(U, U, transpose_a=True) * tf.matmul(V, V, transpose_a=True))
def build_regularized_model(
ratings, embedding_dim=3, regularization_coeff=.1, gravity_coeff=1.,
init_stddev=0.1):
"""
Args:
ratings: the DataFrame of movie ratings.
embedding_dim: The dimension of the embedding space.
regularization_coeff: The regularization coefficient lambda.
gravity_coeff: The gravity regularization coefficient lambda_g.
Returns:
A CFModel object that uses a regularized loss.
"""
# Split the ratings DataFrame into train and test.
train_ratings, test_ratings = split_dataframe(ratings)
# SparseTensor representation of the train and test datasets.
A_train = build_rating_sparse_tensor(train_ratings)
A_test = build_rating_sparse_tensor(test_ratings)
U = tf.Variable(tf.random_normal(
[A_train.dense_shape[0], embedding_dim], stddev=init_stddev))
V = tf.Variable(tf.random_normal(
[A_train.dense_shape[1], embedding_dim], stddev=init_stddev))
# ========================= Complete this section ============================
# error_train =
# error_test =
# gravity_loss =
# regularization_loss =
# ============================================================================
total_loss = error_train + regularization_loss + gravity_loss
losses = {
'train_error': error_train,
'test_error': error_test,
}
loss_components = {
'observed_loss': error_train,
'regularization_loss': regularization_loss,
'gravity_loss': gravity_loss,
}
embeddings = {"user_id": U, "movie_id": V}
return CFModel(embeddings, total_loss, [losses, loss_components])
# @title Solution
def gravity(U, V):
"""Creates a gravity loss given two embedding matrices."""
return 1. / (U.shape[0].value*V.shape[0].value) * tf.reduce_sum(
tf.matmul(U, U, transpose_a=True) * tf.matmul(V, V, transpose_a=True))
def build_regularized_model(
ratings, embedding_dim=3, regularization_coeff=.1, gravity_coeff=1.,
init_stddev=0.1):
"""
Args:
ratings: the DataFrame of movie ratings.
embedding_dim: The dimension of the embedding space.
regularization_coeff: The regularization coefficient lambda.
gravity_coeff: The gravity regularization coefficient lambda_g.
Returns:
A CFModel object that uses a regularized loss.
"""
# Split the ratings DataFrame into train and test.
train_ratings, test_ratings = split_dataframe(ratings)
# SparseTensor representation of the train and test datasets.
A_train = build_rating_sparse_tensor(train_ratings)
A_test = build_rating_sparse_tensor(test_ratings)
U = tf.Variable(tf.random_normal(
[A_train.dense_shape[0], embedding_dim], stddev=init_stddev))
V = tf.Variable(tf.random_normal(
[A_train.dense_shape[1], embedding_dim], stddev=init_stddev))
error_train = sparse_mean_square_error(A_train, U, V)
error_test = sparse_mean_square_error(A_test, U, V)
gravity_loss = gravity_coeff * gravity(U, V)
regularization_loss = regularization_coeff * (
tf.reduce_sum(U*U)/U.shape[0].value + tf.reduce_sum(V*V)/V.shape[0].value)
total_loss = error_train + regularization_loss + gravity_loss
losses = {
'train_error_observed': error_train,
'test_error_observed': error_test,
}
loss_components = {
'observed_loss': error_train,
'regularization_loss': regularization_loss,
'gravity_loss': gravity_loss,
}
embeddings = {"user_id": U, "movie_id": V}
return CFModel(embeddings, total_loss, [losses, loss_components]) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
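Note that the `gravity` implementation above relies on the identity $\sum_{i,j} \langle U_i, V_j\rangle^2 = \sum_{a,b} (U^\top U)_{ab}\,(V^\top V)_{ab}$, which avoids materializing the full $N \times M$ prediction matrix. A small numpy check of that identity (toy matrices, not the trained embeddings):

```python
import numpy as np

rng = np.random.RandomState(0)
U = rng.normal(size=(5, 3))  # 5 toy "users", embedding dim 3
V = rng.normal(size=(7, 3))  # 7 toy "movies"

direct = np.sum(U.dot(V.T) ** 2) / (U.shape[0] * V.shape[0])
factored = np.sum(U.T.dot(U) * V.T.dot(V)) / (U.shape[0] * V.shape[0])
print(np.allclose(direct, factored))  # True
```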
It is now time to train the regularized model! You can try different values of the regularization coefficients, and different embedding dimensions. | reg_model = build_regularized_model(
ratings, regularization_coeff=0.1, gravity_coeff=1.0, embedding_dim=35,
init_stddev=.05)
reg_model.train(num_iterations=2000, learning_rate=20.) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Observe that adding the regularization terms results in a higher MSE, both on the training and test set. However, as we will see, the quality of the recommendations improves. This highlights a tension between fitting the observed data and minimizing the regularization terms. Fitting the observed data often emphasizes learning high similarity (between items with many interactions), but a good embedding representation also requires learning low similarity (between items with few or no interactions).

Inspect the results

Let's see if the results with regularization look better. | user_recommendations(reg_model, DOT, exclude_rated=True, k=10) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Hopefully, these recommendations look better. You can change the similarity measure from COSINE to DOT and observe how this affects the recommendations.Since the model is likely to recommend items that you rated highly, you have the option to exclude the items you rated, using `exclude_rated=True`.In the following cells, we display the nearest neighbors, the embedding norms, and the t-SNE projection of the movie embeddings. | movie_neighbors(reg_model, "Aladdin", DOT)
movie_neighbors(reg_model, "Aladdin", COSINE) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Here we compare the embedding norms for `model` and `reg_model`. Selecting a subset of the embeddings will highlight them on both charts simultaneously. | movie_embedding_norm([model, model_lowinit, reg_model])
# Visualize the embeddings
tsne_movie_embeddings(reg_model) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
We should observe that the embeddings have a lot more structure than in the unregularized case. Try selecting different genres and observe how they tend to form clusters (for example Horror, Animation and Children).

Conclusion

This concludes this section on matrix factorization models. Note that while the scale of the problem is small enough to allow efficient training using SGD, many practical problems need to be trained using more specialized algorithms such as Alternating Least Squares (see [tf.contrib.factorization.WALSMatrixFactorization](https://www.tensorflow.org/api_docs/python/tf/contrib/factorization/WALSMatrixFactorization) for a TF implementation).

VI. Softmax model

In this section, we will train a simple softmax model that predicts whether a given user has rated a movie.

**Note**: if you are taking the self-study version of the class, make sure to read through the part of the class covering Softmax training before working on this part.

The model will take as input a feature vector $x$ representing the list of movies the user has rated. We start from the ratings DataFrame, which we group by user_id. | rated_movies = (ratings[["user_id", "movie_id"]]
.groupby("user_id", as_index=False)
.aggregate(lambda x: list(x)))
rated_movies.head() | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
We then create a function that generates an example batch, such that each example contains the following features:- movie_id: A tensor of strings of the movie ids that the user rated.- genre: A tensor of strings of the genres of those movies- year: A tensor of strings of the release year. | #@title Batch generation code (run this cell)
years_dict = {
movie: year for movie, year in zip(movies["movie_id"], movies["year"])
}
genres_dict = {
movie: genres.split('-')
for movie, genres in zip(movies["movie_id"], movies["all_genres"])
}
def make_batch(ratings, batch_size):
"""Creates a batch of examples.
Args:
ratings: A DataFrame of ratings such that examples["movie_id"] is a list of
movies rated by a user.
batch_size: The batch size.
"""
def pad(x, fill):
return pd.DataFrame.from_dict(x).fillna(fill).values
movie = []
year = []
genre = []
label = []
for movie_ids in ratings["movie_id"].values:
movie.append(movie_ids)
genre.append([x for movie_id in movie_ids for x in genres_dict[movie_id]])
year.append([years_dict[movie_id] for movie_id in movie_ids])
label.append([int(movie_id) for movie_id in movie_ids])
features = {
"movie_id": pad(movie, ""),
"year": pad(year, ""),
"genre": pad(genre, ""),
"label": pad(label, -1)
}
batch = (
tf.data.Dataset.from_tensor_slices(features)
.shuffle(1000)
.repeat()
.batch(batch_size)
.make_one_shot_iterator()
.get_next())
return batch
def select_random(x):
"""Selectes a random elements from each row of x."""
def to_float(x):
return tf.cast(x, tf.float32)
def to_int(x):
return tf.cast(x, tf.int64)
batch_size = tf.shape(x)[0]
rn = tf.range(batch_size)
nnz = to_float(tf.count_nonzero(x >= 0, axis=1))
rnd = tf.random_uniform([batch_size])
ids = tf.stack([to_int(rn), to_int(nnz * rnd)], axis=1)
return to_int(tf.gather_nd(x, ids))
| _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Loss functionRecall that the softmax model maps the input features $x$ to a user embedding $\psi(x) \in \mathbb R^d$, where $d$ is the embedding dimension. This vector is then multiplied by a movie embedding matrix $V \in \mathbb R^{m \times d}$ (where $m$ is the number of movies), and the final output of the model is the softmax of the product$$\hat p(x) = \text{softmax}(\psi(x) V^\top).$$Given a target label $y$, if we denote by $p = 1_y$ a one-hot encoding of this target label, then the loss is the cross-entropy between $\hat p(x)$ and $p$. Exercise 7: Write a loss function for the softmax model.In this exercise, we will write a function that takes tensors representing the user embeddings $\psi(x)$, movie embeddings $V$, target label $y$, and return the cross-entropy loss.Hint: You can use the function [`tf.nn.sparse_softmax_cross_entropy_with_logits`](https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits), which takes `logits` as input, where `logits` refers to the product $\psi(x) V^\top$. | def softmax_loss(user_embeddings, movie_embeddings, labels):
"""Returns the cross-entropy loss of the softmax model.
Args:
user_embeddings: A tensor of shape [batch_size, embedding_dim].
movie_embeddings: A tensor of shape [num_movies, embedding_dim].
labels: A sparse tensor of dense_shape [batch_size, 1], such that
labels[i] is the target label for example i.
Returns:
The mean cross-entropy loss.
"""
# ========================= Complete this section ============================
# logits =
# loss =
# ============================================================================
return loss
# @title Solution
def softmax_loss(user_embeddings, movie_embeddings, labels):
"""Returns the cross-entropy loss of the softmax model.
Args:
user_embeddings: A tensor of shape [batch_size, embedding_dim].
movie_embeddings: A tensor of shape [num_movies, embedding_dim].
labels: A tensor of [batch_size], such that labels[i] is the target label
for example i.
Returns:
The mean cross-entropy loss.
"""
  # Verify that the embeddings have compatible dimensions
user_emb_dim = user_embeddings.shape[1].value
movie_emb_dim = movie_embeddings.shape[1].value
if user_emb_dim != movie_emb_dim:
raise ValueError(
"The user embedding dimension %d should match the movie embedding "
"dimension % d" % (user_emb_dim, movie_emb_dim))
logits = tf.matmul(user_embeddings, movie_embeddings, transpose_b=True)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels))
return loss | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
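As a quick sanity check of the solution above, the short sketch below evaluates `softmax_loss` on random embeddings in a TF1-style graph and session (the batch size, embedding dimension, number of movies, and label values here are made-up illustration values, not part of the original exercise).
# Illustrative only: random embeddings with made-up sizes, using the TF1 graph-mode API assumed by this notebook.
with tf.Graph().as_default():
  user_emb = tf.random_normal([4, 8])    # pretend batch_size=4, embedding_dim=8
  movie_emb = tf.random_normal([10, 8])  # pretend num_movies=10
  labels = tf.constant([3, 0, 7, 2], dtype=tf.int64)
  loss = softmax_loss(user_emb, movie_emb, labels)
  with tf.Session() as sess:
    print(sess.run(loss))  # a single scalar cross-entropy value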
Exercise 8: Build a softmax model, train it, and inspect its embeddings.We are now ready to build a softmax CFModel. Complete the `build_softmax_model` function in the next cell. The architecture of the model is defined in the function `create_user_embeddings` and illustrated in the figure below. The input embeddings (movie_id, genre and year) are concatenated to form the input layer, then we have hidden layers with dimensions specified by the `hidden_dims` argument. Finally, the last hidden layer is multiplied by the movie embeddings to obtain the logits layer. For the target label, we will use a randomly-sampled movie_id from the list of movies the user rated.Complete the function below by creating the feature columns and embedding columns, then creating the loss tensors both for the train and test sets (using the `softmax_loss` function of the previous exercise). | def build_softmax_model(rated_movies, embedding_cols, hidden_dims):
"""Builds a Softmax model for MovieLens.
Args:
    rated_movies: DataFrame of training examples.
embedding_cols: A dictionary mapping feature names (string) to embedding
column objects. This will be used in tf.feature_column.input_layer() to
create the input layer.
hidden_dims: int list of the dimensions of the hidden layers.
Returns:
A CFModel object.
"""
def create_network(features):
"""Maps input features dictionary to user embeddings.
Args:
features: A dictionary of input string tensors.
Returns:
outputs: A tensor of shape [batch_size, embedding_dim].
"""
# Create a bag-of-words embedding for each sparse feature.
inputs = tf.feature_column.input_layer(features, embedding_cols)
# Hidden layers.
input_dim = inputs.shape[1].value
for i, output_dim in enumerate(hidden_dims):
w = tf.get_variable(
"hidden%d_w_" % i, shape=[input_dim, output_dim],
initializer=tf.truncated_normal_initializer(
stddev=1./np.sqrt(output_dim))) / 10.
outputs = tf.matmul(inputs, w)
input_dim = output_dim
inputs = outputs
return outputs
train_rated_movies, test_rated_movies = split_dataframe(rated_movies)
train_batch = make_batch(train_rated_movies, 200)
test_batch = make_batch(test_rated_movies, 100)
with tf.variable_scope("model", reuse=False):
# Train
train_user_embeddings = create_network(train_batch)
train_labels = select_random(train_batch["label"])
with tf.variable_scope("model", reuse=True):
# Test
test_user_embeddings = create_network(test_batch)
test_labels = select_random(test_batch["label"])
movie_embeddings = tf.get_variable(
"input_layer/movie_id_embedding/embedding_weights")
# ========================= Complete this section ============================
# train_loss =
# test_loss =
# test_precision_at_10 =
# ============================================================================
metrics = (
{"train_loss": train_loss, "test_loss": test_loss},
{"test_precision_at_10": test_precision_at_10}
)
embeddings = {"movie_id": movie_embeddings}
return CFModel(embeddings, train_loss, metrics)
# @title Solution
def build_softmax_model(rated_movies, embedding_cols, hidden_dims):
"""Builds a Softmax model for MovieLens.
Args:
    rated_movies: DataFrame of training examples.
embedding_cols: A dictionary mapping feature names (string) to embedding
column objects. This will be used in tf.feature_column.input_layer() to
create the input layer.
hidden_dims: int list of the dimensions of the hidden layers.
Returns:
A CFModel object.
"""
def create_network(features):
"""Maps input features dictionary to user embeddings.
Args:
features: A dictionary of input string tensors.
Returns:
outputs: A tensor of shape [batch_size, embedding_dim].
"""
# Create a bag-of-words embedding for each sparse feature.
inputs = tf.feature_column.input_layer(features, embedding_cols)
# Hidden layers.
input_dim = inputs.shape[1].value
for i, output_dim in enumerate(hidden_dims):
w = tf.get_variable(
"hidden%d_w_" % i, shape=[input_dim, output_dim],
initializer=tf.truncated_normal_initializer(
stddev=1./np.sqrt(output_dim))) / 10.
outputs = tf.matmul(inputs, w)
input_dim = output_dim
inputs = outputs
return outputs
train_rated_movies, test_rated_movies = split_dataframe(rated_movies)
train_batch = make_batch(train_rated_movies, 200)
test_batch = make_batch(test_rated_movies, 100)
with tf.variable_scope("model", reuse=False):
# Train
train_user_embeddings = create_network(train_batch)
train_labels = select_random(train_batch["label"])
with tf.variable_scope("model", reuse=True):
# Test
test_user_embeddings = create_network(test_batch)
test_labels = select_random(test_batch["label"])
movie_embeddings = tf.get_variable(
"input_layer/movie_id_embedding/embedding_weights")
test_loss = softmax_loss(
test_user_embeddings, movie_embeddings, test_labels)
train_loss = softmax_loss(
train_user_embeddings, movie_embeddings, train_labels)
_, test_precision_at_10 = tf.metrics.precision_at_k(
labels=test_labels,
predictions=tf.matmul(test_user_embeddings, movie_embeddings, transpose_b=True),
k=10)
metrics = (
{"train_loss": train_loss, "test_loss": test_loss},
{"test_precision_at_10": test_precision_at_10}
)
embeddings = {"movie_id": movie_embeddings}
return CFModel(embeddings, train_loss, metrics) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Train the Softmax modelWe are now ready to train the softmax model. You can set the following hyperparameters:- learning rate- number of iterations. Note: you can run `softmax_model.train()` again to continue training the model from its current state.- input embedding dimensions (the `input_dims` argument)- number of hidden layers and size of each layer (the `hidden_dims` argument)Note: since our input features are string-valued (movie_id, genre, and year), we need to map them to integer ids. This is done using [`tf.feature_column.categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), which takes a vocabulary list specifying all the values the feature can take. Then each id is mapped to an embedding vector using [`tf.feature_column.embedding_column`](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column). | # Create feature embedding columns
def make_embedding_col(key, embedding_dim):
categorical_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=key, vocabulary_list=list(set(movies[key].values)), num_oov_buckets=0)
return tf.feature_column.embedding_column(
categorical_column=categorical_col, dimension=embedding_dim,
      # default initializer: truncated normal with stddev=1/sqrt(dimension)
combiner='mean')
with tf.Graph().as_default():
softmax_model = build_softmax_model(
rated_movies,
embedding_cols=[
make_embedding_col("movie_id", 35),
make_embedding_col("genre", 3),
make_embedding_col("year", 2),
],
hidden_dims=[35])
softmax_model.train(
learning_rate=8., num_iterations=3000, optimizer=tf.train.AdagradOptimizer) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
Inspect the embeddingsWe can inspect the movie embeddings as we did for the previous models. Note that in this case, the movie embeddings are used at the same time as input embeddings (for the bag of words representation of the user history), and as softmax weights. | movie_neighbors(softmax_model, "Aladdin", DOT)
movie_neighbors(softmax_model, "Aladdin", COSINE)
movie_embedding_norm([reg_model, softmax_model])
tsne_movie_embeddings(softmax_model) | _____no_output_____ | Apache-2.0 | ml/recommendation-systems/recommendation-systems.ipynb | howl-anderson/eng-edu |
**[Machine Learning Course Home Page](kaggle.com/learn/machine-learning).**--- Selecting Data for ModelingYour dataset had too many variables to wrap your head around, or even to print out nicely. How can you pare down this overwhelming amount of data to something you can understand?We'll start by picking a few variables using our intuition. Later courses will show you statistical techniques to automatically prioritize variables.To choose variables/columns, we'll need to see a list of all columns in the dataset. That is done with the **columns** property of the DataFrame (the bottom line of code below). | import pandas as pd
melbourne_file_path = '../input/melbourne-housing-snapshot/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
melbourne_data.columns
# The Melbourne data has some missing values (some houses for which some variables weren't recorded.)
# We'll learn to handle missing values in a later tutorial.
# Your Iowa data doesn't have missing values in the columns you use.
# So we will take the simplest option for now, and drop houses from our data.
# Don't worry about this much for now, though the code is:
# dropna drops missing values (think of na as "not available")
melbourne_data = melbourne_data.dropna(axis=0) | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
There are many ways to select a subset of your data. The [Pandas Course](https://www.kaggle.com/learn/pandas) covers these in more depth, but we will focus on two approaches for now.1. Dot notation, which we use to select the "prediction target"2. Selecting with a column list, which we use to select the Selecting The Prediction Target You can pull out a variable with **dot-notation**. This single column is stored in a **Series**, which is broadly like a DataFrame with only a single column of data. We'll use the dot notation to select the column we want to predict, which is called the **prediction target**. By convention, the prediction target is called **y**. So the code we need to save the house prices in the Melbourne data is | y = melbourne_data.Price | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
Choosing "Features"The columns that are inputted into our model (and later used to make predictions) are called "features." In our case, those would be the columns used to determine the home price. Sometimes, you will use all columns except the target as features. Other times you'll be better off with fewer features. For now, we'll build a model with only a few features. Later on you'll see how to iterate and compare models built with different features.We select multiple features by providing a list of column names inside brackets. Each item in that list should be a string (with quotes).Here is an example: | melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'Lattitude', 'Longtitude'] | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
By convention, this data is called **X**. | X = melbourne_data[melbourne_features] | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
Let's quickly review the data we'll be using to predict house prices using the `describe` method and the `head` method, which shows the top few rows. | X.describe()
X.head() | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
Visually checking your data with these commands is an important part of a data scientist's job. You'll frequently find surprises in the dataset that deserve further inspection. --- Building Your ModelYou will use the **scikit-learn** library to create your models. When coding, this library is written as **sklearn**, as you will see in the sample code. Scikit-learn is easily the most popular library for modeling the types of data typically stored in DataFrames. The steps to building and using a model are:* **Define:** What type of model will it be? A decision tree? Some other type of model? Some other parameters of the model type are specified too.* **Fit:** Capture patterns from provided data. This is the heart of modeling.* **Predict:** Just what it sounds like* **Evaluate**: Determine how accurate the model's predictions are.Here is an example of defining a decision tree model with scikit-learn and fitting it with the features and target variable. | from sklearn.tree import DecisionTreeRegressor
# Define model. Specify a number for random_state to ensure same results each run
melbourne_model = DecisionTreeRegressor(random_state=1)
# Fit model
melbourne_model.fit(X, y) | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
Many machine learning models allow some randomness in model training. Specifying a number for `random_state` ensures you get the same results in each run. This is considered a good practice. You can use any number, and model quality won't depend meaningfully on exactly what value you choose. We now have a fitted model that we can use to make predictions. In practice, you'll want to make predictions for new houses coming on the market rather than the houses we already have prices for. But we'll make predictions for the first few rows of the training data to see how the predict function works. | print("Making predictions for the following 5 houses:")
print(X.head())
print("The predictions are")
print(melbourne_model.predict(X.head())) | _____no_output_____ | Apache-2.0 | learntools/machine_learning/nbs/tut3.ipynb | rosbo/learntools |
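The fourth step listed above, evaluation, is not shown in this tutorial section. As a minimal sketch, you could compare the model's predictions against the true prices with scikit-learn's `mean_absolute_error`; note that scoring on the training data like this flatters the model, and a proper evaluation uses held-out data.
from sklearn.metrics import mean_absolute_error

# In-sample error only, for illustration; a real evaluation would use a validation set
predicted_home_prices = melbourne_model.predict(X)
print(mean_absolute_error(y, predicted_home_prices))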
Assignment:Beat the performance of my Lasso regression by **using different feature engineering steps ONLY!!**.The performance of my current model, as shown in this notebook is:- test rmse: 44798.497576784845- test r2: 0.7079639526659389To beat my model you will need a test r2 bigger than 0.71 and a rmse smaller than 44798. Conditions:- You MUST NOT change the hyperparameters of the Lasso.- You MUST use the same seeds in Lasso and train_test_split as I show in this notebook (random_state)- You MUST use all the features of the dataset (except Id) - you MUST NOT select features If you beat my model:Make a pull request with your notebook to this github repo:https://github.com/solegalli/udemy-feml-challengeRemember that you need to fork this repo first, upload your winning notebook to your repo, and then make a PR (pull request) to my repo. I will then revise and accept the PR, which will appear in my repo and be available to all the students in the course. This way, other students can learn from your creativity when transforming the variables in your dataset. Summary of my resultsMain changes:- calculate `elapsed_years` with respect to `YearBuilt` instead of `YrSold`- OneHot encoding of categorical variables- do not discretize continuous numerical variables- used ScikitLearn instead of Feature-EngineResults on the test set:- rmse = 38063.04673161993- r2 = 0.7891776453011499 House Prices dataset | from math import sqrt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for the model
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.metrics import mean_squared_error, r2_score
# for feature engineering
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler, OneHotEncoder, PowerTransformer
# from feature_engine import missing_data_imputers as mdi
# from feature_engine import discretisers as dsc
# from feature_engine import categorical_encoders as ce | _____no_output_____ | BSD-3-Clause | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge |
Load Datasets | # load dataset
data = pd.read_csv('../houseprice.csv')
# make lists of variable types
categorical_vars = [var for var in data.columns if data[var].dtype == 'O']
year_vars = [var for var in data.columns if 'Yr' in var or 'Year' in var]
discrete_vars = [
var for var in data.columns if data[var].dtype != 'O'
and len(data[var].unique()) < 15 and var not in year_vars
]
numerical_vars = [
var for var in data.columns if data[var].dtype != 'O'
if var not in discrete_vars and var not in ['Id', 'SalePrice']
and var not in year_vars
]
print('There are {} continuous variables'.format(len(numerical_vars)))
print('There are {} discrete variables'.format(len(discrete_vars)))
print('There are {} temporal variables'.format(len(year_vars)))
print('There are {} categorical variables'.format(len(categorical_vars))) | There are 19 continuous variables
There are 13 discrete variables
There are 4 temporal variables
There are 43 categorical variables
| BSD-3-Clause | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge |
Separate train and test set | # IMPORTANT: keep the random_state to zero for reproducibility
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(data.drop(
['Id', 'SalePrice'], axis=1),
data['SalePrice'],
test_size=0.1,
random_state=0)
# calculate elapsed time
def elapsed_years(df, var):
# capture difference between year variable and year the house was *built*
df[var] = df[var] - df['YearBuilt']
return df
for var in ['YrSold', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# drop YrSold
X_train.drop('YearBuilt', axis=1, inplace=True)
X_test.drop('YearBuilt', axis=1, inplace=True)
# capture the column names for use later in the notebook
final_columns = X_train.columns | _____no_output_____ | BSD-3-Clause | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge |
Feature Engineering Pipeline | ## functions to encode rare categories
def find_non_rare_labels(df, variable, tolerance):
temp = df.groupby([variable])[variable].count()/len(df)
non_rare = [x for x in temp.loc[temp>tolerance].index.values]
return non_rare
def rare_encoding(X_train, X_test, variable, tolerance):
X_train = X_train.copy()
X_test = X_test.copy()
    # find the categories that are not rare (frequency above the tolerance)
frequent_cat = find_non_rare_labels(X_train, variable, tolerance)
# re-group rare labels
X_train[variable] = np.where(X_train[variable].isin(
frequent_cat), X_train[variable], 'Rare')
X_test[variable] = np.where(X_test[variable].isin(
frequent_cat), X_test[variable], 'Rare')
return X_train, X_test
## encoding rare categories
for var in categorical_vars+discrete_vars:
X_train, X_test = rare_encoding(X_train, X_test, var, 0.05)
## building our pipeline using scikit-learn
numeric_transformer = Pipeline(steps=[
('imputer_num', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
])
categorical_transformer = Pipeline(steps=[
('imputer_cat', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot_enc', OneHotEncoder(drop='first'))])
discrete_transformer = Pipeline(steps=[
('imputer_disc', SimpleImputer(strategy='most_frequent')),
('onehot_enc', OneHotEncoder(drop='first'))
])
preprocessor = ColumnTransformer(transformers=[
('num', numeric_transformer, numerical_vars),
('cat', categorical_transformer, categorical_vars),
('disc', discrete_transformer, discrete_vars)
])
house_pipe = Pipeline(steps=[('preprocessor', preprocessor),
('lasso', Lasso(random_state=0))])
# let's fit the pipeline
house_pipe.fit(X_train, y_train)
# let's get the predictions
X_train_preds = house_pipe.predict(X_train)
X_test_preds = house_pipe.predict(X_test)
# check model performance:
print('train mse: {}'.format(mean_squared_error(y_train, X_train_preds)))
print('train rmse: {}'.format(sqrt(mean_squared_error(y_train, X_train_preds))))
print('train r2: {}'.format(r2_score(y_train, X_train_preds)))
print()
print('test mse: {}'.format(mean_squared_error(y_test, X_test_preds)))
print('test rmse: {}'.format(sqrt(mean_squared_error(y_test, X_test_preds))))
print('test r2: {}'.format(r2_score(y_test, X_test_preds))) | train mse: 684649908.3271698
train rmse: 26165.815644217357
train r2: 0.8903477989380937
test mse: 1448795526.4934826
test rmse: 38063.04673161993
test r2: 0.7891776453011499
| BSD-3-Clause | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge |
We see an improvement in both rmse and r2 score with respect to the baseline, as desired :) | # plot predictions vs real value
plt.scatter(y_test,X_test_preds)
plt.xlabel('True Price')
plt.ylabel('Predicted Price')
plt.xlim(0,800000)
plt.ylim(0,800000); | _____no_output_____ | BSD-3-Clause | 13-Assignement_Gian.ipynb | rahuliitb/udemy-feml-challenge |
Combining and merging dataframes Setup | # Connect my Google Drive to Google Colab
from google.colab import drive
drive.mount ('/content/gdrive')
# Change the working directory -Alex's version as he has a shared folder
# %cd /content/gdrive/MyDrive/swd1a-python-2021-10
# Change the working directory - Martin's version
%cd /content/gdrive/MyDrive/Colab Notebooks/arc_training/swd1a-python-2021-10
# Check the working directory
!pwd
# Check the contents
! ls -l | total 394
-rw------- 1 root root 35824 Oct 18 13:50 010_starting_with_python.ipynb
-rw------- 1 root root 96573 Oct 25 14:07 020_starting_with_data.ipynb
-rw------- 1 root root 216480 Oct 25 15:00 030_indexing_and_types.ipynb
-rw------- 1 root root 49257 Nov 1 13:56 040_dataframes.ipynb
drwx------ 2 root root 4096 Oct 18 14:12 data
| CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
Combining dataframes | # import pandas
import pandas as pd
surveys_df = pd.read_csv("data/surveys.csv", keep_default_na=False, na_values=[""])
surveys_df.head()
species_df = pd.read_csv("data/species.csv", keep_default_na=False, na_values=[""])
species_df.head()
# make some fragments of surveys_df
surveys_sub = surveys_df.head(10)
surveys_sub_last10 = surveys_df.tail(10)
surveys_sub_last10
surveys_sub
surveys_sub_last10.reset_index(drop=True)
surveys_sub_last10 = surveys_sub_last10.reset_index(drop=True)
pd.concat([surveys_sub, surveys_sub_last10], axis=0)
pd.concat([surveys_sub, surveys_sub_last10], axis=1)
concat_df = pd.concat([surveys_sub, surveys_sub_last10], axis=0)
concat_df.iloc[0, 1:5]
concat_df.loc[0, "hindfoot_length"]
concat_df.reset_index(drop=True, inplace=True)
concat_df
concat_df.loc[0, "hindfoot_length"]
concat_df.to_csv("data/master_surveys.csv", index=False)
| _____no_output_____ | CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
Joining dataframes together Combining DataFrames using a common field is called “joining”. The columns containing the common values are called “join key(s)”. Joining DataFrames in this way is often useful when one DataFrame is a “lookup table” containing additional data that we want to include in the other.NOTE: This process of joining tables is similar to what we do with tables in an SQL database.For example, the `species.csv` file that we’ve been working with is a lookup table. This table contains the genus, species and taxa code for 55 species. The species code is unique for each line. These species are identified in our survey data as well using the unique species code. Rather than adding 3 more columns for the genus, species and taxa to each of the 35,549 line Survey data table, we can maintain the shorter table with the species information. When we want to access that information, we can create a query that joins the additional columns of information to the Survey data.Storing data in this way has many benefits including:* It ensures consistency in the spelling of species attributes (genus, species and taxa) given each species is only entered once. Imagine the possibilities for spelling errors when entering the genus and species thousands of times!* It also makes it easy for us to make changes to the species information once without having to find each instance of it in the larger survey data.* It optimises the size of our data, we can reduce duplication and by doing so reduce the opportunity for some types of error to appear. | # Lets get some data
# Read in 10 lines of the surveys table
# import pandas first
import pandas as pd
surveys_df = pd.read_csv("data/surveys.csv", keep_default_na=False, na_values=[""])
survey_sub = surveys_df.head(10)
# Grab a small subset of the species data
species_sub = pd.read_csv('data/speciesSubset.csv', keep_default_na=False, na_values=[""])
# Identify join keys
species_sub.columns
survey_sub.columns | _____no_output_____ | CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
Inner JoinAn inner join combines two DataFrames based on a join key and returns a new DataFrame that contains only those rows that have matching values in both of the original DataFrames.  | # We do an inner join with the pandas 'merge' function
merged_inner = pd.merge(left=survey_sub, right=species_sub, left_on='species_id', right_on='species_id')
# Take a look at the data
merged_inner.shape
merged_inner | _____no_output_____ | CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
Left joinsWhat if we want to add information from species_sub to survey_sub without losing any of the information from survey_sub? In this case, we use a different type of join called a “left outer join”, or a “left join”.Like an inner join, a left join uses join keys to combine two DataFrames. Unlike an inner join, a left join will return all of the rows from the left DataFrame, even those rows whose join key(s) do not have values in the right DataFrame. Rows in the left DataFrame that are missing values for the join key(s) in the right DataFrame will simply have null (i.e., NaN or None) values for those columns in the resulting joined DataFrame.Note: a left join will still discard rows from the right DataFrame that do not have values for the join key(s) in the left DataFrame.  | merged_left = pd.merge(left=survey_sub, right=species_sub, how='left', left_on='species_id',
right_on='species_id')
merged_left
# If we wanted to find the rows with missing species data:
merged_left [pd.isnull(merged_left.genus)] | _____no_output_____ | CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
The pandas merge function supports two other join types:Right (outer) join: Invoked by passing how='right' as an argument. Similar to a left join, except all rows from the right DataFrame are kept, while rows from the left DataFrame without matching join key(s) values are discarded.Full (outer) join: Invoked by passing how='outer' as an argument. This join type returns all rows from both DataFrames, matched where possible; i.e., the resulting DataFrame will contain NaN where data is missing in one of the DataFrames. This join type is very rarely used. A short sketch of both joins is shown after the challenge code below. | # Challenge 1: Distributions
# Create a new dataframe
# by joining the contents of the surveys.csv and species.csv tables
merged_left = pd.merge (left = surveys_df, right=species_df, how="left",
on="species_id")
merged_left.shape
# Calculate and plot distribution of:
# 1. taxa per plot (number of species of each taxa per plot)
merged_left.groupby(["plot_id"])["taxa"].nunique().plot(kind="bar");
# 2. taxa by sex by plot
# Replace any NaN values of sex with a more meaningful indeterminate value
merged_left.loc[merged_left["sex"].isnull(), "sex"] = 'M|F'
# Number of taxa for each plot/sex combination
ntaxa_sex_site = merged_left.groupby(['plot_id', 'sex'])['taxa'].nunique().reset_index(level=1)
ntaxa_sex_site = ntaxa_sex_site.pivot_table (values = 'taxa', columns='sex', index=ntaxa_sex_site.index)
import matplotlib.pyplot as plt
ntaxa_sex_site.plot(kind='bar', legend=False)
plt.legend(loc='upper center', ncol=3, bbox_to_anchor=(0.5, 1.08), fontsize='small', frameon=False);
# Challenge 2: Diversity Index | _____no_output_____ | CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
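Before moving on to the diversity index, here is a minimal sketch of the right and full (outer) joins described above, reusing the `survey_sub` and `species_sub` tables from earlier (the resulting shapes depend on those subsets):
# Right join: keep every row of species_sub, attaching matching survey_sub rows where they exist
merged_right = pd.merge(left=survey_sub, right=species_sub, how='right', on='species_id')

# Full outer join: keep all rows from both DataFrames, with NaN wherever one side has no match
merged_outer = pd.merge(left=survey_sub, right=species_sub, how='outer', on='species_id')

print(merged_right.shape, merged_outer.shape)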
You can calculate a biodiversity index as: the number of species in the plot / the total number of individuals in the plot = Biodiversity index | plot_info = pd.read_csv("data/plots.csv")
plot_info.groupby("plot_type").count()
# Diversity index
merged_site_type = pd.merge(merged_left, plot_info, on='plot_id')
# For each plot, get the number of species for each plot
nspecies_site = merged_site_type.groupby(["plot_id"])["species"].nunique().rename("nspecies")
# For each plot, get the number of individuals
nindividuals_site = merged_site_type.groupby(["plot_id"]).count()['record_id'].rename("nindiv")
# combine the two series
diversity_index = pd.concat([nspecies_site, nindividuals_site], axis=1)
diversity_index
# calculate the diversity index
diversity_index['diversity'] = diversity_index['nspecies']/diversity_index['nindiv']
# Bar chart
diversity_index['diversity'].plot(kind="barh");
| _____no_output_____ | CC-BY-4.0 | notebooks/040_dataframes.ipynb | ARCTraining/python-2021-04 |
Table of Contents: time serie, outlier detection | import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from statsmodels.graphics import tsaplots
%matplotlib inline
sns.set(rc={"figure.figsize": (15, 6)})
sns.set_palette(sns.color_palette("Set2", 10))
sns.set_style("whitegrid")
CO2_measurement = pd.read_csv('data/CO2_sensor_measurements.csv', sep='\t')
CO2_measurement['timestamp'] = pd.to_datetime(CO2_measurement['timestamp'])
CO2_measurement.set_index(['LocationName', 'SensorUnit_ID', 'timestamp'], inplace=True); | _____no_output_____ | MIT | Exploration_Py.ipynb | gregunz/TimeSeries2018 |
-> time serie | choosen_location = 'AJGR'
choosen_id = 1122
time_serie_dirty = CO2_measurement.loc[choosen_location].loc[choosen_id] | _____no_output_____ | MIT | Exploration_Py.ipynb | gregunz/TimeSeries2018 |
outlier detection | time_serie = time_serie_dirty.copy()
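# Mask rows where CO2 exceeds 380 as outliers (set to NaN), then interpolate the gaps and resample to hourly means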
time_serie[time_serie_dirty['CO2'] > 380] = np.nan
time_serie = time_serie.interpolate().resample('1H').mean()
time_serie_dif = time_serie.pct_change().dropna()
time_serie_dirty.plot()
plt.savefig('plots/raw_data.eps')
time_serie.plot()
plt.savefig('plots/raw_data_no_outliers.eps')
tsaplots.plot_acf(time_serie, lags=100)
plt.savefig('plots/raw_acf.eps')
time_serie_dif.plot()
plt.savefig('plots/dif_data.eps')
tsaplots.plot_acf(time_serie_dif, lags=100)
plt.savefig('plots/dif_acf.eps')
tsaplots.plot_pacf(time_serie_dif, lags=100)
plt.savefig('plots/dif_pacf.eps')
plt.show()
time_serie.to_csv('data/co2_ajgr.csv') | _____no_output_____ | MIT | Exploration_Py.ipynb | gregunz/TimeSeries2018 |
Homework 7: Kernel K-Means and EMThis homework is due on Thursday April 1,2021 Problem 1: Kernel K-MeansIn this exercise, we will consider how one may go about performing non-linear machine learning by adapting machine learning algorithms that we have discussed in class. We will discuss one particular approach that has been widely used throughout machine learning. Recall the discussion from lecture: we take our feature vectors $\boldsymbol{x}_1, ..., \boldsymbol{x}_n$ and apply a non-linear function $\phi$ to each point to yield $\phi(\boldsymbol{x}_1), ..., \phi(\boldsymbol{x}_n)$. Then, if we apply a linear machine learning algorithm (e.g., k-means or SVM) on the mapped data, the linear boundary in the mapped space will correspond to a non-linear boundary in the input space.We looked at one particular mapping in class. Consider a two-dimensional feature vector $\boldsymbol{x} = (x_1, x_2)^T$, and define the function $\phi$ as \begin{equation*}\phi(\boldsymbol{x}) = \left(\begin{array}{c}1 \\\sqrt{2} x_1 \\\sqrt{2} x_2 \\\sqrt{2} x_1 x_2\\x_1^2\\x_2^2\end{array} \right).\end{equation*}As discussed in class, the inner product $\phi(\boldsymbol{x}_i)^T \phi(\boldsymbol{x}_j)$ between two mapped vectors is equal to $(\boldsymbol{x}_i^T \boldsymbol{x}_j + 1)^2$; that is, one can compute the inner product between data points in the mapped space without explicitly forming the 6-dimensional mapped vectors for the data. Because applying such a mapping may be computationally expensive, this trick can allow us to run machine learning algorithms in the mapped space without explicitly forming the mappings. For instance, in a k-NN classifier, one must compute the (squared) Euclidean distance between a test point $\boldsymbol{x}_t$ and a training point $\boldsymbol{x}_i$. Expanding this distance out yields\begin{equation*}\|\boldsymbol{x}_t - \boldsymbol{x}_i\|^2_2 = (\boldsymbol{x}_t - \boldsymbol{x}_i)^T (\boldsymbol{x}_t - \boldsymbol{x}_i) = \boldsymbol{x}_t^T \boldsymbol{x}_t - 2 \boldsymbol{x}_t^T \boldsymbol{x}_i + \boldsymbol{x}_i^T \boldsymbol{x}_i.\end{equation*}Then, computing this distance after applying the mapping $\phi$ would be easy:\begin{equation*}\|\phi(\boldsymbol{x}_t) - \phi(\boldsymbol{x}_i)\|^2_2 = (\boldsymbol{x}_t^T \boldsymbol{x}_t + 1)^2 - 2 (\boldsymbol{x}_t^T \boldsymbol{x}_i + 1)^2 + (\boldsymbol{x}_i^T \boldsymbol{x}_i + 1)^2.\end{equation*}**a.** In the example above, the original feature vector was 2-dimensional. Show how to generalize the $\phi$ mapping to $d$-dimensional vector inputs such that the inner product between mapped vectors is $(\boldsymbol{x}_i^T \boldsymbol{x}_j + 1)^2$. Explicitly describe the embedding $\phi$; what dimensions does it have, and what values will it represent?**b.** Consider extending the k-means algorithm to discover non-linear boundaries using the above mapping. In the k-means algorithm, the assignment step involves computing $\|\boldsymbol{x}_i - \boldsymbol{\mu}_j\|_2^2$ for each point $\boldsymbol{x}_i$ and each cluster mean $\boldsymbol{\mu}_j$. Suppose we map the data via $\phi$. How would one compute the distance $\|\phi(\boldsymbol{x}_i) - \boldsymbol{\mu}_j\|^2_2$, where now $\boldsymbol{\mu}_j$ is the mean of the mapped data points? 
Be careful: one cannot simply compute\begin{equation*} (\boldsymbol{x}_i^T \boldsymbol{x}_i + 1)^2 - 2 (\boldsymbol{x}_i^T \boldsymbol{\mu}_j + 1)^2 + (\boldsymbol{\mu}_j^T \boldsymbol{\mu}_j + 1)^2.\end{equation*}**c.** Write out pseudocode for the extension of k-means where this mapping is applied to the data. In your algorithm, be careful not to ever explicitly compute $\phi(\boldsymbol{x}_i)$ for any data vector; *only work with inner products in the algorithm.***d.** With this new mapping, what properties will the decision surface have (i.e, what could it look like)? Why is this? A. - $\phi(Xi)^T*\phi(Xj) = (xi^Txj+1)^2 $- xi is a nX1 vector and so is xj - doing xi^Txj yeilds a 1x1 scalar - doing $\phi^T\phi$ yeilds a 1x1 scalar - $(xi^Txj+1)^2 =$-$ [|1|*|1| = 1 + $- $| \sqrt{2}xi1|*|\sqrt{2}xj1| = 2 xi1*xj1+$- $| \sqrt{2}xi2|*|\sqrt{2}xj2| = 2 xi2*xj2+$- $| \sqrt{2}xi1*xi2|*|\sqrt{2}xj1*xj2| = 2 xi1*xj1*xi2*xj2+$- $| xi1^2|*|xj1^2| = xi1^2*xj1^2+$- $| xi2^2|*|xj2^2| = xi2^2*xj2^2]$- $1+ 2x_{i1}x_{j1}+ 2x_{i2}x_{j2}+2 x_{i1}x_{j1}x_{i2}x_{j2}+x_{i1}^2x_{j1}^2+x_{i2}^2x_{j2}^2$ =- $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)^2$ - $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)(x_{i1}x_{j1}+x_{i2}x_{j2}+1)$ - $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)(x_{i1}x_{j1}+x_{i2}x_{j2})(x_{i1}x_{j1}+x_{i2}x_{j2})$ - $(x_{i1}x_{j1}+x_{i2}x_{j2}+1)(x_{i1}^2x_{j1}^2+x_{i2}x_{j2}x_{i1}x_{j1})(x_{i2}x_{j2})(x_{i2}x_{j2})$ - $(x_{i1}x_{j1}+x_{i2}x_{j2}+1+x_{i1}^2x_{j1}^2+x_{i2}x_{j2}x_{i1}x_{j1}+x_{i2}^2x_{j2}^2$ foil error but -$1+ 2x_{i1}x_{j1}+ 2x_{i2}x_{j2}+2 x_{i1}x_{j1}x_{i2}x_{j2}+x_{i1}^2x_{j1}^2+x_{i2}^2x_{j2}^2$ - the key here is to make the mapping of $\phi =$ the inner terms of a foil - so $\phi_d = $ \begin{equation*}\phi(\boldsymbol{x}) = \left(\begin{array}{c}1 \\\sqrt{d} x_1 \\\sqrt{d} x_2 \\...\\\sqrt{d} x_d \\\sqrt{d*(1)} x_1 x_2\\\sqrt{d*(1)} x_1 x_3\\...\\\sqrt{d*(1)} x_1 x_d\\\sqrt{d*(2)} x_1 x_2 x_3\\\sqrt{d*(2)} x_1 x_2 x_4\\...\\\sqrt{d*(2)} x_1 x_2 x_d\\...\\\sqrt{d*(d-1)} x_1 x_2 x_3...x_d\\x_1^d\\x_2^d\\...\\x_d^d \\\end{array} \right).\end{equation*}B. - so let $\mu_r$ be the average in the non-$\phi$(?) domain - $\mu^T\mu =$ scalar but also $ = ( \mu_r^T\mu_r+1)^2$ - $\sqrt{\mu^T\mu} = ( \mu_r^T\mu_r +1)$ - $\sqrt{\mu^T\mu}-1 = \mu_r$^T$\mu_r$ - $ \sqrt{\mu^T\mu}-1 $ - that was dumb - $||\phi(xi) - \mu||_2^2=(\phi(xi)^T\phi(xi)) -2(\phi(xi)^T\mu)+(\mu^t\mu) $ - *$ (xi^Txi+1)^2$ is quicker probably C.- set k random means in var M = which is kxd where d is the number of features - obj = -1000// kmeans objective function-current = 0 -thresholdval = 5 - labels =zeros(n) - 1-while abs(current- obj) =>thresholdval- - obj = KmeansObjectiveF(clusters, M, Data)- -for I in k: x = X(of indexs labels == I) M(I) = mean(x)- - for i in n data points: - - A = (Data(i).T@Data(i) +1)**2//scallar - - B = -2(Data(i).T*M.T+1)**2 //1xk - - C = (diag([email protected])+1)**2 // 1xk// a diagonal of the kxk matrix - - norms = A + B + C // 1Xk matrix - - minlabel = mina(A+B+C) - - labels(i) = minlabel- - current = KmeansObjectiveF(clusters, M, Data)D.The new decsion surface will be a hyper(?)-perabala of degree d. For instance if there were 3 different features it could be a parabolic decsion surface Problem 2: Expectation-Maximization (E-M) on Gaussian Mixtrue ModelAs you saw in lecture, the expectation-maximization algorithm is an iterative method to find maximum likelihood (ML) estimates of parameters in statistical models. 
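As a small illustration of parts (b) and (c) above, the sketch below computes the kernelized squared distances $\|\phi(x_i) - \mu_c\|^2$ using only kernel evaluations $K_{ij} = (x_i^T x_j + 1)^2$, never forming $\phi(x_i)$ explicitly. The function name, the toy data, and the fixed cluster assignment are illustrative choices, not part of the assignment.
import numpy as np

def kernel_kmeans_distances(X, assignments, k):
    """Squared distances ||phi(x_i) - mu_c||^2 computed from K[i, j] = (x_i^T x_j + 1)^2 only."""
    K = (X @ X.T + 1) ** 2                      # n x n kernel matrix
    n = X.shape[0]
    D = np.zeros((n, k))
    for c in range(k):
        members = np.where(assignments == c)[0]
        m = len(members)
        # ||phi(x_i) - mu_c||^2 = K_ii - (2/m) * sum_{j in c} K_ij + (1/m^2) * sum_{j,l in c} K_jl
        D[:, c] = (np.diag(K)
                   - 2.0 / m * K[:, members].sum(axis=1)
                   + K[np.ix_(members, members)].sum() / m ** 2)
    return D

# tiny usage example with made-up data and a fixed initial assignment
rng = np.random.RandomState(0)
X_demo = rng.randn(6, 2)
assign = np.array([0, 0, 0, 1, 1, 1])
print(kernel_kmeans_distances(X_demo, assign, k=2).argmin(axis=1))  # the re-assignment step of kernel k-means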
The E-M algorithm alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. This alternation repeats until convergence. The EM algorithm attempts to find maximum likelihood estimates for models with latent variables. Let X be the entire set of observed variables and Z the entire set of latent variables. Usually we can avoid a compicated expression for MLE when we introduce the latent variable $Z$. In this problem we will implement E-M algorithm for 2-d Gaussian Mixture. Let's first review the process from 1-d case. Assume we observe $x_1,...,x_n$ from one of $K$ mixture components. Each random variable $x_i$ is associated with a latent variable $z_i$, where $z_{i} \in\{1, \ldots, K\}$. The mixture weights are defined as $P\left(x_i\mid z_{i}=k\right) = \pi_k$, where $\sum_{k=1}^{K} \pi_{k}=1$. Take 1-d Gaussian Mixtrue Model as an example. We have the conditional distribution $x_{i} \mid z_{i}=k \sim N\left(\mu_{k}, \sigma_{k}^{2}\right)$. $N\left(\mu, \sigma^{2}\right)$ is the 1-d Gaussian distritbution with pdf $\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp -\frac{\left(x_{i}-\mu\right)^{2}}{2 \sigma^{2}}$. In this 1-d Gaussian case, the unknown parameter $\Theta$ includes $\pi, \mu, \sigma$. Then the expression of likelihood in termss of $\pi_k$, $\mu_k$ and $\sigma_k$ can be written as: $L\left(x_{1}, \ldots, x_{n}\mid\theta \right)=\prod_{i=1}^{n} \sum_{k=1}^{K} \pi_{k} N\left(x_{i} ; \mu_{k}, \sigma_{k}^{2}\right)$so the log-likelihood is :$\ell(\theta)=\sum_{i=1}^{n} \log \left(\sum_{k=1}^{K} \pi_{k} N\left(x_{i} ; \mu_{k}, \sigma_{k}^{2}\right)\right)$Then we can set the partial derivatives of the log-likelihood function over $\pi_k$, $\mu_k$ and $\sigma_k^2$ and set them to zero. Then solve the value of $\hat{\pi_k}$, $\hat{\mu_k}$ and $\hat{\sigma_{k}^{2}}$. When solving it, we set $P\left(z_{i}=k \mid x_{i}\right)=\frac{P\left(x_{i} \mid z_{i}=k\right) P\left(z_{i}=k\right)}{P\left(x_{i}\right)}=\frac{\pi_{k} N\left(\mu_{k}, \sigma_{k}^{2}\right)}{\sum_{k=1}^{K} \pi_{k} N\left(\mu_{k}, \sigma_{k}\right)}=\gamma_{z_{i}}(k)$ as a constant value. Set $N_{k}=\sum_{i=1}^{n} \gamma_{z_{i}}(k)$, we have the final expression:$$\hat{\mu_{k}}=\frac{\sum_{i=1}^{n} \gamma_{z_{i}}(k) x_{i}}{\sum_{i=1}^{n} \gamma_{z_{i}}(k)}=\frac{1}{N_{k}} \sum_{i=1}^{n} \gamma_{z_{i}}(k) x_{i}$$$$\hat{\sigma_{k}^{2}}=\frac{1}{N_{k}} \sum_{i=1}^{n} \gamma_{z_{i}}(k)\left(x_{i}-\mu_{k}\right)^{2}$$$$\hat{\pi}_{k}=\frac{N_{k}}{n}$$ Conclusion: we compute the one iteration of EM algorithm.1. E-step: Evaluate the posterior probabilities using the current values of the μk’s and σk’s with equation $P\left(z_{i}=k \mid x_{i}\right)=\frac{P\left(x_{i} \mid z_{i}=k\right) P\left(z_{i}=k\right)}{P\left(x_{i}\right)}=\frac{\pi_{k} N\left(\mu_{k}, \sigma_{k}^{2}\right)}{\sum_{k=1}^{K} \pi_{k} N\left(\mu_{k}, \sigma_{k}\right)}=\gamma_{z_{i}}(k)$2. M-step: Estimate new parameters $\hat{\pi_k}$, $\hat{\mu_k}$ and $\hat{\sigma_{k}^{2}}$. We would like you to perform E-M on a sample 2-d Gaussian mixture model (GMM). Doing this will allow you to prove that your algorithm works, since you already know the parameters of the model. And you will get an intuition from visualizations. Follow the instructions step by step below. | from matplotlib.patches import Ellipse
from scipy.special import logsumexp
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import math | _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
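Before the 2-d implementation below, here is a compact vectorized sketch of a single EM iteration for the 1-d Gaussian mixture derived above (illustrative only; the notebook's own loop-based 2-d implementation follows).
def em_step_1d(x, pi, mu, sigma2):
    """One EM iteration for a 1-d GMM. x has shape (n,); pi, mu, sigma2 have shape (K,)."""
    # E-step: responsibilities gamma[i, k] = P(z_i = k | x_i)
    dens = (np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2 / sigma2[None, :])
            / np.sqrt(2 * np.pi * sigma2[None, :]))
    gamma = pi[None, :] * dens
    gamma /= gamma.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters with the responsibilities held fixed
    Nk = gamma.sum(axis=0)
    mu_new = (gamma * x[:, None]).sum(axis=0) / Nk
    sigma2_new = (gamma * (x[:, None] - mu_new[None, :]) ** 2).sum(axis=0) / Nk
    pi_new = Nk / x.shape[0]
    return pi_new, mu_new, sigma2_new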
**Data creation.** Create 3 2D Gaussian clusters of data, with the following means and covariances:$\boldsymbol{\mu}_1 = [2,2]^T, \boldsymbol{\mu}_2 = [-2,0]^T, \boldsymbol{\mu}_3 = [0,-2]^T$,$\Sigma_1 = [[0.1,0];[0,0.1]]$, $\Sigma_2 = [[0.2,0];[0,0.2]]$, $\Sigma_3 = [[1,0.7];[0.7,1]]$ Create 50 points in each cluster and plot the data. The combination of these will serve as your Gaussian mixture model. This part is already given to you. | # Part a - data creation. This code is from the previous homework. You do not have to edit it.
num_pts = 50
np.random.seed(10)
Xa = np.random.multivariate_normal([2,2], [[0.1,0],[0,0.1]], num_pts)
Xb = np.random.multivariate_normal([-2,0], [[0.2,0],[0,0.2]], num_pts)
Xc = np.random.multivariate_normal([0,-2], [[1,0.7],[0.7,1]], num_pts)
# Concatenate clusters into one dataset
data = np.concatenate((Xa,Xb,Xc),axis=0)
print(data.shape)
# Plotting
plt.scatter(data[:,0], data[:,1], s=40, cmap='viridis');
ax = plt.gca()
ax.set_xlim([-5,5])
ax.set_ylim([-5,5])
plt.title('Multivariate Gaussian - 3 Variables')
plt.show() | (150, 2)
| MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
**Fill in the code to complete the EM algorithm given below.** Remember, the EM algorithm is given by a process similar to k-means/DP-means in nature, since it is iterative. However, the actual calculations done are very different. For a Gaussian mixture model, they are described by:*E-Step (Compute probabilities with given Gaussian parameters.* **This has already been completed for you.**)*M-Step (Update parameters. The subscript k denotes the parameter for a given cluster k, so this is calculated for each cluster.):*Similar from 1-d case\begin{equation*}n\_per\_cluster = \sum_{i=1}^{n\_points} \gamma_{z_{i}}(k)\end{equation*}\begin{equation*}\pi_k = \frac{n\_per\_cluster}{n\_points}\end{equation*}\begin{equation*}\mu_k = \frac{1}{n\_per\_cluster} * \sum_{i=1}^{n\_points} \gamma_{z_{i}}(k) * x_i \end{equation*}\begin{equation*}\Sigma_k = \frac{1}{n\_per\_cluster} * \sum_{i=1}^{n\_points} \gamma_{z_{i}}(k) * (x_i - \mu_k)(x_i - \mu_k)^T \end{equation*}*Repeat until convergence. To check for convergence, we check if the log-likelihood estimate is close enough to the previous estimate to stop the algorithm. To compute the log-likelihood estimate:*\begin{equation*}LL(\theta) = \sum_{i=1}^{n\_points} log \sum_{k=1}^{K} \pi_k * \frac{1}{2\pi|\Sigma_k|^\frac{1}{2}} exp(-0.5*(x_i - \mu_k)^T\Sigma_k^{-1}(x_i - \mu_k))\end{equation*}*Note that the "absolute value" signs around $\Sigma_j$ are actually indicative of the determinant of the covariance matrix. **In completing the algorithm below, you will complete the M-Step and the log-likelihood estimate. To compute the log-likelihood, we strongly recommend using `scipy.special.logsumexp`, as it is more numerically stable than manually computing.** |
def EStep(data, n_points, k, pi, mu, cov):
## Performs the expectation (E) step ##
## You do not need to edit this function (actually, please do not edit it..)
# The end result is an n_points x k matrix, where each element is the probability that
# the ith point will be in the jth cluster.
expectations = np.zeros((n_points, k)) # n_points x k np.array, where each row adds to 1
denominators = []
for i in np.arange(n_points):
denominator = 0
for j in np.arange(k):
# Calculate denominator, which is a sum over k
denominator_scale = pi[j] * 1/(2 * math.pi * np.sqrt(np.linalg.det(cov[j])))
denom = denominator_scale * np.exp(-0.5 * (data[i].reshape(2,1) - mu[j]).T @ np.linalg.inv(cov[j]) @ (data[i].reshape(2,1) - mu[j]))
denominator = np.add(denominator, denom)
denominator = np.asscalar(denominator)
denominators.append(denominator)
for i in np.arange(n_points):
numerator = 0
for j in np.arange(k):
# Calculate the numerator
numerator_scale = pi[j] * 1/(2 * math.pi * np.sqrt(np.linalg.det(cov[j])))
numer = np.exp(-0.5 * (data[i].reshape(2,1) - mu[j]).T @ np.linalg.inv(cov[j]) @ (data[i].reshape(2,1) - mu[j]))
numerator = numerator_scale * numer
# Set the probability of the ith point for the jth cluster
expectations[i][j] = numerator/denominators[i]
return expectations
def ExpectationMaximization_GMM(data, n_per_cluster, n_points, k, pi, mu, cov):
## Performs expectation-maximization iteratively until convergence is reached ##
# You do not need to edit this function.
converged = False
ML_estimate = 0
iteration = 0
while not converged:
iteration +=1
# E-Step: find probabilities
expectations = EStep(data, n_points, k, pi, mu, cov)
# M-Step: recompute parameters
n_per_cluster, pi, mu, cov = MStep(data, n_points, k, expectations)
# Plot the current parameters against the data
# Ignore this, it just makes it look nice using some cool properties of eigenvectors!
## PLOT CODE ##
lambda_1, v1 = np.linalg.eig(cov[0])
lambda_1 = np.sqrt(lambda_1)
lambda_2, v2 = np.linalg.eig(cov[1])
lambda_2 = np.sqrt(lambda_2)
lambda_3, v3 = np.linalg.eig(cov[2])
lambda_3 = np.sqrt(lambda_3)
# Plot data
fig, ax = plt.subplots(subplot_kw={'aspect': 'equal'})
# plt.plot(x_total,y_total,'x')
plt.scatter(data[:,0], data[:,1], s=40, cmap='viridis');
# Plot ellipses
ell1 = Ellipse(xy=(mu[0][0], mu[0][1]),
width=lambda_1[0]*3, height=lambda_1[1]*3,
angle=np.rad2deg(np.arccos(v1[0, 0])), linewidth=5, edgecolor='red', facecolor='none')
ax.add_artist(ell1)
ell2 = Ellipse(xy=(mu[1][0], mu[1][1]),
width=lambda_2[0]*3, height=lambda_2[1]*3,
angle=np.rad2deg(np.arccos(v2[0, 0])), linewidth=5, edgecolor='green', facecolor='none')
ax.add_artist(ell2)
ell3 = Ellipse(xy=(mu[2][0], mu[2][1]),
width=lambda_3[0]*3, height=lambda_3[1]*3,
angle=np.rad2deg(np.arccos(v3[0, 0])), linewidth=5, edgecolor='yellow', facecolor='none')
ax.add_artist(ell3)
axe = plt.gca()
axe.set_xlim([-5,5])
axe.set_ylim([-5,5])
plt.title('Multivariate Gaussian - 3 Variables')
plt.show()
## END PLOT CODE ##
# Check for convergence via log likelihood
old_ML_estimate = np.copy(ML_estimate)
ML_estimate = loglikelihood(data, n_points, k, pi, mu, cov)
if abs(old_ML_estimate - ML_estimate) < 0.01:
converged = 1
return mu, cov
| _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
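Since the log-likelihood step above recommends `scipy.special.logsumexp` for numerical stability, here is a tiny standalone comparison (the values are chosen only to force underflow in the naive computation):
a = np.array([-1000.0, -1001.0, -1002.0])
print(np.log(np.sum(np.exp(a))))  # -inf: the exponentials underflow to zero
print(logsumexp(a))               # about -999.59, computed stably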
**Perform EM on the GMM you created.** Put it all together! Run the completed EM function on your dataset. (This part is already done for you, just run it and see the output. The expected results are given to you) |
def MStep(data, n_points, k, expectations):
## Performs the maximization (M) step ##
# We clear the parameters completely, since we recompute them each time
mu = [np.zeros((2,1)) for _ in np.arange(k)] # 3 2x1 np.arrays in a list
cov = [np.zeros((2,2)) for _ in np.arange(k)] # 3 2x2 np.arrays in a list
n_per_cluster = [0, 0, 0]
pi = [0, 0, 0]
    ## Compute n_per_cluster, pi, mu and cov below from the responsibilities gamma_zi(k) stored in `expectations`
## YOUR CODE HERE ##
# print(k,expectations.shape)
n_per_cluster = np.sum(expectations,axis =0 )
# print(k,n_per_cluster.shape)
# Update number of points in each cluster
# Update mixing weights
pi = n_per_cluster/n_points
# print(n_points,pi.shape)
# Update means
# out should be a 1xk * a 1*k where you want the output to be 1*k
n,d = data.shape
interVecSum = np.zeros((d,expectations.shape[1]))
for i in range(n):
y = expectations[i,:]
x = data[i,:]
y.shape = (y.shape[0],1)
x.shape = (x.shape[0],1)
Res = ([email protected])
interVecSum = interVecSum + Res.T
# print("innershape=",interVecSum.shape)
outer = (1/n_per_cluster)
# print("innershape=",outer.shape)
    muNpy = outer*interVecSum  # scale the (d x k) weighted sums by 1/N_k to get the cluster means as columns
# print("innershape=",muNpy.shape)
# Update covariances
#covVecSum = np.zeros((d,expectations.shape[1]))
for i in range(n):
x = data[i,:]
x.shape = (x.shape[0],1)
mux = x - muNpy# should be 3x2 with the resulting diffs.
for j in range(k):
kmux = mux[:,j]
kmux.shape = (kmux.shape[0],1)
newCov = [email protected]
cov[j] = cov[j]+ newCov
cov1st_term =1/ n_per_cluster
for j in range(k):
cov[j] = cov[j]*cov1st_term[j]
mterm = muNpy[:,j]
mterm.shape = (mterm.shape[0],1)
mu[j] = mterm
# print(cov[0].shape)
n_per_cluster = list(n_per_cluster)
pi = list(pi)
## END YOUR CODE HERE ##
return n_per_cluster, pi, mu, cov
def loglikelihood(data, n_points, k, pi, mu, cov):
    # where a is the exponent and b is the weights
## Calculates ML estimate ##
likelihood = 0
scale = [] # When using logsumexp the scale is required to be in an array
exponents = [] # When using logsumexp the exponent is required to be in an array
## YOUR CODE HERE ##
logs = np.zeros((n_points,1))
# firstpart = (1/(2*math.pi))*pi*np.linalg.det(cov)
# eponentTerm = -0.5* (data-mu)[email protected](cov)@(data-mu)
# InnerProdMat = firstpart*np.exp(eponentTerm)
for i in range(n_points):
constant = (1/(2*math.pi))
b = np.zeros((k,1))
a = np.zeros((k,1))
x = data[i,:]
x.shape = (x.shape[0],1)
# print("x",x.shape)
for j in range(k):
b[j] = constant * pi[j]*np.linalg.det(cov[j])
invCov= np.linalg.inv(cov[j])
# print("invCov",invCov.shape)
xmu = x - mu[j]
# print("xmu",xmu.shape)# should be a 2x1
toBeExped = -0.5*xmu.T@invCov@xmu
# print("eX",toBeExped.shape)# should be a 2x1
a[j] = np.exp(toBeExped)
logsumvec = logsumexp(a, b=b)# all the individual points
logs[i] = logsumvec
# Compute the log-likelihood estimate
## END YOUR CODE HERE ##
l = np.sum(logs)
    # LL = sum_i log( sum_k pi_k * 1/(2*pi*sqrt(|Sigma_k|)) * exp(-0.5*(x_i - mu_k)^T Sigma_k^{-1} (x_i - mu_k)) )
likelihood = l; # should be a scalar
return likelihood
# Initialize total number of points (n), number of clusters (k),
# mixing weights (pi), means (mu) and covariance matrices (cov)
n_points = 150 # 150 points total
k = 3 # we know there are 3 clusters
mu = [(3 - (-3)) * np.random.rand(2,1) + (-3) for _ in np.arange(k)]
cov = [10 * np.identity(2) for _ in np.arange(k)]
n_per_cluster = [n_points/k for _ in np.arange(k)] # even split for now
pi = n_per_cluster
mu_estimate, cov_estimate = ExpectationMaximization_GMM(data, n_per_cluster, n_points, k, pi, mu, cov)
print("The estimates of the parameters of the Gaussians are: ")
print("Mu:", mu_estimate)
print("Covariance:", cov_estimate) | /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:20: DeprecationWarning: np.asscalar(a) is deprecated since NumPy v1.16, use a.item() instead
| MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
Problem 3: Comparison of K-Means and Gaussian MixtureWe would like you to perform K-Means and GMM for clustering using sklearn. In this problem, we can visualize the difference between these two algorithms. First, we generate some clustered data as follows. | import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
from sklearn.datasets.samples_generator import make_blobs
X, y_true = make_blobs(n_samples=400, centers=4,
cluster_std=0.60, random_state=0)
X = X[:, ::-1] # flip axes for better plotting
print(X.shape)
plt.scatter(X[:, 0], X[:, 1], c=y_true, s=40, cmap='viridis'); | /usr/local/lib/python3.7/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.datasets.samples_generator module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.datasets. Anything that cannot be imported from sklearn.datasets is now part of the private API.
warnings.warn(message, FutureWarning)
| MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
**a. Perform Kmeans and GMM on data X using built-in sklearn functions.**You can find the documentation for instantiating and fitting `sklearn`'s `Kmeans` [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html). Set `n_clusters=4` and `random_state=0`. | from sklearn.cluster import KMeans
### ADD CODE HERE:
# Instantiate KMeans instance.
# Fit the Kmeans with the data X.
# Use the KMeans model to predict the labels of X; the label IDs are arbitrary (unordered).
nClust = 4
randState = 0
kmeans = KMeans(n_clusters=nClust, random_state=randState).fit(X)
labels = kmeans.labels_
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis') | _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
You can find the documentation for instantiating and fitting `sklearn`'s `GMM` [here](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html). Set `n_components=4` and `random_state=0`. | from sklearn.mixture import GaussianMixture as GMM
### ADD CODE HERE:
# Instantiate GMM instance.
# Fit the GMM with the data X.
# Use the GMM to predict the labels of X; the label IDs are arbitrary (unordered).
gm = GMM(n_components=nClust, random_state=randState).fit(X)
labels = gm.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='viridis') | _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
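As a hedged aside (not part of the original exercise), it can be instructive to compare what the two fitted models actually learned: K-Means only stores centroids, while `GaussianMixture` also estimates per-component weights and covariances. These are standard attributes of the fitted sklearn objects:

```
# Inspect the fitted models (standard sklearn attributes)
print(kmeans.cluster_centers_)   # KMeans: one centroid per cluster
print(gm.weights_)               # GMM: mixing weight of each component
print(gm.means_)                 # GMM: mean of each Gaussian component
print(gm.covariances_)           # GMM: full covariance matrix of each component
```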
**b. Perform KMeans and GMM on data X_stretched using built-in sklearn functions.**First, we stretch the data by multiplying it with a random matrix. | rng = np.random.RandomState(13)
X_stretched = np.dot(X, rng.randn(2, 2)) | _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
Apply `KMeans` to `X_stretched`, again setting `n_clusters=4` and `random_state=0`. | from sklearn.cluster import KMeans
### ADD CODE HERE:
# Instantiate KMeans instance.
# Fit the KMeans with the data X_stretched.
# Use the KMeans model to predict the labels of X_stretched; the label IDs are arbitrary (unordered).
kmeans = KMeans(n_clusters=nClust, random_state=randState).fit(X_stretched)
labels = kmeans.labels_
plt.scatter(X_stretched[:, 0], X_stretched[:, 1], c=labels, s=50, cmap='viridis') | _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
Apply `GMM` to `X_stretched`, setting `n_components=4` and `random_state=0`. | from sklearn.mixture import GaussianMixture as GMM
### ADD CODE HERE:
# Instantiate GMM instance.
# Fit the GMM with the data X_stretched.
# Use the GMM to predict the labels of X_stretched; the label IDs are arbitrary (unordered).
gm = GMM(n_components=nClust, random_state=randState).fit(X_stretched)
labels = gm.predict(X_stretched)
plt.scatter(X_stretched[:, 0], X_stretched[:, 1], c=labels, s=50, cmap='viridis') | _____no_output_____ | MIT | ec414_Intro_to_machine_learning/Kernel_K-Means_and_EM_EC414_HW7.ipynb | pequode/class-projects |
Retirement ModelThis is a retirement model which models salary with both a constant growth rate for cost of living raises and regular salary increases for promotions. The model is broken up into the following sections:- [**Setup**](#Setup): Runs any imports and other setup- [**Inputs**](#Inputs): Defines the inputs for the model- [**Salaries**](#Salaries): Determining the salary in each year, considering cost of living raises and promotions- [**Wealths**](#Wealths): Determining the wealth in each year, considering a constant savings rate and investment rate- [**Retirement**](#Retirement): Determines years to retirement from the wealths over time, the main output from the model.- [**Results Summary**](#Results-Summary): Summarize the results with some visualizations SetupSetup for the later calculations is here. The necessary packages are imported. | from dataclasses import dataclass
import pandas as pd
%matplotlib inline | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
InputsAll of the inputs for the model are defined here. A class is constructed to manage the data, and an instance of the class containing the default inputs is created. | @dataclass
class ModelInputs:
starting_salary: int = 60000
promos_every_n_years: int = 5
cost_of_living_raise: float = 0.02
promo_raise: float = 0.15
savings_rate: float = 0.25
interest_rate: float = 0.05
desired_cash: int = 1500000
model_data = ModelInputs()
model_data | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
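Because the inputs live in a dataclass, an alternative scenario only needs to override the defaults of interest. The sketch below is a hypothetical illustration (the scenario values are made up and are not part of the original model):

```
# Hypothetical scenario: higher starting salary and a more aggressive savings rate
scenario_data = ModelInputs(starting_salary=70000, savings_rate=0.35)
scenario_data
```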
SalariesHere the salary for each year is calculated. We assume that the salary grows at a constant rate each year for cost of living raises, and then also every set number of years, the salary increases by a further percentage due to a promotion or switching jobs. Based on this assumption, the salary would evolve over time with the following equation:$$s_t = s_0 (1 + r_{cl})^t (1 + r_p)^p$$Where:- $s_t$: Salary at year $t$- $s_0$: Starting salary (year 0)- $r_{cl}$: Annual cost of living raise- $r_p$: Promotion raise- $p$: Number of promotionsAnd in Python format: | def salary_at_year(data: ModelInputs, year):
"""
Gets the salary at a given year from the start of the model based on cost of living raises and regular promotions.
"""
# Every n years we have a promotion, so dividing the years and taking out the decimals gets the number of promotions
num_promos = int(year / data.promos_every_n_years)
# This is the formula above implemented in Python
salary_t = data.starting_salary * (1 + data.cost_of_living_raise) ** year * (1 + data.promo_raise) ** num_promos
return salary_t | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
That function will get the salary at a given year, so to get all the salaries we just run it on each year. But we do not yet know how many years to run it for, since we need to keep running until the individual is able to retire. So we are just showing the first few salaries for now and will later use this function in the [Wealths](#Wealths) section of the model. | for i in range(6):
year = i + 1
salary = salary_at_year(model_data, year)
print(f'The salary at year {year} is ${salary:,.0f}.') | The salary at year 1 is $61,200.
The salary at year 2 is $62,424.
The salary at year 3 is $63,672.
The salary at year 4 is $64,946.
The salary at year 5 is $76,182.
The salary at year 6 is $77,705.
| MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
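As a quick check of the formula, year 5 with the default inputs includes exactly one promotion (`int(5 / 5) = 1`), so the salary is $60{,}000 \times 1.02^5 \times 1.15 \approx 76{,}182$, matching the printed output above.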
As expected, with the default inputs, the salary is increasing at 2% per year. Then at year 5, there is a promotion so there is a larger increase in salary. WealthsThe wealths portion of the model is concerned with applying the savings rate to the earned salary to calculate the cash saved, accumulating the cash saved over time, and applying the investment rate to the accumulated wealth.To calculate cash saved, it is simply:$$c_t = s_t * r_s$$Where:- $c_t$: Cash saved during year $t$- $r_s$: Savings rate | def cash_saved_during_year(data: ModelInputs, year):
"""
Calculated the cash saved within a given year, by first calculating the salary at that year then applying the
savings rate.
"""
salary = salary_at_year(data, year)
cash_saved = salary * data.savings_rate
return cash_saved | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
To get the wealth at each year, it is just applying the investment return to last year's wealth, then adding this year's cash saved:$$w_t = w_{t-1} (1 + r_i) + c_t$$Where:- $w_t$: Wealth at year $t$- $r_i$: Investment rate | def wealth_at_year(data: ModelInputs, year, prior_wealth):
"""
Calculate the accumulated wealth for a given year, based on previous wealth, the investment rate,
and cash saved during the year.
"""
cash_saved = cash_saved_during_year(data, year)
wealth = prior_wealth * (1 + data.interest_rate) + cash_saved
return wealth | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Again, just like in the [Salaries](Salaries) section, we can now get the output for each year, but we don't know ultimately how many years we will have to run it. That will be determined in the [Retirement](Retirement) section. So for now, just show the first few years of wealth accumulation: | prior_wealth = 0 # starting with no cash saved
for i in range(6):
year = i + 1
wealth = wealth_at_year(model_data, year, prior_wealth)
print(f'The wealth at year {year} is ${wealth:,.0f}.')
# Set next year's prior wealth to this year's wealth
prior_wealth = wealth | The wealth at year 1 is $15,300.
The wealth at year 2 is $31,671.
The wealth at year 3 is $49,173.
The wealth at year 4 is $67,868.
The wealth at year 5 is $90,307.
The wealth at year 6 is $114,248.
| MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
With default inputs, the wealth is going up by approximately 25% of the salary each year, plus a bit more for investment. Then in year 6 we see a substantially larger increase because the salary is substantially larger due to the promotion. So everything is looking correct. RetirementThis section of the model puts everything together to produce the final output of years to retirement. It uses the logic to get the wealths at each year, which in turn uses the logic to the get salary at each year. The wealth at each year is tracked over time until it hits the desired cash. Once the wealth hits the desired cash, the individual is able to retire so that year is returned as the years to retirement. | def years_to_retirement(data: ModelInputs):
# starting with no cash saved
prior_wealth = 0
wealth = 0
year = 0 # will become 1 on first loop
    print('Wealths over time:')
while wealth < data.desired_cash:
year = year + 1
        wealth = wealth_at_year(data, year, prior_wealth)
print(f'The wealth at year {year} is ${wealth:,.0f}.')
# Set next year's prior wealth to this year's wealth
prior_wealth = wealth
# Now we have exited the while loop, so wealth must be >= desired_cash. Whatever last year was set
# is the years to retirement.
print(f'\nRetirement:\nIt will take {year} years to retire.') # \n makes a blank line in the output.
return year | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
With the default inputs: | years = years_to_retirement(model_data) | Wealths over time:
The wealth at year 1 is $15,300.
The wealth at year 2 is $31,671.
The wealth at year 3 is $49,173.
The wealth at year 4 is $67,868.
The wealth at year 5 is $90,307.
The wealth at year 6 is $114,248.
The wealth at year 7 is $139,775.
The wealth at year 8 is $166,975.
The wealth at year 9 is $195,939.
The wealth at year 10 is $229,918.
The wealth at year 11 is $266,080.
The wealth at year 12 is $304,542.
The wealth at year 13 is $345,431.
The wealth at year 14 is $388,878.
The wealth at year 15 is $439,025.
The wealth at year 16 is $492,294.
The wealth at year 17 is $548,853.
The wealth at year 18 is $608,878.
The wealth at year 19 is $672,557.
The wealth at year 20 is $745,168.
The wealth at year 21 is $822,190.
The wealth at year 22 is $903,859.
The wealth at year 23 is $990,422.
The wealth at year 24 is $1,082,140.
The wealth at year 25 is $1,185,745.
The wealth at year 26 is $1,295,520.
The wealth at year 27 is $1,411,793.
The wealth at year 28 is $1,534,910.
Retirement:
It will take 28 years to retire.
| MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Results Summary Put Results in a TableNow I will visualize the salaries and wealths over time. First create a function which runs the model to put these results in a DataFrame. | def get_salaries_wealths_df(data):
"""
Runs the retirement model, collecting salary and wealth information year by year and storing
into a DataFrame for further analysis.
"""
# starting with no cash saved
prior_wealth = 0
wealth = 0
year = 0 # will become 1 on first loop
df_data_tups = []
while wealth < data.desired_cash:
year = year + 1
salary = salary_at_year(data, year)
        wealth = wealth_at_year(data, year, prior_wealth)
# Set next year's prior wealth to this year's wealth
prior_wealth = wealth
# Save the results in a tuple for later building the DataFrame
df_data_tups.append((year, salary, wealth))
# Now we have exited the while loop, so wealth must be >= desired_cash
# Now create the DataFrame
df = pd.DataFrame(df_data_tups, columns=['Year', 'Salary', 'Wealth'])
return df | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Also set up a function which formats the `DataFrame` for display. | def styled_salaries_wealths(df):
return df.style.format({
'Salary': '${:,.2f}',
'Wealth': '${:,.2f}'
}) | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Now call the function to save the results into the `DataFrame`. | df = get_salaries_wealths_df(model_data)
styled_salaries_wealths(df) | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Plot ResultsNow I will visualize the salaries and wealths over time. Salaries over Time | df.plot.line(x='Year', y='Salary') | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Wealths over Time | df.plot.line(x='Year', y='Wealth') | _____no_output_____ | MIT | docsrc/source/_static/Examples/Visualization/Python/Dynamic Salary Retirement Model Visualized.ipynb | whoopnip/fin-model-course |
Loading the dataset | italy_dataset = pd.read_csv("../datasets/it - feb 2021.csv")
# italy_dataset.head() | _____no_output_____ | Apache-2.0 | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft |
Logging into the domain | it = sy.login(email="[email protected]", password="changethis", port=8082) | Connecting to http://localhost:8082... done! Logging into italy... done!
| Apache-2.0 | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft |
Upload the dataset to Domain node | # Selecting a subset of the dataset
italy_dataset = italy_dataset[:40000]
# We will upload only the first 40k rows and three columns
# All three of these columns are of `int` type
sampled_italy_dataset = italy_dataset[["Trade Flow Code", "Partner Code", "Trade Value (US$)"]].values
# `.values` above already returns a numpy array; keep an explicitly named alias
sampled_italy_dataset_numpy = sampled_italy_dataset
# Convert the numpy array to a Tensor
italy_dataset_tensor = sy.Tensor(sampled_italy_dataset_numpy).tag('data2')
italy_dataset_tensor.public_shape = italy_dataset_tensor.shape
ptr = italy_dataset_tensor.share(it)
# it.load_dataset(
# assets={"Italy-Numpy-feb2020-Tensor": italy_dataset_tensor},
# name="Italy Trade Data - First 40000 rows",
# description="""A collection of reports from Italy's statistics
# bureau about how much it thinks it imports and exports from other countries.""",
# )
# it.datasets
# it_domain_node.store.pandas['object_type'].unique()
# it_domain_node.store.pandas[it_domain_node.store.pandas['object_type'] == "<class 'syft.core.tensor.tensor.Tensor'>"] | _____no_output_____ | Apache-2.0 | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft |
Create a Data Scientist User | it.users.create(
**{
"name": "Sheldon Cooper",
"email": "[email protected]",
"password": "bazinga",
"budget":10
}
) | _____no_output_____ | Apache-2.0 | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft |
Accept/Deny Requests to the Domain | # it_domain_node.requests
# it_domain_node.requests[-1].accept()
# it_domain_node.store.pandas[it_domain_node.store.pandas["object_type"] == "<class 'syft.core.tensor.smpc.share_tensor.ShareTensor'>"] | _____no_output_____ | Apache-2.0 | notebooks/trade_demo/smpc/bug_reproducing/Data Owner - Italy.ipynb | Noob-can-Compile/PySyft |
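The request-handling lines above are left commented out. As a hedged sketch only (it assumes the `.requests` interface shown in those commented lines is also available on the `it` client returned by `sy.login`), a data owner could review and approve an incoming request like this:

```
# Assumed API, mirroring the commented lines above but using the `it` client
it.requests                 # list pending data-access requests on the domain
it.requests[-1].accept()    # approve the most recent request
```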
Copyright 2018 The TF-Agents Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Train a Deep Q Network with TF-Agents View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Introduction This example shows how to train a [DQN (Deep Q Networks)](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) agent on the Cartpole environment using the TF-Agents library.It will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection.To run this code live, click the 'Run in Google Colab' link above. Setup If you haven't installed the following dependencies, run: | !sudo apt-get install -y xvfb ffmpeg
!pip install 'gym==0.10.11'
!pip install 'imageio==2.4.0'
!pip install PILLOW
!pip install 'pyglet==1.3.2'
!pip install pyvirtualdisplay
!pip install tf-agents
from __future__ import absolute_import, division, print_function
import base64
import imageio
import IPython
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
import pyvirtualdisplay
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.eval import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.networks import q_network
from tf_agents.policies import random_tf_policy
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.trajectories import trajectory
from tf_agents.utils import common
tf.compat.v1.enable_v2_behavior()
# Set up a virtual display for rendering OpenAI gym environments.
display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()
tf.version.VERSION | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Hyperparameters | num_iterations = 20000 # @param {type:"integer"}
initial_collect_steps = 100 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_max_length = 100000 # @param {type:"integer"}
batch_size = 64 # @param {type:"integer"}
learning_rate = 1e-3 # @param {type:"number"}
log_interval = 200 # @param {type:"integer"}
num_eval_episodes = 10 # @param {type:"integer"}
eval_interval = 1000 # @param {type:"integer"} | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
EnvironmentIn Reinforcement Learning (RL), an environment represents the task or problem to be solved. Standard environments can be created in TF-Agents using `tf_agents.environments` suites. TF-Agents has suites for loading environments from sources such as the OpenAI Gym, Atari, and DM Control.Load the CartPole environment from the OpenAI Gym suite. | env_name = 'CartPole-v0'
env = suite_gym.load(env_name) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
You can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up. | #@test {"skip": true}
env.reset()
PIL.Image.fromarray(env.render()) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
The `environment.step` method takes an `action` in the environment and returns a `TimeStep` tuple containing the next observation of the environment and the reward for the action.The `time_step_spec()` method returns the specification for the `TimeStep` tuple. Its `observation` attribute shows the shape of observations, the data types, and the ranges of allowed values. The `reward` attribute shows the same details for the reward. | print('Observation Spec:')
print(env.time_step_spec().observation)
print('Reward Spec:')
print(env.time_step_spec().reward) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
The `action_spec()` method returns the shape, data types, and allowed values of valid actions. | print('Action Spec:')
print(env.action_spec()) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
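The `time_step_spec()` and `action_spec()` outputs above describe the interface; as a brief hedged illustration of actually stepping the environment (this mirrors standard TF-Agents `PyEnvironment` usage, though this exact cell is not part of the excerpt):

```
# Reset the environment and apply a single action (for CartPole: 0 = push left, 1 = push right)
time_step = env.reset()
print('Time step:')
print(time_step)

action = np.array(1, dtype=np.int32)
next_time_step = env.step(action)
print('Next time step:')
print(next_time_step)
```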