path | concatenated_notebook
---|---
JupyterNotebooks/Welcome+to+Spark+with+Python.ipynb | ###Markdown
 Welcome to Apache Spark with Python> Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. - http://spark.apache.org/In this notebook, we'll train two classifiers to predict survivors in the [Titanic dataset](../edit/datasets/COUNT/titanic.csv). We'll use this classic machine learning problem as a brief introduction to using Apache Spark local mode in a notebook.
###Code
! env
import pyspark
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.tree import DecisionTree
###Output
_____no_output_____
###Markdown
First we create a [SparkContext](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext), the main object in the Spark API. This call may take a few seconds to return as it fires up a JVM under the covers.
###Code
sc = pyspark.SparkContext()
###Output
_____no_output_____
###Markdown
Sample the data We point the context at a CSV file on disk. The result is an [RDD](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds), not the content of the file. This is a Spark [transformation](http://spark.apache.org/docs/latest/programming-guide.html#transformations).
###Code
raw_rdd = sc.textFile("/usr/local/share/dsLab/datasets/titanic.csv")
###Output
_____no_output_____
###Markdown
We query the RDD for the number of lines in the file. The call here causes the file to be read and the result computed. This is a Spark [action](http://spark.apache.org/docs/latest/programming-guide.html#actions).
###Code
raw_rdd.count()
###Output
_____no_output_____
###Markdown
We query for the first five rows of the RDD. Even though the data is small, we shouldn't get into the habit of pulling the entire dataset into the notebook. Many datasets that we might want to work with using Spark will be much too large to fit in memory of a single machine. We take a random sample of the data rows to better understand the possible values.
###Code
raw_rdd.take(5)
###Output
_____no_output_____
###Markdown
We see a header row followed by a set of data rows. We filter out the header to define a new RDD containing only the data rows.
###Code
header = raw_rdd.first()
data_rdd = raw_rdd.filter(lambda line: line != header)
data_rdd.takeSample(False, 5, 0)
###Output
_____no_output_____
###Markdown
We see that the first value in every row is a passenger number. The next three values are the passenger attributes we might use to predict passenger survival: ticket class, age group, and gender. The final value is the survival ground truth. Create labeled points (i.e., feature vectors and ground truth) Now we define a function to turn the passenger attributes into structured `LabeledPoint` objects.
###Code
def row_to_labeled_point(line):
    '''
    Builds a LabeledPoint consisting of:

    survival (truth): 0=no, 1=yes
    ticket class: 0=1st class, 1=2nd class, 2=3rd class
    age group: 0=child, 1=adults
    gender: 0=man, 1=woman
    '''
    passenger_id, klass, age, sex, survived = [segs.strip('"') for segs in line.split(',')]
    klass = int(klass[0]) - 1
    if (age not in ['adults', 'child'] or
            sex not in ['man', 'women'] or
            survived not in ['yes', 'no']):
        raise RuntimeError('unknown value')
    features = [
        klass,
        (1 if age == 'adults' else 0),
        (1 if sex == 'women' else 0)
    ]
    return LabeledPoint(1 if survived == 'yes' else 0, features)
###Output
_____no_output_____
###Markdown
We apply the function to all rows.
###Code
labeled_points_rdd = data_rdd.map(row_to_labeled_point)
###Output
_____no_output_____
###Markdown
We take a random sample of the resulting points to inspect them.
###Code
labeled_points_rdd.takeSample(False, 5, 0)
###Output
_____no_output_____
###Markdown
Split for training and test We split the transformed data into a training (70%) and test set (30%), and print the total number of items in each segment.
###Code
training_rdd, test_rdd = labeled_points_rdd.randomSplit([0.7, 0.3], seed = 0)
training_count = training_rdd.count()
test_count = test_rdd.count()
training_count, test_count
###Output
_____no_output_____
###Markdown
Train and test a decision tree classifier Now we train a [DecisionTree](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.tree.DecisionTree) model. We specify that we're training a boolean classifier (i.e., there are two outcomes). We also specify that all of our features are categorical and the number of possible categories for each.
###Code
model = DecisionTree.trainClassifier(training_rdd,
numClasses=2,
categoricalFeaturesInfo={
0: 3,
1: 2,
2: 2
})
###Output
_____no_output_____
###Markdown
We now apply the trained model to the feature values in the test set to get the list of predicted outcomes.
###Code
predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))
###Output
_____no_output_____
###Markdown
We bundle our predictions with the ground truth outcome for each passenger in the test set.
###Code
truth_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)
###Output
_____no_output_____
###Markdown
Now we compute the test accuracy (% of predicted survival outcomes that match the actual outcomes) and display the decision tree for good measure.
###Code
accuracy = truth_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)
print('Accuracy =', accuracy)
print(model.toDebugString())
###Output
Accuracy = 0.7985074626865671
DecisionTreeModel classifier of depth 4 with 21 nodes
  If (feature 2 in {0.0})
   If (feature 1 in {0.0})
    If (feature 0 in {0.0,1.0})
     Predict: 1.0
    Else (feature 0 not in {0.0,1.0})
     Predict: 0.0
   Else (feature 1 not in {0.0})
    If (feature 0 in {1.0})
     Predict: 0.0
    Else (feature 0 not in {1.0})
     If (feature 0 in {0.0})
      Predict: 0.0
     Else (feature 0 not in {0.0})
      Predict: 0.0
  Else (feature 2 not in {0.0})
   If (feature 0 in {2.0})
    If (feature 1 in {0.0})
     Predict: 0.0
    Else (feature 1 not in {0.0})
     Predict: 0.0
   Else (feature 0 not in {2.0})
    If (feature 0 in {1.0})
     If (feature 1 in {0.0})
      Predict: 1.0
     Else (feature 1 not in {0.0})
      Predict: 1.0
    Else (feature 0 not in {1.0})
     If (feature 1 in {0.0})
      Predict: 1.0
     Else (feature 1 not in {0.0})
      Predict: 1.0
###Markdown
Train and test a logistic regression classifier For a simple comparison, we also train and test a [LogisticRegressionWithSGD](http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.classification.LogisticRegressionWithSGD) model.
###Code
model = LogisticRegressionWithSGD.train(training_rdd)
predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))
labels_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)
accuracy = labels_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)
print('Accuracy =', accuracy)
###Output
Accuracy = 0.7860696517412935
|
Tareas/Tarea 1/src/Actividad-3-1.ipynb | ###Markdown
 Oscar Esaú Peralta Rosales Task 1: Fundamentals of Text Mining
###Code
import csv
import math
import argparse
from collections import defaultdict
import numpy as np
import pandas as pd
from tqdm import tqdm
from nltk.corpus import CategorizedPlaintextCorpusReader
from nltk.tokenize import WordPunctTokenizer
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_recall_fscore_support, roc_auc_score
from sklearn import metrics, preprocessing
from sklearn import svm, datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.feature_selection import SelectKBest, chi2
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
 Activity 3: Aggressiveness Detection with Basic Sentiment Analysis 2.1 Experiments Part 1 Loading the data
###Code
mex_corpus = CategorizedPlaintextCorpusReader('./data/corpus/', r'.*\.txt', cat_pattern=r'(\w+)/*')
tk = TweetTokenizer()
stopw = stopwords.words('spanish') + stopwords.words('english')
x_train = [
[token for token in tk.tokenize(tweet) if token not in stopw and len(token) > 2]
for tweet in mex_corpus.raw('mex_train.txt').split('\n') if tweet
]
y_train = [int(label) for label in mex_corpus.raw('mex_train_labels.txt').split('\n') if label ]
x_val = [
[token for token in tk.tokenize(tweet) if token not in stopw and len(token) > 2]
for tweet in mex_corpus.raw('mex_val.txt').split('\n') if tweet
]
y_val = [int(label) for label in mex_corpus.raw('mex_val_labels.txt').split('\n') if label ]
###Output
_____no_output_____
###Markdown
1. Use the National Research Council Canada lexical resource "EmoLex" (https://www.saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm) to build a "Bag of Emotions" for the aggressiveness tweets (you must use the Spanish EmoLex). A simple strategy for this is to mask each word with its emotion and then build the Bag of Emotions.
###Code
file_name = './data/NRC-Emotion-Lexicon-v0.92-In105Languages-Nov2017Translations.xlsx'
df = pd.read_excel(file_name, usecols='CI,DB:DK')
df.head()
spzip = zip(np.array([x.lower() for x in np.array(df['Spanish (es)'])]),
np.array(df['Positive']),
np.array(df['Negative']),
np.array(df['Anger']),
np.array(df['Anticipation']),
np.array(df['Disgust']),
np.array(df['Fear']),
np.array(df['Joy']),
np.array(df['Sadness']),
np.array(df['Surprise']),
np.array(df['Trust']))
spanish_map = sorted(spzip, key=lambda item:item[0])
spanish_map[100:110]
def spanish_map_search(spanish_map, word):
    """Returns an array with the emotions for any word"""
    word = word.lower()
    i = 0
    j = len(spanish_map) - 1
    while i <= j:
        m = int((i + j) / 2)
        match = spanish_map[m][0].lower()
        if match == word:
            return np.array(spanish_map[m][1:])
        if word > match:
            i = m + 1
        else:
            j = m - 1
    return np.zeros(10)
spanish_map_search(spanish_map, 'absorbido')
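# Illustrative sketch of the masking strategy mentioned in the task statement: replace each
# token by the EmoLex emotions it activates. The helper name and example words are
# hypothetical; the emotion order follows the spzip columns built above.
EMOTIONS = ['Positive', 'Negative', 'Anger', 'Anticipation', 'Disgust',
            'Fear', 'Joy', 'Sadness', 'Surprise', 'Trust']

def mask_with_emotions(doc, spanish_map):
    """Replace each token by the names of the emotions it triggers in EmoLex."""
    masked = []
    for word in doc:
        flags = spanish_map_search(spanish_map, word)
        masked.extend(emo for emo, flag in zip(EMOTIONS, flags) if flag)
    return masked

mask_with_emotions(['muerte', 'feliz'], spanish_map)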
def build_emotions_bow(docs, spanish_map, emotions=10):
    """ Build an emotions bag """
    bow = np.zeros((len(docs), emotions), dtype=float)
    for index, doc in enumerate(tqdm(docs)):
        for word in doc:
            w_emotions = spanish_map_search(spanish_map, word)
            bow[index] += w_emotions
    return bow
bow = build_emotions_bow(x_train, spanish_map)
###Output
100%|██████████| 5544/5544 [00:00<00:00, 8669.30it/s]
###Markdown
2. Represent the documents and classify with SVM as in Class Practice 3. Evaluate several representations and include a comparison table as a summary (e.g., binary, frequency, tf-idf, etc.).
###Code
def classify(x_train, y_train, x_val, y_val, kbest=None):
    """ Classification with SVM, feature selection with chi2 """
    parameters = {'C': [.05, .12, .25, .5, 1, 2, 4]}
    if kbest:
        selectk = SelectKBest(chi2, k=kbest)
        selectk.fit(x_train, y_train)
        x_train = selectk.transform(x_train)
        x_val = selectk.transform(x_val)
    svr = svm.LinearSVC(class_weight='balanced')
    grid = GridSearchCV(estimator=svr, param_grid=parameters, n_jobs=8, scoring="f1_macro", cv=5)
    grid.fit(x_train, y_train)
    y_pred = grid.predict(x_val)
    p, r, f, _ = precision_recall_fscore_support(y_val, y_pred, average='macro', pos_label=None)
    a = accuracy_score(y_val, y_pred)
    print(confusion_matrix(y_val, y_pred))
    print(metrics.classification_report(y_val, y_pred))
    return p, r, f, a
metrics_hist = []
###Output
_____no_output_____
###Markdown
Binary bag of emotions
###Code
def build_binary_bow(emotions_bow):
    """ Build a binary emotions bow """
    bow = emotions_bow.copy()
    bow[bow > 0] = 1
    return bow
nx_train = build_binary_bow(build_emotions_bow(x_train, spanish_map))
nx_val = build_binary_bow(build_emotions_bow(x_val, spanish_map))
metrics_hist.append(("Bolsa de emociones binaria",
*classify(nx_train, y_train, nx_val, y_val, kbest=None)))
###Output
100%|██████████| 5544/5544 [00:00<00:00, 8327.33it/s]
100%|██████████| 616/616 [00:00<00:00, 9162.29it/s]
###Markdown
Frequency bag of emotions
###Code
def build_frecs_bow(emotions_bow, normalize=False):
    """ Build an emotions frequencies bow """
    # The bow already has the frequencies
    bow = emotions_bow.copy()
    if normalize:
        for row in bow:
            row /= np.linalg.norm(row) or 1.0
    return bow
nx_train = build_frecs_bow(build_emotions_bow(x_train, spanish_map))
nx_val = build_frecs_bow(build_emotions_bow(x_val, spanish_map))
metrics_hist.append(("Bolsa de emociones frecuencias",
*classify(nx_train, y_train, nx_val, y_val, kbest=None)))
###Output
100%|██████████| 5544/5544 [00:00<00:00, 8688.39it/s]
100%|██████████| 616/616 [00:00<00:00, 9414.03it/s]
###Markdown
Normalized-frequency bag of emotions
###Code
nx_train = build_frecs_bow(build_emotions_bow(x_train, spanish_map), normalize=True)
nx_val = build_frecs_bow(build_emotions_bow(x_val, spanish_map), normalize=True)
metrics_hist.append(("Bolsa de emociones frecuencias norm",
*classify(nx_train, y_train, nx_val, y_val, kbest=None)))
###Output
100%|██████████| 5544/5544 [00:00<00:00, 9044.83it/s]
100%|██████████| 616/616 [00:00<00:00, 9185.90it/s]
###Markdown
TF-IDF bag of emotions
###Code
def build_tfidf_bow(emotions_bows, normalize=False):
    """ Build an emotions tf-idf bow """
    bows = emotions_bows.copy()
    # Compute tf
    bows /= len(bows[0])
    # Compute idf
    ndocs_terms = np.sum(emotions_bows > 0, axis=0)
    zeros = np.where(ndocs_terms == 0)[0]
    ndocs_terms[zeros] = 1
    for bow in bows:
        bow *= np.log(emotions_bows.shape[0] / ndocs_terms)
        bow[zeros] = 0.0
        if normalize:
            bow /= np.linalg.norm(bow) or 1.0
    return bows
nx_train = build_tfidf_bow(build_emotions_bow(x_train, spanish_map))
nx_val = build_tfidf_bow(build_emotions_bow(x_val, spanish_map))
metrics_hist.append(("Bolsa de emociones tfidf",
*classify(nx_train, y_train, nx_val, y_val, kbest=None)))
###Output
100%|██████████| 5544/5544 [00:00<00:00, 7988.11it/s]
100%|██████████| 616/616 [00:00<00:00, 9006.33it/s]
###Markdown
Normalized TF-IDF bag of emotions
###Code
nx_train = build_tfidf_bow(build_emotions_bow(x_train, spanish_map), normalize=True)
nx_val = build_tfidf_bow(build_emotions_bow(x_val, spanish_map), normalize=True)
metrics_hist.append(("Bolsa de emociones tfidf norm",
*classify(nx_train, y_train, nx_val, y_val, kbest=None)))
###Output
100%|██████████| 5544/5544 [00:00<00:00, 8885.97it/s]
100%|██████████| 616/616 [00:00<00:00, 9012.08it/s]
###Markdown
Comparison table
###Code
dataset = pd.DataFrame(data=metrics_hist, columns = ['Embedding', 'Precision', 'Recall', 'Fscore', 'Accuracy'])
dataset
###Output
_____no_output_____ |
examples/01-filter/clipping.ipynb | ###Markdown
 Clipping with Planes & Boxes============================Clip/cut any dataset using planes or boxes.
###Code
# sphinx_gallery_thumbnail_number = 2
import pyvista as pv
from pyvista import examples
###Output
_____no_output_____
###Markdown
Clip with Plane===============Clip any dataset by a user defined plane using the`pyvista.DataSetFilters.clip`{.interpreted-text role="func"} filter
###Code
dataset = examples.download_bunny_coarse()
clipped = dataset.clip('y', invert=False)
p = pv.Plotter()
p.add_mesh(dataset, style='wireframe', color='blue', label='Input')
p.add_mesh(clipped, label='Clipped')
p.add_legend()
p.camera_position = [(0.24, 0.32, 0.7),
(0.02, 0.03, -0.02),
(-0.12, 0.93, -0.34)]
p.show()
###Output
_____no_output_____
###Markdown
Clip with Bounds================Clip any dataset by a set of XYZ bounds using the`pyvista.DataSetFilters.clip_box`{.interpreted-text role="func"} filter.
###Code
dataset = examples.download_office()
bounds = [2,4.5, 2,4.5, 1,3]
clipped = dataset.clip_box(bounds)
p = pv.Plotter()
p.add_mesh(dataset, style='wireframe', color='blue', label='Input')
p.add_mesh(clipped, label='Clipped')
p.add_legend()
p.show()
###Output
_____no_output_____
###Markdown
Clip with Rotated Box=====================Clip any dataset by an arbitrarily rotated solid box using the`pyvista.DataSetFilters.clip_box`{.interpreted-text role="func"} filter.
###Code
mesh = examples.load_airplane()
# Use `pv.Box()` or `pv.Cube()` to create a region of interest
roi = pv.Cube(center=(0.9e3, 0.2e3, mesh.center[2]),
x_length=500, y_length=500, z_length=500)
roi.rotate_z(33)
p = pv.Plotter()
p.add_mesh(roi, opacity=0.75, color="red")
p.add_mesh(mesh, opacity=0.5)
p.show()
###Output
_____no_output_____
###Markdown
Run the box clipping algorithm
###Code
extracted = mesh.clip_box(roi, invert=False)
p = pv.Plotter(shape=(1,2))
p.add_mesh(roi, opacity=0.75, color="red")
p.add_mesh(mesh)
p.subplot(0,1)
p.add_mesh(extracted)
p.add_mesh(roi, opacity=0.75, color="red")
p.link_views()
p.view_isometric()
p.show()
###Output
_____no_output_____
###Markdown
 Clipping with Planes & Boxes {clip_with_plane_box_example}============================Clip/cut any dataset using planes or boxes.
###Code
import pyvista as pv
from pyvista import examples
###Output
_____no_output_____
###Markdown
Clip with Plane===============Clip any dataset by a user defined plane using the`pyvista.DataSetFilters.clip`{.interpreted-text role="func"} filter
###Code
dataset = examples.download_bunny_coarse()
clipped = dataset.clip('y', invert=False)
p = pv.Plotter()
p.add_mesh(dataset, style='wireframe', color='blue', label='Input')
p.add_mesh(clipped, label='Clipped')
p.add_legend()
p.camera_position = [(0.24, 0.32, 0.7), (0.02, 0.03, -0.02), (-0.12, 0.93, -0.34)]
p.show()
###Output
_____no_output_____
###Markdown
Clip with Bounds================Clip any dataset by a set of XYZ bounds using the`pyvista.DataSetFilters.clip_box`{.interpreted-text role="func"} filter.
###Code
dataset = examples.download_office()
bounds = [2, 4.5, 2, 4.5, 1, 3]
clipped = dataset.clip_box(bounds)
p = pv.Plotter()
p.add_mesh(dataset, style='wireframe', color='blue', label='Input')
p.add_mesh(clipped, label='Clipped')
p.add_legend()
p.show()
###Output
_____no_output_____
###Markdown
Clip with Rotated Box=====================Clip any dataset by an arbitrarily rotated solid box using the`pyvista.DataSetFilters.clip_box`{.interpreted-text role="func"} filter.
###Code
mesh = examples.load_airplane()
# Use `pv.Box()` or `pv.Cube()` to create a region of interest
roi = pv.Cube(center=(0.9e3, 0.2e3, mesh.center[2]), x_length=500, y_length=500, z_length=500)
roi.rotate_z(33, inplace=True)
p = pv.Plotter()
p.add_mesh(roi, opacity=0.75, color="red")
p.add_mesh(mesh, opacity=0.5)
p.show()
###Output
_____no_output_____
###Markdown
Run the box clipping algorithm
###Code
extracted = mesh.clip_box(roi, invert=False)
p = pv.Plotter(shape=(1, 2))
p.add_mesh(roi, opacity=0.75, color="red")
p.add_mesh(mesh)
p.subplot(0, 1)
p.add_mesh(extracted)
p.add_mesh(roi, opacity=0.75, color="red")
p.link_views()
p.view_isometric()
p.show()
###Output
_____no_output_____
###Markdown
 Crinkled Clipping=================Crinkled clipping is useful if you don't want the clip filter to truly clip cells on the boundary, but want to preserve the input cell structure and to pass the entire cell on through the boundary. This option is available for `pyvista.DataSetFilters.clip`{.interpreted-text role="func"}, `pyvista.DataSetFilters.clip_box`{.interpreted-text role="func"}, and `pyvista.DataSetFilters.clip_surface`{.interpreted-text role="func"}, but not available when clipping by scalar in `pyvista.DataSetFilters.clip_scalar`{.interpreted-text role="func"}.
###Code
# Input mesh
mesh = pv.Wavelet()
###Output
_____no_output_____
###Markdown
Define clipping plane
###Code
normal = (1, 1, 1)
plane = pv.Plane(i_size=30, j_size=30, direction=normal)
###Output
_____no_output_____
###Markdown
Perform a standard clip
###Code
clipped = mesh.clip(normal=normal)
###Output
_____no_output_____
###Markdown
Perform a crinkled clip
###Code
crinkled = mesh.clip(normal=normal, crinkle=True)
###Output
_____no_output_____
###Markdown
Plot comparison
###Code
p = pv.Plotter(shape=(1, 2))
p.add_mesh(clipped, show_edges=True)
p.add_mesh(plane.extract_feature_edges(), color='r')
p.subplot(0, 1)
p.add_mesh(crinkled, show_edges=True)
p.add_mesh(plane.extract_feature_edges(), color='r')
p.link_views()
p.show()
###Output
_____no_output_____ |
module3-databackedassertions/LS_DS_114_Making_Data_backed_Assertions.ipynb | ###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
    # Desktop users
    time_on_site = random.uniform(10, 600)
    purchased = random.random() < 0.1 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, False))
for _ in range(750):
    # Mobile users
    time_on_site = random.uniform(5, 300)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
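# One way (a sketch): pass a list of columns as the second argument to crosstab, so the
# binned time on site is crossed against mobile *and* purchased at the same time.
pd.crosstab(time_bins, [user_data['mobile'], user_data['purchased']], normalize='all')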
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
import pandas as pd
import numpy as np
url = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
df = pd.read_csv(url, index_col = 0)
df.head(25)
df.info()
df.describe()
pd.crosstab(df['weight'], df['exercise_time'],normalize='columns').head()
pd.crosstab(df['age'],df['weight'],normalize='columns').head()
pd.crosstab(df['age'], df['exercise_time'],normalize='columns').head()
pd.crosstab(df['weight'],df['age'],normalize='columns').head()
pd.crosstab(df['exercise_time'],df['weight'],normalize='columns').head()
age_bins = pd.cut(df['age'], 5) # 5 equal-sized bins
weight_bins = pd.cut(df['weight'],5)
exercise_bins = pd.cut(df['exercise_time'],5)
pd.crosstab(age_bins,df['weight'],normalize='columns')
pd.crosstab(weight_bins,df['exercise_time'],normalize='columns')
pd.crosstab(exercise_bins, df['weight'], normalize='columns')
import matplotlib.pyplot as plt
import seaborn as sns
plt.scatter(df['age'],df['weight'])
sns.set_style("whitegrid")
plot = sns.lmplot(x="weight",y="exercise_time",data=df,aspect=4)
plot = (plot.set_axis_labels("Weight","Exercise").set(xlim=(100,250),ylim=(0,310)))
plt.title("Weight to Excerise Time ")
plt.show(plot)
sns.catplot(x='weight', y="exercise_time",data=df, aspect=8)
sns.pairplot(df, aspect=2);
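# A possible follow-up (illustrative sketch using the bins defined above): check the
# pairwise correlations, then look at weight vs. exercise_time within age strata to see
# whether age is confounding that relationship.
print(df[['age', 'weight', 'exercise_time']].corr())
pd.crosstab([age_bins, weight_bins], exercise_bins, normalize='index')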
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
    # Desktop users
    time_on_site = random.uniform(10, 600)
    purchased = random.random() < 0.1 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, False))
for _ in range(750):
    # Mobile users
    time_on_site = random.uniform(5, 300)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
###Output
_____no_output_____
###Markdown
###Code
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#Import dataframe
df = pd.read_csv('https://raw.githubusercontent.com/AdinDeis/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
#Drop redundant index column
del df['Unnamed: 0']
df.head()
df.corr()
!pip install pandas==0.23.4
age_bins = pd.cut(df['age'], 5)
exercise_bins = pd.cut(df['exercise_time'], 5)
weight_bins = pd.cut(df['weight'], 5)
pd.crosstab(weight_bins, exercise_bins)
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
    # Desktop users
    time_on_site = random.uniform(10, 600)
    purchased = random.random() < 0.1 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, False))
for _ in range(750):
    # Mobile users
    time_on_site = random.uniform(5, 300)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
# Import Dataset using pandas.read_csv()
import pandas as pd
people = pd.read_csv('https://raw.githubusercontent.com/ianforrest11/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
# Hypothesize that the more time one exercises per week, the less one weighs
# Import matplotlib.pyplot
import matplotlib.pyplot as plt
# Create scatter plot to confirm hypothesis
plt.scatter(people['exercise_time'], people['weight'])
plt.xlabel("Exercise Time (Minutes per Week)")
plt.ylabel("Weight (Pounds)")
plt.title("Exercise Time vs. Weight")
plt.show()
# Hypothesis that age is a confounding variable in this scenario, so looking up statistics on age portion of persons.csv dataset
# Age Mean, Age Count
age_mean = people['age'].mean()
print('Age mean:',age_mean)
# Age mean is 48, so adding boolean column to dataframe to indicate if person is over average age
people['over_48_yo?'] = people['age'] >= 48
# Adding boolean column to dataframe to indicate senior citizens (over 68 years old)
people['senior?'] = people['age'] >= 68
# Rearrange columns of dataframe to make more clear
people = people[['exercise_time','weight','age','over_48_yo?', 'senior?']]
# Separate Dataframe into two; sort by 'over_48_yo?' boolean
older_people = people.loc[people['over_48_yo?'] == True]
younger_people = people.loc[people['over_48_yo?'] == False]
senior_people = people.loc[people['senior?'] == True]
# Create scatter plot to confirm hypothesis for older people
plt.scatter(older_people['exercise_time'], older_people['weight'])
plt.xlabel("Exercise Time (Minutes per Week)")
plt.ylabel("Weight (Pounds)")
plt.title("Older People - Exercise Time vs. Weight")
plt.show()
# Create scatter plot to confirm hypothesis for younger people
plt.scatter(younger_people['exercise_time'], younger_people['weight'])
plt.xlabel("Exercise Time (Minutes per Week)")
plt.ylabel("Weight (Pounds)")
plt.title("Younger People - Exercise Time vs. Weight")
plt.show()
# Create scatter plot to confirm hypothesis for seniors
plt.scatter(senior_people['exercise_time'], senior_people['weight'])
plt.xlabel("Exercise Time (Minutes per Week)")
plt.ylabel("Weight (Pounds)")
plt.title("Seniors - Exercise Time vs. Weight")
plt.show()
###Output
_____no_output_____
###Markdown
Assignment questionsAfter you've worked on some code, answer the following questions in this text block:1. What are the variable types in the data? - **'Age' is an ordinal, discrete variable. 'Exercise_time' is an ordinal, discrete variable in this dataset. 'Weight' is an ordinal, discrete variable. 'Is over 48?' is a categorical, discrete variable. 'Senior?' is a categorical, discrete variable.**2. What are the relationships between the variables? - **An inverse relationship exists between exercise time and weight. The more time spent exercising, the lower an individual's weight.**3. Which relationships are "real", and which spurious? - **The relationship between weight and exercise time is 'real'. Age does not appear to have a spurious effect on this dataset.** Stretch goals and resourcesFollowing are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.- [Spurious Correlations](http://tylervigen.com/spurious-correlations)- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)Stretch goals:- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
###Code
# Spurious Correlation - Cheese Consumption vs Bedsheet Strangulation
# Create Dataframe
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
data =[['2000', 29.5,375],
['2001', 30.1,475],
['2002', 30.7,575],
['2003', 30.8,550],
['2004', 31.3,610],
['2005', 31.6,590],
['2006', 32.7,700],
['2007', 33.1,780],
['2008', 32.7,810],
['2009', 32.8,750]]
cheese_bed = pd.DataFrame(data, columns = ['Year', 'Cheese_Per_Capita', 'Bedsheet_Deaths'])
# Create individual Series
year = cheese_bed['Year']
cpc = cheese_bed['Cheese_Per_Capita']
bsd = cheese_bed['Bedsheet_Deaths']
# Create plot
# Create initial plot
fig, ax1 = plt.subplots()
# Create plot 1 - set color & x/y labels
color = 'tab:red'
ax1.set_xlabel('Year')
ax1.set_ylabel('Cheese Per Capita (lbs)', color=color)
# Create plot 1 - year vs cheese per capita
ax1.plot(year, cpc, color=color)
ax1.tick_params(axis='y', labelcolor=color)
# Indicate both lines will share same axis
ax2 = ax1.twinx()
# Create plot 2 - set color & y label (x label already set)
color = 'tab:blue'
ax2.set_ylabel('Bedsheet Deaths (individuals)', color=color)
# Create plot 2 - year vs bedsheet deaths
ax2.plot(year, bsd, color=color)
ax2.tick_params(axis='y', labelcolor=color)
# Add title, grid, and display graph
plt.title("Per-Capita Cheese Consumption & Bedsheet Deaths: 2000 - 2009")
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
    # Desktop users
    time_on_site = random.uniform(10, 600)
    purchased = random.random() < 0.1 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, False))
for _ in range(750):
    # Mobile users
    time_on_site = random.uniform(5, 300)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(time_bins, user_data['purchased'])
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
# First the imports (even though they were done above)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Let's grab the data from the provided persons csv
persons_url = 'https://raw.githubusercontent.com/alexkimxyz/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
persons_data = pd.read_csv(persons_url)
# We'll take a sample to see how it looks
persons_data.sample(10)
# It looks like the index is repeating as a data column.
# Let's fix that
# We'll use the reassign method rather than inplace
persons_data = persons_data.drop(persons_data.columns[0], axis=1)
# We'll take another sample to make sure we got it
persons_data.sample(10)
# Alright, now what kind of data are we dealing with
persons_data.dtypes
# That is SUPER convenient for us
# Now let's just make sure that the dataframe is clean
persons_data.dtypes.value_counts()
persons_data.isna().sum()
# GREAT!
# Knowing that we have all integers, lets do some visualizations
persons_data['age'].hist()
persons_data['weight'].hist()
persons_data['exercise_time'].hist()
# Now that we have an idea about how each attribute is distributed, lets do some comparisons
# First let's use visualizations
plt.bar(persons_data['age'], persons_data['weight'])
plt.bar(persons_data['age'], persons_data['exercise_time'])
plt.bar(persons_data['weight'], persons_data['exercise_time'])
# As per my breakout group, let's look at a pairplot
import seaborn as sns
sns.set(style='ticks', color_codes=True)
sns.pairplot(persons_data)
# We can see some trends in the data, but let's compare using crosstab
# Let's put age into 12 equal bins
age_bins = pd.cut(persons_data['age'], bins = 12)
pd.crosstab(age_bins, persons_data['weight'], normalize = 'all')
# Not very helpful
# Let's put weight into bins also
weight_bins = pd.cut(persons_data['weight'], bins = 14)
pd.crosstab(age_bins, weight_bins.astype(str), normalize='all')
#Let's do this for exercise time also
exercise_bins = pd.cut(persons_data['exercise_time'], bins = 15)
pd.crosstab(age_bins, exercise_bins.astype(str), normalize='all')
# weight vs exercise
pd.crosstab(weight_bins, exercise_bins.astype(str), normalize='all')
# And finally all 3
pd.crosstab(weight_bins, [age_bins.astype(str), exercise_bins.astype(str)], normalize='all')
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
    # Desktop users
    time_on_site = random.uniform(10, 600)
    purchased = random.random() < 0.1 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, False))
for _ in range(750):
    # Mobile users
    time_on_site = random.uniform(5, 300)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
import pandas as pd
import numpy as np
persons_data_url = 'https://raw.githubusercontent.com/eliza0shrug/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
df = pd.read_csv(persons_data_url)
print(df.shape)
df.head()
# age or weight better factor to predict exercise time?
# exercise time or age more deterministic of weight?
df_final = df.iloc[:, [1, 2, 3]]  # keep only the age, weight, and exercise_time columns
df_final.plot.scatter('age', 'exercise_time')
#this seems pretty useless
df_final.plot(kind='bar', stacked=True)
# this looks like a hopeless mess
import matplotlib.pyplot as plt
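# A possible next step (illustrative sketch; column names follow the persons.csv frame above):
# put all three variables on one plot by coloring a weight-vs-exercise_time scatter by age.
plt.scatter(df['exercise_time'], df['weight'], c=df['age'], cmap='viridis', alpha=0.5)
plt.colorbar(label='age (years)')
plt.xlabel('exercise time (minutes/week)')
plt.ylabel('weight (lbs)')
plt.title('Weight vs. exercise time, colored by age')
plt.show()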
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
import numpy as np
random.random()
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
    # Desktop users
    time_on_site = np.random.normal(700, 100)
    purchased = random.random() < 0.1 + (time_on_site / 1500) #Boolean based on probability and time on site
    users.append(User(purchased, time_on_site, False))
for _ in range(750):
    # Mobile users
    time_on_site = np.random.normal(200, 90)
    purchased = random.random() < 0.3 + (time_on_site / 1500)
    users.append(User(purchased, time_on_site, True))
random.shuffle(users) #unbias any streak of samples
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
user_data.sample(n=10)
user_data['time_on_site'].hist(bins=20)
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
#Fix unrealistic time on site
idx = user_data['time_on_site'] < 0
user_data.loc[idx,'time_on_site'] = np.NaN
#Or try the pandas '.where' method
user_data['time_on_site'].hist(bins=20)
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
#Always a quick look at data in buckets
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(time_bins,user_data['purchased'])
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins,user_data['purchased'], normalize='columns')
# What if we normalize by row
pd.crosstab(time_bins,user_data['purchased'], normalize='index')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
pd.crosstab(time_bins,[user_data['mobile'],user_data['purchased']], normalize='all')
df = pd.DataFrame({'a': np.arange(1e5), 'b':2*np.arange(1e5)})
#np.arange returns evenly spaced numbers in order. In the example above it goes 0, 1, 2, 3, ..., 1e5 - 1
df.head()
#%%time
# timeit magic command runs the command a certain 'loops' number of times and gives shortest out of 3
%%timeit
#apparently .apply command is slow compared to calculation below
df['c'] = df['a'].apply(lambda x: x**2)
#%%time
%%timeit
df['c2'] = df['a']**2
#Also, a single % is for a SINGLE line of code
df.head()
def get_temp_for_coord(x, y):
    "..."
    temp = 60
    return temp
#pseudo-code to describe what apply does. Shows that the apply method applies the calculation
# one element at a time, compared to code block 47 that does the calculation along the whole array
#df['temp'] = df.apply(lambda x: get_temp_for_coord(x['lat'], x['lon']), axis=1)
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
# I chose to use the raw URL.
df = pd.read_csv('https://raw.githubusercontent.com/martinclehman/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
df.head()
df.drop(columns=['Unnamed: 0'], inplace=True)
#or df = df.dropp(columns=['Unnamed: 0'])
df.head()
#Alternative way if you want to drop the index column and save to local collab
#df.to_csv("path_to_my_csv_file.csv", index=False)
#!ls
#!head path_to_my_csv_file.csv
df.dtypes.value_counts()
import seaborn as sns
sns.pairplot(df)
print('Correlation: age vs weight\n', np.corrcoef(df['age'],df['weight']),'\n')
print('Correlation: age vs exercise_time\n', np.corrcoef(df['age'],df['exercise_time']),'\n')
print('Correlation: weight vs exercise_time\n', np.corrcoef(df['weight'],df['exercise_time']))
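# A quicker way to see all pairwise correlations at once (a small added sketch using pandas' built-in .corr)
print(df.corr())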
###Output
Correlation: age vs weight
[[1. 0.14416819]
[0.14416819 1. ]]
Correlation: age vs exercise_time
[[ 1. -0.27381804]
[-0.27381804 1. ]]
Correlation: weight vs exercise_time
[[ 1. -0.47802133]
[-0.47802133 1. ]]
###Markdown
Assignment questionsAfter you've worked on some code, answer the following questions in this text block:1. What are the variable types in the data?> All datatypes are 64-bit integers.2. What are the relationships between the variables?> There's a very small, positive correlation (0.14) between age and weight. There's a small negative correlation (-0.27) between age and exercise time. There's a moderate negative correlation (-0.48) between weight and exercise time. 3. Which relationships are "real", and which spurious?> All relationships are spurious. They are all affected by a separate, confounding variable related to body composition. An older or heavier person tends to have a lower muscle-to-fat ratio, which constrains physical ability and, in turn, exercise time. Stretch goals and resourcesFollowing are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.- [Spurious Correlations](http://tylervigen.com/spurious-correlations)- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)Stretch goals:- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
###Code
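# Stretch-goal sketch: one common way to 'control for' a confounding variable is to
# regress the confounder out of each variable with scikit-learn and correlate the residuals.
# Illustrative only, assuming df still holds the age / weight / exercise_time columns loaded above.
from sklearn.linear_model import LinearRegression
import numpy as np
confounder = df[['exercise_time']]  # treat exercise time as the suspected confounder
resid_age = df['age'] - LinearRegression().fit(confounder, df['age']).predict(confounder)
resid_weight = df['weight'] - LinearRegression().fit(confounder, df['weight']).predict(confounder)
print('Age vs. weight after controlling for exercise time:\n', np.corrcoef(resid_age, resid_weight))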
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Rectangle
import pandas as pd
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile']) # This is a tuple, meaning that the values inside are immutable
example_user = User(False, 12, False)
print(example_user)
example_user.time_on_site = 30 # Just an example to help me understand tuples -- this raises AttributeError because namedtuple fields can't be reassigned (the value stays 12, set above)
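# Since namedtuple fields are immutable, the idiomatic way to 'change' one is _replace,
# which returns a new tuple (small sketch added for illustration)
updated_user = example_user._replace(time_on_site=30)
print(updated_user)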
import numpy as np
np.random.normal(10,2)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
# time_on_site = random.uniform(10, 600) # Generates random float between 10 and 600
time_on_site = np.random.normal(9.4 * 60, 3 * 60) # Changed to np.random.normal because it will simulate normal distribution
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
# time_on_site = random.uniform(5, 300)
  time_on_site = np.random.normal(7.5 * 60, 2.5 * 60) # Changed to 7.5 minutes (and 9.4 for desktop) by checking means on a website -- the stdev is just a guess
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
users[:10]
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
user_data[user_data['time_on_site'] < 0.0] # Only one value ended up being below 0 seconds (-11 seconds) -- In real life, you'd probably just drop it
user_data.loc[user_data['time_on_site'] < 0.0, 'time_on_site'] = 0.0 # This sets the value to 0 (before, it was below 0)
user_data[user_data['time_on_site'] < 0.0] # And now we check if it worked (that there are no longer values below 0 -- below should show no rows)
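# An equivalent approach (sketch): Series.clip floors negative values at 0 in one step
user_data['time_on_site'] = user_data['time_on_site'].clip(lower=0)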
user_data['time_on_site_min'] = user_data['time_on_site'] / 60
user_data.head()
user_data.groupby('mobile').time_on_site_min.hist(bins=30, alpha=0.5, figsize=(20,10))
plt.title('Time on Site by Source\n(Desktop vs. Mobile in Minutes)')
plt.xlabel('Time in Minutes')
plt.ylabel('Count')
plt.legend(['Desktop','Mobile']);
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site_min'], 5) # 5 equal-sized bins
pd.crosstab(time_bins, user_data['purchased'])
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='columns')
df = pd.DataFrame({'a': np.arange(1e6),
'b': 2 * np.arange(1e6)})
print(df.shape)
df.head()
%timeit df['c'] = df['a']**2
%timeit df['c2'] = df['a'].apply(lambda x: x**2)
%%time
df = pd.read_csv('co_denver_2019_02_25.csv')
df.head()
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
col_headers = ['ID', 'Age', 'Weight', 'Exercise Minutes']
df = pd.read_csv('https://raw.githubusercontent.com/andrewwhite5/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv',
header=None, names=col_headers, skiprows=1)
print(df.shape)
df.head(10)
df.isna().sum()
df.describe()
exercise_bins = pd.cut(df['Exercise Minutes'], 5) # 5 bins of exercise time to compare with weight
pd.crosstab(exercise_bins, df['Weight'], normalize='columns')
weight_bins = pd.cut(df['Weight'], 5)
pd.crosstab(weight_bins, df['Exercise Minutes'], normalize='columns')
pd.crosstab([weight_bins, exercise_bins], df['Age'], normalize='columns')
import seaborn as sns
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
train, test = train_test_split(df.copy(), random_state=0)
train.shape, test.shape
features = ['Weight']
target = 'Exercise Minutes'
def error():
y_true = train[target]
y_pred = model.predict(train[features])
train_error = mean_absolute_error(y_true, y_pred)
y_true = test[target]
y_pred = model.predict(test[features])
test_error = mean_absolute_error(y_true, y_pred)
print('Train Error: ', round(train_error))
print('Test Error: ', round(test_error))
model = LinearRegression()
model.fit(train[features], train[target])
error()
model = LinearRegression()
weight = [[w] for w in range(300)]  # sklearn expects 2D input for predict
model.fit(train[features], train[target])
predictions = model.predict(weight)
train.plot.scatter(x='Weight', y='Exercise Minutes', s=50)  # axes match the fit: weight -> exercise minutes
plt.plot(weight, predictions)
plt.title('Linear Regression');
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
import numpy as np
np.random.normal(10,2)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
# time_on_site = random.uniform(10, 600)
time_on_site = np.random.normal(9.4*60, 3*60)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
# time_on_site = random.uniform(5, 300)
time_on_site = np.random.normal(7.5*60, 2.5*60)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
user_data['time_on_site_min'] = user_data['time_on_site']/60
user_data.head()
import matplotlib.pyplot as plt
num_bins = 10
plt.hist(user_data['time_on_site'], num_bins, facecolor='blue', alpha=0.5)
user_data.groupby('mobile').time_on_site_min.hist(bins=20, alpha=0.5, figsize=(10,6));
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site_min'], 5) # 5 equal-sized bins
pd.crosstab(time_bins, user_data['purchased'], normalize='index')
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='columns')
###Output
_____no_output_____
###Markdown
Stanford Open Police Project
###Code
%%time
import pandas as pd
nj_data = pd.read_csv('nj_statewide_2019_02_25.csv')
print(nj_data.shape)
nj_data.head()
nj_data.isna().sum()
nj_data.violation.value_counts().head(10)
nj_data.vehicle_make.value_counts(normalize=True).head(10)
nj_data[nj_data.violation == '39:4-98 RATES OF SPEED'].vehicle_make.value_counts(normalize=True).head(10)
nj_data[nj_data.violation == '39:4-98 RATES OF SPEED'].vehicle_color.value_counts(normalize=True).head(10)
###Output
_____no_output_____
###Markdown
Use %%timeit to optimize code, and import tqdm for progress bars. Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# Import the person.csv dataset
import pandas as pd
person_data = pd.read_csv('https://raw.githubusercontent.com/JimKing100/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv', index_col=0)
person_data.head()
# Let's take a look at the data using crosstab
age_bins = pd.cut(person_data['age'], 10) # 10 equal-sized bins
pd.crosstab(age_bins, person_data['weight'])
# Let's plot a histogram of the data to look for patterns - weight first
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
num_bins = 10
plt.hist(person_data['weight'], num_bins, facecolor='blue', alpha=0.5)
plt.xlabel('Weight')
plt.ylabel('Count')
plt.show()
# Let's plot a histogram of the data to look for patterns - age next
num_bins = 10
plt.hist(person_data['age'], num_bins, facecolor='blue', alpha=0.5)
plt.xlabel('Age')
plt.ylabel('Count')
plt.show()
# Let's plot a histogram of the data to look for patterns - exercise time
num_bins = 10
plt.hist(person_data['exercise_time'], num_bins, facecolor='blue', alpha=0.5)
plt.xlabel('Exercise Time')
plt.ylabel('Count')
plt.show()
# A scatter plot should show if a trend exists - check age and weight
plt.scatter(person_data['age'], person_data['weight'], alpha=0.5)
plt.title('Scatter plot')
plt.xlabel('Age')
plt.ylabel('Weight')
plt.show()
# A scatter plot should show if a trend exists - check age and exercise time
plt.scatter(person_data['age'], person_data['exercise_time'], alpha=0.5)
plt.title('Scatter plot')
plt.xlabel('Age')
plt.ylabel('Exercise Time')
plt.show()
# A scatter plot should show if a trend exists - exercise time and weight
plt.scatter(person_data['exercise_time'], person_data['weight'], alpha=0.5)
plt.title('Scatter plot')
plt.xlabel('Exercise Time')
plt.ylabel('Weight')
plt.show()
###Output
_____no_output_____
###Markdown
Assignment questionsAfter you've worked on some code, answer the following questions in this text block:1. What are the variable types in the data?They are all integer data types.2. What are the relationships between the variables?Age and Weight are randomly distributed, no specific relationship appears to exist.Exercise Time does appear to decline starting around 60 years old.Weight does appear to fall with an increase in Exercise Time.3. Which relationships are "real", and which spurious?Exercise Time and Age are "real",Weight and Exercise Time are "real",Age and Weight are spurious Stretch goals and resourcesFollowing are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.- [Spurious Correlations](http://tylervigen.com/spurious-correlations)- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)Stretch goals:- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
###Code
# See the last cell for the Spurious Correlation
# Picked linear regression to plot the relationship between Exercise Time and Weight
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.linear_model import LinearRegression
# Use linear regression model
model = LinearRegression()
# Show scatter plot of data
plt.scatter(person_data['exercise_time'], person_data['weight'], alpha=0.5)
plt.title('Scatter plot')
plt.xlabel('Exercise Time')
plt.ylabel('Weight')
plt.show()
# Show actual data
person_data.head()
# Run the linear regreassion
from sklearn.linear_model import LinearRegression
features = ['exercise_time'] # Set features to exercise time
target = 'weight' # Set target to weight
model = LinearRegression()
model.fit(person_data[features], person_data[target])
# create values to predict using the exercise range
exercise_time = [[w] for w in range(0, 300)]
# make predictions based on linear regression model
predictions = model.predict(exercise_time)
# graph it all
plt.scatter(person_data['exercise_time'], person_data['weight'], alpha=0.5)
plt.plot(exercise_time, predictions)
plt.title('Linear Regression')
plt.xlabel('Exercise Time')
plt.ylabel('Weight')
plt.show()
# Show the y = -.19x + 180 linear regression line values
model.coef_, model.intercept_
# Import the auto-mpg.csv dataset for a spurious correlation
import pandas as pd
car_data = pd.read_csv('auto-mpg.csv')
car_data.head()
###Output
_____no_output_____
###Markdown
###Code
convert_dict = {'mpg': int}
car_data = car_data.astype(convert_dict)
car_data.head()
# Run the linear regression
from sklearn.linear_model import LinearRegression
features = ['horsepower'] # Set features to horsepower
target = 'mpg' # Set target to mileage
model = LinearRegression()
model.fit(car_data[features], car_data[target])
# create values to predict using the horsepower range
horsepower_values = [[w] for w in range(50, 350)]
# make predictions based on linear regression model
predictions = model.predict(horsepower_values)
# graph it all
plt.scatter(car_data['horsepower'], car_data['mpg'], alpha=0.5)
plt.plot(horsepower_values, predictions)
plt.title('Linear Regression')
plt.xlabel('Horsepower')
plt.ylabel('Mileage')
plt.show()
# Show the slope and intercept of the horsepower vs. mpg regression line
model.coef_, model.intercept_
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
from collections import namedtuple
User = namedtuple('User', ['purchased', 'time_on_site', 'mobile'])
example_user = User(False, 12, False)
example_user
example_user.time_on_site = 30  # raises AttributeError -- namedtuple fields are immutable
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes (600 sec) on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
# time_on_site = random.uniform(10, 600)
time_on_site = np.random.normal(9.4*60, 3*60) # based on ecommerce data from 2016
  purchased = random.random() < 0.1 + (time_on_site / 1500) # +1% per 15 seconds: (time_on_site / 15) * 0.01 = time_on_site / 1500
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
# time_on_site = random.uniform(5, 300)
time_on_site = np.random.normal(7.5*60, 2.5*60)# based on ecommerce data from 2016
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users) # shuffling is just good practice, especially for predictive modeling
users[:10]
import numpy as np
np.random.normal()
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
user_data['time_on_site_min'] = user_data['time_on_site'] / 60
user_data.head()
ax = user_data.time_on_site.hist()  # a single combined histogram, so the legend labels below don't really apply
ax.legend(['Desktop', 'Mobile']);   # the groupby version in the next cell splits by device properly
import matplotlib.pyplot as plt
user_data.groupby('mobile').time_on_site_min.hist(bins=20, alpha=0.5, figsize=(10,6))
plt.title('Time on Site by Source (Desktop vs. Mobile in Minutes)')
plt.ylabel('Count')
plt.xlabel('Time in Minutes')
plt.legend(['Desktop', 'Mobile']);
user_data[user_data['time_on_site'] < 0.0]
user_data.loc[user_data['time_on_site'] < 0.0, 'time_on_site'] = 0.0
user_data[user_data['time_on_site'] < 0.0]
help(pd.crosstab)
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site_min'], 5) # 5 equal-sized bins
pd.crosstab(time_bins, user_data['purchased'])
# above we haven't taken into account mobile or desktop
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='columns')
###Output
_____no_output_____
###Markdown
Stanford Open Police Project
###Code
!unzip tr137st9964_tx_austin_2019_02_25.csv.zip
!ls
%%time
df = pd.read_csv('./share/data/opp-for-archive/tx_austin_2019_02_25.csv')
print(df.shape)
df.head()
df.isna().sum()
df['type'].value_counts()
df['vehicle_make'].value_counts(normalize=True).head(10) #normalize gives a percent, head(10) shows us the top 10
df[df['vehicle_make'] == 'FORD']['subject_sex'].value_counts()
df['reason_for_stop'].value_counts()
df = pd.DataFrame({'a': np.arange(1e6),
'b': 2*np.arange(1e6)})
print(df.shape)
df.head()
%timeit df['c'] = df['a'] ** 2
%timeit df['c2'] = df['a'].apply(lambda x: x**2)
# if you were working with a larger data set, this time
# difference would be much more impactful
from tqdm import tqdm # this gives you a progress bar functionality!
tqdm.pandas()
%timeit df['c2'] = df['a'].progress_apply(lambda x: x**2)
df.head()
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
!pip install pandas==0.23.4
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
df = pd.read_csv('https://raw.githubusercontent.com/lechemrc/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv', )
print(df.shape)
df.head()
df.dtypes
age_bins = pd.cut(df['age'], 5)
weight_bins = pd.cut(df['weight'], 5)
exercise_bins = pd.cut(df['exercise_time'], 5)
pd.crosstab(weight_bins, df['exercise_time'])
# I've tried a few different combinations and I'm not totally sure what it's
# showing me here. I'll try graphing a few things to see if I can make sense
# of the data here
pd.crosstab(weight_bins, age_bins)
df['weight'].hist(bins=50);
plt.show(block=False)
df['age'].hist(bins=50);
plt.show(block=False)
df['exercise_time'].hist(bins=50)
df.plot.scatter(x='weight', y='exercise_time', title='Weight and Exercise Time',
color='orange', alpha = 0.5);
df.plot.scatter(x='exercise_time', y='weight', title='Weight and Exercise Time',
color='b', alpha = 0.5);
# It seems like there's no relationship until the weight starts
# to climb a little higher. Then exercise time starts to lessen the more
# weight the person has. Perhaps we can infer that exercising is either harder
# for them, or they just don't want to or don't like to exercise
df.plot.scatter(x='age', y='exercise_time', title='Age and Exercise Time',
color = 'orange', alpha = 0.5);
df.plot.scatter(x='exercise_time', y='age', title='Age and Exercise Time',
color = 'b', alpha = 0.5);
# similar trend here... the older they get, the less they exercise
df.plot.scatter(x='age', y='weight', title='Age and Weight',
color = 'orange', alpha = 0.5);
# There doesn't seem to be any correlation here at all.
heat = [df['age'], df['weight'], df['exercise_time']]
sns.heatmap(heat)
plt.plot()
# not super helpful
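# A more typical use of a heatmap (small added sketch): plot the correlation matrix instead of raw values
sns.heatmap(df[['age', 'weight', 'exercise_time']].corr(), annot=True, cmap='Oranges')
plt.title('Correlation matrix')
plt.show()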
from mpl_toolkits import mplot3d
ax = plt.axes(projection="3d")
xdata = df['age']
ydata = df['weight']
zdata = df['exercise_time']
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Oranges')
ax.set_title('Age, Weight, and Exercise Time')
ax.set_xlabel('Age')
ax.set_ylabel('Weight')
ax.set_zlabel('Exercise Time')
ax = plt.axes(projection='3d')
xdata = df['weight']
ydata = df['age']
zdata = df['exercise_time']
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Oranges')
ax.set_title('Weight, Age, and Exercise Time')
ax.set_xlabel('Weight')
ax.set_ylabel('Age')
ax.set_zlabel('Exercise_Time')
ax = plt.axes(projection='3d')
xdata = df['exercise_time']
ydata = df['age']
zdata = df['weight']
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Oranges')
ax.set_title('Exercise Time, Age, and Weight')
ax.set_xlabel('Exercise Time')
ax.set_ylabel('Age')
ax.set_zlabel('Weight')
ax = plt.axes(projection='3d')
xdata = df['exercise_time']
ydata = df['weight']
zdata = df['age']
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Oranges')
ax.set_title('Exercise Time, Weight, and Age')
ax.set_xlabel('Exercise Time')
ax.set_ylabel('Weight')
ax.set_zlabel('Age')
# this is likely the most helpful graph. It shows that both weight and age are
# a factor in the exercise time spent. The higher the weight the lower the time,
# the higher the age, the lower the time.
ax = plt.axes(projection='3d')
xdata = df['weight']
ydata = df['exercise_time']
zdata = df['age']
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Oranges')
ax.set_title('Weight, Exercise Time, and Age')
ax.set_xlabel('Weight')
ax.set_ylabel('Exercise Time')
ax.set_zlabel('Age')
df.info()
df['exercise_time'].value_counts()
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
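# One possible answer (sketch): pass a list of columns to crosstab to see all three at once
pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='columns')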
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
#Load data.
import pandas as pd
persons_dataset = pd.read_csv('https://raw.githubusercontent.com/tcbic/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
#Convert dataset to dataframe.
persons_df = pd.DataFrame(persons_dataset)
persons_df.head()
#What can we do for some exploratory data analysis?
#Find out what our data types are.
print(persons_df.dtypes)
#Are there any missing values?
print(persons_df.isnull().sum())
#Find out our summary statistics.
persons_df.describe()
!pip install pandas==0.23.4
#Use crosstabulation to more closely assess relationships between variables.
weight_bins = pd.cut(persons_df['weight'], 5)
exercise_time_bins = pd.cut(persons_df['exercise_time'], 5)
# Measuring the effect of exercise time on weight. Answers the question: (What is the effect of exercise time on weight?)
pd.crosstab(weight_bins, exercise_time_bins)
#What if we took a look at all variables at once in a crosstable?
ct_all3variables = pd.crosstab(persons_df['age'], [weight_bins, exercise_time_bins])
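# Peek at the combined crosstab (sketch) -- the full table is wide, so just look at the first rows
ct_all3variables.head()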
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
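# One possible answer (sketch): group by device and time bin, then take the mean of 'purchased'
# to get the conversion rate for every (device, time bin) combination
user_data.groupby([user_data['mobile'], time_bins])['purchased'].mean()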
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
!pip install pandas==0.23.4
import pandas as pd
# Load are data using raw file, setting the first column as the index
persons = pd.read_csv('https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv', index_col=0)
# Check that it looks right
persons.head()
# Check out the shape
persons.shape
# Let's get some summary statistics
persons.describe()
# Create a crosstab, see we need some binning
pd.crosstab(persons['exercise_time'], persons['weight'])
# Put all our attributes into bins to get better info
exercise_bins = pd.cut(persons['exercise_time'], 6)
weight_bins = pd.cut(persons['weight'], 6)
age_bins = pd.cut(persons['age'], 5)
# Check out our crosstabs with normalization - a lot of the data is still not really informative
# But we do see a huge dropoff when we look at higher exercise times for higher weight bins
pd.crosstab(exercise_bins, weight_bins, normalize='index')
# Now let's add in age
ct = pd.crosstab(age_bins, [weight_bins, exercise_bins], normalize='columns')
ct
# A busy uninformative plot
ct.plot(kind='bar', legend=False);
# OK - still not getting too much info
# Let's check out some subsets of our data
# Here we break our data on the line of 50 yrs old and 200 minutes of exercise
ct_age50_et200 = pd.crosstab((persons['age'] > 50), (persons['exercise_time'] > 200))
ct_age50_et200
# Here we plot and see an inverse correlation between age and exercise time
ct_age50_et200.plot();
# Normalized to get an immediate sense
ct_age50_et200_normalized = pd.crosstab((persons['age'] > 50), (persons['exercise_time'] > 200), normalize='index')
ct_age50_et200_normalized
# 200 was too high a point to choose... 65% of younger people don't exercise that much
# Let's see how weight confounds this
pd.crosstab(exercise_bins, [age_bins, weight_bins]).plot(legend=False);
# Too much going on to tell
# Let's try splitting up our weight like we did age and exercise time
# Splitting everything near the middle of the dataset
ct_weight175_age50_et150 = pd.crosstab((persons['weight'] > 175), [(persons['age'] > 50), (persons['exercise_time'] > 150)], normalize='columns')
ct_weight175_age50_et150
# This shows a stark dropoff in exercise time for people over 175lbs, regardless of age
# Whereas, regardless of age, those under 175 split near 50/50 who exercised >150min/week
# Those over 175lbs almost uniformly did not exercise a lot
### CAN WE FIGURE OUT WHICH IS THE CHICKEN AND WHICH IS THE EGG???
ct_weight175_age50_et150.plot()
# Re-bin to create a split in data near middle of each without bool comparison
exercise_bins = pd.cut(persons['exercise_time'], 2)
weight_bins = pd.cut(persons['weight'], 2)
age_bins = pd.cut(persons['age'], 2)
# Make crosstab of the bins
ct_two_bin = pd.crosstab(weight_bins, [age_bins, exercise_bins], normalize='columns')
ct_two_bin
# Barplot - again we see nearly all the high exercise times disappear when weight gets up there
ct_two_bin.plot.bar()
import seaborn as sns
# Played with seaborn, tried all the different kinds of jointplots
sns.jointplot(persons['weight'], persons['exercise_time'], kind='reg')
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('dark_background')
# 3D PLOTTING!!!
from mpl_toolkits.mplot3d import Axes3D
# plt.figure() creates a new figure; add_subplot(111, projection='3d') adds a single 3D axes to it
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(persons['weight'], persons['age'], persons['exercise_time'])
ax.set_xlabel('Weight')
ax.set_ylabel('Age')
ax.set_zlabel('Exercise Time')
ax.set_title('Weight vs Age vs Exercise Time');
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
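# One possible answer (sketch): put time bin and device together on the index, then normalize
# each row to get the purchase rate within every (time bin, device) combination
pd.crosstab([time_bins, user_data['mobile']], user_data['purchased'], normalize='index')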
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/kmk028/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module4-databackedassertions/persons.csv')
df = df.drop(columns = ['Unnamed: 0'])
df.head()
df.dtypes.value_counts()
#correlations
# age vs weight
df['age'].corr(df['weight'])
#age vs exercise time
df['age'].corr(df['exercise_time'])
#weight vs exercise time
df['weight'].corr(df['exercise_time'])
#Pairplots
import seaborn as sns
sns.pairplot(df)
# From the pairplots below I see there is a linear decrease in weight as exercise time increases --> this relation seems real
# There is a sharp decrease in exercise time for people aged > 60 --> seems real
# As age decreases from 80 to 60 years there is a linear increase in exercise time from roughly 100 to 300 mins (row 1, col 3) --> this seems spurious, since it implies 60 year olds exercise about 3 times as much as 80 year olds, which seems unlikely
#cross tabs
weight_bins = pd.cut(df['weight'], bins=6)
exercise_time_bins = pd.cut(df['exercise_time'], bins=6)
pd.crosstab(weight_bins, exercise_time_bins.astype(str), normalize='index')
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
import numpy as np
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = np.random.normal(9.4 * 60, 3*60)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = np.random.normal(7.5*60, 2.5*60)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(time_bins, user_data['purchased'])
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='index')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
user_data['time_on_site_min'] = user_data['time_on_site'] / 60
user_data.head()
user_data.groupby('mobile').time_on_site_min.hist(bins=20, alpha=0.5, figsize=(10,6))
user_data.time_on_site.hist();
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
df = pd.read_csv('https://raw.githubusercontent.com/AbstractMonkey/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
df.head()
df.info()
df.corr()
# Wow, so exercise time and weight have a pretty strong inverse correlation.
# After that, age and exercise time have a weaker inverse correlation (-0.27), and age and weight are slightly positively correlated (0.14)
df.plot.scatter('exercise_time','weight')
df['weight'].hist(bins=20)
df['weight'].mode()
df.groupby(['exercise_time']).mean().hist(bins=20, figsize=(10,6))
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
from collections import namedtuple
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
example_user
example_user.time_on_site = 30  # raises AttributeError -- namedtuple fields are immutable
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
import numpy as np
np.random.normal(10,2)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
# time_on_site = random.uniform(10, 600)
time_on_site = np.random.normal(9.4*60, 3*60)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
# time_on_site = random.uniform(5, 300)
time_on_site = np.random.normal(7.5*60, 2.5*60)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
users[:10]
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Convert seconds to minutes (for comparison)
user_data['time_on_site_min'] = user_data['time_on_site'] / 60
user_data.head()
# Find negative time data
user_data[user_data.time_on_site < 0.0]
# Assign negative time_on_site values to 0.0
user_data.loc[user_data['time_on_site'] < 0.0, 'time_on_site'] = 0.0
user_data[user_data.time_on_site < 0.0] # confirm no negative values remain
import matplotlib.pyplot as plt
user_data.groupby('mobile').time_on_site_min.hist(bins=20, alpha=0.5, figsize=(10,6));
plt.title('Time on Site by Source (Desktop vs. Mobile in minutes)');
plt.ylabel('Count')
plt.xlabel('Time in Minutes')
plt.legend(['Desktop','Mobile']);
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site_min'], 5) # 5 equal-sized bins
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
pd.crosstab(columns=user_data['purchased'], index=time_bins, normalize='index')
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually mean fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']], normalize='columns')
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
###Output
_____no_output_____
###Markdown
Stanford Open Police Projecthttps://openpolicing.stanford.edu/findings/
###Code
!unzip jb084sr9005_nj_statewide_2019_02_25.csv.zip
!ls
%%time
df = pd.read_csv('./share/data/opp-for-archive/nj_statewide_2019_02_25.csv')
print(df.shape)
df.head()
df.isna().sum()
# What are the most common violations?
df.violation.value_counts().head(10)
# What can we infer about vehicle make and likelihood of getting pulled over?
df.vehicle_make.value_counts(normalize=True).head(10)
# What can we infer about this statement? Are other confounding relationships needed (male, female, race, etc.)?
df[df.violation == '39:4-98 RATES OF SPEED'].vehicle_make.value_counts(normalize=True).head(10)
# What about vehicle color?
df[df.violation == '39:4-98 RATES OF SPEED'].vehicle_color.value_counts(normalize=True).head(10)
###Output
_____no_output_____
###Markdown
Using %%timeit for Different Pandas Operations
###Code
df = pd.DataFrame({'a': np.arange(1e6),
'b': 2*np.arange(1e6)})
print(df.shape)
df.head()
%timeit df['c'] = df['a']**2
from tqdm import tqdm
tqdm.pandas()
%timeit df['c2'] = df['a'].apply(lambda x: x**2)
%timeit df['c3'] = df['a'].progress_apply(lambda x: x**2)
###Output
100%|██████████| 1000000/1000000 [00:01<00:00, 715486.46it/s]
100%|██████████| 1000000/1000000 [00:01<00:00, 709237.44it/s]
100%|██████████| 1000000/1000000 [00:01<00:00, 698433.83it/s]
100%|██████████| 1000000/1000000 [00:01<00:00, 700210.45it/s]
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
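# A possible starting point (my addition, not the original solution). It assumes the
# persons.csv raw GitHub URL from the LambdaSchool repo; adjust the path/URL if needed.
import pandas as pd
import seaborn as sns
persons_url = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
persons = pd.read_csv(persons_url, index_col=0)  # assuming the first column is the unique id
sns.pairplot(persons);  # quick look at pairwise relationships between age, weight, exercise_time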
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
# dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
random.uniform(10,600)
import numpy as np
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = np.random.normal(700, 100)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = np.random.normal(400, 90)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
idx = user_data['time_on_site'] < 0
user_data.loc[idx, 'time_on_site'] = 0  # single .loc call so the assignment actually sticks (avoids chained indexing)
user_data.sample(n=10)
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
user_data['time_on_site'].hist(bins=20)
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
idx = user_data['time_on_site'] < 0
user_data.loc[idx, 'time_on_site'] = np.NaN
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-width bins (pd.cut splits the value range, not the counts)
pd.crosstab(time_bins, user_data['purchased'])
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(time_bins, user_data['purchased'], normalize='columns')
pd.crosstab(time_bins, user_data['purchased'], normalize='index')
# That seems counter to our hypothesis
# More time on the site can actually mean fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
pd.crosstab(time_bins, [user_data['mobile'], user_data['purchased']], normalize='columns')
df = pd.DataFrame({'a': np.arange(1e5), 'b': 2* np.arange(1e5)})
df.head()
%%timeit
df['c'] = df['a'].apply(lambda x: x**2)
%%timeit
df['c2'] = df['a']**2
def get_temp_for_coord(x,y):
'...'
temp = 60
return temp
# Illustrative only: assumes a dataframe with 'lat' and 'lon' columns; axis=1 applies row-wise
df['temp'] = df.apply(lambda x: get_temp_for_coord(x['lat'], x['lon']), axis=1)
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
df = pd.read_csv('https://raw.githubusercontent.com/AndrewMarksArt/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')
df.head()
df.drop(columns=['Unnamed: 0'], inplace= True)
df.dtypes
df.dtypes.value_counts()
import seaborn as sns
import matplotlib.pyplot as plt
sns.pairplot(df)
plt.scatter(df['exercise_time'], df['weight'])
plt.title("Exercise Time vs Weight")
plt.xlabel("Exercise Time")
plt.ylabel("Weight")
plt.show()
plt.scatter(df['age'], df['exercise_time'])
plt.title("Age vs Exercise Time")
plt.xlabel("Age")
plt.ylabel("Exercise Time")
plt.show()
plt.scatter(df['weight'], df['age'])
plt.title("Weight vs Age")
plt.xlabel("Weight")
plt.ylabel("Age")
plt.show()
###Output
_____no_output_____
###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Lecture - generating a confounding variableThe prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.Let's use Python to generate data that actually behaves in this fashion!
###Code
import random
dir(random) # Reminding ourselves what we can do here
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
!pip freeze
!pip install pandas==0.23.4
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head(20)
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-width bins (pd.cut splits the value range, not the counts)
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually mean fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
ct = pd.crosstab(user_data['mobile'], [user_data['purchased'], time_bins], normalize='index')
ct
type(ct)
ct.plot(kind='bar', legend=False);
ct.plot(kind='bar',stacked=True, legend=False);
user_data['time_on_site'].plot.hist(bins=50);
user_data[(user_data['mobile']==False) & (user_data['purchased']==True)].plot.hist();
user_data[(user_data['mobile']==True) & (user_data['purchased']==True)].plot.hist();
pt = pd.pivot_table(user_data, values='purchased', index = time_bins)
pt
pt.plot.bar();
ct = pd.crosstab(time_bins,[user_data['purchased'], user_data['mobile']], normalize='columns')
ct
ct.plot();
ct_sliced = ct.iloc[:,[2,3]]
ct_sliced.plot(kind='bar', stacked=True);
###Output
_____no_output_____
###Markdown
Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
persons_data_url = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
persons_data = pd.read_csv(persons_data_url)
persons_data.head()
#deleting first column
persons_data = persons_data.drop(columns='Unnamed: 0')
persons_data
#find data types
persons_data.dtypes
weight_bins = pd.cut(persons_data['weight'], 7)
exercise_time_bins = pd.cut(persons_data['exercise_time'],7)
age_bins = pd.cut(persons_data['age'],7)
we = pd.crosstab(weight_bins,exercise_time_bins)
ae = pd.crosstab(age_bins,exercise_time_bins)
aw = pd.crosstab(age_bins, weight_bins)
ew = pd.crosstab(exercise_time_bins, weight_bins)
#first quick plots
we.plot();
ae.plot();
aw.plot();
#plotting bar graphs with rotated x-axis labels
we.plot(kind='bar',legend = False);
ew.plot(kind='bar',legend=False);
all_three = pd.crosstab(weight_bins,[age_bins,exercise_time_bins], normalize='index')
all_three
#selecting young people
all_three_young = all_three.iloc[:,[0,1,2,3,4,5,6]]
#selecting mid age people
all_three_mid = all_three.iloc[:,[7,8,9,10,11,12,13]]
#selecting older age people
all_three_old = all_three.iloc[:,[14,15,16,17,18,19,20]]
#displaying the three age groups separately
all_three_young.plot(legend=False);
all_three_mid.plot(legend=False);
all_three_old.plot(legend=False);
###Output
_____no_output_____ |
SPR_HW1/Ex01_Q6_d.ipynb | ###Markdown
- Sobhan Moradian Daghigh - 12/4/2021 - PR - EX01 - Q6 - Part d.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
###Output
_____no_output_____
###Markdown
Reading data
###Code
dataset = pd.read_csv('./inputs/Q6/first_half_logs.csv',
names=['timestamp', 'tag_id','x_pos', 'y_pos',
'heading', 'direction', 'energy', 'speed', 'total_distance'])
dataset.head()
dataset.info()
players = dataset.groupby(by=dataset.tag_id)
players.first()
for grp, pdf in players:
print('player: {} - total_distance: {}'.format(grp, pdf.iloc[:, -1].max()))
###Output
player: 1 - total_distance: 5921.5
player: 2 - total_distance: 5849.06
player: 3 - total_distance: 3.75693
player: 5 - total_distance: 6658.27
player: 6 - total_distance: 748.03
player: 7 - total_distance: 6622.92
player: 8 - total_distance: 6067.95
player: 9 - total_distance: 6046.06
player: 10 - total_distance: 7177.11
player: 11 - total_distance: 634.211
player: 12 - total_distance: 0.285215
player: 13 - total_distance: 6317.69
player: 14 - total_distance: 6692.62
player: 15 - total_distance: 6448.72
###Markdown
It seems that there are some non-player captures, which I want to filter out. I also decided to ignore one of the substitute players so that we end up with 11 players in total.
###Code
dataset = dataset.drop(dataset[dataset.tag_id == 6].index)
dataset = dataset.drop(dataset[dataset.tag_id == 12].index)
dataset = dataset.drop(dataset[dataset.tag_id == 11].index)
players = dataset.groupby(by=dataset.tag_id)
players.first()
for grp, pdf in players:
x_mean = pdf.loc[:, 'x_pos'].mean()
y_mean = pdf.loc[:, 'y_pos'].mean()
plt.scatter(x_mean, y_mean, label='player {}'.format(grp))
plt.legend(bbox_to_anchor=(1.3, 1.01))
plt.xlim([0, 105])
plt.ylim([0, 70])
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Part D.
###Code
img = Image.open('./SPR_HW1/Association_football.png')
img.resize((500, 350))
for grp, pdf in players:
x_mean = pdf.loc[:, 'x_pos'].mean()
y_mean = pdf.loc[:, 'y_pos'].mean()
plt.scatter(x_mean, y_mean, label='player {}'.format(grp))
plt.legend(bbox_to_anchor=(1.3, 1.01))
plt.xlim([20, 68])
plt.ylim([15, 50])
plt.grid()
plt.show()
###Output
_____no_output_____ |
Unit 1 Sprint Challenge 3/DS_Unit_1_Sprint_Challenge_3_Data_Storytelling.ipynb | ###Markdown
Data Science Unit 1 Sprint Challenge 3 Data StorytellingIn this sprint challenge you'll work with a dataset from **FiveThirtyEight's article, [Every Guest Jon Stewart Ever Had On ‘The Daily Show’](https://fivethirtyeight.com/features/every-guest-jon-stewart-ever-had-on-the-daily-show/)**! Part 0 — Run this starter codeYou don't need to add or change anything here. Just run this cell and it loads the data for you, into a dataframe named `df`.(You can explore the data if you want, but it's not required to pass the Sprint Challenge.)
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
url = 'https://raw.githubusercontent.com/fivethirtyeight/data/master/daily-show-guests/daily_show_guests.csv'
df = pd.read_csv(url).rename(columns={'YEAR': 'Year', 'Raw_Guest_List': 'Guest'})
def get_occupation(group):
if group in ['Acting', 'Comedy', 'Musician']:
return 'Acting, Comedy & Music'
elif group in ['Media', 'media']:
return 'Media'
elif group in ['Government', 'Politician', 'Political Aide']:
return 'Government and Politics'
else:
return 'Other'
df['Occupation'] = df['Group'].apply(get_occupation)
###Output
_____no_output_____
###Markdown
Part 1 — What's the breakdown of guests’ occupations per year?For example, in 1999, what percentage of guests were actors, comedians, or musicians? What percentage were in the media? What percentage were in politics? What percentage were from another occupation?Then, what about in 2000? In 2001? And so on, up through 2015.So, **for each year of _The Daily Show_, calculate the percentage of guests from each occupation:**- Acting, Comedy & Music- Government and Politics- Media- Other Hints:You can make a crosstab. (See pandas documentation for examples, explanation, and parameters.)You'll know you've calculated correctly when the percentage of "Acting, Comedy & Music" guests is 90.36% in 1999, and 45% in 2015.**Optional Bonus Challenge:** Do additional insightful data exploration.
###Code
df.info()
ct = pd.crosstab(df['Year'], df['Occupation'])
ct
percents = ct.div(ct.sum(axis=1), axis=0)
percents
formated = percents.style.format("{:.2%}")
formated
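# Sanity check (my addition) against the expected values stated in the prompt above:
# 'Acting, Comedy & Music' should be about 90.36% in 1999 and 45% in 2015.
percents.loc[[1999, 2015], 'Acting, Comedy & Music']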
###Output
_____no_output_____
###Markdown
Part 2 — Recreate this explanatory visualization:
###Code
from IPython.display import display, Image
png = 'https://fivethirtyeight.com/wp-content/uploads/2015/08/hickey-datalab-dailyshow.png'
example = Image(png, width=500)
display(example)
###Output
_____no_output_____
###Markdown
**Hints:**- You can choose any Python visualization library you want. I've verified the plot can be reproduced with matplotlib, pandas plot, or seaborn. I assume other libraries like altair or plotly would work too.- If you choose to use seaborn, you may want to upgrade the version to 0.9.0.**Expectations:** Your plot should include:- 3 lines visualizing "occupation of guests, by year." The shapes of the lines should look roughly identical to 538's example. Each line should be a different color. (But you don't need to use the _same_ colors as 538.)- Legend or labels for the lines. (But you don't need each label positioned next to its line or colored like 538.)- Title in the upper left: _"Who Got To Be On 'The Daily Show'?"_ with more visual emphasis than the subtitle. (Bolder and/or larger font.)- Subtitle underneath the title: _"Occupation of guests, by year"_**Optional Bonus Challenge:**- Give your plot polished aesthetics, with improved resemblance to the 538 example.- Any visual element not specifically mentioned in the expectations is an optional bonus.
###Code
yr_bins = pd.cut(df['Year'], 4)
yr_bins
plt.style.use('fivethirtyeight')
guests_graph = percents.plot();
plt.title("Who Got To Be On 'The Daily Show'?",
fontsize=14, fontweight='bold')
guests_graph;
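# Optional refinement (my addition): one way to get the title/subtitle layout of the 538
# example; the suptitle coordinates are guesses and may need tweaking for your figure size.
ax = percents.plot()
ax.set_title('Occupation of guests, by year', fontsize=12, loc='left')
plt.suptitle("Who Got To Be On 'The Daily Show'?", x=0.125, ha='left',
             fontsize=16, fontweight='bold');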
###Output
_____no_output_____
###Markdown
Part 3 — Who were the top 10 guests on _The Daily Show_?**Make a plot** that shows their names and number of appearances.**Add a title** of your choice.**Expectations:** It's ok to make a simple, quick plot: exploratory, instead of explanatory. **Optional Bonus Challenge:** You can change aesthetics and add more annotation. For example, in a relevant location, could you add the text "19" to show that Fareed Zakaria appeared 19 times on _The Daily Show_? (And so on, for each of the top 10 guests.)
###Code
# Top 10 guests by number of appearances (quick exploratory bar plot)
df['Guest'].value_counts().head(10).plot(kind='barh', title='Top 10 Daily Show Guests');
###Output
_____no_output_____ |
examples/ex_deesse_06_proba_constraint.ipynb | ###Markdown
MPS using the deesse wrapper - simulations with probability constraints Main points addressed:- deesse simulation with probability (proportion) constraints: - categorical or continuous variable - local or global probability constraints **Note:** if *global* probability constraints are used and if deesse is launched in parallel (with more than one thread), the reproducibility is not guaranteed. Import what is required
###Code
import numpy as np
import matplotlib.pyplot as plt
# import from package 'geone'
from geone import img
import geone.imgplot as imgplt
import geone.customcolors as ccol
import geone.deesseinterface as dsi
###Output
_____no_output_____
###Markdown
1. Categorical simulations - global probability constraints Training image (TI)Read the training image. (Source of the image: *D. Allard, D. D'or, and R. Froidevaux, An efficient maximum entropy approach for categorical variable prediction, EUROPEAN JOURNAL OF SOIL SCIENCE, 62(3):381-393, JUN 2011, doi: 10.1111/j.1365-2389.2011.01362.x*)
###Code
ti = img.readImageGslib('ti2.gslib')
###Output
_____no_output_____
###Markdown
Plot the image (using the function `imgplt.drawImage2D`).
###Code
col = ['lightblue', 'darkgreen', 'orange']
plt.figure(figsize=(5,5))
imgplt.drawImage2D(ti, categ=True, categCol=col, title='TI')
###Output
_____no_output_____
###Markdown
Simulation gridDefine the simulation grid (number of cells in each direction, cell unit, origin).
###Code
nx, ny, nz = 300, 300, 1 # number of cells
sx, sy, sz = ti.sx, ti.sy, ti.sz # cell unit
ox, oy, oz = 0.0, 0.0, 0.0 # origin (corner of the "first" grid cell)
###Output
_____no_output_____
###Markdown
Define the classes of valuesSet the number of classes, and for each class define the ensemble of values as a (union of) interval(s).
###Code
nclass = 3
class1 = [-0.5, 0.5] # interval [-0.5, 0.5[ (for facies code 0)
class2 = [ 0.5, 1.5] # interval [ 0.5, 1.5[ (for facies code 1)
class3 = [ 1.5, 2.5] # interval [ 1.5, 2.5[ (for facies code 2)
# classx = [[-0.5, 0.5],[ 1.5, 2.5]] # for the union [-0.5, 0.5[ U [1.5, 2.5[, containing facies codes 0 and 2
list_of_classes = [class1, class2, class3]
###Output
_____no_output_____
###Markdown
Define probability constraints (class `dsi.SoftProbability`)To save time and to avoid noisy simulations, probability constraints can be deactivated when the last pattern node (the farthest from the central cell) is at a distance less than a given value (`deactivationDistance`). Note that the distance is computed according to the units defined for the search neighborhood ellipsoid.
###Code
sp = dsi.SoftProbability(
probabilityConstraintUsage=1, # global probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
globalPdf=[0.2, 0.5, 0.3], # global target PDF (list of length nclass)
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(dsi.SoftProbability))
deactivationDistance=4.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=1.e-3) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesseDeesse is launched with one thread (`nthreads=1`) to ensure reproducibility when global probability constraints are used.
###Code
nreal = 1
deesse_input = dsi.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='categ',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='categorical',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.25,
npostProcessingPathMax=1,
seed=444,
nrealization=nreal)
deesse_output = dsi.deesseRun(deesse_input, nthreads=1)
###Output
Deesse running... [VERSION 3.2 / BUILD NUMBER 20200606 / OpenMP 1 thread(s)]
Deesse run complete
###Markdown
Retrieve the results (and display)
###Code
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.figure(figsize=(5,5))
imgplt.drawImage2D(sim[0], categ=True, categCol=col,
title='Sim. using global target proportions')
###Output
_____no_output_____
###Markdown
Compare facies proportions (TI, simulation, target)
###Code
ti.get_prop()
sim[0].get_prop() # target proportions are: [0.2, 0.5, 0.3]
###Output
_____no_output_____
###Markdown
2. Categorical simulations - local probability constraintsTarget proportions can be specified locally. For each cell, the target proportions (for each class) in a region around the cell are considered. Hence, proportion maps are required as well as a support radius: the region is defined as the set of cells that lie in the search neighborhood ellipsoid and at a distance from the central (simulated) cell less than or equal to the given support radius. Note that distances are computed according to the units defined for the search neighborhood ellipsoid. Build local target proportions maps
###Code
# xg, yg: coordinates of the centers of grid cell
xg = ox + 0.5*sx + sx*np.arange(nx)
yg = oy + 0.5*sy + sy*np.arange(ny)
xx, yy = np.meshgrid(xg, yg) # create meshgrid from the center of grid cells
# Define proportion maps for each class
c = 0.6
p1 = xx + yy
p1 = c * (p1 - np.min(p1))/ (np.max(p1) - np.min(p1))
p2 = c - p1
p0 = 1.0 - p1 - p2 # constant map (1-c = 0.4)
local_pdf = np.zeros((nclass, nz, ny, nx))
local_pdf[0,0,:,:] = p0
local_pdf[1,0,:,:] = p1
local_pdf[2,0,:,:] = p2
###Output
_____no_output_____
###Markdown
Plot these maps.
###Code
im = img.Img(nx, ny, nz, sx, sy, sz, ox, oy, oz, nv=3, val=local_pdf)
plt.subplots(1, 3, figsize=(17,5)) # 1 x 3 sub-plots
plt.subplot(1,3,1)
imgplt.drawImage2D(im, iv=0, vmin=0, vmax=c, title='Proportion for facies 0')
plt.subplot(1,3,2)
imgplt.drawImage2D(im, iv=1, vmin=0, vmax=c, title='Proportion for facies 1')
plt.subplot(1,3,3)
imgplt.drawImage2D(im, iv=2, vmin=0, vmax=c, title='Proportion for facies 2')
###Output
_____no_output_____
###Markdown
Define probability constraints (class `dsi.SoftProbability`)
###Code
sp = dsi.SoftProbability(
probabilityConstraintUsage=2, # local probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
localPdf=local_pdf, # local target PDF
localPdfSupportRadius=12., # support radius
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(dsi.SoftProbability))
deactivationDistance=4.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=1.e-3) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesse
###Code
nreal = 1
deesse_input = dsi.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='categ',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='categorical',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.25,
npostProcessingPathMax=1,
seed=444,
nrealization=nreal)
deesse_output = dsi.deesseRun(deesse_input)
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.figure(figsize=(5,5))
imgplt.drawImage2D(sim[0], categ=True, categCol=col,
title='Sim. using local target proportions')
###Output
_____no_output_____
###Markdown
3. Continuous simulation - global probability constraints Training image (TI)(Source of the image: *T. Zhang, P. Switzer, and A. Journel, Filter-based classification of training image patterns for spatial simulation, MATHEMATICAL GEOLOGY, 38(1):63-80, JAN 2006, doi: 10.1007/s11004-005-9004-x*)
###Code
ti = img.readImageGslib('tiContinuous.gslib')
plt.figure(figsize=(5,5))
imgplt.drawImage2D(ti, cmap=ccol.cmapB2W, title='TI')
###Output
_____no_output_____
###Markdown
Simulation gridDefine the simulation grid (number of cells in each direction, cell unit, origin).
###Code
nx, ny, nz = 200, 200, 1 # number of cells
sx, sy, sz = ti.sx, ti.sy, ti.sz # cell unit
ox, oy, oz = 0.0, 0.0, 0.0 # origin (corner of the "first" grid cell)
###Output
_____no_output_____
###Markdown
Define the classes of valuesSet the number of classes, and for each class define the ensemble of values as a (union of) interval(s).
###Code
vmin, vmax = 0., 256.
nclass = 10
breaks = np.linspace(vmin, vmax, nclass+1)
list_of_classes = [np.array([[breaks[i], breaks[i+1]]]) for i in range(nclass)]
###Output
_____no_output_____
###Markdown
Define probability constraints (class `dsi.SoftProbability`)
###Code
global_pdf = np.repeat(1./nclass, nclass) # global pdf (proportion for each class), uniform
sp = dsi.SoftProbability(
probabilityConstraintUsage=1, # global probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
globalPdf=global_pdf, # global target PDF (list of length nclass)
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(dsi.SoftProbability))
deactivationDistance=2.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=0) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesseDeesse is launched with one thread (`nthreads=1`) to ensure reproducibility when global probability constraints are used.
###Code
nreal = 1
deesse_input = dsi.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='code',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='continuous',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.5,
npostProcessingPathMax=0, # disable post-processing (to avoid losing target proportions)
seed=444,
nrealization=nreal)
deesse_output = dsi.deesseRun(deesse_input, nthreads=1)
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.subplots(1,2, figsize=(15,5)) # 1 x 2 sub-plots
plt.subplot(1,2,1)
imgplt.drawImage2D(sim[0], cmap=ccol.cmapB2W,
title='Sim. using global target proportions')
plt.subplot(1,2,2)
plt.hist(ti.val.reshape(-1), bins=breaks, density=True, color='lightblue', alpha=0.5, label='TI')
plt.hist(sim[0].val.reshape(-1), bins=breaks, density=True, color='orange', alpha=0.5, label='SIM')
plt.legend()
plt.title('Proportions of values on the classes')
###Output
_____no_output_____
###Markdown
4. Continuous simulations - local probability constraints Define new classes of values
###Code
nclass = 2
class1 = [0., 50.] # interval [0., 50.[ (low values)
class2 = [50., 256.] # interval [50., 256.[ (high values)
list_of_classes = [class1, class2]
###Output
_____no_output_____
###Markdown
Build local target proportions maps
###Code
# xg, yg: coordinates of the centers of grid cell
xg = ox + 0.5*sx + sx*np.arange(nx)
yg = oy + 0.5*sy + sy*np.arange(ny)
xx, yy = np.meshgrid(xg, yg) # create meshgrid from the center of grid cells
# Define proportion maps for each class
p0 = (yy - np.min(yy))/ (np.max(yy) - np.min(yy))
p1 = 1.0 - p0
local_pdf = np.zeros((nclass, nz, ny, nx))
local_pdf[0,0,:,:] = p0
local_pdf[1,0,:,:] = p1
# Plot these maps
im = img.Img(nx, ny, nz, sx, sy, sz, ox, oy, oz, nv=2, val=local_pdf)
plt.subplots(1, 2, figsize=(15,5)) # 1 x 2 sub-plots
plt.subplot(1,2,1)
imgplt.drawImage2D(im, iv=0, vmin=0, vmax=1, title='Proportion for low values')
plt.subplot(1,2,2)
imgplt.drawImage2D(im, iv=1, vmin=0, vmax=1, title='Proportion for high values')
###Output
_____no_output_____
###Markdown
Define probability constraints (class `dsi.SoftProbability`)
###Code
sp = dsi.SoftProbability(
probabilityConstraintUsage=2, # local probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
localPdf=local_pdf, # local target PDF
localPdfSupportRadius=12., # support radius
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(dsi.SoftProbability))
deactivationDistance=4.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=0.001) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesse
###Code
nreal = 1
deesse_input = dsi.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='code',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='continuous',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.5,
npostProcessingPathMax=1,
seed=444,
nrealization=nreal)
deesse_output = dsi.deesseRun(deesse_input)
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.figure(figsize=(5,5))
imgplt.drawImage2D(sim[0], cmap=ccol.cmapB2W,
title='Sim. using local target proportions')
###Output
_____no_output_____
###Markdown
MPS using the deesse wrapper - simulations with probability constraints Main points addressed:- deesse simulation with probability (proportion) constraints: - categorical or continuous variable - local or global probability constraints **Note:** if *global* probability constraints are used and if deesse is launched in parallel (with more than one thread), the reproducibility is not guaranteed. Import what is required
###Code
import numpy as np
import matplotlib.pyplot as plt
# import package 'geone'
import geone as gn
###Output
_____no_output_____
###Markdown
1. Categorical simulations - global probability constraints Training image (TI)Read the training image. (Source of the image: *D. Allard, D. D'or, and R. Froidevaux, An efficient maximum entropy approach for categorical variable prediction, EUROPEAN JOURNAL OF SOIL SCIENCE, 62(3):381-393, JUN 2011, doi: 10.1111/j.1365-2389.2011.01362.x*)
###Code
ti = gn.img.readImageGslib('ti2.gslib')
###Output
_____no_output_____
###Markdown
Plot the image (using the function `geone.imgplot.drawImage2D`).
###Code
col = ['lightblue', 'darkgreen', 'orange']
plt.figure(figsize=(5,5))
gn.imgplot.drawImage2D(ti, categ=True, categCol=col, title='TI')
plt.show()
###Output
_____no_output_____
###Markdown
Simulation gridDefine the simulation grid (number of cells in each direction, cell unit, origin).
###Code
nx, ny, nz = 300, 300, 1 # number of cells
sx, sy, sz = ti.sx, ti.sy, ti.sz # cell unit
ox, oy, oz = 0.0, 0.0, 0.0 # origin (corner of the "first" grid cell)
###Output
_____no_output_____
###Markdown
Define the classes of valuesSet the number of classes, and for each class define the ensemble of values as a (union of) interval(s).
###Code
nclass = 3
class1 = [-0.5, 0.5] # interval [-0.5, 0.5[ (for facies code 0)
class2 = [ 0.5, 1.5] # interval [ 0.5, 1.5[ (for facies code 1)
class3 = [ 1.5, 2.5] # interval [ 1.5, 2.5[ (for facies code 2)
# classx = [[-0.5, 0.5],[ 1.5, 2.5]] # for the union [-0.5, 0.5[ U [1.5, 2.5[, containing facies codes 0 and 2
list_of_classes = [class1, class2, class3]
###Output
_____no_output_____
###Markdown
Define probability constraints (class `geone.deesseinterface.SoftProbability`)To save time and to avoid noisy simulations, probability constraints can be deactivated when the last pattern node (the farthest from the central cell) is at a distance less than a given value (`deactivationDistance`). Note that the distance is computed according to the units defined for the search neighborhood ellipsoid.
###Code
sp = gn.deesseinterface.SoftProbability(
probabilityConstraintUsage=1, # global probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
globalPdf=[0.2, 0.5, 0.3], # global target PDF (list of length nclass)
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(gn.deesseinterface.SoftProbability))
deactivationDistance=4.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=1.e-3) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesse
###Code
nreal = 1
deesse_input = gn.deesseinterface.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='categ',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='categorical',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.25,
npostProcessingPathMax=1,
seed=444,
nrealization=nreal)
deesse_output = gn.deesseinterface.deesseRun(deesse_input)
###Output
DeeSse running... [VERSION 3.2 / BUILD NUMBER 20210922 / OpenMP 7 thread(s)]
* checking out license OK.
DeeSse run complete
Warnings encountered (1 times in all):
# 1: WARNING 00111: reproducibility guaranteed when using multiple threads and global probability constraint (performance possibly limited), but results differ from the serial version
###Markdown
Retrieve the results (and display)
###Code
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.figure(figsize=(5,5))
gn.imgplot.drawImage2D(sim[0], categ=True, categCol=col,
title='Sim. using global target proportions')
plt.show()
###Output
_____no_output_____
###Markdown
Compare facies proportions (TI, simulation, target)
###Code
ti.get_prop()
sim[0].get_prop() # target proportions are: [0.2, 0.5, 0.3]
###Output
_____no_output_____
###Markdown
2. Categorical simulations - local probability constraintsTarget proportions can be specified locally. For each cell, the target proportions (for each class) in a region around the cell are considered. Hence, proportion maps are required as well as a support radius: the region is defined as the set of cells that lie in the search neighborhood ellipsoid and at a distance from the central (simulated) cell less than or equal to the given support radius. Note that distances are computed according to the units defined for the search neighborhood ellipsoid. Build local target proportions maps
###Code
# xg, yg: coordinates of the centers of grid cell
xg = ox + 0.5*sx + sx*np.arange(nx)
yg = oy + 0.5*sy + sy*np.arange(ny)
xx, yy = np.meshgrid(xg, yg) # create meshgrid from the center of grid cells
# Define proportion maps for each class
c = 0.6
p1 = xx + yy
p1 = c * (p1 - np.min(p1))/ (np.max(p1) - np.min(p1))
p2 = c - p1
p0 = 1.0 - p1 - p2 # constant map (1-c = 0.4)
local_pdf = np.zeros((nclass, nz, ny, nx))
local_pdf[0,0,:,:] = p0
local_pdf[1,0,:,:] = p1
local_pdf[2,0,:,:] = p2
###Output
_____no_output_____
###Markdown
Plot these maps.
###Code
im = gn.img.Img(nx, ny, nz, sx, sy, sz, ox, oy, oz, nv=3, val=local_pdf)
plt.subplots(1, 3, figsize=(17,5)) # 1 x 3 sub-plots
plt.subplot(1,3,1)
gn.imgplot.drawImage2D(im, iv=0, vmin=0, vmax=c, title='Proportion for facies 0')
plt.subplot(1,3,2)
gn.imgplot.drawImage2D(im, iv=1, vmin=0, vmax=c, title='Proportion for facies 1')
plt.subplot(1,3,3)
gn.imgplot.drawImage2D(im, iv=2, vmin=0, vmax=c, title='Proportion for facies 2')
plt.show()
###Output
_____no_output_____
###Markdown
Define probability constraints (class `geone.deesseinterface.SoftProbability`)
###Code
sp = gn.deesseinterface.SoftProbability(
probabilityConstraintUsage=2, # local probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
localPdf=local_pdf, # local target PDF
localPdfSupportRadius=12., # support radius
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(gn.deesseinterface.SoftProbability))
deactivationDistance=4.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=1.e-3) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesse
###Code
nreal = 1
deesse_input = gn.deesseinterface.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='categ',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='categorical',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.25,
npostProcessingPathMax=1,
seed=444,
nrealization=nreal)
deesse_output = gn.deesseinterface.deesseRun(deesse_input)
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.figure(figsize=(5,5))
gn.imgplot.drawImage2D(sim[0], categ=True, categCol=col,
title='Sim. using local target proportions')
plt.show()
###Output
_____no_output_____
###Markdown
3. Continuous simulation - global probability constraints Training image (TI)(Source of the image: *T. Zhang, P. Switzer, and A. Journel, Filter-based classification of training image patterns for spatial simulation, MATHEMATICAL GEOLOGY, 38(1):63-80, JAN 2006, doi: 10.1007/s11004-005-9004-x*)
###Code
ti = gn.img.readImageGslib('tiContinuous.gslib')
plt.figure(figsize=(5,5))
gn.imgplot.drawImage2D(ti, cmap=gn.customcolors.cmapB2W, title='TI')
plt.show()
###Output
_____no_output_____
###Markdown
Simulation gridDefine the simulation grid (number of cells in each direction, cell unit, origin).
###Code
nx, ny, nz = 200, 200, 1 # number of cells
sx, sy, sz = ti.sx, ti.sy, ti.sz # cell unit
ox, oy, oz = 0.0, 0.0, 0.0 # origin (corner of the "first" grid cell)
###Output
_____no_output_____
###Markdown
Define the classes of valuesSet the number of classes, and for each class define the ensemble of values as a (union of) interval(s).
###Code
vmin, vmax = 0., 256.
nclass = 10
breaks = np.linspace(vmin, vmax, nclass+1)
list_of_classes = [np.array([[breaks[i], breaks[i+1]]]) for i in range(nclass)]
###Output
_____no_output_____
###Markdown
Define probability constraints (class `geone.deesseinterface.SoftProbability`)
###Code
global_pdf = np.repeat(1./nclass, nclass) # global pdf (proportion for each class), uniform
sp = gn.deesseinterface.SoftProbability(
probabilityConstraintUsage=1, # global probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
globalPdf=global_pdf, # global target PDF (list of length nclass)
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(gn.deesseinterface.SoftProbability))
deactivationDistance=2.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=0) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesse
###Code
nreal = 1
deesse_input = gn.deesseinterface.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='code',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='continuous',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.5,
    npostProcessingPathMax=0, # disable post-processing (to avoid losing target proportions)
seed=444,
nrealization=nreal)
deesse_output = gn.deesseinterface.deesseRun(deesse_input)
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.subplots(1,2, figsize=(15,5)) # 1 x 2 sub-plots
plt.subplot(1,2,1)
gn.imgplot.drawImage2D(sim[0], cmap=gn.customcolors.cmapB2W,
title='Sim. using global target proportions')
plt.subplot(1,2,2)
plt.hist(ti.val.reshape(-1), bins=breaks, density=True, color='lightblue', alpha=0.5, label='TI')
plt.hist(sim[0].val.reshape(-1), bins=breaks, density=True, color='orange', alpha=0.5, label='SIM')
plt.legend()
plt.title('Proportions of values on the classes')
plt.show()
###Output
_____no_output_____
###Markdown
4. Continuous simulations - local probability constraints Define new classes of values
###Code
nclass = 2
class1 = [0., 50.] # interval [0., 50.[ (low values)
class2 = [50., 256.] # interval [50., 256.[ (high values)
list_of_classes = [class1, class2]
###Output
_____no_output_____
###Markdown
Build local target proportions maps
###Code
# xg, yg: coordinates of the centers of grid cell
xg = ox + 0.5*sx + sx*np.arange(nx)
yg = oy + 0.5*sy + sy*np.arange(ny)
xx, yy = np.meshgrid(xg, yg) # create meshgrid from the center of grid cells
# Define proportion maps for each class
p0 = (yy - np.min(yy))/ (np.max(yy) - np.min(yy))
p1 = 1.0 - p0
local_pdf = np.zeros((nclass, nz, ny, nx))
local_pdf[0,0,:,:] = p0
local_pdf[1,0,:,:] = p1
# Plot these maps
im = gn.img.Img(nx, ny, nz, sx, sy, sz, ox, oy, oz, nv=2, val=local_pdf)
plt.subplots(1, 2, figsize=(15,5)) # 1 x 2 sub-plots
plt.subplot(1,2,1)
gn.imgplot.drawImage2D(im, iv=0, vmin=0, vmax=1, title='Proportion for low values')
plt.subplot(1,2,2)
gn.imgplot.drawImage2D(im, iv=1, vmin=0, vmax=1, title='Proportion for high values')
plt.show()
###Output
_____no_output_____
###Markdown
Define probability constraints (class `geone.deesseinterface.SoftProbability`)
###Code
sp = gn.deesseinterface.SoftProbability(
probabilityConstraintUsage=2, # local probability constraints
nclass=nclass, # number of classes of values
classInterval=list_of_classes, # list of classes
localPdf=local_pdf, # local target PDF
localPdfSupportRadius=12., # support radius
comparingPdfMethod=5, # method for comparing PDF's (see doc: help(gn.deesseinterface.SoftProbability))
deactivationDistance=4.0, # deactivation distance (checking PDF is deactivated for narrow patterns)
constantThreshold=0.001) # acceptation threshold
###Output
_____no_output_____
###Markdown
Fill the input structure for deesse and launch deesse
###Code
nreal = 1
deesse_input = gn.deesseinterface.DeesseInput(
nx=nx, ny=ny, nz=nz,
sx=sx, sy=sy, sz=sz,
ox=ox, oy=oy, oz=oz,
nv=1, varname='code',
nTI=1, TI=ti,
softProbability=sp, # set probability constraints
distanceType='continuous',
nneighboringNode=24,
distanceThreshold=0.05,
maxScanFraction=0.5,
npostProcessingPathMax=1,
seed=444,
nrealization=nreal)
deesse_output = gn.deesseinterface.deesseRun(deesse_input)
# Retrieve the realization
sim = deesse_output['sim']
# Display
plt.figure(figsize=(5,5))
gn.imgplot.drawImage2D(sim[0], cmap=gn.customcolors.cmapB2W,
title='Sim. using local target proportions')
plt.show()
###Output
_____no_output_____ |
with_numpyro/Intro2Numpyro.ipynb | ###Markdown
Basics of Numpyro for Bayesian Inference with MCMC
###Code
import math
import os
import arviz as az
import matplotlib.pyplot as plt
import pandas as pd
from causalgraphicalmodels import CausalGraphicalModel
from IPython.display import Image, set_matplotlib_formats
from matplotlib.patches import Ellipse, transforms
import jax.numpy as jnp # numpy, superfast
from jax import ops, random, vmap
from jax.scipy.special import expit
import numpy as onp # the numpy, original
import numpyro
import numpyro as npr
import numpyro.distributions as dist
from numpyro.diagnostics import effective_sample_size, print_summary
from numpyro.infer import MCMC, NUTS, Predictive
if "SVG" in os.environ:
%config InlineBackend.figure_formats = ["svg"]
az.style.use("arviz-darkgrid")
numpyro.set_host_device_count(4)
import my_numpyro as npr  # local helper module (not the numpyro package); note this rebinds the npr alias used above
from my_numpyro import mcmc_sample
def model(): pass
sam = npr.Sampler(model)
sam.fit({'x': [1]})
###Output
_____no_output_____
###Markdown
Distributions
###Code
b = dist.Bernoulli(0.3)
random_samples = b.sample(random.PRNGKey(0), (1000,))
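# Quick sanity check (my addition): the empirical mean of the draws should be close to p = 0.3
random_samples.mean()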
with numpyro.handlers.seed(rng_seed=0):
x = npr.sample('x', dist.Bernoulli(.3))
x
x.item()
x * 100
type(x)
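# A minimal end-to-end MCMC sketch (my addition, not from the original notebook), using the
# NUTS kernel and MCMC runner imported above: infer a Bernoulli probability p from simulated flips.
def coin_model(flips=None):
    p = numpyro.sample('p', dist.Beta(1.0, 1.0))  # uniform prior on p
    numpyro.sample('obs', dist.Bernoulli(p), obs=flips)

flips = dist.Bernoulli(0.3).sample(random.PRNGKey(1), (500,))
mcmc = MCMC(NUTS(coin_model), num_warmup=500, num_samples=1000)
mcmc.run(random.PRNGKey(2), flips=flips)
mcmc.print_summary()  # posterior mean of p should land near 0.3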
###Output
_____no_output_____ |
jupyter notebook/Dota2.ipynb | ###Markdown
Imports
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import metrics
from io import StringIO
from csv import writer
###Output
_____no_output_____
###Markdown
Read in csv files
###Code
matches = pd.read_csv('./Data/match.csv', index_col=0)
players = pd.read_csv('./Data/players.csv')
ability_ids = pd.read_csv('./Data/ability_ids.csv')
ability_upgrades = pd.read_csv('./Data/ability_upgrades.csv')
hero_names = pd.read_csv('./Data/hero_names.csv')
cluster_regions = pd.read_csv('./Data/cluster_regions.csv')
test_labels = pd.read_csv('./Data/test_labels.csv', index_col=0)
test_players = pd.read_csv('./Data/test_player.csv')
train_labels = matches['radiant_win'].astype(int)
###Output
_____no_output_____
###Markdown
Json Exports
###Code
hero_names.to_json(path_or_buf='../json/heroes.json', orient='records')
cluster_regions.to_json(path_or_buf='../json/clusters.json', orient='records')
###Output
_____no_output_____
###Markdown
Data info Hero InfoMost and least popular heroes
###Code
num_heroes = len(hero_names)
plt.hist(players['hero_id'], num_heroes)
plt.show()
hero_counts = players['hero_id'].value_counts().rename_axis('hero_id').reset_index(name='num_matches')
pd.merge(hero_counts, hero_names, on='hero_id')
###Output
_____no_output_____
###Markdown
Server InfoWhere the most and least games are played
###Code
plt.hist(matches['cluster'], bins=np.arange(matches['cluster'].min(), matches['cluster'].max()+1))
plt.show()
cluster_counts = matches['cluster'].value_counts().rename_axis('cluster').reset_index(name='num_matches')
pd.merge(cluster_counts, cluster_regions, on='cluster')
short_players = players.iloc[:, :11]
short_players.insert(short_players.shape[1], 'win', value=False)
short_players
# short_matches = matches.iloc[:1000]
for index, row in matches.iterrows():
offset = 10 * index
short_players.at[0 + offset, 'win'] = row.radiant_win
short_players.at[1 + offset, 'win'] = row.radiant_win
short_players.at[2 + offset, 'win'] = row.radiant_win
short_players.at[3 + offset, 'win'] = row.radiant_win
short_players.at[4 + offset, 'win'] = row.radiant_win
short_players.at[5 + offset, 'win'] = not row.radiant_win
short_players.at[6 + offset, 'win'] = not row.radiant_win
short_players.at[7 + offset, 'win'] = not row.radiant_win
short_players.at[8 + offset, 'win'] = not row.radiant_win
short_players.at[9 + offset, 'win'] = not row.radiant_win
# print(index)
short_players.head(20)
short_players.tail()
hero_match_wins = short_players.groupby(by='hero_id').sum()['win']
hero_match_count = players['hero_id'].value_counts().rename_axis('hero_id').reset_index(name='total_matches')
hero_match_count = hero_match_count.merge(hero_match_wins, on='hero_id')
hero_match_count
hero_match_count['win_percent'] = hero_match_count['win'] / hero_match_count['total_matches'] * 100
hero_match_count
hero_match_count = pd.merge(hero_match_count, hero_names, on='hero_id')
hero_match_count
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 8
plt.rcParams["figure.figsize"] = fig_size
x = hero_match_count['total_matches']
y = hero_match_count['win_percent']
labels = hero_match_count['localized_name']
plt.scatter(x, y)
plt.title('Win Percent vs Number of Games Played', fontsize=18)
plt.xlabel('Number of Games Played', fontsize=18)
plt.ylabel('Percent Won', fontsize=18)
for i, label in enumerate(labels):
plt.annotate(label, (x[i], y[i]))
###Output
_____no_output_____
###Markdown
Data cleaningWe start with an empty list of DataFrames and add to it as we create DataFrames of bad match ids. In the end we combine all the DataFrames and remove their match ids from the Matches DataFrame.
###Code
dfs_bad_matches = []
###Output
_____no_output_____
###Markdown
Add missing hero idFor some reason there is no hero for hero_id 24, so we insert blank hero data
###Code
hero_names.loc['24'] = ['None', 24, 'None']
###Output
_____no_output_____
###Markdown
Abandonsremove games where a player has abandoned the match
###Code
abandoned_matches = players[players.leaver_status > 1][['match_id']]
abandoned_matches = abandoned_matches.drop_duplicates().reset_index(drop=True)
dfs_bad_matches.append(abandoned_matches)
abandoned_matches
###Output
_____no_output_____
###Markdown
Missing Hero idremove games where a player is not assigned a hero id but didn't get flagged for an abandon
###Code
player_no_hero = players[players.hero_id == 0][['match_id']].reset_index(drop=True)
dfs_bad_matches.append(player_no_hero)
player_no_hero
###Output
_____no_output_____
###Markdown
Game length (short)remove games we deem too short (< 15 min)
###Code
short_length = 15 * 60
short_matches = matches[matches.duration < short_length].reset_index()[['match_id']]
dfs_bad_matches.append(short_matches)
short_matches
###Output
_____no_output_____
###Markdown
Game length (long)Next we want to get matches with too long a duration (> 90 min)
###Code
long_length = 90 * 60
long_matches = matches[matches.duration > long_length].reset_index()[['match_id']]
dfs_bad_matches.append(long_matches)
long_matches
###Output
_____no_output_____
###Markdown
Combine all our lists of bad matchescombine matches and create a filtered match dataframe with only good matches
###Code
bad_match_ids = pd.concat(dfs_bad_matches, ignore_index=True).drop_duplicates()
bad_match_ids
filtered_matches = matches.drop(bad_match_ids.match_id)
filtered_matches
###Output
_____no_output_____
###Markdown
Convert our match listWe take our good matches and find the heroes in those matches. Each hero has a column for each team, with a boolean value indicating whether it was picked by that team (similar to a bitmask). Example: {r_1: False, r_2: False, r_3: True ... r_113: False, d_1: True, d_2: False, d_3: False ... d_113: False, r_win: True}. This is then exported to a csv.
###Code
r_names = []
d_names = []
for index, row in hero_names.iterrows():
r_name = 'r_' + str(row['hero_id'])
d_name = 'd_' + str(row['hero_id'])
r_names.append(r_name)
d_names.append(d_name)
columns = (r_names + d_names + ['r_win'])
d_offset = 113
r_wins = 113 + 113
new_row = [False] * (113 + 113 + 1)
# test_players = players.iloc[:500, :]
# test_matches = matches.iloc[:50, :]
columns = (r_names + d_names + ['r_win'])
new_match_list = pd.DataFrame(data=None, columns=columns)
output = StringIO()
csv_writer = writer(output)
for index, row in filtered_matches.iterrows():
match_players = players.loc[players['match_id'] == index]
if match_players.shape[0] == 10:
r_1 = match_players[match_players['player_slot'] == 0].iloc[0]['hero_id']
r_2 = match_players[match_players['player_slot'] == 1].iloc[0]['hero_id']
r_3 = match_players[match_players['player_slot'] == 2].iloc[0]['hero_id']
r_4 = match_players[match_players['player_slot'] == 3].iloc[0]['hero_id']
r_5 = match_players[match_players['player_slot'] == 4].iloc[0]['hero_id']
d_1 = match_players[match_players['player_slot'] == 128].iloc[0]['hero_id']
d_2 = match_players[match_players['player_slot'] == 129].iloc[0]['hero_id']
d_3 = match_players[match_players['player_slot'] == 130].iloc[0]['hero_id']
d_4 = match_players[match_players['player_slot'] == 131].iloc[0]['hero_id']
d_5 = match_players[match_players['player_slot'] == 132].iloc[0]['hero_id']
new_row[r_1 - 1] = True
new_row[r_2 - 1] = True
new_row[r_3 - 1] = True
new_row[r_4 - 1] = True
new_row[r_5 - 1] = True
new_row[d_1 + d_offset - 1] = True
new_row[d_2 + d_offset - 1] = True
new_row[d_3 + d_offset - 1] = True
new_row[d_4 + d_offset - 1] = True
new_row[d_5 + d_offset - 1] = True
# add if radiant win
new_row[r_wins] = row['radiant_win']
# append new row onto match list
csv_writer.writerow(new_row)
new_row[r_1 - 1] = False
new_row[r_2 - 1] = False
new_row[r_3 - 1] = False
new_row[r_4 - 1] = False
new_row[r_5 - 1] = False
new_row[d_1 + d_offset - 1] = False
new_row[d_2 + d_offset - 1] = False
new_row[d_3 + d_offset - 1] = False
new_row[d_4 + d_offset - 1] = False
new_row[d_5 + d_offset - 1] = False
# gives us some idea about how far into the data we are
if index % 1000 == 0:
print(index / 1000)
output.seek(0)
new_match_list = pd.read_csv(output, names=columns)
new_match_list.to_csv('./Data/new_match_list_filtered.csv', index=False)
r_names = []
d_names = []
for slot in range(1, 6):
r_name = 'r_' + str(slot)
d_name = 'd_' + str(slot)
r_names.append(r_name)
d_names.append(d_name)
columns = (r_names + d_names + ['r_win'])
new_row = [-1] * (5 + 5 + 1)
# test_players = players.iloc[:500, :]
# test_matches = matches.iloc[:50, :]
columns = (r_names + d_names + ['r_win'])
new_match_list = pd.DataFrame(data=None, columns=columns)
output = StringIO()
csv_writer = writer(output)
for index, row in filtered_matches.iterrows():
match_players = players.loc[players['match_id'] == index]
if match_players.shape[0] == 10:
new_row[0] = match_players[match_players['player_slot'] == 0].iloc[0]['hero_id']
new_row[1] = match_players[match_players['player_slot'] == 1].iloc[0]['hero_id']
new_row[2] = match_players[match_players['player_slot'] == 2].iloc[0]['hero_id']
new_row[3] = match_players[match_players['player_slot'] == 3].iloc[0]['hero_id']
new_row[4] = match_players[match_players['player_slot'] == 4].iloc[0]['hero_id']
new_row[5] = match_players[match_players['player_slot'] == 128].iloc[0]['hero_id']
new_row[6] = match_players[match_players['player_slot'] == 129].iloc[0]['hero_id']
new_row[7] = match_players[match_players['player_slot'] == 130].iloc[0]['hero_id']
new_row[8] = match_players[match_players['player_slot'] == 131].iloc[0]['hero_id']
new_row[9] = match_players[match_players['player_slot'] == 132].iloc[0]['hero_id']
# add if radiant win
new_row[10] = row['radiant_win']
# append new row onto match list
csv_writer.writerow(new_row)
# gives us some idea about how far into the data we are
if index % 1000 == 0:
print(index / 1000)
output.seek(0)
new_match_list = pd.read_csv(output, names=columns)
new_match_list.to_csv('./Data/new_match_list_filtered_ids.csv', index=False)
###Output
0.0
1.0
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
10.0
11.0
12.0
13.0
14.0
15.0
17.0
18.0
19.0
20.0
21.0
22.0
24.0
25.0
26.0
27.0
28.0
29.0
30.0
31.0
33.0
34.0
35.0
36.0
37.0
38.0
39.0
40.0
41.0
42.0
43.0
44.0
45.0
46.0
47.0
48.0
49.0
###Markdown
Calculate heroes' total games and wins (not needed anymore?). Only run this cell if you have time to wait.
###Code
# grouped_players = players.groupby('match_id')
# grouped_players.describe()
# pd.DataFrame(columns=[hero_names['hero_id']])
short_players = players[['match_id', 'hero_id', 'player_slot']]
test_players = short_players.iloc[:50, :]
# test_players
grouped_players = test_players.groupby('match_id')
for match_id, item in grouped_players:
print(grouped_players.get_group(match_id), "\n\n")
num_heroes = len(hero_names) + 1
heroes_wins = np.zeros((num_heroes, num_heroes), dtype=int)
heroes_total_games = np.zeros((num_heroes, num_heroes), dtype=int)
test_matches = matches.iloc[:10, :]
for index, row in matches.iterrows():
match_players = players.loc[players['match_id'] == index]
# print(row['radiant_win'])
# print(test_match_players, "\n\n")
for index2, row2 in match_players.iterrows():
for index3, row3 in match_players.iterrows():
if ((row2['player_slot'] ^ row3['player_slot']) > 7):
# print((row2['player_slot'] ^ row3['player_slot']))
# if(row2['hero_id'] == 39):
# print(row2['hero_id'])
# print(row3['hero_id'])
# print()
heroes_total_games[row2['hero_id'], row3['hero_id']] += 1
pd.set_option("display.max_rows", None, "display.max_columns", None)
# heroes_total_games = np.delete(heroes_total_games, 0, 0)
# heroes_total_games = np.delete(heroes_total_games, 0, 1)
# heroes_total_games = np.delete(heroes_total_games, 24, 1)
# heroes_total_games_df = pd.DataFrame(heroes_total_games, columns=[hero_names['hero_id']])
heroes_total_games_df = pd.DataFrame(heroes_total_games)
heroes_total_games_df
matches.head()
players.head()
###Output
_____no_output_____ |
Model/0924-card.ipynb | ###Markdown
###Code
import pandas as pd
df = pd.read_csv('credit_cards_dataset.csv')
df.columns
type(df)
array = df.values
type(array)
array.shape
X = array[:,0:24]
X.shape
Y = array[:,24]  # last column is the prediction target
Y[:3]
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
# feature extraction: recursive feature elimination, keeping the 10 strongest features
model = LogisticRegression()
rfe = RFE(model, 10)
fit = rfe.fit(X, Y)
print("Num Features: {}".format(fit.n_features_))
print("Selected Features:{}".format(fit.support_))
print("Feature Ranking: {}".format(fit.ranking_))
df.corr()
import matplotlib.pyplot as plt
import seaborn as sns
plt.subplots(figsize=(26,20))
corr = df.corr()
sns.heatmap(corr,annot=True)
plt.show()
###Output
_____no_output_____ |
08_Model_Development/08_M1_H1_KNN_example_teacher.ipynb | ###Markdown
Build a KNN model
###Code
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
###Output
_____no_output_____
###Markdown
Load the glass classification data
###Code
## Load the glass classification data
df = pd.read_csv('glass_type.csv')
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 214 entries, 0 to 213
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 RI 214 non-null float64
1 Na 214 non-null float64
2 Mg 214 non-null float64
3 Al 214 non-null float64
4 Si 214 non-null float64
5 K 214 non-null float64
6 Ca 214 non-null float64
7 Ba 214 non-null float64
8 Fe 214 non-null float64
9 Type 214 non-null int64
dtypes: float64(9), int64(1)
memory usage: 16.8 KB
###Markdown
Split into training and test datasets
###Code
from sklearn.model_selection import train_test_split
# Get the features (drop the y label column)
X = df.drop(labels=['Type'] ,axis=1)
# Get the y label
y = df['Type'].values
X_train , X_test , y_train , y_test = train_test_split(X, y ,test_size=0.2 , random_state=5)
print(f'Length of training dataset: {len(X_train)} samples')
print(f'Length of testing dataset: {len(X_test)} samples')
###Output
Length of training dataset: 171 samples
Length of testing dataset: 43 samples
###Markdown
Model training
###Code
# Train the model
from sklearn.neighbors import KNeighborsClassifier
knnModel = KNeighborsClassifier(n_neighbors=2)
knnModel.fit(X_train, y_train)
# Print the model accuracy
print('Training set accuracy: ', knnModel.score(X_train, y_train))
print('Test set accuracy: ',knnModel.score(X_test, y_test))
# Predict the test set with the trained model
y_pred = knnModel.predict(X_test)
###Output
Training set accuracy:  0.8421052631578947
Test set accuracy:  0.7441860465116279
###Markdown
Model evaluation results
###Code
# Print the classification report
from sklearn.metrics import classification_report
print('Classification Report:')
print(classification_report(y_test, y_pred))
print('-'*50)
# Print the confusion matrix
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
cm = confusion_matrix(y_test, y_pred)
print('Confusion Matrix:')
print(cm)
###Output
Classification Report:
precision recall f1-score support
1 0.71 0.75 0.73 16
2 0.80 0.80 0.80 15
3 0.00 0.00 0.00 2
5 1.00 1.00 1.00 1
6 1.00 1.00 1.00 2
7 1.00 0.71 0.83 7
accuracy 0.74 43
macro avg 0.75 0.71 0.73 43
weighted avg 0.77 0.74 0.76 43
--------------------------------------------------
Confusion Matrix:
[[12 1 3 0 0 0]
[ 3 12 0 0 0 0]
[ 2 0 0 0 0 0]
[ 0 0 0 1 0 0]
[ 0 0 0 0 2 0]
[ 0 2 0 0 0 5]]
###Markdown
Plot scatter plots of the dataset
###Code
# Build the training set DataFrame (X features plus the y label)
df_train = pd.DataFrame(X_train)
df_train['Type'] = y_train
# Build the test set DataFrame (X features plus the y label)
df_test = pd.DataFrame(X_test)
df_test['Type'] = y_test
# Scatter plot of training set features "RI" and "Na"
sns.lmplot("RI", "Na", hue='Type', data=df_train, fit_reg=False)
# Scatter plot of test set features "RI" and "Na"
sns.lmplot("RI", "Na", hue='Type', data=df_test, fit_reg=False)
###Output
_____no_output_____
###Markdown
Model cross-validation
###Code
from sklearn.model_selection import cross_val_score
# creating list of K for KNN (K from 1 to 20)
k_list = list(range(1, 20, 1))
# creating list of cv scores
cv_scores = []
# perform 10-fold cross validation for each K
for k in k_list:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X_train, y_train, cv=10, scoring='accuracy')
cv_scores.append(scores.mean())
# changing to misclassification error
MSE = [1 - x for x in cv_scores]
# show each K's KNN result score
plt.figure()
plt.figure(figsize=(15,10))
plt.title('The optimal number of neighbors', fontsize=20, fontweight='bold')
plt.xlabel('Number of Neighbors K', fontsize=15)
plt.ylabel('Accuracy', fontsize=15)
sns.set_style("whitegrid")
plt.plot(k_list, cv_scores)
plt.show()
plt.figure()
plt.figure(figsize=(15,10))
plt.title('The optimal number of neighbors', fontsize=20, fontweight='bold')
plt.xlabel('Number of Neighbors K', fontsize=15)
plt.ylabel('Misclassification Error', fontsize=15)
sns.set_style("whitegrid")
plt.plot(k_list, MSE)
plt.show()
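# Added sketch (not in the original notebook): the optimal K can also be read off directly
# as the value with the highest mean cross-validation accuracy.
best_k = k_list[cv_scores.index(max(cv_scores))]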
###Output
_____no_output_____ |
dmu26/dmu26_XID+MIPS_ELAIS-N1/XID+MIPS_prior_SERVS.ipynb | ###Markdown
This notebook uses all the raw data from the masterlist, maps, PSF and relevant MOCs to create the XID+ prior object and the relevant tiling scheme. Read in MOCs: the selection function required is the main MOC associated with the masterlist. As the prior for XID+ is based on IRAC-detected sources coming from two different surveys at different depths (SERVS and SWIRE), I will split the XID+ run into two different runs. Here we use the SERVS depth.
###Code
import numpy as np
import pymoc
import xidplus

Sel_func=pymoc.MOC()
Sel_func.read('../MOCs/holes_ELAIS-N1_irac1_O16_MOC.fits')
SERVS_MOC=pymoc.MOC()
SERVS_MOC.read('../MOCs/DF-SERVS_ELAIS-N1_MOC.fits')
Final=Sel_func.intersection(SERVS_MOC)
###Output
_____no_output_____
###Markdown
Read in Masterlist. The next step is to read in the Masterlist and select only sources that are detected in the mid-infrared and in at least one other wavelength domain (i.e. optical or NIR). This will remove most of the objects in the catalogue that are artefacts. We can do this by using the `flag_optnir_det` flag and selecting sources that have a binary value of $\geq 5$.
###Code
from astropy.io import fits
masterfile='master_catalogue_elais-n1_20170627.fits.gz'
masterlist=fits.open('../'+masterfile)
good=masterlist[1].data['flag_optnir_det']>=5
###Output
_____no_output_____
###Markdown
Create uninformative (i.e. conservative) upper and lower limits based on IRAC fluxes. As the default flux prior for XID+ is a uniform distribution, it makes sense to set reasonable upper and lower 24 micron flux limits based on the longest-wavelength IRAC flux available. For the lower limit I take IRAC/500.0 and for the upper limit I take IRAC×500.
###Code
MIPS_lower=np.full(good.sum(),0.0)
MIPS_upper=np.full(good.sum(),1E5)
for i in range(0,good.sum()):
if masterlist[1].data['flag_irac4'][good][i]>0:
MIPS_lower[i]=masterlist[1].data['f_irac4'][good][i]/500.0
MIPS_upper[i]=masterlist[1].data['f_irac4'][good][i]*500.0
elif masterlist[1].data['flag_irac3'][good][i]>0:
MIPS_lower[i]=masterlist[1].data['f_irac3'][good][i]/500.0
MIPS_upper[i]=masterlist[1].data['f_irac3'][good][i]*500.0
elif masterlist[1].data['flag_irac2'][good][i]>0:
MIPS_lower[i]=masterlist[1].data['f_irac2'][good][i]/500.0
MIPS_upper[i]=masterlist[1].data['f_irac2'][good][i]*500.0
elif masterlist[1].data['flag_irac1'][good][i]>0:
MIPS_lower[i]=masterlist[1].data['f_irac1'][good][i]/500.0
MIPS_upper[i]=masterlist[1].data['f_irac1'][good][i]*500.0
###Output
_____no_output_____
###Markdown
Read in Map. We are now ready to read in the MIPS map.
###Code
MIPS_Map=fits.open('./wp4_elais-n1_mips24_map_v1.0.fits.gz')
###Output
_____no_output_____
###Markdown
Read in PSF
###Code
MIPS_psf=fits.open('/Users/pdh21/astrodata/PSF_normalisation/notebooks/dmu17_MIPS_PSF_ELAIS-N1_20170629.fits')
centre=np.long((MIPS_psf[1].header['NAXIS1']-1)/2)
radius=20
import pylab as plt
plt.imshow(np.log10(MIPS_psf[1].data[centre-radius:centre+radius+1,centre-radius:centre+radius+1]/np.max(MIPS_psf[1].data[centre-radius:centre+radius+1,centre-radius:centre+radius+1])))
plt.colorbar()
###Output
_____no_output_____
###Markdown
Set XID+ prior class
###Code
# Build the XID+ prior from the MIPS map, attach the IRAC-selected catalogue, then set the PSF
prior_MIPS = xidplus.prior(MIPS_Map[1].data, MIPS_Map[2].data, MIPS_Map[0].header, MIPS_Map[1].header, moc=Final)
prior_MIPS.prior_cat(masterlist[1].data['ra'][good], masterlist[1].data['dec'][good], masterfile,
                     flux_lower=MIPS_lower, flux_upper=MIPS_upper, ID=masterlist[1].data['help_id'][good])
prior_MIPS.set_prf(MIPS_psf[1].data[centre-radius:centre+radius+1, centre-radius:centre+radius+1]/1.0E6,
                   np.arange(0, 41/2.0, 0.5), np.arange(0, 41/2.0, 0.5))
###Output
_____no_output_____
###Markdown
Calculate tiles. As fitting the whole map would be too computationally expensive, I split it based on HEALPix pixels. For MIPS, the optimum order is 11. So that I don't have to read the master prior based on the whole map into memory each time (which requires a lot more memory), I also create another layer of HEALPix pixels at the lower order of 7.
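As a rough illustration of the two tiling layers (a standalone sketch with made-up coordinates, not part of the pipeline itself), healpy can report which order-11 tile and which order-7 super-tile contain a given source:
###Code
import numpy as np
import healpy as hp

ra, dec = 242.5, 55.0                                     # illustrative ELAIS-N1-like position
theta, phi = np.radians(90.0 - dec), np.radians(ra)
tile_small = hp.ang2pix(2 ** 11, theta, phi, nest=True)   # order-11 tile used for fitting
tile_large = hp.ang2pix(2 ** 7, theta, phi, nest=True)    # order-7 super-tile used for grouping
###Output
_____no_output_____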
###Code
import pickle
#from moc, get healpix pixels at a given order
from xidplus import moc_routines
order=11
tiles=moc_routines.get_HEALPix_pixels(order,prior_MIPS.sra,prior_MIPS.sdec,unique=True)
order_large=7
tiles_large=moc_routines.get_HEALPix_pixels(order_large,prior_MIPS.sra,prior_MIPS.sdec,unique=True)
print('----- There are '+str(len(tiles))+' tiles required for input catalogue and '+str(len(tiles_large))+' large tiles')
output_folder='./'
outfile=output_folder+'Master_prior.pkl'
with open(outfile, 'wb') as f:
pickle.dump({'priors':[prior_MIPS],'tiles':tiles,'order':order,'version':xidplus.io.git_version()},f)
outfile=output_folder+'Tiles.pkl'
with open(outfile, 'wb') as f:
pickle.dump({'tiles':tiles,'order':order,'tiles_large':tiles_large,'order_large':order_large,'version':xidplus.io.git_version()},f)
raise SystemExit()
###Output
_____no_output_____ |
CS1PX-resources/Cycle03/Cycle03.ipynb | ###Markdown
Cycle 3 Exercises - Reading from Files, Errors and Exceptions. Aims and Objectives: * Practice reading data from files * Practice checking for malformed input and throwing/catching exceptions. This week's exercises: This week you'll be building the biggest piece of code we've worked on yet - a birthday book that reads information from a file, and, if you have time to work on the last task, lets us retrieve information from the command line. You will probably want to use IDLE or another Python interpreter to work on the tasks this week. Resources: Chapters 13 and 19 of How to Think Like a Computer Scientist contain information on files and exceptions, and you worked on file reading in CS1CT. Task 1 - Data structure and processing: The idea of this exercise is to store people's birthdays and produce reminders of birthdays that are coming up. A birthday consists of a month and a date, which can be represented by a dictionary such as `{ "month":"Sep", "day":17 }`. The birthday book is a dictionary in which the keys are people's names and the values are birthdays, with each birthday represented as a dictionary as above. I want you to define a number of functions for dealing with a birthday book. Write all of this code in a file called birthday.py. 1. Set up a hard-coded sample birthday book dictionary so that you can test out the functions you will write. Here is a sample of a dictionary that has only my birthday in it: `birthdayBook = {"Jess" : {"month": "Dec", "day": 10}}`. Create your own, or add more to this one.
###Code
birthdayBook = {"Jess": {"month": "December", "day": 10},
"Ethan": {"month": "April", "day": 29},
"Stefani Joanne Angelina Germanotta": {"month": "March", "day": 28}
}
###Output
_____no_output_____
###Markdown
2. Define a function which, given a person's name, prints his or her birthday. Your function should take both the birthday book and the name as arguments.
###Code
def get_birthday(choice, dates):
    # Use the 'dates' parameter rather than the global dictionary
    if choice in dates.keys():
        print("{}'s birthday is {} {}.".format(choice, dates[choice]["month"], dates[choice]["day"]))
    else:
        print("Unfortunately, we don't know {}'s birthday.".format(choice))
birthdayBook = {"Jess": {"month": "December", "day": 10},
"Ethan": {"month": "April", "day": 29},
"Stefani Joanne Angelina Germanotta": {"month": "March", "day": 28}
}
get_birthday("Ethan", birthdayBook)
###Output
Ethan's birthday is April 29.
###Markdown
3. Define a function which, given a month, prints a list of all the people who have birthdays in that month, with the dates.
###Code
def get_all_birthdays(birthdayBook):
for x in birthdayBook.keys():
get_birthday(x, birthdayBook)
birthdayBook = {"Jess": {"month": "December", "day": 10},
"Ethan": {"month": "April", "day": 29},
"Stefani Joanne Angelina Germanotta": {"month": "March", "day": 28}
}
get_all_birthdays(birthdayBook)
###Output
Jess's birthday is December 10.
Ethan's birthday is April 29.
Stefani Joanne Angelina Germanotta's birthday is March 28.
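###Markdown
Note that the helper above prints every entry rather than filtering by month; a version matching the task wording might look like this (it reuses the same dictionary layout as above):
###Code
def get_birthdays_in_month(month, book):
    # Print everyone whose birthday falls in the given month, with the day
    for name, date in book.items():
        if date["month"] == month:
            print("{}'s birthday is {} {}.".format(name, date["month"], date["day"]))

get_birthdays_in_month("April", birthdayBook)
###Output
_____no_output_____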
###Markdown
Task 2 - Reading information from a file. Now we are going to read information about the birthdays from a file. **We will add error-checking later.** The aim of this task is to define a function getBirthdays which takes a filename as a parameter and reads birthdays from the file, storing them in a dictionary which should also be a parameter of the function. The first line of the function definition should therefore be `def getBirthdays(fileName, book):`. The file should contain a number of lines with one birthday per line, in the following format: `John,Mar,23`, `Susan,Feb,16`, and so on. The file `birthdays.txt` (on Moodle) contains some data that you can use for testing; you can also create your own files using the normal Python editor. For this task, don't worry *yet* about handling errors: assume that the file exists, that it has the correct format, that every line gives a valid date, etc. The following points might be useful: * remember to open the file and then call methods to read data * the easiest way to read data from this file is to use the readline function, but note that it gives you a string with a newline character at the end, so you will need to discard extra whitespace (you may want to look up the strip() function) * remember the `split()` function from the string module: the call `line.split(",")` will be useful; it converts a string into a list * test your function (how should you do this?). Once you have written this code to read in a file, read in the birthdays from the file and try out your functions from Task 1.
###Code
def get_birthdays(filename, book):
with open(filename, 'r') as file:
for line in file:
line = line.strip('\n')
key, month, day = line.split(',')
book[key] = month, day
print(book)
###Output
_____no_output_____
###Markdown
Task 3 - Handling errors. Now we will try to make our code more robust and deal with malformed input files. In the lecture we talked about both exceptions and more hand-written error checks using if statements. Be sure to try out both in this task. There are many things that could go wrong in this program! The filename might be for a file that does not exist, the lines in a file might be missing commas, and the functions you write as part of Task 1 might be given faulty input (months that don't exist, etc.). Modify the birthday book program so that as many errors as you can think of are detected. In some cases, for example trying to open a non-existent file, you should handle the exception raised by the built-in Python function. In other cases, you might like to process other built-in exceptions or do input checks using if statements (for example, to check for valid months or valid dates). Advanced option: you might even like to raise and handle your own exceptions.
###Code
## NB I haven't dealt with many exceptions here, just some examples!
def get_birthdays(filename, book):
with open(filename, 'r') as file:
for line in file:
line = line.strip('\n') # Get rid of whitespace
key, month, day = line.split(',') # Separate name, month and day
book[key] = month, day # Save to the dictionary
if book == {}: # I.e. file was empty
print("Sorry, I couldn't find any birthday records!")
file.close()
return book
def write_birthday(filename, name, month, day):
with open(filename, 'a') as file:
file.write("{},{},{}\n".format(name, month, day))
file.close()
def main():
book = {}
file = "birthday_records.txt"
book = get_birthdays(file, book)
if book == {}:
print("I don't have any birthday records.")
else:
print("I already know the following birthdays:")
for i in book:
print(" •", i)
print()
while True:
key = input("Enter a name to look up that person's birthdate (or q to quit):\n")
if key == "q":
break
try:
print('{}\'s birthday is {} {}.'.format(key, book[key][0], book[key][1]))
except KeyError:
print('Sorry, I don\'t have {}\'s birthday.'.format(key))
answer = input('Would you like to tell me {}\'s birthday? (y/n)'.format(key))
if answer.lower() == 'y':
date = input("Great! Please enter their birthdate, in the form \'Jan,1\'")
month, day = date.split(",")
write_birthday(file, key, month, day)
print("Got it, thanks! {}\'s birthday has now been added.".format(key))
book = get_birthdays(file, book)
continue
else:
print("No problem!")
continue
if __name__ == "__main__":
main()
###Output
I already know the following birthdays:
• Ethan Kelly
• Jess Enright
• Stefani Joanne Angelina Germanotta
• Ariana Grande
• Alan Turing
• Tim Berners-Lee
• Grace Hopper
• Ada Lovelace
###Markdown
*Extra Challenge if you have time*: Task 4 - A command-line menu. The task now is to combine the functions you have implemented so far into a complete application that takes user input from the command line. Write a program which repeatedly asks the user to enter a command, asks for further details if necessary, and carries out the corresponding operation. For example, one command could be "read"; in this case the program should ask the user to enter a filename, and then read birthdays from that file into the birthday book. There should be a textual menu with a command for each of the operations from Tasks 1 and 2, as well as a command "quit" which terminates the program. You might use a while loop to repeatedly ask for input so long as the input is not 'quit'.
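A minimal sketch of such a command loop is shown below; the command names are illustrative, and it assumes the Task 3 version of `get_birthdays` (which stores (month, day) pairs and returns the dictionary):
###Code
# Minimal command-loop sketch (illustrative command names, not a full solution)
book = {}
while True:
    command = input("Enter a command (read / lookup / quit): ").strip().lower()
    if command == "quit":
        break
    elif command == "read":
        filename = input("Filename: ")
        try:
            book = get_birthdays(filename, book)
        except FileNotFoundError:
            print("Sorry, I couldn't find that file.")
    elif command == "lookup":
        name = input("Name: ")
        if name in book:
            month, day = book[name]
            print("{}'s birthday is {} {}.".format(name, month, day))
        else:
            print("Sorry, I don't have {}'s birthday.".format(name))
    else:
        print("Unknown command, please try again.")
###Output
_____no_output_____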
###Code
# See previous solution...
###Output
_____no_output_____ |
05_cli.ipynb | ###Markdown
Simulation interface > Main simulation class and command line client. Create a class which: * parses the configuration file * gets the channels that the user wants to simulate * loops through these and returns/writes the maps
###Code
# default_exp cli
%load_ext autoreload
%autoreload 2
# export
import os
import toml
import healpy as hp
import numpy as np
import h5py
from pathlib import Path
import logging as log
from datetime import date
from s4_design_sim_tool.core import get_telescope
from s4_design_sim_tool import __version__
from s4_design_sim_tool.foregrounds import load_sky_emission
from s4_design_sim_tool.atmosphere import load_atmosphere, get_telecope_years
from s4_design_sim_tool.noise import load_noise
from s4_design_sim_tool.hitmap_wcov import load_hitmap_wcov
hp.disable_warnings()
# exports
import hashlib
def md5sum_string(string):
return hashlib.md5(string.encode("utf-8")).hexdigest()
def md5sum_file(filename):
"""Compute md5 checksum of the contents of a file"""
return md5sum_string(open(filename, "r").read())
# export
s4_channels = {
"LAT": [
"ULFPL1",
"LFL1",
"LFPL1",
"LFL2",
"LFPL2",
"MFPL1",
"MFL1",
"MFL2",
"MFPL2",
"HFL1",
"HFPL1",
"HFL2",
"HFPL2",
],
"SAT": ["LFS1", "LFS2", "MFLS1", "MFHS1", "MFLS2", "MFHS2", "HFS1", "HFS2"],
}
def parse_channels(channels):
"""Parse a comma separated list of channels or all or SAT/LAT into channel tag list"""
if channels in ["SAT", "LAT"]:
channels = s4_channels[channels]
elif channels in ["all", None]:
channels = s4_channels["SAT"] + s4_channels["LAT"]
elif isinstance(channels, str):
channels = channels.split(",")
return channels
###Output
_____no_output_____
###Markdown
Generate list of channels. This is hardcoded above so we don't need to read the instrument model for processing channels.
###Code
from s4_design_sim_tool.core import read_instrument_model
s4 = read_instrument_model()
exported_s4_channels = {}
for telescope, rows in zip(s4.group_by("telescope").groups.keys, s4.group_by("telescope").groups):
exported_s4_channels[telescope[0]] = list(rows["band"])
exported_s4_channels == s4_channels
assert parse_channels("LFL1") == ["LFL1"]
assert parse_channels(["LFL1"]) == ["LFL1"]
assert parse_channels("LFL1,LFL2") == ["LFL1", "LFL2"]
assert parse_channels("LAT") == ['ULFPL1',
'LFL1',
'LFPL1',
'LFL2',
'LFPL2',
'MFPL1',
'MFL1',
'MFL2',
'MFPL2',
'HFL1',
'HFPL1',
'HFL2',
'HFPL2']
assert len(parse_channels("all")) == 21
# exports
import collections.abc
def merge_dict(d1, d2):
"""
Modifies d1 in-place to contain values from d2. If any value
in d1 is a dictionary (or dict-like), *and* the corresponding
value in d2 is also a dictionary, then merge them in-place.
"""
for k, v2 in d2.items():
v1 = d1.get(k) # returns None if v1 has no value for this key
        if isinstance(v1, collections.abc.Mapping) and isinstance(v2, collections.abc.Mapping):
merge_dict(v1, v2)
else:
d1[k] = v2
def parse_config(*config_files):
"""Parse TOML configuration files
Later TOML configuration files override the previous ones,
dictionaries at the same level are merged.
Parameters
----------
config_files : one or more str
paths to TOML configuration files
Returns
-------
config : dict
parsed dictionary"""
config = toml.load(config_files[0])
for conf in config_files[1:]:
merge_dict(config, toml.load(conf))
return config
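# Example of the merge semantics (hypothetical dictionaries): later values win and nested
# tables are merged key-by-key rather than replaced wholesale, e.g.
#   base = {"sky": {"cmb": 1, "fg": 1}}
#   merge_dict(base, {"sky": {"fg": 0}})  ->  base == {"sky": {"cmb": 1, "fg": 0}}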
console_md5sum = !md5sum s4_design.toml
console_md5sum
assert md5sum_file("s4_design.toml") == console_md5sum[-1].split()[0]
# exports
class S4RefSimTool:
def __init__(self, config_filename, output_folder="output"):
"""Simulate CMB-S4 maps based on the experiment configuration
Parameters
----------
config : str or Path or List
CMB-S4 configuration stored in a TOML file
see for example s4_design.toml in the repository
It also supports multiple TOML files as a List, in this case
later files override configuration files of the earlier files.
check the `config` attribute to verify that the parsing behaved
as expected.
output_folder : str or Path
Output path
"""
self.config_filename = (
[config_filename]
if isinstance(config_filename, (str, Path))
else config_filename
)
self.config = parse_config(*self.config_filename)
self.output_filename_template = "cmbs4_KCMB_{telescope}-{band}_{site}_nside{nside}_{split}_of_{nsplits}.fits"
self.output_folder = Path(output_folder)
self.output_folder.mkdir(parents=True, exist_ok=True)
def run(self, channels="all", sites=["Pole", "Chile"]):
"""Run the simulation
Parameters
----------
channels : str or list[str]
list of channel tags, e.g.
* ["LFS1", "LFS2"] or
* "SAT" or "LAT"
* "all" (default)
site : list[str]
['Pole'] or ['Chile'], by default ["Pole", "Chile"]
"""
nsplits = self.config["experiment"].get("number_of_splits", 0)
if nsplits == 1:
nsplits = 0
assert (
nsplits < 8
), "We currently only have 7 independent realizations of atmosphere and noise"
conf_md5 = ",".join(map(md5sum_file, self.config_filename))
for site in sites:
for channel in parse_channels(channels):
if get_telecope_years(self.config, site, channel) == 0:
continue
telescope = get_telescope(channel)
subfolder = self.output_folder / f"{telescope}-{channel}_{site.lower()}"
subfolder.mkdir(parents=True, exist_ok=True)
log.info("Created output folder %s", str(subfolder))
for split in range(nsplits + 1):
nside = 512 if telescope == "SAT" else 4096
output_filename = self.output_filename_template.format(
nside=nside,
telescope=telescope,
band=channel,
site=site.lower(),
split=max(1, split), # split=0 is full mission and we want 1
nsplits=1 if split == 0 else nsplits,
)
if os.path.exists(subfolder / output_filename):
log.info("File %s already exists, SKIP", output_filename)
continue
if split == 0:
log.info(f"Simulate channel {channel} at {site}")
sky_emission = load_sky_emission(
self.config["sky_emission"], site, channel
)
output_map = np.zeros_like(sky_emission)
if self.config["experiment"].get("include_atmosphere", True):
output_map += load_atmosphere(
self.config, site, channel, realization=split
)
else:
log.info("Skip the atmosphere noise")
if self.config["experiment"].get("include_noise", True):
output_map += load_noise(
self.config, site, channel, realization=split
)
else:
log.info("Skip the instrument noise")
if split > 0:
output_map *= np.sqrt(nsplits)
output_map += sky_emission
# Use UNSEEN instead of nan for missing pixels
output_map[np.isnan(output_map)] = hp.UNSEEN
log.info(f"Writing {output_filename}")
noise_version = "1.0"
hp.write_map(
subfolder / output_filename,
output_map,
column_units="K_CMB",
extra_header=[
("SOFTWARE", "s4_design_sim_tool"),
("SW_VERS", __version__),
("SKY_VERS", "1.0"),
("ATM_VERS", "1.0"),
("NOI_VERS", noise_version),
("SITE", site),
("SPLIT", split),
("NSPLITS", nsplits),
("CHANNEL", channel),
("DATE", str(date.today())),
("CONFMD5", conf_md5),
],
coord="Q",
overwrite=True,
)
# only run of full mission and the first split
if (
split in [0, 1]
and self.config["experiment"].get("include_noise", True)
and self.config["experiment"].get("process_hitmap_wcov", True)
):
log.info(f"Loading hitmap and white noise covariance matrix")
if split == 0:
hitmap, wcov = load_hitmap_wcov(
self.config, site, channel, realization=0
)
else:
hitmap = np.round(hitmap / nsplits).astype(np.int64)
wcov = hp.ma(wcov) * nsplits
hitmap_filename = output_filename.replace("KCMB", "hitmap")
log.info(f"Writing {hitmap_filename}")
hp.write_map(
subfolder / hitmap_filename,
hitmap,
column_units="hits",
extra_header=[
("SOFTWARE", "s4_design_sim_tool"),
("SW_VERS", __version__),
("NOI_VERS", noise_version),
("SITE", site),
("SPLIT", split),
("NSPLITS", nsplits),
("CHANNEL", channel),
("DATE", str(date.today())),
("CONFMD5", conf_md5),
],
coord="Q",
overwrite=True,
)
wcov_filename = output_filename.replace("KCMB", "wcov")
log.info(f"Writing {wcov_filename}")
hp.write_map(
subfolder / wcov_filename,
wcov,
column_units="K_CMB**2",
extra_header=[
("SOFTWARE", "s4_design_sim_tool"),
("SW_VERS", __version__),
("NOI_VERS", noise_version),
("SITE", site),
("SPLIT", split),
("NSPLITS", nsplits),
("CHANNEL", channel),
("DATE", str(date.today())),
("CONFMD5", conf_md5),
],
coord="Q",
overwrite=True,
)
if split == 1:
del hitmap, wcov
# exports
def command_line_script(args=None):
import logging as log
log.basicConfig(level=log.INFO)
import argparse
parser = argparse.ArgumentParser(description="Run s4_design_sim_tool")
parser.add_argument("config", type=str, nargs="*", help="TOML Configuration files")
parser.add_argument(
"--channels",
type=str,
help="Channels e.g. all, SAT, LAT, LFL1 or comma separated list of channels",
required=False,
default="all",
)
parser.add_argument(
"--site",
type=str,
help="Pole, Chile or all, default all",
required=False,
default="all",
)
parser.add_argument(
"--output_folder",
type=str,
help="Output folder, optional",
required=False,
default="output",
)
res = parser.parse_args(args)
if res.site == "all":
sites = ["Chile", "Pole"]
else:
sites = [res.site]
sim = S4RefSimTool(res.config, output_folder=res.output_folder)
sim.run(channels=res.channels, sites=sites)
log.basicConfig(level = log.INFO)
sim = S4RefSimTool("s4_design.toml")
sim.run(channels="LFS1", sites=["Pole"])
%matplotlib inline
from astropy.io import fits
output_map, header = hp.read_map(
"output/SAT-LFS1_pole/cmbs4_KCMB_SAT-LFS1_pole_nside512_1_of_1.fits", (0,1,2),
h=True)
header
header_dict = {k:v for k,v in header}
assert header_dict["SW_VERS"] == __version__
assert header_dict["SOFTWARE"] == "s4_design_sim_tool"
np.min(output_map[1][output_map[1] != hp.UNSEEN]), np.max(output_map[1])
assert np.min(output_map[1][output_map[1] != hp.UNSEEN]) > -1e-2 and np.max(output_map[1]) < 1e-2, \
"Amplitude check failed"
hp.mollview(output_map[0], min=-1e-4, max=1e-4, unit="K", title="Total I")
hp.mollview(output_map[1], min=-1e-5, max=1e-5, unit="K", title="Total Q")
hp.mollview(output_map[2], min=-1e-5, max=1e-5, unit="K", title="Total U")
!rm -r output/SAT-LFS1_pole
###Output
_____no_output_____
###Markdown
Test multiple TOML files
###Code
%%file only_CMB_scalar.toml
[sky_emission]
foreground_emission = 0
CMB_unlensed = 1
CMB_lensing_signal = 1
CMB_tensor_to_scalar_ratio = 0
sim2 = S4RefSimTool(["s4_design.toml", "only_CMB_scalar.toml"])
assert sim2.config["sky_emission"]["foreground_emission"] == 0
assert sim2.config["sky_emission"]["CMB_tensor_to_scalar_ratio"] == 0
assert sim.config["telescopes"] == sim2.config["telescopes"]
!rm only_CMB_scalar.toml
###Output
_____no_output_____ |
09_deploy/ab-test/99_Perform_AB_Test_Reviews_BERT_TensorFlow_Versus_PyTorch_REST_Endpoints.ipynb | ###Markdown
Perform A/B Test using REST Endpoints You can test and deploy new models behind a single SageMaker Endpoint with a concept called “production variants.” These variants can differ by hardware (CPU/GPU), by data (comedy/drama movies), or by region (US West or Germany North). You can shift traffic between the models in your endpoint for canary rollouts and blue/green deployments. You can split traffic for A/B tests. And you can configure your endpoint to automatically scale your endpoints out or in based on a given metric like requests per second. As more requests come in, SageMaker will automatically scale the model prediction API to meet the demand. We can use traffic splitting to direct subsets of users to different model variants for the purpose of comparing and testing different models in live production. The goal is to see which variants perform better. Often, these tests need to run for a long period of time (weeks) to be statistically significant. The figure shows 2 different recommendation models deployed using a random 50-50 traffic split between the 2 variants.
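The auto-scaling piece mentioned above is configured through Application Auto Scaling rather than on the endpoint itself. The cell below is a rough sketch only (it is not used later in this notebook, and the endpoint/variant names, capacities and target value are placeholders):
###Code
import boto3

autoscaling = boto3.client('application-autoscaling')

# Placeholder endpoint and variant names
resource_id = 'endpoint/my-endpoint-name/variant/MyVariant'

# Register the variant's instance count as a scalable target
autoscaling.register_scalable_target(
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    MinCapacity=1,
    MaxCapacity=2)

# Scale out/in to hold roughly 100 invocations per instance per minute
autoscaling.put_scaling_policy(
    PolicyName='InvocationsPerInstanceTargetTracking',
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 100.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance'
        }
    })
###Output
_____no_output_____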
###Code
import boto3
import sagemaker
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name='sagemaker', region_name=region)
cw = boto3.Session().client("cloudwatch")
###Output
_____no_output_____
###Markdown
Clean Up Previous Endpoints to Save Resources
###Code
%store -r autopilot_endpoint_name
try:
autopilot_endpoint_name
sm.delete_endpoint(
EndpointName=autopilot_endpoint_name
)
print('Autopilot Endpoint has been deleted to save resources. This is good.')
except:
print('Endpoints are cleaned up. This is good. Keep moving forward!')
%store -r training_job_name
print(training_job_name)
%store -r tensorflow_model_name
print(tensorflow_model_name)
%store -r pytorch_model_name
print(pytorch_model_name)
###Output
_____no_output_____
###Markdown
Create Variant A Model From the Training Job in a Previous Section. Notes: * `primary_container_image` is required because the inference and training images are different. * By default, the training image will be used, so we need to override it. See https://github.com/aws/sagemaker-python-sdk/issues/1379 * This variant requires the Elastic Inference image since we are using Elastic Inference. * If you are not using a US-based region, you may need to adapt the container image to your current region using the following table: https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html
###Code
# import time
# timestamp = int(time.time())
# model_a_name = '{}-{}-{}'.format(training_job_name, 'var-a', timestamp)
# sess.create_model_from_job(name=model_a_name,
# training_job_name=training_job_name,
# role=role,
# primary_container_image='763104351884.dkr.ecr.{}.amazonaws.com/tensorflow-inference:2.1.0-cpu-py36-ubuntu18.04'.format(region))
###Output
_____no_output_____
###Markdown
Create Variant B Model From the Training Job in a Previous Section. Notes: * This is the same underlying model as variant A, but does not use an attached Elastic Inference Adapter (EIA). * `primary_container_image` is required because the inference and training images are different. * By default, the training image will be used, so we need to override it. See https://github.com/aws/sagemaker-python-sdk/issues/1379 * If you are not using a US-based region, you may need to adapt the container image to your current region using the following table: https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/deep-learning-containers-images.html
###Code
# model_b_name = '{}-{}-{}'.format(training_job_name, 'var-b', timestamp)
# sess.create_model_from_job(name=model_b_name,
# training_job_name=training_job_name,
# role=role,
# primary_container_image='763104351884.dkr.ecr.{}.amazonaws.com/tensorflow-inference:2.1.0-cpu-py36-ubuntu18.04'.format(region))
###Output
_____no_output_____
###Markdown
Canary Rollouts and A/B Testing. Canary rollouts are used to release new models safely to only a small subset of users, such as 5%. They are useful if you want to test in live production without affecting the entire user base. Since the majority of traffic goes to the existing model, the cluster size of the canary model can be relatively small, since it is only receiving 5% of the traffic. Instead of `deploy()`, we can create an `Endpoint Configuration` with multiple variants for canary rollouts and A/B testing.
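For a canary-style rollout, the same `production_variant` helper can express an uneven split; the sketch below (model names and instance type are placeholders, and it is not used later in this notebook) sends roughly 5% of traffic to the new model:
###Code
from sagemaker.session import production_variant

# Placeholder model names: ~5% of traffic to the canary, ~95% to the current model
canary_variant = production_variant(model_name='new-model-name',
                                    instance_type='ml.c5.large',
                                    initial_instance_count=1,
                                    variant_name='Canary',
                                    initial_weight=5)
baseline_variant = production_variant(model_name='current-model-name',
                                      instance_type='ml.c5.large',
                                      initial_instance_count=1,
                                      variant_name='Baseline',
                                      initial_weight=95)
###Output
_____no_output_____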
###Code
from sagemaker.session import production_variant
timestamp = int(time.time())
model_ab_endpoint_config_name = '{}-{}-{}'.format(training_job_name, 'ab', timestamp)
tensorflow_variant = production_variant(model_name=tensorflow_model_name,
instance_type='ml.c5.9xlarge',
initial_instance_count=1,
variant_name='TensorFlow',
initial_weight=50)
pytorch_variant = production_variant(model_name=pytorch_model_name,
                                     instance_type='ml.c5.9xlarge',
                                     initial_instance_count=1,
                                     variant_name='PyTorch',
                                     initial_weight=50)

# Aliases used by the traffic-shifting and monitoring cells further down
variantA = tensorflow_variant
variantB = pytorch_variant
model_ab_endpoint_config = sm.create_endpoint_config(
EndpointConfigName=model_ab_endpoint_config_name,
ProductionVariants=[tensorflow_variant, pytorch_variant]
)
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpointConfig/{}">REST Endpoint Configuration</a></b>'.format(region, model_ab_endpoint_config_name)))
model_ab_endpoint_name = '{}-{}-{}'.format(training_job_name, 'ab', timestamp)

# Alias used by the monitoring and endpoint-update cells further down
endpoint_name = model_ab_endpoint_name
endpoint_response = sm.create_endpoint(
EndpointName=model_ab_endpoint_name,
EndpointConfigName=model_ab_endpoint_config_name)
###Output
_____no_output_____
###Markdown
Store Endpoint Name for Next Notebook(s)
###Code
%store model_ab_endpoint_name
###Output
_____no_output_____
###Markdown
Track the Deployment Within our Experiment
###Code
%store -r experiment_name
print(experiment_name)
%store -r trial_name
print(trial_name)
from smexperiments.trial import Trial
trial = Trial.load(trial_name=trial_name)
print(trial)
from smexperiments.tracker import Tracker
tracker_deploy = Tracker.create(display_name='deploy',
sagemaker_boto_client=sm)
deploy_trial_component_name = tracker_deploy.trial_component.trial_component_name
print('Deploy trial component name {}'.format(deploy_trial_component_name))
###Output
_____no_output_____
###Markdown
Attach the `deploy` Trial Component and Tracker as a Component to the Trial
###Code
trial.add_trial_component(tracker_deploy.trial_component)
###Output
_____no_output_____
###Markdown
Track the Endpoint Name
###Code
tracker_deploy.log_parameters({
'endpoint_name': model_ab_endpoint_name,
})
# must save after logging
tracker_deploy.trial_component.save()
from sagemaker.analytics import ExperimentAnalytics
lineage_table = ExperimentAnalytics(
sagemaker_session=sess,
experiment_name=experiment_name,
metric_names=['validation:accuracy'],
sort_by="CreationTime",
sort_order="Ascending",
)
lineage_df = lineage_table.dataframe()
lineage_df.shape
lineage_df
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">REST Endpoint</a></b>'.format(region, model_ab_endpoint_name)))
###Output
_____no_output_____
###Markdown
_Wait Until the ^^ Endpoint ^^ is Deployed_
###Code
waiter = sm.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=model_ab_endpoint_name)
###Output
_____no_output_____
###Markdown
Simulate a Prediction from an Application
###Code
from sagemaker.tensorflow.serving import Predictor
predictor = Predictor(endpoint_name=model_ab_endpoint_name,
sagemaker_session=sess,
content_type='application/json',
model_name='saved_model',
model_version=0)
###Output
_____no_output_____
###Markdown
Predict the `star_rating` with `review_body` Samples from our TSV's
###Code
import csv
df_reviews = pd.read_csv('./data/amazon_reviews_us_Digital_Software_v1_00.tsv.gz',
delimiter='\t',
quoting=csv.QUOTE_NONE,
compression='gzip')
df_sample_reviews = df_reviews[['review_body', 'star_rating']].sample(n=50)
df_sample_reviews = df_sample_reviews.reset_index()
df_sample_reviews.shape
import pandas as pd
def predict(review_body):
return predictor.predict([review_body])[0]
df_sample_reviews['predicted_class'] = df_sample_reviews['review_body'].map(predict)
df_sample_reviews.head(5)
###Output
_____no_output_____
###Markdown
Predict the `star_rating` with Ad Hoc `review_body` Samples
###Code
reviews = ["This is great!",
"This is not good."]
predicted_classes = predictor.predict(reviews)
for predicted_class, review in zip(predicted_classes, reviews):
print('[Predicted Star Rating: {}]'.format(predicted_class), review)
###Output
_____no_output_____
###Markdown
Review the REST Endpoint Performance Metrics in CloudWatch
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">REST Endpoint Performance Metrics</a></b>'.format(region, endpoint_name)))
###Output
_____no_output_____
###Markdown
Review the REST Endpoint Performance Metrics in a DataframeAmazon SageMaker emits metrics such as Latency and Invocations (full list of metrics [here](https://alpha-docs-aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html)) for each variant in Amazon CloudWatch. Let’s query CloudWatch to get the InvocationsPerVariant to show how invocations are split across variants.
###Code
from datetime import datetime, timedelta
import boto3
import pandas as pd
def get_invocation_metrics_for_endpoint_variant(endpoint_name,
namespace_name,
metric_name,
variant_name,
start_time,
end_time):
metrics = cw.get_metric_statistics(
Namespace=namespace_name,
MetricName=metric_name,
StartTime=start_time,
EndTime=end_time,
Period=60,
Statistics=["Sum"],
Dimensions=[
{
"Name": "EndpointName",
"Value": endpoint_name
},
{
"Name": "VariantName",
"Value": variant_name
}
]
)
if metrics['Datapoints']:
return pd.DataFrame(metrics["Datapoints"])\
.sort_values("Timestamp")\
.set_index("Timestamp")\
.drop("Unit", axis=1)\
.rename(columns={"Sum": variant_name})
else:
return pd.DataFrame()
def plot_endpoint_metrics_for_variants(endpoint_name,
namespace_name,
metric_name,
start_time=None):
try:
start_time = start_time or datetime.now() - timedelta(minutes=60)
end_time = datetime.now()
metrics_variantA = get_invocation_metrics_for_endpoint_variant(endpoint_name=endpoint_name,
namespace_name=namespace_name,
metric_name=metric_name,
variant_name=variantA["VariantName"],
start_time=start_time,
end_time=end_time)
metrics_variantB = get_invocation_metrics_for_endpoint_variant(endpoint_name=endpoint_name,
namespace_name=namespace_name,
metric_name=metric_name,
variant_name=variantB["VariantName"],
start_time=start_time,
end_time=end_time)
metrics_variants = metrics_variantA.join(metrics_variantB, how="outer")
metrics_variants.plot()
except:
pass
###Output
_____no_output_____
###Markdown
Show the Metrics for Each Variant. If you see `Metrics not yet available`, please be patient, as metrics may take a few minutes to appear in CloudWatch. Also, make sure the predictions ran successfully above.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(20)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='/aws/sagemaker/Endpoints',
metric_name='CPUUtilization')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='Invocations')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='InvocationsPerInstance')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='ModelLatency')
###Output
_____no_output_____
###Markdown
Shift All Traffic to Variant B. _**No downtime** occurs during this traffic-shift activity._ This may take a few minutes. Please be patient.
###Code
updated_endpoint_config = [
{
'VariantName': variantA['VariantName'],
'DesiredWeight': 0,
},
{
'VariantName': variantB['VariantName'],
'DesiredWeight': 100,
}
]
sm.update_endpoint_weights_and_capacities(
EndpointName=endpoint_name,
DesiredWeightsAndCapacities=updated_endpoint_config
)
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">REST Endpoint</a></b>'.format(region, endpoint_name)))
###Output
_____no_output_____
###Markdown
_Wait for the ^^ Endpoint Update ^^ to Complete Above._ This may take a few minutes. Please be patient.
###Code
waiter = sm.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Run Some More Predictions
###Code
import pandas as pd
def predict(review_body):
return predictor.predict([review_body])[0]
df_sample_reviews['predicted_class'] = df_sample_reviews['review_body'].map(predict)
df_sample_reviews
###Output
_____no_output_____
###Markdown
Show the Metrics for Each Variant. If you see `Metrics not yet available`, please be patient, as metrics may take a few minutes to appear in CloudWatch. Also, make sure the predictions ran successfully above.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(20)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='/aws/sagemaker/Endpoints',
metric_name='CPUUtilization')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='Invocations')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='InvocationsPerInstance')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='ModelLatency')
###Output
_____no_output_____
###Markdown
Remove Variant A to Reduce Cost. Modify the Endpoint Configuration to only use variant B. _**No downtime** occurs during this scale-down activity._ This may take a few minutes. Please be patient.
###Code
import time
timestamp = int(time.time())
updated_endpoint_config_name = '{}-{}'.format(training_job_name, timestamp)
updated_endpoint_config = sm.create_endpoint_config(
EndpointConfigName=updated_endpoint_config_name,
ProductionVariants=[
{
'VariantName': variantB['VariantName'],
            'ModelName': pytorch_model_name, # Only specify variant B (PyTorch) to remove variant A
'InstanceType':'ml.m5.large',
'InitialInstanceCount': 1,
'InitialVariantWeight': 100
}
])
sm.update_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=updated_endpoint_config_name
)
###Output
_____no_output_____
###Markdown
_If You See An ^^ Error ^^ Above, Please Wait Until the Endpoint is Updated_
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">REST Endpoint</a></b>'.format(region, endpoint_name)))
###Output
_____no_output_____
###Markdown
_Wait for the ^^ Endpoint Update ^^ to Complete Above._ This may take a few minutes. Please be patient.
###Code
waiter = sm.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)
###Output
_____no_output_____
###Markdown
Run Some More Predictions
###Code
import pandas as pd
def predict(review_body):
return predictor.predict([review_body])[0]
df_sample_reviews['predicted_class'] = df_sample_reviews['review_body'].map(predict)
df_sample_reviews
###Output
_____no_output_____
###Markdown
Show the Metrics for Each Variant. If you see `Metrics not yet available`, please be patient, as metrics may take a few minutes to appear in CloudWatch. Also, make sure the predictions ran successfully above.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(20)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='/aws/sagemaker/Endpoints',
metric_name='CPUUtilization')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='Invocations')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='InvocationsPerInstance')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
time.sleep(5)
plot_endpoint_metrics_for_variants(endpoint_name=endpoint_name,
namespace_name='AWS/SageMaker',
metric_name='ModelLatency')
###Output
_____no_output_____
###Markdown
Delete Endpoint. To save money, we should delete the endpoint.
###Code
# sm.delete_endpoint(
# EndpointName=endpoint_name
# )
###Output
_____no_output_____
###Markdown
More Links: * Optimize Cost with TensorFlow and Elastic Inference: https://aws.amazon.com/blogs/machine-learning/optimizing-costs-in-amazon-elastic-inference-with-amazon-tensorflow/ * Using API Gateway with SageMaker Endpoints: https://aws.amazon.com/blogs/machine-learning/creating-a-machine-learning-powered-rest-api-with-amazon-api-gateway-mapping-templates-and-amazon-sagemaker/
###Code
%store
%%javascript
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
###Output
_____no_output_____ |
notebooks/3_NeuralNetworks/bidirectional_rnn.ipynb | ###Markdown
Bi-directional Recurrent Neural Network Example. Build a bi-directional recurrent neural network (LSTM) with TensorFlow. - Author: Aymeric Damien - Project: https://github.com/aymericdamien/TensorFlow-Examples/ BiRNN Overview. References: - [Long Short Term Memory](https://www.researchgate.net/profile/Sepp_Hochreiter/publication/13853244_Long_Short-term_Memory/links/5700e75608aea6b7746a0624/Long-Short-term-Memory.pdf), Sepp Hochreiter & Jurgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997. MNIST Dataset Overview. This example uses MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28). To classify images with a recurrent neural network, we treat every image row as one step in a sequence of pixels: because the MNIST image shape is 28x28 px, each sample becomes a sequence of 28 timesteps with 28 features per step. More info: http://yann.lecun.com/exdb/mnist/
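As a small illustration of this row-as-timestep view (the array below is a zero-filled placeholder, not a real MNIST sample):
###Code
import numpy as np

flat_image = np.zeros(784, dtype=np.float32)   # placeholder for one flattened 28x28 digit
sequence = flat_image.reshape(28, 28)          # 28 timesteps, each a row of 28 pixel values
###Output
_____no_output_____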
###Code
from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib import rnn
import numpy as np
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Training Parameters
learning_rate = 0.001
training_steps = 10000
batch_size = 128
display_step = 200
# Network Parameters
num_input = 28 # MNIST data input (img shape: 28*28)
timesteps = 28 # timesteps
num_hidden = 128 # hidden layer num of features
num_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])
# Define weights
weights = {
# Hidden layer weights => 2*n_hidden because of forward + backward cells
'out': tf.Variable(tf.random_normal([2*num_hidden, num_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([num_classes]))
}
def BiRNN(x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, timesteps, n_input)
# Required shape: 'timesteps' tensors list of shape (batch_size, num_input)
# Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
x = tf.unstack(x, timesteps, 1)
# Define lstm cells with tensorflow
# Forward direction cell
lstm_fw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
# Backward direction cell
lstm_bw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
# Get lstm cell output
try:
outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
dtype=tf.float32)
except Exception: # Old TensorFlow version only returns outputs not states
outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
logits = BiRNN(X, weights, biases)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for step in range(1, training_steps+1):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Reshape data to get 28 seq of 28 elements
batch_x = batch_x.reshape((batch_size, timesteps, num_input))
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy for 128 mnist test images
test_len = 128
test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
test_label = mnist.test.labels[:test_len]
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))
###Output
Step 1, Minibatch Loss= 2.6218, Training Accuracy= 0.086
Step 200, Minibatch Loss= 2.1900, Training Accuracy= 0.211
Step 400, Minibatch Loss= 2.0144, Training Accuracy= 0.375
Step 600, Minibatch Loss= 1.8729, Training Accuracy= 0.445
Step 800, Minibatch Loss= 1.8000, Training Accuracy= 0.469
Step 1000, Minibatch Loss= 1.7244, Training Accuracy= 0.453
Step 1200, Minibatch Loss= 1.5657, Training Accuracy= 0.523
Step 1400, Minibatch Loss= 1.5473, Training Accuracy= 0.547
Step 1600, Minibatch Loss= 1.5288, Training Accuracy= 0.500
Step 1800, Minibatch Loss= 1.4203, Training Accuracy= 0.555
Step 2000, Minibatch Loss= 1.2525, Training Accuracy= 0.641
Step 2200, Minibatch Loss= 1.2696, Training Accuracy= 0.594
Step 2400, Minibatch Loss= 1.2000, Training Accuracy= 0.664
Step 2600, Minibatch Loss= 1.1017, Training Accuracy= 0.625
Step 2800, Minibatch Loss= 1.2656, Training Accuracy= 0.578
Step 3000, Minibatch Loss= 1.0830, Training Accuracy= 0.656
Step 3200, Minibatch Loss= 1.1522, Training Accuracy= 0.633
Step 3400, Minibatch Loss= 0.9484, Training Accuracy= 0.680
Step 3600, Minibatch Loss= 1.0470, Training Accuracy= 0.641
Step 3800, Minibatch Loss= 1.0609, Training Accuracy= 0.586
Step 4000, Minibatch Loss= 1.1853, Training Accuracy= 0.648
Step 4200, Minibatch Loss= 0.9438, Training Accuracy= 0.750
Step 4400, Minibatch Loss= 0.7986, Training Accuracy= 0.766
Step 4600, Minibatch Loss= 0.8070, Training Accuracy= 0.750
Step 4800, Minibatch Loss= 0.8382, Training Accuracy= 0.734
Step 5000, Minibatch Loss= 0.7397, Training Accuracy= 0.766
Step 5200, Minibatch Loss= 0.7870, Training Accuracy= 0.727
Step 5400, Minibatch Loss= 0.6380, Training Accuracy= 0.828
Step 5600, Minibatch Loss= 0.7975, Training Accuracy= 0.719
Step 5800, Minibatch Loss= 0.7934, Training Accuracy= 0.766
Step 6000, Minibatch Loss= 0.6628, Training Accuracy= 0.805
Step 6200, Minibatch Loss= 0.7958, Training Accuracy= 0.672
Step 6400, Minibatch Loss= 0.6582, Training Accuracy= 0.773
Step 6600, Minibatch Loss= 0.5908, Training Accuracy= 0.812
Step 6800, Minibatch Loss= 0.6182, Training Accuracy= 0.820
Step 7000, Minibatch Loss= 0.5513, Training Accuracy= 0.812
Step 7200, Minibatch Loss= 0.6683, Training Accuracy= 0.789
Step 7400, Minibatch Loss= 0.5337, Training Accuracy= 0.828
Step 7600, Minibatch Loss= 0.6428, Training Accuracy= 0.805
Step 7800, Minibatch Loss= 0.6708, Training Accuracy= 0.797
Step 8000, Minibatch Loss= 0.4664, Training Accuracy= 0.852
Step 8200, Minibatch Loss= 0.4249, Training Accuracy= 0.859
Step 8400, Minibatch Loss= 0.7723, Training Accuracy= 0.773
Step 8600, Minibatch Loss= 0.4706, Training Accuracy= 0.859
Step 8800, Minibatch Loss= 0.4800, Training Accuracy= 0.867
Step 9000, Minibatch Loss= 0.4636, Training Accuracy= 0.891
Step 9200, Minibatch Loss= 0.5734, Training Accuracy= 0.828
Step 9400, Minibatch Loss= 0.5548, Training Accuracy= 0.875
Step 9600, Minibatch Loss= 0.3575, Training Accuracy= 0.922
Step 9800, Minibatch Loss= 0.4566, Training Accuracy= 0.844
Step 10000, Minibatch Loss= 0.5125, Training Accuracy= 0.844
Optimization Finished!
Testing Accuracy: 0.890625
###Markdown
Bi-directional Recurrent Neural Network ExampleBuild a bi-directional recurrent neural network (LSTM) with TensorFlow.- Author: Aymeric Damien- Project: https://github.com/aymericdamien/TensorFlow-Examples/ BiRNN OverviewReferences:- [Long Short Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf), Sepp Hochreiter & Jurgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997. MNIST Dataset OverviewThis example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).To classify images using a recurrent neural network, we consider every image row as a sequence of pixels. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 timesteps for every sample.More info: http://yann.lecun.com/exdb/mnist/
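As a quick illustration of the "row as timestep" idea described above (an illustrative addition, not part of the original example), the sketch below reshapes a flattened 784-pixel image into the (timesteps, num_input) = (28, 28) layout that the code in the next cell feeds to the RNN.
###Code
import numpy as np

# One flattened MNIST image: 784 pixel values in a single vector.
flat_image = np.zeros(784, dtype=np.float32)

# View it as 28 timesteps with 28 features each (one image row per timestep).
sequence = flat_image.reshape(28, 28)
print(sequence.shape)  # (28, 28)

# A whole batch reshapes the same way: (batch_size, timesteps, num_input).
batch = np.zeros((128, 784), dtype=np.float32).reshape((-1, 28, 28))
print(batch.shape)  # (128, 28, 28)
###Output
_____no_output_____
###Markdown
The full bi-directional RNN example follows.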
###Code
from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib import rnn
import numpy as np
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Training Parameters
learning_rate = 0.001
training_steps = 10000
batch_size = 128
display_step = 200
# Network Parameters
num_input = 28 # MNIST data input (img shape: 28*28)
timesteps = 28 # timesteps
num_hidden = 128 # hidden layer num of features
num_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])
# Define weights
weights = {
# Hidden layer weights => 2*n_hidden because of forward + backward cells
'out': tf.Variable(tf.random_normal([2*num_hidden, num_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([num_classes]))
}
def BiRNN(x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, timesteps, n_input)
# Required shape: 'timesteps' tensors list of shape (batch_size, num_input)
# Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
x = tf.unstack(x, timesteps, 1)
# Define lstm cells with tensorflow
# Forward direction cell
lstm_fw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
# Backward direction cell
lstm_bw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
# Get lstm cell output
try:
outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
dtype=tf.float32)
except Exception: # Old TensorFlow version only returns outputs not states
outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
logits = BiRNN(X, weights, biases)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for step in range(1, training_steps+1):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Reshape data to get 28 seq of 28 elements
batch_x = batch_x.reshape((batch_size, timesteps, num_input))
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy for 128 mnist test images
test_len = 128
test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
test_label = mnist.test.labels[:test_len]
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))
###Output
Step 1, Minibatch Loss= 2.6218, Training Accuracy= 0.086
Step 200, Minibatch Loss= 2.1900, Training Accuracy= 0.211
Step 400, Minibatch Loss= 2.0144, Training Accuracy= 0.375
Step 600, Minibatch Loss= 1.8729, Training Accuracy= 0.445
Step 800, Minibatch Loss= 1.8000, Training Accuracy= 0.469
Step 1000, Minibatch Loss= 1.7244, Training Accuracy= 0.453
Step 1200, Minibatch Loss= 1.5657, Training Accuracy= 0.523
Step 1400, Minibatch Loss= 1.5473, Training Accuracy= 0.547
Step 1600, Minibatch Loss= 1.5288, Training Accuracy= 0.500
Step 1800, Minibatch Loss= 1.4203, Training Accuracy= 0.555
Step 2000, Minibatch Loss= 1.2525, Training Accuracy= 0.641
Step 2200, Minibatch Loss= 1.2696, Training Accuracy= 0.594
Step 2400, Minibatch Loss= 1.2000, Training Accuracy= 0.664
Step 2600, Minibatch Loss= 1.1017, Training Accuracy= 0.625
Step 2800, Minibatch Loss= 1.2656, Training Accuracy= 0.578
Step 3000, Minibatch Loss= 1.0830, Training Accuracy= 0.656
Step 3200, Minibatch Loss= 1.1522, Training Accuracy= 0.633
Step 3400, Minibatch Loss= 0.9484, Training Accuracy= 0.680
Step 3600, Minibatch Loss= 1.0470, Training Accuracy= 0.641
Step 3800, Minibatch Loss= 1.0609, Training Accuracy= 0.586
Step 4000, Minibatch Loss= 1.1853, Training Accuracy= 0.648
Step 4200, Minibatch Loss= 0.9438, Training Accuracy= 0.750
Step 4400, Minibatch Loss= 0.7986, Training Accuracy= 0.766
Step 4600, Minibatch Loss= 0.8070, Training Accuracy= 0.750
Step 4800, Minibatch Loss= 0.8382, Training Accuracy= 0.734
Step 5000, Minibatch Loss= 0.7397, Training Accuracy= 0.766
Step 5200, Minibatch Loss= 0.7870, Training Accuracy= 0.727
Step 5400, Minibatch Loss= 0.6380, Training Accuracy= 0.828
Step 5600, Minibatch Loss= 0.7975, Training Accuracy= 0.719
Step 5800, Minibatch Loss= 0.7934, Training Accuracy= 0.766
Step 6000, Minibatch Loss= 0.6628, Training Accuracy= 0.805
Step 6200, Minibatch Loss= 0.7958, Training Accuracy= 0.672
Step 6400, Minibatch Loss= 0.6582, Training Accuracy= 0.773
Step 6600, Minibatch Loss= 0.5908, Training Accuracy= 0.812
Step 6800, Minibatch Loss= 0.6182, Training Accuracy= 0.820
Step 7000, Minibatch Loss= 0.5513, Training Accuracy= 0.812
Step 7200, Minibatch Loss= 0.6683, Training Accuracy= 0.789
Step 7400, Minibatch Loss= 0.5337, Training Accuracy= 0.828
Step 7600, Minibatch Loss= 0.6428, Training Accuracy= 0.805
Step 7800, Minibatch Loss= 0.6708, Training Accuracy= 0.797
Step 8000, Minibatch Loss= 0.4664, Training Accuracy= 0.852
Step 8200, Minibatch Loss= 0.4249, Training Accuracy= 0.859
Step 8400, Minibatch Loss= 0.7723, Training Accuracy= 0.773
Step 8600, Minibatch Loss= 0.4706, Training Accuracy= 0.859
Step 8800, Minibatch Loss= 0.4800, Training Accuracy= 0.867
Step 9000, Minibatch Loss= 0.4636, Training Accuracy= 0.891
Step 9200, Minibatch Loss= 0.5734, Training Accuracy= 0.828
Step 9400, Minibatch Loss= 0.5548, Training Accuracy= 0.875
Step 9600, Minibatch Loss= 0.3575, Training Accuracy= 0.922
Step 9800, Minibatch Loss= 0.4566, Training Accuracy= 0.844
Step 10000, Minibatch Loss= 0.5125, Training Accuracy= 0.844
Optimization Finished!
Testing Accuracy: 0.890625
|
Python_Stock/Big_Three_Trading Strategy.ipynb | ###Markdown
Big Three Trading Strategy 20 Period Simple Moving Average40 Period Simple Moving Average80 Period Simple Moving Averagehttps://tradingstrategyguides.com/big-three-trading-strategy/
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2014-01-01'
end = '2018-12-31'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
df['SMA_20'] = df['Adj Close'].rolling(20).mean()
df['SMA_40'] = df['Adj Close'].rolling(40).mean()
df['SMA_80'] = df['Adj Close'].rolling(80).mean()
df.tail()
fig, ax = plt.subplots(figsize=(16,9))
ax.plot(df.index, df['Adj Close'], label='Price')
ax.plot(df.index, df['SMA_20'], label = '20-days SMA')
ax.plot(df.index, df['SMA_40'], label = '40-days SMA')
ax.plot(df.index, df['SMA_80'], label = '80-days SMA')
ax.legend(loc='best')
ax.set_ylabel('Price')
ax.set_title('Big Three Trading Strategy')
###Output
_____no_output_____
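###Markdown
The strategy page linked above uses the alignment of the three moving averages as a trend filter. The cell below is a small illustrative sketch (not part of the original notebook, and not trading advice): it only flags bars where the averages are stacked bullishly (SMA_20 > SMA_40 > SMA_80) or bearishly (SMA_20 < SMA_40 < SMA_80).
###Code
# Illustrative trend filter based on the stacking order of the three SMAs.
df['bullish_stack'] = (df['SMA_20'] > df['SMA_40']) & (df['SMA_40'] > df['SMA_80'])
df['bearish_stack'] = (df['SMA_20'] < df['SMA_40']) & (df['SMA_40'] < df['SMA_80'])
# Count the bars in each regime (NaN rows from the rolling windows count as False).
print(df[['bullish_stack', 'bearish_stack']].sum())
###Output
_____no_output_____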
###Markdown
Closer Viewplot different dates
###Code
new_dates = df['2017-01-01':'2018-12-31']
new_dates.head()
plt.figure(figsize=(16,10))
plt.plot(new_dates['Adj Close'], label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA')
plt.plot(new_dates['SMA_40'], label = '40-days SMA')
plt.plot(new_dates['SMA_80'], label = '80-days SMA')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Dates')
plt.title('Big Three Trading Strategy')
plt.figure(figsize=(16,10))
plt.plot_date(x=new_dates.index ,y=new_dates['Adj Close'], label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA')
plt.plot(new_dates['SMA_40'], label = '40-days SMA')
plt.plot(new_dates['SMA_80'], label = '80-days SMA')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Dates')
plt.title('Big Three Trading Strategy')
plt.figure(figsize=(16,10))
plt.plot_date(x=new_dates.index ,y=new_dates['Adj Close'], fmt='r-', label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA', color='b')
plt.plot(new_dates['SMA_40'], label = '40-days SMA', color='y')
plt.plot(new_dates['SMA_80'], label = '80-days SMA', color='green')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Date')
plt.title('Big Three Trading Strategy')
###Output
_____no_output_____
###Markdown
Big Three Trading Strategy 20 Period Simple Moving Average40 Period Simple Moving Average80 Period Simple Moving Averagehttps://tradingstrategyguides.com/big-three-trading-strategy/
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2014-01-01'
end = '2018-12-31'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
df['SMA_20'] = df['Adj Close'].rolling(20).mean()
df['SMA_40'] = df['Adj Close'].rolling(40).mean()
df['SMA_80'] = df['Adj Close'].rolling(80).mean()
df.tail()
fig, ax = plt.subplots(figsize=(16,9))
ax.plot(df.index, df['Adj Close'], label='Price')
ax.plot(df.index, df['SMA_20'], label = '20-days SMA')
ax.plot(df.index, df['SMA_40'], label = '40-days SMA')
ax.plot(df.index, df['SMA_80'], label = '80-days SMA')
ax.legend(loc='best')
ax.set_ylabel('Price')
ax.set_title('Big Three Trading Strategy')
###Output
_____no_output_____
###Markdown
Closer Viewplot different dates
###Code
new_dates = df['2017-01-01':'2018-12-31']
new_dates.head()
plt.figure(figsize=(16,10))
plt.plot(new_dates['Adj Close'], label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA')
plt.plot(new_dates['SMA_40'], label = '40-days SMA')
plt.plot(new_dates['SMA_80'], label = '80-days SMA')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Dates')
plt.title('Big Three Trading Strategy')
plt.figure(figsize=(16,10))
plt.plot_date(x=new_dates.index ,y=new_dates['Adj Close'], label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA')
plt.plot(new_dates['SMA_40'], label = '40-days SMA')
plt.plot(new_dates['SMA_80'], label = '80-days SMA')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Dates')
plt.title('Big Three Trading Strategy')
plt.figure(figsize=(16,10))
plt.plot_date(x=new_dates.index ,y=new_dates['Adj Close'], fmt='r-', label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA', color='b')
plt.plot(new_dates['SMA_40'], label = '40-days SMA', color='y')
plt.plot(new_dates['SMA_80'], label = '80-days SMA', color='green')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Date')
plt.title('Big Three Trading Strategy')
###Output
_____no_output_____
###Markdown
Big Three Trading Strategy 20 Period Simple Moving Average40 Period Simple Moving Average80 Period Simple Moving Averagehttps://tradingstrategyguides.com/big-three-trading-strategy/
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2014-01-01'
end = '2020-12-31'
# Read data
df = yf.download(symbol, start, end)
# View Columns
df.head()
df['SMA_20'] = df['Adj Close'].rolling(20).mean()
df['SMA_40'] = df['Adj Close'].rolling(40).mean()
df['SMA_80'] = df['Adj Close'].rolling(80).mean()
df.tail()
fig, ax = plt.subplots(figsize=(16,9))
ax.plot(df.index, df['Adj Close'], label='Price')
ax.plot(df.index, df['SMA_20'], label='20-days SMA')
ax.plot(df.index, df['SMA_40'], label='40-days SMA')
ax.plot(df.index, df['SMA_80'], label='80-days SMA')
ax.legend(loc='best')
ax.set_ylabel('Price')
ax.set_title('Big Three Trading Strategy')
ax.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Closer Viewplot different dates
###Code
new_dates = df['2019-01-01':'2020-12-31']
new_dates.head()
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.figure(figsize=(16,10))
plt.plot(new_dates['Adj Close'], label='Price')
plt.plot(new_dates['SMA_20'], label='20-days SMA')
plt.plot(new_dates['SMA_40'], label='40-days SMA')
plt.plot(new_dates['SMA_80'], label='80-days SMA')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Dates')
plt.title('Big Three Trading Strategy')
plt.show()
plt.figure(figsize=(16,10))
plt.plot_date(x=new_dates.index, y=new_dates['Adj Close'], label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA')
plt.plot(new_dates['SMA_40'], label = '40-days SMA')
plt.plot(new_dates['SMA_80'], label = '80-days SMA')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Dates')
plt.title('Big Three Trading Strategy')
plt.show()
plt.figure(figsize=(16,10))
plt.plot_date(x=new_dates.index ,y=new_dates['Adj Close'], fmt='r-', label='Price')
plt.plot(new_dates['SMA_20'], label = '20-days SMA', color='b')
plt.plot(new_dates['SMA_40'], label = '40-days SMA', color='y')
plt.plot(new_dates['SMA_80'], label = '80-days SMA', color='green')
plt.legend(loc='best')
plt.grid(True)
plt.ylabel('Price')
plt.xlabel('Date')
plt.title('Big Three Trading Strategy')
plt.show()
###Output
_____no_output_____ |
Assignment 1/assignment_1.ipynb | ###Markdown
IEMS5780 - Assignment 1 > Last modified Oct 19, 2019 We will use a dataset of movie reviews collected from IMDb, which is a movie database where Internet users can leave their comments about movies. The dataset can be obtained from the following Webpage: http://ai.stanford.edu/~amaas/data/sentiment/.
###Code
# Imports
import glob
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
1. Data PreparationThe following function is used to do the combining and splitting work. It returns a tuple of pandas dataframes, corresponding to the training and test sets.Because I am working on a Windows machine, the paths were written with backslashes '\'. If you are using a Unix machine, please modify them to forward slashes '/'.
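(If you prefer not to edit the slashes by hand, the small sketch below is an optional, platform-independent alternative, an illustrative addition rather than part of the original assignment code, that builds the glob pattern with os.path.join so the same call works on both Windows and Unix.)
###Code
import glob
import os

def list_review_files(dataset_path, split, label):
    """Return review file paths, e.g. split='train', label='pos',
    building the pattern with os.path.join so it works on any OS."""
    pattern = os.path.join(dataset_path, split, label, '*.txt')
    return glob.glob(pattern)
###Output
_____no_output_____
###Markdown
The original Windows-style implementation is kept as-is below.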
###Code
def combine(dataset_path, is_shuffle=False, save_path=None):
"""Combine the train and test dataset.
:param: dataset_path: str
:param: is_shuffle: boolean
:param: save_path: str, None for don't save
:return: (training_dataframe, test_dataframe): tuple
"""
    print('Data pre-processing...')
data = []
# Open files in positive comments.
for filename in glob.glob(dataset_path + 'train\\pos\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
data += [[f.read().strip(), 1]]
for filename in glob.glob(dataset_path + 'test\\pos\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
data += [[f.read().strip(), 1]]
# Open files in negative comments.
for filename in glob.glob(dataset_path + 'train\\neg\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
data += [[f.read().strip(), 0]]
for filename in glob.glob(dataset_path + 'test\\neg\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
data += [[f.read().strip(), 0]]
# Load datalist into DataFrame
df = pd.DataFrame(data, columns=['comment', 'attitude'])
# Shuffle
if is_shuffle:
df = df.sample(frac=1)
# Split the dataset
df_train, df_test = train_test_split(df, test_size=0.3)
# Save DataFrame to csv file.
if save_path is not None:
with open(save_path + 'train.csv', 'w', encoding='utf8') as f:
df_train.to_csv(f)
with open(save_path + 'test.csv', 'w', encoding='utf8') as f:
df_test.to_csv(f)
# Return the dataframe.
return df_train, df_test
###Output
_____no_output_____
###Markdown
2. Using a Naive Bayes ClassifierIn this section, a pipeline will be built to read the data, vectorize it with `CountVectorizer` or `TfidfVectorizer`, and then train a Naive Bayes classifier.
###Code
def naive_bayes_count(train, test):
"""Train a Naive Bayes classifier with count vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Naive Bayes model with unigram CountVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', CountVectorizer()),
('nb', MultinomialNB())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
###Output
_____no_output_____
###Markdown
Use `TfidfVectorizer`
###Code
def naive_bayes_tfidf(train, test):
"""Train a Naive Bayes classifier with Tf-Idf vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Naive Bayes model with unigram TfidfVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', TfidfVectorizer()),
('nb', MultinomialNB())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
###Output
_____no_output_____
###Markdown
3. Using Logistic RegressionIn this section, a pipeline will be built to read the data, vectorize it with CountVectorizer or TfidfVectorizer, and then train a logistic regression classifier.
###Code
def logistic_regression_count(train, test):
"""Train a logistic regression classifier with count vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Logistic Regression model with unigram CountVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', CountVectorizer()),
('log', LogisticRegression())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
###Output
_____no_output_____
###Markdown
Use `TfidfVectorizer`.
###Code
def logistic_regression_tfidf(train, test):
"""Train a logistic regression classifier with Tf-idf vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Logistic Regression model with unigram TfidfVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', TfidfVectorizer()),
('log', LogisticRegression())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
###Output
_____no_output_____
###Markdown
4. Bi-Gram ModelsRepeat all experiments with bi-gram features. Naive Bayes Models
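To make the unigram/bigram difference concrete first, the toy cell below (an illustrative addition, not part of the original assignment) shows the features that `CountVectorizer(ngram_range=(1, 2))` extracts from one short review: every single word plus every adjacent word pair, so negations such as 'not good' become features of their own.
###Code
from sklearn.feature_extraction.text import CountVectorizer

toy_vec = CountVectorizer(ngram_range=(1, 2))
toy_vec.fit(['this movie was not good'])
# The vocabulary now contains unigrams and bigrams such as 'not good'.
print(sorted(toy_vec.vocabulary_.keys()))
###Output
_____no_output_____
###Markdown
The bigram Naive Bayes classifier is defined next.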
###Code
def naive_bayes_count_bigram(train, test):
"""Train a Naive Bayes classifier with count vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Naive Bayes model with bigram CountVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', CountVectorizer(ngram_range=(1,2))),
('nb', MultinomialNB())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
###Output
_____no_output_____
###Markdown
Logistic Regression Models
###Code
def logistic_regression_count_bigram(train, test):
"""Train a logistic regression classifier with count vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Logistic Regression model with biigram CountVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', CountVectorizer(ngram_range=(1,2))),
('log', LogisticRegression())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
# Data preprocessing. Fill in your own dataset path (and a save path, or None to skip saving).
train, test = combine('D:\\Datasets\\aclImdb\\', True, None)
# Run all models.
naive_bayes_count(train, test)
naive_bayes_tfidf(train, test)
logistic_regression_count(train, test)
logistic_regression_tfidf(train, test)
naive_bayes_count_bigram(train, test)
logistic_regression_count_bigram(train, test)
###Output
_____no_output_____
###Markdown
Model Comparison
| Model | Accuracy | Precision (pos) | Recall (pos) | Precision (neg) | Recall (neg) |
| :---: | :------: | :-------------: | :----------: | :-------------: | :----------: |
| NB-Unigram | 0.84 | 0.87 | 0.81 | 0.82 | 0.88 |
| NB-Tfidf | 0.86 | 0.88 | 0.83 | 0.84 | 0.88 |
| Logistic-Unigram | 0.88 | 0.88 | 0.89 | 0.89 | 0.88 |
| Logistic-Tfidf | 0.89 | 0.89 | 0.90 | 0.90 | 0.88 |
| NB-Bigram | 0.88 | 0.89 | 0.86 | 0.87 | 0.90 |
| Logistic-Bigram | 0.91 | 0.90 | 0.91 | 0.91 | 0.90 |
From the results above, we can see the best-performing model was **the Logistic Regression model with bigram CountVectorizer**. This model will be saved in a later section and used for the Telegram chatbot. 5. fastTextNow train a fastText model on the movie comments.
###Code
# Preprocess the data by fastText format.
import csv
def pre_process_fasttext(dataset_path, save_path):
"""Dump training set and test set from txt files with labels.
:param path to dataset. str
:param path to save the processed data. str
"""
data = []
# Open files in positive comments.
for filename in glob.glob(dataset_path + 'train\\pos\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
data += [['__label__positive ' + f.read().strip()]]
for filename in glob.glob(dataset_path + 'test\\pos\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
data += [['__label__positive ' + f.read().strip()]]
# Open files in negative comments.
for filename in glob.glob(dataset_path + 'train\\neg\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
            data += [['__label__negative ' + f.read().strip()]]
for filename in glob.glob(dataset_path + 'test\\neg\\*.txt'):
with open(filename, 'r', encoding='utf8') as f:
            data += [['__label__negative ' + f.read().strip()]]
# Load datalist into DataFrame
df = pd.DataFrame(data, columns=['comment_label'])
df = df.sample(frac=1)
# Split the dataset
df_train, df_test = train_test_split(df, test_size=0.3)
# Save DataFrame to csv file.
with open(save_path + 'train.txt', 'w', encoding='utf8') as f:
df_train.to_csv(f, header=None, index=None, mode='a', quoting=csv.QUOTE_NONE, escapechar='\\')
with open(save_path + 'test.txt', 'w', encoding='utf8') as f:
df_test.to_csv(f, header=None, index=None, mode='a', quoting=csv.QUOTE_NONE, escapechar='\\')
# Change to your dataset path.
pre_process_fasttext('D:\\Datasets\\aclImdb\\', 'D:\\Datasets\\aclImdb\\fastText\\')
# Train a fastText model.
from fasttext import train_supervised
def train_fasttext(train_path, test_path, epoch, learning_rate, n_gram):
model = train_supervised(
input=train_path,
epoch=epoch,
lr=learning_rate,
wordNgrams=n_gram,
verbose=2,
minCount=1
)
print(model.test(test_path))
return model
# Call the train function
ft_model = train_fasttext('D:\\Datasets\\aclImdb\\fastText\\train.txt', 'D:\\Datasets\\aclImdb\\fastText\\test.txt', 25, 1, 2)
# Save the model to current directory
ft_model.save_model('imdb_comments_ft.bin')
###Output
_____no_output_____
###Markdown
From the test output, we can see the accuracy is around 0.89.
###Code
# Load and test the fastText model.
from fasttext import load_model
model = load_model('imdb_comments_ft.bin')
text = 'I think this is a good movie.'
# Predict the top-choice class.
label, score = model.predict(text, k=1)
# Print out the output.
print('{}\t{}'.format(label, score))
###Output
('__label__positive',) [0.97075158]
###Markdown
6. Model SavingFrom the last section, the bigram logistic regression model has the best performance. So let's add a save statement to the code and save this model.
###Code
from joblib import dump
def logistic_regression_count_bigram_save(train, test, path):
"""Train a logistic regression classifier with count vectorizer.
:param training set. pandas Dataframe.
:param test set. pandas Dataframe.
:param model save path. str. None for don't save.
:return sklearn model.
"""
print('Training Logistic Regression model with biigram CountVectorize...')
# Extract documents and labels.
docs_train = train['comment']
labels_train = train['attitude']
docs_test = test['comment']
labels_test = test['attitude']
# Start up a Pipeline
pipe = Pipeline([
('vec', CountVectorizer(ngram_range=(1,2))),
('log', LogisticRegression())
])
# Train the model.
pipe.fit(docs_train, labels_train)
# Do prediction.
y_pred = pipe.predict(docs_test)
# Get report.
print(classification_report(labels_test, y_pred))
dump(pipe, path)
logistic_regression_count_bigram_save(train, test, 'model.joblib')
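# Illustrative check (not part of the original assignment): the saved pipeline can be
# reloaded with joblib and applied directly to raw text, which is what a downstream
# service such as the Telegram chatbot would do.
from joblib import load
reloaded = load('model.joblib')
print(reloaded.predict(['I think this is a good movie.']))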
###Output
_____no_output_____ |
src/examples/pyNetLogo demo.ipynb | ###Markdown
Example 1: NetLogo interaction through the pyNetLogo connectorThis notebook provides a simple example of interaction between a NetLogo model and the Python environment, using the Wolf Sheep Predation model included in the NetLogo example library (Wilensky, 1999). This model is slightly modified to add additional agent properties and illustrate the exchange of different data types. All files used in the example are available from the pyNetLogo repository at https://github.com/quaquel/pyNetLogo.We start by instantiating a link to NetLogo, loading the model, and executing the `setup` command in NetLogo.
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
sns.set_context('talk')
import pyNetLogo
netlogo = pyNetLogo.NetLogoLink(gui=True)
netlogo.load_model(r'Wolf Sheep Predation_v6.nlogo')
netlogo.command('setup')
###Output
_____no_output_____
###Markdown
We can use the `write_NetLogo_attriblist` method to pass properties to agents from a Pandas dataframe -- for instance, initial values for given attributes. This improves performance by simultaneously setting multiple properties for multiple agents in a single function call.As an example, we first load data from an Excel file into a dataframe. Each row corresponds to an agent, with columns for each attribute (including the `who` NetLogo identifier, which is required). In this case, we set coordinates for the agents using the `xcor` and `ycor` attributes.
###Code
agent_xy = pd.read_excel('xy_DataFrame.xlsx')
agent_xy[['who','xcor','ycor']].head(5)
###Output
_____no_output_____
###Markdown
We can then pass the dataframe to NetLogo, specifying which attributes and which agent type we want to update:
###Code
netlogo.write_NetLogo_attriblist(agent_xy[['who','xcor','ycor']], 'a-sheep')
###Output
_____no_output_____
###Markdown
We can check the data exchange by returning data from NetLogo to the Python workspace, using the report method. In the example below, this returns arrays for the `xcor` and `ycor` coordinates of the `sheep` agents, sorted by their `who` number. These are then plotted on a conventional scatter plot.The `report` method directly passes a string to the NetLogo instance, so that the command syntax may need to be adjusted depending on the NetLogo version. The `netlogo_version` property of the link object can be used to check the current version. By default, the link object will use the most recent NetLogo version which was found.
###Code
if netlogo.netlogo_version == '6':
x = netlogo.report('map [s -> [xcor] of s] sort sheep')
y = netlogo.report('map [s -> [ycor] of s] sort sheep')
elif netlogo.netlogo_version == '5':
x = netlogo.report('map [[xcor] of ?1] sort sheep')
y = netlogo.report('map [[ycor] of ?1] sort sheep')
fig, ax = plt.subplots(1)
ax.scatter(x, y, s=4)
ax.set_xlabel('xcor')
ax.set_ylabel('ycor')
ax.set_aspect('equal')
fig.set_size_inches(5,5)
plt.show()
###Output
_____no_output_____
###Markdown
We can then run the model for 100 ticks and update the Python coordinate arrays for the sheep agents, and return an additional array for each agent's energy value. The latter is plotted on a histogram for each agent type.
###Code
#We can use either of the following commands to run for 100 ticks:
netlogo.command('repeat 100 [go]')
#netlogo.repeat_command('go', 100)
if netlogo.netlogo_version == '6':
#Return sorted arrays so that the x, y and energy properties of each agent are in the same order
x = netlogo.report('map [s -> [xcor] of s] sort sheep')
y = netlogo.report('map [s -> [ycor] of s] sort sheep')
energy_sheep = netlogo.report('map [s -> [energy] of s] sort sheep')
elif netlogo.netlogo_version == '5':
x = netlogo.report('map [[xcor] of ?1] sort sheep')
y = netlogo.report('map [[ycor] of ?1] sort sheep')
energy_sheep = netlogo.report('map [[energy] of ?1] sort sheep')
energy_wolves = netlogo.report('[energy] of wolves') #NetLogo returns these in random order
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(1, 2)
sc = ax[0].scatter(x, y, s=50, c=energy_sheep, cmap=plt.cm.coolwarm)
ax[0].set_xlabel('xcor')
ax[0].set_ylabel('ycor')
ax[0].set_aspect('equal')
divider = make_axes_locatable(ax[0])
cax = divider.append_axes('right', size='5%', pad=0.1)
cbar = plt.colorbar(sc, cax=cax, orientation='vertical')
cbar.set_label('Energy of sheep')
sns.distplot(energy_sheep, kde=False, bins=10, ax=ax[1], label='Sheep')
sns.distplot(energy_wolves, kde=False, bins=10, ax=ax[1], label='Wolves')
ax[1].set_xlabel('Energy')
ax[1].set_ylabel('Counts')
ax[1].legend()
fig.set_size_inches(14,5)
plt.show()
###Output
_____no_output_____
###Markdown
The `repeat_report` method returns a Pandas dataframe containing reported values over a given number of ticks, for one or multiple reporters. By default, this assumes the model is run with the "go" NetLogo command; this can be set by passing an optional `go` argument. The dataframe is indexed by ticks, with labeled columns for each reporter. In this case, we track the number of wolf and sheep agents over 200 ticks; the outcomes are first plotted as a function of time. The number of wolf agents is then plotted as a function of the number of sheep agents, to approximate a phase-space plot.
###Code
counts = netlogo.repeat_report(['count wolves','count sheep'], 200, go='go')
fig, ax = plt.subplots(1, 2)
counts.plot(x=counts.index, ax=ax[0])
ax[0].set_xlabel('Ticks')
ax[0].set_ylabel('Counts')
ax[1].plot(counts['count wolves'], counts['count sheep'])
ax[1].set_xlabel('Wolves')
ax[1].set_ylabel('Sheep')
fig.set_size_inches(12,5)
plt.show()
###Output
_____no_output_____
###Markdown
The `repeat_report` method can also be used with reporters that return a NetLogo list. In this case, the list is converted to a numpy array. As an example, we track the energy of the wolf and sheep agents over 5 ticks, and plot the distribution of the wolves' energy at the final tick recorded in the dataframe.To illustrate different data types, we also track the `[sheep_str] of sheep` reporter (which returns a string property across the sheep agents, converted to a numpy object array), `count sheep` (returning a single numerical variable), and `glob_str` (returning a single string variable).
###Code
energy_df = netlogo.repeat_report(['[energy] of wolves',
'[energy] of sheep',
'[sheep_str] of sheep',
'count sheep',
'glob_str'], 5)
fig, ax = plt.subplots(1)
sns.distplot(energy_df['[energy] of wolves'].iloc[-1], kde=False, bins=20, ax=ax)
ax.set_xlabel('Energy')
ax.set_ylabel('Counts')
fig.set_size_inches(4,4)
plt.show()
energy_df.head()
###Output
_____no_output_____
###Markdown
The `patch_report` method can be used to return a dataframe which (for this example) contains the `countdown` attribute of each NetLogo patch. This dataframe essentially replicates the NetLogo environment, with column labels corresponding to the pxcor patch coordinates, and indices following the pycor coordinates.
###Code
countdown_df = netlogo.patch_report('countdown')
fig, ax = plt.subplots(1)
patches = sns.heatmap(countdown_df, xticklabels=5, yticklabels=5, cbar_kws={'label':'countdown'}, ax=ax)
ax.set_xlabel('pxcor')
ax.set_ylabel('pycor')
ax.set_aspect('equal')
fig.set_size_inches(8,4)
plt.show()
###Output
_____no_output_____
###Markdown
The dataframes can be manipulated with any of the existing Pandas functions, for instance by exporting to an Excel file. The `patch_set` method provides the inverse functionality to `patch_report`, and updates the NetLogo environment from a dataframe.
###Code
countdown_df.to_excel('countdown.xlsx')
netlogo.patch_set('countdown', countdown_df.max()-countdown_df)
countdown_update_df = netlogo.patch_report('countdown')
fig, ax = plt.subplots(1)
patches = sns.heatmap(countdown_update_df, xticklabels=5, yticklabels=5, cbar_kws={'label':'countdown'}, ax=ax)
ax.set_xlabel('pxcor')
ax.set_ylabel('pycor')
ax.set_aspect('equal')
fig.set_size_inches(8,4)
plt.show()
###Output
_____no_output_____
###Markdown
Finally, the `kill_workspace()` method shuts down the NetLogo instance.
###Code
netlogo.kill_workspace()
###Output
_____no_output_____ |
cso/4_refs_reasoning/stats.ipynb | ###Markdown
Find unique URI pairs and unique URIs:
###Code
pairs = set()
uris = set()
for _, r in df.iterrows():
if r['a'] < r['b']:
pair = r['a'] + '\t' + r['b']
else:
pair = r['b'] + '\t' + r['a']
uris.add(r['a'])
uris.add(r['b'])
pairs.add(pair)
print(f'All pairs found: {len(df)}')
print(f'Unique pairs: {len(pairs)}')
print(f'Unique URIs: {len(uris)}')
###Output
All pairs found: 14914
Unique pairs: 7457
Unique URIs: 3250
###Markdown
Dump results
###Code
with open('results/unique_pairs.tsv', 'wt') as f:
f.writelines(pair + '\n' for pair in pairs)
###Output
_____no_output_____ |
images_wm.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Load images with tf.data This tutorial provides a simple example of how to load an image dataset using `tf.data`.The dataset used in this example is distributed as directories of images, with one class of image per directory. Setup !pip uninstall tensorflow
###Code
!pip install tensorflow==2.0.0-beta0
#!pip install tfds-nightly
#!pip install -q tf-nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
# tf.enable_eager_execution() #-Internet says it's on by default now
tf.__version__
AUTOTUNE = tf.data.experimental.AUTOTUNE
###Output
_____no_output_____
###Markdown
Download and inspect the dataset Retrieve the imagesBefore you start any training, you'll need a set of images to teach the network about the new classes you want to recognize. We've created an archive of creative-commons licensed flower photos to use initially.
###Code
import pathlib
data_root_orig = tf.keras.utils.get_file('flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
data_root = pathlib.Path(data_root_orig)
print(data_root)
###Output
/Users/werlindo/.keras/datasets/flower_photos
###Markdown
⇧ Downloads tarred folder to predefined location After downloading 218MB, you should now have a copy of the flower photos available:
###Code
for item in data_root.iterdir():
print(item)
###Output
/Users/werlindo/.keras/datasets/flower_photos/roses
/Users/werlindo/.keras/datasets/flower_photos/sunflowers
/Users/werlindo/.keras/datasets/flower_photos/daisy
/Users/werlindo/.keras/datasets/flower_photos/dandelion
/Users/werlindo/.keras/datasets/flower_photos/tulips
/Users/werlindo/.keras/datasets/flower_photos/LICENSE.txt
###Markdown
⇧ Print out all the folder paths
###Code
import random
all_image_paths = list(data_root.glob('*/*'))
all_image_paths = [str(path) for path in all_image_paths]
random.shuffle(all_image_paths)
image_count = len(all_image_paths)
image_count
all_image_paths[:10]
###Output
_____no_output_____
###Markdown
- ⇧ glob together all the folder paths - ⇧ Then just get the paths as list of strings - ⇧ And then shuffle the list, just to see a variety Inspect the imagesNow let's have a quick look at a couple of the images, so we know what we're dealing with:
###Code
import os
attributions = (data_root/"LICENSE.txt").open(encoding='utf-8').readlines()[4:]
attributions = [line.split(' CC-BY') for line in attributions]
attributions = dict(attributions)
attributions
###Output
_____no_output_____
###Markdown
- ⇧ Make a dictionary of attribution info
###Code
import IPython.display as display
def caption_image(image_path):
image_rel = pathlib.Path(image_path).relative_to(data_root)
return "Image (CC BY 2.0) " + ' - '.join(attributions[str(image_rel)].split(' - ')[:-1])
for n in range(3):
image_path = random.choice(all_image_paths)
display.display(display.Image(image_path))
print(caption_image(image_path))
print()
###Output
_____no_output_____
###Markdown
- ⇧ Create a caption function - Then sample some images with dynamic captions Determine the label for each image List the available labels:
###Code
label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir())
label_names
###Output
_____no_output_____
###Markdown
- ⇧ Uses the folder names as labels Assign an index to each label:
###Code
label_to_index = dict((name, index) for index,name in enumerate(label_names))
label_to_index
###Output
_____no_output_____
###Markdown
Create a list of every file, and its label index - ⇧ Index the labels
###Code
all_image_labels = [label_to_index[pathlib.Path(path).parent.name]
for path in all_image_paths]
print("First 10 labels indices: ", all_image_labels[:10])
###Output
First 10 labels indices: [2, 3, 4, 3, 1, 1, 0, 2, 1, 2]
###Markdown
- ⇧ Check that indices made sense Load and format the images TensorFlow includes all the tools you need to load and process images:
###Code
img_path = all_image_paths[0]
img_path
###Output
_____no_output_____
###Markdown
- ⇧ Check again that we have all image paths here is the raw data:
###Code
#img_raw = tf.read_file(img_path) - .read_file appears to have been deprecated (wm)
img_raw = tf.io.read_file(img_path)
print(repr(img_raw)[:100]+"...")
###Output
<tf.Tensor: id=1, shape=(), dtype=string, numpy=b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x...
###Markdown
- ⇧ get one image Decode it into an image tensor:
###Code
#img_tensor = tf.image.decode_image(img_raw) #Original
img_tensor = tf.image.decode_image(img_raw)
print(img_tensor.shape)
print(img_tensor.dtype)
###Output
(228, 320, 3)
<dtype: 'uint8'>
###Markdown
- ⇧ Decode into image tensor ...like it says 😂 Resize it for your model:
###Code
img_final = tf.image.resize(img_tensor, [192, 192])
img_final = img_final/255.0
print(img_final.shape)
print(img_final.numpy().min())
print(img_final.numpy().max())
###Output
(192, 192, 3)
0.064093135
1.0
###Markdown
- ⇧ Resize it Wrap up these up in simple functions for later.
###Code
def preprocess_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [192, 192])
image /= 255.0 # normalize to [0,1] range
return image
def load_and_preprocess_image(path):
# image = tf.read_file(path)
image = tf.io.read_file(path)
return preprocess_image(image)
###Output
_____no_output_____
###Markdown
- ⇧ Wrap the last few steps into repeatable functions
###Code
import matplotlib.pyplot as plt
image_path = all_image_paths[0]
label = all_image_labels[0]
plt.imshow(load_and_preprocess_image(img_path))
plt.grid(False)
plt.xlabel(caption_image(img_path).encode('utf-8'))
plt.title(label_names[label].title())
print()
###Output
###Markdown
- ⇧ Demonstrate that the above functions worked! Build a `tf.data.Dataset` A dataset of images The easiest way to build a `tf.data.Dataset` is using the `from_tensor_slices` method.Slicing the array of strings, results in a dataset of strings:
###Code
path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
print(path_ds)
###Output
<TensorSliceDataset shapes: (), types: tf.string>
###Markdown
- ⇧ Create dataset of images from the list of paths The `output_shapes` and `output_types` fields describe the content of each item in the dataset. In this case it is a set of scalar binary-strings <2.0?print('shape: ', repr(path_ds.output_shapes))print('type: ', path_ds.output_types)print()print(path_ds) - ⇧ This isn't in the 2.0beta documentation, so maybe deprecated? Now create a new dataset that loads and formats images on the fly by mapping `preprocess_image` over the dataset of paths.
###Code
image_ds = path_ds.map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE)
###Output
_____no_output_____
###Markdown
- ⇧ This creates the dataset?
###Code
import matplotlib.pyplot as plt
plt.figure(figsize=(8,8))
for n,image in enumerate(image_ds.take(4)):
plt.subplot(2,2,n+1)
plt.imshow(image)
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.xlabel(caption_image(all_image_paths[n]))
plt.show()
###Output
_____no_output_____
###Markdown
- ⇧ "Plot" out sample of photos! A dataset of `(image, label)` pairs Using the same `from_tensor_slices` method we can build a dataset of labels
###Code
label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(all_image_labels, tf.int64))
###Output
_____no_output_____
###Markdown
- ⇧ Start building labels dataset.
###Code
for label in label_ds.take(10):
print(label_names[label.numpy()])
###Output
roses
sunflowers
tulips
sunflowers
dandelion
dandelion
daisy
roses
dandelion
roses
###Markdown
- ⇧ Checkout the first 10 labels. Since the datasets are in the same order we can just zip them together to get a dataset of `(image, label)` pairs.
###Code
image_label_ds = tf.data.Dataset.zip((image_ds, label_ds))
###Output
_____no_output_____
###Markdown
- ⇧ Zip the images up as tuples. The new dataset's `shapes` and `types` are tuples of shapes and types as well, describing each field:
###Code
print(image_label_ds)
###Output
<ZipDataset shapes: ((192, 192, 3), ()), types: (tf.float32, tf.int64)>
###Markdown
Note: When you have arrays like `all_image_labels` and `all_image_paths` an alternative to `tf.data.dataset.Dataset.zip` is to slice the pair of arrays.
###Code
ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))
# The tuples are unpacked into the positional arguments of the mapped function
def load_and_preprocess_from_path_label(path, label):
return load_and_preprocess_image(path), label
image_label_ds = ds.map(load_and_preprocess_from_path_label)
image_label_ds
###Output
_____no_output_____
###Markdown
- ⇧ If you don't want to zip, you can slice the individual arrays. Basic methods for training To train a model with this dataset you will want the data:* To be well shuffled.* To be batched.* To repeat forever.* Batches to be available as soon as possible.These features can be easily added using the `tf.data` api.
###Code
BATCH_SIZE = 32
# Setting a shuffle buffer size as large as the dataset ensures that the data is
# completely shuffled.
ds = image_label_ds.shuffle(buffer_size=image_count)
ds = ds.repeat()
ds = ds.batch(BATCH_SIZE)
# `prefetch` lets the dataset fetch batches, in the background while the model is training.
ds = ds.prefetch(buffer_size=AUTOTUNE)
ds
###Output
_____no_output_____
###Markdown
There are a few things to note here:1. The order is important. * A `.shuffle` before a `.repeat` would shuffle items across epoch boundaries (some items will be seen twice before others are seen at all). * A `.shuffle` after a `.batch` would shuffle the order of the batches, but not shuffle the items across batches.1. We use a `buffer_size` the same size as the dataset for a full shuffle. Up to the dataset size, large values provide better randomization, but use more memory.1. The shuffle buffer is filled before any elements are pulled from it. So a large `buffer_size` may cause a delay when your `Dataset` is starting.1. The shuffled dataset doesn't report the end of a dataset until the shuffle-buffer is completely empty. The `Dataset` is restarted by `.repeat`, causing another wait for the shuffle-buffer to be filled.This last point can be addressed by using the `tf.data.Dataset.apply` method with the fused `tf.data.experimental.shuffle_and_repeat` function:
###Code
ds = image_label_ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE)
ds = ds.prefetch(buffer_size=AUTOTUNE)
ds
###Output
WARNING: Logging before flag parsing goes to stderr.
W0616 22:24:31.509460 4406490560 deprecation.py:323] From <ipython-input-48-4dc713bd4d84>:2: shuffle_and_repeat (from tensorflow.python.data.experimental.ops.shuffle_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.shuffle(buffer_size, seed)` followed by `tf.data.Dataset.repeat(count)`. Static tf.data optimizations will take care of using the fused implementation.
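###Markdown
To see the element-versus-batch ordering point from the notes above concretely, here is a tiny sketch (an illustrative addition on a toy range dataset, not part of the flowers pipeline): shuffling before batching mixes individual elements across batches, while shuffling after batching only reorders whole batches.
###Code
toy = tf.data.Dataset.range(8)

# Shuffle elements, then batch: elements are mixed across batch boundaries.
for batch in toy.shuffle(8).batch(4):
    print('shuffle -> batch:', batch.numpy())

# Batch, then shuffle: batches change order, but each keeps its contiguous elements.
for batch in toy.batch(4).shuffle(2):
    print('batch -> shuffle:', batch.numpy())
###Output
_____no_output_____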
###Markdown
Pipe the dataset to a modelFetch a copy of MobileNet v2 from `tf.keras.applications`.This will be used for a simple transfer learning example.Set the MobileNet weights to be non-trainable:
###Code
mobile_net = tf.keras.applications.MobileNetV2(input_shape=(192, 192, 3), include_top=False)
mobile_net.trainable=False
###Output
Downloading data from https://github.com/JonathanCMitchell/mobilenet_v2_keras/releases/download/v1.1/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_192_no_top.h5
9412608/9406464 [==============================] - 4s 0us/step
###Markdown
This model expects its input to be normalized to the `[-1,1]` range:```help(keras_applications.mobilenet_v2.preprocess_input)```...This function applies the "Inception" preprocessing which converts the RGB values from [0, 255] to [-1, 1]... So before passing it to the MobileNet model, we need to convert the input from a range of `[0,1]` to `[-1,1]`.
###Code
def change_range(image,label):
return 2*image-1, label
keras_ds = ds.map(change_range)
###Output
_____no_output_____
###Markdown
The MobileNet returns a `6x6` spatial grid of features for each image.Pass it a batch of images to see:
###Code
# The dataset may take a few seconds to start, as it fills its shuffle buffer.
image_batch, label_batch = next(iter(keras_ds))
feature_map_batch = mobile_net(image_batch)
print(feature_map_batch.shape)
###Output
(32, 6, 6, 1280)
###Markdown
So build a model wrapped around MobileNet, and use `tf.keras.layers.GlobalAveragePooling2D` to average over those space dimensions, before the output `tf.keras.layers.Dense` layer:
###Code
model = tf.keras.Sequential([
mobile_net,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(label_names))])
###Output
_____no_output_____
###Markdown
Now it produces outputs of the expected shape:
###Code
logit_batch = model(image_batch).numpy()
print("min logit:", logit_batch.min())
print("max logit:", logit_batch.max())
print()
print("Shape:", logit_batch.shape)
###Output
min logit: -2.6146884
max logit: 2.38903
Shape: (32, 5)
###Markdown
Compile the model to describe the training procedure: model.compile(optimizer=tf.train.AdamOptimizer(), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=["accuracy"])
###Code
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
There are 2 trainable variables: the Dense `weights` and `bias`:
###Code
len(model.trainable_variables)
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
mobilenetv2_1.00_192 (Model) (None, 6, 6, 1280) 2257984
_________________________________________________________________
global_average_pooling2d (Gl (None, 1280) 0
_________________________________________________________________
dense (Dense) (None, 5) 6405
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________
###Markdown
Train the model.Normally you would specify the real number of steps per epoch, but for demonstration purposes only run 3 steps. steps_per_epoch=tf.ceil(len(all_image_paths)/BATCH_SIZE).numpy()steps_per_epoch
###Code
steps_per_epoch=tf.math.ceil(len(all_image_paths)/BATCH_SIZE).numpy()
steps_per_epoch
model.fit(ds, epochs=20, steps_per_epoch=3)
###Output
Epoch 1/20
3/3 [==============================] - 16s 5s/step - loss: 2.2829 - accuracy: 0.3229
Epoch 2/20
3/3 [==============================] - 3s 1s/step - loss: 1.9508 - accuracy: 0.2812
Epoch 3/20
3/3 [==============================] - 4s 1s/step - loss: 1.7513 - accuracy: 0.3333
Epoch 4/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2604
Epoch 5/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2917
Epoch 6/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.3021
Epoch 7/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2604
Epoch 8/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.1250
Epoch 9/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2188
Epoch 10/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2708
Epoch 11/20
3/3 [==============================] - 5s 2s/step - loss: 1.6094 - accuracy: 0.1875
Epoch 12/20
3/3 [==============================] - 4s 1s/step - loss: 1.7437 - accuracy: 0.3125
Epoch 13/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.1875
Epoch 14/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.1354
Epoch 15/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.1979
Epoch 16/20
3/3 [==============================] - 5s 2s/step - loss: 1.6094 - accuracy: 0.2292
Epoch 17/20
3/3 [==============================] - 5s 2s/step - loss: 1.6094 - accuracy: 0.2708
Epoch 18/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2396
Epoch 19/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2188
Epoch 20/20
3/3 [==============================] - 4s 1s/step - loss: 1.6094 - accuracy: 0.2188
###Markdown
PerformanceNote: This section just shows a couple of easy tricks that may help performance. For an in depth guide see [Input Pipeline Performance](https://www.tensorflow.org/guide/performance/datasets).The simple pipeline used above reads each file individually, on each epoch. This is fine for local training on CPU but may not be sufficient for GPU training, and is totally inappropriate for any sort of distributed training. To investigate, first build a simple function to check the performance of our datasets:
###Code
import time
def timeit(ds, batches=2*steps_per_epoch+1):
overall_start = time.time()
# Fetch a single batch to prime the pipeline (fill the shuffle buffer),
# before starting the timer
it = iter(ds.take(batches+1))
next(it)
start = time.time()
for i,(images,labels) in enumerate(it):
if i%10 == 0:
print('.',end='')
print()
end = time.time()
duration = end-start
print("{} batches: {} s".format(batches, duration))
print("{:0.5f} Images/s".format(BATCH_SIZE*batches/duration))
print("Total time: {}s".format(end-overall_start))
###Output
_____no_output_____
###Markdown
The performance of the current dataset is:
###Code
ds = image_label_ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
ds
timeit(ds)
###Output
_____no_output_____
###Markdown
Cache Use `tf.data.Dataset.cache` to easily cache calculations across epochs. This is especially performant if the data fits in memory.Here the images are cached, after being pre-processed (decoded and resized):
###Code
ds = image_label_ds.cache()
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
ds
timeit(ds)
###Output
_____no_output_____
###Markdown
One disadvantage to using an in memory cache is that the cache must be rebuilt on each run, giving the same startup delay each time the dataset is started:
###Code
timeit(ds)
###Output
_____no_output_____
###Markdown
If the data doesn't fit in memory, use a cache file:
###Code
ds = image_label_ds.cache(filename='./cache.tf-data')
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE).prefetch(1)
ds
timeit(ds)
###Output
_____no_output_____
###Markdown
The cache file also has the advantage that it can be used to quickly restart the dataset without rebuilding the cache. Note how much faster it is the second time:
###Code
timeit(ds)
###Output
_____no_output_____
###Markdown
TFRecord File Raw image dataTFRecord files are a simple format to store a sequence of binary blobs. By packing multiple examples into the same file, TensorFlow is able to read multiple examples at once, which is especially important for performance when using a remote storage service such as GCS.First, build a TFRecord file from the raw image data:
###Code
image_ds = tf.data.Dataset.from_tensor_slices(all_image_paths).map(tf.read_file)
tfrec = tf.data.experimental.TFRecordWriter('images.tfrec')
tfrec.write(image_ds)
###Output
_____no_output_____
###Markdown
Next build a dataset that reads from the TFRecord file and decodes/reformats the images using the `preprocess_image` function we defined earlier.
###Code
image_ds = tf.data.TFRecordDataset('images.tfrec').map(preprocess_image)
###Output
_____no_output_____
###Markdown
Zip that with the labels dataset we defined earlier, to get the expected `(image,label)` pairs.
###Code
ds = tf.data.Dataset.zip((image_ds, label_ds))
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE)
ds
timeit(ds)
###Output
_____no_output_____
###Markdown
This is slower than the `cache` version because we have not cached the preprocessing. Serialized Tensors To save some preprocessing to the TFRecord file, first make a dataset of the processed images, as before:
###Code
paths_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
image_ds = paths_ds.map(load_and_preprocess_image)
image_ds
###Output
_____no_output_____
###Markdown
Now instead of a dataset of `.jpeg` strings, this is a dataset of tensors. To serialize this to a TFRecord file, you first convert the dataset of tensors to a dataset of strings.
###Code
ds = image_ds.map(tf.serialize_tensor)
ds
tfrec = tf.data.experimental.TFRecordWriter('images.tfrec')
tfrec.write(ds)
###Output
_____no_output_____
###Markdown
With the preprocessing cached, data can be loaded from the TFRecord file quite efficiently. Just remember to de-serialize the tensor before trying to use it.
###Code
ds = tf.data.TFRecordDataset('images.tfrec')
def parse(x):
result = tf.parse_tensor(x, out_type=tf.float32)
result = tf.reshape(result, [192, 192, 3])
return result
ds = ds.map(parse, num_parallel_calls=AUTOTUNE)
ds
###Output
_____no_output_____
###Markdown
Now, add the labels and apply the same standard operations as before:
###Code
ds = tf.data.Dataset.zip((ds, label_ds))
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE)
ds
timeit(ds)
###Output
_____no_output_____ |
archive/data_analysis.ipynb | ###Markdown
Portuguese
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import balanced_accuracy_score

pt_vec = TfidfVectorizer(input="content", analyzer=lambda x: x.tolist(), max_features=20000, min_df=2)
pt_df_train_tfidf = pt_vec.fit_transform(pt_df_train.words)
pt_df_dev_tfidf = pt_vec.transform(pt_df_dev.words)
pt_model = SGDClassifier(n_jobs=-1, loss="log", verbose=10, random_state=42)
pt_model.fit(pt_df_train_tfidf, pt_df_train.category)
print(balanced_accuracy_score(pt_df_train.category, pt_model.predict(pt_df_train_tfidf)))
print(balanced_accuracy_score(pt_df_dev.category, pt_model.predict(pt_df_dev_tfidf)))
###Output
_____no_output_____ |
Bulldozer_sales.ipynb | ###Markdown
Predicting sale price of bulldozers using machine learning
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
df = pd.read_csv('data/bluebook-for-bulldozers/TrainAndValid.csv',
low_memory = False)
df.info()
df.isna().sum()
df.saledate.dtype
fig, ax = plt.subplots()
ax.scatter(df['saledate'][:1000], df['SalePrice'][:1000]);
df.SalePrice.plot.hist();
###Output
_____no_output_____
###Markdown
Parsing dates
###Code
# Import data again but this time parse dates
df = pd.read_csv("data/bluebook-for-bulldozers/TrainAndValid.csv",
low_memory=False,
parse_dates=['saledate'])
df.saledate.dtype
fig, ax = plt.subplots()
ax.scatter(df['saledate'][:1000], df['SalePrice'][:1000]);
df.head().T
df.saledate
# sort dataframe by saledate
df.sort_values(by=['saledate'], inplace=True, ascending=True)
df.saledate
df.head()
# Make a copy
df_temp = df.copy()
df_temp.head()
###Output
_____no_output_____
###Markdown
Add datetime parameters to `saledate` column
###Code
df_temp['saleYear'] = df_temp.saledate.dt.year
df_temp['saleMonth'] = df_temp.saledate.dt.month
df_temp['saleDay'] = df_temp.saledate.dt.day
df_temp['saleDayOfWeek'] = df_temp.saledate.dt.dayofweek
df_temp['saleDayOfYear'] = df_temp.saledate.dt.dayofyear
df_temp.head().T
# Now that we have enriched the df with datetime params, we can remove the saledate column
df_temp.drop('saledate', axis=1, inplace=True)
df_temp.columns  # confirm that saledate has been removed
df_temp.state.value_counts()
###Output
_____no_output_____
###Markdown
Modelling
###Code
from sklearn.ensemble import RandomForestRegressor
df_temp.info()
# Converting Data into numbers
pd.api.types.is_string_dtype(df_temp['UsageBand'])
# Find the columns containing strings
for label, content in df_temp.items():
if pd.api.types.is_string_dtype(content) == True:
print(label)
for label, content in df.items():
if pd.api.types.is_string_dtype(content) == True:
print(content)
break
# convert all strings into datatype of category
for label, content in df_temp.items():
if pd.api.types.is_string_dtype(content):
df_temp[label] = content.astype('category').cat.as_ordered()
df_temp.state.dtype
df_temp.isna().sum() / len(df_temp)
# Save preprocessed data
df_temp.to_csv('data/bluebook-for-bulldozers/train_temp.csv',
index=False)
# Import preprocessed data
df_temp = pd.read_csv("data/bluebook-for-bulldozers/train_temp.csv",
low_memory=False)
df_temp.head()
# check which columns are numeric
for label, content in df_temp.items():
if pd.api.types.is_numeric_dtype(content):
print(label)
# check which numeric columns have missing values
for label, content in df_temp.items():
if pd.api.types.is_numeric_dtype(content):
if pd.isnull(content).sum():
print(label)
# fill numeric values with median
for label, content in df_temp.items():
if pd.api.types.is_numeric_dtype(content):
if pd.isnull(content).sum():
df_temp[label+'is_missing'] = pd.isnull(content)
df_temp[label] = content.fillna(content.median())
# check if there are missing numeric values
for label, content in df_temp.items():
if pd.api.types.is_numeric_dtype(content):
if pd.isnull(content).sum():
print(label)
df_temp.auctioneerIDis_missing.value_counts()
df_temp.isnull().sum()
df_temp.state.dtype
# fill missing categorical values and turn the categories into numeric codes
for label, content in df_temp.items():
if not pd.api.types.is_numeric_dtype(content):
df_temp[label+'_ismissing'] = pd.isnull(content)
df_temp[label] = pd.Categorical(content).codes + 1
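        # pd.Categorical assigns missing values the code -1, so the +1 maps "missing" to 0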
df_temp.state.dtype
df_temp.state
df_temp.T
df_temp.isna().sum()[:10]
df_temp.saleYear
df_temp.saleMonth
# splitting data into training and validation
df_val = df_temp[df_temp['saleYear'] == 2012]
df_train = df_temp[df_temp.saleYear != 2012]
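# time-based split: the validation set is the most recent sale year (2012), so the model is evaluated on "future" data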
len(df_val), len(df_train)
# split data into X and y
X_train, y_train = df_train.drop('SalePrice', axis=1), df_train.SalePrice
X_valid, y_valid = df_val.drop('SalePrice', axis=1), df_val.SalePrice
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
# create evaluation function
from sklearn.metrics import mean_squared_log_error, mean_absolute_error, r2_score
def rmsle(y_test, y_preds):
'''
Calculates root mean squared log error between predictions
and true labels
'''
return np.sqrt(mean_squared_log_error(y_test, y_preds))
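# (sklearn's mean_squared_log_error is mean((log(1+y_test) - log(1+y_preds))**2),
#  so RMSLE penalises relative error rather than absolute dollar error)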
# create function to evaluate model on a few different levels
def show_score(model):
train_preds = model.predict(X_train)
val_preds = model.predict(X_valid)
    score = {
        "Training MAE": mean_absolute_error(y_train, train_preds),
        "Valid MAE": mean_absolute_error(y_valid, val_preds),
        "Training RMSLE": rmsle(y_train, train_preds),
        "Valid RMSLE": rmsle(y_valid, val_preds),
        "Training R^2": r2_score(y_train, train_preds),
        "Valid R^2": r2_score(y_valid, val_preds)
    }
return score
###Output
_____no_output_____
###Markdown
Reducing data to train models faster
###Code
# create a model
model = RandomForestRegressor(n_jobs=-1,
random_state=42,
max_samples=10000)
%%time
model.fit(X_train, y_train)
show_score(model)
model.predict(X_train)[:10]
y_train[:10]
%%time
from sklearn.model_selection import RandomizedSearchCV
# Different RandomForestRegressor hyperparameters
rf_grid = {
"n_estimators": np.arange(1, 100, 10),
"max_depth": [None, 3, 5, 10],
"min_samples_split": np.arange(2, 20, 2),
"min_samples_leaf": np.arange(1, 20, 2),
"max_features": [0.5, 1, "sqrt", "auto"],
"max_samples": [10000]
}
# Instantiate RandomizedSearchCV over the RandomForestRegressor hyperparameter grid
rs_model = RandomizedSearchCV(RandomForestRegressor(n_jobs=-1,
random_state=42),
param_distributions=rf_grid,
cv=5,
verbose=True,
n_iter=2
)
# fit the RandomizedSearchCV model
rs_model.fit(X_train, y_train)
rs_model.best_params_
show_score(rs_model)
%%time
# most ideal hyperparameters
ideal_model = RandomForestRegressor(n_estimators=40,
min_samples_leaf=1,
min_samples_split=14,
max_features=0.5,
n_jobs=-1,
max_samples=None,
random_state=42)
# fit the ideal model
ideal_model.fit(X_train, y_train)
show_score(ideal_model)
###Output
_____no_output_____
###Markdown
Make predictions on test data using ideal model
###Code
df_test = pd.read_csv("data/bluebook-for-bulldozers/Test.csv",
low_memory=False,
parse_dates=['saledate'])
df_test.head()
###Output
_____no_output_____
###Markdown
Preprocessing the test data to fit the format of the training set
###Code
df = pd.read_csv('data/bluebook-for-bulldozers/TrainAndValid.csv',
low_memory=False,
parse_dates=['saledate'])
def preprocess_data(df):
'''
Performs transformations on df and returns transformed df
'''
df['saleYear'] = df.saledate.dt.year
df['saleMonth'] = df.saledate.dt.month
df['saleDay'] = df.saledate.dt.day
df['saleDayOfWeek'] = df.saledate.dt.dayofweek
df['saleDayOfYear'] = df.saledate.dt.dayofyear
df.drop('saledate', axis=1, inplace=True)
# Fill the numeric rows with median
for label, content in df.items():
if pd.api.types.is_numeric_dtype(content):
if pd.isnull(content).sum():
df[label+'is_missing'] = pd.isnull(content)
df[label] = content.fillna(content.median())
# fill categorical missing data and turn categories into numbers
if not pd.api.types.is_numeric_dtype(content):
df[label+'missing'] = pd.isnull(content)
df[label] = pd.Categorical(content).codes+1
return df
# process the test data
df_test = pd.read_csv("data/bluebook-for-bulldozers/Test.csv",
low_memory=False,
parse_dates=['saledate'])
df_test = preprocess_data(df_test)
df_test.head()
X_train.shape
df_temp = preprocess_data(df)
df_temp.head()
df_val = df_temp[df_temp['saleYear'] == 2012]
df_train = df_temp[df_temp.saleYear != 2012]
# split data into X and y
X_train, y_train = df_train.drop('SalePrice', axis=1), df_train.SalePrice
X_valid, y_valid = df_val.drop('SalePrice', axis=1), df_val.SalePrice
X_train.head()
set(X_train.columns).difference(set(df_test.columns))
# Manually add the auctioneerIDis_missing column so df_test matches the training columns
df_test['auctioneerIDis_missing'] = False
df_test.head()
# finally make predictions on the test data
test_preds = ideal_model.predict(df_test)
test_preds
# formatting it correctly
df_preds = pd.DataFrame()
df_preds['SalesID'] = df_test['SalesID']
df_preds['SalesPrice'] = test_preds
df_preds
# export prediction data to csv
df_preds.to_csv('data/bluebook-for-bulldozers/test_predictions.csv')
# Helper function to plot feature importance
def plot_features(columns, importances, n=20):
df = (pd.DataFrame({ "Features": columns,
"feature_importances": importances})
.sort_values("feature_importances", ascending=False)
.reset_index(drop=True))
# plot dataframe
fig, ax = plt.subplots()
    ax.barh(df['Features'][:n], df["feature_importances"][:n])
ax.set_ylabel("Features")
ax.set_xlabel("Feature importance")
ax.invert_yaxis()
plot_features(X_train.columns, ideal_model.feature_importances_)
###Output
_____no_output_____ |
projects/mnist/notebooks/CNN.ipynb | ###Markdown
Pretty good accuracy! Now let's submit our predictions. Submitting predictions The predictions generated by the model form an array of length 28,000:
###Code
predictions.shape
###Output
_____no_output_____
###Markdown
Per the competition instructions, we need to submit a .csv file with the header `ImageId,Label` followed by one row per image (`1,0`, `2,0`, `3,0`, ...). To help with this I'll import pandas, create a dataframe with these column headers, and save it to .csv.
###Code
import pandas as pd
df_predictions = pd.DataFrame({'ImageId' : np.arange(len(predictions))+1,
'Label' : predictions})
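# the submission's ImageId is 1-based, hence the +1 on the 0-based index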
df_predictions.head(20)
df_predictions.to_csv('tf_cnn_v1_submission.csv', index=False)
###Output
_____no_output_____ |
step2_train-and-evaluation_face-recognize.ipynb | ###Markdown
Preparing Dataset
###Code
# Batch size, target image size and a data generator that rescales pixels and holds out 20% for validation
BATCH_SIZE = 32
TARGET_SIZE = (160, 160)
data_generator = ImageDataGenerator(rescale=1./255, validation_split=0.2)
# Generate batches of tensor image data for training
train_data_generator = data_generator.flow_from_directory(dataset_dir,
target_size = TARGET_SIZE,
batch_size = BATCH_SIZE,
class_mode = "categorical",
shuffle = False,
subset = "training")
# Generate batches of tensor image data for validation
validation_data_generator = data_generator.flow_from_directory(dataset_dir,
target_size = TARGET_SIZE,
batch_size = BATCH_SIZE,
class_mode = "categorical",
shuffle = False,
subset = "validation")
nb_train_samples = train_data_generator.samples
nb_validation_samples = validation_data_generator.samples
class_labels = train_data_generator.class_indices
print(class_labels)
import json
fp = open('class_labels.json', 'w')
json.dump(class_labels, fp)
fp.close()
sample_training_images, labels = next(train_data_generator)
def plot_images(images_arr, labels):
fig, axes = plt.subplots(4, 4 , figsize=(10,10))
axes = axes.flatten()
for img, lbs, ax in zip(images_arr, labels, axes):
ax.imshow(img)
ax.set_title(lbs)
ax.axis("off")
    plt.tight_layout()
plt.show()
plot_images(sample_training_images[:16], labels[:16])
###Output
C:\Users\fabio\anaconda3\lib\site-packages\matplotlib\text.py:1163: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
if s != self._text:
###Markdown
Architecture
###Code
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2
input_shape = (TARGET_SIZE[0], TARGET_SIZE[1], 3)
pretrained_model = MobileNetV2(input_shape = input_shape, weights = 'imagenet', include_top = False)
pretrained_model.trainable = False
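# freeze the pretrained ImageNet weights so only the new classifier head is trained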
model = Sequential()
model.add(pretrained_model)
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(len(class_labels)))
model.add(Activation('softmax'))
model.summary()
# Define early stopping and model checkpoint for optimizing epoch number and saving the best model
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.callbacks import ModelCheckpoint
es = EarlyStopping(monitor="val_loss", mode='min', verbose = 1, patience = 20)
mc = ModelCheckpoint(model_dir + model_name, monitor = 'val_accuracy', mode = 'max', verbose = 1,
save_best_only = True)
# Compile and fit your model
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
epochs = 100
history = model.fit(train_data_generator,
steps_per_epoch = train_data_generator.samples/train_data_generator.batch_size,
epochs = epochs,
validation_data = validation_data_generator,
validation_steps = validation_data_generator.samples/validation_data_generator.batch_size,
callbacks =[es, mc])
###Output
Epoch 1/100
5/5 [==============================] - 5s 514ms/step - loss: 16.6217 - accuracy: 0.5935 - val_loss: 11.7839 - val_accuracy: 0.5000
Epoch 00001: val_accuracy improved from -inf to 0.50000, saving model to ./models\MobileNetV2.h5
Epoch 2/100
5/5 [==============================] - 2s 364ms/step - loss: 9.8742 - accuracy: 0.3805 - val_loss: 1.6220 - val_accuracy: 0.6190
Epoch 00002: val_accuracy improved from 0.50000 to 0.61905, saving model to ./models\MobileNetV2.h5
Epoch 3/100
5/5 [==============================] - 2s 361ms/step - loss: 1.1020 - accuracy: 0.8362 - val_loss: 0.3578 - val_accuracy: 0.9524
Epoch 00003: val_accuracy improved from 0.61905 to 0.95238, saving model to ./models\MobileNetV2.h5
Epoch 4/100
5/5 [==============================] - 2s 324ms/step - loss: 0.3426 - accuracy: 0.9495 - val_loss: 2.8215 - val_accuracy: 0.6429
Epoch 00004: val_accuracy did not improve from 0.95238
Epoch 5/100
5/5 [==============================] - 2s 321ms/step - loss: 0.6785 - accuracy: 0.8848 - val_loss: 0.1642 - val_accuracy: 0.9286
Epoch 00005: val_accuracy did not improve from 0.95238
Epoch 6/100
5/5 [==============================] - 2s 348ms/step - loss: 0.0122 - accuracy: 0.9905 - val_loss: 3.1158e-04 - val_accuracy: 1.0000
Epoch 00006: val_accuracy improved from 0.95238 to 1.00000, saving model to ./models\MobileNetV2.h5
Epoch 7/100
5/5 [==============================] - 2s 364ms/step - loss: 0.0732 - accuracy: 0.9878 - val_loss: 2.9020e-04 - val_accuracy: 1.0000
Epoch 00007: val_accuracy did not improve from 1.00000
Epoch 8/100
5/5 [==============================] - 2s 325ms/step - loss: 5.1336e-04 - accuracy: 1.0000 - val_loss: 2.9712e-04 - val_accuracy: 1.0000
Epoch 00008: val_accuracy did not improve from 1.00000
Epoch 9/100
5/5 [==============================] - 2s 362ms/step - loss: 2.9659e-08 - accuracy: 1.0000 - val_loss: 0.0011 - val_accuracy: 1.0000
Epoch 00009: val_accuracy did not improve from 1.00000
Epoch 10/100
5/5 [==============================] - 2s 332ms/step - loss: 6.8845e-04 - accuracy: 1.0000 - val_loss: 0.0026 - val_accuracy: 1.0000
Epoch 00010: val_accuracy did not improve from 1.00000
Epoch 11/100
5/5 [==============================] - 2s 341ms/step - loss: 2.0564e-07 - accuracy: 1.0000 - val_loss: 0.0059 - val_accuracy: 1.0000
Epoch 00011: val_accuracy did not improve from 1.00000
Epoch 12/100
5/5 [==============================] - 2s 327ms/step - loss: 5.2457e-05 - accuracy: 1.0000 - val_loss: 0.0088 - val_accuracy: 1.0000
Epoch 00012: val_accuracy did not improve from 1.00000
Epoch 13/100
5/5 [==============================] - 2s 332ms/step - loss: 6.7140e-04 - accuracy: 1.0000 - val_loss: 0.0125 - val_accuracy: 1.0000
Epoch 00013: val_accuracy did not improve from 1.00000
Epoch 14/100
5/5 [==============================] - 2s 336ms/step - loss: 4.4203e-05 - accuracy: 1.0000 - val_loss: 0.0150 - val_accuracy: 1.0000
Epoch 00014: val_accuracy did not improve from 1.00000
Epoch 15/100
5/5 [==============================] - 2s 332ms/step - loss: 1.8606e-05 - accuracy: 1.0000 - val_loss: 0.0162 - val_accuracy: 1.0000
Epoch 00015: val_accuracy did not improve from 1.00000
Epoch 16/100
5/5 [==============================] - 2s 330ms/step - loss: 2.8170e-04 - accuracy: 1.0000 - val_loss: 0.0139 - val_accuracy: 1.0000
Epoch 00016: val_accuracy did not improve from 1.00000
Epoch 17/100
5/5 [==============================] - 2s 333ms/step - loss: 0.0049 - accuracy: 1.0000 - val_loss: 0.0357 - val_accuracy: 0.9762
Epoch 00017: val_accuracy did not improve from 1.00000
Epoch 18/100
5/5 [==============================] - 2s 381ms/step - loss: 0.0073 - accuracy: 0.9944 - val_loss: 0.0206 - val_accuracy: 0.9762
Epoch 00018: val_accuracy did not improve from 1.00000
Epoch 19/100
5/5 [==============================] - 2s 333ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0119 - val_accuracy: 1.0000
Epoch 00019: val_accuracy did not improve from 1.00000
Epoch 20/100
5/5 [==============================] - 2s 354ms/step - loss: 4.2356e-08 - accuracy: 1.0000 - val_loss: 0.0113 - val_accuracy: 1.0000
Epoch 00020: val_accuracy did not improve from 1.00000
Epoch 21/100
5/5 [==============================] - 2s 358ms/step - loss: 3.3762e-07 - accuracy: 1.0000 - val_loss: 0.0108 - val_accuracy: 1.0000
Epoch 00021: val_accuracy did not improve from 1.00000
Epoch 22/100
5/5 [==============================] - 2s 321ms/step - loss: 9.7502e-07 - accuracy: 1.0000 - val_loss: 0.0098 - val_accuracy: 1.0000
Epoch 00022: val_accuracy did not improve from 1.00000
Epoch 23/100
5/5 [==============================] - 2s 323ms/step - loss: 2.2570e-07 - accuracy: 1.0000 - val_loss: 0.0098 - val_accuracy: 1.0000
Epoch 00023: val_accuracy did not improve from 1.00000
Epoch 24/100
5/5 [==============================] - 2s 406ms/step - loss: 1.7814e-05 - accuracy: 1.0000 - val_loss: 0.0099 - val_accuracy: 1.0000
Epoch 00024: val_accuracy did not improve from 1.00000
Epoch 25/100
5/5 [==============================] - 2s 381ms/step - loss: 6.6416e-08 - accuracy: 1.0000 - val_loss: 0.0100 - val_accuracy: 1.0000
Epoch 00025: val_accuracy did not improve from 1.00000
Epoch 26/100
5/5 [==============================] - 2s 316ms/step - loss: 1.1628e-08 - accuracy: 1.0000 - val_loss: 0.0101 - val_accuracy: 1.0000
Epoch 00026: val_accuracy did not improve from 1.00000
Epoch 27/100
5/5 [==============================] - 2s 322ms/step - loss: 3.7148e-07 - accuracy: 1.0000 - val_loss: 0.0102 - val_accuracy: 1.0000
Epoch 00027: val_accuracy did not improve from 1.00000
Epoch 00027: early stopping
###Markdown
Evaluation
###Code
# Plot accuracy and loss for training and validation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy Value')
plt.xlabel('Epoch')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Loss Value')
plt.xlabel('Epoch')
plt.title('Training and Validation Loss')
plt.show()
from tensorflow.keras.models import load_model
# Load the best saved model
model = load_model(model_dir + model_name)
from sklearn.metrics import classification_report, confusion_matrix
Y_pred = model.predict(validation_data_generator, nb_validation_samples // BATCH_SIZE+1)
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix \n')
print(confusion_matrix(validation_data_generator.classes, y_pred))
print('\n')
print('Classification Report \n')
target_names = class_labels.keys()
print(classification_report(validation_data_generator.classes, y_pred, target_names=target_names))
###Output
Confusion Matrix
[[21 0]
[ 0 21]]
Classification Report
precision recall f1-score support
JessicaLuiza 1.00 1.00 1.00 21
LuizFabio 1.00 1.00 1.00 21
accuracy 1.00 42
macro avg 1.00 1.00 1.00 42
weighted avg 1.00 1.00 1.00 42
|
Lissajous.ipynb | ###Markdown
Lissajous drawing machine This is an interface to experiment with the Lissajous drawing machine, to preview the drawings before they are created. You can play around with the settings to create different images.
###Code
from simple_svg import Scene, Line
from IPython.display import SVG
import math
import serial
from ipywidgets import interact
@interact(multiplier=(1, 3), offset=(0.0, 0.03, 0.001), starter=(0.0, 3.142, 0.002), circle_size=(0.0, 1.0, 0.01), circle_speed=(0.0, 3.0, 1.0), circle_offset=(-0.05, 0.05, 0.001), circle_start=(0.0, 3.142, 0.002))
def harmonograph(multiplier, offset, starter, circle_size, circle_speed, circle_offset, circle_start):
pos = 0
spd = 0.05
decay = 6000
factor = multiplier + offset
circle_factor = circle_speed + circle_offset
min_x = max_x = min_y = max_y = prev_x = prev_y = 400
points = []
for pos in range(decay//10, decay):
adjx = int(400 + int(200 * (pos/float(decay)) * math.sin(pos * spd)))
adjy = int(400 + int(200 * (pos/float(decay)) * math.sin(starter + (pos * spd * factor))))
adjx += int(circle_size * 200 * (pos/float(decay)) * math.sin(circle_start + (pos * spd * circle_factor)))
adjy += int(circle_size * 200 * (pos/float(decay)) * math.cos(circle_start + (pos * spd * circle_factor)))
points.append([adjx, adjy])
min_x = min(min_x, adjx)
min_y = min(min_y, adjy)
max_x = max(max_x, adjx)
max_y = max(max_y, adjy)
scale = min(800.0/(max_x - min_x), 800.0/(max_y - min_y))
x_offset = ((max_x + min_x)/2 - 400)*70/400
y_offset = ((max_y + min_y)/2 - 400)*70/400
ser = serial.Serial('/dev/ttyACM0', 9600)
tosend = f'{factor} {starter} {circle_size} {circle_factor} {circle_start} {x_offset} {y_offset} {scale:.4f}'
print(tosend)
ser.write(tosend.encode('utf-8'))
scene = Scene('test', 800, 800)
for point in points:
point_x = scale*(point[0] - min_x)
point_y = scale*(point[1] - min_y)
scene.add(Line((prev_x, prev_y), (point_x, point_y)))
prev_x, prev_y = point_x, point_y
return SVG("\n".join(scene.strarray()))
###Output
_____no_output_____
###Markdown
Superposition of two waves in perpendicular direction\begin{equation}x = a \sin (2\pi f_1 t)\\y=b \sin (2\pi f_2 t - \phi)\end{equation}
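As a quick check of these formulas: with equal frequencies $f_1=f_2=f$ and phase $\phi=\pi/2$, $y=b\sin(2\pi f t-\pi/2)=-b\cos(2\pi f t)$, so $(x/a)^2+(y/b)^2=1$ and the figure is an ellipse (a circle when $a=b$); more generally the curve closes only when $f_1/f_2$ is a rational number.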
###Code
a = [10,30] # amplitude of first wave
f1 = [1,2,4,8,12,16] # frequency of first wave
b=[10,30] # amplitude of second wave
f2=[1,2,4,8,12,16] # frequency of second wave
phi=[0,np.pi/4,np.pi/2] # phase angles (0, 45, 90 degrees)
t = np.arange(0,8.0,0.01) # time
###Output
_____no_output_____
###Markdown
Example 1: same amplitude, zero phase, different frequencies (Lissajous figures)
###Code
plt.figure(figsize = [6,36])
for i in range(len(f2)):
plt.subplot(len(f2),1,i+1)
ax = plt.gca()
ax.set_facecolor('k') # backgound color
ax.grid(color='xkcd:sky blue') # grid color
x = a[0]*np.sin(2*np.pi*f1[2]*t)
y = b[0]*np.sin(2*np.pi*f2[i]*t-phi[0])
plt.plot(x,y, color ='g',label='f1='+str(f1[2])+',f2='+str(f2[i]))
plt.xlabel("x",color='r',fontsize=14)
plt.ylabel("y",color='r',fontsize=14)
ax.xaxis.set_minor_locator(AutoMinorLocator()) ##
ax.yaxis.set_minor_locator(AutoMinorLocator()) ###
ax.tick_params(which='both', width=2)
ax.tick_params(which='major', length=9)
ax.tick_params(which='minor', length=4)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
###Output
_____no_output_____
###Markdown
2. Same frequency and amplitude, different phase
###Code
plt.figure(figsize = [6,24])
for i in range(len(phi)):
plt.subplot(len(phi),1,i+1)
ax = plt.gca()
ax.set_facecolor('xkcd:sky blue')
ax.grid(color='g')
x = a[0]*np.sin(2*np.pi*f1[2]*t)
y = b[0]*np.sin(2*np.pi*f2[2]*t-phi[i])
plt.plot(x,y, color ='purple',label='phase='+str(phi[i]*180/np.pi))
plt.xlabel("x",color='g',fontsize=14)
plt.ylabel("y",color='g',fontsize=14)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
###Output
_____no_output_____
###Markdown
3. Same frequency, different phase and amplitude
###Code
plt.figure(figsize = [8,24])
for i in range(len(phi)):
plt.subplot(len(phi),1,i+1)
ax = plt.gca()
ax.grid(color='tab:brown')
x = a[0]*np.sin(2*np.pi*f1[2]*t)
y = b[1]*np.sin(2*np.pi*f2[2]*t-phi[i])
plt.plot(x,y, color ='r',label='phase='+str(phi[i]*180/np.pi))
plt.xlabel("x",color='r',fontsize=14)
plt.ylabel("y",color='r',fontsize=14)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
###Output
_____no_output_____
###Markdown
Superposition of two waves in the same direction\begin{equation}y_1 = a \sin (2\pi f_1 t)\\y_2=b \sin (2\pi f_2 t - \phi)\end{equation} Example 1: same amplitude and phase, different frequency
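For the equal-amplitude case plotted below ($a=b$), the identity $\sin A+\sin B=2\sin\frac{A+B}{2}\cos\frac{A-B}{2}$ gives $y_1+y_2=2a\,\sin\big(\pi(f_1+f_2)t-\tfrac{\phi}{2}\big)\cos\big(\pi(f_1-f_2)t+\tfrac{\phi}{2}\big)$, i.e. a fast oscillation at the average frequency inside a slowly varying beat envelope when $f_1\approx f_2$.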
###Code
f1=[1,2,3,4,5]
f2=[1,2,3,4,5]
plt.figure(figsize = [12,16])
for i in range(len(f2)):
plt.subplot(len(f2),1,i+1)
    ax = plt.gca() # get current axes
ax.set_facecolor('k')
ax.grid(False)
y1 = a[0]*np.sin(2*np.pi*f1[0]*t)
y2 = a[0]*np.sin(2*np.pi*f2[i]*t-phi[0])
y=y1+y2
plt.plot(t,y,color ='tab:olive',label='f1='+str(f1[0])+',f2='+str(f2[i]))
plt.xlabel("t",color='r',fontsize=14)
plt.ylabel("y",color='r',fontsize=14)
plt.legend()
plt.subplots_adjust(wspace = 0.5, hspace = 0.5)
plt.show()
###Output
_____no_output_____ |
notebooks/3.1-mc-preprocess-priv.ipynb | ###Markdown
Transforming data
###Code
cats = cols
cats.remove('NU_IDADE')
enc = preprocessing.OneHotEncoder(sparse=False)
# fit and transform in one call, then print the generated one-hot column names
out_enc = enc.fit_transform(df[cats])
new_cols = enc.get_feature_names(cats).tolist()
print(new_cols)
###Output
['QE_I01_A', 'QE_I01_B', 'QE_I01_C', 'QE_I01_D', 'QE_I01_E', 'QE_I02_A', 'QE_I02_B', 'QE_I02_C', 'QE_I02_D', 'QE_I02_E', 'QE_I02_F', 'QE_I04_A', 'QE_I04_B', 'QE_I04_C', 'QE_I04_D', 'QE_I04_E', 'QE_I04_F', 'QE_I05_A', 'QE_I05_B', 'QE_I05_C', 'QE_I05_D', 'QE_I05_E', 'QE_I05_F', 'QE_I06_A', 'QE_I06_B', 'QE_I06_C', 'QE_I06_D', 'QE_I06_F', 'QE_I07_A', 'QE_I07_B', 'QE_I07_C', 'QE_I07_D', 'QE_I07_E', 'QE_I07_F', 'QE_I07_G', 'QE_I07_H', 'QE_I08_A', 'QE_I08_B', 'QE_I08_C', 'QE_I08_D', 'QE_I08_E', 'QE_I08_F', 'QE_I08_G', 'QE_I09_A', 'QE_I09_B', 'QE_I09_C', 'QE_I09_D', 'QE_I09_E', 'QE_I09_F', 'QE_I10_A', 'QE_I10_B', 'QE_I10_C', 'QE_I10_D', 'QE_I10_E', 'QE_I11_A', 'QE_I11_B', 'QE_I11_C', 'QE_I11_D', 'QE_I11_E', 'QE_I11_F', 'QE_I11_G', 'QE_I11_H', 'QE_I11_I', 'QE_I11_J', 'QE_I11_K', 'QE_I12_A', 'QE_I12_B', 'QE_I12_C', 'QE_I12_D', 'QE_I12_E', 'QE_I12_F', 'QE_I13_A', 'QE_I13_B', 'QE_I13_C', 'QE_I13_D', 'QE_I13_E', 'QE_I13_F', 'QE_I14_A', 'QE_I14_B', 'QE_I14_E', 'QE_I14_F', 'QE_I15_A', 'QE_I15_B', 'QE_I15_C', 'QE_I15_D', 'QE_I15_E', 'QE_I15_F', 'QE_I17_A', 'QE_I17_B', 'QE_I17_C', 'QE_I17_D', 'QE_I17_E', 'QE_I17_F', 'QE_I18_A', 'QE_I18_B', 'QE_I18_C', 'QE_I18_D', 'QE_I18_E', 'QE_I19_A', 'QE_I19_B', 'QE_I19_C', 'QE_I19_D', 'QE_I19_E', 'QE_I19_F', 'QE_I19_G', 'QE_I20_A', 'QE_I20_B', 'QE_I20_C', 'QE_I20_D', 'QE_I20_E', 'QE_I20_F', 'QE_I20_G', 'QE_I20_H', 'QE_I20_I', 'QE_I20_J', 'QE_I20_K', 'QE_I21_A', 'QE_I21_B', 'QE_I22_A', 'QE_I22_B', 'QE_I22_C', 'QE_I22_D', 'QE_I22_E', 'QE_I23_A', 'QE_I23_B', 'QE_I23_C', 'QE_I23_D', 'QE_I23_E', 'QE_I24_A', 'QE_I24_B', 'QE_I24_C', 'QE_I24_D', 'QE_I24_E', 'QE_I25_A', 'QE_I25_B', 'QE_I25_C', 'QE_I25_D', 'QE_I25_E', 'QE_I25_F', 'QE_I25_G', 'QE_I25_H', 'TP_SEXO_F', 'TP_SEXO_M']
###Markdown
Create temporary dataframe for concatenation with original data
###Code
df_enc = pd.DataFrame(data=out_enc, columns=new_cols)
df_enc.index = df.index
# drop original columns and concatenate new encoded columns
df.drop(cats, axis=1, inplace=True)
df = pd.concat([df, df_enc], axis=1)
print(df.columns)
###Output
Index(['NU_ANO', 'CO_IES', 'CO_GRUPO', 'NU_IDADE', 'ANO_FIM_EM', 'ANO_IN_GRAD',
'QE_I01_A', 'QE_I01_B', 'QE_I01_C', 'QE_I01_D',
...
'QE_I25_A', 'QE_I25_B', 'QE_I25_C', 'QE_I25_D', 'QE_I25_E', 'QE_I25_F',
'QE_I25_G', 'QE_I25_H', 'TP_SEXO_F', 'TP_SEXO_M'],
dtype='object', length=149)
###Markdown
Feature selection
###Code
selector = VarianceThreshold() # instantiate with no threshold
# prefit object with df[cols]
selector.fit(df[new_cols])
# check feature variances before selection
np.quantile(selector.variances_, [0.25, 0.5, 0.75])
sns.distplot(selector.variances_, bins=10)
# set threshold into selector object
selector.set_params(threshold=np.quantile(selector.variances_, 0.5))
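# keep only the one-hot columns whose variance is above the median variance (roughly the top half)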
# refit and transform, store output in out_sel
out_sel = selector.fit_transform(df[new_cols])
# check which features were chosen
print(selector.get_support())
# filter in the selected features
df_sel = df[new_cols].iloc[:, selector.get_support()]
df_sel.shape
df_sel
df[new_cols]
df[new_cols].to_csv('../data/preprocessed/enade_2016a2018_priv_onehot_full.csv', index=False)
df_sel.to_csv('../data/preprocessed/enade_2016a2018_priv_onehot_sel.csv', index=False)
###Output
_____no_output_____ |
Cloud Pak for Data/SPSS C&DS/notebooks/binary/AI OpenScale and SPSS C&DS Engine.ipynb | ###Markdown
Working with SPSS Collaboration and Deployment Services Compatible with Cloud Pak for Data Only This notebook shows how to log the payload for a model deployed on an SPSS C&DS model serving engine using the Watson OpenScale Python SDK. Contents - Setup - Binding machine learning engine - Subscriptions - Performance monitor, scoring and payload logging - Quality monitor and feedback logging - Fairness, Drift monitoring and explanations **Note:** This notebook works correctly with kernel `Python 3.7.x`. Setup Sample model creation using SPSS Modeler - Download training data set from [here](https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/credit_risk_training.csv) - Download SPSS Modeler stream from [here](https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/models/german_credit_risk_tutorial.str) - Deploy the model using SPSS C&DS as a web service Installation and authentication
###Code
import warnings
warnings.filterwarnings('ignore')
!pip install --upgrade ibm-watson-openscale | tail -n 1
###Output
_____no_output_____
###Markdown
Import and initiate.
###Code
WOS_CREDENTIALS = {
"url": "***",
"username": "***",
"password": "***"
}
WML_CREDENTIALS = {
"url": "***",
"username": "***",
"password": "***",
"instance_id": "wml_local",
"version" : "3.5"
}
###Output
_____no_output_____
###Markdown
Cloud object storage details In the next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet, please visit the getting started with COS tutorial. You can find the COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in the menu of your COS instance. The COS Service Credentials used must be created with the Role parameter set as Writer. Later the training data file will be loaded to the bucket of your instance and used as the training reference in the subscription. The COS_ENDPOINT variable can be found in the Endpoint field of the menu.
###Code
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
COS_API_KEY_ID = "***"
COS_RESOURCE_CRN = "***" # eg "crn:v1:bluemix:public:cloud-object-storage:global:a/3bf0d9003abfb5d29761c3e97696b71c:d6f04d83-6c4f-4a62-a165-696756d63903::"
COS_ENDPOINT = "***" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
BUCKET_NAME = "***" #example: "credit-risk-training-data"
training_data_file_name="credit_risk_training.csv"
instance_id='***' ## default instance ID: 00000000-0000-0000-0000-0000000000000000
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.supporting_classes import *
authenticator = CloudPakForDataAuthenticator(
url=WOS_CREDENTIALS['url'],
username=WOS_CREDENTIALS['username'],
password=WOS_CREDENTIALS['password'],
disable_ssl_verification=True
)
wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],service_instance_id=instance_id,authenticator=authenticator)
wos_client.version
###Output
_____no_output_____
###Markdown
Let's define some constants required to set up data mart:- AIOS_CREDENTIALS (ICP)- DATABASE_CREDENTIALS (DB2 on ICP)- SCHEMA_NAME
###Code
DB_CREDENTIALS=None
#DB_CREDENTIALS= {"hostname":"","username":"","password":"","database":"","port":"","ssl":True,"sslmode":"","certificate_base64":""}
SCHEMA_NAME = 'SPSSTF01'
###Output
_____no_output_____
###Markdown
DataMart setup Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were **not** supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there **unless** there is an existing datamart **and** the **KEEP_MY_INTERNAL_POSTGRES** variable is set to **True**. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten. Prior instances of the German Credit model will be removed from OpenScale monitoring.
###Code
wos_client.data_marts.show()
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
if DB_CREDENTIALS is not None:
if SCHEMA_NAME is None:
print("Please specify the SCHEMA_NAME and rerun the cell")
print('Setting up external datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
database_configuration=DatabaseConfigurationRequest(
database_type=DatabaseType.POSTGRESQL,
credentials=PrimaryStorageCredentialsLong(
hostname=DB_CREDENTIALS['hostname'],
username=DB_CREDENTIALS['username'],
password=DB_CREDENTIALS['password'],
db=DB_CREDENTIALS['database'],
port=DB_CREDENTIALS['port'],
ssl=True,
sslmode=DB_CREDENTIALS['sslmode'],
certificate_base64=DB_CREDENTIALS['certificate_base64']
),
location=LocationSchemaName(
schema_name= SCHEMA_NAME
)
)
).result
else:
print('Setting up internal datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
internal_database = True).result
data_mart_id = added_data_mart_result.metadata.id
else:
data_mart_id=data_marts[0].metadata.id
print('Using existing datamart {}'.format(data_mart_id))
###Output
_____no_output_____
###Markdown
Bind machine learning engines Bind `SPSS C&DS` machine learning engine Provide credentials using the following fields:- `username`- `password`- `url`
###Code
SPSS_CDS_ENGINE_CREDENTIALS = {
"url": "***",
"username": "admin",
"password": "spss",
}
SERVICE_PROVIDER_NAME = "V2 SPSS test"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
service_instance_name = service_provider.entity.name
if service_instance_name == SERVICE_PROVIDER_NAME:
service_provider_id = service_provider.metadata.id
wos_client.service_providers.delete(service_provider_id)
print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
added_service_provider_result = wos_client.service_providers.add(
name=SERVICE_PROVIDER_NAME,
description=SERVICE_PROVIDER_DESCRIPTION,
service_type=ServiceTypes.SPSS_COLLABORATION_AND_DEPLOYMENT_SERVICES,
credentials=SPSSCredentials(
url=SPSS_CDS_ENGINE_CREDENTIALS['url'],
username=SPSS_CDS_ENGINE_CREDENTIALS["username"],
password=SPSS_CDS_ENGINE_CREDENTIALS['password']
),
background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id).result['resources']
asset_deployment_details
MODEL_NAME='german_credit_risk_tutorial_BiasQA' # use the model name here
model_asset_details_from_deployment = [asset for asset in asset_deployment_details if asset['entity']["name"]==MODEL_NAME]
source_uid = [asset['entity']['asset']['asset_id'] for asset in asset_deployment_details if asset['entity']["name"]==MODEL_NAME]
if len(model_asset_details_from_deployment)>0:
[model_asset_details_from_deployment] = model_asset_details_from_deployment
[source_uid] = source_uid
else:
raise ValueError('Model with name "{}" not found.'.format(MODEL_NAME))
###Output
_____no_output_____
###Markdown
Subscriptions Add subscriptions List available deployments. **Note:** Depending on the number of assets, it may take some time.
###Code
wos_client.subscriptions.show()
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
sub_model_id = subscription.entity.asset.asset_id
if sub_model_id == source_uid:
wos_client.subscriptions.delete(subscription.metadata.id)
print('Deleted existing subscription for model', sub_model_id)
training_data_reference= TrainingDataReference(type='cos',
location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME,
file_name = training_data_file_name),
connection=COSTrainingDataReferenceConnection.from_dict({
"resource_instance_id": COS_RESOURCE_CRN,
"url": COS_ENDPOINT,
"api_key": COS_API_KEY_ID,
"iam_url": IAM_URL}))
subscription_details = wos_client.subscriptions.add(
data_mart_id=data_mart_id,
service_provider_id=service_provider_id,
asset=Asset(
asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"],
name=model_asset_details_from_deployment["entity"]["asset"]["name"],
url=model_asset_details_from_deployment["entity"]["asset"]["url"],
asset_type=AssetTypes.MODEL,
input_data_type=InputDataType.STRUCTURED,
problem_type=ProblemType.BINARY_CLASSIFICATION
),
deployment=AssetDeploymentRequest(
deployment_id=model_asset_details_from_deployment['metadata']['guid'],
name=model_asset_details_from_deployment['entity']['name'],
deployment_type= DeploymentTypes.ONLINE,
url=model_asset_details_from_deployment['entity']['scoring_endpoint']['url']
),
asset_properties=AssetPropertiesRequest(
label_column='Risk',
probability_fields=['$NP-No Risk','$NP-Risk'],
prediction_field='$N-Risk',
feature_fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
categorical_fields = ["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"],
training_data_reference=training_data_reference,
input_data_schema=SparkStruct.from_dict(model_asset_details_from_deployment["entity"]["asset_properties"]["input_data_schema"]),
output_data_schema=SparkStruct.from_dict(model_asset_details_from_deployment["entity"]["asset_properties"]["output_data_schema"])
)
).result
subscription_id = subscription_details.metadata.id
subscription_id
###Output
_____no_output_____
###Markdown
Performance monitor, scoring and payload logging Score the credit risk model and measure response time
###Code
import requests
from requests.auth import HTTPBasicAuth
import time
import json
scoring_endpoint = subscription_details.to_dict()['entity']['deployment']['url']
input_table_id = subscription_details.to_dict()['entity']['asset_properties']['input_data_schema']['id']
node_id = subscription_details.to_dict()['entity']['asset']['name']
scoring_payload = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': [{'input': [
{'name': 'CheckingStatus', 'value': '0_to_200'}, {'name': 'LoanDuration', 'value': 31},
{'name': 'CreditHistory', 'value': 'credits_paid_to_date'}, {'name': 'LoanPurpose', 'value': 'other'},
{'name': 'LoanAmount', 'value': 1889}, {'name': 'ExistingSavings', 'value': '100_to_500'},
{'name': 'EmploymentDuration', 'value': 'less_1'}, {'name': 'InstallmentPercent', 'value': 3},
{'name': 'Sex', 'value': 'female'}, {'name': 'OthersOnLoan', 'value': 'none'},
{'name': 'CurrentResidenceDuration', 'value': 3}, {'name': 'OwnsProperty', 'value': 'savings_insurance'},
{'name': 'Age', 'value': 32}, {'name': 'InstallmentPlans', 'value': 'none'},
{'name': 'Housing', 'value': 'own'}, {'name': 'ExistingCreditsCount', 'value': 1},
{'name': 'Job', 'value': 'skilled'}, {'name': 'Dependents', 'value': 1},
{'name': 'Telephone', 'value': 'none'}, {'name': 'ForeignWorker', 'value': 'yes'}]}]}], 'id': node_id}
start_time = time.time()
resp_score = requests.post(url=scoring_endpoint, json=scoring_payload, auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'], password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
response_time = int((time.time() - start_time)*1000)
result = resp_score.json()
print(result)
###Output
_____no_output_____
###Markdown
Store the request and response in the payload logging table Store the payload using the Python SDK **Hint:** You can embed payload logging code into your application so it is logged automatically each time you score the model.
###Code
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
print("Payload data set not found. Please check subscription status.")
else:
print("Payload data set id: ", payload_data_set_id)
import pandas as pd
df_data = pd.read_csv("https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/data/credit_risk_spss/payload_credit_risk.csv")
df_data=df_data.drop(['Risk'],axis=1)
df_data.head()
# score using a couple of sample records
payload=df_data.sample(2)
values = payload.values.tolist()
columns = payload.columns.tolist()
records_list=[]
for i in payload.to_dict('records'):
b=[{"name":x,"value":v} for x,v in i.items()]
records_list.append({'input': b})
scoring_payload = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': records_list}], 'id': node_id}
resp_score = requests.post(url=scoring_endpoint, json=scoring_payload, auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'], password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
response_time = int((time.time() - start_time)*1000)
result = resp_score.json()
print(result)
###Output
_____no_output_____
###Markdown
Format scoring response for payload logging
###Code
res_values=[]
for i in result['rowValues']:
d= [j for j in i['value']]
res_values.append([k['value'] for k in d])
dtype_idx=[1,4,7,10,12,15,17] # change numeric features values in scoring response from String to Integer
dtype_predictions_idx=[21,22,23] # change prediction, probability column values in scoring response from String to Float
for val in res_values:
for idx in dtype_idx:
val[idx]=int(val[idx])
for idx in dtype_predictions_idx:
val[idx]=float(val[idx])
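# the scoring response returns these values as strings (see the comments above), so cast them
# back to int/float so the logged payload matches the subscription's output schema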
request = {
"fields": payload.columns.tolist(),
"values": payload.values.tolist()
}
response = {
"fields": result['columnNames']['name'],
"values": res_values
}
import uuid
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
time.sleep(5)
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=request,
response=response,
response_time=460
)])
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
wos_client.data_sets.show_records(payload_data_set_id)
###Output
_____no_output_____
###Markdown
Quality monitor and feedback logging Enable quality monitoring
###Code
import time
time.sleep(10)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_feedback_data_size": 50
}
thresholds = [
{
"metric_id": "area_under_roc",
"type": "lower_limit",
"value": .80
}
]
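# i.e. evaluate quality once at least 50 feedback records are available and alert if area_under_roc drops below 0.80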
quality_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,
target=target,
parameters=parameters,
thresholds=thresholds
).result
quality_monitor_instance_id = quality_monitor_details.metadata.id
quality_monitor_instance_id
###Output
_____no_output_____
###Markdown
Feedback records logging Feedback records are used to evaluate your model. The predicted values are compared to real values (feedback records).
###Code
feedback_dataset_id = None
feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result
#print(feedback_dataset)
feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id
if feedback_dataset_id is None:
print("Feedback data set not found. Please check quality monitor status.")
###Output
_____no_output_____
###Markdown
Store feedback using CSV format from file
###Code
#!wget https://raw.githubusercontent.com/pmservice/wml-sample-models/master/spss/credit-risk/data/credit_risk_feedback.csv
feed_data_load = pd.read_csv('https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/data/credit_risk_spss/feedback_credit_risk.csv')
feedback_data = json.loads(feed_data_load.to_json(orient='records'))
feedback_data
wos_client.data_sets.store_records(feedback_dataset_id, request_body=feedback_data, background_mode=False)
wos_client.data_sets.show_records(data_set_id=feedback_dataset_id)
###Output
_____no_output_____
###Markdown
Run quality monitoring on demand By default, quality monitoring runs on an hourly schedule. You can also trigger it on demand using the code below.
###Code
run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)
###Output
_____no_output_____
###Markdown
Fairness monitoring and explanations Enable and run fairness monitoring
###Code
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"features": [
{"feature": "Sex",
"majority": ['male'],
"minority": ['female'],
"threshold": 0.95
},
{"feature": "Age",
"majority": [[26, 75]],
"minority": [[18, 25]],
"threshold": 0.95
}
],
"favourable_class": ["No Risk"],
"unfavourable_class": ["Risk"],
"min_records": 40
}
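# min_records: fairness metrics are computed only after at least 40 scored records have been logged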
fairness_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
target=target,
parameters=parameters).result
fairness_monitor_instance_id =fairness_monitor_details.metadata.id
fairness_monitor_instance_id
###Output
_____no_output_____
###Markdown
Score, format and store payload records
###Code
payload=df_data.sample(50)
records_list=[]
for i in payload.to_dict('records'):
b=[{"name":x,"value":v} for x,v in i.items()]
records_list.append({'input': b})
scoring_payload = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': records_list}], 'id': node_id}
resp_score = requests.post(url=scoring_endpoint, json=scoring_payload, auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'], password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
result = resp_score.json()
res_values=[]
for i in result['rowValues']:
d= [j for j in i['value']]
res_values.append([k['value'] for k in d])
dtype_idx=[1,4,7,10,12,15,17] # change numeric features values in scoring response from String to Integer
dtype_predictions_idx=[21,22,23] # change prediction, probability column values in scoring response from String to Float
for val in res_values:
for idx in dtype_idx:
val[idx]=int(val[idx])
for idx in dtype_predictions_idx:
val[idx]=float(val[idx])
len(res_values)
request = {
"fields": payload.columns.tolist(),
"values": payload.values.tolist()
}
response = {
"fields": result['columnNames']['name'],
"values": res_values
}
import uuid
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
time.sleep(5)
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=request,
response=response,
response_time=460
)])
time.sleep(10)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False)
time.sleep(10)
wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)
###Output
_____no_output_____
###Markdown
Explainability configuration and run Enable explainability
###Code
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explainability_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explainability_monitor_id = explainability_details.metadata.id
explainability_monitor_id
###Output
_____no_output_____
###Markdown
Get sample transaction_id from payload logging table (`scoring_id`)
###Code
pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result
scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]]
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
explanation_task_id=result.to_dict()['metadata']['explanation_task_ids'][0]
explanation_task_id
wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result.to_dict()
###Output
_____no_output_____
###Markdown
Enable and run drift monitoring
###Code
!rm -rf creditrisk_spss_drift_detection_model.tar.gz
!wget -O creditrisk_spss_drift_detection_model.tar.gz https://github.com/IBM/watson-openscale-samples/blob/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/models/spss_creditrisk_drift_detection_model.tar.gz?raw=true
wos_client.monitor_instances.upload_drift_model(
model_path='creditrisk_spss_drift_detection_model.tar.gz',
data_mart_id=data_mart_id,
subscription_id=subscription_id
)
monitor_instances = wos_client.monitor_instances.list().result.monitor_instances
for monitor_instance in monitor_instances:
monitor_def_id=monitor_instance.entity.monitor_definition_id
if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id:
wos_client.monitor_instances.delete(monitor_instance.metadata.id)
print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_samples": 40,
"drift_threshold": 0.1,
"train_drift_model": False,
"enable_model_drift": True,
"enable_data_drift": True
}
drift_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
target=target,
parameters=parameters
).result
drift_monitor_instance_id = drift_monitor_details.metadata.id
drift_monitor_instance_id
drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)
###Output
_____no_output_____
###Markdown
Working with SPSS Collaboration and Deployment Services Compatible with Cloud Pak for Data Only This notebook shows how to log the payload for a model deployed on an SPSS C&DS model serving engine using the Watson OpenScale Python SDK. Contents - Setup - Binding machine learning engine - Subscriptions - Performance monitor, scoring and payload logging - Quality monitor and feedback logging - Fairness, Drift monitoring and explanations **Note:** This notebook works correctly with kernel `Python 3.7.x`. Setup Sample model creation using SPSS Modeler - Download training data set from [here](https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/WML/assets/data/credit_risk/credit_risk_training.csv) - Download SPSS Modeler stream from [here](https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/models/german_credit_risk_tutorial.str) - Deploy the model using SPSS C&DS as a web service Installation and authentication
###Code
import warnings
warnings.filterwarnings('ignore')
!pip install --upgrade ibm-watson-openscale | tail -n 1
###Output
_____no_output_____
###Markdown
Import and initiate.
###Code
WOS_CREDENTIALS = {
"url": "***",
"username": "***",
"password": "***"
}
WML_CREDENTIALS = {
"url": "***",
"username": "***",
"password": "***",
"instance_id": "wml_local",
"version" : "3.5"
}
###Output
_____no_output_____
###Markdown
Cloud object storage details In the next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet, please visit the getting started with COS tutorial. You can find the COS_API_KEY_ID and COS_RESOURCE_CRN variables in Service Credentials in the menu of your COS instance. The COS Service Credentials used must be created with the Role parameter set as Writer. Later the training data file will be loaded to the bucket of your instance and used as the training reference in the subscription. The COS_ENDPOINT variable can be found in the Endpoint field of the menu.
###Code
IAM_URL="https://iam.ng.bluemix.net/oidc/token"
COS_API_KEY_ID = "***"
COS_RESOURCE_CRN = "***" # eg "crn:v1:bluemix:public:cloud-object-storage:global:a/3bf0d9003abfb5d29761c3e97696b71c:d6f04d83-6c4f-4a62-a165-696756d63903::"
COS_ENDPOINT = "***" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints
BUCKET_NAME = "***" #example: "credit-risk-training-data"
training_data_file_name="credit_risk_training.csv"
instance_id='***' ## default instance ID: 00000000-0000-0000-0000-0000000000000000
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
from ibm_watson_openscale import APIClient
from ibm_watson_openscale import *
from ibm_watson_openscale.supporting_classes.enums import *
from ibm_watson_openscale.supporting_classes import *
authenticator = CloudPakForDataAuthenticator(
url=WOS_CREDENTIALS['url'],
username=WOS_CREDENTIALS['username'],
password=WOS_CREDENTIALS['password'],
disable_ssl_verification=True
)
wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],service_instance_id=instance_id,authenticator=authenticator)
wos_client.version
###Output
_____no_output_____
###Markdown
Let's define some constants required to set up data mart:- AIOS_CREDENTIALS (ICP)- DATABASE_CREDENTIALS (DB2 on ICP)- SCHEMA_NAME
###Code
#IBM DB2 database connection format example
DB_CREDENTIALS = {
"hostname":"***",
"username":"***",
"password":"***",
"database":"***",
"port":50000, #provide your actual DB2 port number (as integer value)
"ssl":"***",
"sslmode":"***",
"certificate_base64":"***"}
SCHEMA_NAME = 'SPSSTF01'
###Output
_____no_output_____
###Markdown
DataMart setupWatson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were **not** supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there **unless** there is an existing datamart **and** the **KEEP_MY_INTERNAL_POSTGRES** variable is set to **True**. If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.Prior instances of the German Credit model will be removed from OpenScale monitoring.
###Code
wos_client.data_marts.show()
data_marts = wos_client.data_marts.list().result.data_marts
if len(data_marts) == 0:
if DB_CREDENTIALS is not None:
if SCHEMA_NAME is None:
print("Please specify the SCHEMA_NAME and rerun the cell")
print('Setting up external datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
database_configuration=DatabaseConfigurationRequest(
database_type=DatabaseType.POSTGRESQL,
credentials=PrimaryStorageCredentialsLong(
hostname=DB_CREDENTIALS['hostname'],
username=DB_CREDENTIALS['username'],
password=DB_CREDENTIALS['password'],
db=DB_CREDENTIALS['database'],
port=DB_CREDENTIALS['port'],
ssl=True,
sslmode=DB_CREDENTIALS['sslmode'],
certificate_base64=DB_CREDENTIALS['certificate_base64']
),
location=LocationSchemaName(
schema_name= SCHEMA_NAME
)
)
).result
else:
print('Setting up internal datamart')
added_data_mart_result = wos_client.data_marts.add(
background_mode=False,
name="WOS Data Mart",
description="Data Mart created by WOS tutorial notebook",
internal_database = True).result
data_mart_id = added_data_mart_result.metadata.id
else:
data_mart_id=data_marts[0].metadata.id
print('Using existing datamart {}'.format(data_mart_id))
###Output
_____no_output_____
###Markdown
Bind machine learning engines Bind `SPSS C&DS` machine learning engine Provide credentials using the following fields:- `username`- `password`- `url`
###Code
SPSS_CDS_ENGINE_CREDENTIALS = {
"url": "***",
"username": "admin",
"password": "spss",
}
SERVICE_PROVIDER_NAME = "V2 SPSS test"
SERVICE_PROVIDER_DESCRIPTION = "Added by tutorial WOS notebook."
service_providers = wos_client.service_providers.list().result.service_providers
for service_provider in service_providers:
service_instance_name = service_provider.entity.name
if service_instance_name == SERVICE_PROVIDER_NAME:
service_provider_id = service_provider.metadata.id
wos_client.service_providers.delete(service_provider_id)
print("Deleted existing service_provider for WML instance: {}".format(service_provider_id))
added_service_provider_result = wos_client.service_providers.add(
name=SERVICE_PROVIDER_NAME,
description=SERVICE_PROVIDER_DESCRIPTION,
service_type=ServiceTypes.SPSS_COLLABORATION_AND_DEPLOYMENT_SERVICES,
credentials=SPSSCredentials(
url=SPSS_CDS_ENGINE_CREDENTIALS['url'],
username=SPSS_CDS_ENGINE_CREDENTIALS["username"],
password=SPSS_CDS_ENGINE_CREDENTIALS['password']
),
background_mode=False
).result
service_provider_id = added_service_provider_result.metadata.id
asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id).result['resources']
asset_deployment_details
MODEL_NAME='german_credit_risk_tutorial_BiasQA' # use the model name here
model_asset_details_from_deployment = [asset for asset in asset_deployment_details if asset['entity']["name"]==MODEL_NAME]
source_uid = [asset['entity']['asset']['asset_id'] for asset in asset_deployment_details if asset['entity']["name"]==MODEL_NAME]
if len(model_asset_details_from_deployment)>0:
[model_asset_details_from_deployment] = model_asset_details_from_deployment
[source_uid] = source_uid
else:
raise ValueError('Model with name "{}" not found.'.format(MODEL_NAME))
###Output
_____no_output_____
###Markdown
Subscriptions Add subscriptions List available deployments.**Note:** Depending on the number of assets, this may take some time.
###Code
wos_client.subscriptions.show()
subscriptions = wos_client.subscriptions.list().result.subscriptions
for subscription in subscriptions:
sub_model_id = subscription.entity.asset.asset_id
if sub_model_id == source_uid:
wos_client.subscriptions.delete(subscription.metadata.id)
print('Deleted existing subscription for model', sub_model_id)
training_data_reference= TrainingDataReference(type='cos',
location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME,
file_name = training_data_file_name),
connection=COSTrainingDataReferenceConnection.from_dict({
"resource_instance_id": COS_RESOURCE_CRN,
"url": COS_ENDPOINT,
"api_key": COS_API_KEY_ID,
"iam_url": IAM_URL}))
subscription_details = wos_client.subscriptions.add(
data_mart_id=data_mart_id,
service_provider_id=service_provider_id,
asset=Asset(
asset_id=model_asset_details_from_deployment["entity"]["asset"]["asset_id"],
name=model_asset_details_from_deployment["entity"]["asset"]["name"],
url=model_asset_details_from_deployment["entity"]["asset"]["url"],
asset_type=AssetTypes.MODEL,
input_data_type=InputDataType.STRUCTURED,
problem_type=ProblemType.BINARY_CLASSIFICATION
),
deployment=AssetDeploymentRequest(
deployment_id=model_asset_details_from_deployment['metadata']['guid'],
name=model_asset_details_from_deployment['entity']['name'],
deployment_type= DeploymentTypes.ONLINE,
url=model_asset_details_from_deployment['entity']['scoring_endpoint']['url']
),
asset_properties=AssetPropertiesRequest(
label_column='Risk',
probability_fields=['$NP-No Risk','$NP-Risk'],
prediction_field='$N-Risk',
feature_fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
categorical_fields = ["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"],
training_data_reference=training_data_reference,
input_data_schema=SparkStruct.from_dict(model_asset_details_from_deployment["entity"]["asset_properties"]["input_data_schema"]),
output_data_schema=SparkStruct.from_dict(model_asset_details_from_deployment["entity"]["asset_properties"]["output_data_schema"])
)
).result
subscription_id = subscription_details.metadata.id
subscription_id
###Output
_____no_output_____
###Markdown
Performance monitor, scoring and payload logging Score the credit risk model and measure response time
###Code
import requests
from requests.auth import HTTPBasicAuth
import time
import json
scoring_endpoint = subscription_details.to_dict()['entity']['deployment']['url']
input_table_id = subscription_details.to_dict()['entity']['asset_properties']['input_data_schema']['id']
node_id = subscription_details.to_dict()['entity']['asset']['name']
scoring_payload = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': [{'input': [
{'name': 'CheckingStatus', 'value': '0_to_200'}, {'name': 'LoanDuration', 'value': 31},
{'name': 'CreditHistory', 'value': 'credits_paid_to_date'}, {'name': 'LoanPurpose', 'value': 'other'},
{'name': 'LoanAmount', 'value': 1889}, {'name': 'ExistingSavings', 'value': '100_to_500'},
{'name': 'EmploymentDuration', 'value': 'less_1'}, {'name': 'InstallmentPercent', 'value': 3},
{'name': 'Sex', 'value': 'female'}, {'name': 'OthersOnLoan', 'value': 'none'},
{'name': 'CurrentResidenceDuration', 'value': 3}, {'name': 'OwnsProperty', 'value': 'savings_insurance'},
{'name': 'Age', 'value': 32}, {'name': 'InstallmentPlans', 'value': 'none'},
{'name': 'Housing', 'value': 'own'}, {'name': 'ExistingCreditsCount', 'value': 1},
{'name': 'Job', 'value': 'skilled'}, {'name': 'Dependents', 'value': 1},
{'name': 'Telephone', 'value': 'none'}, {'name': 'ForeignWorker', 'value': 'yes'}]}]}], 'id': node_id}
start_time = time.time()
resp_score = requests.post(url=scoring_endpoint, json=scoring_payload, auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'], password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
response_time = int((time.time() - start_time)*1000)
result = resp_score.json()
print(result)
###Output
_____no_output_____
###Markdown
Store the request and response in the payload logging table Store the payload using the Python SDK **Hint:** You can embed payload logging code into your application so the payload is logged automatically each time you score the model.
###Code
import time
time.sleep(5)
payload_data_set_id = None
payload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id
if payload_data_set_id is None:
print("Payload data set not found. Please check subscription status.")
else:
print("Payload data set id: ", payload_data_set_id)
import pandas as pd
df_data = pd.read_csv("https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/data/credit_risk_spss/payload_credit_risk.csv")
df_data=df_data.drop(['Risk'],axis=1)
df_data.head()
# score using a couple of sample records
payload=df_data.sample(2)
values = payload.values.tolist()
columns = payload.columns.tolist()
records_list=[]
for i in payload.to_dict('records'):
b=[{"name":x,"value":v} for x,v in i.items()]
records_list.append({'input': b})
scoring_payload = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': records_list}], 'id': node_id}
start_time = time.time()
resp_score = requests.post(url=scoring_endpoint, json=scoring_payload, auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'], password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
response_time = int((time.time() - start_time)*1000)
result = resp_score.json()
print(result)
###Output
_____no_output_____
###Markdown
Format scoring response for payload logging
###Code
res_values=[]
for i in result['rowValues']:
d= [j for j in i['value']]
res_values.append([k['value'] for k in d])
dtype_idx=[1,4,7,10,12,15,17] # change numeric features values in scoring response from String to Integer
dtype_predictions_idx=[21,22,23] # change prediction, probability column values in scoring response from String to Float
for val in res_values:
for idx in dtype_idx:
val[idx]=int(val[idx])
for idx in dtype_predictions_idx:
val[idx]=float(val[idx])
request = {
"fields": payload.columns.tolist(),
"values": payload.values.tolist()
}
response = {
"fields": result['columnNames']['name'],
"values": res_values
}
import uuid
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
time.sleep(5)
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=request,
response=response,
response_time=460
)])
time.sleep(5)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
wos_client.data_sets.show_records(payload_data_set_id)
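# --- Illustrative sketch only: how the scoring call and payload logging above could be
# --- wrapped into a single helper and embedded in an application, as suggested in the
# --- hint earlier. `score_and_log` is an assumed name (not part of the OpenScale SDK);
# --- it reuses the objects already defined in this notebook and skips the per-column
# --- type conversions shown earlier in this cell, which a production helper would still need.
def score_and_log(sample_df):
    rows = [{'input': [{"name": k, "value": v} for k, v in rec.items()]}
            for rec in sample_df.to_dict('records')]
    body = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': rows}], 'id': node_id}
    t0 = time.time()
    resp = requests.post(url=scoring_endpoint, json=body,
                         auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'],
                                            password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
    elapsed_ms = int((time.time() - t0) * 1000)
    result = resp.json()
    # keep the raw cell values; numeric columns would still need the type fixes shown above
    values = [[cell['value'] for cell in row['value']] for row in result['rowValues']]
    wos_client.data_sets.store_records(
        data_set_id=payload_data_set_id,
        request_body=[PayloadRecord(scoring_id=str(uuid.uuid4()),
                                    request={"fields": sample_df.columns.tolist(),
                                             "values": sample_df.values.tolist()},
                                    response={"fields": result['columnNames']['name'],
                                              "values": values},
                                    response_time=elapsed_ms)])
    return result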
###Output
_____no_output_____
###Markdown
Quality monitor and feedback logging Enable quality monitoring
###Code
import time
time.sleep(10)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_feedback_data_size": 50
}
thresholds = [
{
"metric_id": "area_under_roc",
"type": "lower_limit",
"value": .80
}
]
quality_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,
target=target,
parameters=parameters,
thresholds=thresholds
).result
quality_monitor_instance_id = quality_monitor_details.metadata.id
quality_monitor_instance_id
###Output
_____no_output_____
###Markdown
Feedback records logging Feedback records are used to evaluate your model. The predicted values are compared to real values (feedback records).
###Code
feedback_dataset_id = None
feedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK,
target_target_id=subscription_id,
target_target_type=TargetTypes.SUBSCRIPTION).result
#print(feedback_dataset)
feedback_dataset_id = feedback_dataset.data_sets[0].metadata.id
if feedback_dataset_id is None:
print("Feedback data set not found. Please check quality monitor status.")
###Output
_____no_output_____
###Markdown
Store feedback using CSV format from file
###Code
#!wget https://raw.githubusercontent.com/pmservice/wml-sample-models/master/spss/credit-risk/data/credit_risk_feedback.csv
feed_data_load = pd.read_csv('https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/data/credit_risk_spss/feedback_credit_risk.csv')
feedback_data = json.loads(feed_data_load.to_json(orient='records'))
feedback_data
wos_client.data_sets.store_records(feedback_dataset_id, request_body=feedback_data, background_mode=False)
wos_client.data_sets.show_records(data_set_id=feedback_dataset_id)
###Output
_____no_output_____
###Markdown
Run quality monitoring on demand By default, quality monitoring runs on an hourly schedule. You can also trigger it on demand using the code below.
###Code
run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)
###Output
_____no_output_____
###Markdown
Fairness monitoring and explanations Enable and run fairness monitoring
###Code
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"features": [
{"feature": "Sex",
"majority": ['male'],
"minority": ['female'],
"threshold": 0.95
},
{"feature": "Age",
"majority": [[26, 75]],
"minority": [[18, 25]],
"threshold": 0.95
}
],
"favourable_class": ["No Risk"],
"unfavourable_class": ["Risk"],
"min_records": 40
}
fairness_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
target=target,
parameters=parameters).result
fairness_monitor_instance_id =fairness_monitor_details.metadata.id
fairness_monitor_instance_id
###Output
_____no_output_____
###Markdown
Score, format and store payload records
###Code
payload=df_data.sample(50)
records_list=[]
for i in payload.to_dict('records'):
b=[{"name":x,"value":v} for x,v in i.items()]
records_list.append({'input': b})
scoring_payload = {'requestInputTable': [{'id': input_table_id, 'requestInputRow': records_list}], 'id': node_id}
resp_score = requests.post(url=scoring_endpoint, json=scoring_payload, auth=HTTPBasicAuth(username=SPSS_CDS_ENGINE_CREDENTIALS['username'], password=SPSS_CDS_ENGINE_CREDENTIALS['password']))
result = resp_score.json()
res_values=[]
for i in result['rowValues']:
d= [j for j in i['value']]
res_values.append([k['value'] for k in d])
dtype_idx=[1,4,7,10,12,15,17] # change numeric features values in scoring response from String to Integer
dtype_predictions_idx=[21,22,23] # change prediction, probability column values in scoring response from String to Float
for val in res_values:
for idx in dtype_idx:
val[idx]=int(val[idx])
for idx in dtype_predictions_idx:
val[idx]=float(val[idx])
len(res_values)
request = {
"fields": payload.columns.tolist(),
"values": payload.values.tolist()
}
response = {
"fields": result['columnNames']['name'],
"values": res_values
}
import uuid
from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord
time.sleep(5)
wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(
scoring_id=str(uuid.uuid4()),
request=request,
response=response,
response_time=460
)])
time.sleep(10)
pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)
print("Number of records in the payload logging table: {}".format(pl_records_count))
run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False)
time.sleep(10)
wos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)
###Output
_____no_output_____
###Markdown
Explainability configuration and run Enable explainability
###Code
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"enabled": True
}
explainability_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,
target=target,
parameters=parameters
).result
explainability_monitor_id = explainability_details.metadata.id
explainability_monitor_id
###Output
_____no_output_____
###Markdown
Get sample transaction_id from payload logging table (`scoring_id`)
###Code
pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result
scoring_ids = [pl_records_resp["records"][0]["entity"]["values"]["scoring_id"]]
print("Running explanations on scoring IDs: {}".format(scoring_ids))
explanation_types = ["lime", "contrastive"]
result = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result
print(result)
explanation_task_id=result.to_dict()['metadata']['explanation_task_ids'][0]
explanation_task_id
wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result.to_dict()
###Output
_____no_output_____
###Markdown
Enable and run drift monitoring
###Code
!rm -rf creditrisk_spss_drift_detection_model.tar.gz
!wget -O creditrisk_spss_drift_detection_model.tar.gz https://github.com/IBM/watson-openscale-samples/blob/main/Cloud%20Pak%20for%20Data/SPSS%20C%26DS/assets/models/spss_creditrisk_drift_detection_model.tar.gz?raw=true
wos_client.monitor_instances.upload_drift_model(
model_path='creditrisk_spss_drift_detection_model.tar.gz',
data_mart_id=data_mart_id,
subscription_id=subscription_id
)
monitor_instances = wos_client.monitor_instances.list().result.monitor_instances
for monitor_instance in monitor_instances:
monitor_def_id=monitor_instance.entity.monitor_definition_id
if monitor_def_id == "drift" and monitor_instance.entity.target.target_id == subscription_id:
wos_client.monitor_instances.delete(monitor_instance.metadata.id)
print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)
target = Target(
target_type=TargetTypes.SUBSCRIPTION,
target_id=subscription_id
)
parameters = {
"min_samples": 40,
"drift_threshold": 0.1,
"train_drift_model": False,
"enable_model_drift": True,
"enable_data_drift": True
}
drift_monitor_details = wos_client.monitor_instances.create(
data_mart_id=data_mart_id,
background_mode=False,
monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
target=target,
parameters=parameters
).result
drift_monitor_instance_id = drift_monitor_details.metadata.id
drift_monitor_instance_id
drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)
time.sleep(5)
wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)
###Output
_____no_output_____ |
module4-time-series/DS_434_LSTM_Time_Series_Forecasting_Assignment.ipynb | ###Markdown
Time Series Forecasting Assignment For Part 1, Choose _either_ Option A or Option B For Part 2, Find a time series (either univariate, or multivariate) and apply the time series methods from Part 1 to analyze it. Part 1, Option A: Software Engineering (1.5 to 2 hours max)Write a `ForecastingToolkit` class that packages up the workflow of time series forecasting, that we learned from today's Lecture Notebook. Add any desired "bells and whistles" to make it even better!
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.layers.core import Dense, Activation, Dropout
import time #helper libraries
from tensorflow.keras import regularizers
class ForecastingToolkit(object):
def __init__(self, df = None, model = None):
"""
Variables that we passed into our functions should now be defined
as class attributes, i.e. class variables.
"""
# here are a few to get you started
# store data here
self.df = df
# store your forecasting model here
self.model = model
# store feature scalers here
self.scaler_dict = None
# store the training results of your model here
self.history = None
def load_transform_data(self):
pass
def scale_data(self):
pass
def invert_scaling(self):
pass
def create_dataset(self):
pass
def create_train_test_split(self):
pass
def build_model(self):
pass
def fit_model(self):
pass
def predict(self):
pass
def plot_model_loss_metrics(self):
pass
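# --- A minimal, illustrative sketch (not the required solution) of how the
# --- `create_dataset` step could window a univariate series into model inputs.
# --- `series` and `look_back` are assumed/illustrative names.
def make_windows(series, look_back=1):
    """Turn a 1-D array into (samples, look_back) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])   # the `look_back` previous values
        y.append(series[i + look_back])     # the value to predict
    return np.array(X), np.array(y)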
###Output
_____no_output_____
###Markdown
----
###Code
# once you've completed your class, you'll be able to perform many operations with just a few lines of code!
tstk = ForecastingToolkit()
tstk.load_transform_data()
tstk.scale_data()
tstk.build_model()
tstk.fit_model()
tstk.plot_model_loss_metrics()
###Output
_____no_output_____
###Markdown
Part 1, Option B: A Deeper Dive in Time-Series Forecasting (1.5 to 2 hours max) Work through this notebook [time_series_forecasting](https://drive.google.com/file/d/1RgyaO9zuZ90vWEzQWo1iVip1Me7oiHiO/view?usp=sharing), which compares a number of forecasting methods and in the end finds that 1 Dimensional Convolutional Neural Networks is even better than LSTMs! Part 2 Time series forecasting on a real data set (2 hours max)Use one or more series forecasting methods (from either Part 1A or Part 1B) to make forecasts on a real time series data set. If time permits, perform hyperparameter tuning to make the forecasts as good as possible. Report the MAE (mean absolute error) of your forecast, and compare to a naive baseline model. Are you getting good forecasts? Why or why not? Data Sets: choose from 2.1, 2.2, 2.3, or 2.4 2.1 [Daily Sunspot data](https://wwwbis.sidc.be/silso/datafiles): your task is to predict future daily sunspot numbers from the past* Use the "Total sunspot number" CSV or TXT files (grey buttons)* Be sure to read INFO file (green button)* You'll have to come up with a strategy for dealing with missing data* [Data Credits: "Source: WDC-SILSO, Royal Observatory of Belgium, Brussels".]2.2 Light Curves for target stars from NASA's Kepler Mission* [Get the light curve files](https://www.nasa.gov/kepler/education/getlightcurves)* The data is stored in .FITS files (a format commonly used for astrophysical data). * You need to translate from .FITS format to .CSV -- search the web for code to do this.2.3 Here are another half-dozen or so datasets you could choose from: [7 Time Series Datasets for Machine Learning](https://machinelearningmastery.com/time-series-datasets-for-machine-learning/)2.4 OR: Freely available time series data is plentiful on the WWW. You can choose any time series data set of interest! YOUR ANSWERS HERE
###Code
## YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Time Series Forecasting Assignment For Part 1, Choose _either_ Option A or Option B For Part 2, Find a time series (either univariate, or multivariate) and apply the time series methods from Part 1 to analyze it. Part 1, Option A: Software Engineering (1.5 to 2 hours max)Write a `ForecastingToolkit` class that packages up the workflow of time series forecasting, that we learned from today's Lecture Notebook. Add any desired "bells and whistles" to make it even better!
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.layers.core import Dense, Activation, Dropout
import time #helper libraries
from tensorflow.keras import regularizers
class ForecastingToolkit(object):
def __init__(self, df = None, model = None):
"""
Variables that we passed into our functions should now be defined
as class attributes, i.e. class variables.
"""
# here are a few to get you started
# store data here
self.df = df
# store your forecasting model here
self.model = model
# store feature scalers here
self.scaler_dict = None
# store the training results of your model here
self.history = None
def load_transform_data(self):
pass
def scale_data(self):
pass
def invert_scaling(self):
pass
def create_dataset(self):
pass
def create_train_test_split(self,):
pass
def build_model(self):
pass
def fit_model(self):
pass
def predict(self):
pass
def plot_model_loss_metrics(self):
pass
###Output
_____no_output_____
###Markdown
----
###Code
# once you've completed your class, you'll be able to perform many operations with just a few lines of code!
tstk = ForecastingToolkit()
tstk.load_transform_data()
tstk.scale_data()
tstk.build_model()
tstk.fit_model()
tstk.plot_model_loss_metrics()
###Output
_____no_output_____
###Markdown
Part 1, Option B: A Deeper Dive in Time-Series Forecasting (1.5 to 2 hours max) Work through this notebook [time_series_forecasting](https://drive.google.com/file/d/1RgyaO9zuZ90vWEzQWo1iVip1Me7oiHiO/view?usp=sharing), which compares a number of forecasting methods and in the end finds that 1 Dimensional Convolutional Neural Networks is even better than LSTMs! Part 2 Time series forecasting on a real data set (2 hours max)Use one or more time series forecasting methods (from either Part 1A or Part 1B) to make forecasts on a real time series data set. If time permits, perform hyperparameter tuning to make the forecasts as good as possible. Report the MAE (mean absolute error) of your forecast, and compare to a naive baseline model. Are you getting good forecasts? Why or why not? Time Series Data Sets Here are a half-dozen or so datasets you could choose from: [7 Time Series Datasets for Machine Learning](https://machinelearningmastery.com/time-series-datasets-for-machine-learning/)OR: Freely available time series data is plentiful on the WWW. Feel free to choose any time series data set of interest! Your Time Series Forecasting CodeDescribe the time series forecasting problem you will solve, and which method you will use.
###Code
## YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Time Series Forecasting Assignment For Part 1, Choose _either_ Option A or Option B For Part 2, Find a time series (either univariate, or multivariate) and apply the time series methods from Part 1 to analyze it. Part 1, Option A: Software Engineering (1.5 to 2 hours max)Write a `ForecastingToolkit` class that packages up the workflow of time series forecasting, that we learned from today's Lecture Notebook. Add any desired "bells and whistles" to make it even better!
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from pandas import read_csv
import math
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.layers.core import Dense, Activation, Dropout
import time #helper libraries
from tensorflow.keras import regularizers
class ForecastingToolkit(object):
def __init__(self, df = None, model = None):
"""
Variables that we passed into our functions should now be defined
as class attributes, i.e. class variables.
"""
# here are a few to get you started
# store data here
self.df = df
# store your forecasting model here
self.model = model
# store feature scalers here
self.scaler_dict = None
# store the training results of your model here
self.history = None
def load_transform_data(self):
pass
def scale_data(self):
pass
def invert_scaling(self):
pass
def create_dataset(self):
pass
def create_train_test_split(self):
pass
def build_model(self):
pass
def fit_model(self):
pass
def predict(self):
pass
def plot_model_loss_metrics(self):
pass
###Output
_____no_output_____
###Markdown
----
###Code
# once you've completed your class, you'll be able to perform many operations with just a few lines of code!
tstk = ForecastingToolkit()
tstk.load_transform_data()
tstk.scale_data()
tstk.build_model()
tstk.fit_model()
tstk.plot_model_loss_metrics()
###Output
_____no_output_____
###Markdown
Part 1, Option B: A Deeper Dive in Time-Series Forecasting (1.5 to 2 hours max) Work through this notebook [time_series_forecasting](https://drive.google.com/file/d/1RgyaO9zuZ90vWEzQWo1iVip1Me7oiHiO/view?usp=sharing), which compares a number of forecasting methods and in the end finds that 1 Dimensional Convolutional Neural Networks is even better than LSTMs! Part 2 Time series forecasting on a real data set (2 hours max)Use one or more series forecasting methods (from either Part 1A or Part 1B) to make forecasts on a real time series data set. If time permits, perform hyperparameter tuning to make the forecasts as good as possible. Report the MAE (mean absolute error) of your forecast, and compare to a naive baseline model. Are you getting good forecasts? Why or why not? Data Sets: choose from 2.1, 2.2, 2.3, or 2.4 2.1 [Daily Sunspot data](https://wwwbis.sidc.be/silso/datafiles): your task is to predict future daily sunspot numbers from the past* Use the "Total sunspot number" CSV or TXT files (grey buttons)* Be sure to read INFO file (green button)* You'll have to come up with a strategy for dealing with missing data* [Data Credits: "Source: WDC-SILSO, Royal Observatory of Belgium, Brussels".]2.2 Light Curves for target stars from NASA's Kepler Mission* [Get the light curve files](https://www.nasa.gov/kepler/education/getlightcurves)* The data is stored in .FITS files (a format commonly used for astrophysical data). * You need to translate from .FITS format to .CSV -- search the web for code to do this.2.3 Here are another half-dozen or so datasets you could choose from: [7 Time Series Datasets for Machine Learning](https://machinelearningmastery.com/time-series-datasets-for-machine-learning/)2.4 OR: Freely available time series data is plentiful on the WWW. You can choose any time series data set of interest! YOUR ANSWERS HERE
###Code
sunspots_df = pd.read_csv(
"SN_d_tot_V2.0.csv",
names=[
"Year",
"Month",
"Day",
"Date",
"Sunspots",
"std",
"num_of_obs",
" indicator",
],
na_values= {'Sunspots': -1, 'std': -1.0},
delimiter=";",
)
sunspots_df.dropna(inplace=True)
sunspots_df = sunspots_df.drop(columns=['Date'])
y = sunspots_df['Sunspots']
sunspots_df['Sunspots'].plot(figsize=(24,5))
sunspots_df['Date'] = sunspots_df['Year'].astype(str) + "/" + sunspots_df['Month'].astype(str) + "/" + sunspots_df['Day'].astype(str)
sunspots_df['Date']
sunspots_df.index = pd.to_datetime(sunspots_df['Date'])
sunspots_df.drop(columns=['Year', 'Month', 'Day', 'Date'], inplace=True)
sunspots_df
corr = sunspots_df.corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
f, ax = plt.subplots(figsize=(11,9))
sns.heatmap(corr, mask=mask, cmap='coolwarm', center=0, linewidths=.5, square=True)
sunspots_df
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras import optimizers
from keras.callbacks import History
from tensorflow.keras.callbacks import LearningRateScheduler
generator = TimeseriesGenerator(sunspots_df.drop(columns=['Sunspots']).values, sunspots_df['Sunspots'].values, length=28, batch_size=64)
epochs = 25
batch_size = 32
dropout_prob = 0.5
input_shape = (28, sunspots_df.shape[1] - 1)  # the generator drops the Sunspots target, leaving shape[1] - 1 features
opt = optimizers.Nadam(learning_rate=0.0032)
model = Sequential()
model.add(LSTM(256, input_shape=input_shape, activation='tanh', return_sequences=False))
model.add(Dropout(dropout_prob))
model.add(Dense(1))  # a single output unit for the single Sunspots target
model.compile(loss='mean_squared_error', optimizer=opt, metrics=['mean_absolute_error'])
lr_schedule = LearningRateScheduler(
lambda epoch: 1e-4* 10**(epoch/10))
schedule_results = model.fit(generator,
epochs=40,
batch_size=batch_size,
verbose=1,
callbacks = [lr_schedule])
from tensorflow.keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='loss', patience=10, min_delta=1.e-6)
history = model.fit(generator,
epochs=100,
batch_size=batch_size,
verbose=1,
callbacks = [early_stopping])
# Not Done need to scale data and also add in predictions
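# --- A minimal sketch of the two missing pieces noted above. Assumptions: the numeric
# --- columns are scaled with the MinMaxScaler imported earlier, and predictions come
# --- from a generator built the same way as the training one. The names `scaler`,
# --- `scaled` and `scaled_generator` are illustrative, not from the original notebook.
scaler = MinMaxScaler()
scaled = pd.DataFrame(scaler.fit_transform(sunspots_df),
                      columns=sunspots_df.columns, index=sunspots_df.index)
scaled_generator = TimeseriesGenerator(scaled.drop(columns=['Sunspots']).values,
                                       scaled['Sunspots'].values,
                                       length=28, batch_size=64)
# preds = model.predict(scaled_generator)  # only meaningful after refitting the model on the scaled data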
###Output
_____no_output_____ |
slide_deck_file.ipynb | ###Markdown
Prosper Loan Data Investigation by Amarjeet Singh Investigation Overview In this investigation I wanted to look at Prosper scores and Prosper ratings and see if they could be helpful in predicting a loan's performance, and how this has changed over time. I also wanted to look at how estimated loss, estimated return and on-time payments are related to the debt-to-income ratio. Dataset Overview The Prosper Loan data set contains 113,937 listings with 81 variables on each listing, with data available for listings from late 2005 till early 2014. There were 21 types of listing categories available, with term length options of 12, 36 or 60 months. There is also data available about the borrower, along with certain scores which indicate the borrower's credibility.
###Code
# import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.ticker as ticker
%matplotlib inline
# suppress warnings from final output
import warnings
warnings.simplefilter("ignore")
# load in the dataset into a pandas dataframe
df = pd.read_csv('prosperLoanData.csv')
# Create a new column LoanPerformance
df['LoanPerformance'] = df['LoanStatus']
# Replace Chargedoff and Defaulted with NonPerforming in the LoanPerformance variable
df['LoanPerformance'].replace(['Chargedoff', 'Defaulted'], 'NonPerforming', inplace=True)
# Replace all other statuses with Performing in the LoanPerformance variable
df['LoanPerformance'].replace(['Current', 'Completed', 'FinalPaymentInProgress', 'Cancelled',
                               'Past Due (1-15 days)', 'Past Due (16-30 days)', 'Past Due (31-60 days)',
                               'Past Due (61-90 days)', 'Past Due (91-120 days)', 'Past Due (>120 days)'],
                              'Performing', inplace=True)
# Change the datatype of LoanPerformance variable
df['LoanPerformance'] = df['LoanPerformance'].astype('category')
# Order prosper ratings (alpha) HR to AA
prosper_ratings_alpha = ['HR', 'E', 'D', 'C', 'B', 'A', 'AA']
pd_ver = pd.__version__.split(".")
if (int(pd_ver[0]) > 0) or (int(pd_ver[1]) >= 21): # v0.21 or later
alpha_ratings = pd.api.types.CategoricalDtype(ordered = True, categories = prosper_ratings_alpha)
df['ProsperRating (Alpha)'] = df['ProsperRating (Alpha)'].astype(alpha_ratings)
else: # pre-v0.21
df['ProsperRating (Alpha)'] = df['ProsperRating (Alpha)'].astype('category', ordered = True,
categories = prosper_ratings_alpha)
# Order loan origination quarters in chronological order
origination_quarters = ['Q4 2005',
'Q1 2006', 'Q2 2006', 'Q3 2006', 'Q4 2006',
'Q1 2007', 'Q2 2007', 'Q3 2007', 'Q4 2007',
'Q1 2008', 'Q2 2008', 'Q3 2008', 'Q4 2008',
'Q1 2009', 'Q2 2009', 'Q3 2009', 'Q4 2009',
'Q1 2010', 'Q2 2010', 'Q3 2010', 'Q4 2010',
'Q1 2011', 'Q2 2011', 'Q3 2011', 'Q4 2011',
'Q1 2012', 'Q2 2012', 'Q3 2012', 'Q4 2012',
'Q1 2013', 'Q2 2013', 'Q3 2013', 'Q4 2013',
'Q1 2014']
pd_ver = pd.__version__.split(".")
if (int(pd_ver[0]) > 0) or (int(pd_ver[1]) >= 21): # v0.21 or later
quarters = pd.api.types.CategoricalDtype(ordered = True, categories = origination_quarters)
df['LoanOriginationQuarter'] = df['LoanOriginationQuarter'].astype(quarters)
else: # pre-v0.21
df['LoanOriginationQuarter'] = df['LoanOriginationQuarter'].astype('category', ordered = True,
categories = origination_quarters)
###Output
_____no_output_____
###Markdown
Loan Performance vs. Prosper Score at different levels of Total Prosper Loans Listings with low (high-risk) Prosper Scores, on average under 6 in this case, consistently end up in the non-performing category, no matter how many loans the borrower already had with Prosper at the time that particular listing was created.
###Code
fig, axes = plt.subplots(3, 3, figsize=(11.69, 8.27))
grouped = df.groupby('TotalProsperLoans')
for i, ax in enumerate(axes.flat):
    sns.boxplot(data=grouped.get_group(float(i)), x='LoanPerformance', y='ProsperScore',
                palette=['red', 'limegreen'], ax=ax)
    ax.set_title('TotalProsperLoans: {}'.format(i))
plt.subplots_adjust(hspace=0.5)
plt.suptitle('Loan Performance vs. Prosper Score at different levels of Total Prosper Loans')
plt.show()
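# Illustrative cross-check (not part of the original analysis): median Prosper Score
# for performing vs. non-performing loans, to back up the pattern in the boxplots.
print(df.groupby('LoanPerformance')['ProsperScore'].median())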
###Output
_____no_output_____
###Markdown
Prosper Rating vs. Non-Performing Loans As observed previously, Prosper's risk rating system is fairly predictive in identifying the group of loans that has a higher chance of eventually defaulting. Indeed, over 8% of "high-risk" HR loans and around 10% of D loans have failed to perform, while less than 3% of "low-risk" AA loans have defaulted (or been charged off). The risk ratings in between decline in an almost linear fashion going down the scale of riskiness, with the exception of loans with a Prosper rating of D. However, looking at the loans on a quarterly basis shows that the Prosper Rating is not necessarily as predictive at smaller sample sizes. There are many quarters in which E-, D-, or sometimes even B-rated loans end up with higher non-performance rates than HR loans. The data is chunkier from quarter to quarter than over the whole sample. Since the data provided only runs until early 2014, one limitation must be considered: all of the loans that had not matured by that time were put in the performing bucket for this analysis.
###Code
# --------------------------------------- PLOT 1 ---------------------------------------
default_color = sns.color_palette()[0]
plt.figure(figsize=(11.69, 3.27))
ncount = len(df.groupby('LoanPerformance').get_group('NonPerforming'))
ax = sns.countplot(x='ProsperRating (Alpha)', data=df.groupby('LoanPerformance').get_group('NonPerforming'), color=default_color)
# Make twin axis
ax2=ax.twinx()
# Switch so count axis is on right, frequency on left
ax2.yaxis.tick_left()
ax.yaxis.tick_right()
# Also switch the labels over
ax.yaxis.set_label_position('right')
ax2.yaxis.set_label_position('left')
ax2.set_ylabel('Frequency[%] NonPerforming Loans')
ax.set_ylabel('Count of NonPerforming Loans')
for p in ax.patches:
x=p.get_bbox().get_points()[:,0]
y=p.get_bbox().get_points()[1,1]
ax.annotate('{:.1f}%'.format(100.*y/ncount), (x.mean(), y),
ha='center', va='bottom') # set the alignment of the text
# Use a LinearLocator to ensure the correct number of ticks
ax.yaxis.set_major_locator(ticker.LinearLocator(11))
# Fix the frequency range to 0-12
ax2.set_ylim(0,12)
ax.set_ylim(0,2041)
# And use a MultipleLocator to ensure a tick spacing of 10
ax2.yaxis.set_major_locator(ticker.MultipleLocator(10))
# Need to turn the grid on ax2 off, otherwise the gridlines end up on top of the bars
ax2.grid(None)
plt.show()
# --------------------------------------- PLOT 2 ---------------------------------------
plt.figure(figsize=(11.69, 5))
ax = sns.countplot(data=df.groupby('LoanPerformance').get_group('NonPerforming'), x='LoanOriginationQuarter', hue='ProsperRating (Alpha)', palette=sns.hls_palette(7))
plt.xticks(rotation=60)
plt.ylabel('Count of NonPerforming Loans')
plt.xlim(14, 33)
plt.show()
###Output
_____no_output_____
###Markdown
Debt-to-Income Ratio vs. Estimated Loss / Estimated Return / On-Time Payments In this visualization, darker dots denote a higher (lower-risk) Prosper Score. If the debt-to-income ratio is low, the Prosper Score lies on the lowest-risk side, given that the estimated loss is low or the estimated return is high. As the estimated loss becomes high, the Prosper Score tends to shift towards the high-risk side irrespective of the debt-to-income ratio. Overall, it could be concluded that for borrowers with a low debt-to-income ratio, if the Prosper Score is high (low risk), then the estimated loss is low and the estimated return is low too. It can also be seen that borrowers with a low debt-to-income ratio tend to take more loans, but also have higher Prosper Scores and make on-time payments, compared to their counterparts with a high debt-to-income ratio.
###Code
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, figsize=(11.69, 8.27))
# --------------------------------------- PLOT 1 ---------------------------------------
sns.scatterplot(data=df, x='DebtToIncomeRatio', y='EstimatedLoss', hue='ProsperScore', ax=ax1)
ax1.set_xlim(0, 2)
# --------------------------------------- PLOT 2 ---------------------------------------
sns.scatterplot(data=df, x='DebtToIncomeRatio', y='EstimatedReturn', hue='ProsperScore', ax=ax2)
ax2.set_xlim(0, 2)
# --------------------------------------- PLOT 3 ---------------------------------------
sns.scatterplot(data=df, x='DebtToIncomeRatio', y='OnTimeProsperPayments', hue='ProsperScore', ax=ax3)
ax3.set_xlim(0, 2)
plt.tight_layout()
plt.show()
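# Illustrative cross-check (not part of the original analysis): pairwise correlations
# of the plotted variables, to quantify the visual trends described above.
print(df[['DebtToIncomeRatio', 'EstimatedLoss', 'EstimatedReturn',
          'OnTimeProsperPayments', 'ProsperScore']].corr())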
###Output
_____no_output_____ |
DP/Policy Evaluation.ipynb | ###Markdown
Bellman Equation: $v_{k+1}(s) = \sum_{a \in A(s)} \pi(a|s) \sum_{s', r} p(s',r|s,a)[r + \gamma v_{k}(s')]$
###Code
def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
"""
Evaluate a policy given an environment and a full description of the environment's dynamics.
Args:
policy: [S, A] shaped matrix representing the policy.
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: Gamma discount factor.
Returns:
Vector of length env.nS representing the value function.
"""
# Start with a random (all 0) value function
V = np.zeros(env.nS)
V_prev = np.zeros(env.nS)
deltas = []
converged = False
while not converged:
delta = 0
for s in range(env.nS):
value = 0
#v_current =
# iterate over possible actions
for a, pr_a in enumerate(policy[s]):
# iterate over possible successor states
for pr_transition, s_next, r, done in env.P[s][a]:
value += pr_a * pr_transition * (r + discount_factor * V[s_next])
            delta = max(delta, np.abs(V[s] - value))
V[s] = value
deltas.append(delta)
if delta < theta:
converged = True
return np.array(V), deltas
print (env.P[1])
random_policy = np.ones([env.nS, env.nA]) / env.nA
v, deltas = policy_eval(random_policy, env)
# Test: Make sure the evaluated policy is what we expected
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
v
plt.plot(deltas)
plt.xlabel('iterations')
plt.ylabel('max diff between estimate and target')
plt.title('Policy Evaluation')
plt.show()
###Output
_____no_output_____
###Markdown
$ \\v_{k+1}(s)=\sum_{a{\in}A}{\pi}(a|s)\left(R_{s}^{a}+{\gamma}\sum_{s^{\prime}{\in}S}P_{ss^{\prime}}^{a}v_{k}(s^{\prime})\right) \\\boldsymbol{v}^{k+1}=\boldsymbol{R}^{\pi}+{\gamma}P^{\pi}\boldsymbol{v}^{k} \\R_{s}^{\pi}=\sum_{a{\in}A}{\pi}(a|s)R_{s}^{a} \\P_{ss^{\prime}}^{\pi}=\sum_{a{\in}A}{\pi}(a|s)P_{ss^{\prime}}^{a} \\$
###Code
def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
"""
Evaluate a policy given an environment and a full description of the environment's dynamics.
Args:
policy: [S, A] shaped matrix representing the policy.
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: Gamma discount factor.
Returns:
Vector of length env.nS representing the value function.
"""
# Start with a random (all 0) value function
V = np.zeros(env.nS)
V = V.reshape((-1,1))
R_pi = np.array([np.sum([policy[s][a] * env.P[s][a][0][2]
for a in range(env.nA)])
for s in range(env.nS)])
R_pi = R_pi.reshape((-1,1))
P_pi = np.array([
[
np.sum([policy[s_from][a]
for a in range(env.nA)
if env.P[s_from][a][0][1]==s_to])
for s_to in range(env.nS)]
for s_from in range(env.nS)])
while True:
# TODO: Implement!
V_new = R_pi + discount_factor * np.matmul(P_pi, V)
if np.max(np.abs(V_new - V)) < theta:
V = V_new
break
V = V_new
return V.flatten().tolist()
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
# Test: Make sure the evaluated policy is what we expected
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
import timeit
n = 100
t = timeit.timeit("policy_eval(random_policy, env)", globals=globals(), number = n) / n
print("avg time: {}".format(t))
###Output
avg time: 0.0038263267799629828
###Markdown
Policy EvaluationPolicy evaluation uses the Bellman Expectation Equation:\begin{align*} v_{k+1}(s) &= \mathbb{E}_\pi [R_{t+1} + \gamma v_k(S_{t+1}) \mid S_t=s] \\ &= \sum_a \pi(a|s) \sum_{s'}P_{ss'}^a [r + \gamma v_k(s')]\end{align*}
###Code
def policy_eval(policy, env, discount_factor=1.0, theta=0.00001):
"""
Evaluate a policy given an environment and a full description of the environment's dynamics.
Args:
policy: [S, A] shaped matrix representing the policy.
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: Gamma discount factor.
Returns:
Vector of length env.nS representing the value function.
"""
# Start with a random (all 0) value function
V = np.zeros(env.nS)
while True:
# TODO: Implement!
delta = 0
for s in range(env.nS):
v = 0
for a, a_prob in enumerate(policy[s]):
for prob, next_state, reward, done in env.P[s][a]: # each action has only 1 next state with prob 100%
v += a_prob * prob * (reward + discount_factor*V[next_state])
delta = max(delta, abs(v-V[s]))
V[s] = v
if delta < theta:
break
return np.array(V)
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
v
# Test: Make sure the evaluated policy is what we expected
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
###Output
_____no_output_____
###Markdown
Policy Evaluation
###Code
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
# Test: Make sure the evaluated policy is what we expected
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
###Output
_____no_output_____
###Markdown
* Test
###Code
############################
print(env)
env.nS
# env.P
for prob, next_state, reward, done in env.P[0][0]:
print(prob, end=' ');
print(next_state, end=' ');
print(reward, end=' ');
print(done, end=' ');
###Output
1.0 0 0.0 True
###Markdown
###Code
def policy_eval(policy, env, discount_factor=1.0, theta=1e-4):
"""
Evaluate a policy given an environment and a full description of the environment's dynamics.
Args:
policy: [S, A] shaped matrix representing the policy.
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
theta: We stop evaluation once our value function change is less than theta for all states.
discount_factor: gamma discount factor.
Returns:
Vector of length env.nS representing the value function.
"""
# Start with a random (all 0) value function
V = np.zeros(env.nS)
while True:
# TODO: Implement!
delta = 0
for s in range(env.nS):
v = 0;
            # The outer for loops over actions; the inner for just traverses the transitions
            # Loop over all (action, probability) pairs available in state s; traverse the successor state values V reachable after taking that action and accumulate them
for a, action_prob in enumerate(policy[s]):
for prob, next_state, reward, done in env.P[s][a]:
v += action_prob * prob * (reward + discount_factor * V[next_state])
delta = max(delta, np.abs(v - V[s])) #
            V[s] = v  # V[s] is assigned only after delta is computed, so we know how much the value of state s changed from before
if delta < theta:
break
return np.array(V)
random_policy = np.ones((env.nS, env.nA)) / env.nA
v = policy_eval(random_policy, env)
###Output
_____no_output_____
###Markdown
Test
###Code
v
# random_policy
for a, action_prob in enumerate(random_policy[0]):
print(a, end=' ')
print(action_prob)
random_policy
###Output
_____no_output_____
###Markdown
###Code
# Test: Make sure the evaluated policy is what we expected
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
###Output
_____no_output_____
###Markdown
Run policy_eval function
###Code
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
###Output
_____no_output_____
###Markdown
Run the testing to check whether the calculation is done correctly.
###Code
# Test: Make sure the evaluated policy is what we expected
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2) # correct
# outcome
print(v)
###Output
[ 0. -13.99989315 -19.99984167 -21.99982282 -13.99989315
-17.99986052 -19.99984273 -19.99984167 -19.99984167 -19.99984273
-17.99986052 -13.99989315 -21.99982282 -19.99984167 -13.99989315
0. ]
|
_notebooks/2020-04-15-PCA-From-Scratch.ipynb | ###Markdown
Principal Component Analysis (PCA) from Scratch> The world doesn't need a yet another PCA tutorial, just like the world doesn't need another silly love song. But sometimes you still get the urge to write your own- toc: true- branch: master- badges: true- comments: true- metadata_key1: metadata_value1- metadata_key2: metadata_value2- description: There are many tutorials on PCA, but this one has interactive 3D graphs! - image: https://i.imgur.com/vKiH8As.png  RelevancePrincipal Component Analysis (PCA) is a data-reduction technique that finds application in a wide variety of fields, including biology, sociology, physics, medicine, and audio processing. PCA may be used as a "front end" processing step that feeds into additional layers of machine learning, or it may be used by itself, for example when doing data visualization. It is so useful and ubiquitious that is is worth learning not only what it is for and what it is, but how to actually *do* it.In this interactive worksheet, we work through how to perform PCA on a few different datasets, writing our own code as we go. Other (better?) treatmentsMy treatment here was written several months after viewing... - the [excellent demo page at setosa.io](http://setosa.io/ev/principal-component-analysis/)- this quick [1m30s video of a teapot](https://www.youtube.com/watch?v=BfTMmoDFXyE), - this [great StatsQuest video](https://www.youtube.com/watch?v=FgakZw6K1QQ)- this [lecture from Andrew Ng's course](https://www.youtube.com/watch?v=rng04VJxUt4) Basic IdeaPut simply, PCA involves making a coordinate transformation (i.e., a rotation) from the arbitrary axes (or "features") you started with to a set of axes 'aligned with the data itself,' and doing this almost always means that you can get rid of a few of these 'components' of data that have small variance without suffering much in the way of accurcy while saving yourself a *ton* of computation. Once you "get it," you'll find PCA to be almost no big deal, if it weren't for the fact that it's so darn useful! We'll define the following terms as we go, but here's the process in a nutshell:1. Covariance: Find the *covariance matrix* for your dataset2. Eigenvectors: Find the *eigenvectors* of that matrix (these are the "components" btw)3. Ordering: Sort the eigenvectors/'dimensions' from biggest to smallest variance4. Projection / Data reduction: Use the eigenvectors corresponding to the largest variance to project the dataset into a reduced- dimensional space6. (Check: How much did we lose by that truncation?) CaveatsSince PCA will involve making linear transformations, there are some situations where PCA won't help but...pretty much it's handy enough that it's worth giving it a shot! CovarianceIf you've got two data dimensions and they vary together, then they are co-variant. **Example:** Two-dimensional data that's somewhat co-linear:
###Code
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objects as go
N = 100
x = np.random.normal(size=N)
y = 0.5*x + 0.2*(np.random.normal(size=N))
fig = go.Figure(data=[go.Scatter(x=x, y=y, mode='markers',
marker=dict(size=8,opacity=0.5), name="data" )])
fig.update_layout( xaxis_title="x", yaxis_title="y",
yaxis = dict(scaleanchor = "x",scaleratio = 1) )
fig.show()
###Output
_____no_output_____
###Markdown
VarianceSo, for just the $x$ component of the above data, there's some *mean* value (which in this case is zero), and there's some *variance* about this mean: technically **the variance is the average of the squared differences from the mean.** If you're familiar with the standard deviation, usually denoted by $\sigma$, the variance is just the square of the standard deviation. If $x$ had units of meters, the variance $\sigma^2$ would have units of meters^2. Think of the variance as the "spread," or "extent" of the data, about some particular axis (or input, or "feature").Similarly we can look at the variance in just the $y$ component of the data. For the above data, the variances in $x$ and $y$ are
###Code
print("Variance in x =",np.var(x))
print("Variance in y =",np.var(y))
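# From-scratch check of the definition above (a sketch): the variance is the mean of the
# squared deviations from the mean, and the standard deviation is its square root.
print("Variance in x, by hand =", ((x - x.mean())**2).mean())
print("Std dev of x =", np.sqrt(np.var(x)), "=", x.std())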
###Output
Variance in x = 0.6470431671825421
Variance in y = 0.19318628312072175
###Markdown
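Just to tie that definition to code, here's a quick sketch of the variance computed "by hand" (the mean of the squared deviations from the mean), so you can see it matches `np.var`:
###Code
# Variance 'from scratch': the average of the squared differences from the mean
def variance(a):
    return ((a - a.mean())**2).mean()
print("By hand: variance in x =", variance(x), ", variance in y =", variance(y))
print("np.var:  variance in x =", np.var(x), ", variance in y =", np.var(y))
###Output
_____no_output_____
###Markdown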
Covariance You'll notice in the above graph that as $x$ varies, so does $y$ -- pretty much. So $y$ is "*covariant*" with $x$. ["Covariance indicates the level to which two variables vary together."](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) To compute it, it's kind of like the regular variance, except that instead of squaring the deviation from the mean for one variable, we multiply the deviations for the two variables:$${\rm Cov}(x,y) = {1\over N-1}\sum_{j=1}^N (x_j-\mu_x)(y_j-\mu_y),$$where $\mu_x$ and $\mu_y$ are the means for the x- and y- components of the data, respectively. Note that you can reverse $x$ and $y$ and get the same result, and the covariance of a variable with itself is just the regular variance -- but with one caveat! The caveat is that we're dividing by $N-1$ instead of $N$, so unlike the regular variance we're not quite taking the mean. Why this? Well, for large datasets this makes essentially no difference, but for small numbers of data points, using $N$ can give values that tend to be a bit too small for most people's tastes, so the $N-1$ was introduced to "reduce small sample bias." In Python code, the covariance calculation looks like
###Code
def covariance(a,b):
return ( (a - a.mean())*(b - b.mean()) ).sum() / (len(a)-1)
print("Covariance of x & y =",covariance(x,y))
print("Covariance of y & x =",covariance(y,x))
print("Covariance of x with itself =",covariance(x,x),", variance of x =",np.var(x))
print("Covariance of y with itself =",covariance(y,y),", variance of y =",np.var(y))
###Output
Covariance of x & y = 0.3211758726837525
Covariance of y & x = 0.3211758726837525
Covariance of x with itself = 0.6535789567500425 , variance of x = 0.6470431671825421
Covariance of y with itself = 0.19513765971790076 , variance of y = 0.19318628312072175
###Markdown
Covariance matrixSo what we do is we take the covariance of every variable with every variable (including itself) and make a matrix out of it. Along the diagonal will be the variance of each variable (except for that $N-1$ in the denominator), and the rest of the matrix will be the covariances. Note that since the order of the variables doesn't matter when computing covariance, the matrix will be *symmetric* (i.e. it will equal its own transpose, i.e. will have a reflection symmetry across the diagonal) and thus will be a *square* matrix. Numpy gives us a handy thing to call:
###Code
data = np.stack((x,y),axis=1) # pack the x & y data together in one 2D array
print("data.shape =",data.shape)
cov = np.cov(data.T) # .T b/c numpy wants variables along rows rather than down columns
print("covariance matrix =\n",cov)
###Output
data.shape = (100, 2)
covariance matrix =
[[0.65357896 0.32117587]
[0.32117587 0.19513766]]
###Markdown
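As a sanity check, we can assemble the same 2x2 matrix ourselves from the `covariance()` function we wrote above (a quick sketch; it should match the `np.cov` output up to floating-point noise):
###Code
# Build the covariance matrix 'by hand' from our own covariance() function
cov_by_hand = np.array([[covariance(x,x), covariance(x,y)],
                        [covariance(y,x), covariance(y,y)]])
print("covariance matrix by hand =\n", cov_by_hand)
###Output
_____no_output_____
###Markdown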
Some 3D data to work withNow that we know what a covariance matrix is, let's generate some 3D data that we can use for what's coming next. Since there are 3 variables or 3 dimensions, the covariance matrix will now be 3x3.
###Code
z = -.5*x + 2*np.random.uniform(size=N)
data = np.stack((x,y,z)).T
print("data.shape =",data.shape)
cov = np.cov(data.T)
print("covariance matrix =\n",cov)
# Plot our data
import plotly.graph_objects as go
fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z,mode='markers', marker=dict(size=8,opacity=0.5), name="data" )])
fig.update_layout( xaxis_title="x", yaxis_title="y", yaxis = dict(scaleanchor = "x",scaleratio = 1) )
fig.show()
###Output
data.shape = (100, 3)
covariance matrix =
[[ 0.65357896 0.32117587 -0.33982198]
[ 0.32117587 0.19513766 -0.15839307]
[-0.33982198 -0.15839307 0.5207214 ]]
###Markdown
(Note that even though our $z$ data didn't explicitly depend on $y$, the fact that $y$ is covariant with $x$ means that $y$ and $z$ 'coincidentally' have a nonzero covariance. This sort of thing shows up in many datasets where two variables are correlated and may give rise to 'confounding' factors.) So now we have a covariance matrix. The next thing in PCA is to find the 'principal components'. This means the directions along which the data varies the most. You can kind of estimate these by rotating the 3D graph above. See also [this great YouTube video of a teapot](https://www.youtube.com/watch?v=BfTMmoDFXyE) (1min 30s) that explains PCA in this manner. To do Principal Component Analysis, we need to find the aforementioned "components," and this requires finding *eigenvectors* for our dataset's covariance matrix. What is an eigenvector, *really*?First a **definition**. (Stay with me! We'll flesh this out in what comes after this.)Given some matrix (or 'linear operator') ${\bf A}$ with dimensions $n\times n$ (i.e., $n$ rows and $n$ columns), there exists a set of $n$ vectors $\vec{v}_i$ (each with dimension $n$, and $i = 1...n$ counts which vector we're talking about) **such that** multiplying one of these vectors by ${\bf A}$ results in a vector (anti)parallel to $\vec{v}_i$, with a length that's multiplied by some constant $\lambda_i$. In equation form:$${\bf A}\vec{v}_i = \lambda_i \vec{v}_i,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$$where the constants $\lambda_i$ are called *eigenvalues* and the vectors $\vec{v}_i$ are called *eigenvectors*. (Note that I'm departing a bit from common notation that uses $\vec{x}_i$ instead of $\vec{v}_i$; I don't want people to get confused when I want to use $x$'s for coordinate variables.)A graphical version of this is shown in Figure 1: > Figure 1. "An eigenvector is a vector that a linear operator sends to a multiple of itself" -- [Daniel McLaury](https://www.quora.com/What-does-Eigen-mean/answer/Daniel-McLaury) Brief Q & A before we go on:1. "So what's with the 'eigen' part?" That's a German prefix, which in this context means "inherent" or "own". 2. "Can a non-square matrix have eigenvectors?" Well,...no, and think of it this way: If $\bf{A}$ were an $n\times m$ matrix (where $m \neq n$), then it would be mapping from $m$ dimensions into $n$ dimensions, but on the "other side" of the equation with the $\lambda_i \vec{v}_i$, *that* would still have $m$ dimensions, so... you'd be saying an $n$-dimensional object equals an $m$-dimensional object, which is a no-go. 3. "But my dataset has many more rows than columns, so what am I supposed to do about that?" Just wait! It'll be ok. We're not actually going to take the eigenvectors of the dataset 'directly', we're going to take the eigenvectors of the *covariance matrix* of the dataset.4. "Are eigenvectors important?" You bet! They get used in many areas of science. I first encountered them in quantum mechanics.*They describe the "principal vectors" of many objects, or "normal modes" of oscillating systems. They get used in [computer vision](https://www.visiondummy.com/2014/03/eigenvalues-eigenvectors/), and... lots of places. You'll see them almost anywhere matrices & tensors are employed, such as our topic for today: Data science!*Ok that's not quite true: I first encountered them in an extremely boring Linear Algebra class taught by an extremely boring NASA engineer who thought he wanted to try teaching. But it wasn't until later that I learned anything about their relevance for...anything. 
Consequently I didn't "learn them" very well so writing this is a helpful review for me. How to find the eigenvectors of a matrixYou call a library routine that does it for you, of course! ;-)
###Code
from numpy import linalg as LA
lambdas, vs = LA.eig(cov)
lambdas, vs
###Output
_____no_output_____
###Markdown
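Before doing anything by hand, it's worth a quick numerical check that these numbers really satisfy the defining equation (1), i.e. that the covariance matrix times each eigenvector gives back that eigenvector scaled by its eigenvalue (a small sketch):
###Code
# Verify cov @ v_i == lambda_i * v_i for each eigenvector (column of vs)
for i in range(len(lambdas)):
    print("Eigenpair", i+1, "satisfies equation (1):", np.allclose(cov @ vs[:,i], lambdas[i]*vs[:,i]))
###Output
_____no_output_____
###Markdown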
Ok sort of kidding; we'll do it "from scratch". But, one caveat before we start: Some matrices can be "weird" or "problematic" and have things like "singular values." There are sophisticated numerical libraries for doing this, and joking aside, for real-world numerical applications you're better off calling a library routine that other very smart and very careful people have written for you. But for now, we'll do the straightforward way which works pretty well for many cases.We'll follow three basic steps:1. Find the eigenvalues 2. 'Plug in' each eigenvalue to get a system of linear equations for the values of the components of the corresponding eigenvector3. Solve this linear system. 1. Find the eigenvalues Ok I'm hoping you at least can recall what a [determinant](https://en.wikipedia.org/wiki/Determinant) of a matrix is. Many people, even if they don't know what a determinant is good for (e.g. tons of proofs & properties all rest on the determinant), still at least remember how to calculate one. The way to get the eigenvalues is to take the determinant of the difference between $\bf{A}$ and $\lambda$ times the *identity matrix* $\bf{I}$ (which is just ones along the diagonal and zeros otherwise) and set that determinant equal to zero...$$det( \bf{A} - \lambda I) = 0 $$> Just another observation: Since ${\bf I}$ is a square matrix, that means $\bf{A}$ has to be a square matrix too.Then solving for $\lambda$ will give you a *polynomial equation* in $\lambda$, the solutions to (or roots of) which are the eigenvalues $\lambda_i$. Let's do an example:$${\bf A} = \begin{bmatrix}-2 & 2 & 1\\-5 & 5 & 1\\-4 & 2 & 3\end{bmatrix}$$To find the eigenvalues we set $$det( \bf{A} - \lambda I) =\begin{vmatrix}-2-\lambda & 2 & 1\\-5 & 5-\lambda & 1\\-4 & 2 & 3-\lambda\end{vmatrix} = 0.$$This gives us the equation...$$0 = \lambda^3 - 6\lambda^2 + 11\lambda - 6$$which has the 3 solutions (in descending order)$$ \lambda = 3, 2, 1.$$ *(Aside: to create an integer matrix with integer eigenvalues, I used [this handy web tool](https://ericthewry.github.io/integer_matrices/))*.Just to check that against the numpy library:
###Code
A = np.array([[-2,2,1],[-5,5,1],[-4,2,3]])
def sorted_eig(A): # For now we sort 'by convention'. For PCA the sorting is key.
lambdas, vs = LA.eig(A)
# Next line just sorts values & vectors together in order of decreasing eigenvalues
lambdas, vs = zip(*sorted(zip(list(lambdas), list(vs.T)),key=lambda x: x[0], reverse=True))
return lambdas, np.array(vs).T # un-doing the list-casting from the previous line
lambdas, vs = sorted_eig(A)
lambdas # hold off on printing out the eigenvectors until we do the next part!
###Output
_____no_output_____
###Markdown
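If you'd rather not grind through the determinant algebra, numpy can also hand you the characteristic polynomial and its roots directly (a quick sketch; `np.poly` of a square matrix returns the coefficients of its characteristic polynomial, and `np.roots` finds the roots):
###Code
# Characteristic polynomial of A and its roots (which are the eigenvalues)
coeffs = np.poly(A)        # expect approximately [1, -6, 11, -6]
print("characteristic polynomial coefficients:", coeffs)
print("roots of that polynomial:", np.roots(coeffs))
###Output
_____no_output_____
###Markdown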
Close enough! 2. Use the eigenvalues to get the eigenvectorsAlthough it was announced in mid 2019 that [you can get eigenvectors directly from eigenvalues](https://arxiv.org/abs/1908.03795), the usual way people have done this for a very long time is to go back to the matrix $\bf{A}$ and solve the *linear system* of equation (1) above, for each of the eigenvalues. For example, for $\lambda_1=3$, we have $${\bf A} \vec{v}_1 = 3\vec{v}_1$$i.e.$$\begin{bmatrix}-2 & 2 & 1\\-5 & 5 & 1\\-4 & 2 & 3\end{bmatrix}\begin{bmatrix}v_{1x}\\v_{1y}\\v_{1z}\\\end{bmatrix}= 3\begin{bmatrix}v_{1x}\\v_{1y}\\v_{1z}\\\end{bmatrix}$$This amounts to 3 equations for 3 unknowns,...which I'm going to assume you can handle... For the other eigenvalues things proceed similarly. The solutions we get for the 3 eigenvalues are: $$\lambda_1 = 3: \ \ \ \vec{v}_1 = (1,2,1)^T$$ $$\lambda_2 = 2: \ \ \ \vec{v}_2 = (1,1,2)^T$$$$\lambda_3 = 1: \ \ \ \vec{v}_3 = (1,1,1)^T$$Since our original equation (1) allows us to scale eigenvectors by any arbitrary constant, often we'll express eigenvectors as *unit* vectors $\hat{v}_i$. This will amount to dividing by the length of each vector, i.e. in our example multiplying the three vectors by $1/\sqrt{6}$, $1/\sqrt{6}$ and $1/\sqrt{3}$, respectively. In this setting $$\lambda_1 = 3: \ \ \ \hat{v}_1 = (1/\sqrt{6},2/\sqrt{6},1/\sqrt{6})^T$$ $$\lambda_2 = 2: \ \ \ \hat{v}_2 = (1/\sqrt{6},1/\sqrt{6},2/\sqrt{6})^T$$$$\lambda_3 = 1: \ \ \ \hat{v}_3 = (1,1,1)^T/\sqrt{3}$$Checking our answers (left) with numpy's answers (right):
###Code
print(" "*15,"Ours"," "*28,"Numpy")
print(np.array([1,2,1])/np.sqrt(6), vs[:,0])
print(np.array([1,1,2])/np.sqrt(6), vs[:,1])
print(np.array([1,1,1])/np.sqrt(3), vs[:,2])
###Output
Ours Numpy
[0.40824829 0.81649658 0.40824829] [-0.40824829 -0.81649658 -0.40824829]
[0.40824829 0.40824829 0.81649658] [0.40824829 0.40824829 0.81649658]
[0.57735027 0.57735027 0.57735027] [0.57735027 0.57735027 0.57735027]
###Markdown
The fact that the first one differs by a multiplicative factor of -1 is not an issue. Remember: eigenvectors can be multiplied by an arbitrary constant. (Kind of odd that numpy doesn't choose the positive version though!) One more check: let's multiply our eigenvectors times A to see what we get:
###Code
print("A*v_1 / 3 = ",np.matmul(A, np.array([1,2,1]).T)/3 ) # Dividing by eigenvalue
print("A*v_2 / 2 = ",np.matmul(A, np.array([1,1,2]).T)/2 ) # to get vector back
print("A*v_3 / 1 = ",np.matmul(A, np.array([1,1,1]).T) )
###Output
A*v_1 / 3 = [1. 2. 1.]
A*v_2 / 2 = [1. 1. 2.]
A*v_3 / 1 = [1 1 1]
###Markdown
Great! Let's move on. Back to our data! Eigenvectors for our sample 3D datasetRecall we named our 3x3 covariance matrix 'cov'. So now we'll compute its eigenvectors, and then re-plot our 3D data and also plot the 3 eigenvectors with it...
###Code
# Now that we know we can get the same answers as the numpy library, let's use it
lambdas, vs = sorted_eig(cov) # Compute e'vals and e'vectors of cov matrix
print("lambdas, vs =\n",lambdas,"\n",vs)
# Re-plot our data
fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z,mode='markers',
marker=dict(size=8,opacity=0.5), name="data" ) ])
# Draw some extra 'lines' showing eigenvector directions
n_ev_balls = 50 # the lines will be made of lots of balls in a line
ev_size= 3 # size of balls
t = np.linspace(0,1,num=n_ev_balls) # parameterizer for drawing along vec directions
for i in range(3): # do this for each eigenvector
# Uncomment the next line to scale (unit) vector by size of the eigenvalue
# vs[:,i] *= lambdas[i]
ex, ey, ez = t*vs[0,i], t*vs[1,i], t*vs[2,i]
fig.add_trace(go.Scatter3d(x=ex, y=ey, z=ez,mode='markers',
marker=dict(size=ev_size,opacity=0.8), name="v_"+str(i+1)))
fig.update_layout( xaxis_title="x", yaxis_title="y", yaxis = dict(scaleanchor = "x",scaleratio = 1) )
fig.show()
###Output
lambdas, vs =
(1.073114351318777, 0.26724003566904386, 0.0290836286576176)
[[-0.73933506 -0.47534042 -0.47690162]
[-0.3717427 -0.30238807 0.87770657]
[ 0.56141877 -0.82620393 -0.04686177]]
###Markdown
Things to note from the above graph:1. The first (red) eigenvector points along the direction of biggest variance 2. The second (greenish) eigenvector points along the direction of second-biggest variance3. The third (purple) eigenvector points along the direction of smallest variance. (If you edit the above code to rescale the vector length by the eigenvalue, you'll really see these three points become apparent!) "Principal Component" AnalysisNow we have our components (=eigenvectors), and we have them "ranked" by their "significance." Next we will *eliminate* one or more of the less-significant directions of variance. In other words, we will *project* the data onto the various *principal components* by projecting *along* the less-significant components. Or even simpler: We will "squish" the data along the smallest-variance directions. For the above 3D dataset, we're going to squish it into a 2D pancake -- by projecting along the direction of the 3rd (purple) eigenvector onto the plane defined by the 1st (red) and 2nd (greenish) eigenvectors.Yea, but how to do this projection? Projecting the dataIt's actually not that big of a deal. All we have to do is multiply by the eigenvector (matrix)! **OH WAIT! Hey, you want to see a cool trick!?** Check this out:
###Code
lambdas, vs = sorted_eig(cov)
proj_cov = vs.T @ cov @ vs # project the covariance matrix, using eigenvectors
proj_cov
###Output
_____no_output_____
###Markdown
What was THAT? Let me clean that up a bit for you...
###Code
proj_cov[np.abs(proj_cov) < 1e-15] = 0
proj_cov
###Output
_____no_output_____
###Markdown
**Important point:** What you just saw is the whole reason eigenvectors get used for so many things, because they give you a 'coordinate system' where different 'directions' *decouple* from each other. See, the system has its own (German: eigen) inherent set of orientations which are different from the 'arbitrary' coordinates that we 'humans' may have assigned initially. The numbers in that matrix are the covariances *in the directions* of the eigenvectors, instead of in the directions of the original x, y, and z. So really all we have to do is make a *coordinate transformation* using the matrix of eigenvectors, and then in order to project we'll literally just *drop* a whole index's-worth of data-dimension in this new coordinate system. :-) So, instead of $x$, $y$ and $z$, let's have three coordinates which (following physicist-notation) we'll call $q_1$, $q_2$ and $q_3$, and these will run along the directions given by the three eigenvectors. Let's write our data as an N-by-3 matrix, where N is the number of data points we have.
###Code
data = np.stack((x,y,z),axis=1)
data.shape # we had 100 data points, so expecting a 100x3 matrix
###Output
_____no_output_____
###Markdown
There are two ways of doing this, and I'll show you that they're numerically equivalent:1. Use *all* the eigenvectors to "rotate" the full dataset into the new coordinate system. Then perform a projection by truncating the last column of the rotated data.2. Truncate the last eigenvector, which will make a 3x2 projection matrix which will project the data onto the 2D plane defined by those two eigenvectors. Let's show them both:
###Code
print("\n 1. All data, rotated into new coordinate system")
W = vs[:,0:3] # keep all the eigenvectors
new_data_all = data @ W # project all the data
print("Checking: new_data_all.shape =",new_data_all.shape)
print("New covariance matrix = \n",np.cov(new_data_all.T) )
print("\n 2. Truncated data projected onto principal axes of coordinate system")
W = vs[:,0:2] # keep only the first and 2nd eigenvectors
print("W.shape = ",W.shape)
new_data_proj = data @ W # project
print("Checking: new_data_proj.shape =",new_data_proj.shape)
print("New covariance matrix in projected space = \n",np.cov(new_data_proj.T) )
# Difference between them
diff = new_data_all[:,0:2] - new_data_proj
print("\n Absolute maximum difference between the two methods = ",np.max(np.abs(diff)))
###Output
1. All data, rotated into new coordinate system
Checking: new_data_all.shape = (100, 3)
New covariance matrix =
[[1.07311435e+00 7.64444687e-17 3.77081428e-17]
[7.64444687e-17 2.67240036e-01 1.21906748e-16]
[3.77081428e-17 1.21906748e-16 2.90836287e-02]]
2. Truncated data projected onto principal axes of coordinate system
W.shape = (3, 2)
Checking: new_data_proj.shape = (100, 2)
New covariance matrix in projected space =
[[1.07311435e+00 7.64444687e-17]
[7.64444687e-17 2.67240036e-01]]
Absolute maximum difference between the two methods = 0.0
###Markdown
...Nada. The 2nd method will be faster computationally though, because it doesn't calculate stuff you're going to throw away. One more comparison between the two methods.Let's take a look at the "full" dataset (in blue) vs. the projected dataset (in red):
###Code
fig = go.Figure(data=[(go.Scatter3d(x=new_data_all[:,0], y=new_data_all[:,1], z=new_data_all[:,2],
mode='markers', marker=dict(size=4,opacity=0.5), name="full data" ))])
fig.add_trace(go.Scatter3d(x=new_data_proj[:,0], y=new_data_proj[:,1], z=new_data_proj[:,0]*0,
mode='markers', marker=dict(size=4,opacity=0.5), name="projected" ) )
fig.update_layout(scene_aspectmode='data')
fig.show()
###Output
_____no_output_____
###Markdown
(Darn it, [if only Plot.ly would support orthographic projections](https://community.plot.ly/t/orthographic-projection-for-3d-plots/3897) [[2](https://community.plot.ly/t/orthographic-projection-instead-of-perspective-for-3d-plots/10035)] it'd be a lot easier to visually compare the two datasets!) Beyond 3DSo typically we use PCA to throw out many more dimensions than just one. Often this is used for data visualization but it's also done for feature reduction, i.e. to send less data into your machine learning algorithm. (PCA can even be used just as a "multidimensional linear regression" algorithm, but you wouldn't want to!) How do you know how many dimensions to throw out?In other words, how many 'components' should you choose to keep when doing PCA? There are a few ways to make this *judgement call* -- it will involve a trade-off between accuracy and computational speed.You can make a graph of the amount of variance you get as a function of how many components you keep, and often there will be an 'elbow' at some point on the graph indicating a good cut-off point to choose. Stay tuned as we do the next example; we'll make such a graph. For more on this topic, see [this post by Arthur Gonsales](https://towardsdatascience.com/an-approach-to-choosing-the-number-of-components-in-a-principal-component-analysis-pca-3b9f3d6e73fe). Example: Handwritten Digits The [scikit-learn library](https://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html) uses this as an example and I like it. It goes as follows:1. Take a dataset of tiny 8x8 pixel images of handwritten digits.2. Run PCA to break it down from 8x8=64 dimensions to just two or three dimensions.3. Show on a plot how the different digits cluster in different regions of the space.4. (This part we'll save for the Appendix: Draw boundaries between the regions and use this as a classifier.)To be clear: In what follows, *each pixel* of the image counts as a "feature," i.e. as a dimension. Thus an entire image can be represented as a single point in a multidimensional space, in which distance from the origin along each dimension is given by the pixel intensity. In this example, the input space is *not* a 2D space that is 8 units wide and 8 units long, rather it consists of 8x8 = 64 dimensions.
###Code
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
digits = load_digits()
X = digits.data / 255.0  # note: these 8x8 digits actually range 0..16; a uniform rescale like this doesn't change the PCA directions
Y = digits.target
print(X.shape, Y.shape,'\n')
# Let's look at a few examples
for i in range(8): # show 8 examples
print("This is supposed to be a '",Y[i],"':",sep="")
plt.imshow(X[i].reshape([8,8]))
plt.show()
###Output
(1797, 64) (1797,)
This is supposed to be a '0':
###Markdown
Now let's do the PCA thang... First we'll try going down to 2 dimensions. This isn't going to work super great but we'll try:
###Code
digits_cov = np.cov(X.T)
print("digits_cov.shape = ",digits_cov.shape)
lambdas, vs = sorted_eig(np.array(digits_cov))
W = vs[:,0:2] # just keep two dimensions
proj_digits = X @ W
print("proj_digits.shape = ", proj_digits.shape)
# Make the plot
fig = go.Figure(data=[go.Scatter(x=proj_digits[:,0], y=proj_digits[:,1],# z=Y, #z=proj_digits[:,2],
mode='markers', marker=dict(size=6, opacity=0.7, color=Y), text=['digit='+str(j) for j in Y] )])
fig.update_layout( xaxis_title="q_1", yaxis_title="q_2", yaxis = dict(scaleanchor = "x",scaleratio = 1) )
fig.update_layout(scene_camera=dict(up=dict(x=0, y=0, z=1), center=dict(x=0, y=0, z=0), eye=dict(x=0, y=0, z=1.5)))
fig.show()
###Output
digits_cov.shape = (64, 64)
proj_digits.shape = (1797, 2)
###Markdown
This is 'sort of ok': There are some regions that are mostly one kind of digit. But you may say there's too much intermingling between classes for a lot of this plot. So let's try it again with 3 dimensions for PCA:
###Code
W = vs[:,0:3] # just three dimensions
proj_digits = X @ W
print("proj_digits.shape = ", proj_digits.shape)
# Make the plot, coloring each point by the digit it represents.
fig = go.Figure(data=[go.Scatter3d(x=proj_digits[:,0], y=proj_digits[:,1], z=proj_digits[:,2],
mode='markers', marker=dict(size=4, opacity=0.8, color=Y, showscale=True),
text=['digit='+str(j) for j in Y] )])
fig.update_layout(title="8x8 Handwritten Digits", xaxis_title="q_1", yaxis_title="q_2", yaxis = dict(scaleanchor = "x",scaleratio = 1) )
fig.show()
###Output
proj_digits.shape = (1797, 3)
###Markdown
*Now* we can start to see some definition! The 6's are pretty much in one area, the 2's are in another area, and the 0's are in still another, and so on. There is some intermingling to be sure (particularly between the 5's and 8's), but you can see that this 'kind of' gets the job done, and instead of dealing with 64 dimensions, we're down to 3! Graphing Variance vs. ComponentsEarlier we asked the question of how many components one should keep. To answer this quantitatively, we note that the eigenvalues of the covariance matrix are themselves measures of the variance in the dataset. So these eigenvalues encode the 'significance' that each feature-dimension has in the overall dataset. We can plot these eigenvalues in order and then look for a 'kink' or 'elbow' in the graph as a place to truncate our representation...
###Code
plt.plot( np.abs(lambdas)/np.sum(lambdas) )
plt.xlabel('Number of components')
plt.ylabel('Significance')
plt.show()
###Output
_____no_output_____
###Markdown
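Another common way to quantify the cut-off is to plot the *cumulative* fraction of the total variance captured as you keep more and more components, and pick the smallest number that clears whatever threshold you care about (a quick sketch, using an illustrative 95% threshold):
###Code
# Cumulative fraction of total variance captured by the first k components
ev = np.abs(np.array(lambdas))           # eigenvalues of the digits covariance matrix, sorted descending
cumulative = np.cumsum(ev) / np.sum(ev)
plt.plot(cumulative)
plt.axhline(0.95, color='gray', linestyle='--')  # an example threshold
plt.xlabel('Number of components kept')
plt.ylabel('Cumulative fraction of variance')
plt.show()
print("Components needed to capture 95% of the variance:", int(np.argmax(cumulative >= 0.95)) + 1)
###Output
_____no_output_____
###Markdown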
...So, if we were wanting to represent this data in more than 3 dimensions but less than the full 64, we might choose around 10 principal components, as this looks like roughly where the 'elbow' in the first graph lies. InterpretabilityWhat is the meaning of the new coordinate axes or 'features' $q_1$, $q_2$, etc? Sometimes there exists a compelling physical interpretation of these features (e.g., as in the case of eigenmodes of coupled oscillators), but often...there may not be any. And yet we haven't even done any 'real machine learning' at this point! ;-) This is an important topic. Modern data regulations such as the European Union's [GDPR](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation) require that models used in algorithmic decision-making be 'explainable'. If the data being fed into such algorithms is already abstracted via methods such as PCA, this could be an issue. Thankfully, the linearity of the components means that one can describe each principal component as a linear combination of inputs. Further reading There are [many books](https://www.google.com/search?client=ubuntu&channel=fs&q=book+pca+principal+component+analysis&ie=utf-8&oe=utf-8) devoted entirely to the intricacies of PCA and its applications. Hopefully this post has helped you get a better feel for how to construct a PCA transformation and what it might be good for. To expand on this see the following... Examples & links* ["Eigenstyle: Principal Component Analysis and Fashion"](https://graceavery.com/principalcomponentanalysisandfashion/) by Grace Avery. Uses PCA on Fashion-MNIST. It's good!* [Neat paper by my friend Dr. Ryan Bebej](https://link.springer.com/article/10.1007/s10914-008-9099-1) from when he was a student and used PCA to classify locomotion types of prehistoric aquatic mammals based on skeletal measurements alone. * [Andrew Ng's Machine Learning Course, Lecture on PCA](https://www.coursera.org/lecture/machine-learning/principal-component-analysis-algorithm-ZYIPa). How I first learned about this stuff. * [PCA using Python](https://towardsdatascience.com/pca-using-python-scikit-learn-e653f8989e60) by Michael Galarnyk. Does similar things to what I've done here, although maybe better! * [Plot.ly PCA notebook examples](https://plot.ly/python/v3/ipython-notebooks/principal-component-analysis/) Appendix A: Overkill: Bigger Handwritten DigitsSure, 8x8 digit images are boring. What about 28x28 images, as in the [MNIST dataset](http://yann.lecun.com/exdb/mnist/)? Let's roll...
###Code
#from sklearn.datasets import fetch_mldata
from sklearn.datasets import fetch_openml
from random import sample
#mnist = fetch_mldata('MNIST original')
mnist = fetch_openml('mnist_784', version=1, cache=True)
X2 = mnist.data / 255
Y2 = np.array(mnist.target,dtype=int)
# Let's grab some indices for random shuffling
indices = list(range(X2.shape[0]))
# Let's look at a few examples
for i in range(8): # 8 is good
i = sample(indices,1)
print("This is supposed to be a ",Y2[i][0],":",sep="")
plt.imshow(X2[i].reshape([28,28]))
plt.show()
# Like we did before... Almost the whole PCA method is the next 3 lines!
mnist_cov = np.cov(X2.T)
lambdas, vs = sorted_eig(np.array(mnist_cov))
W = vs[:,0:3] # Grab the 3 most significant dimensions
# Plotting all 70,000 data points is going to be too dense to look at.
# Instead let's grab a random sample of 5,000 points
n_plot = 5000
indices = sample(list(range(X2.shape[0])), n_plot)
proj_mnist = np.array(X2[indices] @ W, dtype=np.float32) # Last step of PCA: project
fig = go.Figure(data=[go.Scatter3d(x=proj_mnist[:,0], y=proj_mnist[:,1], z=proj_mnist[:,2],
mode='markers', marker=dict(size=4, opacity=0.7, color=Y2[indices],
showscale=True), text=['digit='+str(j) for j in Y2[indices]] )])
fig.show()
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:9: ComplexWarning:
Casting complex values to real discards the imaginary part
###Markdown
...bit of a mess. Not as cleanly separated as the 8x8 image examples. You can see that the 0's are well separated from the 1's and the 3's, but everything else is pretty mixed together. (I suspect the 1's are clustered strongly because they involve the most dark pixels.)If you wanted to push this further then either keeping more dimensions (thereby making it un-visualizable) or just using a different method entirely (e.g. t-SNE or even better: [UMAP](https://pair-code.github.io/understanding-umap/)) would be the way to go. Still, it's neat to see that you can get somewhat intelligible results in 3D even on this 'much harder' problem. Appendix B: Because We Can: Turning it into a Classifier...But let's not do a neural network because all I ever do are neural networks, and because I don't want to have to take the time & space to explain how they work or load in external libraries. Let's do k-nearest-neighbors instead, because it's intuitively easy to grasp and it's not hard to code up:> For any new point we want to evaluate, we take a 'vote' of whatever some number (called $k$) of its nearest neighbor points are already assigned as, and we set the class of the new point according to that vote. Making an efficient classifier is all about finding the *boundaries* between regions (and usually subject to some user-adjustable parameter like $k$ or some numerical threshold). Finding these boundaries can be about finding the 'edge cases' that cause the system to 'flip' (discontinuously) from one result to another. However, we are *not* going to make an efficient classifier today. ;-) Let's go back to the 8x8 digits example, and split it into a training set and a testing set (so we can check ourselves):
###Code
# random shuffled ordering of the whole thing
indices = sample(list(range(X.shape[0])), X.shape[0])
X_shuf, Y_shuf = X[indices,:], Y[indices]
# 80-20 train-test split
max_train_ind = int(0.8*X.shape[0])
X_train, Y_train = X_shuf[0:max_train_ind,:], Y_shuf[0:max_train_ind]
X_test, Y_test = X_shuf[max_train_ind:-1,:], Y_shuf[max_train_ind:-1]
# Do PCA on training set
train_cov = np.cov(X_train.T)
ell, v = sorted_eig(np.array(train_cov))
pca_dims = 3 # number of top 'dimensions' to take
W_train = v[:,0:pca_dims]
proj_train = X_train @ W_train
# also project the testing set while we're at it
proj_test = X_test @ W_train # yes, same W_train
###Output
_____no_output_____
###Markdown
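(Optional quick check, using the sorted eigenvalues `ell` from the cell above: how much of the training set's total variance do those `pca_dims` components actually retain? This is just a sketch of the same 'how many components' question applied to our train split.)
###Code
# Fraction of total variance retained by the top pca_dims components of the training covariance
ev = np.abs(np.array(ell))
print("Fraction of variance kept by the top", pca_dims, "components:", ev[:pca_dims].sum() / ev.sum())
###Output
_____no_output_____
###Markdown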
Now let's make a little k-nearest neighbors routine...
###Code
from collections import Counter
def knn_predict(xnew, proj_train, Y_train, k=3):
"""
xnew is a new data point that has the same shape as one row of proj_train.
Given xnew, calculate the (squared) distance to all the points in X_train
to find out which ones are nearest.
"""
distances = ((proj_train - xnew)**2).sum(axis=1)
# stick on an extra column of indexing 'hash' for later use after we sort
dists_i = np.stack( (distances, np.array(range(Y_train.shape[0]) )),axis=1 )
dists_i = dists_i[dists_i[:,0].argsort()] # sort in ascending order of distance
knn_inds = (dists_i[0:k,-1]).astype(int) # Grab the indexes for k nearest neighbors
# take 'votes':
knn_targets = list(Y_train[knn_inds]) # which classes the nn's belong to
votes = Counter(knn_targets) # count up how many of each class are represented
return votes.most_common(1)[0][0] # pick the winner, or the first member of a tie
# Let's test it on the first element of the testing set
x, y_true = proj_test[0], Y_test[0]
guess = knn_predict(x, proj_train, Y_train)
print("guess = ",guess,", true = ",y_true)
###Output
guess = 6 , true = 6
###Markdown
Now let's try it for the 'unseen' data in the testing set, and see how we do...
###Code
mistakes, n_test = 0, Y_test.shape[0]
for i in range(n_test):
x = proj_test[i]
y_pred = knn_predict(x, proj_train, Y_train, k=3)
y_true = Y_test[i]
if y_true != y_pred:
mistakes += 1
if i < 20: # show some examples
print("x, y_pred, y_true =",x, y_pred, y_true,
"YAY!" if y_pred==y_true else " BOO. :-(")
print("...skipping a lot...")
print("Total Accuracy =", (n_test-mistakes)/n_test*100,"%")
###Output
x, y_pred, y_true = [ 0.06075339 0.00153272 -0.0477644 ] 6 6 YAY!
x, y_pred, y_true = [ 0.04083212 0.09757529 -0.05361896] 1 1 YAY!
x, y_pred, y_true = [-0.0199586 -0.00778773 0.00972962] 8 5 BOO. :-(
x, y_pred, y_true = [ 0.02400112 -0.07267613 0.02774141] 0 0 YAY!
x, y_pred, y_true = [0.01180771 0.03483923 0.07526469] 1 7 BOO. :-(
x, y_pred, y_true = [ 0.00379226 -0.06269449 -0.00195609] 0 0 YAY!
x, y_pred, y_true = [-0.06832135 -0.05396545 0.02980845] 9 9 YAY!
x, y_pred, y_true = [-0.02397417 -0.04914796 0.0109273 ] 5 5 YAY!
x, y_pred, y_true = [ 0.08213707 -0.01608953 -0.08072889] 6 6 YAY!
x, y_pred, y_true = [ 0.03056858 -0.04852946 0.02204423] 0 0 YAY!
x, y_pred, y_true = [-0.02124777 0.03623541 -0.01773196] 8 8 YAY!
x, y_pred, y_true = [0.03035896 0.01398381 0.01415554] 8 8 YAY!
x, y_pred, y_true = [ 0.0214849 0.02114674 -0.08951798] 1 1 YAY!
x, y_pred, y_true = [ 0.07878152 0.03312015 -0.06488347] 6 6 YAY!
x, y_pred, y_true = [-0.01294308 0.00158962 -0.01255491] 8 5 BOO. :-(
x, y_pred, y_true = [ 0.01351581 0.11000321 -0.03895516] 1 1 YAY!
x, y_pred, y_true = [0.0081306 0.01683952 0.05911389] 7 1 BOO. :-(
x, y_pred, y_true = [0.06497268 0.02817075 0.07118004] 4 4 YAY!
x, y_pred, y_true = [-0.03879657 -0.04460611 0.02833793] 9 5 BOO. :-(
x, y_pred, y_true = [-0.05975051 0.03713843 -0.07174727] 2 2 YAY!
...skipping a lot...
Total Accuracy = 76.88022284122563 %
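###Markdown
As a cross-check (and for when you'd rather not roll your own), scikit-learn's built-in k-nearest-neighbors classifier can be fit on the same PCA-projected data -- a quick sketch; the `np.real` casts are just a precaution in case the eigen-decomposition came back with a complex dtype:
###Code
from sklearn.neighbors import KNeighborsClassifier
# Fit sklearn's k-NN on the projected training data and score it on the projected test data
sk_knn = KNeighborsClassifier(n_neighbors=3)
sk_knn.fit(np.real(proj_train), Y_train)
print("scikit-learn k-NN accuracy =", sk_knn.score(np.real(proj_test), Y_test) * 100, "%")
###Output
_____no_output_____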
|
Big-Data-Clusters/CU8/Public/content/cert-management/cer033-sign-master-generated-cert.ipynb | ###Markdown
CER033 - Sign Master certificate with generated CA==================================================This notebook signs the certificate created using:- [CER023 - Create Master certificate](../cert-management/cer023-create-master-cert.ipynb)with the generated Root CA Certificate, created using either:- [CER001 - Generate a Root CA certificate](../cert-management/cer001-create-root-ca.ipynb)- [CER003 - Upload existing Root CA certificate](../cert-management/cer003-upload-existing-root-ca.ipynb)Steps----- Parameters
###Code
import getpass
app_name = "master"
scaledset_name = "master"
container_name = "mssql-server"
prefix_keyfile_name = "sql"
common_name = "master-svc"
country_name = "US"
state_or_province_name = "Illinois"
locality_name = "Chicago"
organization_name = "Contoso"
organizational_unit_name = "Finance"
email_address = f"{getpass.getuser().lower()}@contoso.com"
ssl_configuration_file = "ca.openssl.cnf"
days = "398" # the number of days to certify the certificate for
certificate_filename = "cacert.pem"
private_key_filename = "cakey.pem"
test_cert_store_root = "/var/opt/secrets/test-certificates"
###Output
_____no_output_____
###Markdown
Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("cer033-sign-master-generated-cert.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']}
error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]}
install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
###Output
_____no_output_____
###Markdown
Get the Kubernetes namespace for the big data clusterGet the namespace of the Big Data Cluster using the kubectl command line interface.**NOTE:**If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:- set \[0\] to the correct value for the big data cluster.- set the environment variable AZDATA\_NAMESPACE, before starting Azure Data Studio.
###Code
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
###Output
_____no_output_____
###Markdown
Create a temporary directory to stage files
###Code
# Create a temporary directory to hold configuration files
import tempfile
temp_dir = tempfile.mkdtemp()
print(f"Temporary directory created: {temp_dir}")
###Output
_____no_output_____
###Markdown
Helper function to save configuration files to disk
###Code
# Define helper function 'save_file' to save configuration files to the temporary directory created above
import os
import io
def save_file(filename, contents):
with io.open(os.path.join(temp_dir, filename), "w", encoding='utf8', newline='\n') as text_file:
text_file.write(contents)
print("File saved: " + os.path.join(temp_dir, filename))
print("Function `save_file` defined successfully.")
###Output
_____no_output_____
###Markdown
Get name of the ‘Running’ `controller` `pod`
###Code
# Place the name of the 'Running' controller pod in variable `controller`
controller = run(f'kubectl get pod --selector=app=controller -n {namespace} -o jsonpath={{.items[0].metadata.name}} --field-selector=status.phase=Running', return_output=True)
print(f"Controller pod name: {controller}")
###Output
_____no_output_____
###Markdown
Create Signing Request configuration file
###Code
certificate = f"""
[ ca ]
default_ca = CA_default # The default ca section
[ CA_default ]
default_days = 1000 # How long to certify for
default_crl_days = 30 # How long before next CRL
default_md = sha256 # Use public key default MD
preserve = no # Keep passed DN ordering
x509_extensions = ca_extensions # The extensions to add to the cert
email_in_dn = no # Don't concat the email in the DN
copy_extensions = copy # Required to copy SANs from CSR to cert
base_dir = {test_cert_store_root}
certificate = $base_dir/{certificate_filename} # The CA certificate
private_key = $base_dir/{private_key_filename} # The CA private key
new_certs_dir = $base_dir # Location for new certs after signing
database = $base_dir/index.txt # Database index file
serial = $base_dir/serial.txt # The current serial number
unique_subject = no # Set to 'no' to allow creation of
# several certificates with same subject.
[ req ]
default_bits = 2048
default_keyfile = {test_cert_store_root}/{private_key_filename}
distinguished_name = ca_distinguished_name
x509_extensions = ca_extensions
string_mask = utf8only
[ ca_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = {country_name}
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = {state_or_province_name}
localityName = Locality Name (eg, city)
localityName_default = {locality_name}
organizationName = Organization Name (eg, company)
organizationName_default = {organization_name}
organizationalUnitName = Organizational Unit (eg, division)
organizationalUnitName_default = {organizational_unit_name}
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = {common_name}
emailAddress = Email Address
emailAddress_default = {email_address}
[ ca_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical, CA:true
keyUsage = keyCertSign, cRLSign
[ signing_policy ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional
[ signing_req ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
"""
save_file(ssl_configuration_file, certificate)
###Output
_____no_output_____
###Markdown
Copy certificate configuration to `controller` `pod`
###Code
import os
cwd = os.getcwd()
os.chdir(temp_dir) # Use chdir to workaround kubectl bug on Windows, which incorrectly processes 'c:\' on kubectl cp cmd line
run(f'kubectl exec {controller} -c controller -n {namespace} -- bash -c "mkdir -p {test_cert_store_root}/{app_name}"')
run(f'kubectl cp {ssl_configuration_file} {controller}:{test_cert_store_root}/{app_name}/{ssl_configuration_file} -c controller -n {namespace}')
os.chdir(cwd)
###Output
_____no_output_____
###Markdown
Set next serial number
###Code
run(f'kubectl exec {controller} -n {namespace} -c controller -- bash -c "test -f {test_cert_store_root}/index.txt || touch {test_cert_store_root}/index.txt"')
run(f"""kubectl exec {controller} -n {namespace} -c controller -- bash -c "test -f {test_cert_store_root}/serial.txt || echo '00' > {test_cert_store_root}/serial.txt" """)
current_serial_number = run(f"""kubectl exec {controller} -n {namespace} -c controller -- bash -c "cat {test_cert_store_root}/serial.txt" """, return_output=True)
# The serial number is hex
new_serial_number = int(f"0x{current_serial_number}", 0) + 1
run(f"""kubectl exec {controller} -n {namespace} -c controller -- bash -c "echo '{new_serial_number:02X}' > {test_cert_store_root}/serial.txt" """)
###Output
_____no_output_____
###Markdown
Sign the certificate signing requestUse openssl ca to sign the certificate signing request and produce the signed certificate. See:- https://www.openssl.org/docs/man1.0.2/man1/ca.html
###Code
cmd = f"openssl ca -notext -batch -config {test_cert_store_root}/{app_name}/ca.openssl.cnf -policy signing_policy -extensions signing_req -out {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-certificate.pem -infiles {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-signingrequest.csr"
run(f'kubectl exec {controller} -c controller -n {namespace} -- bash -c "{cmd}"')
###Output
_____no_output_____
###Markdown
Display certificateUse openssl x509 to display the certificate, so it can be visually verified to be correct.
###Code
cmd = f"openssl x509 -in {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-certificate.pem -text -noout"
run(f'kubectl exec {controller} -c controller -n {namespace} -- bash -c "{cmd}"')
###Output
_____no_output_____
###Markdown
Clean up temporary directory for staging configuration files
###Code
# Delete the temporary directory used to hold configuration files
import shutil
shutil.rmtree(temp_dir)
print(f'Temporary directory deleted: {temp_dir}')
print('Notebook execution complete.')
###Output
_____no_output_____ |
site/en-snapshot/hub/tutorials/boundless.ipynb | ###Markdown
Copyright 2020 The TensorFlow Hub Authors.Licensed under the Apache License, Version 2.0 (the "License");
###Code
#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Boundless ColabWelcome to the Boundless model Colab! This notebook will take you through the steps of running the model on images and visualizing the results. OverviewBoundless is a model for image extrapolation. This model takes an image, internally masks a portion of it ([1/2](https://tfhub.dev/google/boundless/half/1), [1/4](https://tfhub.dev/google/boundless/quarter/1), [3/4](https://tfhub.dev/google/boundless/three_quarter/1)) and completes the masked part. For more details refer to [Boundless: Generative Adversarial Networks for Image Extension](https://arxiv.org/pdf/1908.07007.pdf) or the model documentation on TensorFlow Hub. Imports and SetupLet's start with the base imports.
###Code
import tensorflow as tf
import tensorflow_hub as hub
from io import BytesIO
from PIL import Image as PilImage
import numpy as np
from matplotlib import pyplot as plt
from six.moves.urllib.request import urlopen
###Output
_____no_output_____
###Markdown
Reading image for inputLet's create a util method to help load the image and format it for the model (257x257x3). This method also crops the image to a square to avoid distortion, and you can use it with local images or images from the internet.
###Code
def read_image(filename):
fd = None
if(filename.startswith('http')):
fd = urlopen(filename)
else:
fd = tf.io.gfile.GFile(filename, 'rb')
pil_image = PilImage.open(fd)
width, height = pil_image.size
# crop to make the image square
pil_image = pil_image.crop((0, 0, height, height))
pil_image = pil_image.resize((257,257),PilImage.ANTIALIAS)
image_unscaled = np.array(pil_image)
image_np = np.expand_dims(
image_unscaled.astype(np.float32) / 255., axis=0)
return image_np
###Output
_____no_output_____
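###Markdown
A quick sanity check of the helper (a sketch; the URL is the same sample image used later in this notebook). The returned array should be a batch of one 257x257 RGB image with float values in [0, 1].
###Code
sample = read_image("https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Nusfjord_road%2C_2010_09.jpg/800px-Nusfjord_road%2C_2010_09.jpg")
print(sample.shape, sample.dtype, sample.min(), sample.max())  # expected: (1, 257, 257, 3) float32, values in [0, 1]
###Output
_____no_output_____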
###Markdown
Visualization methodWe will also create a visualization method to show the original image side by side with the masked version and the "filled" version, both generated by the model.
###Code
def visualize_output_comparison(img_original, img_masked, img_filled):
plt.figure(figsize=(24,12))
plt.subplot(131)
plt.imshow((np.squeeze(img_original)))
plt.title("Original", fontsize=24)
plt.axis('off')
plt.subplot(132)
plt.imshow((np.squeeze(img_masked)))
plt.title("Masked", fontsize=24)
plt.axis('off')
plt.subplot(133)
plt.imshow((np.squeeze(img_filled)))
plt.title("Generated", fontsize=24)
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Loading an ImageWe will load a sample image, but feel free to upload your own image to the Colab and try it. Remember that the model has some limitations regarding human images.
###Code
wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/Nusfjord_road%2C_2010_09.jpg/800px-Nusfjord_road%2C_2010_09.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/Beech_forest_M%C3%A1tra_in_winter.jpg/640px-Beech_forest_M%C3%A1tra_in_winter.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/Marmolada_Sunset.jpg/640px-Marmolada_Sunset.jpg"
# wikimedia = "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Aegina_sunset.jpg/640px-Aegina_sunset.jpg"
input_img = read_image(wikimedia)
###Output
_____no_output_____
###Markdown
Selecting a model from TensorFlow HubOn TensorFlow Hub we have 3 versions of the Boundless model: Half, Quarter and Three Quarters.In the following cell you can choose any of them and try it on your image. If you want to try another one, just choose it and execute the following cells.
###Code
#@title Model Selection { display-mode: "form" }
model_name = 'Boundless Quarter' # @param ['Boundless Half', 'Boundless Quarter', 'Boundless Three Quarters']
model_handle_map = {
'Boundless Half' : 'https://tfhub.dev/google/boundless/half/1',
'Boundless Quarter' : 'https://tfhub.dev/google/boundless/quarter/1',
'Boundless Three Quarters' : 'https://tfhub.dev/google/boundless/three_quarter/1'
}
model_handle = model_handle_map[model_name]
###Output
_____no_output_____
###Markdown
Now that we've chosen the model we want, let's load it from TensorFlow Hub.**Note**: You can point your browser to the model handle to read the model's documentation.
###Code
print("Loading model {} ({})".format(model_name, model_handle))
model = hub.load(model_handle)
###Output
_____no_output_____
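###Markdown
Before running inference it can be useful to inspect what the loaded SavedModel exposes (a small sketch; nothing here depends on which variant was selected).
###Code
# List the available signatures and the structured outputs of the default one.
print(list(model.signatures.keys()))
print(model.signatures['default'].structured_outputs)
###Output
_____no_output_____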
###Markdown
Doing InferenceThe Boundless model has two outputs:* The input image with a mask applied* The masked image with the extrapolation to complete it.We can use these two images to show a comparison visualization.
###Code
result = model.signatures['default'](tf.constant(input_img))
generated_image = result['default']
masked_image = result['masked_image']
visualize_output_comparison(input_img, masked_image, generated_image)
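# Optional (illustrative): save the generated result as a regular image file.
# The scaling back to [0, 255] reverses the normalisation applied in read_image.
output_pil = PilImage.fromarray(np.uint8(np.clip(np.squeeze(generated_image.numpy()), 0, 1) * 255))
output_pil.save('boundless_generated.png')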
###Output
_____no_output_____ |
datos_modelar/.ipynb_checkpoints/modelos_rf_predics-checkpoint.ipynb | ###Markdown
Parameters
###Code
# Imports assumed to be defined earlier in the notebook; repeated here so this excerpt runs standalone.
import numpy as np
import pandas as pd
import pickle
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.metrics import make_scorer, median_absolute_error
# Number of trees in the random forest
n_estimators = [int(x) for x in np.linspace(start = 900, stop = 1100, num = 100)]
# Number of features to consider at each split
max_features = ['auto', 'sqrt']
# Maximum number of levels in the tree
max_depth = [None, 1, 2, 3]
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf
min_samples_leaf = [1, 2, 4]
# Sample selection method used to train each tree (with or without bootstrap)
bootstrap = [True, False]
param_grid = {'n_estimators': n_estimators,
'max_features': ['auto'],
'max_depth': [2,3],
'min_samples_split': [10,15],
'min_samples_leaf': [2],
'bootstrap': [True]}
# greater_is_better=False: median_absolute_error is an error to minimise, not a score to maximise;
# without it GridSearchCV would select the hyperparameters with the *largest* error.
scorer = make_scorer(median_absolute_error, greater_is_better=False)
###Output
_____no_output_____
###Markdown
RandomForest
###Code
rf = GridSearchCV(RandomForestRegressor(criterion="mse"),
param_grid,
cv = 5,
scoring = scorer,
n_jobs = -1,
verbose = 1)
rf.fit(X_train, y_train)
pred_rf = np.exp(rf.predict(X_test)) - 1
pred_rf = pred_rf.reshape(-1, 1)
df = pd.DataFrame(np.exp(y_test)-1 , index = y_test.index)
df.columns = ['TARGET']
df['pred_rf'] = pred_rf
median_absolute_error(df['TARGET'],df['pred_rf'])
filename = 'random_forest_gridsearchcv1.sav'
pickle.dump(rf.best_estimator_, open(filename, 'wb'))
###Output
_____no_output_____
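###Markdown
A small sketch of how the persisted estimator can be reloaded and re-evaluated later (the file name comes from the cell above; X_test and y_test are the same held-out split used earlier).
###Code
# Reload the saved best estimator and confirm it reproduces the held-out error.
loaded_rf = pickle.load(open('random_forest_gridsearchcv1.sav', 'rb'))
print(rf.best_params_)
pred_loaded = np.exp(loaded_rf.predict(X_test)) - 1
print(median_absolute_error(np.exp(y_test) - 1, pred_loaded))
###Output
_____no_output_____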
###Markdown
ExtraTreesRegressor
###Code
etr = GridSearchCV(ExtraTreesRegressor(criterion="mse"),
param_grid,
cv = 5,
scoring = scorer,
n_jobs = -1,
verbose = 1)
etr.fit(X_train, y_train)
pred_etr = np.exp(etr.predict(X_test)) - 1
pred_etr = pred_etr.reshape(-1, 1)
df['pred_etr'] = pred_etr
median_absolute_error(df['TARGET'],df['pred_etr'])
filename = 'extra_trees_gridsearchcv1.sav'
pickle.dump(etr.best_estimator_, open(filename, 'wb'))
###Output
_____no_output_____ |
Chapter03/Facets.ipynb | ###Markdown
FacetsDenis Rothman, 2020 Adapted from Notebook Reference:https://github.com/PAIR-code/facets/blob/master/colab_facets.ipynb Installing Facets
###Code
#@title Install the facets-overview pip package.
!pip install facets-overview
#@title Importing data <br> Set repository to "github"(default) to read the data from GitHub <br> Set repository to "google" to read the data from Google {display-mode: "form"}
import os
from google.colab import drive
#Set repository to "github" to read the data from GitHub
#Set repository to "google" to read the data from Google
repository="github"
if repository=="github":
!curl -L https://raw.githubusercontent.com/PacktPublishing/Hands-On-Explainable-AI-XAI-with-Python/master/Chapter03/DLH_train.csv --output "DLH_train.csv"
!curl -L https://raw.githubusercontent.com/PacktPublishing/Hands-On-Explainable-AI-XAI-with-Python/master/Chapter03/DLH_test.csv --output "DLH_test.csv"
#Setting the path for each file
dtrain="/content/DLH_train.csv"
dtest="/content/DLH_test.csv"
print(dtrain,dtest)
if repository=="google":
#Mounting the drive. If it is not mounted, a prompt will provide instructions.
drive.mount('/content/drive')
#Setting the path for each file
dtrain='/content/drive/My Drive/XAI/Chapter03/DLH_Train.csv'
  dtest='/content/drive/My Drive/XAI/Chapter03/DLH_Test.csv'  # was pointing at the training file; adjust the name if your Drive copy differs
print(dtrain,dtest)
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4922 100 4922 0 0 25502 0 --:--:-- --:--:-- --:--:-- 25502
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5418 100 5418 0 0 24853 0 --:--:-- --:--:-- --:--:-- 24853
/content/DLH_train.csv /content/DLH_test.csv
###Markdown
Facets Overview Loading the training and testing data
###Code
# Loading Denis Rothman research training and testing data into DataFrames.
import pandas as pd
features = ["colored_sputum","cough","fever","headache","days","france","chicago","class"]
train_data = pd.read_csv(
dtrain,
names=features,
sep=r'\s*,\s*',
engine='python',
na_values="?")
test_data = pd.read_csv(
dtest,
names=features,
sep=r'\s*,\s*',
skiprows=[0],
engine='python',
na_values="?")
###Output
_____no_output_____
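###Markdown
A quick look at what was loaded before generating statistics (a sketch; it only confirms row counts and column names).
###Code
# Basic sanity check on the two DataFrames.
print(train_data.shape, test_data.shape)
print(train_data.columns.tolist())
train_data.head()
###Output
_____no_output_____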
###Markdown
Creating feature statistics for the datasets
###Code
# Create the feature stats for the datasets and stringify it.
import base64
from facets_overview.generic_feature_statistics_generator import GenericFeatureStatisticsGenerator
gfsg = GenericFeatureStatisticsGenerator()
proto = gfsg.ProtoFromDataFrames([{'name': 'train', 'table': train_data},
{'name': 'test', 'table': test_data}])
protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")
print(protostr)
###Output
CqQ0CgV0cmFpbhC4ARqiBwoOY29sb3JlZF9zcHV0dW0QARqNBwqzAgi4ARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAIAERKh/G/bokA0AZv/vrHQiDAUAgDDEAAAAAAADwPzkzMzMzMzMbQEKQAhoSEcL1KFyPwuU/IQAAAAAAAEFAGhsJwvUoXI/C5T8RwvUoXI/C9T8hAAAAAACATUAaGwnC9Shcj8L1PxFSuB6F61EAQCEAAAAAAAA/QBoSCVK4HoXrUQBAEcL1KFyPwgVAGhsJwvUoXI/CBUARMjMzMzMzC0AhAAAAAAAAJkAaGwkyMzMzMzMLQBFSuB6F61EQQCEAAAAAAAAAQBobCVK4HoXrURBAEQrXo3A9ChNAIQAAAAAAAABAGhsJCtejcD0KE0ARwvUoXI/CFUAhAAAAAAAAFEAaGwnC9Shcj8IVQBF6FK5H4XoYQCEAAAAAAAA6QBobCXoUrkfhehhAETMzMzMzMxtAIQAAAAAAACxAQpsCGhIRmpmZmZmZyT8hZmZmZmZmMkAaGwmamZmZmZnJPxFmZmZmZmbmPyFmZmZmZmYyQBobCWZmZmZmZuY/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAABAIWZmZmZmZjJAGhsJAAAAAAAAAEARAAAAAAAACEAhZmZmZmZmMkAaGwkAAAAAAAAIQBEAAAAAAAAWQCFmZmZmZmYyQBobCQAAAAAAABZAEQAAAAAAABhAIWZmZmZmZjJAGhsJAAAAAAAAGEARMzMzMzMzG0AhZmZmZmZmMkAgARq7BwoFY291Z2gQARqvBwqzAgi4ARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAIAERyEIWspCFEkAZFaWrays1A0ApAAAAAAAA8D8xZmZmZmZmEUA5ZmZmZmZmI0BCogIaGwkAAAAAAADwPxHrUbgehev9PyEAAAAAAAA8QBobCetRuB6F6/0/EetRuB6F6wVAIQAAAAAAADFAGhsJ61G4HoXrBUAR4HoUrkfhDEAhAAAAAAAARUAaGwngehSuR+EMQBHrUbgehesRQCEAAAAAAAAUQBobCetRuB6F6xFAEWZmZmZmZhVAIQAAAAAAACJAGhsJZmZmZmZmFUAR4HoUrkfhGEAhAAAAAAAAOkAaGwngehSuR+EYQBFbj8L1KFwcQCEAAAAAAAA4QBobCVuPwvUoXBxAEdajcD0K1x9AIQAAAAAAADJAGhsJ1qNwPQrXH0ARKFyPwvWoIUAhAAAAAAAAKEAaGwkoXI/C9aghQBFmZmZmZmYjQCEAAAAAAAAIQEKkAhobCQAAAAAAAPA/EQAAAAAAAPg/IWZmZmZmZjJAGhsJAAAAAAAA+D8RmpmZmZmZAUAhZmZmZmZmMkAaGwmamZmZmZkBQBEAAAAAAAAIQCFmZmZmZmYyQBobCQAAAAAAAAhAEWZmZmZmZgpAIWZmZmZmZjJAGhsJZmZmZmZmCkARZmZmZmZmEUAhZmZmZmZmMkAaGwlmZmZmZmYRQBEAAAAAAAAYQCFmZmZmZmYyQBobCQAAAAAAABhAETMzMzMzMxlAIWZmZmZmZjJAGhsJMzMzMzMzGUARAAAAAAAAHEAhZmZmZmZmMkAaGwkAAAAAAAAcQBHMzMzMzMweQCFmZmZmZmYyQBobCczMzMzMzB5AEWZmZmZmZiNAIWZmZmZmZjJAIAEasgcKBWZldmVyEAEapgcKswIIuAEYASABLQAAgD8ypAIaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQCABEUsqH8b9uhtAGQf/iNLTawJAKZqZmZmZmfE/MQAAAAAAACBAOc3MzMzMzCNAQpkCGhsJmpmZmZmZ8T8Rr0fhehSu/z8hAAAAAAAAAEAaGwmvR+F6FK7/PxHiehSuR+EGQCEAAAAAAAAoQBobCeJ6FK5H4QZAEe1RuB6F6w1AIQAAAAAAADZAGhsJ7VG4HoXrDUARfBSuR+F6EkAhAAAAAAAAJEAaGwl8FK5H4XoSQBEAAAAAAAAWQCEAAAAAAAAQQBoSCQAAAAAAABZAEYbrUbgehRlAGhsJhutRuB6FGUARDNejcD0KHUAhAAAAAAAAGEAaGwkM16NwPQodQBFI4XoUrkcgQCEAAAAAAIBUQBobCUjhehSuRyBAEQvXo3A9CiJAIQAAAAAAADlAGhsJC9ejcD0KIkARzczMzMzMI0AhAAAAAAAANUBCpAIaGwmamZmZmZnxPxHNzMzMzMwIQCFmZmZmZmYyQBobCc3MzMzMzAhAEWZmZmZmZg5AIWZmZmZmZjJAGhsJZmZmZmZmDkARXI/C9ShcHEAhZmZmZmZmMkAaGwlcj8L1KFwcQBEAAAAAAAA
gQCFmZmZmZmYyQBobCQAAAAAAACBAEQAAAAAAACBAIWZmZmZmZjJAGhsJAAAAAAAAIEARAAAAAAAAIEAhZmZmZmZmMkAaGwkAAAAAAAAgQBEAAAAAAAAgQCFmZmZmZmYyQBobCQAAAAAAACBAEQAAAAAAACFAIWZmZmZmZjJAGhsJAAAAAAAAIUARZmZmZmZmIkAhZmZmZmZmMkAaGwlmZmZmZmYiQBHNzMzMzMwjQCFmZmZmZmYyQCABGr4HCghoZWFkYWNoZRABGq8HCrMCCLgBGAEgAS0AAIA/MqQCGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAgARFOb3rTm94TQBkv9ULICC8KQCkAAAAAAADwPzEAAAAAAAAQQDlmZmZmZmYjQEKiAhobCQAAAAAAAPA/EetRuB6F6/0/IQAAAAAAgEtAGhsJ61G4HoXr/T8R61G4HoXrBUAhAAAAAAAAOUAaGwnrUbgehesFQBHgehSuR+EMQCEAAAAAAAAcQBobCeB6FK5H4QxAEetRuB6F6xFAIQAAAAAAACBAGhsJ61G4HoXrEUARZmZmZmZmFUAhAAAAAAAAHEAaGwlmZmZmZmYVQBHgehSuR+EYQCEAAAAAAAAUQBobCeB6FK5H4RhAEVuPwvUoXBxAIQAAAAAAACZAGhsJW4/C9ShcHEAR1qNwPQrXH0AhAAAAAAAAHEAaGwnWo3A9CtcfQBEoXI/C9aghQCEAAAAAAAAAQBobCShcj8L1qCFAEWZmZmZmZiNAIQAAAAAAgExAQqQCGhsJAAAAAAAA8D8RzczMzMzM9D8hZmZmZmZmMkAaGwnNzMzMzMz0PxEAAAAAAAD4PyFmZmZmZmYyQBobCQAAAAAAAPg/Ea1H4XoUrv8/IWZmZmZmZjJAGhsJrUfhehSu/z8RXI/C9ShcA0AhZmZmZmZmMkAaGwlcj8L1KFwDQBEAAAAAAAAQQCFmZmZmZmYyQBobCQAAAAAAABBAEdajcD0K1xtAIWZmZmZmZjJAGhsJ1qNwPQrXG0ARAAAAAAAAIkAhZmZmZmZmMkAaGwkAAAAAAAAiQBEAAAAAAAAiQCFmZmZmZmYyQBobCQAAAAAAACJAEQAAAAAAACJAIWZmZmZmZjJAGhsJAAAAAAAAIkARZmZmZmZmI0AhZmZmZmZmMkAgARqUBwoEZGF5cxqLBwqzAgi4ARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAIAERTm9605veBUAZVwLs1eM69T8pAAAAAAAA8D8xAAAAAAAAAEA5AAAAAAAAGEBC/gEaGwkAAAAAAADwPxEAAAAAAAD4PyEAAAAAAAA6QBoSCQAAAAAAAPg/EQAAAAAAAABAGhsJAAAAAAAAAEARAAAAAAAABEAhAAAAAABAUkAaEgkAAAAAAAAEQBEAAAAAAAAIQBobCQAAAAAAAAhAEQAAAAAAAAxAIQAAAAAAAEZAGhIJAAAAAAAADEARAAAAAAAAEEAaGwkAAAAAAAAQQBEAAAAAAAASQCEAAAAAAAAqQBoSCQAAAAAAABJAEQAAAAAAABRAGhsJAAAAAAAAFEARAAAAAAAAFkAhAAAAAAAANUAaGwkAAAAAAAAWQBEAAAAAAAAYQCEAAAAAAAAcQEKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAAAEAhZmZmZmZmMkAaGwkAAAAAAAAAQBEAAAAAAAAAQCFmZmZmZmYyQBobCQAAAAAAAABAEQAAAAAAAABAIWZmZmZmZjJAGhsJAAAAAAAAAEARAAAAAAAAAEAhZmZmZmZmMkAaGwkAAAAAAAAAQBEAAAAAAAAIQCFmZmZmZmYyQBobCQAAAAAAAAhAEQAAAAAAAAhAIWZmZmZmZjJAGhsJAAAAAAAACEARAAAAAAAAEEAhZmZmZmZmMkAaGwkAAAAAAAAQQBEAAAAAAAAUQCFmZmZmZmYyQBobCQAAAAAAABRAEQAAAAAAABhAIWZmZmZmZjJAIAEa+AQKBmZyYW5jZRrtBAqzAgi4ARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAIAEguAFCvwEaEgkAAAAAAADgvxGamZmZmZnZvxoSCZqZmZmZmdm/ETMzMzMzM9O/GhIJMzMzMzMz078RmJmZmZmZyb8aEgmYmZmZmZnJvxGYmZmZmZm5vxoJCZiZmZmZmbm/GhIRoJmZmZmZuT8hAAAAAAAAZ0AaEgmgmZmZmZm5PxGcmZmZmZnJPxoSCZyZmZmZmck/ETQzMzMzM9M/GhIJNDMzMzMz0z8RmpmZmZmZ2T8aEgmamZmZmZnZPxEAAAAAAADgP0JwGgkhZmZmZmZmMkAaCSFmZmZmZmYyQBoJIWZmZmZmZjJAGgkhZmZmZmZmMkAaCSFmZmZmZmYyQBoJIWZmZmZmZjJAGgkhZmZmZmZmMk
AaCSFmZmZmZmYyQBoJIWZmZmZmZjJAGgkhZmZmZmZmMkAgARrhBgoHY2hpY2FnbxrVBgqzAgi4ARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAIAERAAAAAAAA8D8pAAAAAAAA8D8xAAAAAAAA8D85AAAAAAAA8D9C0QEaEgkAAAAAAADgPxEzMzMzMzPjPxoSCTMzMzMzM+M/EWZmZmZmZuY/GhIJZmZmZmZm5j8RmpmZmZmZ6T8aEgmamZmZmZnpPxHNzMzMzMzsPxoSCc3MzMzMzOw/EQAAAAAAAPA/GhsJAAAAAAAA8D8RmpmZmZmZ8T8hAAAAAAAAZ0AaEgmamZmZmZnxPxE0MzMzMzPzPxoSCTQzMzMzM/M/Ec3MzMzMzPQ/GhIJzczMzMzM9D8RZmZmZmZm9j8aEglmZmZmZmb2PxEAAAAAAAD4P0KkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAIAEayAMKBWNsYXNzEAIivAMKswIIuAEYASABLQAAgD8ypAIaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZmZjJAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmZmMkAaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZmYyQCABEAQaFBIJcG5ldW1vbmlhGQAAAAAAAElAGg4SA2ZsdRkAAAAAAABJQCWRhbRAKlcKFCIJcG5ldW1vbmlhKQAAAAAAAElAChIIARABIgNmbHUpAAAAAAAASUAKEwgCEAIiBGNvbGQpAAAAAAAASUAKFggDEAMiB2JhZF9mbHUpAAAAAAAAQUAKpjQKBHRlc3QQxwEapAcKDmNvbG9yZWRfc3B1dHVtEAEajwcKtQIIxQEQAhgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/ITMzMzMzszNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hMzMzMzOzM0AaGwkAAAAAAADwPxEAAAAAAADwPyEzMzMzM7MzQBobCQAAAAAAAPA/EQAAAAAAAPA/ITMzMzMzszNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hMzMzMzOzM0AaGwkAAAAAAADwPxEAAAAAAADwPyEzMzMzM7MzQBobCQAAAAAAAPA/EQAAAAAAAPA/ITMzMzMzszNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hMzMzMzOzM0AaGwkAAAAAAADwPxEAAAAAAADwPyEzMzMzM7MzQBobCQAAAAAAAPA/EQAAAAAAAPA/ITMzMzMzszNAIAERL8P51yn3AUAZEWIu6uN+AUAgEDEAAAAAAADwPzkzMzMzMzMbQEKQAhoSEcL1KFyPwuU/IQAAAAAAgEdAGhsJwvUoXI/C5T8RwvUoXI/C9T8hAAAAAACATUAaGwnC9Shcj8L1PxFSuB6F61EAQCEAAAAAAAA/QBoSCVK4HoXrUQBAEcL1KFyPwgVAGhsJwvUoXI/CBUARMjMzMzMzC0AhAAAAAAAAJkAaGwkyMzMzMzMLQBFSuB6F61EQQCEAAAAAAAAAQBobCVK4HoXrURBAEQrXo3A9ChNAIQAAAAAAAABAGhsJCtejcD0KE0ARwvUoXI/CFUAhAAAAAAAAFEAaGwnC9Shcj8IVQBF6FK5H4XoYQCEAAAAAAAA6QBobCXoUrkfhehhAETMzMzMzMxtAIQAAAAAAACxAQpsCGhIRmpmZmZmZuT8hMzMzMzOzM0AaGwmamZmZmZm5PxEAAAAAAADgPyEzMzMzM7MzQBobCQAAAAAAAOA/EZqZmZmZmek/ITMzMzMzszNAGhsJmpmZmZmZ6T8RAAAAAAAA8D8hMzMzMzOzM0AaGwkAAAAAAADwPxEAAAAAAADwPyEzMzMzM7MzQBobCQAAAAAAAPA/EQAAAAAAAABAITMzMzMzszNAGhsJAAAAAAAAAEARAAAAAAAACEAhMzMzMzOzM0AaGwkAAAAAAAAIQBHtUbgehesVQCEzMzMzM7MzQBobCe1RuB6F6xVAEQAAAAAAABhAITMzMzMzszNAGhsJAAAAAAAAGEARMzMzMzMzG0AhMzMzMzOzM0AgARq7BwoFY291Z2gQARqvBwqzAgjHARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAIAERGKutvz2wEUAZEhmZqYlSA0ApAAAAAAAA8D8xAAAAA
AAADEA5ZmZmZmZmI0BCogIaGwkAAAAAAADwPxHrUbgehev9PyEAAAAAAABCQBobCetRuB6F6/0/EetRuB6F6wVAIQAAAAAAADNAGhsJ61G4HoXrBUAR4HoUrkfhDEAhAAAAAACAR0AaGwngehSuR+EMQBHrUbgehesRQCEAAAAAAAAUQBobCetRuB6F6xFAEWZmZmZmZhVAIQAAAAAAACJAGhsJZmZmZmZmFUAR4HoUrkfhGEAhAAAAAAAAPEAaGwngehSuR+EYQBFbj8L1KFwcQCEAAAAAAAA2QBobCVuPwvUoXBxAEdajcD0K1x9AIQAAAAAAADJAGhsJ1qNwPQrXH0ARKFyPwvWoIUAhAAAAAAAAKEAaGwkoXI/C9aghQBFmZmZmZmYjQCEAAAAAAAAIQEKkAhobCQAAAAAAAPA/ETMzMzMzM/M/IWZmZmZm5jNAGhsJMzMzMzMz8z8RAAAAAAAAAEAhZmZmZmbmM0AaGwkAAAAAAAAAQBFmZmZmZmYGQCFmZmZmZuYzQBobCWZmZmZmZgZAEQAAAAAAAAhAIWZmZmZm5jNAGhsJAAAAAAAACEARAAAAAAAADEAhZmZmZmbmM0AaGwkAAAAAAAAMQBGjcD0K16MWQCFmZmZmZuYzQBobCaNwPQrXoxZAEc3MzMzMzBhAIWZmZmZm5jNAGhsJzczMzMzMGEARMjMzMzMzG0AhZmZmZmbmM0AaGwkyMzMzMzMbQBHNzMzMzMweQCFmZmZmZuYzQBobCc3MzMzMzB5AEWZmZmZmZiNAIWZmZmZm5jNAIAEasgcKBWZldmVyEAEapgcKswIIxwEYASABLQAAgD8ypAIaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQCABEdcUxZZSdRpAGeJZOiBJewNAKZqZmZmZmfE/MQAAAAAAACBAOc3MzMzMzCNAQpkCGhsJmpmZmZmZ8T8Rr0fhehSu/z8hAAAAAAAAAEAaGwmvR+F6FK7/PxHiehSuR+EGQCEAAAAAAAAzQBobCeJ6FK5H4QZAEe1RuB6F6w1AIQAAAAAAADtAGhsJ7VG4HoXrDUARfBSuR+F6EkAhAAAAAAAALEAaGwl8FK5H4XoSQBEAAAAAAAAWQCEAAAAAAAAQQBoSCQAAAAAAABZAEYbrUbgehRlAGhsJhutRuB6FGUARDNejcD0KHUAhAAAAAAAAGEAaGwkM16NwPQodQBFI4XoUrkcgQCEAAAAAAMBUQBobCUjhehSuRyBAEQvXo3A9CiJAIQAAAAAAADhAGhsJC9ejcD0KIkARzczMzMzMI0AhAAAAAAAANEBCpAIaGwmamZmZmZnxPxFmZmZmZmYGQCFmZmZmZuYzQBobCWZmZmZmZgZAEczMzMzMzAxAIWZmZmZm5jNAGhsJzMzMzMzMDEARKVyPwvUoEkAhZmZmZmbmM0AaGwkpXI/C9SgSQBF7FK5H4XoeQCFmZmZmZuYzQBobCXsUrkfheh5AEQAAAAAAACBAIWZmZmZm5jNAGhsJAAAAAAAAIEARAAAAAAAAIEAhZmZmZmbmM0AaGwkAAAAAAAAgQBEAAAAAAAAgQCFmZmZmZuYzQBobCQAAAAAAACBAEc3MzMzMzCBAIWZmZmZm5jNAGhsJzczMzMzMIEARC9ejcD0KIkAhZmZmZmbmM0AaGwkL16NwPQoiQBHNzMzMzMwjQCFmZmZmZuYzQCABGr4HCghoZWFkYWNoZRABGq8HCrMCCMcBGAEgAS0AAIA/MqQCGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AgARF+ciHqZtASQBlFeD5TTKAJQCkAAAAAAADwPzEAAAAAAAAMQDlmZmZmZmYjQEKiAhobCQAAAAAAAPA/EetRuB6F6/0/IQAAAAAAgE5AGhsJ61G4HoXr/T8R61G4HoXrBUAhAAAAAACAQUAaGwnrUbgehesFQBHgehSuR+EMQCEAAAAAAAAYQBobCeB6FK5H4QxAEetRuB6F6xFAIQAAAAAAACBAGhsJ61G4HoXrEUARZmZmZmZmFUAhAAAAAAAAHEAaGwlmZmZmZmYVQBHgehSuR+EYQCEAAAAAAAAYQBobCeB6FK5H4RhAEVuPwvUoXBxAIQAAAAAAACpAGhsJW4/C9ShcHEAR1qNwPQrXH0AhAAAAAAAAHEAaGwnWo3A9CtcfQBEoXI/C9aghQCEAAAAAAAAIQBobCShcj8L1qCFAEWZmZmZmZiNAIQAAAAAAgEpAQqQCGhsJAAAAAAAA8D8RzczMzMzM9D8hZmZmZmbmM0AaGwnNzMzMzMz0PxGamZmZmZn5PyFmZmZmZuYzQBobCZqZmZmZmfk/EczMzMzMzPw/IWZmZmZm5jNAGhsJzMzMzMzM/D8RZmZmZmZmAkAhZmZmZmbmM0AaGwlmZmZmZmYCQBEAAAAAAAAMQCFmZmZmZuYzQBobCQAAAAAAAAxAEfUoXI/C9RZAIWZmZmZm5jNAGhsJ9Shcj8L1FkARAAAAAAAAHkAhZmZmZmbmM0AaGwkAAAAAAAAeQBEAAAAAAAAiQCFmZmZmZuYzQBobCQAAAAAAACJAEQAAAAAAACJAIWZmZmZm5jNAGhsJAAAAAAAAIkARZmZmZmZmI0AhZmZmZmbmM0AgARqUBwoEZGF5cxqLBwqzAgjHARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA
8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAIAERLDJfmjiMBUAZ+4UzvPis9D8pAAAAAAAA8D8xAAAAAAAAAEA5AAAAAAAAGEBC/gEaGwkAAAAAAADwPxEAAAAAAAD4PyEAAAAAAAA7QBoSCQAAAAAAAPg/EQAAAAAAAABAGhsJAAAAAAAAAEARAAAAAAAABEAhAAAAAADAVEAaEgkAAAAAAAAEQBEAAAAAAAAIQBobCQAAAAAAAAhAEQAAAAAAAAxAIQAAAAAAAEhAGhIJAAAAAAAADEARAAAAAAAAEEAaGwkAAAAAAAAQQBEAAAAAAAASQCEAAAAAAAAqQBoSCQAAAAAAABJAEQAAAAAAABRAGhsJAAAAAAAAFEARAAAAAAAAFkAhAAAAAAAANUAaGwkAAAAAAAAWQBEAAAAAAAAYQCEAAAAAAAAcQEKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAAAEAhZmZmZmbmM0AaGwkAAAAAAAAAQBEAAAAAAAAAQCFmZmZmZuYzQBobCQAAAAAAAABAEQAAAAAAAABAIWZmZmZm5jNAGhsJAAAAAAAAAEARAAAAAAAAAEAhZmZmZmbmM0AaGwkAAAAAAAAAQBEAAAAAAAAIQCFmZmZmZuYzQBobCQAAAAAAAAhAEQAAAAAAAAhAIWZmZmZm5jNAGhsJAAAAAAAACEARAAAAAAAAEEAhZmZmZmbmM0AaGwkAAAAAAAAQQBEAAAAAAAAUQCFmZmZmZuYzQBobCQAAAAAAABRAEQAAAAAAABhAIWZmZmZm5jNAIAEa+AQKBmZyYW5jZRrtBAqzAgjHARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAIAEgxwFCvwEaEgkAAAAAAADgvxGamZmZmZnZvxoSCZqZmZmZmdm/ETMzMzMzM9O/GhIJMzMzMzMz078RmJmZmZmZyb8aEgmYmZmZmZnJvxGYmZmZmZm5vxoJCZiZmZmZmbm/GhIRoJmZmZmZuT8hAAAAAADgaEAaEgmgmZmZmZm5PxGcmZmZmZnJPxoSCZyZmZmZmck/ETQzMzMzM9M/GhIJNDMzMzMz0z8RmpmZmZmZ2T8aEgmamZmZmZnZPxEAAAAAAADgP0JwGgkhZmZmZmbmM0AaCSFmZmZmZuYzQBoJIWZmZmZm5jNAGgkhZmZmZmbmM0AaCSFmZmZmZuYzQBoJIWZmZmZm5jNAGgkhZmZmZmbmM0AaCSFmZmZmZuYzQBoJIWZmZmZm5jNAGgkhZmZmZmbmM0AgARrhBgoHY2hpY2FnbxrVBgqzAgjHARgBIAEtAACAPzKkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAIAERAAAAAAAA8D8pAAAAAAAA8D8xAAAAAAAA8D85AAAAAAAA8D9C0QEaEgkAAAAAAADgPxEzMzMzMzPjPxoSCTMzMzMzM+M/EWZmZmZmZuY/GhIJZmZmZmZm5j8RmpmZmZmZ6T8aEgmamZmZmZnpPxHNzMzMzMzsPxoSCc3MzMzMzOw/EQAAAAAAAPA/GhsJAAAAAAAA8D8RmpmZmZmZ8T8hAAAAAADgaEAaEgmamZmZmZnxPxE0MzMzMzPzPxoSCTQzMzMzM/M/Ec3MzMzMzPQ/GhIJzczMzMzM9D8RZmZmZmZm9j8aEglmZmZmZmb2PxEAAAAAAAD4P0KkAhobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAIAEayQMKBWNsYXNzEAIivQMKswIIxwEYASABLQAAgD8ypAIaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQBobCQAAAAAAAPA/EQAAAAAAAPA/IWZmZmZm5jNAGhsJAAAAAAAA8D8RAAAAAAAA8D8hZmZmZmbmM0AaGwkAAAAAAADwPxEAAAAAAADwPyFmZmZmZuYzQCABEAQaFBIJcG5ldW1vbmlhGQAAAAAAAE1AGg8SBGNvbGQZAAAAAAAATUAl3xW5QCpXChQiCXBuZXVtb25pYSkAAAAAAABNQAoTCAE
QASIEY29sZCkAAAAAAABNQAoSCAIQAiIDZmx1KQAAAAAAAEdAChYIAxADIgdiYWRfZmx1KQAAAAAAgEJA
###Markdown
Create HTML page for Facets Overview
###Code
# Display the Facets Overview visualization for this data
from IPython.core.display import display, HTML
HTML_TEMPLATE = """
<script src="https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js"></script>
<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/1.0.0/facets-dist/facets-jupyter.html" >
<facets-overview id="elem"></facets-overview>
<script>
document.querySelector("#elem").protoInput = "{protostr}";
</script>"""
html = HTML_TEMPLATE.format(protostr=protostr)
display(HTML(html))
#@title Relative entropy or Kullback-Leibler divergence example {display-mode: "form"}
from scipy.stats import entropy
# scipy.stats.entropy normalizes X and Y so that each sums to 1, then computes the
# Kullback-Leibler divergence KL(X || Y) = sum(p_x * log(p_x / p_y)),
# i.e. how much the distribution of X diverges from the reference distribution Y.
X=[10,1,1,20,1,10,4]
Y=[1,2,3,4,2,2,5]
entropy(X,Y)
###Output
_____no_output_____
###Markdown
Facets Dive
###Code
#@title Python to_json example {display-mode: "form"}
from IPython.core.display import display, HTML
jsonstr=train_data.to_json(orient='records')
jsonstr
# Display the Dive visualization for the training data.
from IPython.core.display import display, HTML
HTML_TEMPLATE = """
<script src="https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js"></script>
<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/1.0.0/facets-dist/facets-jupyter.html">
<facets-dive id="elem" height="600"></facets-dive>
<script>
var data = {jsonstr};
document.querySelector("#elem").data = data;
</script>"""
html = HTML_TEMPLATE.format(jsonstr=jsonstr)
display(HTML(html))
###Output
_____no_output_____ |
Case Study 2/Logistic Regression (LR)/Logistic Regression.ipynb | ###Markdown
**Logistic Regression (LR)**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from google.colab import files
uploaded = files.upload()
df=pd.read_excel('HR_DataSet.xlsx')
df.describe().transpose()
df.columns
f, axes = plt.subplots(5, 2, figsize=(12, 12))
sns.distplot(df['Late show up percentage'] , color="red", ax=axes[0, 0])
sns.distplot(df['Project initiative percentage'] , color="olive", ax=axes[0, 1])
sns.distplot(df['Percentage of project delivery on time'] , color="blue", ax=axes[1, 0])
sns.distplot(df['Percentage of emails exchanged'] , color="orange", ax=axes[1, 1])
sns.distplot(df['Percentage of responsiveness'] , color="black", ax=axes[2, 0])
sns.distplot(df['Percentage of professional email response'] , color="green", ax=axes[2, 1])
sns.distplot(df['Percentage of sharing ideas'] , color="cyan", ax=axes[3, 0])
sns.distplot(df['Percentage of helping colleagues'] , color="brown", ax=axes[3, 1])
sns.distplot(df['Percentage of entrepreneurial posts on LinkedIn'] , color="purple", ax=axes[4, 0])
sns.distplot(df['Percentage of Facebook comments'] , color="pink", ax=axes[4, 1])
plt.tight_layout()
f, axes = plt.subplots(5, 2, figsize=(12, 12))
sns.boxplot(df['Late show up percentage'] , color="red", ax=axes[0, 0])
sns.boxplot(df['Project initiative percentage'] , color="olive", ax=axes[0, 1])
sns.boxplot(df['Percentage of project delivery on time'] , color="blue", ax=axes[1, 0])
sns.boxplot(df['Percentage of emails exchanged'] , color="orange", ax=axes[1, 1])
sns.boxplot(df['Percentage of responsiveness'] , color="black", ax=axes[2, 0])
sns.boxplot(df['Percentage of professional email response'] , color="green", ax=axes[2, 1])
sns.boxplot(df['Percentage of sharing ideas'] , color="cyan", ax=axes[3, 0])
sns.boxplot(df['Percentage of helping colleagues'] , color="brown", ax=axes[3, 1])
sns.boxplot(df['Percentage of entrepreneurial posts on LinkedIn'] , color="purple", ax=axes[4, 0])
sns.boxplot(df['Percentage of Facebook comments'] , color="pink", ax=axes[4, 1])
plt.tight_layout()
f, axes = plt.subplots(5, 2, figsize=(12, 12))
sns.violinplot(df['Late show up percentage'] , color="red", ax=axes[0, 0])
sns.violinplot(df['Project initiative percentage'] , color="olive", ax=axes[0, 1])
sns.violinplot(df['Percentage of project delivery on time'] , color="blue", ax=axes[1, 0])
sns.violinplot(df['Percentage of emails exchanged'] , color="orange", ax=axes[1, 1])
sns.violinplot(df['Percentage of responsiveness'] , color="black", ax=axes[2, 0])
sns.violinplot(df['Percentage of professional email response'] , color="green", ax=axes[2, 1])
sns.violinplot(df['Percentage of sharing ideas'] , color="cyan", ax=axes[3, 0])
sns.violinplot(df['Percentage of helping colleagues'] , color="brown", ax=axes[3, 1])
sns.violinplot(df['Percentage of entrepreneurial posts on LinkedIn'] , color="purple", ax=axes[4, 0])
sns.violinplot(df['Percentage of Facebook comments'] , color="pink", ax=axes[4, 1])
plt.tight_layout()
plt.figure(figsize=(12,10))
sns.heatmap(df.corr(), annot=True, linecolor='white',linewidths=2, cmap= 'Accent')
x_features=df.drop('Quitting',axis=1)
x_features
y=df['Quitting']
y
###Output
_____no_output_____
###Markdown
**Train_Test_Split**
###Code
from sklearn.model_selection import train_test_split
seed= 50
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(x_features,y, test_size=0.30)
len(X_train)
###Output
_____no_output_____
###Markdown
**Applying Logistic regression model**
###Code
from sklearn.linear_model import LogisticRegression
np.random.seed(seed)
lr = LogisticRegression(penalty='l2', C=1.0,solver='lbfgs')
lr.fit(X_train,y_train)
y_pred = lr.predict(X_test)
len(y_pred)
###Output
_____no_output_____
###Markdown
**Prediction and evaluation**
###Code
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,y_pred))
sns.heatmap(confusion_matrix(y_test,y_pred), annot=True, cmap='viridis')
print(classification_report(y_test,y_pred))
###Output
precision recall f1-score support
0 0.97 0.94 0.96 152
1 0.94 0.97 0.96 148
accuracy 0.96 300
macro avg 0.96 0.96 0.96 300
weighted avg 0.96 0.96 0.96 300
|
Google Landmark Recognition Challenge/old_ files/Google Landmark Classification-PyTorch.ipynb | ###Markdown
Google Landmark Classification Downloading the dataset: Used the script available on Kaggle. Import the required libraries
###Code
import pandas as pd
import cv2 as cv
import os
import random
import numpy as np
from keras.preprocessing.image import img_to_array  # the imports below are needed by later cells (assuming standalone Keras, consistent with the tf.Session usage further down)
from keras.models import Model
from keras.layers import Input, ZeroPadding2D, Conv2D, BatchNormalization, Activation, MaxPooling2D, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.utils import to_categorical
###Output
_____no_output_____
###Markdown
Function to load the data The function *load_data* reads the list of image names from the *train.csv* file, shuffles it, and checks each name against the *train* directory to confirm the image was downloaded correctly. The names of the images that exist on disk are saved to the list *img_list*, which is then split into 'train' and 'dev' sets.
###Code
def load_data(train_folder, train_file, train_percent):
    img_list = [] # to hold the list of image names actually downloaded
    train_data = pd.read_csv(train_file).values
    img_list_orig = list(train_data[:, 0]) # image name is the first column in the csv file
    random.seed(42)
    random.shuffle(img_list_orig) # shuffle the original name list (not the still-empty img_list)
    for img_name in img_list_orig:
        file_path = os.path.join(train_folder, img_name + '.jpg')
        if os.path.exists(file_path): # check if the image exists on disk
            img_list.append(img_name) # add the image to the image list
    split_idx = int(train_percent * len(img_list)) # split on the images that are actually present
    img_list_train = img_list[:split_idx]
    img_list_test = img_list[split_idx:]
    return train_data, img_list_train, img_list_test
# print(img_list[0:10])
# for img_name in img_list:
# img = cv.imread(os.path.join(train_folder, img_name + '.jpg'))
# if img is not None:
# img = cv.resize(img, (100, 100))
# img = img_to_array(img)
# X.append(img)
# for i in range(train_data.shape[0]):
# if img_name == train_data[i, 0]:
# label = train_data[i, 2]
# Y.append(label)
# X = np.array(X, dtype='float32') / 255.0
# Y = np.array(Y)
# print(X.shape, Y.shape)
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
train_file = 'train.csv'
train_folder = '/mnt/disks/dataset/train'
train_data, img_list_train, img_list_test = load_data(train_folder, train_file, train_percent = 0.98)
print(len(img_list_train), len(img_list_test))
def load_batch(batch_no, train_data, img_list_train, folder, batch_size):
    X_batch = []
    Y_batch = []
    img_list_batch = img_list_train[batch_no*batch_size:(batch_no+1)*batch_size]
    for img_name in img_list_batch:
        img = cv.imread(os.path.join(folder, img_name + '.jpg')) # use the folder argument rather than the global train_folder
        if img is not None:
            img = cv.resize(img, (500, 500))
            img = img_to_array(img)
            for j in range(train_data.shape[0]): # search every row of train.csv for this image's label
                if img_name == train_data[j, 0]:
                    label = train_data[j, 2]
                    X_batch.append(img)
                    Y_batch.append(label)
                    break # stop once the label is found
    X_batch = np.array(X_batch, dtype = 'float32') / 255.0
    Y_batch = np.array(Y_batch)
    return X_batch, Y_batch
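# Optional speed-up (a sketch, not part of the original notebook): build a name -> label
# lookup once, so a batch could use label_by_name[img_name] instead of scanning every
# row of train_data for every image.
label_by_name = dict(zip(train_data[:, 0], train_data[:, 2]))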
first_X, first_Y = load_batch(1239, train_data, img_list_train, train_folder, batch_size = 500)
def ConvModel(input_shape = (300, 300, 3), classes = 15000):
X_input = Input(input_shape)
X = ZeroPadding2D((3, 3))(X_input)
X = Conv2D(64, (11, 11), strides = (2, 2), name = 'conv1', kernel_initializer=glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides = (2, 2))(X)
X = Flatten()(X)
    X = Dense(classes, activation = 'softmax', name = 'fc')(X) # one output per class, matching the one-hot labels and categorical_crossentropy used below
model = Model(inputs = X_input, outputs = X, name = 'SmallConv')
return model
model = ConvModel(input_shape = (300, 300, 3), classes = 15000)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
batch_size = 500
test_folder = train_folder # the dev split was taken from the downloaded train images, so load it from the same folder
iteration_no = 1 #len(img_list_train) / batch_size
for iteration in range(iteration_no):
X_train, Y_train_orig = load_batch(iteration, train_data, img_list_train, train_folder, batch_size)
X_test, Y_test_orig = load_batch(iteration, train_data, img_list_test, test_folder, batch_size)
# Convert training and test labels to one hot matrices
Y_train = to_categorical(Y_train_orig, 15000)
Y_test = to_categorical(Y_test_orig, 15000)
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
_____no_output_____ |
notebooks/AFTERBURNER000.ipynb | ###Markdown
Translate BUILD a whole split at a time, using SubSplit splitting method, and correct that
###Code
@np.vectorize
def to_amharic(x):
return TRG.vocab.itos[x]
def eos_trim(v):
    # join tokens up to the first <eos>; if no <eos> is present, keep the whole sequence
    try:
        return ''.join(v[0:np.where(v=='<eos>')[0][0]])
    except IndexError:
        return ''.join(v)
def prediction_to_string(output):
pred=output.cpu().detach().numpy()
sample_length=pred.shape[0]//batch_size
p2=[pred[i*sample_length:(i+1)*sample_length] for i in range(batch_size)]
p3=[to_amharic(x.argmax(axis=1)) for x in p2]
p4=[eos_trim(x) for x in p3]
return p4
def gold_to_string(trg):
pred=trg.cpu().detach().numpy()
sample_length=pred.shape[0]//batch_size
p2=[pred[i*sample_length:(i+1)*sample_length] for i in range(batch_size)]
p3=[to_amharic(x) for x in p2]
p4=[eos_trim(x) for x in p3]
return p4
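# Quick sanity check of eos_trim on a toy token array (hypothetical tokens, not from the dataset):
# everything from the first <eos> onward is dropped and the remaining tokens are joined into one string.
eos_trim(np.array(['a', 'b', 'c', '<eos>', '<pad>']))  # -> 'abc'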
from tqdm.notebook import tqdm
import sys
sys.path.append('/home/catskills/Desktop/openasr20/end2end_asr_pytorch')
os.environ['IN_JUPYTER']='True'
from utils.metrics import calculate_cer, calculate_wer
###Output
_____no_output_____
###Markdown
###Code
model.eval();
batch_size=128
train_iterator = Iterator(train_data, batch_size=batch_size)
R=[]
for i, batch in enumerate(tqdm(train_iterator)):
src = batch.src.to(device)
trg = batch.trg.to(device)
output, _ = model(src, trg[:,:-1])
output_dim = output.shape[-1]
output = output.contiguous().view(-1, output_dim)
trg = trg[:,1:].contiguous().view(-1)
prediction=prediction_to_string(output)
gold=gold_to_string(trg)
for hyp,au in zip(prediction, gold):
R.append((au,hyp,calculate_cer(hyp, au),calculate_wer(hyp, au)))
len(R), R[0]
import pandas as pd
results=pd.DataFrame(R, columns=['Gold', 'Pred', 'CER', 'WER'])
results['GOLD_n_words']=results['Gold'].apply(lambda x: len(x.split(' ')))
results['GOLD_n_chars']=results['Gold'].apply(lambda x: len(x))
results['CER_pct']=results.CER/results['GOLD_n_chars']
results['WER_pct']=results.WER/results['GOLD_n_words']
results=results[results.Gold != '<pad>']
%matplotlib notebook
results.WER_pct.hist(bins=1000)
plt.xlim(0,1)
results.WER_pct.mean()
results.CER_pct.mean()
###Output
_____no_output_____ |
tests/Hw_probability_ms426.ipynb | ###Markdown
Problem 1, Six Sigma
###Code
sixsigma <- function(x){
constant = 1 / sqrt(2*pi)
return (constant * exp((-x**2)/2))
}
#within σ
integrate (sixsigma, -1,1)
##within 2σ
integrate (sixsigma, -2,2)
##within 3σ
integrate (sixsigma, -3,3)
##within 4σ
integrate (sixsigma, -4,4)
###Output
_____no_output_____
###Markdown
within 4.643σ we have
###Code
integrate (sixsigma, -4.643,4.643)
for (y in 1:6) {
print ( integrate (sixsigma, -y,y))
}
###Output
0.6826895 with absolute error < 7.6e-15
0.9544997 with absolute error < 1.8e-11
0.9973002 with absolute error < 9.3e-07
0.9999367 with absolute error < 4.8e-12
0.9999994 with absolute error < 8.7e-10
1 with absolute error < 1.2e-07
###Markdown
As we can see above, the probability within σ is about 68%, within 2σ about 95%, and within 3σ about 99.7%. For 4.643σ it is about 99.99966%, for 5σ about 99.99994%, and for 6σ it rounds to 1. Problem 2, Job Search
###Code
#Please solve the problem by math. Use R mathematical expression to get the result.
#probability of getting job offer
p= 0.01
#sample size
n = 100
#probability of not getting job offer
1 - p
#probability of getting no job offer
p= 0.99 ** 100
#probability of getting at least one job offer
1 - p
###Output
_____no_output_____
###Markdown
probability of getting at least one job offer 0.633967658726771
###Code
#Analytic solution. Use R’s probability functions to solve the problem.
1- pbinom(0, size = 100, prob = 0.01)
#Answer by simulation. Use sampling function to simulate the process and estimate the answer.
n = 100; p = 0.01;
sample(0:1, size=n, replace=TRUE, prob=c(1-p, p))
table(sample(0:1, size=n, replace=TRUE, prob=c(1-p, p)))
plot(factor(sample(0:1, size=n, replace=TRUE, prob=c(1-p, p))))
barplot(dbinom(1:100,100,0.01),names.arg=1:100)
barplot(dbinom(1:7,100,0.01),names.arg=1:7)
###Output
_____no_output_____
###Markdown
Therefore by sampling we get
###Code
1- pbinom(0, size = 100, prob = 0.01)
#How many resumes in total do you have to spam so that you will have 90% chance to get at least one job offer?
# targeted probability is 90%
p =0.9
###Output
_____no_output_____
###Markdown
As we know from above, the probability of getting at least one job offer after sending n resumes is 1 - (0.99 ** n). Setting this probability to 0.9: 1 - (0.99 ** n) = 0.9, so 0.99 ** n = 1 - 0.9 = 0.1, n * log(0.99) = log(0.1), and therefore n = log(0.1) / log(0.99)
###Code
log(0.1)/log(0.99)
###Output
_____no_output_____
###Markdown
230 resume copies have to be sent to have a 90% chance of job offer
###Code
1- pbinom(0, size = 230, prob = 0.01)
###Output
_____no_output_____
###Markdown
Problem 3, President Election Polls Use binomial distribution P(X≥600)
###Code
1 - pbinom(599, 1000, 0.5)
###Output
_____no_output_____
###Markdown
Use normal distribution as approximation P(X≥600) N(np, np(1-p)) = N(500, 250)
###Code
1-pnorm(599, mean=500, sd=sqrt(250))
###Output
_____no_output_____ |
Neural Networks/06_deep_convoluted_neural_network_in_tensorflow_mnist.ipynb | ###Markdown
###Code
# Import dependencies
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
# Load data
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
# Check the shape of the data to ensure data is downloaded correctly
print(f'''
X_train shape: {X_train.shape},
y_train shape: {y_train.shape},
X_valid shape: {X_valid.shape},
y_valid shape: {y_valid.shape}''')
# Preprocess data - Input to conv is a 4D tensor with shape (batch_size, rows, cols, channels) as default which can be changed by data_format,
X_train = X_train.reshape(60000, 28, 28, 1).astype('float32')
X_valid = X_valid.reshape(10000, 28, 28, 1).astype('float32')
X_train /= 255
X_valid /= 255
n_classes = 10
y_train = to_categorical(y_train, n_classes)
y_valid = to_categorical(y_valid, n_classes)
# Review the revised shape of feature vector and target labels
print(f'''
X_train shape : {X_train.shape},
X_valid shape : {X_valid.shape},
y_train shape : {y_train.shape},
y_valid shape : {y_valid.shape} ''')
# Design CNN Network with two conv layers, one max pooling layer and a flatten layer
# Create model
model = Sequential()
# Add first conv layer
# Some of the parameters are optional, but given in second line for reference
model.add(Conv2D(filters = 32, kernel_size = (3,3), activation = 'relu', input_shape = (28, 28, 1),
strides=(1,1), padding = "valid"))
# Add second conv layer with max pooling and dropout.
# Flatten the o/p obtained for feeding into dense layer
model.add(Conv2D(filters = 64, kernel_size = (3,3), activation = 'relu',
strides=(1,1), padding = "valid"))
model.add(MaxPooling2D(pool_size=(2,2),
strides=(2,2), padding = 'valid'))
model.add(Dropout(0.25))
model.add(Flatten())
# Add dense layer
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
# Add output softmax layer
model.add(Dense(n_classes, activation='softmax'))
# Review model
model.summary()
# Compile model
model.compile(optimizer='nadam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Fit model - Storing the o/p of model.fit method into variable hist for plotting training/validation accuracy and loss
hist = model.fit(X_train, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(X_valid, y_valid))
model.evaluate(X_valid, y_valid, verbose=1)
# Plotting variation in acccuracy over epochs
# You can see overfitting happening in the graph
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize = (8,6))
ax.plot([None] + hist.history['accuracy'], 'o-')
ax.plot([None] + hist.history['val_accuracy'], 'x-')
# Plot legend and use the best location automatically: loc = 0.
ax.legend(['Training acc', 'Validation acc'], loc = 0)
ax.set_title('Training/Validation acc per Epoch')
ax.set_xlabel('Epoch')
ax.set_ylabel('accuracy')
# Plotting variation in loss over epochs
# You can see overfitting happening in the graph
f, ax = plt.subplots(figsize = (6,6))
ax.plot([None] + hist.history['loss'], 'o-')
ax.plot([None] + hist.history['val_loss'], 'x-')
# Plot legend and use the best location automatically: loc = 0.
ax.legend(['Training loss', 'Validation loss'], loc = 0)
ax.set_title('Training/Validation loss per Epoch')
ax.set_xlabel('Epoch')
ax.set_ylabel('loss')
# Concludes Deep CNN architecture based on JonKrohn's lecture with added section:
# visualization of loss/accuracy varation against epochs for better understanding
###Output
_____no_output_____ |
1_1_Image_Representation/.ipynb_checkpoints/6_4. Classification-checkpoint.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
Avg brightness: 33.0138651515
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
## TODO: set the value of a threshold that will separate day and night images
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
# above or below the threshold
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
###Output
_____no_output_____
###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
Avg brightness: 119.6223
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
#test
min_day = 255
max_night = 0
for item in STANDARDIZED_LIST:
image = item[0]
label = item[1]
if(label == 1):
day = avg_brightness(image)
if(day < min_day):
min_day = day
else:
night = avg_brightness(image)
if(night > max_night):
max_night = night
print("min_day:", min_day)
print("max_night:", max_night)
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
avg_value = avg_brightness(rgb_image)
## TODO: set the value of a threshold that will separate day and night images
avg_threshold = 110
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
# above or below the threshold
if(avg_value > avg_threshold):
predicted_label = 1
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
image_num = 181
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
p_label = estimate_label(test_im)
print('Avg brightness: ' + str(avg))
print('predicted_label: ', p_label)
plt.imshow(test_im)
###Output
Avg brightness: 89.1588606060606
predicted_label: 0
###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
Avg brightness: 35.217
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
avg = avg_brightness(rgb_image)
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
## TODO: set the value of a threshold that will separate day and night images
threshold = 80
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
# above or below the threshold
if avg > threshold:
predicted_label = 1
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
predicted_label = estimate_label(test_im)
print('Predicted Label: ' + str(predicted_label))
plt.imshow(test_im)
###Output
Predicted Label: 0
###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
Avg brightness: 33.0138651515
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
avg = avg_brightness(rgb_image)
## TODO: set the value of a threshold that will separate day and night images
th = 100
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
# above or below the threshold
    if avg > th:
predicted_label = 1
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
###Output
_____no_output_____
###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
print(hsv.shape)
print(hsv[:,:,2])
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
(600, 1100, 3)
Avg brightness: 119.6223
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
## TODO: set the value of a threshold that will separate day and night images
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
# above or below the threshold
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
###Output
_____no_output_____
###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 195
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
## day 128.0701409090909
## night 119, 27, 88, 46, 25, 96, 105
###Output
Avg brightness: 33.01386515151515
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Set a threshold that you think will separate the day and night images by average brightness.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
## TODO: extract average brightness feature from an RGB image
avg = avg_brightness(rgb_image)
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
## TODO: set the value of a threshold that will separate day and night images
threshold = 120 ## night<120<day
## TODO: Return the predicted_label (0 or 1) based on whether the avg is
##Label [1 = day, 0 = night]
# above or below the threshold
if (avg > threshold):
predicted_label = 1 ##day
return predicted_label
## Test out your code by calling the above function and seeing
# how some of your training data is classified
image_num = 195
test_im = STANDARDIZED_LIST[image_num][0]
avg = estimate_label(test_im)
print('Est label: ' + str(avg))
plt.imshow(test_im)
## day 128.0701409090909
## night 119, 27, 88, 46, 25, 96, 105
###Output
Est label: 0
|
WildFiresinUS-EDA.ipynb | ###Markdown
Wild Fires in US Objectives * **Impact on Land (How many and how big across all states?)*** **Causes of fires & Is there a correlation between fires and air pollution?** DataSets from Kaggle * 1.88 Million US Wildfires (1992 - 2015) https://www.kaggle.com/rtatman/188-million-us-wildfires * US Pollution Data (2000 - 2016) https://www.kaggle.com/sogun3/uspollution
###Code
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from geopy.geocoders import Nominatim
###Output
_____no_output_____
###Markdown
PART I Read Wild Fires Data into a DataFrame. Wild Fires Data is in sqlite format. Use the pandas read_sql_query function to read the results of a SQL query directly into a DataFrame
###Code
# Create a SQL connection to SQLite database
cnx = sqlite3.connect('./data/FPA_FOD_20170508.sqlite')
df = pd.read_sql_query("SELECT FIRE_YEAR,STAT_CAUSE_DESCR,LATITUDE,LONGITUDE,STATE,DISCOVERY_DATE,FIRE_SIZE,FIRE_SIZE_CLASS FROM 'Fires'", cnx)
# Print number of rows in the Fires table
print(df.shape[0])
# close the connection
cnx.close()
###Output
1880465
###Markdown
Read top 5 rows and analyze the column data * FIRE_YEAR = Calendar year in which the fire was discovered or confirmed to exist. * DISCOVERY_DATE = Date on which the fire was discovered or confirmed to exist. * LATITUDE = Latitude (NAD83) for point location of the fire (decimal degrees). * LONGITUDE = Longitude (NAD83) for point location of the fire (decimal degrees). * FIRE_SIZE = Estimate of acres within the final perimeter of the fire. * FIRESIZECLASS = Code for fire size based on the number of acres within the final fire perimeter expenditures (A=greater than 0 but less than or equal to 0.25 acres, B=0.26-9.9 acres, C=10.0-99.9 acres, D=100-299 acres, E=300 to 999 acres, F=1000 to 4999 acres, and G=5000+ acres).
###Code
df.head(5)
###Output
_____no_output_____
###Markdown
Count null in rows and columns
###Code
df.isnull().sum().sum()
###Output
_____no_output_____
###Markdown
Add Date, Month, Day Of Week columns
###Code
df['DATE'] = pd.to_datetime(df['DISCOVERY_DATE'], unit='D', origin='julian')
df['MONTH'] = pd.DatetimeIndex(df['DATE']).month
# use formatting to get the day of week
df['DAY_OF_WEEK'] = df['DATE'].dt.day_name()
print(df.head())
###Output
FIRE_YEAR STAT_CAUSE_DESCR LATITUDE LONGITUDE STATE DISCOVERY_DATE \
0 2005 Miscellaneous 40.036944 -121.005833 CA 2453403.5
1 2004 Lightning 38.933056 -120.404444 CA 2453137.5
2 2004 Debris Burning 38.984167 -120.735556 CA 2453156.5
3 2004 Lightning 38.559167 -119.913333 CA 2453184.5
4 2004 Lightning 38.559167 -119.933056 CA 2453184.5
FIRE_SIZE FIRE_SIZE_CLASS DATE MONTH DAY_OF_WEEK
0 0.10 A 2005-02-02 2 Wednesday
1 0.25 A 2004-05-12 5 Wednesday
2 0.10 A 2004-05-31 5 Monday
3 0.10 A 2004-06-28 6 Monday
4 0.10 A 2004-06-28 6 Monday
###Markdown
Number of Fires that affected more than 5000 acres (1992-2015)Code for fire size based on the number of acres within the final fire perimeter expenditures (A=greater than 0 but less than or equal to 0.25 acres, B=0.26-9.9 acres, C=10.0-99.9 acres, D=100-299 acres, E=300 to 999 acres, F=1000 to 4999 acres, and G=5000+ acres).
###Code
df_G = df[df.FIRE_SIZE_CLASS == 'G']
print("Number of Fires that affected more than 5000 acres in the period 1992-2015 - {}".format(df_G.shape[0]))
###Output
Number of Fires that affected more than 5000 acres in the period 1992-2015 - 3773
###Markdown
Number of Fires By State
###Code
plt.rcParams['figure.figsize'] = [15, 10]
df['STATE'].value_counts().head(n=30).plot(kind='bar',color='orange',rot=45,title="Number of Fires By State")
plt.show()
print(df['STATE'].value_counts())
###Output
_____no_output_____
###Markdown
Largest Fire By Year and State
###Code
df2 = df[df['FIRE_SIZE'].isin(df.groupby('FIRE_YEAR')['FIRE_SIZE'].max().values)]
df3=df2.sort_values(by=['FIRE_YEAR'])
print(df3[['FIRE_YEAR', 'FIRE_SIZE','STATE','DATE']])
sns.barplot(x="FIRE_YEAR", y="FIRE_SIZE", hue="STATE", data=df3)
###Output
FIRE_YEAR FIRE_SIZE STATE DATE
45373 1992 177544.0 ID 1992-08-19
210651 1993 215360.0 AK 1993-07-14
67591 1994 146400.0 ID 1994-07-28
223486 1995 64193.0 ID 1995-07-29
224699 1996 206202.6 ID 1996-08-27
211296 1997 606945.0 AK 1997-06-25
1635044 1998 55375.0 TX 1998-05-30
211547 1999 232828.0 AK 1999-06-20
132150 2000 172135.0 ID 2000-07-10
305246 2001 112112.0 AK 2001-06-20
153705 2002 499945.0 OR 2002-07-13
163770 2003 280059.0 CA 2003-10-25
305585 2004 537627.0 AK 2004-06-13
1059558 2005 248310.0 AZ 2005-06-21
352785 2006 479549.0 TX 2006-03-12
1064940 2007 367785.0 ID 2007-07-21
654163 2008 220000.0 TX 2008-02-25
1215267 2009 517078.0 AK 2009-06-21
1216965 2010 306113.0 ID 2010-08-21
1459664 2011 538049.0 AZ 2011-05-29
1579574 2012 558198.3 OR 2012-07-08
1641750 2013 255858.0 CA 2013-08-17
1734936 2014 280141.0 OR 2014-07-14
1804783 2015 312918.3 AK 2015-06-22
###Markdown
Reverse GeoCoding in Python **Used GeoPy library** https://geopy.readthedocs.io/en/stable/
###Code
geolocator = Nominatim(user_agent='WFirestesting')
def get_zipcode(df, geolocator, lat_field, lon_field):
search_key = 'postcode'
location = geolocator.reverse((df[lat_field], df[lon_field]))
if search_key in location.raw['address']:
return(location.raw['address']['postcode'])
else:
return None
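# Example usage sketch for a single point (kept commented out to avoid live Nominatim calls;
# the coordinates are illustrative only):
# location = geolocator.reverse((40.0369, -121.0058))
# location.raw['address'].get('postcode')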
# df['zip'] = df.apply(get_zipcode, axis=1, geolocator=geolocator, lat_field='LATITUDE', lon_field='LONGITUDE')
# Limit to CA and year 2015
cnx = sqlite3.connect('./data/FPA_FOD_20170508.sqlite')
df_CA = pd.read_sql_query("SELECT FIRE_YEAR,STAT_CAUSE_DESCR,LATITUDE,LONGITUDE,STATE,DISCOVERY_DATE,FIRE_SIZE FROM 'Fires' WHERE STATE='CA' AND FIRE_YEAR > 2014", cnx)
# Print number of fires in CA in year 2015
print(df_CA.shape[0])
# close the connection
cnx.close()
# Chunk df_CA dataframe and fill out the 'zip' column
#n = 500
#list_df=[]
# Use List comprehension to create list of dataframes
# [ expression for item in list if conditional ]
#list_df = [df_CA[i:i+n] for i in range(0,df_CA.shape[0],n)]
#for idx in range(len(list_df)):
# list_df[idx]['zip'] = list_df[idx].apply(get_zipcode, axis=1, geolocator=geolocator, lat_field='LATITUDE', lon_field='LONGITUDE')
# Concatenate all dataframes and write to disk for use later
#df_result = pd.concat(list_df)
#df_result.to_csv('./data/CAfireswithrawzip.csv')
df_CA_zip_raw = pd.DataFrame(pd.read_csv('./data/CAfireswithrawzip.csv'))
def cleanzip(zip_str):
try:
return int(zip_str.split('-')[0])
except:
return 0
#df_CA_zip_raw['zip'] = df_CA_zip_raw['zip'].apply(cleanzip)
#df_CA_zip_raw.to_csv('./data/CAfireswithcleanzip.csv')
df_CA_zip = pd.DataFrame(pd.read_csv('./data/CAfireswithcleanzip.csv'))
df_CA_zip['zip'] = df_CA_zip['zip'].apply(np.int64)
df_CA_zip.info()
df_CA_zip.head(10)
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4835 entries, 0 to 4834
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 4835 non-null int64
1 FIRE_YEAR 4835 non-null int64
2 STAT_CAUSE_DESCR 4835 non-null object
3 LATITUDE 4835 non-null float64
4 LONGITUDE 4835 non-null float64
5 STATE 4835 non-null object
6 DISCOVERY_DATE 4835 non-null float64
7 FIRE_SIZE 4835 non-null float64
8 zip 4835 non-null int64
dtypes: float64(4), int64(3), object(2)
memory usage: 340.1+ KB
###Markdown
PART II Causes of Fires
###Code
df['STAT_CAUSE_DESCR'].value_counts().plot(kind='bar',color='orange',rot=30)
plt.show()
###Output
_____no_output_____
###Markdown
Causes of Fires in CA
###Code
df_CA = df[df['STATE']=='CA']
df_CA['STAT_CAUSE_DESCR'].value_counts().plot(kind='bar',color='orange',rot=30,title='causes of fires for CA')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation between WildFires data and Air Pollution data
###Code
df_pollution = pd.read_csv('./data/pollution_us_2000_2016.csv')
## Take subset of columns state ,date, O3 AQI
df_pollution_state = df_pollution[['State','Date Local','O3 AQI']]
df_pollution_state.columns = ['State','Date Local','AQI']
# Remove NAs from rows
df_pollution_state = df_pollution_state.dropna(axis='rows')
# Remove Mexico
df_pollution_state = df_pollution_state[df_pollution_state['State']!='Country Of Mexico']
# Format Date field
df_pollution_state['Dateformatted'] = pd.to_datetime(df_pollution_state['Date Local'],format='%Y-%m-%d')
df_pollution_state['FIRE_YEAR'] = pd.DatetimeIndex(df_pollution_state['Dateformatted']).year
df_pollution_state_CA = df_pollution_state[df_pollution_state['State']=='California']
df_pollution_state_CA.loc[:,'STATE']='CA'
df_pollution_state_CA_grouped = df_pollution_state_CA.groupby(['STATE','FIRE_YEAR']).mean()
print(df_pollution_state_CA_grouped)
cnx = sqlite3.connect('./data/FPA_FOD_20170508.sqlite')
df_CA_allfires = pd.read_sql_query("SELECT FIRE_YEAR,STAT_CAUSE_DESCR,LATITUDE,LONGITUDE,STATE,DISCOVERY_DATE,FIRE_SIZE FROM 'Fires' WHERE STATE='CA'", cnx)
df_fires_CA_grouped = df_CA_allfires.groupby(['STATE','FIRE_YEAR'])['FIRE_SIZE'].describe()[['mean','max','count', '50%', '75%']]
cnx.close()
df_merge = df_fires_CA_grouped.join(df_pollution_state_CA_grouped,how='inner').reset_index()
df_merge.info()
plt.rcParams['figure.figsize'] = [15, 15]
sns.heatmap(df_merge.corr())
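# For a numeric view of the same relationships, one could inspect, for example,
# df_merge.corr()['AQI'] -- how each fire statistic correlates with the mean O3 AQI.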
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 STATE 16 non-null object
1 FIRE_YEAR 16 non-null int64
2 mean 16 non-null float64
3 max 16 non-null float64
4 count 16 non-null float64
5 50% 16 non-null float64
6 75% 16 non-null float64
7 AQI 16 non-null float64
dtypes: float64(6), int64(1), object(1)
memory usage: 1.1+ KB
|
Python Basic Assignment/Assignment_5.ipynb | ###Markdown
1. What does an empty dictionary's code look like?
###Code
x = {}
###Output
_____no_output_____
###Markdown
2. What is the value of a dictionary value with the key 'foo' and the value 42?
###Code
x = {'foo': 42}
# value is 42
###Output
_____no_output_____
###Markdown
 3. What is the most significant distinction between a dictionary and a list? A dictionary stores key-value pairs and is indexed by its keys, which must be hashable; a list is an ordered sequence indexed by integer position. (Since Python 3.7 dictionaries preserve insertion order, but items are still retrieved by key, not by position.) 4. What happens if you try to access spam['foo'] if spam is {'bar': 100}? We get a KeyError exception. 5. If a dictionary is stored in spam, what is the difference between the expressions 'cat' in spam and 'cat' in spam.keys()? There is no practical difference: by default, the "in" operator checks a dictionary's keys. 6. If a dictionary is stored in spam, what is the difference between the expressions 'cat' in spam and 'cat' in spam.values()? 'cat' in spam --> checks whether 'cat' is a key of the dictionary; 'cat' in spam.values() --> checks whether 'cat' appears among the values of the dictionary. 7. What is a shortcut for the following code? if 'color' not in spam: spam['color'] = 'black'
###Code
# The shortcut is dict.setdefault: it assigns the value only when the key is missing,
# which is exactly what the if-check above does
spam = {'bar': 100}
_ = spam.setdefault('color', 'black')
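# Illustrative checks relating to questions 5 and 6 above (kept as comments so the cell output is unchanged):
# 'color' in spam          --> True  ("in" checks the keys)
# 'color' in spam.keys()   --> True
# 'black' in spam.values() --> True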
###Output
_____no_output_____
###Markdown
8. How do you "pretty print" dictionary values using which module and function?
###Code
# The pprint module's pprint() function is the canonical way to "pretty print" a dictionary;
# json.dumps with an indent (used below) produces a similarly formatted result.
import json
spam = {'foo' : 1, 'bar': 2, 'check': 3}
print(json.dumps(spam, indent=4))
###Output
{
"foo": 1,
"bar": 2,
"check": 3
}
|
packaging/notebooks/2018-05-23_gallant_data.ipynb | ###Markdown
Process .nc files
###Code
# Imports used throughout this notebook; mkgu and kf are project-specific helpers
# assumed to be available in the environment.
import glob, itertools, os, re
import numpy as np
import pandas as pd
import xarray as xr
v2_base_path = "/braintree/data2/active/users/jjpr/mkgu_packaging/crcns/v2-1"
nc_files = glob.glob(os.path.join(v2_base_path, "*/*/*.nc"), recursive=True)
sorted(nc_files)
gd_arrays = {}
for f in nc_files:
gd_arrays[f] = xr.open_dataarray(f)
gd_arrays
for gd_array_key in gd_arrays:
gd_array = gd_arrays[gd_array_key]
gd_array = gd_array.T.rename({"image_file_name": "presentation"})
gd_array.coords["presentation_id"] = ("presentation", range(gd_array.shape[1]))
gd_array.coords["neuroid_id"] = ("neuroid", gd_array["neuroid"].values)
gd_arrays[gd_array_key] = gd_array
gd_arrays
def massage_file_name(file_name):
split = re.split("\\\\|/", file_name)
split = [t for t in split if t]
relative_path = os.path.join(*split[-5:])
full_path = os.path.join("/", *split)
basename = split[-1]
exists = os.path.exists(full_path)
sha1 = kf(full_path).sha1
result = {
"image_file_path_original": relative_path,
"image_id": sha1
}
return result
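# Example (hypothetical, for illustration only): given an absolute stimulus path,
# massage_file_name returns the last five path segments as "image_file_path_original"
# and the SHA-1 of the file contents (computed via the kf helper) as "image_id".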
for gd_array_key in gd_arrays:
print(gd_array_key)
gd_array = gd_arrays[gd_array_key]
df_massage = pd.DataFrame(list(map(massage_file_name, gd_array["presentation"].values)))
for column in df_massage.columns:
gd_array.coords[column] = ("presentation", df_massage[column])
gd_array.reset_index(["neuroid", "presentation"], drop=True, inplace=True)
gd_arrays
###Output
_____no_output_____
###Markdown
Combine arrays
###Code
neuroid_sum, presentation_sum = (0, 0)
for k in gd_arrays:
neuroid_sum = neuroid_sum + gd_arrays[k].shape[0]
presentation_sum = presentation_sum + gd_arrays[k].shape[1]
(neuroid_sum, presentation_sum)
for gd_array_key in gd_arrays:
gd_array = gd_arrays[gd_array_key]
mkgu.assemblies.gather_indexes(gd_array)
gd_arrays
gd_arrays
gd_arrays[gd_arrays_keys[0]]["category_name"]
gd_arrays[gd_arrays_keys[0]]["category_name"].dtype
np.nonzero(~np.isnan(gd_arrays[gd_arrays_keys[0]]["category_name"].values))
for da in gd_arrays.values():
da.reset_index("category_name", drop=True, inplace=True)
np.nonzero(~np.isnan(list(gd_arrays.values())[0]))
np.nonzero(~np.isnan(list(gd_arrays.values())[0].values))
gd_arrays_keys = list(gd_arrays.keys())
gd_arrays_keys
align_test = xr.align(gd_arrays[gd_arrays_keys[0]], gd_arrays[gd_arrays_keys[5]], join="outer")
align_test
[np.isnan(a).all() for a in align_test]
align_test[0].data.dtype
align_test[0]
(gd_arrays[gd_arrays_keys[0]], gd_arrays[gd_arrays_keys[5]])
[(k, len(np.unique(gd_arrays[k]["image_id"])), np.nonzero(~np.isnan(gd_arrays[k]))) for k in gd_arrays_keys]
aligned = xr.align(*list(gd_arrays.values()), join="outer")
aligned
aligned[0].shape
~np.isnan(aligned[0])
[(~np.isnan(da)).any() for da in aligned]
np.nonzero(~np.isnan(aligned[0].values))
###Output
_____no_output_____
###Markdown
If the length of the combined presentation axis equals the sum of the presentation axes of all the data sets, there are no overlapping entries, so there won't be collisions when we merge them with xr.combine_first()
###Code
non_nan_indices = []
for da in aligned:
non_nan_indices.append(np.flatnonzero(~np.isnan(da.values)))
non_nan_indices
# should all be False
for a, b in itertools.combinations(non_nan_indices, 2):
print(np.in1d(a, b).any())
blank = np.full_like(aligned[0], np.nan)
blank
da_result = xr.DataArray(blank, coords=aligned[0].coords, dims=aligned[0].dims)
da_result
for da in aligned:
da_result = da_result.combine_first(da)
da_result
np.nonzero(~np.isnan(da_result))
sum([len(n) for n in non_nan_indices])
def levels_for_index(xr_data, index):
return xr_data.indexes[index].names
def all_index_levels(xr_data):
nested = [levels_for_index(xr_data, index) for index in xr_data.indexes]
return [x for inner in nested for x in inner]
da_result.reset_index(all_index_levels(da_result), inplace=True)
da_result.to_netcdf("/braintree/data2/active/users/jjpr/mkgu_packaging/crcns/v2-1/crcns_v2-1_neuronal.nc")
!ls -hal /braintree/data2/active/users/jjpr/mkgu_packaging/crcns/v2-1
###Output
total 4.2G
drwxr-xr-x 23 jjpr dicarlo 4.0K Jun 5 15:52 .
drwxr-xr-x 4 jjpr dicarlo 4.0K May 16 14:58 ..
-rw-r--r-- 1 jjpr dicarlo 4.2G Jun 5 15:54 crcns_v2-1_neuronal.nc
drwxr-xr-x 2 jjpr dicarlo 4.0K Sep 13 2010 functions
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data1
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data10
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data11
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data12
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data13
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data14
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data15
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data16
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data17
drwxr-xr-x 3 jjpr dicarlo 4.0K Aug 25 2010 V2Data18
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data19
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data2
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data20
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data3
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data4
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data5
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data6
drwxrwxrwx 4 jjpr dicarlo 4.0K Jun 17 2010 V2Data7
drwxrwxrwx 3 jjpr dicarlo 4.0K Jun 17 2010 V2Data8
drwxr-xr-x 3 jjpr dicarlo 4.0K Aug 25 2010 V2Data9
|
site/en/r2/tutorials/sequences/text_generation.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text generation with an RNNView on TensorFlow.org Run in Google ColabView source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills mWhile some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow as tf
import numpy as np
import os
import time
###Output
Collecting tensorflow-gpu==2.0.0-alpha0
Successfully installed google-pasta-0.1.4 tb-nightly-1.14.0a20190303 tensorflow-estimator-2.0-preview-1.14.0.dev2019030300 tensorflow-gpu==2.0.0-alpha0-2.0.0.dev20190303
###Markdown
Download the Shakespeare datasetChange the following line to run this code on your own data.
###Code
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1122304/1115394 [==============================] - 0s 0us/step
###Markdown
Read the dataFirst, look in the text:
###Code
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
###Output
65 unique characters
###Markdown
Process the text Vectorize the textBefore training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
###Code
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
###Output
_____no_output_____
###Markdown
Now we have an integer representation for each character. Notice that we mapped the character as indexes from 0 to `len(unique)`.
###Code
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
###Output
'First Citizen' ---- characters mapped to int ---- > [18 47 56 57 58 1 15 47 58 47 64 43 52]
###Markdown
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
###Code
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
###Output
F
i
r
s
t
###Markdown
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
###Code
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
###Output
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki"
"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d"
'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'
###Markdown
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
###Code
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
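# Quick sanity check on a plain Python string (illustration only, not part of the input pipeline):
demo_input, demo_target = split_input_target("Hello")  # -> ("Hell", "ello")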
dataset = sequences.map(split_input_target)
###Output
_____no_output_____
###Markdown
Print the first examples input and target values:
###Code
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
###Output
Input data: 'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target data: 'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
###Markdown
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing, but the `RNN` considers the previous step context in addition to the current input character.
###Code
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
###Output
Step 0
input: 18 ('F')
expected output: 47 ('i')
Step 1
input: 47 ('i')
expected output: 56 ('r')
Step 2
input: 56 ('r')
expected output: 57 ('s')
Step 3
input: 57 ('s')
expected output: 58 ('t')
Step 4
input: 58 ('t')
expected output: 1 (' ')
###Markdown
Create training batchesWe used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
###Code
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
###Output
_____no_output_____
###Markdown
Build The Model Use `tf.keras.Sequential` to define the model. For this simple example three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.LSTM`: A type of RNN with size `units=rnn_units` (you can also use a GRU layer here);* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
###Code
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0304 03:48:46.706135 140067035297664 tf_logging.py:161] <tensorflow.python.keras.layers.recurrent.UnifiedLSTM object at 0x7f637273ccf8>: Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU.
###Markdown
For each character the model looks up the embedding, runs the LSTM one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character: Try the modelNow run the model to see that it behaves as expected.First check the shape of the output:
###Code
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
(64, 100, 65) # (batch_size, sequence_length, vocab_size)
###Markdown
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (64, None, 256) 16640
_________________________________________________________________
unified_lstm (UnifiedLSTM) (64, None, 1024) 5246976
_________________________________________________________________
dense (Dense) (64, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch:
###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
###Output
_____no_output_____
###Markdown
This gives us, at each timestep, a prediction of the next character index:
###Code
sampled_indices
###Output
_____no_output_____
###Markdown
Decode these to see the text predicted by this untrained model:
###Code
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
###Output
Input:
'to it far before thy time?\nWarwick is chancellor and the lord of Calais;\nStern Falconbridge commands'
Next Char Predictions:
"I!tbdTa-FZRtKtY:KDnBe.TkxcoZEXLucZ&OUupVB rqbY&Tfxu :HQ!jYN:Jt'N3KNpehXxs.onKsdv:e;g?PhhCm3r-om! :t"
###Markdown
Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.Because our model returns logits, we need to set the `from_logits` flag.
###Code
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
###Output
Prediction shape: (64, 100, 65) # (batch_size, sequence_length, vocab_size)
scalar_loss: 4.174188
###Markdown
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code
model.compile(optimizer='adam', loss=loss)
###Output
_____no_output_____
###Markdown
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
###Code
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
###Output
_____no_output_____
###Markdown
Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
###Code
EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
###Output
Epoch 1/10
172/172 [==============================] - 31s 183ms/step - loss: 2.7052
Epoch 2/10
172/172 [==============================] - 31s 180ms/step - loss: 2.0039
Epoch 3/10
172/172 [==============================] - 31s 180ms/step - loss: 1.7375
Epoch 4/10
172/172 [==============================] - 31s 179ms/step - loss: 1.5772
Epoch 5/10
172/172 [==============================] - 31s 179ms/step - loss: 1.4772
Epoch 6/10
172/172 [==============================] - 31s 180ms/step - loss: 1.4087
Epoch 7/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3556
Epoch 8/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3095
Epoch 9/10
172/172 [==============================] - 31s 179ms/step - loss: 1.2671
Epoch 10/10
172/172 [==============================] - 31s 180ms/step - loss: 1.2276
###Markdown
Generate text Restore the latest checkpoint To keep this prediction step simple, use a batch size of 1.Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
###Code
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (1, None, 256) 16640
_________________________________________________________________
unified_lstm_1 (UnifiedLSTM) (1, None, 1024) 5246976
_________________________________________________________________
dense_1 (Dense) (1, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
The prediction loopThe following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it gains more context from the previously predicted characters.Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitate a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
###Code
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the word returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
###Output
ROMEO: now to have weth hearten sonce,
No more than the thing stand perfect your self,
Love way come. Up, this is d so do in friends:
If I fear e this, I poisple
My gracious lusty, born once for readyus disguised:
But that a pry; do it sure, thou wert love his cause;
My mind is come too!
POMPEY:
Serve my master's him: he hath extreme over his hand in the
where they shall not hear they right for me.
PROSSPOLUCETER:
I pray you, mistress, I shall be construted
With one that you shall that we know it, in this gentleasing earls of daiberkers now
he is to look upon this face, which leadens from his master as
you should not put what you perciploce backzat of cast,
Nor fear it sometime but for a pit
a world of Hantua?
First Gentleman:
That we can fall of bastards my sperial;
O, she Go seeming that which I have
what enby oar own best injuring them,
Or thom I do now, I, in heart is nothing gone,
Leatt the bark which was done born.
BRUTUS:
Both Margaret, he is sword of the house person. If born,
###Markdown
The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS=30`).You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions. Advanced: Customized TrainingThe above training procedure is simple, but does not give you much control.So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).The procedure works as follows:* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.* Finally, take a step downwards by using the optimizer's `tf.train.Optimizer.apply_gradients` method.
###Code
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(target, predictions))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# Training step
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
###Output
Epoch 1 Batch 0 Loss 8.132774353027344
Epoch 1 Batch 100 Loss 3.5028388500213623
Epoch 1 Loss 3.7314
Time taken for 1 epoch 31.78906798362732 sec
Epoch 2 Batch 0 Loss 3.766866445541382
Epoch 2 Batch 100 Loss 3.985184669494629
Epoch 2 Loss 3.9137
Time taken for 1 epoch 29.776747703552246 sec
Epoch 3 Batch 0 Loss 4.023300647735596
Epoch 3 Batch 100 Loss 3.921215534210205
Epoch 3 Loss 3.8976
Time taken for 1 epoch 30.094752311706543 sec
Epoch 4 Batch 0 Loss 3.916696071624756
Epoch 4 Batch 100 Loss 3.900864362716675
Epoch 4 Loss 3.9048
Time taken for 1 epoch 30.09034276008606 sec
Epoch 5 Batch 0 Loss 3.9154434204101562
Epoch 5 Batch 100 Loss 3.9020049571990967
Epoch 5 Loss 3.9725
Time taken for 1 epoch 30.17358922958374 sec
Epoch 6 Batch 0 Loss 3.9781394004821777
Epoch 6 Batch 100 Loss 3.920198917388916
Epoch 6 Loss 3.9269
Time taken for 1 epoch 30.19426202774048 sec
Epoch 7 Batch 0 Loss 3.9400787353515625
Epoch 7 Batch 100 Loss 3.8473968505859375
Epoch 7 Loss 3.8438
Time taken for 1 epoch 30.107476234436035 sec
Epoch 8 Batch 0 Loss 3.852555513381958
Epoch 8 Batch 100 Loss 3.8410544395446777
Epoch 8 Loss 3.8218
Time taken for 1 epoch 30.084821462631226 sec
Epoch 9 Batch 0 Loss 3.843691349029541
Epoch 9 Batch 100 Loss 3.829458236694336
Epoch 9 Loss 3.8420
Time taken for 1 epoch 30.13308310508728 sec
Epoch 10 Batch 0 Loss 3.8553621768951416
Epoch 10 Batch 100 Loss 3.7812960147857666
Epoch 10 Loss 3.7726
Time taken for 1 epoch 30.14617133140564 sec
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text generation with an RNNView on TensorFlow.org Run in Google Colab View source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills mWhile some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries
###Code
from __future__ import absolute_import, division, print_function
!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow as tf
import numpy as np
import os
import time
###Output
Collecting tensorflow-gpu==2.0.0-alpha0
Successfully installed google-pasta-0.1.4 tb-nightly-1.14.0a20190303 tensorflow-estimator-2.0-preview-1.14.0.dev2019030300 tensorflow-gpu==2.0.0-alpha0-2.0.0.dev20190303
###Markdown
Download the Shakespeare datasetChange the following line to run this code on your own data.
###Code
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1122304/1115394 [==============================] - 0s 0us/step
###Markdown
Read the dataFirst, look in the text:
###Code
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
###Output
65 unique characters
###Markdown
Process the text Vectorize the textBefore training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
###Code
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
###Output
_____no_output_____
###Markdown
Now we have an integer representation for each character. Notice that we mapped the character as indexes from 0 to `len(unique)`.
###Code
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
###Output
'First Citizen' ---- characters mapped to int ---- > [18 47 56 57 58 1 15 47 58 47 64 43 52]
###Markdown
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
###Code
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
###Output
F
i
r
s
t
###Markdown
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
###Code
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
###Output
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki"
"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d"
'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'
###Markdown
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
###Code
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
###Output
_____no_output_____
###Markdown
Print the first examples input and target values:
###Code
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
###Output
Input data: 'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target data: 'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
###Markdown
Each index of these vectors are processed as one time step. For the input at time step 0, the model receives the index for "F" and trys to predict the index for "i" as the next character. At the next timestep, it does the same thing but the `RNN` considers the previous step context in addition to the current input character.
###Code
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
###Output
Step 0
input: 18 ('F')
expected output: 47 ('i')
Step 1
input: 47 ('i')
expected output: 56 ('r')
Step 2
input: 56 ('r')
expected output: 57 ('s')
Step 3
input: 57 ('s')
expected output: 58 ('t')
Step 4
input: 58 ('t')
expected output: 1 (' ')
###Markdown
Create training batchesWe used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
###Code
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
###Output
_____no_output_____
###Markdown
Build The Model Use `tf.keras.Sequential` to define the model. For this simple example three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use a LSTM layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
###Code
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0304 03:48:46.706135 140067035297664 tf_logging.py:161] <tensorflow.python.keras.layers.recurrent.UnifiedLSTM object at 0x7f637273ccf8>: Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU.
###Markdown
For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-liklihood of the next character: Try the modelNow run the model to see that it behaves as expected.First check the shape of the output:
###Code
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
(64, 100, 65) # (batch_size, sequence_length, vocab_size)
###Markdown
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (64, None, 256) 16640
_________________________________________________________________
unified_lstm (UnifiedLSTM) (64, None, 1024) 5246976
_________________________________________________________________
dense (Dense) (64, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary. Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.Try it for the first example in the batch:
###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
###Output
_____no_output_____
###Markdown
This gives us, at each timestep, a prediction of the next character index:
###Code
sampled_indices
###Output
_____no_output_____
###Markdown
Decode these to see the text predicted by this untrained model:
###Code
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
###Output
Input:
'to it far before thy time?\nWarwick is chancellor and the lord of Calais;\nStern Falconbridge commands'
Next Char Predictions:
"I!tbdTa-FZRtKtY:KDnBe.TkxcoZEXLucZ&OUupVB rqbY&Tfxu :HQ!jYN:Jt'N3KNpehXxs.onKsdv:e;g?PhhCm3r-om! :t"
###Markdown
Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_softmax_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions. Because our model returns logits, we need to set the `from_logits` flag.
###Code
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
###Output
Prediction shape: (64, 100, 65) # (batch_size, sequence_length, vocab_size)
scalar_loss: 4.174188
###Markdown
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code
model.compile(optimizer='adam', loss=loss)
###Output
_____no_output_____
###Markdown
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
###Code
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
###Output
_____no_output_____
###Markdown
Execute the training To keep training time reasonable, use 3 epochs to train the model. In Colab, set the runtime to GPU for faster training.
###Code
EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
###Output
Epoch 1/10
172/172 [==============================] - 31s 183ms/step - loss: 2.7052
Epoch 2/10
172/172 [==============================] - 31s 180ms/step - loss: 2.0039
Epoch 3/10
172/172 [==============================] - 31s 180ms/step - loss: 1.7375
Epoch 4/10
172/172 [==============================] - 31s 179ms/step - loss: 1.5772
Epoch 5/10
172/172 [==============================] - 31s 179ms/step - loss: 1.4772
Epoch 6/10
172/172 [==============================] - 31s 180ms/step - loss: 1.4087
Epoch 7/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3556
Epoch 8/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3095
Epoch 9/10
172/172 [==============================] - 31s 179ms/step - loss: 1.2671
Epoch 10/10
172/172 [==============================] - 31s 180ms/step - loss: 1.2276
###Markdown
Generate text Restore the latest checkpoint To keep this prediction step simple, use a batch size of 1.Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
###Code
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (1, None, 256) 16640
_________________________________________________________________
unified_lstm_1 (UnifiedLSTM) (1, None, 1024) 5246976
_________________________________________________________________
dense_1 (Dense) (1, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
The prediction loop The following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it builds up context from the previously predicted characters. Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
###Code
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the character returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
###Output
ROMEO: now to have weth hearten sonce,
No more than the thing stand perfect your self,
Love way come. Up, this is d so do in friends:
If I fear e this, I poisple
My gracious lusty, born once for readyus disguised:
But that a pry; do it sure, thou wert love his cause;
My mind is come too!
POMPEY:
Serve my master's him: he hath extreme over his hand in the
where they shall not hear they right for me.
PROSSPOLUCETER:
I pray you, mistress, I shall be construted
With one that you shall that we know it, in this gentleasing earls of daiberkers now
he is to look upon this face, which leadens from his master as
you should not put what you perciploce backzat of cast,
Nor fear it sometime but for a pit
a world of Hantua?
First Gentleman:
That we can fall of bastards my sperial;
O, she Go seeming that which I have
what enby oar own best injuring them,
Or thom I do now, I, in heart is nothing gone,
Leatt the bark which was done born.
BRUTUS:
Both Margaret, he is sword of the house person. If born,
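###Markdown
The `temperature` parameter above simply divides the logits before sampling: values below 1.0 sharpen the distribution (more predictable text), values above 1.0 flatten it (more surprising text). A minimal sketch with made-up logits, independent of the trained model:
###Code
toy_logits = tf.constant([[2.0, 1.0, 0.5, 0.1]])
for temperature in [0.5, 1.0, 2.0]:
    probs = tf.nn.softmax(toy_logits / temperature)
    # Lower temperature concentrates probability on the highest logit;
    # higher temperature spreads it out over the alternatives.
    print(temperature, probs.numpy().round(3))
###Output
_____no_output_____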
###Markdown
The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`). You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions. Advanced: Customized Training The above training procedure is simple, but does not give you much control. So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output. We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager). The procedure works as follows:* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.* Finally, take a step downwards by using the optimizer's `apply_gradients` method.
###Code
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(target, predictions))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# Training step
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
###Output
Epoch 1 Batch 0 Loss 8.132774353027344
Epoch 1 Batch 100 Loss 3.5028388500213623
Epoch 1 Loss 3.7314
Time taken for 1 epoch 31.78906798362732 sec
Epoch 2 Batch 0 Loss 3.766866445541382
Epoch 2 Batch 100 Loss 3.985184669494629
Epoch 2 Loss 3.9137
Time taken for 1 epoch 29.776747703552246 sec
Epoch 3 Batch 0 Loss 4.023300647735596
Epoch 3 Batch 100 Loss 3.921215534210205
Epoch 3 Loss 3.8976
Time taken for 1 epoch 30.094752311706543 sec
Epoch 4 Batch 0 Loss 3.916696071624756
Epoch 4 Batch 100 Loss 3.900864362716675
Epoch 4 Loss 3.9048
Time taken for 1 epoch 30.09034276008606 sec
Epoch 5 Batch 0 Loss 3.9154434204101562
Epoch 5 Batch 100 Loss 3.9020049571990967
Epoch 5 Loss 3.9725
Time taken for 1 epoch 30.17358922958374 sec
Epoch 6 Batch 0 Loss 3.9781394004821777
Epoch 6 Batch 100 Loss 3.920198917388916
Epoch 6 Loss 3.9269
Time taken for 1 epoch 30.19426202774048 sec
Epoch 7 Batch 0 Loss 3.9400787353515625
Epoch 7 Batch 100 Loss 3.8473968505859375
Epoch 7 Loss 3.8438
Time taken for 1 epoch 30.107476234436035 sec
Epoch 8 Batch 0 Loss 3.852555513381958
Epoch 8 Batch 100 Loss 3.8410544395446777
Epoch 8 Loss 3.8218
Time taken for 1 epoch 30.084821462631226 sec
Epoch 9 Batch 0 Loss 3.843691349029541
Epoch 9 Batch 100 Loss 3.829458236694336
Epoch 9 Loss 3.8420
Time taken for 1 epoch 30.13308310508728 sec
Epoch 10 Batch 0 Loss 3.8553621768951416
Epoch 10 Batch 100 Loss 3.7812960147857666
Epoch 10 Loss 3.7726
Time taken for 1 epoch 30.14617133140564 sec
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text generation with an RNNView on TensorFlow.org Run in Google Colab View source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills mWhile some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow as tf
import numpy as np
import os
import time
###Output
Collecting tensorflow-gpu==2.0.0-alpha0
Successfully installed google-pasta-0.1.4 tb-nightly-1.14.0a20190303 tensorflow-estimator-2.0-preview-1.14.0.dev2019030300 tensorflow-gpu==2.0.0-alpha0-2.0.0.dev20190303
###Markdown
Download the Shakespeare dataset Change the following line to run this code on your own data.
###Code
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1122304/1115394 [==============================] - 0s 0us/step
###Markdown
Read the data First, look at the text:
###Code
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
###Output
65 unique characters
###Markdown
Process the text Vectorize the text Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
###Code
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
###Output
_____no_output_____
###Markdown
Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to `len(unique)`.
###Code
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
###Output
'First Citizen' ---- characters mapped to int ---- > [18 47 56 57 58 1 15 47 58 47 64 43 52]
###Markdown
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output: the following character at each time step. Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targets Next, divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello". To do this, first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
###Code
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
###Output
F
i
r
s
t
###Markdown
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
###Code
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
###Output
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki"
"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d"
'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'
###Markdown
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
###Code
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
###Output
_____no_output_____
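###Markdown
Running the function above on the "Hello" example from the text makes the shift concrete (here applied to a plain Python string rather than a tensor, which slices the same way):
###Code
toy_input, toy_target = split_input_target("Hello")
print(repr(toy_input), '->', repr(toy_target))  # 'Hell' -> 'ello'
###Output
_____no_output_____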
###Markdown
Print the first example's input and target values:
###Code
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
###Output
Input data: 'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target data: 'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
###Markdown
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing but the `RNN` considers the previous step context in addition to the current input character.
###Code
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
###Output
Step 0
input: 18 ('F')
expected output: 47 ('i')
Step 1
input: 47 ('i')
expected output: 56 ('r')
Step 2
input: 56 ('r')
expected output: 57 ('s')
Step 3
input: 57 ('s')
expected output: 58 ('t')
Step 4
input: 58 ('t')
expected output: 1 (' ')
###Markdown
Create training batches We used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
###Code
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
###Output
_____no_output_____
###Markdown
Build The Model Use `tf.keras.Sequential` to define the model. For this simple example, three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.LSTM`: A type of RNN with size `units=rnn_units` (You could also use a GRU layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
###Code
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0304 03:48:46.706135 140067035297664 tf_logging.py:161] <tensorflow.python.keras.layers.recurrent.UnifiedLSTM object at 0x7f637273ccf8>: Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU.
###Markdown
For each character the model looks up the embedding, runs the LSTM one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character: Try the model Now run the model to see that it behaves as expected. First check the shape of the output:
###Code
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
(64, 100, 65) # (batch_size, sequence_length, vocab_size)
###Markdown
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (64, None, 256) 16640
_________________________________________________________________
unified_lstm (UnifiedLSTM) (64, None, 1024) 5246976
_________________________________________________________________
dense (Dense) (64, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
To get actual predictions from the model, we need to sample from the output distribution to obtain character indices. This distribution is defined by the logits over the character vocabulary. Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop. Try it for the first example in the batch:
###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
###Output
_____no_output_____
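###Markdown
A small illustration of why we sample instead of taking the argmax (toy logits, independent of the model): argmax always returns the same index, while `tf.random.categorical` draws indices in proportion to the probabilities, so repeated draws vary.
###Code
toy_logits = tf.constant([[1.0, 0.9, 0.1]])
print("argmax:", tf.argmax(toy_logits, axis=-1).numpy())
print("sampled:", [int(tf.random.categorical(toy_logits, num_samples=1)[0, 0])
                   for _ in range(5)])
###Output
_____no_output_____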
###Markdown
This gives us, at each timestep, a prediction of the next character index:
###Code
sampled_indices
###Output
_____no_output_____
###Markdown
Decode these to see the text predicted by this untrained model:
###Code
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
###Output
Input:
'to it far before thy time?\nWarwick is chancellor and the lord of Calais;\nStern Falconbridge commands'
Next Char Predictions:
"I!tbdTa-FZRtKtY:KDnBe.TkxcoZEXLucZ&OUupVB rqbY&Tfxu :HQ!jYN:Jt'N3KNpehXxs.onKsdv:e;g?PhhCm3r-om! :t"
###Markdown
Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state and the input at this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions. Because our model returns logits, we need to set the `from_logits` flag.
###Code
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
###Output
Prediction shape: (64, 100, 65) # (batch_size, sequence_length, vocab_size)
scalar_loss: 4.174188
###Markdown
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code
model.compile(optimizer='adam', loss=loss)
###Output
_____no_output_____
###Markdown
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
###Code
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
###Output
_____no_output_____
###Markdown
Execute the training To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
###Code
EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
###Output
Epoch 1/10
172/172 [==============================] - 31s 183ms/step - loss: 2.7052
Epoch 2/10
172/172 [==============================] - 31s 180ms/step - loss: 2.0039
Epoch 3/10
172/172 [==============================] - 31s 180ms/step - loss: 1.7375
Epoch 4/10
172/172 [==============================] - 31s 179ms/step - loss: 1.5772
Epoch 5/10
172/172 [==============================] - 31s 179ms/step - loss: 1.4772
Epoch 6/10
172/172 [==============================] - 31s 180ms/step - loss: 1.4087
Epoch 7/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3556
Epoch 8/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3095
Epoch 9/10
172/172 [==============================] - 31s 179ms/step - loss: 1.2671
Epoch 10/10
172/172 [==============================] - 31s 180ms/step - loss: 1.2276
###Markdown
Generate text Restore the latest checkpoint To keep this prediction step simple, use a batch size of 1. Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
###Code
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (1, None, 256) 16640
_________________________________________________________________
unified_lstm_1 (UnifiedLSTM) (1, None, 1024) 5246976
_________________________________________________________________
dense_1 (Dense) (1, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
The prediction loop The following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it builds up context from the previously predicted characters. Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
###Code
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the character returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
###Output
ROMEO: now to have weth hearten sonce,
No more than the thing stand perfect your self,
Love way come. Up, this is d so do in friends:
If I fear e this, I poisple
My gracious lusty, born once for readyus disguised:
But that a pry; do it sure, thou wert love his cause;
My mind is come too!
POMPEY:
Serve my master's him: he hath extreme over his hand in the
where they shall not hear they right for me.
PROSSPOLUCETER:
I pray you, mistress, I shall be construted
With one that you shall that we know it, in this gentleasing earls of daiberkers now
he is to look upon this face, which leadens from his master as
you should not put what you perciploce backzat of cast,
Nor fear it sometime but for a pit
a world of Hantua?
First Gentleman:
That we can fall of bastards my sperial;
O, she Go seeming that which I have
what enby oar own best injuring them,
Or thom I do now, I, in heart is nothing gone,
Leatt the bark which was done born.
BRUTUS:
Both Margaret, he is sword of the house person. If born,
###Markdown
The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`). You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions. Advanced: Customized Training The above training procedure is simple, but does not give you much control. So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output. We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager). The procedure works as follows:* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.* Finally, take a step downwards by using the optimizer's `apply_gradients` method.
###Code
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(target, predictions))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# Training step
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
###Output
Epoch 1 Batch 0 Loss 8.132774353027344
Epoch 1 Batch 100 Loss 3.5028388500213623
Epoch 1 Loss 3.7314
Time taken for 1 epoch 31.78906798362732 sec
Epoch 2 Batch 0 Loss 3.766866445541382
Epoch 2 Batch 100 Loss 3.985184669494629
Epoch 2 Loss 3.9137
Time taken for 1 epoch 29.776747703552246 sec
Epoch 3 Batch 0 Loss 4.023300647735596
Epoch 3 Batch 100 Loss 3.921215534210205
Epoch 3 Loss 3.8976
Time taken for 1 epoch 30.094752311706543 sec
Epoch 4 Batch 0 Loss 3.916696071624756
Epoch 4 Batch 100 Loss 3.900864362716675
Epoch 4 Loss 3.9048
Time taken for 1 epoch 30.09034276008606 sec
Epoch 5 Batch 0 Loss 3.9154434204101562
Epoch 5 Batch 100 Loss 3.9020049571990967
Epoch 5 Loss 3.9725
Time taken for 1 epoch 30.17358922958374 sec
Epoch 6 Batch 0 Loss 3.9781394004821777
Epoch 6 Batch 100 Loss 3.920198917388916
Epoch 6 Loss 3.9269
Time taken for 1 epoch 30.19426202774048 sec
Epoch 7 Batch 0 Loss 3.9400787353515625
Epoch 7 Batch 100 Loss 3.8473968505859375
Epoch 7 Loss 3.8438
Time taken for 1 epoch 30.107476234436035 sec
Epoch 8 Batch 0 Loss 3.852555513381958
Epoch 8 Batch 100 Loss 3.8410544395446777
Epoch 8 Loss 3.8218
Time taken for 1 epoch 30.084821462631226 sec
Epoch 9 Batch 0 Loss 3.843691349029541
Epoch 9 Batch 100 Loss 3.829458236694336
Epoch 9 Loss 3.8420
Time taken for 1 epoch 30.13308310508728 sec
Epoch 10 Batch 0 Loss 3.8553621768951416
Epoch 10 Batch 100 Loss 3.7812960147857666
Epoch 10 Loss 3.7726
Time taken for 1 epoch 30.14617133140564 sec
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text generation with an RNNView on TensorFlow.org Run in Google Colab View source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills mWhile some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries
###Code
from __future__ import absolute_import, division, print_function
!pip install tf-nightly-gpu-2.0-preview
import tensorflow as tf
import numpy as np
import os
import time
###Output
_____no_output_____
###Markdown
Download the Shakespeare dataset Change the following line to run this code on your own data.
###Code
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
_____no_output_____
###Markdown
Read the data First, look at the text:
###Code
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
###Output
_____no_output_____
###Markdown
Process the text Vectorize the text Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
###Code
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
###Output
_____no_output_____
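###Markdown
A quick round trip through the two lookup tables (this assumes the `char2idx` and `idx2char` objects created in the cell above):
###Code
sample = "First"
encoded = [char2idx[c] for c in sample]
decoded = ''.join(idx2char[encoded])
print(encoded, '->', repr(decoded))
###Output
_____no_output_____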
###Markdown
Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to `len(unique)`.
###Code
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
###Output
_____no_output_____
###Markdown
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output: the following character at each time step. Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targets Next, divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello". To do this, first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
###Code
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
###Output
_____no_output_____
###Markdown
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
###Code
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
###Output
_____no_output_____
###Markdown
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
###Code
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
###Output
_____no_output_____
###Markdown
Print the first example's input and target values:
###Code
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
###Output
_____no_output_____
###Markdown
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing but the `RNN` considers the previous step context in addition to the current input character.
###Code
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
###Output
_____no_output_____
###Markdown
Create training batches We used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
###Code
# Batch size
BATCH_SIZE = 64
steps_per_epoch = examples_per_epoch//BATCH_SIZE
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.repeat().shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
###Output
_____no_output_____
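###Markdown
The buffered shuffle is easier to see on a tiny dataset. A minimal sketch (small range, small buffer, nothing to do with the Shakespeare data) of the same shuffle-then-batch chain:
###Code
toy_ds = tf.data.Dataset.range(8).shuffle(buffer_size=4).batch(4, drop_remainder=True)
for batch in toy_ds:
    print(batch.numpy())  # two batches of 4 elements, in a randomized order
###Output
_____no_output_____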
###Markdown
Build The Model Use `tf.keras.Sequential` to define the model. For this simple example, three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the numbers of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.LSTM`: A type of RNN with size `units=rnn_units` (You could also use a GRU layer here.)* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
###Code
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
###Output
_____no_output_____
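###Markdown
A quick look at what the `Embedding` layer does on its own (a toy layer, separate from the model built above): each integer index is mapped to a dense vector.
###Code
toy_embedding = tf.keras.layers.Embedding(input_dim=65, output_dim=4)
# Three character indices in, three 4-dimensional vectors out
print(toy_embedding(tf.constant([[18, 47, 56]])).shape)  # (1, 3, 4)
###Output
_____no_output_____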
###Markdown
For each character the model looks up the embedding, runs the LSTM one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character: Try the model Now run the model to see that it behaves as expected. First check the shape of the output:
###Code
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
_____no_output_____
###Markdown
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
To get actual predictions from the model, we need to sample from the output distribution to obtain character indices. This distribution is defined by the logits over the character vocabulary. Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop. Try it for the first example in the batch:
###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
###Output
_____no_output_____
###Markdown
This gives us, at each timestep, a prediction of the next character index:
###Code
sampled_indices
###Output
_____no_output_____
###Markdown
Decode these to see the text predicted by this untrained model:
###Code
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
###Output
_____no_output_____
###Markdown
Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state and the input at this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions. Because our model returns logits, we need to set the `from_logits` flag.
###Code
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
###Output
_____no_output_____
###Markdown
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code
model.compile(optimizer='adam', loss=loss)
###Output
_____no_output_____
###Markdown
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
###Code
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
###Output
_____no_output_____
###Markdown
Execute the training To keep training time reasonable, use 3 epochs to train the model. In Colab, set the runtime to GPU for faster training.
###Code
EPOCHS=3
history = model.fit(dataset, epochs=EPOCHS, steps_per_epoch=steps_per_epoch,
callbacks=[checkpoint_callback])
###Output
_____no_output_____
###Markdown
Generate text Restore the latest checkpoint To keep this prediction step simple, use a batch size of 1. Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
###Code
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
###Output
_____no_output_____
###Markdown
The prediction loop The following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it builds up context from the previously predicted characters. Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
###Code
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the character returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
###Output
_____no_output_____
###Markdown
The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`). You can also experiment with a different start string, or try adding another RNN layer to improve the model's accuracy, or adjusting the temperature parameter to generate more or less random predictions. Advanced: Customized Training The above training procedure is simple, but does not give you much control. So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output. We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager). The procedure works as follows:* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.* Finally, take a step downwards by using the optimizer's `apply_gradients` method.
###Code
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(target, predictions))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# Training step
EPOCHS = 1
for epoch in range(EPOCHS):
start = time.time()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
###Output
_____no_output_____
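###Markdown
The `tape.gradient` / `apply_gradients` pattern used above is easier to see on a toy problem. A minimal sketch with a single scalar variable and a made-up loss, showing one optimizer step:
###Code
w = tf.Variable(3.0)
toy_optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
with tf.GradientTape() as tape:
    toy_loss = (w - 1.0) ** 2          # minimized at w == 1
grads = tape.gradient(toy_loss, [w])   # d(loss)/dw = 2 * (w - 1) = 4.0 at w == 3
toy_optimizer.apply_gradients(zip(grads, [w]))
print(w.numpy())                       # 3.0 - 0.1 * 4.0 = 2.6, one step toward the minimum
###Output
_____no_output_____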
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text generation with an RNNView on TensorFlow.org Run in Google Colab View source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills mWhile some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries
###Code
from __future__ import absolute_import, division, print_function
!pip install tf-nightly-2.0-preview
import tensorflow as tf
import numpy as np
import os
import time
###Output
_____no_output_____
###Markdown
Download the Shakespeare dataset Change the following line to run this code on your own data.
###Code
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
_____no_output_____
###Markdown
Read the data First, look at the text:
###Code
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
###Output
_____no_output_____
###Markdown
Process the text Vectorize the text Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
###Code
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
###Output
_____no_output_____
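###Markdown
As a quick check of the two lookup tables, we can round-trip a short string through them (a minimal sketch; "Hello" is just a hypothetical test string, not part of the dataset):
###Code
# Encode with char2idx, then decode back with idx2char.
sample = "Hello"
encoded = [char2idx[c] for c in sample if c in char2idx]
decoded = ''.join(idx2char[i] for i in encoded)
print(encoded)
print(decoded)
###Output
_____no_output_____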
###Markdown
Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to `len(vocab) - 1`.
###Code
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
###Output
_____no_output_____
###Markdown
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
###Code
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
###Output
_____no_output_____
###Markdown
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
###Code
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
###Output
_____no_output_____
###Markdown
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
###Code
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
###Output
_____no_output_____
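###Markdown
To see what `split_input_target` does, here is a minimal sketch that applies it to the "Hello" example from the text above (assuming all of its characters occur in the vocabulary):
###Code
# Encode "Hello", split it, and decode the two halves back to text.
demo_chunk = tf.constant([char2idx[c] for c in "Hello" if c in char2idx])
demo_input, demo_target = split_input_target(demo_chunk)
print(''.join(idx2char[demo_input.numpy()]), '->', ''.join(idx2char[demo_target.numpy()]))
# Expected: 'Hell' -> 'ello'
###Output
_____no_output_____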
###Markdown
Print the first examples input and target values:
###Code
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
###Output
_____no_output_____
###Markdown
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing, but the `RNN` considers the previous step's context in addition to the current input character.
###Code
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
###Output
_____no_output_____
###Markdown
Create training batchesWe used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
###Code
# Batch size
BATCH_SIZE = 64
steps_per_epoch = examples_per_epoch//BATCH_SIZE
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.repeat().shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
###Output
_____no_output_____
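###Markdown
As a small sanity check (a sketch, not part of the original pipeline), each element of the batched dataset should now be a pair of `(batch_size, seq_length)` integer tensors:
###Code
# Inspect the shapes of one shuffled batch.
for input_batch, target_batch in dataset.take(1):
    print(input_batch.shape, target_batch.shape)  # expected: (64, 100) (64, 100)
###Output
_____no_output_____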
###Markdown
Build The Model Use `tf.keras.Sequential` to define the model. For this simple example three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the number of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.LSTM`: A type of RNN with size `units=rnn_units` (you can also use a GRU layer here);* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
###Code
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
###Output
_____no_output_____
###Markdown
For each character the model looks up the embedding, runs the LSTM one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character: Try the model Now run the model to see that it behaves as expected. First check the shape of the output:
###Code
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
_____no_output_____
###Markdown
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
To get actual predictions from the model we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary. Note: it is important to _sample_ from this distribution, as taking the _argmax_ of the distribution can easily get the model stuck in a loop. Try it for the first example in the batch:
###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
###Output
_____no_output_____
###Markdown
This gives us, at each timestep, a prediction of the next character index:
###Code
sampled_indices
###Output
_____no_output_____
###Markdown
Decode these to see the text predicted by this untrained model:
###Code
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
###Output
_____no_output_____
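###Markdown
For comparison with the note above about _argmax_ decoding, here is a quick sketch of greedy decoding of the same untrained logits; the repetitive output it tends to produce is why we sample instead:
###Code
# Greedy (argmax) decoding of the first example's logits.
greedy_indices = tf.argmax(example_batch_predictions[0], axis=-1).numpy()
print("Greedy predictions: \n", repr(''.join(idx2char[greedy_indices])))
###Output
_____no_output_____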
###Markdown
Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state and the input at this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions. Because our model returns logits, we need to set the `from_logits` flag.
###Code
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
###Output
_____no_output_____
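###Markdown
A useful sanity check (sketch): an untrained model should predict roughly uniformly over the vocabulary, so the initial loss should be close to `ln(vocab_size)` and its exponential close to `vocab_size`:
###Code
print("ln(vocab_size) =", np.log(vocab_size))
print("exp(mean loss) =", np.exp(example_batch_loss.numpy().mean()))
###Output
_____no_output_____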
###Markdown
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code
model.compile(optimizer='adam', loss=loss)
###Output
_____no_output_____
###Markdown
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
###Code
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
###Output
_____no_output_____
###Markdown
Execute the training To keep training time reasonable, use 3 epochs to train the model. In Colab, set the runtime to GPU for faster training.
###Code
EPOCHS=3
history = model.fit(dataset, epochs=EPOCHS, steps_per_epoch=steps_per_epoch,
callbacks=[checkpoint_callback])
###Output
_____no_output_____
###Markdown
Generate text Restore the latest checkpoint To keep this prediction step simple, use a batch size of 1.Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
###Code
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
###Output
_____no_output_____
###Markdown
The prediction loop The following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as the next input to the model.* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it gains more context from the previously predicted characters. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs, and imitate a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
###Code
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
    # use a categorical distribution to sample the next character id from the logits
    predictions = predictions / temperature
    predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
###Output
_____no_output_____
###Markdown
The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`). You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. Advanced: Customized Training The above training procedure is simple, but does not give you much control. So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output. We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager). The procedure works as follows:* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.* Finally, take a step downwards using the optimizer's `apply_gradients` method.
###Code
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(target, predictions))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# Training step
EPOCHS = 1
for epoch in range(EPOCHS):
start = time.time()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text generation with an RNNView on TensorFlow.org Run in Google Colab View source on GitHub This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware acclerator > GPU*. If running locally make sure TensorFlow version >= 1.11.This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the string "Q":QUEENE:I had thought thou hadst a Roman; for the oracle,Thus by All bids the man against the word,Which are so weak of care, by old care done;Your children were in your holy love,And the precipitation through the bleeding throne.BISHOP OF ELY:Marry, and will, my lord, to weep in such a one were prettiest;Yet now I was adopted heirOf the world's lamentable day,To watch the next way with his father with his face?ESCALUS:The cause why then we are all resolved more sons.VOLUMNIA:O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,And love and pale as any will to that word.QUEEN ELIZABETH:But how long have I heard the soul for this world,And show his hands of life be proved to stand.PETRUCHIO:I say he look'd on, if I must be contentTo stay him from the fatal of our country's bliss.His lordship pluck'd from this sentence then for prey,And then let us twain, being the moon,were she such a case as fills mWhile some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure. Setup Import TensorFlow and other libraries
###Code
from __future__ import absolute_import, division, print_function
!pip install tf-nightly-gpu-2.0-preview
import tensorflow as tf
import numpy as np
import os
import time
###Output
Collecting tf-nightly-gpu-2.0-preview
Successfully installed google-pasta-0.1.4 tb-nightly-1.14.0a20190303 tensorflow-estimator-2.0-preview-1.14.0.dev2019030300 tf-nightly-gpu-2.0-preview-2.0.0.dev20190303
###Markdown
Download the Shakespeare datasetChange the following line to run this code on your own data.
###Code
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1122304/1115394 [==============================] - 0s 0us/step
###Markdown
Read the dataFirst, look in the text:
###Code
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
###Output
65 unique characters
###Markdown
Process the text Vectorize the textBefore training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.
###Code
# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)
text_as_int = np.array([char2idx[c] for c in text])
###Output
_____no_output_____
###Markdown
Now we have an integer representation for each character. Notice that we mapped each character to an index from 0 to `len(vocab) - 1`.
###Code
print('{')
for char,_ in zip(char2idx, range(20)):
print(' {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print(' ...\n}')
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
###Output
'First Citizen' ---- characters mapped to int ---- > [18 47 56 57 58 1 15 47 58 47 64 43 52]
###Markdown
The prediction task Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character? Create training examples and targetsNext divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text. For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right. So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
###Code
# The maximum length sentence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length
# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)
for i in char_dataset.take(5):
print(idx2char[i.numpy()])
###Output
F
i
r
s
t
###Markdown
The `batch` method lets us easily convert these individual characters to sequences of the desired size.
###Code
sequences = char_dataset.batch(seq_length+1, drop_remainder=True)
for item in sequences.take(5):
print(repr(''.join(idx2char[item.numpy()])))
###Output
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki"
"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d"
'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'
###Markdown
For each sequence, duplicate and shift it to form the input and target text by using the `map` method to apply a simple function to each batch:
###Code
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
dataset = sequences.map(split_input_target)
###Output
_____no_output_____
###Markdown
Print the first examples input and target values:
###Code
for input_example, target_example in dataset.take(1):
print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
###Output
Input data: 'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target data: 'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
###Markdown
Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next timestep, it does the same thing, but the `RNN` considers the previous step's context in addition to the current input character.
###Code
for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
print("Step {:4d}".format(i))
print(" input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
print(" expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
###Output
Step 0
input: 18 ('F')
expected output: 47 ('i')
Step 1
input: 47 ('i')
expected output: 56 ('r')
Step 2
input: 56 ('r')
expected output: 57 ('s')
Step 3
input: 57 ('s')
expected output: 58 ('t')
Step 4
input: 58 ('t')
expected output: 1 (' ')
###Markdown
Create training batchesWe used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.
###Code
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
dataset
###Output
_____no_output_____
###Markdown
Build The Model Use `tf.keras.Sequential` to define the model. For this simple example three layers are used to define our model:* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map the number of each character to a vector with `embedding_dim` dimensions;* `tf.keras.layers.LSTM`: A type of RNN with size `units=rnn_units` (you can also use a GRU layer here);* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs.
###Code
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=True,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0304 03:48:46.706135 140067035297664 tf_logging.py:161] <tensorflow.python.keras.layers.recurrent.UnifiedLSTM object at 0x7f637273ccf8>: Note that this layer is not optimized for performance. Please use tf.keras.layers.CuDNNLSTM for better performance on GPU.
###Markdown
For each character the model looks up the embedding, runs the LSTM one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character: Try the model Now run the model to see that it behaves as expected. First check the shape of the output:
###Code
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
###Output
(64, 100, 65) # (batch_size, sequence_length, vocab_size)
###Markdown
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (64, None, 256) 16640
_________________________________________________________________
unified_lstm (UnifiedLSTM) (64, None, 1024) 5246976
_________________________________________________________________
dense (Dense) (64, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
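###Markdown
The parameter counts above can be verified by hand (a quick sketch of the arithmetic, using the standard formulas for these layer types):
###Code
emb_params = vocab_size * embedding_dim                         # 65 * 256 = 16,640
lstm_params = 4 * (embedding_dim + rnn_units + 1) * rnn_units   # 4 * 1281 * 1024 = 5,246,976
dense_params = (rnn_units + 1) * vocab_size                     # 1025 * 65 = 66,625
print(emb_params, lstm_params, dense_params, emb_params + lstm_params + dense_params)
###Output
_____no_output_____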
###Markdown
To get actual predictions from the model we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary. Note: it is important to _sample_ from this distribution, as taking the _argmax_ of the distribution can easily get the model stuck in a loop. Try it for the first example in the batch:
###Code
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
###Output
_____no_output_____
###Markdown
This gives us, at each timestep, a prediction of the next character index:
###Code
sampled_indices
###Output
_____no_output_____
###Markdown
Decode these to see the text predicted by this untrained model:
###Code
print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
###Output
Input:
'to it far before thy time?\nWarwick is chancellor and the lord of Calais;\nStern Falconbridge commands'
Next Char Predictions:
"I!tbdTa-FZRtKtY:KDnBe.TkxcoZEXLucZ&OUupVB rqbY&Tfxu :HQ!jYN:Jt'N3KNpehXxs.onKsdv:e;g?PhhCm3r-om! :t"
###Markdown
Train the model At this point the problem can be treated as a standard classification problem. Given the previous RNN state and the input at this time step, predict the class of the next character. Attach an optimizer, and a loss function The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions. Because our model returns logits, we need to set the `from_logits` flag.
###Code
def loss(labels, logits):
return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss: ", example_batch_loss.numpy().mean())
###Output
Prediction shape: (64, 100, 65) # (batch_size, sequence_length, vocab_size)
scalar_loss: 4.174188
###Markdown
Configure the training procedure using the `tf.keras.Model.compile` method. We'll use `tf.keras.optimizers.Adam` with default arguments and the loss function.
###Code
model.compile(optimizer='adam', loss=loss)
###Output
_____no_output_____
###Markdown
Configure checkpoints Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
###Code
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
###Output
_____no_output_____
###Markdown
Execute the training To keep training time reasonable, this run uses 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
###Code
EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
###Output
Epoch 1/10
172/172 [==============================] - 31s 183ms/step - loss: 2.7052
Epoch 2/10
172/172 [==============================] - 31s 180ms/step - loss: 2.0039
Epoch 3/10
172/172 [==============================] - 31s 180ms/step - loss: 1.7375
Epoch 4/10
172/172 [==============================] - 31s 179ms/step - loss: 1.5772
Epoch 5/10
172/172 [==============================] - 31s 179ms/step - loss: 1.4772
Epoch 6/10
172/172 [==============================] - 31s 180ms/step - loss: 1.4087
Epoch 7/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3556
Epoch 8/10
172/172 [==============================] - 31s 179ms/step - loss: 1.3095
Epoch 9/10
172/172 [==============================] - 31s 179ms/step - loss: 1.2671
Epoch 10/10
172/172 [==============================] - 31s 180ms/step - loss: 1.2276
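###Markdown
The per-epoch loss recorded by `model.fit` can be plotted from the `history` object (a minimal sketch, assuming matplotlib is available in the environment):
###Code
import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.title('Training loss')
plt.show()
###Output
_____no_output_____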
###Markdown
Generate text Restore the latest checkpoint To keep this prediction step simple, use a batch size of 1.Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different `batch_size`, we need to rebuild the model and restore the weights from the checkpoint.
###Code
tf.train.latest_checkpoint(checkpoint_dir)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.build(tf.TensorShape([1, None]))
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (1, None, 256) 16640
_________________________________________________________________
unified_lstm_1 (UnifiedLSTM) (1, None, 1024) 5246976
_________________________________________________________________
dense_1 (Dense) (1, None, 65) 66625
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________
###Markdown
The prediction loop The following code block generates the text:* It starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.* Get the prediction distribution of the next character using the start string and the RNN state.* Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as the next input to the model.* The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it gains more context from the previously predicted characters. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs, and imitate a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
###Code
def generate_text(model, start_string):
# Evaluation step (generating text using the learned model)
# Number of characters to generate
num_generate = 1000
# Converting our start string to numbers (vectorizing)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# Empty string to store our results
text_generated = []
# Low temperatures results in more predictable text.
# Higher temperatures results in more surprising text.
# Experiment to find the best setting.
temperature = 1.0
# Here batch size == 1
model.reset_states()
for i in range(num_generate):
predictions = model(input_eval)
# remove the batch dimension
predictions = tf.squeeze(predictions, 0)
# using a categorical distribution to predict the word returned by the model
predictions = predictions / temperature
predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated.append(idx2char[predicted_id])
return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
###Output
ROMEO: now to have weth hearten sonce,
No more than the thing stand perfect your self,
Love way come. Up, this is d so do in friends:
If I fear e this, I poisple
My gracious lusty, born once for readyus disguised:
But that a pry; do it sure, thou wert love his cause;
My mind is come too!
POMPEY:
Serve my master's him: he hath extreme over his hand in the
where they shall not hear they right for me.
PROSSPOLUCETER:
I pray you, mistress, I shall be construted
With one that you shall that we know it, in this gentleasing earls of daiberkers now
he is to look upon this face, which leadens from his master as
you should not put what you perciploce backzat of cast,
Nor fear it sometime but for a pit
a world of Hantua?
First Gentleman:
That we can fall of bastards my sperial;
O, she Go seeming that which I have
what enby oar own best injuring them,
Or thom I do now, I, in heart is nothing gone,
Leatt the bark which was done born.
BRUTUS:
Both Margaret, he is sword of the house person. If born,
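###Markdown
The temperature setting mentioned in the comments above can be explored with a small variant of the loop (a sketch; `generate_with_temperature` is not part of the original notebook). Lower values make the output more conservative, higher values make it more surprising:
###Code
def generate_with_temperature(model, start_string, temperature=0.5, num_generate=300):
    # Same loop as generate_text, but with temperature exposed as an argument.
    input_eval = tf.expand_dims([char2idx[s] for s in start_string], 0)
    text_generated = []
    model.reset_states()
    for _ in range(num_generate):
        predictions = tf.squeeze(model(input_eval), 0) / temperature
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
        input_eval = tf.expand_dims([predicted_id], 0)
        text_generated.append(idx2char[predicted_id])
    return start_string + ''.join(text_generated)
print(generate_with_temperature(model, u"ROMEO: ", temperature=0.5))
###Output
_____no_output_____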
###Markdown
The easiest thing you can do to improve the results is to train for longer (try `EPOCHS=30`). You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions. Advanced: Customized Training The above training procedure is simple, but does not give you much control. So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output. We will use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager). The procedure works as follows:* First, initialize the RNN state. We do this by calling the `tf.keras.Model.reset_states` method.* Next, iterate over the dataset (batch by batch) and calculate the *predictions* associated with each.* Open a `tf.GradientTape`, and calculate the predictions and loss in that context.* Calculate the gradients of the loss with respect to the model variables using the `tf.GradientTape.gradient` method.* Finally, take a step downwards using the optimizer's `apply_gradients` method.
###Code
model = build_model(
vocab_size = len(vocab),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
with tf.GradientTape() as tape:
predictions = model(inp)
loss = tf.reduce_mean(
tf.keras.losses.sparse_categorical_crossentropy(target, predictions))
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
return loss
# Training step
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
for (batch_n, (inp, target)) in enumerate(dataset):
loss = train_step(inp, target)
if batch_n % 100 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch+1, batch_n, loss))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
model.save_weights(checkpoint_prefix.format(epoch=epoch))
###Output
Epoch 1 Batch 0 Loss 8.132774353027344
Epoch 1 Batch 100 Loss 3.5028388500213623
Epoch 1 Loss 3.7314
Time taken for 1 epoch 31.78906798362732 sec
Epoch 2 Batch 0 Loss 3.766866445541382
Epoch 2 Batch 100 Loss 3.985184669494629
Epoch 2 Loss 3.9137
Time taken for 1 epoch 29.776747703552246 sec
Epoch 3 Batch 0 Loss 4.023300647735596
Epoch 3 Batch 100 Loss 3.921215534210205
Epoch 3 Loss 3.8976
Time taken for 1 epoch 30.094752311706543 sec
Epoch 4 Batch 0 Loss 3.916696071624756
Epoch 4 Batch 100 Loss 3.900864362716675
Epoch 4 Loss 3.9048
Time taken for 1 epoch 30.09034276008606 sec
Epoch 5 Batch 0 Loss 3.9154434204101562
Epoch 5 Batch 100 Loss 3.9020049571990967
Epoch 5 Loss 3.9725
Time taken for 1 epoch 30.17358922958374 sec
Epoch 6 Batch 0 Loss 3.9781394004821777
Epoch 6 Batch 100 Loss 3.920198917388916
Epoch 6 Loss 3.9269
Time taken for 1 epoch 30.19426202774048 sec
Epoch 7 Batch 0 Loss 3.9400787353515625
Epoch 7 Batch 100 Loss 3.8473968505859375
Epoch 7 Loss 3.8438
Time taken for 1 epoch 30.107476234436035 sec
Epoch 8 Batch 0 Loss 3.852555513381958
Epoch 8 Batch 100 Loss 3.8410544395446777
Epoch 8 Loss 3.8218
Time taken for 1 epoch 30.084821462631226 sec
Epoch 9 Batch 0 Loss 3.843691349029541
Epoch 9 Batch 100 Loss 3.829458236694336
Epoch 9 Loss 3.8420
Time taken for 1 epoch 30.13308310508728 sec
Epoch 10 Batch 0 Loss 3.8553621768951416
Epoch 10 Batch 100 Loss 3.7812960147857666
Epoch 10 Loss 3.7726
Time taken for 1 epoch 30.14617133140564 sec
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/01-exploratory_data_analysis-checkpoint.ipynb | ###Markdown
Note that 30% of the genre features are less than 1e-5, and the median is 7e-5. Let's clean the data further
###Code
df_out2 = df_out.drop(colnames_small,axis=1)
print(len(df_out2.index),len(df_out2.columns))
sns.violinplot(x=df_out2["genre_productivity_percent"])
df_out2.to_csv('../data/books_25_pages_clean0.csv',index=False)
###Output
_____no_output_____ |
Covid19Prediction.ipynb | ###Markdown
Here I convert the date from string to date format, then use a lambda function to map each date to the number of days since January 22, 2020, and add a `days` column
###Code
wc['days']= wc['date'].map(lambda x : (datetime.strptime(x, '%m/%d/%Y') - datetime.strptime("1/22/2020", '%m/%d/%Y')).days )
wc[['date','days','confirmed']]
###Output
_____no_output_____
###Markdown
The Gompertz curve or Gompertz function, is a type of mathematical model for a time series and is named after Benjamin Gompertz (1779-1865). It is a sigmoid function which describes growth as being slowest at the start and end of a given time period. The right-hand or future value asymptote of the function is approached much more gradually by the curve than the left-hand or lower valued asymptote. This is in contrast to the simple logistic function in which both asymptotes are approached by the curve symmetrically. It is a special case of the generalised logistic function. The function was originally designed to describe human mortality, but since has been modified to be applied in biology, with regard to detailing populations.
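In the notation of the code below, the fitted curve is $Q(t) = a\,e^{-e^{-c\,(t - t_0)}}$, where $a$ is the upper asymptote (the projected total number of confirmed cases), $c$ controls the growth rate, and $t_0$ is the inflection time at which growth is fastest.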
###Code
def gompertz(t, a, c, t_0):
    # Gompertz growth curve. curve_fit requires the independent variable (t, days)
    # as the first argument and the fitted parameters (a, c, t_0) after it.
    Q = a * np.exp(-np.exp(-c * (t - t_0)))
    return Q
x = list(wc['days'])
y = list(wc['confirmed'])
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.85, test_size=0.15, shuffle=False)
# extend the horizon so the fitted curve can be projected out to day 140
x_test_added = x_test + list(range((max(x_test)+1), 140))
# fit a (asymptote), c (growth rate) and t_0 (inflection day) within the given bounds
popt, pcov = curve_fit(gompertz, x_train, y_train, method='trf', bounds=([5, 0, 0],[14*max(y_train), 0.15, 160]))
a, estimated_c, estimated_t_0 = popt
y_pred = gompertz(np.array(x_train + x_test_added), a, estimated_c, estimated_t_0)
y_pred
plt.plot(x_train+x_test_added, y_pred, linewidth=2, label='predicted')
plt.plot(x, y, linewidth=2, color='g', linestyle='dotted', label='confirmed')
plt.title('prediction vs confirmed data on covid-19 cases in Nepal\n')
plt.xlabel('days since January 22 2020')
plt.ylabel('confirmed positive cases')
plt.legend(loc='upper left')
###Output
_____no_output_____ |
docs/_downloads/13b143c2380f4768d9432d808ad50799/char_rnn_classification_tutorial.ipynb | ###Markdown
기초부터 시작하는 NLP: 문자-단위 RNN으로 이름 분류하기**********************************************************************************Author**: `Sean Robertson `_ **번역**: `황성수 `_단어를 분류하기 위해 기초적인 문자-단위 RNN을 구축하고 학습 할 예정입니다.이 튜토리얼에서는 (이후 2개 튜토리얼과 함께) NLP 모델링을 위한 데이터 전처리를`torchtext` 의 편리한 많은 기능들을 사용하지 않고 어떻게 하는지 "기초부터(from scratch)"보여주기 떄문에 NLP 모델링을 위한 전처리가 저수준에서 어떻게 진행되는지를 알 수 있습니다.문자-단위 RNN은 단어를 문자의 연속으로 읽어 들여서 각 단계의 예측과"은닉 상태(Hidden State)" 출력하고, 다음 단계에 이전 은닉 상태를 전달합니다.단어가 속한 클래스로 출력이 되도록 최종 예측으로 선택합니다.구체적으로, 18개 언어로 된 수천 개의 성(姓)을 훈련시키고,철자에 따라 이름이 어떤 언어인지 예측합니다::: $ python predict.py Hinton (-0.47) Scottish (-1.52) English (-3.57) Irish $ python predict.py Schmidhuber (-0.19) German (-2.48) Czech (-2.68) Dutch**추천 자료:**Pytorch를 설치했고, Python을 알고, Tensor를 이해한다고 가정합니다:- https://pytorch.org/ 설치 안내- :doc:`/beginner/deep_learning_60min_blitz` PyTorch 시작하기- :doc:`/beginner/pytorch_with_examples` 넓고 깊은 통찰을 위한 자료- :doc:`/beginner/former_torchies_tutorial` 이전 Lua Torch 사용자를 위한 자료RNN과 작동 방식을 아는 것 또한 유용합니다:- `The Unreasonable Effectiveness of Recurrent Neural Networks `__ 실생활 예제를 보여 줍니다.- `Understanding LSTM Networks `__ LSTM에 관한 것이지만 RNN에 관해서도 유익합니다.데이터 준비==================.. NOTE:: `여기 `__ 에서 데이터를 다운 받고, 현재 디렉토리에 압축을 푸십시오.``data/names`` 디렉토리에는 "[Language].txt" 라는 18 개의 텍스트 파일이 있습니다.각 파일에는 한 줄에 하나의 이름이 포함되어 있으며 대부분 로마자로 되어 있습니다(그러나, 유니코드에서 ASCII로 변환해야 함).각 언어 별로 이름 목록 사전 ``{language: [names ...]}`` 을 만듭니다.일반 변수 "category" 와 "line" (우리의 경우 언어와 이름)은 이후의 확장성을 위해 사용됩니다... NOTE::역자 주: "line" 에 입력을 "category"에 클래스를 적용하여 다른 문제에도 활용 할 수 있습니다.여기서는 "line"에 이름(ex. Robert )를 입력으로 "category"에 클래스(ex. english)로 사용합니다.
###Code
from __future__ import unicode_literals, print_function, division
from io import open
import glob
import os
def findFiles(path): return glob.glob(path)
print(findFiles('data/names/*.txt'))
import unicodedata
import string
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# 유니코드 문자열을 ASCII로 변환, https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
and c in all_letters
)
print(unicodeToAscii('Ślusàrski'))
# 각 언어의 이름 목록인 category_lines 사전 생성
category_lines = {}
all_categories = []
# 파일을 읽고 줄 단위로 분리
def readLines(filename):
lines = open(filename, encoding='utf-8').read().strip().split('\n')
return [unicodeToAscii(line) for line in lines]
for filename in findFiles('data/names/*.txt'):
category = os.path.splitext(os.path.basename(filename))[0]
all_categories.append(category)
lines = readLines(filename)
category_lines[category] = lines
n_categories = len(all_categories)
###Output
_____no_output_____
###Markdown
이제 각 ``category`` (언어)를 ``line`` (이름)에 매핑하는 사전인``category_lines`` 를 만들었습니다. 나중에 참조 할 수 있도록``all_categories`` (언어 목록)와 ``n_categories`` 도 추적합니다.
###Code
print(category_lines['Italian'][:5])
###Output
_____no_output_____
###Markdown
이름을 Tensor로 변경--------------------------이제 모든 이름을 체계화 했으므로, 이를 활용하기 위해 Tensor로전환해야 합니다.하나의 문자를 표현하기 위해, 크기가 ```` 인"One-Hot 벡터" 를 사용합니다. One-Hot 벡터는 현재 문자의주소에만 1을 값으로 가지고 그외에 나머지는 0으로 채워진다.예시 ``"b" = `` .단어를 만들기 위해 One-Hot 벡터들을 2 차원 행렬```` 에 결합시킵니다.위에서 보이는 추가적인 1차원은 PyTorch에서 모든 것이 배치(batch)에 있다고 가정하기때문에 발생합니다. 여기서는 배치 크기 1을 사용하고 있습니다.
###Code
'''
.. NOTE::
역자 주: One-Hot 벡터는 언어를 다룰 때 자주 이용되며,
단어,글자 등을 벡터로 표현 할 때 단어,글자 사이의 상관 관계를 미리 알 수 없을 경우,
One-Hot으로 표현하여 서로 직교한다고 가정하고 학습을 시작합니다.
동일하게 상관 관계를 알 수 없는 다른 데이터의 경우에도 One-Hot 벡터를 활용 할 수 있습니다.
'''
import torch
# all_letters 로 문자의 주소 찾기, 예시 "a" = 0
def letterToIndex(letter):
return all_letters.find(letter)
# 검증을 위해서 한개의 문자를 <1 x n_letters> Tensor로 변환
def letterToTensor(letter):
tensor = torch.zeros(1, n_letters)
tensor[0][letterToIndex(letter)] = 1
return tensor
# 한 줄(이름)을 <line_length x 1 x n_letters>,
# 또는 One-Hot 문자 벡터의 Array로 변경
def lineToTensor(line):
tensor = torch.zeros(len(line), 1, n_letters)
for li, letter in enumerate(line):
tensor[li][0][letterToIndex(letter)] = 1
return tensor
print(letterToTensor('J'))
print(lineToTensor('Jones').size())
###Output
_____no_output_____
###Markdown
네트워크 생성====================Autograd 전에, Torch에서 RNN(recurrent neural network) 생성은여러 시간 단계 걸처서 계층의 매개변수를 복제하는 작업을 포함합니다.계층은 은닉 상태와 변화도(Gradient)를 가지며, 이제 이것들은 그래프 자체에서완전히 처리되는 됩니다. 이는 feed-forward 계층과같은 매우 "순수한" 방법으로 RNN을 구현할 수 있다는 것을 의미합니다.역자 주 : 여기서는 교육목적으로 nn.RNN 대신 직접 RNN을 사용합니다.이 RNN 모듈(대부분 `Torch 사용자를 위한 PyTorch 튜토리얼<https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.htmlexample-2-recurrent-net>`__ 에서 복사함)은 입력 및 은닉 상태로 작동하는 2개의 선형 계층이며,출력 다음에 LogSoftmax 계층이 있습니다... figure:: https://i.imgur.com/Z2xbySO.png :alt:
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
self.i2o = nn.Linear(input_size + hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden):
combined = torch.cat((input, hidden), 1)
hidden = self.i2h(combined)
output = self.i2o(combined)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size)
n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)
###Output
_____no_output_____
###Markdown
이 네트워크의 한 단계를 실행하려면 입력(현재 문자 Tensor)과이전의 은닉 상태 (처음에는 0으로 초기화)를 전달해야 합니다.출력(각 언어의 확률)과 다음 은닉 상태 (다음 단계를 위해 유지)를돌려 받습니다.
###Code
input = letterToTensor('A')
hidden =torch.zeros(1, n_hidden)
output, next_hidden = rnn(input, hidden)
###Output
_____no_output_____
###Markdown
효율성을 위해서 매 단계마다 새로운 Tensor를 만들고 싶지 않기 때문에``letterToTensor`` 대신 ``lineToTensor`` 를 잘라서 사용할것입니다. 이것은 Tensor의 사전 연산(pre-computing) 배치에 의해더욱 최적화 될 수 있습니다.
###Code
input = lineToTensor('Albert')
hidden = torch.zeros(1, n_hidden)
output, next_hidden = rnn(input[0], hidden)
print(output)
###Output
_____no_output_____
###Markdown
보시다시피 출력은 ```` Tensor이고, 모든 항목은해당 카테고리의 우도(likelihood) 입니다 (더 높은 것이 더 확률 높음). 학습========학습 준비----------------------학습으로 들어가기 전에 몇몇 도움되는 함수를 만들어야합니다.첫째는 우리가 알아낸 각 카테고리의 우도인 네트워크 출력을 해석하는 것 입니다.가장 큰 값의 주소를 알기 위해서 ``Tensor.topk`` 를 사용 할 수 있습니다.역자 주: 네트워크 출력(각 카테고리의 우도)으로가장 확률이 높은 카테고리 이름(언어)과 카테고리 번호 반환
###Code
def categoryFromOutput(output):
top_n, top_i = output.topk(1) # 텐서의 가장 큰 값 및 주소
category_i = top_i[0].item() # 텐서에서 정수 값으로 변경
return all_categories[category_i], category_i
print(categoryFromOutput(output))
###Output
_____no_output_____
###Markdown
학습 예시(하나의 이름과 그 언어)를 얻는 빠른 방법도 필요합니다.:
###Code
import random
def randomChoice(l):
return l[random.randint(0, len(l) - 1)]
def randomTrainingExample():
category = randomChoice(all_categories)
line = randomChoice(category_lines[category])
category_tensor = torch.tensor([all_categories.index(category)], dtype=torch.long)
line_tensor = lineToTensor(line)
return category, line, category_tensor, line_tensor
for i in range(10):
category, line, category_tensor, line_tensor = randomTrainingExample()
print('category =', category, '/ line =', line)
###Output
_____no_output_____
###Markdown
네트워크 학습--------------------이제 이 네트워크를 학습하는데 필요한 예시(학습 데이터)들을 보여주고 추정합니다.만일 틀렸다면 알려 줍니다.RNN의 마지막 계층이 ``nn.LogSoftmax`` 이므로 손실 함수로``nn.NLLLoss`` 가 적합합니다.
###Code
criterion = nn.NLLLoss()
###Output
_____no_output_____
###Markdown
각 학습 루프는 다음과 같습니다:- 입력과 목표 Tensor 생성- 0 로 초기화된 은닉 상태 생성- 각 문자를 읽기 - 다음 문자를 위한 은닉 상태 유지- 목표와 최종 출력 비교- 역전파- 출력과 손실 반환
###Code
learning_rate = 0.005 # 이것을 너무 높게 설정하면 발산할 수 있고, 너무 낮으면 학습이 되지 않을 수 있습니다.
def train(category_tensor, line_tensor):
hidden = rnn.initHidden()
rnn.zero_grad()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
loss = criterion(output, category_tensor)
loss.backward()
# 매개변수의 경사도에 학습률을 곱해서 그 매개변수의 값에 더합니다.
for p in rnn.parameters():
p.data.add_(p.grad.data, alpha=-learning_rate)
return output, loss.item()
###Output
_____no_output_____
###Markdown
이제 예시 데이터를 사용하여 실행해야합니다. ``train`` 함수가 출력과 손실을반환하기 때문에 추측을 화면에 출력하고 도식화를 위한 손실을 추적 할 수있습니다. 1000개의 예시 데이터가 있기 때문에 ``print_every`` 예제만출력하고, 손실의 평균을 얻습니다.
###Code
import time
import math
n_iters = 100000
print_every = 5000
plot_every = 1000
# 도식화를 위한 손실 추적
current_loss = 0
all_losses = []
def timeSince(since):
now = time.time()
s = now - since
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
start = time.time()
for iter in range(1, n_iters + 1):
category, line, category_tensor, line_tensor = randomTrainingExample()
output, loss = train(category_tensor, line_tensor)
current_loss += loss
# iter 숫자, 손실, 이름, 추측 화면 출력
if iter % print_every == 0:
guess, guess_i = categoryFromOutput(output)
correct = '✓' if guess == category else '✗ (%s)' % category
print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))
# 현재 평균 손실을 전체 손실 리스트에 추가
if iter % plot_every == 0:
all_losses.append(current_loss / plot_every)
current_loss = 0
###Output
_____no_output_____
###Markdown
결과 도식화--------------------``all_losses`` 를 이용한 손실 도식화는네트워크의 학습을 보여준다:
###Code
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
plt.figure()
plt.plot(all_losses)
###Output
_____no_output_____
###Markdown
결과 평가======================네트워크가 다른 카테고리에서 얼마나 잘 작동하는지 보기위해모든 실제 언어(행)가 네트워크에서 어떤 언어로 추측(열)되는지를 나타내는혼란 행열(confusion matrix)을 만듭니다. 혼란 행렬을 계산하기 위해``evaluate()`` 로 많은 수의 샘플을 네트워크에 실행합니다.``evaluate()`` 은 ``train ()`` 과 역전파를 빼면 동일합니다.
###Code
# 혼란 행렬에서 정확한 추측을 추적
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000
# 주어진 라인의 출력 반환
def evaluate(line_tensor):
hidden = rnn.initHidden()
for i in range(line_tensor.size()[0]):
output, hidden = rnn(line_tensor[i], hidden)
return output
# 예시들 중에 어떤 것이 정확하게 예측되었는지 기록
for i in range(n_confusion):
category, line, category_tensor, line_tensor = randomTrainingExample()
output = evaluate(line_tensor)
guess, guess_i = categoryFromOutput(output)
category_i = all_categories.index(category)
confusion[category_i][guess_i] += 1
# 모든 행을 합계로 나누어 정규화
for i in range(n_categories):
confusion[i] = confusion[i] / confusion[i].sum()
# 도식 설정
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)
# 축 설정
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
# 모든 tick에서 레이블 지정
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
# sphinx_gallery_thumbnail_number = 2
plt.show()
###Output
_____no_output_____
###Markdown
주축에서 벗어난 밝은 점을 선택하여 잘못 추측한 언어를 표시할 수 있습니다. 예를 들어 한국어는 중국어로 이탈리아어로 스페인어로.그리스어는 매우 잘되는 것으로 영어는 매우 나쁜것으로 보입니다.(다른 언어들과 중첩 때문으로 추정) 사용자 입력으로 실행---------------------
###Code
def predict(input_line, n_predictions=3):
print('\n> %s' % input_line)
with torch.no_grad():
output = evaluate(lineToTensor(input_line))
# Get top N categories
topv, topi = output.topk(n_predictions, 1, True)
predictions = []
for i in range(n_predictions):
value = topv[0][i].item()
category_index = topi[0][i].item()
print('(%.2f) %s' % (value, all_categories[category_index]))
predictions.append([value, all_categories[category_index]])
predict('Dovesky')
predict('Jackson')
predict('Satoshi')
###Output
_____no_output_____ |
Project_5/Project_5_Machine_Learning_POI_Classifier.ipynb | ###Markdown
Machine Learning: Identifying Fraud From Enron Email Zach Farmer Udacity Data Analyst Nano Degree Project 5 Table of Contents * [Project Overview](Project Overview) * [Question Overview](Questions) * [Necessary Resources](Necessary Resources) * [Project Goal](Project Goal) * [Question 1--Part 1](Question 1--Part 1) * [Analysis](Analysis) * [Question 1--Part 2](Question 1--Part 2) * [original poi_id.py](original poi_id.py) * [First Look](First Look) * [Preprocessing / Feature Selection / Feature Creation](Preprocessing / Feature Selection / Feature Creation) * [Question 2](Question 2) * [Question 3](Question 3) * [Algorithm Choice](Algorithm choice) * [Question 4](Question 4) * [Question 5](Question 5) * [Question 6](Question 6) * [POI Script](POI Script) * [Test Classifier](Test Classifier) * [References and Resources](References and Resources) **** Project Overview*"In 2000, Enron was one of the largest companies in the United States. By 2002, it had collapsed into bankruptcy due to widespread corporate fraud. In the resulting Federal investigation, a significant amount of typically confidential information entered into the public record, including tens of thousands of emails and detailed financial data for top executives. In this project, you will play detective, and put your new skills to use by building a person of interest identifier based on financial and email data made public as a result of the Enron scandal. To assist you in your detective work, we've combined this data with a hand-generated list of persons of interest in the fraud case, which means individuals who were indicted, reached a settlement or plea deal with the government, or testified in exchange for prosecution immunity."* > The above project overview is from Udacity's Project 5 Details. It can be found [here](https://www.udacity.com/course/viewer!/c-nd002/l-3174288624/m-3180398637 "https://www.udacity.com/course/viewer!/c-nd002/l-3174288624/m-3180398637") **** Question Overview *1. Summarize for us the goal of this project and how machine learning is useful in trying to accomplish it. As part of your answer, give some background on the dataset and how it can be used to answer the project question. Were there any outliers in the data when you got it, and how did you handle those?* *2. What features did you end up using in your POI identifier, and what selection process did you use to pick them? Did you have to do any scaling? Why or why not? Give the feature importances of the features that you use, and if you used an automated feature selection function like SelectKBest, please report the feature scores and reasons for your choice of parameter values.* *3. What algorithm did you end up using? What other one(s) did you try? How did model performance differ between algorithms?* *4. What does it mean to tune the parameters of an algorithm, and what can happen if you don’t do this well? How did you tune the parameters of your particular algorithm?* *5. What is validation, and what’s a classic mistake you can make if you do it wrong? How did you validate your analysis?* *6. Give at least 2 evaluation metrics and your average performance for each of them. Explain an interpretation of your metrics that says something human-understandable about your algorithms' performance.* > Note: These question are answered throughout the this notebook at points where my analysis and code address their concerns. 
However in order to provide a clean and single location for easier review of all of the questions and their respective answer will be saved to a separate markdown file contained in the local directory named "Question.md". ******* Necessary Resources In the local directory are the original skeleton files and emails by address as provided by Udacity for the project. All relevant files will be loaded into this notebook and the project conducted herein. The following project has been deconstructed into its constituent parts. The final `poi_id.py` file will be included near the end of the report in its entirety. In order to document my thinking and to incorporate the answers to the provided questions into the flow of the report I have reproduced the different stages in the `poi_id.py` file and reviewed them in their own sections. My methods of feature selection and classifier selection implemented a pipeline with a grid-search through the hyper parameters of my feature selection and classifier algorithm parameters. Therefore when exploring the different sections individually such as the feature selection understand that the parameters I chose to use were derived from the grid-search performed to find the best overall classifier. This project borrows heavily from Sklearn's machine learning modules, as a result you will need to install these modules in order to run the project on your machine. Constituent to the sklearn requirements you will need to implement numpy structures and therefore need the numpy module. I found that installing the anaconda python distribution satisfied all the requirements necessary to run everything in this notebook. You will need to ensure a Scikit-Learn distro. of at least version 0.17 or higher as some of the functions used are not present in older versions or are implemented in a significantly different manner. There is one script contain in this report that has been configured to run using `IPython.parallel`. This script, to run effectively, requires that you activate more clusters for your notebook to access. In order to practice using large distributed resource I used MIT's [starcluster](http://star.mit.edu/cluster/docs/latest/index.html "http://star.mit.edu/cluster/docs/latest/index.html") and moved the relevant script to an AWS EC2 cluster with many cores to distribute the embarrassingly parallel grid-search computations over. `poi_id.py` : *Starter code for the POI identifier* `final_project_dataset.pkl` : *The dataset for the project, more details below.* `tester.py` : *code to test results* `emails_by_address.tar.gz` : *this zipped directory contains many text files, each of which contains all the messages to or from a particular email address, It is for reference.* `poi_names.txt` : *hand coded person of interest ground truth values. A list of the real poi's and non poi's hand coded and provided by Udacity.* `enron61702insiderpay.pdf`: *financial data in pdf format, used for description of the financial features.* `tools/`: *directory containing supplemental modules written by Udacity instructors. Necessary for testing script and general feature extraction.* > *"As preprocessing to this project, we've combined the Enron email and financial data into a dictionary, where each key-value pair in the dictionary corresponds to one person. The dictionary key is the person's name, and the value is another dictionary, which contains the names of all the features and their values for that person. 
The features in the data fall into three major types, namely financial features, email features and POI labels."* **Final_Project_dataset:** details follow **financial features:** ['salary', 'deferral_payments', 'total_payments', 'loan_advances', 'bonus', 'restricted_stock_deferred', 'deferred_income', 'total_stock_value', 'expenses', 'exercised_stock_options', 'other', 'long_term_incentive', 'restricted_stock', 'director_fees'] (all units are in US dollars)**email features:** ['to_messages', 'email_address', 'from_poi_to_this_person', 'from_messages', 'from_this_person_to_poi', 'poi', 'shared_receipt_with_poi'] (units are generally number of emails messages; notable exception is ‘email_address’, which is a text string)_**POI label**: [‘poi’] (boolean, represented as integer) ***** Project Goal Question 1 - Part 1:***Summarize the goal of this project and how machine learning is useful in trying to accomplish it. As part of the answer, give some background on the dataset and how it can be used to answer the project question.*** ***Answer:*** The fundamental goal of this project is to determine whether or not an Enron employee is a person of interest (___POI___*) in the massive fraud perpetrated by the corporation. We will use a machine learning classifier taking as input a series of features and outputting a prediction as to whether a person is a POI or not. The series of input features will be made up of the massive court mandated release of Enron's data, from financial data to email messages. > *"This dataset was collected and prepared by the CALO Project (A Cognitive Assistant that Learns and Organizes). It contains data from about 150 users, mostly senior management of Enron, organized into folders. The corpus contains a total of about 0.5M messages. This data was originally made public, and posted to the web, by the Federal Energy Regulatory Commission during its investigation."* [Enron Email Dataset](https://www.cs.cmu.edu/~./enron/ "https://www.cs.cmu.edu/~./enron/") The financial data was collected for the employees in the email corpus and mapped to the relevant people by Udacity Instructors.By treating each of those features gathered from the above resources as vectors containing underlying information regarding possible fraud we can mathematically work towards constructing a model to predict behavior that is possibly fraudulent. By these means if an effective model can be found we should be able to simply plug in the inputs (features) of an employee and be told with hopefully high accuracy whether that employee was likely engaged in fraudulent behavior. Remember that in our case we are simply deciding whether or not an employee should be given extra scrutiny (i.e. person of interest). > *What is a person of interest: Indicted, Settled without admitting guilt, Testified in exchange for immunity* **** Analysis
###Code
##Modules used in the project
import sys # for path changes to supplemental code
import pickle # neccessary to save final python objects
import time # for measure length of algorithms
import numpy as np # for sklearn algorithms
import pandas as pd # for data analysis
import matplotlib as plt # for plotting
import re # grep
import os # path navigation
import pprint # used mainly for my analysis
sys.path.append("tools/")
from feature_format import featureFormat, targetFeatureSplit
from tester import dump_classifier_and_data
## Following modules are loaded where relevant
# from pandas.tools.plotting import scatter_matrix #data visualization
## Preprocessing and validation
# from sklearn.feature_selection import SelectPercentile, f_classif # feature selection
# from sklearn.decomposition import PCA # features selection and transformation
# from sklearn.preprocessing import MinMaxScaler # feature scaling
# from sklearn import cross_validation
# from sklearn.cross_validation import train_test_split
# from sklearn.cross_validation import StratifiedShuffleSplit
# from sklearn.pipeline import Pipeline, FeatureUnion # hyper-parameter optimization
# from sklearn.grid_search import GridSearchCV # hyper-parameter optimization
## Classifier models training
# from sklearn import svm
# from sklearn.naive_bayes import GaussianNB
# from sklearn.tree import DecisionTreeClassifier
# from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
# from sklearn.linear_model import LogisticRegressionCV
## Model evaluation
# from sklearn.metrics import classification_report
# from itertools import compress # for data summary reporting
###Output
_____no_output_____
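###Markdown
To make the pipeline-plus-grid-search approach mentioned under *Necessary Resources* concrete, the next cell is a minimal, self-contained sketch using synthetic stand-in data and a hypothetical parameter grid. It is illustrative only; the actual feature selection, classifiers and parameter values are developed in the sections that follow.
###Code
## Illustrative sketch only: synthetic stand-in data and a hypothetical parameter grid,
## not the feature selection or classifier tuning performed later in this report.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import StratifiedShuffleSplit
rng = np.random.RandomState(42)
X_demo = rng.rand(100, 5)        # stand-in for the Enron feature matrix
y_demo = rng.randint(0, 2, 100)  # stand-in for the poi labels
demo_pipe = Pipeline([('scale', MinMaxScaler()),
                      ('select', SelectPercentile(f_classif)),
                      ('clf', SVC())])
demo_grid = {'select__percentile': [20, 50, 100],
             'clf__C': [1, 10, 100]}
demo_cv = StratifiedShuffleSplit(y_demo, n_iter=20, test_size=0.3, random_state=42)
demo_search = GridSearchCV(demo_pipe, demo_grid, scoring='f1', cv=demo_cv)
demo_search.fit(X_demo, y_demo)
print demo_search.best_params_
###Output
_____no_output_____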
###Markdown
***** Original poi_id.py script The following code is the original `poi_id.py` skeleton file for reference. The order of tasks is not strictly followed in the following analysis, but all tasks are addressed.
###Code
# #!/usr/bin/python
# import sys
# import pickle
# sys.path.append("tools/")
# from feature_format import featureFormat, targetFeatureSplit
# from tester import dump_classifier_and_data
# ### Task 1: Select what features you'll use.
# ### features_list is a list of strings, each of which is a feature name.
# ### The first feature must be "poi".
# features_list = ['poi','salary'] # You will need to use more features
# ### Load the dictionary containing the dataset
# with open("final_project_dataset.pkl", "r") as data_file:
# data_dict = pickle.load(data_file)
# ### Task 2: Remove outliers
# ### Task 3: Create new feature(s)
# ### Store to my_dataset for easy export below.
# my_dataset = data_dict
# ### Extract features and labels from dataset for local testing
# data = featureFormat(my_dataset, features_list, sort_keys = True)
# labels, features = targetFeatureSplit(data)
# ### Task 4: Try a varity of classifiers
# ### Please name your classifier clf for easy export below.
# ### Note that if you want to do PCA or other multi-stage operations,
# ### you'll need to use Pipelines. For more info:
# ### http://scikit-learn.org/stable/modules/pipeline.html
# # Provided to give you a starting point. Try a variety of classifiers.
# from sklearn.naive_bayes import GaussianNB
# clf = GaussianNB()
# ### Task 5: Tune your classifier to achieve better than .3 precision and recall
# ### using our testing script. Check the tester.py script in the final project
# ### folder for details on the evaluation method, especially the test_classifier
# ### function. Because of the small size of the dataset, the script uses
# ### stratified shuffle split cross validation. For more info:
# ### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html
# # Example starting point. Try investigating other evaluation techniques!
# from sklearn.cross_validation import train_test_split
# features_train, features_test, labels_train, labels_test = \
# train_test_split(features, labels, test_size=0.3, random_state=42)
# ### Task 6: Dump your classifier, dataset, and features_list so anyone can
# ### check your results. You do not need to change anything below, but make sure
# ### that the version of poi_id.py that you submit can be run on its own and
# ### generates the necessary .pkl files for validating your results.
# dump_classifier_and_data(clf, my_dataset, features_list)
###Output
_____no_output_____
###Markdown
First look at the data set
###Code
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: first look at Enron dataset
"""
The following code block is a first look analysis of all of
the data contained within the data_dict provided by Udacity
in the final_project_dataset.pkl file.
"""
def data_dict_to_pd_df(data_dictionary):
"""
Convert data_dict into a pandas dataframe for easier analysis and plotting
Paramters:
data_dictionary = dictionary of dataset
(data_dict provided by Udacity) to be converted
into a pandas data frame
Output:
two dataframes: one to include the full set of data, the
second to contain just the poi's.
"""
# Convert dictionary into pandas dataframe for easier
# analysis of values: missing, outlier, etc.
    enron_df = pd.DataFrame.from_dict(data_dictionary).T
# create seperate data frame of just the poi's
poi_df = enron_df[enron_df.poi==1]
return (enron_df, poi_df)
def data_set_overview(data_dictionary):
"""
Print out overview statistics
Parameters:
data_dictionary = Dataset dictionary (data_dict)
Output:
Big picture overview of the data set. To include
total number of data points. Total number of features and
total number of persons of interest.
"""
## Find total number of uniques features for the financial data
features = [value for value in data_dictionary.itervalues() for value in value.keys()]
print "Number of data points in the data_dict: {0}".\
format(len(data_dictionary.items()))
print "Number of Features: {0}".\
format(len(set(features)))
print "Number of Persons of Interest in data_dict: {0}".\
        format(sum([person['poi'] == 1 for person in data_dictionary.values()]))
return None
def num_NaNs(enron_df, poi_df):
"""
find number of missing values for each feature for the full dataet
and specific subsets of the data set
Parameters:
enron_df = full Enron dataframe
poi_df = subsetted POI only Dataframe
Ouput:
Data Frame containing count and percentage of
NaN's by feature for both data frames
"""
Num_NaNs_full = enron_df.apply(lambda x: x == 'NaN', axis=0).sum()
Per_NaNs_full = enron_df.apply(lambda x: x == 'NaN', axis=0).\
        sum()/float(len(enron_df))*100
Per_NaNs_full = Per_NaNs_full.apply(lambda x: round(x,2))
Num_NaNs_poi = poi_df.apply(lambda x: x == 'NaN', axis=0).sum()
Per_NaNs_poi = poi_df.apply(lambda x: x == 'NaN', axis=0).\
sum()/float(len(poi_df))*100
Per_NaNs_poi = Per_NaNs_poi.apply(lambda x: round(x,2))
Num_NaNs_sansPoi = enron_df[enron_df['poi'] == 0].\
apply(lambda x: x == 'NaN', axis=0).sum()
Per_NaNs_sansPoi = enron_df[enron_df['poi'] == 0].\
apply(lambda x: x == 'NaN', axis=0).\
sum()/float(len(enron_df[enron_df['poi'] == 0]))*100
Per_NaNs_sansPoi = Per_NaNs_sansPoi.apply(lambda x: round(x,2))
NaN_df = pd.concat([Num_NaNs_full, Per_NaNs_full, Num_NaNs_poi, Per_NaNs_poi,
Num_NaNs_sansPoi, Per_NaNs_sansPoi], axis=1)
NaN_df.columns = ["Number_Full","Percent_Full",\
"Number_POI","Percent_POI",\
"Number_sansPOI","Percent_sansPOI"
]
return NaN_df
if __name__ == "__main__":
### Load the dictionary containing the dataset
with open("final_project_dataset.pkl", "r") as data_file:
data_dict = pickle.load(data_file)
## Find total number of uniques features for the financial data
features = [value for value in data_dict.itervalues() for value in value.keys()]
## Overview of data set
data_set_overview(data_dict)
## Find total number of POI's from provided txt file
# c = !awk '{for(i=1;i<=NF;i++){if($i~/^\(/){print $i}}}' poi_names.txt | wc -l
# print "Total number of POIs as identified by Udacity staff and listed in\
# the poi_name.txt file: {0}\n".format(c[0].strip())
## convert to pandas data frames
enron_df, poi_df = data_dict_to_pd_df(data_dict)
## Number of NaN values for each of the features
print "\nNumber and Percent of missing values ('NaN's') for each feature for\
the full data set,\na subset containing just the poi's and a subset of the dataset sans poi's:\n"
print num_NaNs(enron_df, poi_df)
###Output
Number of data points in the data_dict: 146
Number of Features: 21
Number of Persons of Interest in data_dict: 18
Number and Percent of missing values ('NaN's') for each feature for the full data set,
a subset containing just the poi's and a subset of the dataset sans poi's:
Number_Full Percent_Full Number_POI Percent_POI \
bonus 64 43.84 2 11.11
deferral_payments 107 73.29 13 72.22
deferred_income 97 66.44 7 38.89
director_fees 129 88.36 18 100.00
email_address 35 23.97 0 0.00
exercised_stock_options 44 30.14 6 33.33
expenses 51 34.93 0 0.00
from_messages 60 41.10 4 22.22
from_poi_to_this_person 60 41.10 4 22.22
from_this_person_to_poi 60 41.10 4 22.22
loan_advances 142 97.26 17 94.44
long_term_incentive 80 54.79 6 33.33
other 53 36.30 0 0.00
poi 0 0.00 0 0.00
restricted_stock 36 24.66 1 5.56
restricted_stock_deferred 128 87.67 18 100.00
salary 51 34.93 1 5.56
shared_receipt_with_poi 60 41.10 4 22.22
to_messages 60 41.10 4 22.22
total_payments 21 14.38 0 0.00
total_stock_value 20 13.70 0 0.00
Number_sansPOI Percent_sansPOI
bonus 62 48.44
deferral_payments 94 73.44
deferred_income 90 70.31
director_fees 111 86.72
email_address 35 27.34
exercised_stock_options 38 29.69
expenses 51 39.84
from_messages 56 43.75
from_poi_to_this_person 56 43.75
from_this_person_to_poi 56 43.75
loan_advances 125 97.66
long_term_incentive 74 57.81
other 53 41.41
poi 0 0.00
restricted_stock 35 27.34
restricted_stock_deferred 110 85.94
salary 50 39.06
shared_receipt_with_poi 56 43.75
to_messages 56 43.75
total_payments 21 16.41
total_stock_value 20 15.63
###Markdown
A quick glance at the data suggests that a.) there is just not that much data, with only 146 data points, and b.) for many of those data points large portions of their features are missing values. There are a couple of instances where the POI subset has no values for certain features, specifically director fees and restricted stock deferred; the non-POI subset also mostly contains NaN values for these features, with roughly 15% of its data points actually containing a value. If I were to manually remove features from this data set that I felt did not add much value, I might consider removing these two features. One more feature that I might consider removing is the loan advances feature. These three features are universally under-represented in this dataset compared to the other features. Deferral payments are also rather sparsely populated, however I would prefer a more analytical approach to determining the value of this variable. There are some cases where all POIs have values for certain features: email addresses, expenses, other, total payments and total stock value. However it is also true that the majority of non-POIs also have values in these categories. Clearly, if we could separate the classes of the data with the existence of just one feature then machine learning would not be necessary. The key problem with this data set is the nominal number of data points, and most machine learning algorithms' performance is closely related to the quantity of data on which they train (also the quality, however in our case we are more concerned with quantity and the lack thereof). This means that the missing values are a big deal. They can be ignored (i.e. drop those features and data points with lots of missing values), but this is a questionable option when we have so few data points to begin with. We could impute the data or simply set the missing values to zero. Both of these carry some risk of inserting bias into the data and failing to capture the unique outliers in those features that we impute. These outliers are some of the very things our classification algorithm is trying to pick up on in order to predict possible fraudulent behavior. It may be best simply to ignore those features where both the POIs and non-POIs have mostly 'NaN' values. Udacity's `featureFormat` function chooses to replace all the 'NaN' values with 0.0 and then remove any data points where all of the features are 0.0 (an additional option allows you to remove a data point that contains any 0.0 valued features--too aggressive in my opinion). There are of course problems that can arise by simply assuming that a 'NaN' can be replaced with a 0.0 without consequence. However, other imputation methods are especially difficult in this case as we have so little other data from which to draw inferences with which to weight an imputation. > I might have considered adding a tf-idf vectorizer for the words in the email messages, however this would create a situation where we have many more features than data points (not always a bad thing). In addition, the following analysis shows that not all of the email addresses have corresponding financial data and vice versa, suggesting that we would have to introduce data points wholly missing either financial data or bag-of-words data.
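As a minimal illustration of the zero-substitution approach described above (a sketch only; the `enron_df_zero_filled` name is hypothetical and this cell is not part of the project pipeline), the same replacement that `featureFormat` performs internally can be applied to the dataframe built earlier:
###Code
## Sketch: replace the 'NaN' placeholder strings with 0.0, mirroring the
## behaviour of Udacity's featureFormat before it drops all-zero data points.
enron_df_zero_filled = enron_df.replace('NaN', 0.0)
print "Remaining 'NaN' placeholders:", (enron_df_zero_filled == 'NaN').sum().sum()
###Output
_____no_output_____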
###Code
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: Explore possibility of using email content in classifier
from poi_email_addresses import poiEmails
def emailFileNames(directory_of_email_files):
"""
    Find the unique email addresses that are a part of the to-and-from email corpus provided
by Udacity for reference
Parameters:
directory_of_email_files = The name (str) of the email directory containing
the .txt files of the to/from emails. The .txt file naming convention includes the
email address. (e.g. '[email protected]','[email protected]')
Output:
The unique email addresses found in the directory in python list format.
"""
current_directory = os.getcwd()
email_by_address = os.listdir(os.path.join(current_directory,directory_of_email_files))
emails_all = []
for elem in email_by_address:
emails_all.append(elem.split("_")[1][0:-4])
unique_emails_all = set(emails_all)
return unique_emails_all
def numSharedEmails(email_list_1,email_list_2):
"""
find the number of emails in common
Parameters:
email_list_1 = python list containing unique email addresses
email_list_2 = python list containing unique email addresses
Output:
Return the number of unique emails from the first list
that are found in the second list.
"""
count = 0
for email in email_list_1:
if email in email_list_2:
            if email == "NaN":
pass
else:
count +=1
else:
continue
return count
if __name__ == "__main__":
## Generate list of emails in the data set
emails_in_dataset = [value['email_address'] for value in data_dict.itervalues()\
if value['email_address'] != "NaN"]
unique_dataset_emails = set(emails_in_dataset)
poi_emails_in_dataset = [value['email_address'] for value in data_dict.itervalues()\
if value['poi'] == 1]
## If just clone from github repository unzip emails_by_address for parsing by following
## code.
if os.path.isdir(os.path.join(os.getcwd(),"emails_by_address")):
        print "found emails_by_address directory"
else:
!tar -zxvf emails_by_address.tar.gz
## Count the number of data points that include email addresses
print "Number of data points (persons) in the dataset with a known email address:__{0}__".\
format(len(unique_dataset_emails)) #Remove the NaN value
## Count the number of unique email address in the email_by_address corpus
print "Number of unique email addresses in the emails_by_address reference corpus:__{0}__".\
format(len(emailFileNames("emails_by_address"))-1) #Account for empty line recorded as an address
## Count the number of unique email from the dataset that are in the email_by_address
## corpus
print "Number of unique email addresses in the dataset that are found in the\nemails_by_address\
corpus:__{0}__".format(numSharedEmails(unique_dataset_emails,unique_dataset_emails))
##Count the number of unique poi emails in the data dictionary found in the
## email_by_address corpus
print "Number of unique poi emails in the dataset that are found in the\nemails_by_address\
corpus:__{0}__".format(numSharedEmails(poi_emails_in_dataset,unique_dataset_emails))
## return list from Udacity's person of interest email address function
poi_email_address = poiEmails()
## Count the number of poi's email address in the email_by_address corpus
print "Number of POI email addresses (according to poi_email_addresses):__{0}__".format(len(poi_email_address))
print "Number of POI email addresses found in the emails_by_address corpus:__{0}__".\
format(numSharedEmails(poi_email_address,unique_dataset_emails))
###Output
found emails_by_address directory
Number of data points (persons) in the dataset with a known email address:__111__
Number of unique email addresses in the emails_by_address reference corpus:__2328__
Number of unique email addresses in the dataset that are found in the
emails_by_address corpus:__111__
Number of unique poi emails in the dataset that are found in the
emails_by_address corpus:__18__
Number of POI email addresses (according to poi_email_addresses):__90__
Number of POI email addresses found in the emails_by_address corpus:__17__
###Markdown
Roughly 75% of the data points in our data_dict have email addresses. It should be noted that each person has only one listed email address in the dataset (data_dict), but it is certainly possible that some people had multiple email accounts. This means there is a chance we fail to map all of the emails that an individual sent and received to that individual's email-related features. We can find 18 of our POIs' email addresses from the dataset in the corpus (this is necessarily the case given that all the unique emails in the data dictionary were found in the email corpus). Attempting to incorporate the email content as feature data into the current data dictionary would introduce a great many missing-valued financial features if we included every address and message in the email corpus. We could instead incorporate only those emails whose addresses have a matching entry in the data dictionary, but that would likely mean leaving out some POI email addresses for which we have no financial data, and thus depriving our algorithm of valuable information about connections between POIs. This trade-off is often present when we attempt to merge data from two different sources. A possible solution was presented in the Udacity course ["Introduction to Machine Learning"](https://www.udacity.com/course/intro-to-machine-learning--ud120 "https://www.udacity.com/course/intro-to-machine-learning--ud120"): simply record the number of emails sent to and from POIs (in addition to cc's), ignoring the email content and the actual addresses themselves. This lets us capture some of the connection information without having to include all of the email addresses as data points. Of course we still have to parse over both data sets, but we are not trying to merge them directly. This is not the only possible solution, but it is the method I will adopt. For further study on developing a POI classifier, merging the email content through a tf-idf vectorizer might be an interesting avenue to explore to determine whether better accuracy could be achieved in identifying POIs (see the sketch below). To the best of my knowledge we would have to accept some loss of information and connections, either financial or email-content related, with either approach.
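For reference, a minimal sketch of what the tf-idf avenue mentioned above could look like (illustrative only; it is not used in the rest of this analysis, and `person_email_text` is an assumed mapping from each person to the concatenated text of their emails):

```python
# Illustrative sketch of the tf-idf idea discussed above (not part of the
# final analysis). `person_email_text` is an assumed dict: person -> email text.
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_features(person_email_text):
    names = sorted(person_email_text.keys())
    docs = [person_email_text[name] for name in names]
    vectorizer = TfidfVectorizer(stop_words="english", max_df=0.5, sublinear_tf=True)
    word_matrix = vectorizer.fit_transform(docs)  # shape: (n_persons, n_terms)
    return names, vectorizer, word_matrix
```

Even this sketch makes the dimensionality concern above concrete: with roughly 111 people who have known addresses, the resulting term matrix would have far more columns than rows.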
###Code
%matplotlib inline
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: summarize and graph the datapoints features
"""
Following code is for the purpose of describing the features and plotting each
as a histogram. Hopefully this will help to highlight outliers.
"""
## Convert to pandas dataframe to take advantage of internal plotting methods
enron_df = enron_df.convert_objects(convert_numeric=True)
enron_df_clnd = enron_df.loc[:,enron_df.columns != 'email_address']
print enron_df_clnd.describe()
enron_df_clnd.hist(figsize = (15,15))
###Output
bonus deferral_payments deferred_income director_fees \
count 82.000000 39.000000 49.000000 17.000000
mean 2374234.609756 1642674.153846 -1140475.142857 166804.882353
std 10713327.969046 5161929.973575 4025406.378506 319891.409747
min 70000.000000 -102500.000000 -27992891.000000 3285.000000
25% 431250.000000 81573.000000 -694862.000000 98784.000000
50% 769375.000000 227449.000000 -159792.000000 108579.000000
75% 1200000.000000 1002671.500000 -38346.000000 113784.000000
max 97343619.000000 32083396.000000 -833.000000 1398517.000000
exercised_stock_options expenses from_messages \
count 1.020000e+02 95.000000 86.000000
mean 5.987054e+06 108728.915789 608.790698
std 3.106201e+07 533534.814109 1841.033949
min 3.285000e+03 148.000000 12.000000
25% 5.278862e+05 22614.000000 22.750000
50% 1.310814e+06 46950.000000 41.000000
75% 2.547724e+06 79952.500000 145.500000
max 3.117640e+08 5235198.000000 14368.000000
from_poi_to_this_person from_this_person_to_poi loan_advances \
count 86.000000 86.000000 4.0000
mean 64.895349 41.232558 41962500.0000
std 86.979244 100.073111 47083208.7019
min 0.000000 0.000000 400000.0000
25% 10.000000 1.000000 1600000.0000
50% 35.000000 8.000000 41762500.0000
75% 72.250000 24.750000 82125000.0000
max 528.000000 609.000000 83925000.0000
long_term_incentive other poi restricted_stock \
count 66.000000 93.000000 146 1.100000e+02
mean 1470361.454545 919064.967742 0.123288 2.321741e+06
std 5942759.315498 4589252.907638 0.329899 1.251828e+07
min 69223.000000 2.000000 False -2.604490e+06
25% 281250.000000 1215.000000 0 2.540180e+05
50% 442035.000000 52382.000000 0 4.517400e+05
75% 938672.000000 362096.000000 0 1.002370e+06
max 48521928.000000 42667589.000000 True 1.303223e+08
restricted_stock_deferred salary shared_receipt_with_poi \
count 18.000000 95.000000 86.000000
mean 166410.555556 562194.294737 1176.465116
std 4201494.314703 2716369.154553 1178.317641
min -7576788.000000 477.000000 2.000000
25% -389621.750000 211816.000000 249.750000
50% -146975.000000 259996.000000 740.500000
75% -75009.750000 312117.000000 1888.250000
max 15456290.000000 26704229.000000 5521.000000
to_messages total_payments total_stock_value
count 86.000000 1.250000e+02 1.260000e+02
mean 2073.860465 5.081526e+06 6.773957e+06
std 2582.700981 2.906172e+07 3.895777e+07
min 57.000000 1.480000e+02 -4.409300e+04
25% 541.250000 3.944750e+05 4.945102e+05
50% 1211.000000 1.101393e+06 1.102872e+06
75% 2634.750000 2.093263e+06 2.949847e+06
max 15149.000000 3.098866e+08 4.345095e+08
###Markdown
Reviewing the summary statistics of our features and looking over the histogram plots, there are several features that bear a second look. Udacity's Intro to Machine Learning, specifically the lesson on outliers, highlighted the 'salary' feature as having possible outliers. A number of other features show very large differences between the I.Q.R. (Inter-Quartile Range) and their min and max values; examples include deferred income, bonus, deferral payments, director fees, exercised stock options, expenses, from messages, long term incentive and a few others. While it may simply be the case that these features contain legitimate extreme values (e.g. the CEO or CFO), experience with the 'salary' feature in the course suggests looking at these values further.
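For reference, a minimal sketch of the textbook 1.5 * IQR fences in pandas (the next cell uses a simpler median-based screen instead; `df` is assumed to be a numeric DataFrame such as `enron_df_clnd`):

```python
# Sketch of the textbook 1.5 * IQR rule (not the screen used in the next cell).
def iqr_outliers(df, k=1.5):
    """Map each column to the index labels whose values fall outside
    [Q1 - k*IQR, Q3 + k*IQR] for that column."""
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    flagged = {}
    for col in df.columns:
        mask = (df[col] < lower[col]) | (df[col] > upper[col])
        flagged[col] = df.index[mask].tolist()
    return flagged

# e.g. sorted(iqr_outliers(enron_df_clnd)['salary'])
```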
###Code
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: Remove Outliers from the data
"""
Following code first finds and then removes the outliers in the converted dataframe.
A description of the data along with histograms are plotted after the outlier
removal to measure the effect. Second the offending outliers are removed from
the data_dict.
"""
def findOutliers(data_dict):
"""
For each feature return all data points with values that are 1.5 times
the Inter-Quartile-Range (I.Q.R.) for that feature.
parameters:
data_dict = The original Udacity provided data set as a python
dictionary
Output:
A dictionary whose keys are the features in the data set and whose
values are a list of all the data point names associated with values that
are above or below 1.5 times the I.Q.R.
"""
def isOutlier(x,feature_name):
if x < IQR_lower_array[feature_name]:
return True
elif x > IQR_upper_array[feature_name]:
return True
else:
return False
    # Screening rule used here: flag values that lie more than 1.5 times the
    # absolute value of the feature's median above or below the median
    # (a rough stand-in for the textbook "1.5 * IQR in either direction" rule).
# Convert to pandas dataframe for easier data manipulation to find outliers.
enron_df, poi_df = data_dict_to_pd_df(data_dict)
# Make sure that all the values are numeric
enron_df = enron_df.convert_objects(convert_numeric=True)
# remove the email_address as this is obviously not a numeric feature
enron_df_clnd = enron_df.loc[:,enron_df.columns != 'email_address']
# Find the outlier boundaries in either direction for each of the features
enron_df_IQR = enron_df_clnd.quantile(axis = 0)
IQR_upper_array = enron_df_IQR.apply(lambda x: x + abs(x*1.5))
IQR_lower_array = enron_df_IQR.apply(lambda x: x - abs(x*1.5))
#print IQR_lower_array
Outlier_container= {}
# Parse through all the values for each feature and determine if outlier
# or not
for feature_name, feature_vector in enron_df_clnd.iteritems():
outlier = feature_vector.apply(lambda x: "Outlier" if\
isOutlier(x,feature_name) else\
None)
# Record the data point names associated with the positive outliers
for name, value in outlier.iteritems():
if feature_name not in Outlier_container.keys():
if value:
Outlier_container[feature_name] = [name]
else:
continue
else:
if value:
Outlier_container[feature_name].append(name)
else:
continue
# return a dictionary containing the data-point names of the outliers for
# each feature.
return Outlier_container
def findOutlierInFeature(data_dict, feature_name, threshold):
"""
Find any 'outliers' in a feature by manually setting threshold value
Parameters:
data_dict = The Udacity provided data set in data_dict.
feature_name = the name of the feature which we wish to find
outliers in.
threshold = The value that we will threshold an outlier by. (e.g.
greater than 20,000,000 is an outlier)
Output:
Print to console the key associated with the value above the
given threshold for the given feature.
"""
max_value_key = [peeps for peeps, value in data_dict.items()\
if (value[feature_name] != 'NaN') and (value[feature_name] > threshold)]
return feature_name,max_value_key
def removeOutliers(data_dict, list_data_points):
"""
remove the data points associated with any discovered outliers
Parameters:
data_dict = The Udacity provided data set data_dict.
data_point_name = The key name for the datapoint containing
outliers. (e.g. 'Total').
Output:
data_dict with the provided data point removed from the
dictionary.
"""
for elem in list_data_points:
        try:
            data_dict.pop(elem,0)
        except ValueError:
            print "data_point not found in data_dict."
            pass
    return data_dict
if __name__ == "__main__":
## Find outliers
for key,value in findOutliers(data_dict).iteritems():
print key+":\n", pprint.pprint(sorted(value))
# feature_name,max_value_key = findOutlierInFeature(data_dict,\
# feature_name = 'salary',\
# threshold = 20000000)
# print "The key associated with the max {0} value is: {1}".\
# format(feature_name,max_value_key)
## Remove the outlier from the enron_df and analyze results
try:
enron_df_clnd = enron_df_clnd.drop('TOTAL', axis=0)
enron_df_clnd = enron_df_clnd.drop('THE TRAVEL AGENCY IN THE PARK',axis =0)
except ValueError:
pass
    print "The data set described after removing the {0} data point(s)\n".\
format(['TOTAL','THE TRAVEL AGENCY IN THE PARK']),enron_df_clnd.describe()
enron_df_clnd.hist(figsize = (15,15))
## Remove the outliers from the data_dict
data_dict = removeOutliers(data_dict, ['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
###Output
salary:
['FREVERT MARK A',
'LAY KENNETH L',
'PICKERING MARK R',
'SKILLING JEFFREY K',
'TOTAL']
None
to_messages:
['BECK SALLY W',
'BELDEN TIMOTHY N',
'BUY RICHARD B',
'DELAINEY DAVID W',
'FREVERT MARK A',
'HAEDICKE MARK E',
'KAMINSKI WINCENTY J',
'KEAN STEVEN J',
'KITCHEN LOUISE',
'LAVORATO JOHN J',
'LAY KENNETH L',
'MCCONNELL MICHAEL S',
'SHANKMAN JEFFREY A',
'SHAPIRO RICHARD S',
'SHARP VICTORIA T',
'SHERRIFF JOHN R',
'SKILLING JEFFREY K',
'WHALLEY LAWRENCE G']
None
deferral_payments:
['ALLEN PHILLIP K',
'BAXTER JOHN C',
'BAZELIDES PHILIP J',
'BELDEN TIMOTHY N',
'BUY RICHARD B',
'DETMERING TIMOTHY J',
'FREVERT MARK A',
'HAEDICKE MARK E',
'HORTON STANLEY C',
'HUMPHREY GENE E',
'MEYER ROCKFORD G',
'MULLER MARK S',
'NOLES JAMES L',
'PIPER GREGORY F',
'TOTAL',
'WASAFF GEORGE']
None
total_payments:
['ALLEN PHILLIP K',
'BAXTER JOHN C',
'BELDEN TIMOTHY N',
'BHATNAGAR SANJAY',
'DELAINEY DAVID W',
'FALLON JAMES B',
'FREVERT MARK A',
'HAEDICKE MARK E',
'HORTON STANLEY C',
'HUMPHREY GENE E',
'KITCHEN LOUISE',
'LAVORATO JOHN J',
'LAY KENNETH L',
'MARTIN AMANDA K',
'MCMAHON JEFFREY',
'MULLER MARK S',
'PAI LOU L',
'SHANKMAN JEFFREY A',
'SHERRIFF JOHN R',
'SKILLING JEFFREY K',
'TOTAL',
'WHALLEY LAWRENCE G']
None
long_term_incentive:
['BAXTER JOHN C',
'DELAINEY DAVID W',
'DURAN WILLIAM D',
'ECHOLS JOHN B',
'FASTOW ANDREW S',
'FREVERT MARK A',
'HANNON KEVIN P',
'LAVORATO JOHN J',
'LAY KENNETH L',
'LEFF DANIEL P',
'MARTIN AMANDA K',
'MULLER MARK S',
'RICE KENNETH D',
'SKILLING JEFFREY K',
'TOTAL']
None
bonus:
['ALLEN PHILLIP K',
'BELDEN TIMOTHY N',
'DELAINEY DAVID W',
'FALLON JAMES B',
'FREVERT MARK A',
'KITCHEN LOUISE',
'LAVORATO JOHN J',
'LAY KENNETH L',
'MCMAHON JEFFREY',
'SHANKMAN JEFFREY A',
'SKILLING JEFFREY K',
'TOTAL',
'WHALLEY LAWRENCE G']
None
restricted_stock_deferred:
['BANNANTINE JAMES M',
'BHATNAGAR SANJAY',
'CLINE KENNETH W',
'DERRICK JR. JAMES V',
'PIPER GREGORY F',
'TOTAL']
None
total_stock_value:
['BANNANTINE JAMES M',
'BAXTER JOHN C',
'BUY RICHARD B',
'CHRISTODOULOU DIOMEDES',
'DELAINEY DAVID W',
'DERRICK JR. JAMES V',
'DIMICHELE RICHARD G',
'ELLIOTT STEVEN',
'FREVERT MARK A',
'HANNON KEVIN P',
'HIRKO JOSEPH',
'HORTON STANLEY C',
'IZZO LAWRENCE L',
'KEAN STEVEN J',
'LAVORATO JOHN J',
'LAY KENNETH L',
'LINDHOLM TOD A',
'MCCONNELL MICHAEL S',
'OVERDYKE JR JERE C',
'PAI LOU L',
'REDMOND BRIAN L',
'REYNOLDS LAWRENCE',
'RICE KENNETH D',
'SHERRIFF JOHN R',
'SKILLING JEFFREY K',
'TAYLOR MITCHELL S',
'THORN TERENCE H',
'TOTAL',
'WALLS JR ROBERT H',
'WHALLEY LAWRENCE G',
'WHITE JR THOMAS E',
'YEAGER F SCOTT']
None
expenses:
['BAY FRANKLIN R',
'GLISAN JR BEN F',
'KOENIG MARK E',
'KOPPER MICHAEL J',
'MCCLELLAN GEORGE',
'MCMAHON JEFFREY',
'SHANKMAN JEFFREY A',
'SHAPIRO RICHARD S',
'TOTAL',
'URQUHART JOHN A']
None
from_poi_to_this_person:
['BECK SALLY W',
'BELDEN TIMOTHY N',
'BOWEN JR RAYMOND M',
'BUY RICHARD B',
'CALGER CHRISTOPHER F',
'COLWELL WESLEY',
'DEFFNER JOSEPH M',
'DIETRICH JANET R',
'DONAHUE JR JEFFREY M',
'DURAN WILLIAM D',
'FREVERT MARK A',
'HAEDICKE MARK E',
'KEAN STEVEN J',
'KITCHEN LOUISE',
'LAVORATO JOHN J',
'LAY KENNETH L',
'MCCONNELL MICHAEL S',
'REDMOND BRIAN L',
'SHANKMAN JEFFREY A',
'SKILLING JEFFREY K',
'WHALLEY LAWRENCE G']
None
exercised_stock_options:
['BANNANTINE JAMES M',
'BAXTER JOHN C',
'CHRISTODOULOU DIOMEDES',
'DERRICK JR. JAMES V',
'DIMICHELE RICHARD G',
'ELLIOTT STEVEN',
'FREVERT MARK A',
'HANNON KEVIN P',
'HIRKO JOSEPH',
'HORTON STANLEY C',
'LAVORATO JOHN J',
'LAY KENNETH L',
'OVERDYKE JR JERE C',
'PAI LOU L',
'REDMOND BRIAN L',
'REYNOLDS LAWRENCE',
'RICE KENNETH D',
'SKILLING JEFFREY K',
'THORN TERENCE H',
'TOTAL',
'WALLS JR ROBERT H',
'WHALLEY LAWRENCE G',
'YEAGER F SCOTT']
None
from_messages:
['ALLEN PHILLIP K',
'BECK SALLY W',
'BELDEN TIMOTHY N',
'BUCHANAN HAROLD G',
'BUY RICHARD B',
'CALGER CHRISTOPHER F',
'DELAINEY DAVID W',
'DERRICK JR. JAMES V',
'HAEDICKE MARK E',
'HAYSLETT RODERICK J',
'HORTON STANLEY C',
'KAMINSKI WINCENTY J',
'KEAN STEVEN J',
'KITCHEN LOUISE',
'LAVORATO JOHN J',
'MARTIN AMANDA K',
'MCCARTY DANNY J',
'MCCONNELL MICHAEL S',
'PIPER GREGORY F',
'REDMOND BRIAN L',
'SHANKMAN JEFFREY A',
'SHAPIRO RICHARD S',
'SHARP VICTORIA T',
'SKILLING JEFFREY K',
'WALLS JR ROBERT H',
'WHALLEY LAWRENCE G']
None
other:
['BANNANTINE JAMES M',
'BAXTER JOHN C',
'BELDEN TIMOTHY N',
'BERGSIEKER RICHARD P',
'BHATNAGAR SANJAY',
'BIBI PHILIPPE A',
'BUTTS ROBERT H',
'BUY RICHARD B',
'CAUSEY RICHARD A',
'DIMICHELE RICHARD G',
'FALLON JAMES B',
'FASTOW ANDREW S',
'FITZGERALD JAY L',
'FREVERT MARK A',
'GLISAN JR BEN F',
'GOLD JOSEPH',
'GRAY RODNEY',
'HERMANN ROBERT J',
'IZZO LAWRENCE L',
'KISHKILL JOSEPH G',
'KOENIG MARK E',
'KOPPER MICHAEL J',
'LAY KENNETH L',
'MARTIN AMANDA K',
'MCMAHON JEFFREY',
'PAI LOU L',
'REYNOLDS LAWRENCE',
'RICE KENNETH D',
'SHELBY REX',
'SHERRIFF JOHN R',
'STABLER FRANK',
'THE TRAVEL AGENCY IN THE PARK',
'THORN TERENCE H',
'TILNEY ELIZABETH A',
'TOTAL',
'WESTFAHL RICHARD K',
'WHALLEY LAWRENCE G',
'WHITE JR THOMAS E',
'WODRASKA JOHN',
'YEAGER F SCOTT']
None
from_this_person_to_poi:
['ALLEN PHILLIP K',
'BECK SALLY W',
'BELDEN TIMOTHY N',
'BUY RICHARD B',
'CALGER CHRISTOPHER F',
'DELAINEY DAVID W',
'FALLON JAMES B',
'GARLAND C KEVIN',
'HAEDICKE MARK E',
'HANNON KEVIN P',
'HAYSLETT RODERICK J',
'KAMINSKI WINCENTY J',
'KEAN STEVEN J',
'KITCHEN LOUISE',
'LAVORATO JOHN J',
'MCCONNELL MICHAEL S',
'MCMAHON JEFFREY',
'PIPER GREGORY F',
'REDMOND BRIAN L',
'RIEKER PAULA H',
'SHANKMAN JEFFREY A',
'SHAPIRO RICHARD S',
'SHERRIFF JOHN R',
'SKILLING JEFFREY K',
'WHALLEY LAWRENCE G']
None
poi:
['BELDEN TIMOTHY N',
'BOWEN JR RAYMOND M',
'CALGER CHRISTOPHER F',
'CAUSEY RICHARD A',
'COLWELL WESLEY',
'DELAINEY DAVID W',
'FASTOW ANDREW S',
'GLISAN JR BEN F',
'HANNON KEVIN P',
'HIRKO JOSEPH',
'KOENIG MARK E',
'KOPPER MICHAEL J',
'LAY KENNETH L',
'RICE KENNETH D',
'RIEKER PAULA H',
'SHELBY REX',
'SKILLING JEFFREY K',
'YEAGER F SCOTT']
None
deferred_income:
['ALLEN PHILLIP K',
'BAXTER JOHN C',
'BELDEN TIMOTHY N',
'BERGSIEKER RICHARD P',
'BUY RICHARD B',
'DERRICK JR. JAMES V',
'DETMERING TIMOTHY J',
'ELLIOTT STEVEN',
'FASTOW ANDREW S',
'FREVERT MARK A',
'HAEDICKE MARK E',
'HANNON KEVIN P',
'MULLER MARK S',
'RICE KENNETH D',
'TILNEY ELIZABETH A',
'TOTAL',
'WASAFF GEORGE']
None
shared_receipt_with_poi:
['BECK SALLY W',
'BELDEN TIMOTHY N',
'BLACHMAN JEREMY M',
'BUY RICHARD B',
'CALGER CHRISTOPHER F',
'DELAINEY DAVID W',
'DIETRICH JANET R',
'FREVERT MARK A',
'KEAN STEVEN J',
'KITCHEN LOUISE',
'KOENIG MARK E',
'LAVORATO JOHN J',
'LAY KENNETH L',
'LEFF DANIEL P',
'MCCONNELL MICHAEL S',
'MCMAHON JEFFREY',
'SHAPIRO RICHARD S',
'SHARP VICTORIA T',
'SHERRIFF JOHN R',
'SKILLING JEFFREY K',
'SUNDE MARTIN',
'WHALLEY LAWRENCE G']
None
restricted_stock:
['BANNANTINE JAMES M',
'BAXTER JOHN C',
'BHATNAGAR SANJAY',
'CAUSEY RICHARD A',
'DELAINEY DAVID W',
'DERRICK JR. JAMES V',
'ELLIOTT STEVEN',
'FALLON JAMES B',
'FASTOW ANDREW S',
'FREVERT MARK A',
'HAUG DAVID L',
'HORTON STANLEY C',
'IZZO LAWRENCE L',
'KEAN STEVEN J',
'KOENIG MARK E',
'LAY KENNETH L',
'MCCONNELL MICHAEL S',
'OVERDYKE JR JERE C',
'PAI LOU L',
'RICE KENNETH D',
'SHERRIFF JOHN R',
'SKILLING JEFFREY K',
'TOTAL',
'WALLS JR ROBERT H',
'WHALLEY LAWRENCE G',
'WHITE JR THOMAS E',
'YEAGER F SCOTT']
None
director_fees:
['TOTAL']
None
The data set described after removing the ['TOTAL', 'THE TRAVEL AGENCY IN THE PARK'] data point(s)
bonus deferral_payments deferred_income director_fees \
count 81.000000 38.000000 48.000000 16.000000
mean 1201773.074074 841602.526316 -581049.812500 89822.875000
std 1441679.438330 1289322.626180 942076.402972 41112.700735
min 70000.000000 -102500.000000 -3504386.000000 3285.000000
25% 425000.000000 79644.500000 -611209.250000 83674.500000
50% 750000.000000 221063.500000 -151927.000000 106164.500000
75% 1200000.000000 867211.250000 -37926.000000 112815.000000
max 8000000.000000 6426990.000000 -833.000000 137864.000000
exercised_stock_options expenses from_messages \
count 101.000000 94.000000 86.000000
mean 2959559.257426 54192.010638 608.790698
std 5499449.598994 46108.377454 1841.033949
min 3285.000000 148.000000 12.000000
25% 506765.000000 22479.000000 22.750000
50% 1297049.000000 46547.500000 41.000000
75% 2542813.000000 78408.500000 145.500000
max 34348384.000000 228763.000000 14368.000000
from_poi_to_this_person from_this_person_to_poi loan_advances \
count 86.000000 86.000000 3.000000
mean 64.895349 41.232558 27975000.000000
std 86.979244 100.073111 46382560.030684
min 0.000000 0.000000 400000.000000
25% 10.000000 1.000000 1200000.000000
50% 35.000000 8.000000 2000000.000000
75% 72.250000 24.750000 41762500.000000
max 528.000000 609.000000 81525000.000000
long_term_incentive other poi restricted_stock \
count 65.000000 91.000000 144 109.000000
mean 746491.200000 466410.516484 0.125 1147424.091743
std 862917.421568 1397375.607531 0.331873 2249770.356903
min 69223.000000 2.000000 False -2604490.000000
25% 275000.000000 1203.000000 0 252055.000000
50% 422158.000000 51587.000000 0 441096.000000
75% 831809.000000 331983.000000 0 985032.000000
max 5145434.000000 10359729.000000 True 14761694.000000
restricted_stock_deferred salary shared_receipt_with_poi \
count 17.000000 94.000000 86.000000
mean 621892.823529 284087.542553 1176.465116
std 3845528.349509 177131.115377 1178.317641
min -1787380.000000 477.000000 2.000000
25% -329825.000000 211802.000000 249.750000
50% -140264.000000 258741.000000 740.500000
75% -72419.000000 308606.500000 1888.250000
max 15456290.000000 1111258.000000 5521.000000
to_messages total_payments total_stock_value
count 86.000000 1.230000e+02 125.000000
mean 2073.860465 2.641806e+06 3352073.024000
std 2582.700981 9.524694e+06 6532883.097201
min 57.000000 1.480000e+02 -44093.000000
25% 541.250000 3.969340e+05 494136.000000
50% 1211.000000 1.101393e+06 1095040.000000
75% 2634.750000 2.087530e+06 2606763.000000
max 15149.000000 1.035598e+08 49110078.000000
###Markdown
It looks like removing the 'TOTAL' data point, which clearly is not an actual person, has gone a long way towards reducing the extreme values in the features. Testing several of the remaining features with large differences between their $75^{th}$ percentile and max values suggests other possible outliers. 'THE TRAVEL AGENCY IN THE PARK' is not really a person; although it may have been used to commit fraud, it is a corporate entity and not an individual, so I will drop this data point as well. Visually charting a few of the features with extreme values confirms that some values are far outside the norm, but the financial pdf (provided as reference) confirms these values as accurate. This assumes, of course, that the data in the pdf is valid; I will operate under this assumption, since otherwise we would be undermining the validity of any classifier trained on the data.
###Code
import matplotlib.pyplot as plt
feature_name,max_value_key = findOutlierInFeature(data_dict,
feature_name = "from_messages",
threshold = 10000)
print "The key(s) associated with the max {0} value is: {1}".\
format(feature_name,max_value_key)
msgs_by_Kaminski = data_dict['KAMINSKI WINCENTY J']['from_messages']
msgs_by_all_others = sum([value['from_messages'] for value in data_dict.itervalues()\
if value['from_messages'] != "NaN"])
print "The number of from messages from {0}: {1}\nThe number of all other from messages total\
: {2}".format(max_value_key, msgs_by_Kaminski, msgs_by_all_others - msgs_by_Kaminski)
#print data_dict['KAMINSKI WINCENTY J']
enron_df_clnd['from_messages'].plot(kind = 'box', figsize = (8,4),vert=False )
plt.title("Boxplot of from_messages distributions")
plt.xlabel("Number of from messages (discrete scale)")
plt.show()
#ax.set_xscale('log')
## Method for log transform found here:
#http://stackoverflow.com/questions/29930340/want-to-plot-pandas-dataframe-as-multiple-histograms-with-log10-scale-x-axis
# by unutbu
axs = enron_df_clnd['from_messages'].plot(kind = 'box', figsize = (8,4),vert=False )
axs.set_xscale('log')
plt.title("Boxplot of from_messages distributions")
plt.xlabel("Number of from messages (log10)")
plt.show()
###Output
The key(s) associated with the max from_messages value is: ['KAMINSKI WINCENTY J']
The number of from messages from ['KAMINSKI WINCENTY J']: 14368
The number of all other from messages total: 37988
###Markdown
It seems a little improbable that one person would account for more than a quarter of all the from messages sent by the persons in this data set. It is hard to say, however, whether this count is an anomaly only within this data set or whether Kaminski would still be an outlier if we looked at all of the emails in the entire Enron email directory. I will not remove this data point, precisely because we don't actually know whether this value is truly anomalous or just extreme among the data points we have financial data for. There are 14,368 entries in Mr. Kaminski's [email protected] document, so I will operate under the assumption that this value, although extreme, is accurate.
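A quick sanity check of such a count against the corpus index itself might look like the following sketch (the `from_<address>.txt` naming convention is an assumption inferred from the `emailFileNames` parsing earlier; the address itself is left as a placeholder):

```python
# Sketch of a sanity check on an extreme from_messages count; the
# from_<address>.txt naming convention is an assumption based on the
# emails_by_address parsing done earlier in this notebook.
import os

def count_index_entries(address, email_dir="emails_by_address"):
    """Count non-empty lines in the from_<address>.txt index file, if it exists."""
    path = os.path.join(email_dir, "from_" + address + ".txt")
    if not os.path.exists(path):
        return None
    with open(path, "r") as idx:
        return sum(1 for line in idx if line.strip())

# e.g. count_index_entries(kaminski_address)  # expected to be close to 14,368
```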
###Code
axs = enron_df_clnd['other'].plot(kind = 'box', figsize = (8,4),vert=False )
axs.set_xscale('log')
plt.title("distribution of 'other' values log transformed")
feature_name,max_value_key = findOutlierInFeature(data_dict, feature_name = "other",
threshold = 1000000)
print "The key(s) associated with the max {0} value is: {1}".\
format(feature_name,sorted(max_value_key))
###Output
The key(s) associated with the max other value is: ['BAXTER JOHN C', 'FREVERT MARK A', 'IZZO LAWRENCE L', 'LAY KENNETH L', 'MARTIN AMANDA K', 'PAI LOU L', 'SHELBY REX', 'SHERRIFF JOHN R', 'WHITE JR THOMAS E']
###Markdown
We might expect Mr. Lay to have a rather large value, and in fact we know he was deeply involved in the fraud, so we certainly would not want to remove this data point. The rest of the individuals are not that far outside the norm, and double checking the Enron insider pay .pdf suggests their values are accurate.
###Code
axs = enron_df_clnd['total_payments'].plot(kind = 'box', figsize = (8,4),vert=False )
axs.set_xscale('log')
plt.title("distribution of 'total_payments' values log transformed")
feature_name,max_value_key = findOutlierInFeature(data_dict,
feature_name = "total_payments",
threshold = 10000000)
print "The key(s) associated with the max {0} value is: {1}".\
format(feature_name,max_value_key)
print "Bhatnagar Sanjay's financial data:\n", pprint.pprint(data_dict['BHATNAGAR SANJAY'])
###Output
The key(s) associated with the max total_payments value is: ['LAVORATO JOHN J', 'LAY KENNETH L', 'BHATNAGAR SANJAY', 'FREVERT MARK A']
Bhatnagar Sanjay's financial data:
{'bonus': 'NaN',
'deferral_payments': 'NaN',
'deferred_income': 'NaN',
'director_fees': 137864,
'email_address': '[email protected]',
'exercised_stock_options': 2604490,
'expenses': 'NaN',
'from_messages': 29,
'from_poi_to_this_person': 0,
'from_this_person_to_poi': 1,
'loan_advances': 'NaN',
'long_term_incentive': 'NaN',
'other': 137864,
'poi': False,
'restricted_stock': -2604490,
'restricted_stock_deferred': 15456290,
'salary': 'NaN',
'shared_receipt_with_poi': 463,
'to_messages': 523,
'total_payments': 15456290,
'total_stock_value': 'NaN'}
None
###Markdown
According to the `enron61702insiderpay.pdf` document, Sanjay's total_payments are actually \$137,864, his restricted stock is \$2,604,490 and his restricted stock deferred is \$-2,604,490. His 'expenses' and 'other' are also switched around, his exercised stock options should be \$15,456,290 and his total stock value is the same (\$15,456,290).
###Code
## Correct the mistake found in the data dictionary
def fixFinancialData(data_dictionary):
"""
Fix the financial data mistake for 'BHATNAGAR SANJAY'
Parameters:
data_dictionary: Python dictionary containing the data set
Output:
        Python dictionary containing the data set with 'BHATNAGAR SANJAY'
financial data corrected
"""
data_dictionary['BHATNAGAR SANJAY']['total_payments'] = 137864
data_dictionary['BHATNAGAR SANJAY']['restricted_stock'] = 2604490
data_dictionary['BHATNAGAR SANJAY']['restricted_stock_deferred'] = -2604490
data_dictionary['BHATNAGAR SANJAY']['expenses'] = 137864
data_dictionary['BHATNAGAR SANJAY']['other'] = 'NaN'
data_dictionary['BHATNAGAR SANJAY']['exercised_stock_options'] = 15456290
data_dictionary['BHATNAGAR SANJAY']['total_stock_value'] = 15456290
return data_dictionary
data_dict = fixFinancialData(data_dict)
print "Bhatnagar Sanjay's financial data:\n", pprint.pprint(data_dict['BHATNAGAR SANJAY'])
## Ensure the extreme values in the restricted stock deferred is supported by the pdf
min_value_key = [peeps for peeps, value in data_dict.items()\
if (value['restricted_stock_deferred'] != 'NaN') and (value['restricted_stock_deferred'] < -1000000)]
print "The key(s) associated with the min {0} value is: {1}".\
format('restricted_stock_deferred',max_value_key)
###Output
The key(s) associated with the min restricted_stock_deferred value is: ['LAVORATO JOHN J', 'LAY KENNETH L', 'BHATNAGAR SANJAY', 'FREVERT MARK A']
###Markdown
Question 1--Part 2: ***Were there any outliers in the data when you got it, and how did you handle those?*** ***Answer:*** Yes, there was at least one major outlier that propagated itself throughout the financial data. This was a data point labeled "TOTAL" which, upon examination of the financial data pdf, clearly represented the 'Totals' row of the financial features. Obviously this data point is not a person, and the information it contains already exists in each of the features by simply summing over all of their values, so I chose to drop it. 'THE TRAVEL AGENCY IN THE PARK' is not really a person either; although it may have been used to commit fraud, it is a corporate entity and not an individual, so I also dropped this data point. Reviewing more of the extreme values in the features (values that are more than 1.5 times the I.Q.R. (inter-quartile range) above or below the I.Q.R.), I found at least one data point with incorrect information: according to the `enron61702insiderpay.pdf` document, Bhatnagar Sanjay's financial data was all switched around. I fixed these errors and, not finding any more red flags in the data, moved on to creating my own features. *** Preprocessing -- Feature Selection -- Feature Creation The following code blocks will focus on the provided data_dict dataset. Some features will be dropped by hand, a few new features will be created, and a final feature selection analysis will be run on the remaining features in order to reduce them based on importance. Importance was determined in a later code block in which I ran a grid search over a pipeline to find the feature selection parameters and algorithm combination that achieved the 'best' results from my POI classifier. The following code is designed to replicate that later process as closely as possible using the learned parameters and to elucidate the results. The next several code blocks are written to the local directory; I used them for my grid search on AWS EC2 clusters and wanted an easy way to pass these functions to those remote file systems so I could reference them there. The code is located in the local directory and I simply import it into the namespace of the scripts where it is necessary; it exists below only as a reference for parameters, and running it would require switching the cell type to code.
###Code
%%writefile create_my_dataset.py
#!/usr/bin/python
#Author: Zach Farmer
#Purpose: Create my own dataset from provided data_dict
"""
Functions for creating my own dataset from the provided data_dict.
Function for dropping features by hand, chiefly for the
removal of the non-numeric feature 'email_address'. One could
certainly manually remove other variables to perform feature selection.
It is recommended that a more analytical approach be taken using sklearn's
feature selection methods.
Function for computing fraction between two features provided. For the purpose
of creating new features based on ratios.
Function for generating new features provided. This function implements hard
coded features. It is not abstractable for the creation of any other features
as written.
Function for removing outliers found in the dataset; it can accept a list of
datapoints to remove.
Function for correcting financial data for one of the persons in the data set.
This function is hard-coded and cannot be abstracted for use on other persons,
and features.
"""
def dropFeatures(features, remove_list):
"""
Parameters:
features = Python list of unique features in the data_dict.
remove_list = Python list of features to be removed.(drop
non-numeric features such as the email address)
Output:
Python list of unique features sans the features in the remove
list.
"""
## Method courtesy of user: Donut at:
## http://stackoverflow.com/questions/4211209/remove-all-the-elements-that-occur-in-one-list-from-another
features_remove = remove_list
learning_features = [feature for feature in features if feature not in features_remove]
return learning_features
## Following code adapted from Udacity's Intro to Machine learning lesson 11 Visualizing
## Your New Feature Quiz
def computeFraction(feature_1, feature_2 ):
"""
Parameters:
Two numeric feature vectors for which we want to compute a ratio
between
Output:
Return fraction or ratio of feature_1 divided by feature_2
"""
fraction = 0.
if feature_1 == "NaN":
fraction = 0.0
elif feature_2 == "NaN":
fraction = 0.0
else:
fraction = int(feature_1) / float(feature_2)
return fraction
def newFeatures(data_dict):
"""
Parameters:
data_dict provided by Udacity instructors
Output:
data_dict with new features (hard-coded)
"""
## following is not extensible to making any other features
## then what is hard coded below.
## Note: Some of the following features are susceptible to data leakage.
    ## The features which include the links to poi's through emails mean that
    ## the test data potentially includes ground truth information about actual poi's.
for name in data_dict:
from_poi_to_this_person = data_dict[name]["from_poi_to_this_person"]
to_messages = data_dict[name]["to_messages"]
fraction_from_poi = computeFraction( from_poi_to_this_person, to_messages )
data_dict[name]["fraction_from_poi"] = fraction_from_poi
from_this_person_to_poi = data_dict[name]["from_this_person_to_poi"]
from_messages = data_dict[name]["from_messages"]
fraction_to_poi = computeFraction( from_this_person_to_poi, from_messages )
data_dict[name]["fraction_to_poi"] = fraction_to_poi
salary = data_dict[name]['salary']
total_payments = data_dict[name]['total_payments']
salary_to_totalPayment_ratio = computeFraction(salary, total_payments)
data_dict[name]['salary_to_totalPayment_ratio'] = salary_to_totalPayment_ratio
salary = data_dict[name]['salary']
total_stock = data_dict[name]['total_stock_value']
salary_to_stockValue_ratio = computeFraction(salary, total_stock)
data_dict[name]['salary_to_stockValue_ratio'] = salary_to_stockValue_ratio
return data_dict
def removeOutliers(data_dictionary, list_data_points):
"""
remove the data points associated with any discovered outliers
Parameters:
        data_dictionary = The Udacity provided data set data_dict or my_dataset.
data_point_name = The key name for the datapoint containing
outliers. (e.g. 'Total').
Output:
data_dictionary with the provided data point removed from the
dictionary.
"""
for elem in list_data_points:
try:
data_dictionary.pop(elem,0)
except ValueError:
print "data_point not found in data_dict."
pass
return data_dictionary
def fixFinancialData(data_dictionary):
"""
Fix the financial data mistake for 'BHATNAGAR SANJAY'
Parameters:
data_dictionary: Python dictionary containing the data set
Output:
        Python dictionary containing the data set with 'BHATNAGAR SANJAY'
financial data corrected
"""
data_dictionary['BHATNAGAR SANJAY']['total_payments'] = 137864
data_dictionary['BHATNAGAR SANJAY']['restricted_stock'] = 2604490
data_dictionary['BHATNAGAR SANJAY']['restricted_stock_deferred'] = -2604490
data_dictionary['BHATNAGAR SANJAY']['expenses'] = 137864
data_dictionary['BHATNAGAR SANJAY']['other'] = 'NaN'
data_dictionary['BHATNAGAR SANJAY']['exercised_stock_options'] = 15456290
data_dictionary['BHATNAGAR SANJAY']['total_stock_value'] = 15456290
return data_dictionary
# %%writefile select_features.py
# #!/python/bin/python
# #Author: Zach Farmer
# #Purpose: Feature Selection
# """
# Following function is designed to find features that contain the greatest
# explanation power in regards to the classification goal of identifying
# poi's. Function implements sklearn's SelectPercentile method and PCA methods.
# Parameters for these two methods should be discovered using the gridsearch
# optimization in a later script.
# """
# def featureSelection(reduced_features,labels,clnd_features,percentile,n_components,results=False):
# """
# Parameters:
# reduced_features = Unique feature names in python list after dropping non-numeric
# feaures.
# labels = ground truth labels for the data points.
# clnd_features = data point features in numpy array format corresponding
# to the labels.
# percentile= the parameter for the SelectPercentile method;
#         between 0 and 100 (e.g. 61 keeps the top 61% of features).
# n_components = the n_components for the pca.
# results = False returns python list of selected features. If True
# returns the metrics of the feature selectors (F-statistic, and p-values from
# f_classif) and the top 'n' pca component variance measurements.
# Output:
#         Resulting list of features from the SelectPercentile function and the
#         number of principal components used. If results = True then the
#         statistics of the SelectPercentile method using f_classif will be printed.
#         In addition the explained variance of the top 'n' principal components will
#         also be printed.
# """
# from sklearn.feature_selection import SelectPercentile, f_classif
# from sklearn.decomposition import PCA
# from itertools import compress
# selector = SelectPercentile(f_classif, percentile=percentile)
# selector.fit_transform(clnd_features, labels)
# pca = PCA(n_components = n_components)
# pca.fit_transform(clnd_features, labels)
# if results == True:
# f_stat = sorted(zip(reduced_features[1:],f_classif(clnd_features,labels)[0]),\
# key = lambda x: x[1], reverse=True)
# p_vals = sorted(zip(reduced_features[1:],f_classif(clnd_features,labels)[1]),\
# key = lambda x: x[1])
# expl_var = pca.explained_variance_ratio_
# return f_stat,p_vals,expl_var
# else:
# ## return a boolean index of the retained features
# retained_features = selector.get_support()
# ## index the original features by the boolean index of top x% features
# ## return a python list of the features to be used for training
# features_list = list(compress(reduced_features[1:],retained_features))
# ## add back in the 'poi' to the first position in the final features list
# features_list.insert(0,'poi')
# return features_list
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: Identify information rich features
from create_my_dataset import newFeatures, dropFeatures, removeOutliers, fixFinancialData
from select_features import featureSelection
"""
This code will implement a feature selection I wrote to emulate the one ultimately
used in the final poi_id.pc script. I used a pipeline inside a gridsearch to
train and test my classifier. Therefore in order to concentrate more attention on
just the feature selection process to answer the provided question this
code attempts to return something very similar to the pipeline.
"""
if __name__=="__main__":
## Add new feature to my dataset
my_dataset = newFeatures(data_dict)
## Remove outliers
my_dataset = removeOutliers(my_dataset,['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
## Fix bad data
my_dataset = fixFinancialData(my_dataset)
## Find total number of unique features in my_dataset
features = [value for value in my_dataset.itervalues() for value in value.keys()]
unique_features = list(set(features))
## Remove non-numeric features return feature list (email_address)
tmp_features = dropFeatures(unique_features, ['email_address'])
## Method for moving an item in a list to a new position found at:
## http://stackoverflow.com/questions/3173154/move-an-item-inside-a-list
## posted by nngeek
## ensure that 'poi' is the first value in the feature list
try:
tmp_features.remove('poi')
tmp_features.insert(0, 'poi')
except ValueError:
pass
### Extract features and labels from dataset for local features importance analysis
data = featureFormat(my_dataset, tmp_features, sort_keys=True)
labels, numpy_features = targetFeatureSplit(data)
## Find the top most 'informative' features using sklearn's
## f_classif as the metric and sklearn's pca method.
# Parameters used were found from a grid search over a pipeline
percentile = 61
n_components = 22
final_feature_list = featureSelection(tmp_features,
labels,
numpy_features,
percentile = percentile,
n_components = n_components,
results = False
)
print "\nTop {0} features explaining poi according to sklearn's\nSelect-Percentile\
with the function f_classif:\n".\
format(len(final_feature_list[1:]))
pprint.pprint(final_feature_list[1:])
%matplotlib qt
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: Final visual review of features selected for model training
"""
Review the features that remain after feature selection from the original data
dictionary. Does not include the top principal components for obvious reasons.
This visual analysis will only look at the features returned from the feature
selector.
"""
from pandas.tools.plotting import scatter_matrix
## Convert my_dataset into pandas dataframe for analysis
df,poi_df = data_dict_to_pd_df(my_dataset)
## Convert to numeric for plotting purposes
numeric_df = df.convert_objects(convert_numeric=True)
## Index by only those features that we will use in our model training
model_df = numeric_df.loc[:,final_feature_list]
## print general statistics about the dataframe
print model_df.describe()
## Print scatter plot matrix of features -- too many variables to clearly read...won't graph
## inline. If you run this you will need to exit out of the graphic in order to continue running
## the rest of this script. If not interested in seeing the scatterplot matrix graphic then
## simply comment out the following line.
scatter_matrix(model_df, alpha = .9, figsize=(14, 14), diagonal='kde')
###Output
poi expenses deferred_income from_poi_to_this_person \
count 145 95.000000 48.000000 86.000000
mean 0.124138 55072.768421 -581049.812500 64.895349
std 0.330882 46658.979762 942076.402972 86.979244
min False 148.000000 -3504386.000000 0.000000
25% 0 22614.000000 -611209.250000 10.000000
50% 0 46950.000000 -151927.000000 35.000000
75% 0 79952.500000 -37926.000000 72.250000
max True 228763.000000 -833.000000 528.000000
exercised_stock_options shared_receipt_with_poi loan_advances \
count 101.000000 86.000000 3.000000
mean 3086804.801980 1176.465116 27975000.000000
std 5638086.075942 1178.317641 46382560.030684
min 3285.000000 2.000000 400000.000000
25% 506765.000000 249.750000 1200000.000000
50% 1297049.000000 740.500000 2000000.000000
75% 2542813.000000 1888.250000 41762500.000000
max 34348384.000000 5521.000000 81525000.000000
other bonus total_stock_value \
count 91.000000 81.000000 126.000000
mean 468874.604396 1201773.074074 3448138.238095
std 1396987.469699 1441679.438330 6595447.465066
min 2.000000 70000.000000 -44093.000000
25% 1203.000000 425000.000000 494510.250000
50% 51587.000000 750000.000000 1102872.500000
75% 359083.500000 1200000.000000 2949846.750000
max 10359729.000000 8000000.000000 49110078.000000
long_term_incentive restricted_stock salary total_payments \
count 65.000000 109.000000 94.000000 1.240000e+02
mean 746491.200000 1195212.899083 284087.542553 2.499885e+06
std 862917.421568 2224517.529709 177131.115377 9.419135e+06
min 69223.000000 32460.000000 477.000000 1.480000e+02
25% 275000.000000 259907.000000 211802.000000 3.616470e+05
50% 422158.000000 462384.000000 258741.000000 1.095882e+06
75% 831809.000000 1008149.000000 308606.500000 2.056144e+06
max 5145434.000000 14761694.000000 1111258.000000 1.035598e+08
fraction_to_poi
count 145.000000
mean 0.109164
std 0.185513
min 0.000000
25% 0.000000
50% 0.000000
75% 0.198436
max 1.000000
###Markdown
Feature Scaling
###Code
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: Scale features for machine learning algorithms
"""
Following code scales the features for model training, as their ranges
differ greatly between features.
"""
from sklearn.preprocessing import MinMaxScaler
if __name__ == "__main__":
### Extract features and labels from dataset for local testing
data = featureFormat(my_dataset, final_feature_list, sort_keys = True)
labels_final, features_final = targetFeatureSplit(data)
## Scale features with MinMax
scaler = MinMaxScaler()
scaled_features = scaler.fit_transform(features_final)
###Output
//anaconda/lib/python2.7/site-packages/matplotlib/collections.py:590: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
if self._edgecolors == str('face'):
###Markdown
Question 2: ***What features did you end up using in your POI identifier, and what selection process did you use to pick them? Did you have to do any scaling? Why or why not? Explain what feature you tried to make, and the rationale behind it. If you used an automated feature selection function like SelectKBest, please report the feature scores and reasons for your choice of parameter values?*** **What features did you end up using in your POI identifier?**
###Code
print "Features used in the POI identifier model from the SelectPercentile:\n",
pprint.pprint(final_feature_list[1:])
print "\nIn addition I implemented a PCA on all of the features and returned the top {0}\
components.\nThese top {0} components were combined with the top {1}% of features using\
sklearn's\nfeature union method".format(n_components, percentile)
###Output
Features used in the POI identifier model from the SelectPercentile:
['expenses',
'deferred_income',
'from_poi_to_this_person',
'exercised_stock_options',
'shared_receipt_with_poi',
'loan_advances',
'other',
'bonus',
'total_stock_value',
'long_term_incentive',
'restricted_stock',
'salary',
'total_payments',
'fraction_to_poi']
In addition I implemented a PCA on all of the features and returned the top 22 components.
These top 22 components were combined with the top 61% of features using sklearn's
feature union method
###Markdown
**What selection process did you use to pick them?**
###Code
print "I used sklearn's Select-Percentile method with percentile = {0}% and sklearn's f_classif (ANOVA)\
providing the metrics returning F-statistics and P-values.".format(percentile)
print "For the top {0} principle components I used sklearn's PCA fit_transform.".\
format(n_components)
###Output
I used sklearn's Select-Percentile method with percentile = 61% and sklearn's f_classif (ANOVA) providing the metrics returning F-statistics and P-values.
For the top 22 principle components I used sklearn's PCA fit_transform.
###Markdown
**Did you have to do any scaling? Why or why not?** ***Answer:*** Yes, I implemented a MinMaxScaler. My features span vastly different scales, from the to/from email fractions to total stock values in the tens of millions. While certain algorithms (e.g. decision trees) handle such differences with greater ease, in order to be as flexible as possible with my algorithm choices I scaled all of the features in my dataset. **Engineer your own feature that does not come ready-made in the dataset -- explain what feature you tried to make, and the rationale behind it.** ***Answer:*** I felt that Katie's (Udacity Intro to Machine Learning course co-instructor) idea of generating features that measure the frequency of communication both to and from known POIs was an excellent one. This way of utilizing the emails lets us gather some of the information available in the email corpus without engineering a way to include the actual email content in the same dataset as the financial data, which spared me the trouble of combining two different datasets that did not contain all of the same data points. Katie's idea of a shared receipt also resonated with me as an excellent method of capturing second-degree associations with POIs. After implementing the fraction to and from this person to POI* and the shared receipt feature, I engineered two other features: the salary to total payments ratio and the salary to total stock value ratio. These were engineered on the hypothesis that an individual is most likely to commit fraud because they receive some sort of benefit from doing so. In other words, it is a risk and reward trade-off, and my theory was that individuals committing fraud would show a lower ratio of salary to total compensation metrics: two people of similar professional standing with fairly similar base salaries should show differences in total compensation. An individual committing fraud would be more likely to receive financial gains from their fraudulently attained success in the form of some type of 'bonus' compensation, and therefore should present higher total payments and total stock values and consequently a lower ratio of salary to those metrics. My engineered features did not make it past the feature selection stage, unless you count them as possibly being included in the top principal components. I believe that because many of the key individuals involved in the fraud were at the very top of the company, and those individuals naturally receive total compensation far in excess of their base salaries, the information I was hoping to uncover was not very distinctive. > *Including these features does introduce some data leakage into the validation set. The validation test data will have some connection to ground truth values, i.e. it knows which email addresses belong to ground-truth POIs. **In your feature selection step, if you used an automated feature selection function like SelectKBest, please report the feature scores and reasons for your choice of parameter values.** ***Answer:*** I used sklearn's SelectPercentile method to select my features. The parameter choice of 61% for the SelectPercentile method was arrived at after running a randomized grid search over my feature selection and algorithm parameters. The combination with the best outcome paired the top 61% of features with the top 22 principal components from the PCA.
Technically the top 22 components are transformations of the features; so while I used 14 of the original features directly, all of the features were used to generate the 22 principal components.
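A minimal sketch of how this feature union can be wired together is shown below (the step names mirror the parameter grid used later, e.g. `features__pca__n_components`; the values 61 and 22 are the ones reported above, and the scaler placement is an assumption consistent with the scaling cell earlier, not the final poi_id pipeline itself):

```python
# Sketch of the SelectPercentile + PCA feature union described above.
# percentile=61 and n_components=22 are the values reported in this notebook;
# the rest is illustrative wiring, not the final poi_id pipeline.
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline, FeatureUnion

combined_features = FeatureUnion([
    ("pca", PCA(n_components=22)),
    ("univ_select", SelectPercentile(f_classif, percentile=61)),
])

feature_pipeline = Pipeline([
    ("scale", MinMaxScaler()),
    ("features", combined_features),
])

# X_union = feature_pipeline.fit_transform(numpy_features, labels)
# X_union would hold the 22 principal components alongside the top 61% of features.
```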
###Code
from select_features import featureSelection
f_stat, p_vals, expl_var = featureSelection(tmp_features,
labels,
numpy_features,
percentile = percentile,
n_components = n_components,
results = True)
print "F-statistics of sklearn's f-classif (ANOVA) for features:\n"
pprint.pprint(f_stat)
print "\nP-values of sklearn's f-classif (ANOVA) for features:\n"
pprint.pprint(p_vals)
print "\nThe variance explained by the top {0} components of the PCA:\n".format(n_components)
pprint.pprint(zip(range(1,len(expl_var)+1,1),expl_var))
## Final dataset overview
with open("final_project_dataset.pkl", "r") as data_file:
data_dict = pickle.load(data_file)
## Add new feature to my dataset
my_dataset = newFeatures(data_dict)
## Remove outliers
my_dataset = removeOutliers(my_dataset,['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
## Fix bad financial data
my_dataset = fixFinancialData(my_dataset)
#print sorted(my_dataset.keys())
data_set_overview(my_dataset)
###Output
Number of data points in the data_dict: 144
Number of Features: 25
Number of Persons of Interest in data_dict: 18
###Markdown
*** Question 3 **What algorithm did you end up using? What other ones did you try? How did model performance differ between algorithms?** Algorithm Choice One of the scripts below is written to the local directory; although it can be run locally if we manually activate more clusters, it was designed to be exported to an AWS EC2 cluster. The final parameters and classifier were reached using a different script, found in this section of the report. All of the code in this section was written in pursuit of the optimal hyper-parameters for both the feature selection and the classifier model. The search that produced the parameters and classifier for the final model was performed on my local machine using a randomized grid search in order to reduce the run time. The other code was used with StarCluster to instantiate and run, in parallel, an exhaustive grid search on an EC2 cluster and is here for reference only. It can only be run if you have an AWS account, and it costs money to rent the cluster time. In addition, it uses an exhaustive naive grid search which takes a long time to run even with access to many cores. I implemented that code mainly for practice.
###Code
#!/anaconda/bin/python
#Author: Zach Farmer
#Purpose: Parallelize algorithm and feature selection with randomized grid search. Run on local
#Machine
"""
The following code blocks borrow from Olivier Grisel's
"Advanced Machine Learning with scikit-learn" tutorial given at PyCon
2013 in Santa Clara. The tutorial provided IPython notebooks with
examples; below you can find a link to the tutorial. I also took
liberal advantage of sklearn's documentation and examples of pipeline
usage and grid search functions. Sklearn's website was far and away
the most useful reference for this part of my analysis and much of my
code is based on their examples.
For those interested you can find the notebooks and link to the tutorial at the
following github: https://github.com/ogrisel/parallel_ml_tutorial/
"""
import sys
import numpy as np
import os
import pickle
import re
import scipy.stats as sp
from pprint import pprint
from create_my_dataset import newFeatures, dropFeatures, removeOutliers, fixFinancialData
sys.path.append("tools/")
from feature_format import featureFormat, targetFeatureSplit
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.externals import joblib
from IPython.parallel import Client
from sklearn.grid_search import ParameterGrid
from sklearn.grid_search import RandomizedSearchCV
from sklearn import svm
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import classification_report
def PickleBestClassifers(best_classifiers, file_Name):
"""
Parameters:
best_classifiers = A python dictionary containing the names of
classifiers as keys and a pipeline object containing the optimized
parameters for the feature selection and classifier.
file_name = The name that the pickled file will be saved under
as a python string.
Output:
(none) pickled object saved to the local directory.
"""
# Pickle the results
fileObject = open(file_Name,'wb')
pickle.dump(best_classifiers, fileObject)
fileObject.close()
print "{0} saved to local directory as a pickle file".format(file_Name)
return None
if __name__ == "__main__":
with open("final_project_dataset.pkl", "r") as data_file:
data_dict = pickle.load(data_file)
## set random seed generator for the sciy.stats
np.random.seed(42)
## Add new feature to my dataset
my_dataset = newFeatures(data_dict)
## Remove outliers
my_dataset = removeOutliers(my_dataset,['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
## Fix bad financial data
my_dataset = fixFinancialData(my_dataset)
## Find unique features in my_dataset
features = [value for value in my_dataset.itervalues() for value in value.keys()]
unique_features = list(set(features))
## Remove non-numeric features, return feature list (email_address)
features_list = dropFeatures(unique_features, ['email_address'])
## Method for moving an item in a list to a new position found at:
## http://stackoverflow.com/questions/3173154/move-an-item-inside-a-list
## posted by nngeek
## ensure that 'poi' is the first value in the feature list
try:
features_list.remove('poi')
features_list.insert(0, 'poi')
except ValueError:
pass
### Extract features and labels convert to numpy arrays
data = featureFormat(my_dataset, features_list, sort_keys=True)
labels, numpy_features = targetFeatureSplit(data)
## Create training and test splits on all of the features, feature
## selection to be performed in the pipeline
X_train, X_test, y_train, y_test = train_test_split(numpy_features,\
labels,\
test_size=0.1,\
random_state=42)
## set randomized grid search cv
cv = StratifiedShuffleSplit(y_train,\
n_iter = 30,\
test_size = .3,\
random_state=42)
if "Best_Classifiers_1.pkl" not in os.listdir('.'):
## List of classifiers to explore and compare
classifiers = {
"GNB": GaussianNB(),
"SVC": svm.SVC(),
"RDF": RandomForestClassifier(),
"ADB": AdaBoostClassifier(DecisionTreeClassifier(class_weight='balanced')),
"LRC": LogisticRegressionCV(random_state = 42)
}
## dictionary of parameters for the randomized grid search cv
param_grid = dict(
features__pca__n_components = sp.randint(1,len(X_train[0])),
features__univ_select__percentile = range(1,100,10),
SVC__C = sp.expon(scale = 100),
SVC__gamma = sp.expon(scale=.1),
SVC__kernel = ['rbf', 'linear','sigmoid'],
SVC__class_weight = ['balanced'],
RDF__n_estimators = range(1,500,1),
RDF__criterion = ['gini','entropy'],
RDF__max_depth = range(1,len(X_train[0]),1),
RDF__class_weight = ['balanced'],
ADB__n_estimators = range(1,500,1),
ADB__learning_rate = sp.expon(scale= 300),
LRC__Cs = range(0,10,1),
LRC__class_weight = ['balanced']
)
best_classifiers = {}
for classifier in classifiers:
## Method for supplying just the parameter grid entries related to the classifier
## in the current iteration while excluding the other classifier parameters.
# dict comprehension method courtesy of BernBarn at:
# http://stackoverflow.com/questions/14507591/python-dictionary-comprehension
param_for_class = {key: value for key,value in param_grid.iteritems() if
re.search(key.split("_")[0],'features ' + classifier)}
## Feature selection method, same for all classifiers
pca = PCA()
selection = SelectPercentile()
## Note: Only implement when using randomized grid search. PCA takes a long
## time to run, not a good choice with exhaustive grid search
feature_select = FeatureUnion([("pca",pca),("univ_select",selection)])
## Activate the classifier for the current loop
clf = classifiers[classifier]
## Pipeline feature selection, feature scaling and classifier for optimization
pipeline = Pipeline([
("features", feature_select),
("scaler", MinMaxScaler()),
(classifier,clf)
])
## use f1_weighted scoring to account for heavily skewed classes
search = RandomizedSearchCV(estimator = pipeline,
param_distributions = param_for_class,
scoring = 'f1_weighted',
n_jobs=-1,
cv = cv,
n_iter = 20,
verbose = 1,
error_score = 0,
random_state = 42)
results = search.fit(X_train,y_train)
best_classifiers[classifier] = results.best_estimator_
## Save the best classifier pipeline objects to local directory using pickle
PickleBestClassifers(best_classifiers,"Best_Classifiers_1.pkl")
else:
## After initial run of grid search, reference the pickled outcomes for the
## rest of the analysis. Actual searching process takes some time
## on my system setup, so I want to run it as few times as possible.
savedResults = open("Best_Classifiers_1.pkl",'r')
best_classifiers = pickle.load(savedResults)
for key,value in best_classifiers.iteritems():
print "Parameters for {0}\nFEATURE SELECTION:\n[{1}]\nSCALER:\n[{2}]\nCLASSIFIER:\n[{3}]\n\n".\
format(key,value.steps[0][1].get_params(),
value.steps[1][1],
value.steps[2][1])
## Method of accessing pipeline objects and performing transformation found at
## Zac Stewart's blog:
## http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html
## transform and predict on the X_Test split of the data
X_test_data = value.steps[0][1].transform(X_test)
X_test_data_scl = value.steps[1][1].transform(X_test_data)
pred = value.steps[2][1].predict(X_test_data_scl)
## return classification report of prediction results compared to truth values
print key + " Score:" + "\n" + (classification_report(y_test,
pred,
target_names=['non-poi','poi']
))
for key,value in best_classifiers.iteritems():
if key == 'LRC':
print "Parameters for {0}\nFEATURE SELECTION:\n>>> {1}\nSCALER:\n>>> {2}\nCLASSIFIER:\n>>> {3}\n\n".\
format(key,value.steps[0][1].get_params(),
value.steps[1][1],
value.steps[2][1])
else:
continue
###Output
Parameters for LRC
FEATURE SELECTION:
>>> {'n_jobs': 1, 'univ_select': SelectPercentile(percentile=61,
score_func=<function f_classif at 0x10ca48c08>), 'pca__copy': True, 'transformer_list': [('pca', PCA(copy=True, n_components=22, whiten=False)), ('univ_select', SelectPercentile(percentile=61,
score_func=<function f_classif at 0x10ca48c08>))], 'pca__n_components': 22, 'pca__whiten': False, 'pca': PCA(copy=True, n_components=22, whiten=False), 'transformer_weights': None, 'univ_select__score_func': <function f_classif at 0x10ca48c08>, 'univ_select__percentile': 61}
SCALER:
>>> MinMaxScaler(copy=True, feature_range=(0, 1))
CLASSIFIER:
>>> LogisticRegressionCV(Cs=2, class_weight='balanced', cv=None, dual=False,
fit_intercept=True, intercept_scaling=1.0, max_iter=100,
multi_class='ovr', n_jobs=1, penalty='l2', random_state=42,
refit=True, scoring=None, solver='lbfgs', tol=0.0001, verbose=0)
###Markdown
***Answer:*** I use a cross-validated logistic regression classifier for my algorithm. I tried out 5 different classifiers in order to determine which one provides the best performance. Using sklearn's classification report to analyze the results of the 5 classifiers, I found that the logistic regression algorithm had the best precision and recall for identifying both poi's and non-poi's (specifically a high recall, which I deemed the more important of the two metrics in this case). The AdaBoost, random forest, Gaussian naive Bayes and support vector classifiers all had some success at identifying poi's and non-poi's with high precision and decent recall. However, none of these algorithms matched the logistic regression's precision and recall when it came to identifying poi's. Given the choice between being especially aggressive in predicting poi's and generating lots of false positives vs. being too conservative and failing to identify known poi's, I lean towards the algorithm that successfully predicts most of the known poi's as poi's, even if it generates a few more false positives as a result. *The following code was written to be distributed on an AWS EC2 cluster using StarCluster AMIs to instantiate the instances.*
###Code
# #%%writefile ec2_cluster_files/parallel_GridSearch.py
# #!/usr/bin/python
# #Author: Zach Farmer
# #Purpose: Parallelize algorithm and feature selection with paramter grid search.
# #Use numpy memarrays and Ipython Parallel run on AWS EC2 using StarCluster.
# """
# The following code blocks borrow heavily from Olivier Grisel's
# "Advanced Machine Learning with scikit-learn" tutorial given at PyCon
# 2013 in Santa Clara. The tutorial provided IPython notebooks with
# examples; I have used the code examples in those notebooks as a guideline
# for implementing my grid-search optimization in parallel. For those
# interested you can find the notebooks and a link to the tutorial at the
# following github: https://github.com/ogrisel/parallel_ml_tutorial/
# Some of the functions are adaptations of the code found in that tutorial.
# The code block was designed to be uploaded to a StarCluster-initialized EC2
# instance. (I had trouble getting this code to work quickly on the EC2 instance,
# not sure why, as the environment should be identical to my machine, just
# with more cores. Regardless, the distribution of computations didn't seem to
# speed up the process, and I had to ditch the PCA, which basically caused the
# computations to stall indefinitely.)
# """
# import sys
# import numpy as np
# import os
# import time
# import pickle
# import re
# from pprint import pprint
# from create_my_dataset import newFeatures, dropFeatures, removeOutliers, fixFinancialData
# from feature_format import featureFormat, targetFeatureSplit
# from sklearn.feature_selection import SelectPercentile, f_classif
# from sklearn.preprocessing import MinMaxScaler
# from sklearn.cross_validation import train_test_split
# from sklearn.cross_validation import StratifiedShuffleSplit
# from sklearn.pipeline import Pipeline
# from sklearn.externals import joblib
# from IPython.parallel import Client
# from sklearn.grid_search import ParameterGrid
# from sklearn import svm
# from sklearn.naive_bayes import GaussianNB
# from sklearn.tree import DecisionTreeClassifier
# from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
# from sklearn.linear_model import LogisticRegressionCV
# ## Create Cross-validated training and test dataset using joblib
# ## borrows heavily from Olivier Grisel's example
# def persist_cv_splits(X, y, n_cv_splits = 14, filename="data",
# suffix = "_cv_%03d.pkl", test_size=.1,
# random_state=42):
# """
# Input:
# X = The features data in a numpy array.
# y = the corresponding ground truth labels.
# n_cv_splits = Number of cross_validated splits to make (folds).
# filename = The filename prefix for the pickled splits.
# suffix = The filename suffix for the pickled splits.
# (apply logical naming for easier iterative access later
# when performing model fitting. (e.g. data_cv_001,
# data_cv_002, ...)).
# test_size = Number of data points to set aside for testing
# as a ratio. (.1,.2,...).
# random_state = Number for the random state generator for
# replication.
# *Note: owing to the small size of the data set and the rarity
# of a positive poi labels the cv split will be performed using a
# stratified shuffle split.
# Output:
# pickled cross-val datasets for use by numpy memory maps to reduce
# redundant distributions of the dataset to each of the engines.
# In the case of AWS Clusters the cross-val dataset should be shared
# across all the clusters using NFS.
# """
# ## Implement stratified shuffle split
# cv = StratifiedShuffleSplit(y,
# n_iter = n_cv_splits,
# test_size = test_size,
# random_state = random_state)
# ## List of cv_split filenames
# cv_split_filenames = []
# for i, (train, test) in enumerate(cv):
# # Create a tuple containing the cross_val fold
# cv_fold = (X[train], y[train], X[test], y[test])
# # use the index for the filenaming scheme
# cv_split_filename = filename + suffix % i
# # add absolute path to filename
# cv_split_filename = os.path.abspath(cv_split_filename)
# # Using joblib dump the cv_dataset files
# joblib.dump(cv_fold, cv_split_filename)
# # add the name to the cv_split_filenames to pass on as an iterator
# cv_split_filenames.append(cv_split_filename)
# return cv_split_filenames
# def compute_evaluation(cv_split_filename, model, params):
# """
# Parameters:
# cv_split_filename = The file name of the memory mapped numpy array
# containing a fold of the cross-validated split on which the model is
# to be trained.
# model = tuple containing in the [0] index the alias name for the classifier
# and in the [1] index the instantiation of the classifier itself.
# Params = A dictionary of relevant parameters for the pipeline objects.
# Output:
# The validation score for the pipeline for the given
# cross-val dataset and parameters.
# """
# from sklearn.externals import joblib
# from sklearn.pipeline import Pipeline
# from sklearn.preprocessing import MinMaxScaler
# from sklearn.feature_selection import SelectPercentile, f_classif
# from sklearn import svm
# from sklearn.naive_bayes import GaussianNB
# from sklearn.tree import DecisionTreeClassifier
# from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
# from sklearn.linear_model import LogisticRegressionCV
# X_train, y_train, X_validation, y_validation = joblib.load(
# cv_split_filename, mmap_mode='c')
# ## Feature selection method, same for all classifiers
# selection = SelectPercentile()
# ## Pipeline the feature selection, feature scaling and classifier for optimization
# ## procedure
# pipeline = Pipeline([
# ("features", selection),
# ("scaler", MinMaxScaler()),
# (model[0],model[1])
# ])
# # set model parameters
# pipeline.set_params(**params)
# # train the model
# trained_pipeline = pipeline.fit(X_train, y_train)
# # evaluate model score
# validation_score = trained_pipeline.score(X_validation, y_validation)
# return validation_score
# def grid_search(lb_view, model,
# cv_split_filenames, param_grid):
# """
# Parameters:
# lb_view = A load-balanced IPython.parallel client.
# model = tuple containing in the [0] index the alias name for the classifier
# and in the [1] index the instantiation of the classifier itself.
# cv_split_filenames = list of cross-val dataset filenames.
# param_grid = dictionary of all the hyper-parameters for the pipeline
# objects to be trained.
# Output:
# List of parameters and list of asynchronous client tasks handles
# """
# all_tasks = []
# all_parameters = list(ParameterGrid(param_grid))
# for i, params in enumerate(all_parameters):
# task_for_params = []
# for j,cv_split_filename in enumerate(cv_split_filenames):
# t = lb_view.apply(compute_evaluation, cv_split_filename,
# model, params)
# task_for_params.append(t)
# all_tasks.append(task_for_params)
# return all_parameters, all_tasks
# def find_best(all_parameters, all_tasks, n_top=5):
# """compute the mean score of the completed tasks"""
# mean_scores = []
# for param, task_group in zip(all_parameters,all_tasks):
# scores = [t.get() for t in task_group if t.ready()]
# if len(scores) == 0:
# continue
# mean_scores.append((np.mean(scores), param))
# return sorted(mean_scores, reverse=True, key = lambda x: x[0])[:n_top]
# def progress(tasks):
# """
# Input:
# The asynchronous task handles returned
# Output:
# The number of tasks that have been completed
# """
# return np.mean([task.ready() for task_group in tasks
# for task in task_group])
# def PickleBestClassifers(best_classifiers, file_Name):
# """
# Input:
# A python dictionary containing the names of classifiers as keys and the
# pipeline object containing the optimized parameters for the feature selection
# and learning algorithms. The name that the pickled file will be saved
# under as a python string.
# Output:
# (None) pickled object saved to the local directory.
# """
# # Pickle the results, as the gridSearch takes forever
# fileObject = open(file_Name,'wb')
# pickle.dump(best_classifiers, fileObject)
# fileObject.close()
# return None
# if __name__ =="__main__":
# with open("final_project_dataset.pkl", "r") as data_file:
# data_dict = pickle.load(data_file)
# ## set random seed generator for the sciy.stats
# np.random.seed(42)
# ## Add new feature to my dataset
# my_dataset = newFeatures(data_dict)
# ## Remove outliers
# my_dataset = removeOutliers(my_dataset,['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
# ## Fix bad financial data
# my_dataset = fixFinancialData(my_dataset)
# ## Find unique features in my_dataset
# features = [value for value in my_dataset.itervalues() for value in value.keys()]
# unique_features = list(set(features))
# ## Remove non-numeric features, return feature list (email_address)
# features_list = dropFeatures(unique_features, ['email_address'])
# ## Method for moving an item in a list to a new position found at:
# ## http://stackoverflow.com/questions/3173154/move-an-item-inside-a-list
# ## posted by nngeek
# ## ensure that 'poi' is the first value in the feature list
# try:
# features_list.remove('poi')
# features_list.insert(0, 'poi')
# except ValueError:
# pass
# ### Extract features and labels convert to numpy arrays
# data = featureFormat(my_dataset, features_list, sort_keys=True)
# labels, numpy_features = targetFeatureSplit(data)
# ## Create training and test splits on all of the features, feature
# ## selection to be performed in the pipeline
# X_train, X_test, y_train, y_test = train_test_split(numpy_features,\
# labels,\
# test_size=0.2,\
# random_state=42)
# ## Create training and test splits for the grid-search cross-validation
# cv_split_filenames = persist_cv_splits(np.array(X_train),\
# np.array(y_train),\
# n_cv_splits = 10,\
# filename="data",\
# suffix = "_cv_%03d.pkl",\
# test_size=.2,\
# random_state=42)
# if "Best_Classifiers.pkl" not in os.listdir('.'):
# ## List of classifiers to explore and compare
# classifiers = {
# "GNB": GaussianNB(),
# "SVC": svm.SVC(),
# "RDF": RandomForestClassifier(),
# "ADB": AdaBoostClassifier(DecisionTreeClassifier(class_weight='balanced')),
# "LRC" : LogisticRegressionCV()
# }
# ## dictionary of parameters for the GridSearchCV
# param_grid = dict(
# features__percentile = range(10,100,10),
# SVC__C = np.logspace(-2,8,9,base=3),
# SVC__gamma = np.logspace(-9,3,9,base = 3),
# SVC__kernel = ['rbf', 'linear','sigmoid'],
# SVC__class_weight = ['balanced'],
# RDF__n_estimators = range(10,100,10),
# RDF__criterion = ['gini','entropy'],
# RDF__max_depth = range(1,len(X_train[0]),1),
# RDF__class_weight = ['balanced'],
# ADB__n_estimators = range(50,500,50),
# ADB__learning_rate = np.logspace(-2,8,9,base=3),
# LRC__Cs = range(0,10,1),
# LRC__class_weight = ['balanced']
# )
# best_classifiers = {}
# client = Client(packer = "pickle")
# lb_view = client.load_balanced_view()
# for classifier in classifiers:
# ## Method for supplying just the parameter grid entries related to the classifier
# ## in the current iteration while excluding the other classifier parameters.
# # dict comprehension method courtesy of BernBarn at:
# # http://stackoverflow.com/questions/14507591/python-dictionary-comprehension
# param_for_class = {key: value for key,value in param_grid.iteritems() if
# re.search(key.split("_")[0],'features ' + classifier)}
# lb_view.abort()
# time.sleep(4)
# model = (classifier, classifiers[classifier])
# all_parameters, all_tasks = grid_search(lb_view, model,\
# cv_split_filenames,\
# param_for_class)
# while progress(all_tasks) < .99:
# print("Tasks completed for {0}: {1}%".format(classifier,100 * progress(all_tasks)))
# time.sleep(30)
# [t.wait() for tasks in all_tasks for t in tasks]
# best_classifiers[classifier] = find_best(all_parameters,\
# all_tasks,\
# n_top = 1)
# PickleBestClassifers(best_classifiers,"Best_Classifiers.pkl")
# else:
# ## After initial run of grid search, reference the pickled outcomes for the
# ## rest of the analysis. Actual searching process takes a while
# ## on my system setup, so I want to run it as few times as possible.
# savedResults = open("Best_Classifiers.pkl",'r')
# best_classifiers = pickle.load(savedResults)
###Output
Overwriting ec2_cluster_files/parallel_GridSearch.py
###Markdown
*Naive grid search is very slow: the number of model fits grows multiplicatively with the number of cross-validation folds and with the parameter values for both the feature-selection stage and the classifier that have to be searched over. For example, the SVC portion of the grid above alone spans 9 values of C × 9 values of gamma × 3 kernels × 9 percentile settings = 2,187 combinations, each refit on every one of the 10 folds. It could possibly give better results, but not necessarily, and the cost in time is significant, especially when searching over as many parameters and cv-folds as I am. This code should be run on many cores in order to keep the run time down.* **** Question 4***What does it mean to tune the parameters of an algorithm, and what can happen if you don’t do this well? How did you tune the parameters of your particular algorithm?*** ***Answer:*** Tuning the parameters means adjusting those components of the algorithm that control how it fits your data. To tune the parameters we have to decide on a metric of success and compare the results of that metric across different parameter values, selecting the 'tuned' parameters that give the best scores. These parameters have great influence over bias and variance, which means that getting this step wrong can introduce high bias or high variance into our models. I chose to address the problem of tuning my parameters using a randomized cross-validated grid search over a range of values for many of my tunable parameters. This method automates the process to a degree; I still have to provide the ranges of parameters to be tested. The model I used implemented a tuned feature-selection step, a tuned PCA and a tuned logistic regression classifier. Essentially I perturbed the number of features to include, varied the number of principal components to include, and finally adjusted the strength of the regularization term in the classifier. By performing all these steps I was able to tune my entire model to relatively decent success in identifying poi's and non-poi's. **** Question 5***What is validation, and what’s a classic mistake you can make if you do it wrong? How did you validate your analysis?*** ***Answer:*** Validation is the step taken to assess the accuracy of our trained model. It is necessary to ensure that our model will generalize well, which is, after all, the primary purpose of a machine learning algorithm. A classic mistake with validation is failing to appropriately separate testing and training data. If any of the data points used to train the model, whether to fit the data or to tune the model's hyper-parameters, are also used to test the trained model, then those points are no longer unseen data. We should always test our model using data that was not involved in fitting the model or in training its hyper-parameters. This is extremely important because if we test a model on data it was trained with, we fail to address the primary directive of any forecasting model, which is to provide accurate assessments on data not seen before. We would be allowing information from the training phases to leak into our evaluation, meaning that we would have no way of accurately predicting how well our model will perform on completely unknown data. Without proper validation we will have no idea how effective our algorithm really is, which is not a good thing, especially if money is on the line.
I validated my analysis by first splitting the original data set with a stratified split into a training set of features and corresponding labels and a test set of the same format. After the separation I passed the training set into a grid search, which used a stratified shuffle split to split my training data again, so that I could not only fit the model but also select the 'best' hyper-parameters. By using this cross-validated method I was able to find the best hyper-parameters without validating on the same data used to fit the model. This meant that, after finding the best parameters, I could test the final model on the held-out test set to validate the overall model score without using any of the data points involved in fitting the model or training the hyper-parameters. **** Question 6 ***Give at least 2 evaluation metrics and your average performance for each of them. Explain an interpretation of your metrics that says something human-understandable about your algorithm’s performance.***
###Code
## transform and predict on the X_Test split of the data
pred = best_classifiers["LRC"].predict(X_test)
## return classification report of prediction results compared to truth values
print "Logistic Regression Classifier Report:" + "\n" + (classification_report(y_test,
pred,
target_names=['non-poi','poi']
))
###Output
Logistic Regression Classifier Report:
precision recall f1-score support
non-poi 1.00 0.86 0.92 14
poi 0.33 1.00 0.50 1
avg / total 0.96 0.87 0.89 15
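###Markdown
As a quick sanity check on the report above, the per-class numbers can be reproduced by hand. The confusion-matrix counts below are inferred from the precision, recall and support values (they are not printed directly by the report): 1 true poi prediction, 2 non-pois wrongly flagged as poi, 0 missed pois, and 12 correctly labelled non-pois out of the 15 test points. $$\text{poi precision} = \frac{TP}{TP+FP} = \frac{1}{1+2} \approx 0.33 \qquad \text{poi recall} = \frac{TP}{TP+FN} = \frac{1}{1+0} = 1.00$$ $$\text{non-poi precision} = \frac{12}{12+0} = 1.00 \qquad \text{non-poi recall} = \frac{12}{12+2} \approx 0.86$$ These match the rows printed above.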
###Markdown
> NOTE: The following answer is based on my results from testing on my holdout test set. The results from `tester.py` are different; however, the essence of the results is the same. The tester uses more 'support' data points, both non-poi and poi, so the extremes in my results, the 100% values, are mellowed out to much lower than 100%. That being said, the results are proportionally very similar between my test data and the `tester.py` outcome. ***Answer:*** The classification report above tells us a few important things about the model's performance. The 'precision' metric indicates the percentage of relevant (that is, positive and correct) predictions made by the model over all the positive predictions made by the model (correct or incorrect). In other words, how many of the positive predictions made by the model are actually relevant when compared to ground truth. 'Recall', on the other hand, compares the number of relevant predictions made by the model vs. the total number of relevant values that actually exist in reality. In other words, how many of the real relevant ground truth values does our model identify compared to the total number that actually exists. The classification report above further breaks down the precision and recall for each of the possible labels, non-poi and poi. According to this classification report our model does an excellent job with non-poi labels: of the 12 non-pois predicted by our classifier, all 12 were in reality non-pois (non-poi 'precision' of 1.00). The model has less success with poi's: a total of 3 data points were predicted to be persons of interest, but in reality only 1 of them was a person of interest (poi 'precision' of 0.33). Our model, however, has more success when it comes to finding all of the actual poi labels that exist in the test set: there was 1 real poi and our model identified it correctly as a poi (poi 'recall' of 1.00). We can also see that it identifies most of the non-pois in the test data set: there were 14 real non-pois and our classifier correctly predicted 12 of them (non-poi 'recall' of 0.86). To achieve the higher success rate in recall our model was more liberal in predicting pois; as a result, this model is likely to cause a greater workload for those tasked with looking into individuals suspected of fraudulent behaviour, a trade-off that in my mind is worth it. Higher recall will reduce the rate of false negatives among the actual pois at the expense of a higher false positive rate. I'm more biased towards an algorithm that will correctly identify all of the poi's that exist in reality even if it means flagging more innocent individuals for further scrutiny. Finally, the weighted-average F1 score of .89, which combines precision and recall across both classes, suggests a fairly good model overall. **** Final POI Script This final version of the poi_id.py script contains all of the functions necessary to produce the final pickle dumps for the tester. The code below is an aggregation of most of the analysis conducted thus far.
###Code
%%writefile poi_id.py
#!/usr/bin/python
#Author: Zach Farmer
#Purpose: Generate pkl files containing my dataset, list of features, and final
#optimized classifier
import sys
import numpy as np
import os
import pickle
import re
import scipy.stats as sp
from pprint import pprint
from create_my_dataset import newFeatures, dropFeatures, removeOutliers, fixFinancialData
sys.path.append("tools/")
from feature_format import featureFormat, targetFeatureSplit
from tester import dump_classifier_and_data
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.externals import joblib
from IPython.parallel import Client
from sklearn.grid_search import ParameterGrid
from sklearn.grid_search import RandomizedSearchCV
from sklearn import svm
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import classification_report
def dropFeatures(features, remove_list):
"""
Parameters:
features = Python list of unique features in the data_dict.
remove_list = Python list of features to be removed.(drop
non-numeric features such as the email address)
Output:
Python list of unique features sans the features in the remove
list.
"""
## Method courtesy of user: Donut at:
## http://stackoverflow.com/questions/4211209/remove-all-the-elements-that-occur-in-one-list-from-another
features_remove = remove_list
learning_features = [feature for feature in features if feature not in features_remove]
return learning_features
## Following code adapted from Udacity's Intro to Machine learning lesson 11 Visualizing
## Your New Feature Quiz
def computeFraction(feature_1, feature_2):
"""
Parameters:
Two numeric feature vectors for which we want to compute a ratio
between
Output:
Return fraction or ratio of feature_1 divided by feature_2
"""
fraction = 0.
if feature_1 == "NaN":
fraction = 0.0
elif feature_2 == "NaN":
fraction = 0.0
else:
fraction = int(feature_1) / float(feature_2)
return fraction
def newFeatures(data_dict):
"""
Parameters:
data_dict provided by Udacity instructors
Output:
data_dict with new features (hard-coded)
"""
## The following is not extensible or abstractable to any features other
## than what is hard-coded below.
for name in data_dict:
from_poi_to_this_person = data_dict[name]["from_poi_to_this_person"]
to_messages = data_dict[name]["to_messages"]
fraction_from_poi = computeFraction( from_poi_to_this_person, to_messages )
data_dict[name]["fraction_from_poi"] = fraction_from_poi
from_this_person_to_poi = data_dict[name]["from_this_person_to_poi"]
from_messages = data_dict[name]["from_messages"]
fraction_to_poi = computeFraction( from_this_person_to_poi, from_messages )
data_dict[name]["fraction_to_poi"] = fraction_to_poi
salary = data_dict[name]['salary']
total_payments = data_dict[name]['total_payments']
salary_to_totalPayment_ratio = computeFraction(salary, total_payments)
data_dict[name]['salary_to_totalPayment_ratio'] = salary_to_totalPayment_ratio
salary = data_dict[name]['salary']
total_stock_value = data_dict[name]['total_stock_value']
salary_to_stockValue_ratio = computeFraction(salary, total_stock_value)
data_dict[name]['salary_to_stockValue_ratio'] = salary_to_stockValue_ratio
return data_dict
def PickleBestClassifers(best_classifiers, file_Name):
"""
Parameters:
best_classifiers = A python dictionary containing the names of
classifiers as keys and a pipeline object containing the optimized
parameters for the feature selection and classifier.
file_name = The name that the pickled file will be saved under
as a python string.
Output:
(none) pickled object saved to the local directory.
"""
# Pickle the results
fileObject = open(file_Name,'wb')
pickle.dump(best_classifiers, fileObject)
fileObject.close()
print "{0} saved to local directory as a pickle file".format(file_Name)
return None
def removeOutliers(data_dict,listOfOutliers):
"""
Parameters:
data_dict= The data_dict provided by Udacity.
listOfOutliers = Python List of outliers (key names)
to remove from the data_dict.
Output:
Updated data_dict where the outliers have been removed.
"""
for outlier in listOfOutliers:
try:
data_dict.pop(outlier,0)
except ValueError:
pass
return data_dict
def generateFeaturesList(my_dataset):
"""
Parameters:
my_dataset = Updated data_dict which includes the new features and has had
outliers removed.
Output:
A python list containing all of the features to be used in the fitting and
testing of the classifier.
"""
## Find unique features in my_dataset
features = [value for value in my_dataset.itervalues() for value in value.keys()]
unique_features = list(set(features))
## Remove non-numeric features (email_address)
reduced_features = dropFeatures(unique_features, ['email_address'])
## Method for moving an item in a list to a new position found at:
## http://stackoverflow.com/questions/3173154/move-an-item-inside-a-list
## posted by nngeek
## ensure that 'poi' is the first value in the feature list
try:
reduced_features.remove('poi')
reduced_features.insert(0, 'poi')
except ValueError:
pass
return reduced_features
if __name__=="__main__":
with open("final_project_dataset.pkl", "r") as data_file:
data_dict = pickle.load(data_file)
## The following if statement is run only if the optimized classifier/feature
## select pipeline object is not found in the local directory in the pickle file.
## This block of code will rerun the entire grid search and pipeline process to
## generate the content that should be available in the pickle file. All random states
## have been set, I believe the outcome should be the same each time the code is
## run
if "Best_Classifiers.pkl" not in os.listdir('.'):
## set random seed generator for the sciy.stats
np.random.seed(42)
## Add new feature to my dataset
my_dataset = newFeatures(data_dict)
## Remove outliers
my_dataset = removeOutliers(my_dataset,['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
## Fix bad financial data
my_dataset = fixFinancialData(my_dataset)
## Find unique features in my_dataset
features = [value for value in my_dataset.itervalues() for value in value.keys()]
unique_features = list(set(features))
## Remove non-numeric features, return feature list (email_address)
features_list = dropFeatures(unique_features, ['email_address'])
## Method for moving an item in a list to a new position found at:
## http://stackoverflow.com/questions/3173154/move-an-item-inside-a-list
## posted by nngeek
## ensure that 'poi' is the first value in the feature list
try:
features_list.remove('poi')
features_list.insert(0, 'poi')
except ValueError:
pass
### Extract features and labels convert to numpy arrays
data = featureFormat(my_dataset, features_list, sort_keys=True)
labels, numpy_features = targetFeatureSplit(data)
## Create training and test splits on all of the features, feature
## selection to be performed in the pipeline
X_train, X_test, y_train, y_test = train_test_split(numpy_features,\
labels,\
test_size=0.1,\
random_state=42)
## set randomized grid search cross-validation method
cv = StratifiedShuffleSplit(y_train,\
n_iter = 30,\
test_size = .3,\
random_state=42)
## list of classifier to compare
classifiers = {
"GNB": GaussianNB(),
"SVC": svm.SVC(),
"RDF": RandomForestClassifier(),
"ADB": AdaBoostClassifier(DecisionTreeClassifier(class_weight='balanced')),
"LRC": LogisticRegressionCV(random_state = 42)
}
## dictionary of parameters for the randomized grid search cv
param_grid = dict(
features__pca__n_components = sp.randint(1,len(X_train[0])),
features__univ_select__percentile = range(1,100,10),
SVC__C = sp.expon(scale = 100),
SVC__gamma = sp.expon(scale=.1),
SVC__kernel = ['rbf', 'linear','sigmoid'],
SVC__class_weight = ['balanced'],
RDF__n_estimators = range(1,500,1),
RDF__criterion = ['gini','entropy'],
RDF__max_depth = range(1,len(X_train[0]),1),
RDF__class_weight = ['balanced'],
ADB__n_estimators = range(1,500,1),
ADB__learning_rate = sp.expon(scale= 300),
LRC__Cs = range(0,10,1),
LRC__class_weight = ['balanced']
)
best_classifiers = {}
for classifier in classifiers:
## Method for supplying just the parameter grid entries related to the classifier
## in the current iteration while excluding the other classifier parameters.
# dict comprehension method courtesy of BernBarn at:
# http://stackoverflow.com/questions/14507591/python-dictionary-comprehension
param_for_class = {key: value for key,value in param_grid.iteritems() if
re.search(key.split("_")[0],'features ' + classifier)}
## Feature selection method, same for all classifiers
pca = PCA()
selection = SelectPercentile()
## Note to self: Only implement when using randomized grid search.
## PCA takes a long time to run, not a good choice with exhaustive
## grid search
feature_select = FeatureUnion([("pca",pca),("univ_select",selection)])
## Activate the classifier for the current loop
clf = classifiers[classifier]
## Pipeline feature selection, feature scaling and classifier for optimization
pipeline = Pipeline([
("features", feature_select),
("scaler", MinMaxScaler()),
(classifier,clf)
])
## use f1_weighted scoring to account for heavily skewed classes
search = RandomizedSearchCV(estimator = pipeline,
param_distributions = param_for_class,
scoring = 'f1_weighted',
n_jobs=-1,
cv = cv,
n_iter = 20,
verbose = 1,
error_score = 0,
random_state = 42)
## Save the results of the combination
results = search.fit(X_train,y_train)
best_classifiers[classifier] = results.best_estimator_
## Save the best classifier pipeline objects to local directory using pickle
PickleBestClassifers(best_classifiers,"Best_Classifiers.pkl")
else:
savedResults = open("Best_Classifiers.pkl",'r')
best_classifiers = pickle.load(savedResults)
## Remove Outliers
data_dict = removeOutliers(data_dict,['TOTAL','THE TRAVEL AGENCY IN THE PARK'])
### Store to my_dataset for easy export below.
my_dataset = newFeatures(data_dict)
### features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".
features_list = generateFeaturesList(my_dataset)
## Best classifier
clf = best_classifiers["LRC"]
### Dump classifier, dataset, and features_list so anyone can
### check your results. You do not need to change anything below, but make sure
### that the version of poi_id.py that you submit can be run on its own and
### generates the necessary .pkl files for validating your results.
dump_classifier_and_data(clf, my_dataset, features_list)
###Output
Overwriting poi_id.py
###Markdown
Test Classifier Test my trained classifier using the Udacity-provided tester Python script.
###Code
# %load tester.py
#!/usr/bin/pickle
"""
a basic script for importing student's POI identifier,
and checking the results that they get from it
requires that the algorithm, dataset, and features list
be written to my_classifier.pkl, my_dataset.pkl, and
my_feature_list.pkl, respectively
that process should happen at the end of poi_id.py
"""
import pickle
import sys
from sklearn.cross_validation import StratifiedShuffleSplit
sys.path.append("tools/")
from feature_format import featureFormat, targetFeatureSplit
PERF_FORMAT_STRING = "\
\tAccuracy: {:>0.{display_precision}f}\tPrecision: {:>0.{display_precision}f}\t\
Recall: {:>0.{display_precision}f}\tF1: {:>0.{display_precision}f}\tF2: {:>0.{display_precision}f}"
RESULTS_FORMAT_STRING = "\tTotal predictions: {:4d}\tTrue positives: {:4d}\tFalse positives: {:4d}\
\tFalse negatives: {:4d}\tTrue negatives: {:4d}"
def test_classifier(clf, dataset, feature_list, folds = 1000):
data = featureFormat(dataset, feature_list, sort_keys = True)
labels, features = targetFeatureSplit(data)
cv = StratifiedShuffleSplit(labels, folds, random_state = 42)
true_negatives = 0
false_negatives = 0
true_positives = 0
false_positives = 0
for train_idx, test_idx in cv:
features_train = []
features_test = []
labels_train = []
labels_test = []
for ii in train_idx:
features_train.append( features[ii] )
labels_train.append( labels[ii] )
for jj in test_idx:
features_test.append( features[jj] )
labels_test.append( labels[jj] )
### fit the classifier using training set, and test on test set
clf.fit(features_train, labels_train)
predictions = clf.predict(features_test)
for prediction, truth in zip(predictions, labels_test):
if prediction == 0 and truth == 0:
true_negatives += 1
elif prediction == 0 and truth == 1:
false_negatives += 1
elif prediction == 1 and truth == 0:
false_positives += 1
elif prediction == 1 and truth == 1:
true_positives += 1
else:
print "Warning: Found a predicted label not == 0 or 1."
print "All predictions should take value 0 or 1."
print "Evaluating performance for processed predictions:"
break
try:
total_predictions = true_negatives + false_negatives + false_positives + true_positives
accuracy = 1.0*(true_positives + true_negatives)/total_predictions
precision = 1.0*true_positives/(true_positives+false_positives)
recall = 1.0*true_positives/(true_positives+false_negatives)
f1 = 2.0 * true_positives/(2*true_positives + false_positives+false_negatives)
f2 = (1+2.0*2.0) * precision*recall/(4*precision + recall)
print clf
print PERF_FORMAT_STRING.format(accuracy, precision, recall, f1, f2, display_precision = 5)
print RESULTS_FORMAT_STRING.format(total_predictions, true_positives, false_positives, false_negatives, true_negatives)
print ""
except:
print "Got a divide by zero when trying out:", clf
print "Precision or recall may be undefined due to a lack of true positive predictions."
CLF_PICKLE_FILENAME = "my_classifier.pkl"
DATASET_PICKLE_FILENAME = "my_dataset.pkl"
FEATURE_LIST_FILENAME = "my_feature_list.pkl"
def dump_classifier_and_data(clf, dataset, feature_list):
with open(CLF_PICKLE_FILENAME, "w") as clf_outfile:
pickle.dump(clf, clf_outfile)
with open(DATASET_PICKLE_FILENAME, "w") as dataset_outfile:
pickle.dump(dataset, dataset_outfile)
with open(FEATURE_LIST_FILENAME, "w") as featurelist_outfile:
pickle.dump(feature_list, featurelist_outfile)
def load_classifier_and_data():
with open(CLF_PICKLE_FILENAME, "r") as clf_infile:
clf = pickle.load(clf_infile)
with open(DATASET_PICKLE_FILENAME, "r") as dataset_infile:
dataset = pickle.load(dataset_infile)
with open(FEATURE_LIST_FILENAME, "r") as featurelist_infile:
feature_list = pickle.load(featurelist_infile)
return clf, dataset, feature_list
def main():
### load up student's classifier, dataset, and feature_list
clf, dataset, feature_list = load_classifier_and_data()
### Run testing script
test_classifier(clf, dataset, feature_list)
if __name__ == '__main__':
main()
###Output
Pipeline(steps=[('features', FeatureUnion(n_jobs=1,
transformer_list=[('pca', PCA(copy=True, n_components=22, whiten=False)), ('univ_select', SelectPercentile(percentile=61,
score_func=<function f_classif at 0x104e9a410>))],
transformer_weights=None)), ('scaler', MinMaxScaler(copy=True...'l2', random_state=42,
refit=True, scoring=None, solver='lbfgs', tol=0.0001, verbose=0))])
Accuracy: 0.80680 Precision: 0.36377 Recall: 0.59950 F1: 0.45279 F2: 0.53072
Total predictions: 15000 True positives: 1199 False positives: 2097 False negatives: 801 True negatives: 10903
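###Markdown
The tester's summary line can be reproduced directly from the printed counts, which is a useful check on how the metrics are defined: $$\text{Precision} = \frac{1199}{1199+2097} \approx 0.364 \qquad \text{Recall} = \frac{1199}{1199+801} \approx 0.600$$ $$\text{Accuracy} = \frac{1199+10903}{15000} \approx 0.807 \qquad F_1 = \frac{2 \times 1199}{2 \times 1199 + 2097 + 801} \approx 0.453$$ In other words, over the tester's 1000 stratified splits the model flags roughly 60% of the true poi instances, at a precision of roughly 36% (about 1.75 false positives for every true positive).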
###Markdown
*** References and Resources
###Code
# %%writefile References.txt
# The following is a list of resources and references that I utilized in the course of this report. Some of these resources were extensively utilized, such as the scikit-learn modules, and I will not reference every single page that I used; rather, I list the main web page and give a general description of how I engaged with each resource.
# Scikit-Learn: Machine Learning in Python: http://scikit-learn.org/stable/
# Heavily used for detailed examples and explanations of most of the machine learning pipeline.
# From selecting features, scaling, classifiers, pipelines and hyper-parameter optimization I found Sklearn's website invaluable.
# pandas 0.17.1 documentation: http://pandas.pydata.org/pandas-docs/stable/
# - http://pandas.pydata.org/pandas-docs/stable/visualization.html
# Used as references for the pandas dataframe utilized in my code for data set review and
# plotting.
# Oliver Grisel's parallel_ml_tutorial: https://github.com/ogrisel/parallel_ml_tutorial/
# Advanced Machine Learning Tutorial with scikit-learn: https://www.youtube.com/watch?v=iFkRt3BCctg
# These two resources were instrumental in helping me construct my parallelizable code for
# distribution on an aws ec2 cluster.
# Starcluster : http://star.mit.edu/cluster/docs/latest/index.html
# MIT's StarCluster was very helpful in implementing a naive grid search on an AWS EC2
# cluster. While ultimately I did not include the results of this exercise in my final results, the process of parallelizing my grid search and distributing it over a large cluster was a valuable
# learning opportunity.
# Stack Overflow:
# - http://stackoverflow.com/questions/4211209/remove-all-the-elements-that-occur-in-one-list-from-another
# - http://stackoverflow.com/questions/3173154/move-an-item-inside-a-list
# - http://stackoverflow.com/questions/14507591/python-dictionary-comprehension
# - http://stackoverflow.com/questions/29930340/want-to-plot-pandas-dataframe-as-multiple-histograms-with-log10-scale-x-axis
# As usual Stack Overflow was invaluable for solving all those little problems that come up
# in the implementation of your ideas as actual code. Most of the code referenced in these Stack Overflow pages is related to vagaries of the Python language, for example which approach is the most Pythonic way to accomplish a particular goal.
# For those interested in reading up on the methodology and validity of a randomized grid search you
# can find a seminal paper on the subject of "Random Search for Hyper-Parameter Optimization"
# by James Bergstra and Yoshua Bengio here:
# - http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
###Output
_____no_output_____ |
notebooks/GP_model.ipynb | ###Markdown
Generalized Gaussian Process Using PyMC3's Latent Gaussian Process implementation- API: https://docs.pymc.io/api/gp/implementations.html- Example: https://docs.pymc.io/notebooks/GP-Latent.htmlExample data: `boston_medical_center_2020-04-29_to_2020-06-22.csv`- Column: `hospitalized_total_covid_patients_suspected_and_confirmed_including_icu` GoalGiven a series of past daily counts (admissions, census, etc.) $$y_1, y_2, ..., y_T$$Assuming $T$ is today, want to predict counts $$y_{T+1}, y_{T+2}, ..., y_{T+F}$$ for $F$ days ahead. ModelSuppose that $y$ is Poisson distributed over the exponential of a latent Gaussian Process, i.e.,$$y_t \sim \text{Poisson}( \exp(f_t) )$$where $f$ is modeled by a Gaussian Process$$ f_t \sim N(m(t), k(t,t'))$$with constant mean$$m(t) = c$$and squared exponential covariance$$k(t,t') = a^2 \exp\left(-\frac{(t-t')^2}{2l^2}\right)$$ Parameters and their PriorsGP mean: $$c \sim \text{TruncatedNormal}(4, 2, \text{low}=0)$$SqExp cov amplitude: $$a \sim \text{HalfNormal}(\sigma=2)$$SqExp cov time-scale: $$l \sim \text{TruncatedNormal}(10, 2, \text{low}=0)$$ TrainingLet the subscript "past" represent indices $1$ through $T$, and the subscript "future" represent indices $T+1$ through $T+F$.1. Specify a Latent GP with the mean and covariance functions defined above.1. Define a `prior` distribution over all $f$ (i.e., condition on both $t_\text{past}$ and $t_\text{future}$).1. Define the observed variable $$y_\text{past} \sim \text{Poisson}(\mu = \exp(f_\text{past}))$$ Set `mu` to be the exponential of the first $T$ $f$ values, i.e., only $f_\text{past}$.1. Use PyMC3's MCMC to draw samples from the posterior $$c^s, a^s, l^s, f^s \sim p(c, a, l, f_\text{past}, f_\text{future} | y_\text{past})$$
###Code
import pymc3 as pm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import theano
import theano.tensor as tt
theano.config.gcc.cxxflags = "-Wno-c++11-narrowing"
import logging
logger = logging.getLogger('arviz')
logger.setLevel(logging.ERROR)
df = pd.read_csv('../mass_dot_gov_datasets/boston_medical_center_2020-04-29_to_2020-06-22.csv')
y = df['hospitalized_total_covid_patients_suspected_and_confirmed_including_icu'].astype(float)
T = len(y)
F = 7
t = np.arange(T+F)[:,None]
with pm.Model() as model:
c = pm.TruncatedNormal('mean', mu=4, sigma=2, lower=0)
mean_func = pm.gp.mean.Constant(c=c)
a = pm.HalfNormal('amplitude', sigma=2)
l = pm.TruncatedNormal('time-scale', mu=10, sigma=2, lower=0)
cov_func = a**2 * pm.gp.cov.ExpQuad(input_dim=1, ls=l)
gp = pm.gp.Latent(mean_func=mean_func, cov_func=cov_func)
f = gp.prior('f', X=t)
y_past = pm.Poisson('y_past', mu=tt.exp(f[:T]), observed=y)
y_logp = pm.Deterministic('y_logp', y_past.logpt)
with model:
trace = pm.sample(5000, tune=1000, target_accept=.99, random_seed=42, chains=1)
summary = pm.summary(trace)['mean'].to_dict()
for key in ['mean', 'amplitude', 'time-scale']:
print(key, summary[key])
print('\nTraining score:')
print(np.log(np.mean(np.exp(trace.get_values('y_logp', chains=0)))) / T)
pm.traceplot(trace);
###Output
_____no_output_____
###Markdown
Forecasting Procedure1. Define a predictive distribution for future $y$ values $$y_\text{future} \sim \text{Poisson}(\mu = \exp(f_\text{future}))$$1. Use PyMC3's `sample_posterior_predictive` and the posterior samples collected during training to produce forecasts.
###Code
with model:
y_future = pm.Poisson('y_future', mu=tt.exp(f[-F:]), shape=F)
forecasts = pm.sample_posterior_predictive(trace, vars=[y_future], random_seed=42)
samples = forecasts['y_future']
low = np.zeros(F)
high = np.zeros(F)
mean = np.zeros(F)
median = np.zeros(F)
for i in range(F):
low[i] = np.percentile(samples[:,i], 2.5)
high[i] = np.percentile(samples[:,i], 97.5)
median[i] = np.percentile(samples[:,i], 50)
mean[i] = np.mean(samples[:,i])
plt.figure(figsize=(8,6))
x_future = np.arange(1,F+1)
plt.errorbar(x_future, median,
yerr=[median-low, high-median],
capsize=2, fmt='.', linewidth=1,
label='2.5, 50, 97.5 percentiles');
plt.plot(x_future, mean, 'x', label='mean');
x_past = np.arange(-4,1)
plt.plot(x_past, y[-5:], 's', label='observed')
plt.legend();
plt.title('Forecasts');
plt.xlabel('Days ahead');
###Output
_____no_output_____
###Markdown
Heldout Scoring Procedure1. Partition the data to treat the first 80% as "past" and the last 20% as "future." We'll use $y_\text{past}$ as the training set, and $y_\text{future}$ as the validation set.1. Train the model using $y_\text{past}$.1. Define the predictive distribution $$y_\text{future} \sim \text{Poisson}(\mu = \exp(f_\text{future}))$$ Set `observed` to be the observed $y_\text{future}$ values.1. Define a `Deterministic` distribution that computes the logp of the observed variable $y_\text{future}$.1. Use `sample_posterior_predictive` to compute the log probability of $y_\text{future}$ conditioned on each posterior sample {$c^s, a^s, l^s, f^s$}.1. Use Monte Carlo integration to estimate the log probability of the heldout set: $$\log p(y_\text{future} | y_\text{past}) = \log \frac{1}{S} \sum_{s=1}^S p(y_\text{future} | c^s, a^s, l^s, f^s, y_\text{past})$$
###Code
df = pd.read_csv("../mass_dot_gov_datasets/boston_medical_center_2020-04-29_to_2020-06-22.csv")
y = df['hospitalized_total_covid_patients_suspected_and_confirmed_including_icu'].astype(float)
T = int(.8 * len(y))
y_tr = y[:T]
y_va = y[T:]
F = len(y_va)
t = np.arange(T+F)[:,None]
with pm.Model() as model:
c = pm.TruncatedNormal('mean', mu=4, sigma=2, lower=0)
mean_func = pm.gp.mean.Constant(c=c)
a = pm.HalfNormal('amplitude', sigma=2)
l = pm.TruncatedNormal('time-scale', mu=20, sigma=5, lower=0)
cov_func = a**2 * pm.gp.cov.ExpQuad(input_dim=1, ls=l)
gp = pm.gp.Latent(mean_func=mean_func, cov_func=cov_func)
f = gp.prior('f', X=t)
y_past = pm.Poisson('y_past', mu=tt.exp(f[:T]), observed=y_tr)
y_logp = pm.Deterministic('y_logp', y_past.logpt)
with model:
trace = pm.sample(5000, tune=1000, chains=1, target_accept=.99, random_seed=42)
summary = pm.summary(trace)['mean'].to_dict()
for key in ['mean', 'amplitude', 'time-scale']:
print(key, summary[key])
print('\nTraining score:')
print(np.log(np.mean(np.exp(trace.get_values('y_logp', chains=0)))) / T)
pm.traceplot(trace);
with model:
y_future = pm.Poisson('y_future', mu=tt.exp(f[-F:]), observed=y_va)
lik = pm.Deterministic('lik', y_future.logpt)
logp_list = pm.sample_posterior_predictive(trace, vars=[lik], keep_size=True)
print('Heldout score:')
print(np.log(np.mean(np.exp(logp_list['lik'][0]))) / F)
with model:
y_pred = pm.Poisson('y_pred', mu=tt.exp(f[T:]), shape=F)
forecasts = pm.sample_posterior_predictive(trace, vars=[y_pred], random_seed=42)
samples = forecasts['y_pred']
low = np.zeros(F)
high = np.zeros(F)
mean = np.zeros(F)
median = np.zeros(F)
for i in range(F):
low[i] = np.percentile(samples[:,i], 2.5)
high[i] = np.percentile(samples[:,i], 97.5)
median[i] = np.percentile(samples[:,i], 50)
mean[i] = np.mean(samples[:,i])
xticks = np.arange(F)
plt.figure(figsize=(8,6))
plt.errorbar(xticks, median,
yerr=[median-low, high-median],
capsize=2, fmt='.', linewidth=1,
label='2.5, 50, 97.5 percentiles');
plt.plot(xticks, mean, 'x', label='mean');
plt.plot(xticks, y_va, 's', label='observed');
plt.legend();
plt.title('Forecasts');
plt.xlabel('Day');
###Output
_____no_output_____ |
02-more-on-models.ipynb | ###Markdown
More on models
###Code
from __future__ import print_function
import numpy as np
import sncosmo
%matplotlib inline
# Other models (this one is a IIP)
model = sncosmo.Model(source='snana-2004hx')
###Output
_____no_output_____
###Markdown
(see http://sncosmo.readthedocs.org/en/latest/source-list.html for more)
###Code
print(model)
# all the same methods work
model.set(amplitude=1.e-10)
model.bandmag('sdssg', 'ab', 10.)
sncosmo.plot_lc(model=model, bands=['sdssg', 'sdssr', 'sdssi', 'sdssz']);
###Output
_____no_output_____
###Markdown
Adding host galaxy dust
###Code
dust = sncosmo.CCM89Dust()
print(dust)
model = sncosmo.Model(source='snana-2004hx', effects=[dust],
effect_names=['host'], effect_frames=['rest'])
print(model)
model.set(hostebv=0.3, hostr_v=2.1)
sncosmo.plot_lc(model=model, bands=['sdssg', 'sdssr', 'sdssi', 'sdssz']);
###Output
_____no_output_____ |
modules/hand_tracking/hand_model.ipynb | ###Markdown
Generating Test dataset
###Code
%%time
# listing images
import os
import cv2  # OpenCV import, needed for cv2.imread below
img_list = dict()
for file_name in os.listdir("temp"):
img_list[file_name] = {'source': cv2.imread("temp/" + file_name)[:,:,::-1]}
img_list[file_name]['landmark'], img_list[file_name]['box'] = detector(img_list[file_name]['source'])
import pickle
with open('dataset.pkl', 'wb') as file:
pickle.dump(img_list, file)
###Output
_____no_output_____ |
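###Markdown
A small, assumed usage sketch (not part of the original notebook) showing how the pickled test dataset written above could be loaded back for later experiments; it only relies on `dataset.pkl`, and `detector` is assumed to be the hand-landmark detector defined earlier in the notebook.
###Code
## Assumed usage sketch: reload the pickled test dataset written above
import pickle

with open('dataset.pkl', 'rb') as file:
    loaded = pickle.load(file)

## each entry holds the RGB image plus the detector's landmark and box outputs
for name, item in loaded.items():
    print(name, item['source'].shape)
    break
###Output
_____no_output_____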
Final Project (ADEGBITE AYOADE ABEL).ipynb | ###Markdown
Classification with Python In this notebook we try to practice all the classification algorithms that we learned in this course. We load a dataset using the Pandas library, apply the following algorithms, and find the best one for this specific dataset using accuracy evaluation methods. Let's first load the required libraries:
###Code
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import numpy as np
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
###Output
_____no_output_____
###Markdown
About dataset This dataset is about past loans. The __Loan_train.csv__ data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:

| Field | Description |
|----------------|---------------------------------------------------------------------------------------|
| Loan_status | Whether a loan is paid off or in collection |
| Principal | Basic principal loan amount at the |
| Terms | Origination terms which can be weekly (7 days), biweekly, and monthly payoff schedule |
| Effective_date | When the loan got originated and took effect |
| Due_date | Since it's a one-time payoff schedule, each loan has one single due date |
| Age | Age of applicant |
| Education | Education of applicant |
| Gender | The gender of applicant |

Let's download the dataset
###Code
!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
###Output
--2019-07-11 00:43:34-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23101 (23K) [text/csv]
Saving to: ‘loan_train.csv’
100%[======================================>] 23,101 --.-K/s in 0.002s
2019-07-11 00:43:34 (12.8 MB/s) - ‘loan_train.csv’ saved [23101/23101]
###Markdown
Load Data From CSV File
###Code
df = pd.read_csv('loan_train.csv')
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Convert to date time object
###Code
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
###Output
_____no_output_____
###Markdown
Data visualization and pre-processing Let's see how many of each class are in our data set
###Code
df['loan_status'].value_counts()
###Output
_____no_output_____
###Markdown
260 people have paid off the loan on time, while 86 have gone into collection. Let's plot some columns to understand the data better:
###Code
# notice: installing seaborn might takes a few minutes
!conda install -c anaconda seaborn -y
import seaborn as sns
bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
###Output
_____no_output_____
###Markdown
Pre-processing: Feature selection/extraction Let's look at the day of the week people get the loan
###Code
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
###Output
_____no_output_____
###Markdown
We see that people who get the loan at the end of the week don't pay it off, so let's use feature binarization to set a threshold: days of the week greater than 3 are flagged as the weekend
###Code
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
df.head()
###Output
_____no_output_____
###Markdown
Convert Categorical features to numerical values Lets look at gender:
###Code
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
86% of females pay their loans, while only 73% of males pay theirs. Let's convert male to 0 and female to 1:
###Code
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
One Hot Encoding How about education?
###Code
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Features before One Hot Encoding
###Code
df[['Principal','terms','age','Gender','education']].head()
###Output
_____no_output_____
###Markdown
Use the one hot encoding technique to convert categorical variables to binary variables and append them to the feature DataFrame
###Code
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()
###Output
_____no_output_____
###Markdown
Feature selection Let's define our feature set, X:
###Code
X = Feature
X[0:5]
###Output
_____no_output_____
###Markdown
What are our labels?
###Code
y = pd.get_dummies(df['loan_status'])['PAIDOFF'].values
y[0:5]
###Output
_____no_output_____
###Markdown
Normalize Data Data standardization gives the data zero mean and unit variance (technically the scaler should be fitted on the training data only, after the train/test split)
###Code
X= preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
###Output
_____no_output_____
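###Markdown
As a side note on the parenthetical above: a minimal sketch (illustration only, not used in the rest of this notebook) of fitting the scaler on the training portion only and then applying it to the test portion. The split variable names here (X_tr, X_te, y_tr, y_te) are hypothetical.
###Code
# illustration only: fit StandardScaler on the training split, then transform both splits
from sklearn.model_selection import train_test_split
X_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.2, random_state=4)
scaler = preprocessing.StandardScaler().fit(X_tr)
X_tr = scaler.transform(X_tr)
X_te = scaler.transform(X_te)
###Output
_____no_output_____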
###Markdown
Classification Now it is your turn: use the training set to build an accurate model, then use the test set to report the accuracy of the model. You should use the following algorithms:
- K Nearest Neighbor (KNN)
- Decision Tree
- Support Vector Machine
- Logistic Regression
__Notice:__
- You can go back above and change the pre-processing, feature selection, feature extraction, and so on, to make a better model.
- You should use either scikit-learn, SciPy or NumPy libraries for developing the classification algorithms.
- You should include the code of the algorithm in the following cells.
K Nearest Neighbor (KNN) Notice: you should find the best k to build the model with the best accuracy. **Warning:** you should not use the __loan_test.csv__ for finding the best k; however, you can split your train_loan.csv into train and test sets to find the best __k__.
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
mean_acc=np.zeros(50)
std_acc = np.zeros(50)
for n in range(1,51):
knnmodel=KNeighborsClassifier(n_neighbors=n).fit(X_train,y_train)
y_pred=knnmodel.predict(X_test)
mean_acc[n-1]=metrics.accuracy_score(y_test,y_pred)
std_acc[n-1]=np.std(y_pred==y_test)/np.sqrt(y_pred.shape[0])
plt.plot(range(1,51),mean_acc,'g')
plt.fill_between(range(1,51),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 1xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
###Output
The best accuracy was with 0.7857142857142857 with k= 37
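###Markdown
Note that after the loop above, `knnmodel` is left holding the last fit (k=50) rather than the best one. As a small consistency sketch (an addition, not part of the original flow), we can refit with the best k found above so that the later test-set evaluation uses it:
###Code
# refit KNN with the best k identified above (mean_acc.argmax()+1)
knnmodel = KNeighborsClassifier(n_neighbors=mean_acc.argmax()+1).fit(X_train, y_train)
###Output
_____no_output_____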
###Markdown
Decision Tree
###Code
from sklearn.tree import DecisionTreeClassifier
dtmodel = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
dtmodel.fit(X_train,y_train)
y_pred=dtmodel.predict(X_test)
TreeAccuracy=metrics.accuracy_score(y_test,y_pred)
TreeAccuracy
###Output
_____no_output_____
###Markdown
Support Vector Machine
###Code
from sklearn import svm
svmmodel=svm.SVC(kernel='rbf')
svmmodel.fit(X_train,y_train)
y_pred=svmmodel.predict(X_test)
y_pred
metrics.accuracy_score(y_test,y_pred)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
lrmodel=LogisticRegression(C=0.01,solver='liblinear').fit(X_train,y_train)
y_pred=lrmodel.predict(X_test)
print(y_pred)
print(lrmodel.predict_proba(X_test))
metrics.accuracy_score(y_test,y_pred)
###Output
_____no_output_____
###Markdown
Model Evaluation using Test set
###Code
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
###Output
_____no_output_____
###Markdown
First, download and load the test set:
###Code
!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
###Output
--2019-07-11 02:32:38-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3642 (3.6K) [text/csv]
Saving to: ‘loan_test.csv’
100%[======================================>] 3,642 --.-K/s in 0s
2019-07-11 02:32:38 (638 MB/s) - ‘loan_test.csv’ saved [3642/3642]
###Markdown
Load Test set for evaluation
###Code
test_df = pd.read_csv('loan_test.csv')
test_df.head()
test_df['effective_date']=pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
Feature_test = test_df[['Principal','terms','age','Gender','weekend']]
Feature_test = pd.concat([Feature_test,pd.get_dummies(test_df['education'])], axis=1)
Feature_test.drop(['Master or Above'], axis = 1,inplace=True)
Feature_test.head()
X_testset=Feature_test
y_testset=pd.get_dummies(test_df['loan_status'])['PAIDOFF'].values
y_testset
y_pred_knn=knnmodel.predict(X_testset)
y_pred_dt=dtmodel.predict(X_testset)
y_pred_svm=svmmodel.predict(X_testset)
y_pred_lr=lrmodel.predict(X_testset)
y_pred_lr_proba=lrmodel.predict_proba(X_testset)
print(f1_score(y_testset,y_pred_knn))
print(f1_score(y_testset,y_pred_dt))
print(f1_score(y_testset,y_pred_svm))
print(f1_score(y_testset,y_pred_lr))
print(jaccard_similarity_score(y_testset,y_pred_knn))
print(jaccard_similarity_score(y_testset,y_pred_dt))
print(jaccard_similarity_score(y_testset,y_pred_svm))
print(jaccard_similarity_score(y_testset,y_pred_lr))
LR_log_loss=log_loss(y_testset,y_pred_lr_proba)
LR_log_loss
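# (added sketch) gather the scores above into one labelled summary table for the report;
# this reuses only the predictions and metric functions already computed/imported above
report = pd.DataFrame({
    'Algorithm': ['KNN', 'Decision Tree', 'SVM', 'LogisticRegression'],
    'Jaccard': [jaccard_similarity_score(y_testset, p) for p in (y_pred_knn, y_pred_dt, y_pred_svm, y_pred_lr)],
    'F1-score': [f1_score(y_testset, p) for p in (y_pred_knn, y_pred_dt, y_pred_svm, y_pred_lr)],
    'LogLoss': [np.nan, np.nan, np.nan, LR_log_loss]})
report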
###Output
_____no_output_____ |
Bungee/BungeeDropLab.ipynb | ###Markdown
Bungee Drop Lab PH 211 COCC Bruce Emerson 3/2/2020 This notebook is meant to provide tools and discussion to support data analysis and presentation as you generate your lab reports. [Bungee II Lab](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH211/PH211Materials/PH211Labs/PH211LabbungeeII.html) and [Bungee II Lab Discussion](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH211/PH211Materials/PH211Labs/PH211LabDbungeeII.html) In this lab we examine a calibration curve, verify its values, fit a curve to the data, and integrate that function. From there we can use energy methods to determine the mass that can be dropped from the railing without damaging the floor. For the formal lab report you will want to create your own description of what you understand the process and intended outcome of the lab is. Please don't just copy the purpose statement from the lab page. Dependencies This is where we load in the various libraries of python tools that are needed for the particular work we are undertaking. The new library from ```numpy``` is needed for creating a polynomial fit to the data later on. There are multiple versions of these modules for different purposes. This one feels best matched to our needs and experience. [numpy.polynomial.polynomial module](https://docs.scipy.org/doc/numpy/reference/routines.polynomials.polynomial.html) To do the integration that we need for this lab we also import the integrate library from the python scientific computing library called scipy. The particular tool we need from the integrate library is called quad, for 'integrating by quadrature', which is a particular numerical integration technique. [from scipy.integrate quad reference](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html) As usual, the following code cell will need to be run first before any other code cells.
###Code
import numpy as np
import matplotlib as mplot
import matplotlib.pyplot as plt
from numpy.polynomial import polynomial as ply
from scipy.integrate import quad
###Output
_____no_output_____
###Markdown
The Setting In the 'IRL' version of this lab you are presented with a '2 m' calibrated bungee cord that is shared by the class. The class is shown the railing in the lobby of the Science Building to which a clamp will be affixed holding a rigid bar. The bungee cord will be attached to the bar, weights will be put in the bag at the end of the bungee (replicating a human being), and the bag will be dropped from a point level with the railing. Your task, as the student, is to determine how much mass can be placed in the bag such that the bag will stop just as it reaches the ground (meaning a couple of cm above the floor, since we don't want to damage the bag or the floor). This replicates the adjustments that actual bungee jump operations accomplish by changing the length of a static rope (that doesn't stretch much) attached between the bungee cord and the frame of the jumping platform. How far does the bungee stretch? In the future, when we return to the lab in real life, the following numbers will be manipulated to compel students to replace them with their own measurements. Here is the underlying (currently real - 3/2021) data:
1. The unstretched length of the bungee is 2.45 m between the knots (Lo)
1. The length of the clips and the bag combined is .33 m (BC)
1. The knotted parts of the bungee are 1 cm long each (Kn)
1. The height of the railing is 6.02 m above the floor
Can you figure out, on your own, how much the bungee cord must be stretched when the bag just reaches the floor? The ultimate stretch $\Delta x$ of the bungee when it gets to the floor is given by.... $$\large \Delta x = H - ( L_0 + BC + Kn) $$
###Code
drop_height = 6.02 # H in m
relaxed_length = 2.45 # L0 from above in m
bag_clips = 0.33 # BC above in m
knots = 0.02 #Kn above in m
desired_stretch = drop_height - (relaxed_length + bag_clips + knots)
print("The maximum stretch of the bungee will be %.3f m " % (desired_stretch))
###Output
The maximum stretch of the bungee will be 3.220 m
###Markdown
Energy Bar Graph: Now that we have a python notebook that we can use to generate an energy bar graph it seems reasonable to use it here. [Energy Bar Graph notebook](https://github.com/smithrockmaker/PH211/blob/master/EnergyBarGraph.ipynb) This notebook has active elements (interactive widgets) that will not work unless you download the notebook and run it in your Jupyterlab window. When you do so you might generate an energy bar chart that looks like this. Conceptual Physics The energy bar graph indicates clearly that the energy stored in the bungee cord when the bag reaches the ground must be the same as the initial gravitational potential energy. Since we know how far the bungee cord has stretched we should be able to figure out that energy (which doesn't depend on the mass) and set it equal to the initial gravitational potential energy, which does depend on the mass. The rest of this notebook is dedicated to the process of determining the energy stored in the bungee cord, which is **NOT** an ideal physics spring. This means we need to use data, curve fitting, and integration to figure this out. This can be done by hand but we're going to do it in this notebook. Data Entry (Lists/Vectors) I am providing a set of data points that represent the characterization curve for a 2 m bungee cord. In principle we should be able to use the normalized data from the Bungee Characterization Lab but typically our results are inconsistent enough that actual data is a better idea. I generated this data by stretching the bungee out on the floor and using a scale to measure the force. Because that's a little challenging with one person I'd like you to check that data. This is generally a good practice for all experimental tests.
###Code
forcedata = [0., 4.90, 9.81, 14.72, 19.62, 24.53, 29.43,34.34,39.24, 44.15]
stretchdata = [0., 0.20, .47, .72, 1.11, 1.51, 2.10, 2.69, 3.22, 3.82]
# useful constants
gravity = 9.815 # in m/s/s
drop_height = 6.02 # in m
# 2 ways to print out and check your data
print("force data:",forcedata)
print("stretch data:",stretchdata)
forcedatalength = len(forcedata)
stretchdatalength = len(stretchdata)
# length counts how many 'data points' in the list
print("number of data points (x):", forcedatalength)
print("number of data points (y):", stretchdatalength)
###Output
force data: [0.0, 4.9, 9.81, 14.72, 19.62, 24.53, 29.43, 34.34, 39.24, 44.15]
stretch data: [0.0, 0.2, 0.47, 0.72, 1.11, 1.51, 2.1, 2.69, 3.22, 3.82]
number of data points (x): 10
number of data points (y): 10
###Markdown
Data PlotIf you are unsure what is happening here refer to earlier labs where it has been described in more detail.
###Code
fig1, ax1 = plt.subplots()
ax1.scatter(stretchdata, forcedata)
# a way to set labels
plt.xlabel('stretch of bungee (m)', fontsize = 10)
plt.ylabel('Force produced by bungee (N)', fontsize = 10)
plt.title('2 m Bungee Cord', fontsize = 20)
ax1.grid()
fig1.set_size_inches(10, 9)
#fig.savefig("myplot.png")
plt.show()
###Output
_____no_output_____
###Markdown
Curve Fitting Now that we know how to curve fit, let's use that skill to find the function that matches the data. I include the reminders from our previous lab. ```degree``` is the order of the polynomial, as in degree = 2 => quadratic polynomial with 3 coefficients. [polynomial.polynomial.polyfit](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.polynomial.polyfit.html) Here's an interesting thing to notice about this data...it looks much like a parabola laid on its side. When we do a polynomial fit to the data we're using functions like $x$, $x^2$, and $x^3$ etc. All of these functions curl upwards and don't match our data well. However, if we reverse the x and y axes in our data it looks much more reasonable to do a polynomial fit. To help visualize this I plotted our data in this way below.
###Code
fig2, ax2 = plt.subplots()
ax2.scatter(forcedata, stretchdata)
# a way to set labels
plt.xlabel('Force produced by bungee (N)', fontsize = 10)
plt.ylabel('stretch of bungee (m)', fontsize = 10)
plt.title('2 m Bungee Cord', fontsize = 20)
ax2.grid()
fig2.set_size_inches(10, 9)
#fig.savefig("myplot.png")
plt.show()
# be sure that the order of the polynomial fit matches the model calculation
# in the next code cell.
degree = 2
# fitting the data as if the force is the x value
# and the stretch is the y value, we expect this to look
# sort of parabolic
coefs = ply.polyfit(forcedata, stretchdata,degree)
# fitting the data as if the stretch is the x value
# and the force is the y value, This will give very
# different coefficients. I include this because I'm
# curious.
#coefs = ply.polyfit(stretchdata, forcedata,degree)
print("Coefficients of polynomial fit:", coefs)
###Output
Coefficients of polynomial fit: [-0.0084568 0.03467479 0.00120597]
###Markdown
Add the physics model...the curve fit and the bungee In this case all we want to do is plot the model against the data and be sure that it feels like a good fit. The terms for cubic and quartic polynomials are in the expression but commented out. It is worth your time to explore what order polynomial gives you the most reasonable fit. You can adjust that in the cell above and below in the appropriate places. Because we treated the force as the 'x' variable and the stretch as the 'y' variable we need to be careful as we implement. **A Coding Note** The calculation of the model values can get pretty long as a mathematical formula. What is illustrated below is one method to 'fold' or 'wrap' the code line so that it is easier to read and understand. To make this method work I put the entire calculation inside a set of parentheses so that the python elves keep looking for the other end of the parentheses and find it on a later line.
###Code
# generate x values for model of data
maxforce = 45.
numpoints = 20
# create the list of 'x' values that represent
# the forces in our model
modelforce = np.linspace(0.,maxforce,numpoints)
# create a model height list that matches the model time
# These are the 'y' values
modelstretch = np.full_like(modelforce,0)
# calculate the heights predicted from the model
# Uncomment the appropriate terms if using
# higher order polynomials!! Check location of closing
# parentheses!!
modelstretch = (coefs[0] + coefs[1]*modelforce
+ coefs[2]*modelforce**2)
#+ coefs[3]*modelforce**3)
#+ coefs[4]*modelforce**4)
# print("testing the output of the loop;", modelheight)
###Output
_____no_output_____
###Markdown
Plot Data with Model Because the curve fit polynomial reverses the axes relative to the traditional layout (stretch on the x and force on the y) we will continue to plot the axes in this same way. This means that the stretch is along the vertical axis and the maximum stretch is a horizontal line. The 'area under the curve' is now the area between the y axis and the function. This will be important when we actually determine the integral. The horizontal line is plotted to represent the stretch at which the bag will reach the floor.
###Code
fig3, ax3 = plt.subplots()
ax3.scatter(forcedata, stretchdata,
marker = 'x', color = 'green',
label = "data")
ax3.plot(modelforce, modelstretch,
color = 'blue', linestyle = ':',
linewidth = 3., label = "model")
# plot the desired stretch (delta x) from back
# at the begining of the notebook
ax3.hlines(desired_stretch, 0, 40,
color = 'magenta', linestyle = '-',
linewidth = 2., label = "maximum stretch of bungee")
# a way to set labels
plt.xlabel('Force produced by bungee (N)', fontsize = 10)
plt.ylabel('stretch of bungee (m)', fontsize = 10)
plt.title('2 m Bungee Cord', fontsize = 20)
fig3.set_size_inches(10, 9)
ax3.grid()
plt.legend(loc= 2)
plt.show()
###Output
_____no_output_____
###Markdown
Energy is the Area Under the Curve: Perhaps you remember this expression for the work done (energy moved) by a force which is not constant. At this point you are beginning to associate integrals with areas under some curve. For the bungee cord the energy stored by the bungee is the magnitude of the area under the curve when it is stretched to some length. This is now what we need to do. As was just mentioned, the area in question here is between the 'y' axis and the function. $$ \large W_F = \int_{\bar{x}_0}^{\bar{x}_f}\bar{F}\:\cdot d\bar{x}$$ Hand Integration When this lab is done without a python notebook the integration is done manually by counting the little rectangles on a physics plot of the force vs stretch curve. It is mind numbing as a task but it does work well as long as you have correctly identified all of the relevant variables. Numerical Integration (New Skill!) Not surprisingly there are numerical integration tools built into python and in particular into the scientific computing library called scipy. The tool we will use is called quad and is one of a range of integration tools available. I am completely unable to describe what all the different tools do at this point (it's on my reading list now) but quad is apparently a standard choice. I give you both references below. [scipy.integrate.quad](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.htmlscipy.integrate.quad) [scipy.integrate](https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html) I took and reworked the example given on these pages for our purposes. **NOTE** Because integration in python is still going to give us the area above the x axis and below the function, we will need to be clever to find the area we need. What are the limits of our integration? What we know from an analysis of our setting is the ultimate stretch of our bungee when it (hopefully) brings the mass to rest just above the floor. The energy stored in the bungee cord is the area under the force vs stretch curve up to that length. If you think about it, that is the area to the left of the stretch vs force curve. Because of the reversed axes I need to know what force will deliver the stretch that I know I need. This will be the upper limit of my integration. A great time to use our skills from the Space Station Lab to zoom in on the plot and read the answer off the graph. When I give the function ```polyint``` a force value along with the degree of the fit and the coefficients, it will tell me the stretch that it creates. Keep trying different trial forces until you find the force that gives you the stretch you need.
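Before applying it to the bungee, here is a quick stand-alone sanity check of the ```quad``` call described above (illustration only): the integral of $x^2$ from 0 to 1 should come out very close to 1/3.
###Code
# sanity check of scipy.integrate.quad on a known integral (expect ~0.3333)
check_area, check_err = quad(lambda x: x**2, 0.0, 1.0)
print(check_area, check_err)
###Output
_____no_output_____
###Markdown
Now back to the bungee: first we zoom in on the plot to read off the force that produces the stretch we need.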
###Code
fig4, ax4 = plt.subplots()
ax4.scatter(forcedata, stretchdata,
marker = 'x', color = 'green',
label = "data")
ax4.plot(modelforce, modelstretch,
color = 'blue', linestyle = ':',
linewidth = 3., label = "model")
# plot the desired stretch (delta x) from back
# at the begining of the notebook
ax4.hlines(desired_stretch, 0, 40,
color = 'magenta', linestyle = '-',
linewidth = 2., label = "maximum stretch of bungee")
# a way to set labels
plt.xlabel('Force produced by bungee (N)', fontsize = 10)
plt.ylabel('stretch of bungee (m)', fontsize = 10)
plt.title('2 m Bungee Cord', fontsize = 20)
# zoom in!!
ax4.set_xlim([38,40])
ax4.set_ylim([3.1,3.3])
fig4.set_size_inches(10, 9)
ax4.grid()
plt.legend(loc= 2)
plt.show()
# maximum spring force read off of plot above
maxForce = 39.3
# replot with all the info
fig5, ax5 = plt.subplots()
ax5.scatter(forcedata, stretchdata,
marker = 'x', color = 'green',
label = "data")
ax5.plot(modelforce, modelstretch,
color = 'blue', linestyle = ':',
linewidth = 3., label = "model")
# plot the desired stretch (delta x) from back
# at the begining of the notebook
ax5.hlines(desired_stretch, 0, 40,
color = 'magenta', linestyle = '-',
linewidth = 2., label = "maximum stretch of bungee")
# plot the force that defines the upper limit of integration
ax5.vlines(maxForce, 0, 4,
color = 'dodgerblue', linestyle = '-',
linewidth = 2., label = "maximum stretch of bungee")
# a way to set labels
plt.xlabel('Force produced by bungee (N)', fontsize = 10)
plt.ylabel('stretch of bungee (m)', fontsize = 10)
plt.title('2 m Bungee Cord', fontsize = 20)
fig5.set_size_inches(10, 9)
ax5.grid()
plt.legend(loc= 2)
plt.show()
###Output
_____no_output_____
###Markdown
Figuring out the area To get the desired area we can take the area under the curve away from the area of the rectangle bounded by the two straight lines. The first task is to use the integration tools to find the area bounded by the horizontal axis, the vertical blue line, and the curve. This is what we will subtract from the area of the rectangle. Using the Integration Tool The code below follows the structure of the ```scipy.integrate.quad example``` cited above. Here is the structure for how we use the tool. ```definite_integral = quad(polyint, lower, upper, args = (coefs, degree))``` 'quad' needs only three basic bits of data. The first argument, called ```polyint``` here, delivers the value of the function I am integrating when given an x value. The second and third arguments are the lower and upper 'x' values of the integration. Finally there is a list of any information that the first argument needs to be able to determine the 'y' value of the function. Internally the 'quad' command is going to pick a value of x starting at the lower limit and moving toward the upper limit. It will use the function provided (```polyint``` in this case) to find the value of the integrand and calculate the area of a 'rectangle' from $f(x)\Delta x$. Just as we would do with graph paper it adds up the area of all the rectangles to get the definite integral. ```polyint``` is what is called a function in python and other coding languages. You can think of it as a customized command. I am not going to attempt to explain functions at this time; I only share with you this function, which uses the coefficients we have previously determined for our polynomial fit to calculate the value of the data model. ```def polyint(x, coefs, degree): dependent_var = 0 for i in range (0,degree+1): dependent_var = dependent_var + coefs[i] * x**i return dependent_var``` The cell below finds the area below the stretch vs force curve in the normal sense of the integral. Take a look at the 'reversed' plot up above to see that the values make sense. If you picture the area under the curve from 0 to 10 N you would expect it to be a bit less than 2.5 -- think about triangles -- and that is what the integration below delivers.
###Code
# define a function which is the integrand of our integral
# In this case I will try to set it up to handle a generalized polynomial
def polyint(x, coefs, degree):
dependent_var = 0
# This adds up the individual terms of the polynomial to
# get the value of the polynomial
for i in range (0,degree+1):
dependent_var = dependent_var + coefs[i] * x**i
return dependent_var
# This can be manually modified to test the integration process
# maxForce is the upper limit of the force that corresponds to
# the maximum stretch of the bungee cord.
upper_limit = maxForce
definite_integral = quad(polyint, 0., upper_limit, args = (coefs, degree))
# print(definite_integral)
print("The definite integral is %.3f with estimated error %.6f:" % (definite_integral[0],definite_integral[1]))
###Output
The definite integral is 50.845 with estimated error 0.000000:
###Markdown
Find the energy stored by the bungee This is the area of the rectangle above minus the area under the curve!
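In symbols, with $x(F)$ the fitted stretch-as-a-function-of-force curve from above, the reasoning is $$\large E_{bungee} = \int_0^{\Delta x} F \: dx = F_{max}\,\Delta x - \int_0^{F_{max}} x(F)\: dF$$ which is exactly the 'rectangle minus definite integral' computed in the next cell.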
###Code
# area of rectangle - the integral
energy_absorbed = maxForce*desired_stretch - definite_integral[0]
print("The energy absorbed by the bungee cord is %.4f Joules" % energy_absorbed)
###Output
The energy absorbed by the bungee cord is 75.7008 Joules
###Markdown
Mass that can be dropped!Check to make sure the formula matches your energy bar chart analysis! If we were to write it more formally we would say....$$\large PE_{0_{gravity}} = PE_{f_{spring}}$$where....$$\large PE_{0_{gravity}} = m\:g \: h_0$$and....$$\large PE_{f_{spring}} = \int_{\bar{x}_0}^{\bar{x}_f}\bar{F}_{spring}\:\cdot d\bar{x}$$We just calculated the energy stored in the spring so we can complete the calculation of the mass to be dropped..$$\large m = \frac{\int_{\bar{x}_0}^{\bar{x}_f}\bar{F}_{spring}\:\cdot d\bar{x}}{g\: h_0}$$The mass of the bag and the clips should be deducted from the overall mass since they also fall from the railing. The mass of the bag and the 2 clips is 170 g.
###Code
# set mass of bag and clips
mass_BC = .170
# determine the mass from previous expression
mass_drop = energy_absorbed/(gravity*drop_height)
corrected_mass = mass_drop - mass_BC
print(" The maximum mass that will stop at the level of the floor is %.4f kg" % mass_drop)
print(" The mass that should go in the bag is %.4f kg" % corrected_mass)
###Output
The maximum mass that will stop at the level of the floor is 1.2812 kg
The mass that should go in the bag is 1.1112 kg
|
P03-Forcasting_with_ML_Model.ipynb | ###Markdown
EUR/PLN Time series - Currency exchange rate predictions 3/3 - Machine Learning Models Introduction In this notebook, on the data already prepared, I will apply the Machine Learning approach to predicting values in a time series. For this I will use 15 ML models and one neural network from the sklearn library. Models used:
1. ElasticNet
2. Lasso
3. Ridge
4. Linear Regression
5. SVR - Kernel: RBF
6. Decision Tree
7. Random Forest
8. KNN
9. Ensemble estimator: BaggingRegressor with Decision Tree and Ridge models
11. Extra Tree
12. Ada Boost
13. Gradient Boost
14. XGBoost with Random and Grid Search
16. Multi-layer Perceptron regressor
Imports
###Code
#Basic
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pprint
from scipy import stats
from datetime import datetime
#Pipeline & Split
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
#Utilitis
from scipy.stats.distributions import uniform, randint
import missingno as msno
import glob
import re
#Preprocessing
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler, PowerTransformer, StandardScaler
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.decomposition import PCA
from statsmodels.tsa.seasonal import seasonal_decompose
#Models
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import VotingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor
from xgboost import XGBRFRegressor
#Metrics
from sklearn import metrics
import warnings
warnings.filterwarnings("ignore")
import pickle
###Output
_____no_output_____
###Markdown
Loading data
###Code
df = pd.read_csv('Data/currencies_and_indicators.csv', index_col = 'Date')
df.head()
df.info()
df.isna().sum()
df.loc['2008-01-03':].isna().sum().values
df_cut = df.loc['2008-01-03':]
df_cut = df.copy()
msno.matrix(df_cut)
plt.show()
#df_cut = df_cut.dropna(axis=1)
df_cut['Exch rate']
msno.matrix(df_cut)
plt.show()
pln_corr_matrix = df_cut.corr()[['PLN']]
pln_corr_matrix
f,ax = plt.subplots(figsize=(20, 30))
sns.heatmap(pln_corr_matrix, vmin=-1, vmax=1, center=0, robust=False, annot=True, fmt='.1g', annot_kws=None, linewidths=0.2, linecolor='white', cbar=True, cbar_kws=None, cbar_ax=None, square=False, xticklabels='auto', yticklabels='auto', mask=None, ax=None)
plt.show()
###Output
_____no_output_____
###Markdown
Columns with high correlation
###Code
treshold = 0.70
corr_data_to_print = pln_corr_matrix[((pln_corr_matrix['PLN'] > treshold) | (pln_corr_matrix['PLN'] < - treshold))][['PLN']]
corr_data_to_print
###Output
_____no_output_____
###Markdown
Columns with low correlation
###Code
corr_data_to_print = pln_corr_matrix[((pln_corr_matrix['PLN'] < treshold) & (pln_corr_matrix['PLN'] > - treshold))][['PLN']]
corr_data_to_print
to_shift = [c for c in corr_data_to_print.index if len(c) == 3 and c != 'PLN' ]
to_shift
if len(to_shift) == 0:
to_shift = None
df_pl = df_cut[['PLN']]
df_pl.head()
###Output
_____no_output_____
###Markdown
Multiply the value by 10,000 to make the results easier to read.
###Code
df_pl['PLN'] = df_pl['PLN'].mul(10000)
df_pl.shape
df_pl.tail()
df_pl.index= pd.to_datetime(df_pl.index)
print(pd.infer_freq(df_pl.index))
###Output
B
###Markdown
Important note Due to the problems I encountered while training models with the additional parameters, and the fact that many columns lacked historical data going back to 1999 (some only start in 2008), in the end I used only the additionally generated features, because I wanted the models to learn from the longest possible time frame. Feature Engineering Transformer
###Code
class FeatureEngineeringTransformer(BaseEstimator, TransformerMixin):
def __init__(self, lags = 8, to_shift = None):
self.lags = lags
self.to_shift = to_shift
def fit( self, X, y = None ):
return self
def transform( self, X, y = None ):
df_features = X.copy()
# seasonality
df_features['seasonality'] = seasonal_decompose(df_features['PLN']).seasonal
df_features['trend'] = seasonal_decompose(df_features['PLN']).trend
# 'Rolling statistics'
        moving_window = list(range(2, 8)) + [14, 30]  # note: list.extend() returns None; only windows 2-7 are actually used in the loop below
for i in range(2,8):
df_features[f'mov_avg_{i}'] = df_features['PLN'].rolling(window=i).mean()
df_features[f'mov_std_{i}'] = df_features['PLN'].rolling(window=i).std()
# 'Generate traffic_lag'
for i in range(2, self.lags+1):
df_features[f'PLN_lag_{i}'] = df_features['PLN'].shift(i)
if self.to_shift is not None:
for c in self.to_shift:
df_features[f'{c}_lag_1'] = df_features[c].shift(1)
# 'Generate Data-related features', 'is_weekend'
df_features['date'] = pd.to_datetime(df_features.index)
#df_features['month'] = df_features['date'].dt.month
#df_features['month_day'] = df_features['date'].dt.day
#df_features['day_of_week'] = df_features['date'].dt.dayofweek # 0 - Monday; 6 - Sunday
# drop missing values and unnecessary columns
df_features = df_features.dropna()
df_features = df_features.drop('date', axis=1)
if self.to_shift is not None:
df_features = df_features.drop(self.to_shift, axis=1)
return df_features
df_pl1 = df_pl.copy()
print(pd.infer_freq(df_pl1.index))
my_features = FeatureEngineeringTransformer(lags=14,to_shift=None)
df_pl2 = my_features.fit_transform(df_pl1)
df_pl2.tail()
df_pl2.isna().sum()
###Output
_____no_output_____
###Markdown
Assign y and X
###Code
X = df_pl2.drop('PLN',axis=1)
y = df_pl2[['PLN']]
X.shape
###Output
_____no_output_____
###Markdown
Splitting data
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, shuffle = False)
tss_cv = TimeSeriesSplit(n_splits = 5)
X_train.shape, X_test.shape
X_train
y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Define the combine_plots helper function
###Code
def combine_plots(train, test, predict = None, index = None, zoom_start = None,zoom_end = None, Title = ''):
def get_cmap(n, name='hsv'):
return plt.cm.get_cmap(name, n)
cmap = get_cmap(20)
temp_df = pd.DataFrame(index = index)
temp_df['test'] = test
temp_df['train'] = train
temp_df[zoom_start:zoom_end].train.plot(figsize= (20,8), color = cmap(1), legend = 'Train')
temp_df[zoom_start:zoom_end].test.plot(color = cmap(8), alpha = 0.5, legend = 'Test')
if predict is not None:
cmap = get_cmap(len(predict)+1)
for i,c in enumerate(predict, start=1):
temp_df_pred = pd.DataFrame(c,index =test.index )
temp_df[f'predict_{i}'] = temp_df_pred
temp_df.loc[zoom_start:zoom_end,f'predict_{i}'].plot( alpha = 0.8, legend = 'Predict')#color = cmap(5*i+5),
plt.ylabel('pln')
plt.title(f'EUR/PLN Train and Test Data {Title}', size = 20)
plt.show()
combine_plots(y_train.PLN,y_test.PLN,index = df_pl2.index)
###Output
_____no_output_____
###Markdown
Set n_job = THREAD
###Code
THREADS = 10
VERBOSE = 1
###Output
_____no_output_____
###Markdown
Models Save model function
###Code
def save_model(model, filename = None):
t = datetime.now()
time = t.strftime("%Y_%m_%d_%H_%M_%S")
if filename is None:
filename = f'Models/Features/{model.best_estimator_.steps[-1][0]}_{time}.pkl'
else:
filename = f'Models/Features/{filename}_{time}.pkl'
outfile = open(filename,'wb')
pickle.dump(model,outfile)
outfile.close()
###Output
_____no_output_____
###Markdown
Load model function
###Code
def load_model(path):
file_list = glob.glob(f'{path}*.pkl')
#print(file_list)
file_list_len = len(file_list)
models = []
for i in range(file_list_len):
try:
pkl_file = open(file_list[i], 'rb')
mymodel = pickle.load(pkl_file)
m = re.search('\\\\(.*)\.pkl', file_list[i])
model_name = 'temp'
if m:
model_name = m.group(1)
print(f'Model index {i}: {model_name}')
models.append((model_name,mymodel))
pkl_file.close()
        except Exception as e:
print('Load error',e)
pkl_file.close()
break
return models
###Output
_____no_output_____
###Markdown
ElasticNet
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
#('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('ElasticNet', ElasticNet(alpha=1, tol=0.1))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None,StandardScaler()],
#'pca__n_components': [0.8],
'polynomialfeatures__degree': [1, 2,3,4],
'ElasticNet__tol': [0.001,0.01, 0.05, 0.1, 0.2],
'ElasticNet__alpha': [1., 2., 3.],
}
grid_1 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_1.fit(X_train, y_train)
end = datetime.now()
save_model(grid_1)
print(grid_1.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-12 15:48:12.185448
Fitting 5 folds for each of 240 candidates, totalling 1200 fits
###Markdown
Lasso
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('lasso', Lasso(alpha=1, tol=0.1))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(), MinMaxScaler()],
'pca__n_components': [0.2,0.5,0.6,0.8,1.0],
'polynomialfeatures__degree': [1, 2, 3, 4],
'lasso__tol': [0.001,0.01, 0.05, 0.1, 0.2],
'lasso__alpha': [1., 2., 3.],
}
grid_2 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_2.fit(X_train, y_train)
end = datetime.now()
save_model(grid_2)
print(grid_2.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-10 11:48:13.929181
Fitting 5 folds for each of 1800 candidates, totalling 9000 fits
###Markdown
Ridge
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('ridge', Ridge(alpha=1, tol=0.1))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(), MinMaxScaler()],
'pca__n_components': [0.2,0.5,0.6,0.8,1.0],
'polynomialfeatures__degree': [1, 2, 3, 4],
'ridge__tol': [0.001,0.01, 0.05, 0.1, 0.2],
'ridge__alpha': [1., 2., 3.],
}
grid_3 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_3.fit(X_train, y_train)
end = datetime.now()
save_model(grid_3)
print(grid_3.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-10 11:50:23.189678
Fitting 5 folds for each of 1800 candidates, totalling 9000 fits
###Markdown
LinearRegression
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('lr', LinearRegression())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(), MinMaxScaler()],
'pca__n_components': [0.8,0.5],
'polynomialfeatures__degree': [1,2,3,4],
'lr__normalize': [False, True],
}
grid_4 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_4.fit(X_train, y_train)
end = datetime.now()
save_model(grid_4)
print(grid_4.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-12 10:28:28.805957
Fitting 5 folds for each of 96 candidates, totalling 480 fits
###Markdown
SVR - RBF
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('svr', SVR(kernel='rbf',C=1))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(), MinMaxScaler()],
'pca__n_components': [0.2,0.8],
'polynomialfeatures__degree': [1, 2, 3, 4],
'svr__C': [ 0.01, 0.1, 1, 10, 100,1000,10000],
'svr__gamma': [0.0001,0.001, 0.01,1,10,100,1000]
}
grid_5 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_5.fit(X_train, y_train)
end = datetime.now()
save_model(grid_5)
print(grid_5.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 00:23:50.245362
Fitting 5 folds for each of 2352 candidates, totalling 11760 fits
###Markdown
Decision Tree
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('tree', DecisionTreeRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(), MinMaxScaler()],
'pca__n_components': [0.8,0.5],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"tree__criterion": ["mse", "friedman_mse","mae"],
"tree__max_depth": [2,20,50, None], #[2, 5, 10, 15,20,30, None],
"tree__min_samples_split": [2, 15], #[2, 5, 10, 15],
"tree__splitter": ["best", "random"],
"tree__min_samples_leaf": [ 3, 10], #[ 3, 4, 5,8,10],
"tree__max_features": ["auto","sqrt","log2"],
"tree__ccp_alpha" : [0.0, 0.1],
"tree__max_leaf_nodes": [2, 3, None] #[1, 2, 5, 10, 20, None]
}
grid_6 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_6.fit(X_train, y_train)
end = datetime.now()
save_model(grid_6)
print(grid_6.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 00:43:56.370097
Fitting 5 folds for each of 41472 candidates, totalling 207360 fits
###Markdown
Random Forest
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('rf', RandomForestRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler()], #MinMaxScaler()
'pca__n_components': [0.8],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
'rf__n_estimators': [150], #, 300
"rf__criterion": ["mae", "msa"], #
"rf__max_depth": [2, None], #[2, 30, None],
"rf__min_samples_split": [2, 15], #[2, 5, 10, 15],
"rf__min_samples_leaf": [ 3,10], #[ 3, 4, 5,8,10],
"rf__max_features": ["auto","log2"], #"sqrt",
"rf__ccp_alpha" : [0.0, 0.1], #[0.0,0.05, 0.1],
"rf__max_leaf_nodes": [2,3, None] #[1, 2, 5, 10, 20, None]
}
grid_7 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_7.fit(X_train, y_train)
end = datetime.now()
save_model(grid_7)
print(grid_7.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 01:40:26.168353
Fitting 5 folds for each of 1536 candidates, totalling 7680 fits
###Markdown
KNN
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('knn', KNeighborsRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler()], #MinMaxScaler()
'pca__n_components': [0.8],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"knn__n_neighbors": [2, 5, 10, 20,30,40],
"knn__p" : [2,3,4],
"knn__metric": ["minkowski"],
"knn__weights": ['uniform','distance'],
"knn__algorithm" : [ 'auto', 'ball_tree', 'kd_tree', 'brute'],
"knn__leaf_size" : [10,20,30,40],
}
grid_8 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_8.fit(X_train, y_train)
end = datetime.now()
save_model(grid_8)
print(grid_8.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 02:35:21.871769
Fitting 5 folds for each of 4608 candidates, totalling 23040 fits
###Markdown
Bagging DT
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('BaggingRegressor_tree', BaggingRegressor(DecisionTreeRegressor()))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler()], #MinMaxScaler()
'pca__n_components': [0.8,0.2],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"BaggingRegressor_tree__n_estimators": [10], #[10,100,200]
"BaggingRegressor_tree__max_samples" : [0.2, 0.5],#[0.2, 0.3, 0.5, 1.0]
"BaggingRegressor_tree__max_features" : [0.2, 1.0], #0.5,
"BaggingRegressor_tree__bootstrap" : [True,False],
"BaggingRegressor_tree__base_estimator__max_depth" : [None,5],#[None, 2,5,10]
"BaggingRegressor_tree__base_estimator__criterion": ["mae"],#["mse", "friedman_mse","mae"]
"BaggingRegressor_tree__base_estimator__min_samples_split": [2, 5],
"BaggingRegressor_tree__base_estimator__splitter": ["best"],#, "random"
"BaggingRegressor_tree__base_estimator__min_samples_leaf": [ 3, 5],
"BaggingRegressor_tree__base_estimator__max_features": ["auto","log2"], #"sqrt",
"BaggingRegressor_tree__base_estimator__ccp_alpha" : [0.0, 0.1],
"BaggingRegressor_tree__base_estimator__max_leaf_nodes": [ 2, None] #5
}
grid_9 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_9.fit(X_train, y_train)
end = datetime.now()
save_model(grid_9)
print(grid_9.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 02:44:31.901903
Fitting 5 folds for each of 8192 candidates, totalling 40960 fits
###Markdown
Bagging Ridge
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('BaggingRegressor_Ridge', BaggingRegressor(Ridge()))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(),MinMaxScaler()], #MinMaxScaler()
'pca__n_components': [0.5,0.6,0.7,0.8],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"BaggingRegressor_Ridge__n_estimators": [10,50,100],#,200
"BaggingRegressor_Ridge__max_samples" : [0.2, 0.3],#[0.2, 0.3, 0.5, 1.0]
"BaggingRegressor_Ridge__max_features" : [0.2, 0.5, 1.0],
"BaggingRegressor_Ridge__bootstrap" : [True,False],
'BaggingRegressor_Ridge__base_estimator__tol': [0.001,0.01, 0.2],#[0.001,0.01, 0.05, 0.1, 0.2]# 0.05, 0.1,
'BaggingRegressor_Ridge__base_estimator__alpha': [1., 3.],#2.,
}
grid_10 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_10.fit(X_train, y_train)
end = datetime.now()
save_model(grid_10)
print(grid_10.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 03:03:46.512536
Fitting 5 folds for each of 10368 candidates, totalling 51840 fits
###Markdown
Extra Tree
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('ExtraTrees', ExtraTreesRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(),MinMaxScaler()], #MinMaxScaler()
'pca__n_components': [0.2,0.8],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"ExtraTrees__n_estimators": [10,100],#[1,10,100,200],
"ExtraTrees__max_depth": [4,10,12,14,None],#[4,8,10,12,None]
"ExtraTrees__criterion": ["mse", "mae"],
"ExtraTrees__max_features": ["auto","log2","sqrt"],#["auto","sqrt","log2"],
"ExtraTrees__ccp_alpha" : [0.0, 0.1],#[0.0,0.05, 0.1],
"ExtraTrees__max_leaf_nodes": [ 2, 5,10,None],#[ 2, 5, 10, None],
"ExtraTrees__bootstrap" : [True, False]
}
grid_11 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_11.fit(X_train, y_train)
end = datetime.now()
save_model(grid_11)
print(grid_11.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 03:28:30.901883
Fitting 5 folds for each of 23040 candidates, totalling 115200 fits
###Markdown
Ada Boost
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('AdaBoost', AdaBoostRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'scale': [None, StandardScaler(),MinMaxScaler()], #MinMaxScaler()
'pca__n_components': [0.2,0.5,0.6,0.8,1.0],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"AdaBoost__n_estimators": [10, 50,70, 100,200,300,400,500,1200],#[10, 50, 100,200,300,400,500,1000,1200],
"AdaBoost__learning_rate": [0.1,0.2,0.3, 0.5, 0.9,1.0],#[0.1,0.2,0.3, 0.5, 0.6, 0.7, 0.9,1.0]
"AdaBoost__loss" : ["linear","square","exponential"]
}
grid_12 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_12.fit(X_train, y_train)
end = datetime.now()
save_model(grid_12)
print(grid_12.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 06:47:31.933031
Fitting 5 folds for each of 4860 candidates, totalling 24300 fits
###Markdown
Gradient Boost
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('GradientBoosting', GradientBoostingRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler()], #MinMaxScaler()
'pca__n_components': [0.8],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
"GradientBoosting__n_estimators": [5,10, 50, 100,250],#10, 50,
"GradientBoosting__learning_rate": [0.05,0.1,0.3, 0.5,0.7,0.8, 0.9, 1],# 0.7,0.8, 0.9, 1
"GradientBoosting__loss":['ls', 'lad','huber','quantile'],#['ls', 'lad','huber','quantile'],
"GradientBoosting__criterion":['friedman_mse', 'mse', 'mae'],
"GradientBoosting__subsample": [0.0,0.1,0.3,0.5,0.8, 0.9, 1],#
}
grid_13 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_13.fit(X_train, y_train)
end = datetime.now()
save_model(grid_13)
print(grid_13.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 07:19:59.764513
Fitting 5 folds for each of 26880 candidates, totalling 134400 fits
###Markdown
XGB Param Grid
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('XGB_PG', XGBRegressor())
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(),MinMaxScaler()], #MinMaxScaler()
'pca__n_components': [0.2,0.5,0.8,None],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
'XGB_PG__max_depth': [3, 5, 8, 10,20,30,100],
'XGB_PG__learning_rate': [0.001, 0.01, 0.05, 0.1, 0.2],#[0.001, 0.01, 0.05, 0.1, 0.2, 0.3,0.4,0.5,0.6],
'XGB_PG__n_estimators': [50, 100, 200, 350, 400,500,900,1000,1100],
'XGB_PG__gamma': [0,0.3, 0.5,0.8, 1,1.2],
'XGB_PG__colsample_bytree': [1, 0.8, 0.5, 0.3],
'XGB_PG__subsample': [1, 0.8, 0.5],
'XGB_PG__min_child_weight': [1,2, 5, 8,10,12]
}
grid_14 = RandomizedSearchCV(
n_iter=1000, estimator=pipeline,
param_distributions=param_grid,
cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_14.fit(X_train, y_train)
end = datetime.now()
save_model(grid_14)
print(grid_14.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-12 12:36:40.166918
Fitting 5 folds for each of 1000 candidates, totalling 5000 fits
###Markdown
XGB Param Distribution
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('XGB_PD', XGBRegressor())
]
pipeline = Pipeline(steps=steps)
param_distribution = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(),MinMaxScaler()], #MinMaxScaler()
'pca__n_components': [0.2,0.5,0.8],
'polynomialfeatures__degree': [1, 2,3], #[1, 2, 3, 4],
'XGB_PD__max_depth': randint(3, 15),
'XGB_PD__learning_rate': uniform(0.001, 0.1-0.001),
'XGB_PD__n_estimators': randint(300, 1600),
'XGB_PD__gamma': uniform(0,2.1),
'XGB_PD__colsample_bytree': uniform(0.5, 0.5),
'XGB_PD__subsample': uniform(0.5, 0.5),
'XGB_PD__min_child_weight': randint(1, 12)
}
grid_15 = RandomizedSearchCV(
n_iter=1000, estimator=pipeline,
param_distributions=param_distribution,
cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_15.fit(X_train, y_train)
end = datetime.now()
save_model(grid_15)
print(grid_15.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-12 12:18:00.516571
Fitting 5 folds for each of 1000 candidates, totalling 5000 fits
###Markdown
MLPRegressor
###Code
start = datetime.now()
print(f'Line start at: {start}')
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.8)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('MLPRegressor', MLPRegressor(hidden_layer_sizes=(100,100,100),activation='tanh',alpha=0.0001))
]
pipeline = Pipeline(steps=steps)
param_grid = {
'transform': [None,PowerTransformer(method='yeo-johnson')],
'scale': [None, StandardScaler(),MinMaxScaler()], #MinMaxScaler()
'pca__n_components': [0.2,0.5,0.8],
'polynomialfeatures__degree': [1, 2], #[1, 2, 3, 4],
'MLPRegressor__hidden_layer_sizes': [(100,100,100),(100,200,100),(50,300,50)],#,(100,200,100),(50,100,20),(50,300,10)
'MLPRegressor__alpha': [0.01, 0.1,1,2,10,20],
'MLPRegressor__activation': ['relu','identity','logistic','tanh'],
#'MLPRegressor__solver': ['lbfgs','sgd','adam'],
#'MLPRegressor__learning_rate': ['constant','invscaling','adaptive'],
#'MLPRegressor__max_iter': [200,300,150],
}
grid_16 = GridSearchCV(pipeline, param_grid=param_grid, cv=tss_cv,
refit=True,verbose=VERBOSE,n_jobs = THREADS)
grid_16.fit(X_train, y_train)
end = datetime.now()
save_model(grid_16)
print(grid_16.best_params_)
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
###Output
Line start at: 2020-12-11 18:43:00.734401
Fitting 5 folds for each of 3600 candidates, totalling 18000 fits
###Markdown
METRICS Read Models from files Unfortunately, at the early stage of the project I did not save the trained models for comparison, so I keep only the result tables from those early training stages.
###Code
#Basic models settings
#Historical data
# Models with extra features PCA and Scaling
# Testing different features and PCA settings
###Output
['Models/Features\\ElasticNet_features.pkl', 'Models/Features\\lasso_features.pkl', 'Models/Features\\lr_features.pkl', 'Models/Features\\MLPRegressor_features.pkl', 'Models/Features\\ridge_features.pkl', 'Models/Features\\XGB_PG_features.pkl']
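###Markdown
The `save_model` and `load_model` helpers used throughout this notebook are defined earlier in the project. As a rough, hypothetical sketch of what they might look like (the folder layout, timestamp format, and use of `joblib` are assumptions inferred from the file list above, not the original implementation):
###Code
import glob
import os
from datetime import datetime

import joblib  # assumption: the models could equally be serialized with pickle


def save_model_sketch(model, filename=None, folder='Models'):
    """Persist a fitted estimator or search object to disk (sketch)."""
    if filename is None:
        filename = type(model).__name__  # assumption: derive a default name
    stamp = datetime.now().strftime('%Y_%m_%d_%H_%M_%S')
    path = os.path.join(folder, f'{filename}_{stamp}.pkl')
    joblib.dump(model, path)
    return path


def load_model_sketch(folder):
    """Load every pickled model in a folder as (name, model) pairs (sketch)."""
    pairs = []
    for path in sorted(glob.glob(os.path.join(folder, '*.pkl'))):
        name = os.path.splitext(os.path.basename(path))[0]
        pairs.append((name, joblib.load(path)))
    return pairs
###Output
_____no_output_____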
###Markdown
Loading results
###Code
models_from_files = load_model('Models/Features/')
file_r2 = []
file_explained_variance_score = []
file_median_absolute_error = []
file_mean_squared_error = []
file_mean_absolute_error = []
file_labels = []
df_predictrions = pd.DataFrame(index = X_test.index)
df_predictrions['test'] = y_test
for name, model in models_from_files:
    file_labels.append(name)
    preds = model.predict(X_test)  # predict once per model and reuse the result
    file_r2.append(metrics.r2_score(y_test, preds))
    file_explained_variance_score.append(metrics.explained_variance_score(y_test, preds))
    file_median_absolute_error.append(metrics.median_absolute_error(y_test, preds))
    file_mean_squared_error.append(np.sqrt(metrics.mean_squared_error(y_test, preds)))
    file_mean_absolute_error.append(metrics.mean_absolute_error(y_test, preds))
    df_predictrions[name] = preds
file_d = {'r2': file_r2,
'explained_variance_score': file_explained_variance_score,
'median_absolute_error': file_median_absolute_error,
          'root_mean_squared_error' : file_mean_squared_error,  # np.sqrt is applied above, so this is RMSE
'mean_absolute_error' : file_mean_absolute_error,
}
file_df_score = pd.DataFrame(data=file_d)
file_df_score.insert(loc=0, column='Method', value=file_labels)
file_df_score.sort_values(by = 'r2',ascending=False)
df_predictrions.loc['2020-06':,['test','lr_features_None_PCA','lr_features','XGB_PG_2020_12_12_14_26_19']].plot(figsize = (20,10))
plt.show()
###Output
_____no_output_____
###Markdown
Last model - Voting Regressor
###Code
start = datetime.now()
print(f'Line start at: {start}')
vr_estimators = [
('LinearRegression', models_from_files[10][1].best_estimator_),
('ElasticNet', models_from_files[3][1].best_estimator_),
('XGB_PG', models_from_files[18][1].best_estimator_),
]
voting_clf_soft = VotingRegressor(
estimators=vr_estimators,
verbose=VERBOSE,
n_jobs = THREADS)
voting_clf_soft.fit(X_train, y_train)
end = datetime.now()
save_model(voting_clf_soft,filename='VotingRegressor')
print(f'Line end at: {end}')
print(f'The process took: {end - start}')
models_from_files = load_model('Models/Features/')
file_r2 = []
file_explained_variance_score = []
file_median_absolute_error = []
file_mean_squared_error = []
file_mean_absolute_error = []
file_labels = []
df_predictrions = pd.DataFrame(index = X_test.index)
df_predictrions['test'] = y_test
for name, model in models_from_files:
    file_labels.append(name)
    preds = model.predict(X_test)  # predict once per model and reuse the result
    file_r2.append(metrics.r2_score(y_test, preds))
    file_explained_variance_score.append(metrics.explained_variance_score(y_test, preds))
    file_median_absolute_error.append(metrics.median_absolute_error(y_test, preds))
    file_mean_squared_error.append(np.sqrt(metrics.mean_squared_error(y_test, preds)))
    file_mean_absolute_error.append(metrics.mean_absolute_error(y_test, preds))
    df_predictrions[name] = preds
file_d = {'r2': file_r2,
'explained_variance_score': file_explained_variance_score,
'median_absolute_error': file_median_absolute_error,
          'root_mean_squared_error' : file_mean_squared_error,  # np.sqrt is applied above, so this is RMSE
'mean_absolute_error' : file_mean_absolute_error,
}
file_df_score = pd.DataFrame(data=file_d)
file_df_score.insert(loc=0, column='Method', value=file_labels)
file_df_score.sort_values(by = 'r2',ascending=False)
df_predictrions
combine_plots(y_train.PLN, y_test.PLN,
predict=[df_predictrions['lr_features_None_PCA'].values,
df_predictrions['VotingRegressor_2020_12_12_17_14_07'].values,
df_predictrions['AdaBoost_features'].values,
],
index = df_pl2.index,
zoom_start='2016-06',
zoom_end='2016-09')
###Output
_____no_output_____
###Markdown
Naive Forecast
###Code
def naive_forecast(ts, period):
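    # persistence baseline: use the value from `period` steps earlier as the
    # forecast, then score it with RMSE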
actual = ts[period:]
preds = ts.shift(period)[period:]
return np.sqrt(metrics.mean_squared_error(actual, preds))
naive_forecast(y_test, period=1)
###Output
_____no_output_____
###Markdown
Baseline Performance
###Code
for i in range(1,8):
print(f'{i}: {naive_forecast(y_test, period=i)}')
###Output
1: 123.77045641138174
2: 178.90721883574713
3: 226.6395583696488
4: 267.34210887438803
5: 302.2981907208083
6: 336.03467175117703
7: 364.1051000656339
###Markdown
Forecast for Linear Regression
###Code
def prepare_data(X, y, period, split_ratio=0.8):
# define index border between train and test
split_index = int(split_ratio*X.shape[0])
# shift the features; remove missing values by slicing; target should be aligned with features
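    # (hypothetical toy example) X.shift(period) moves each row down by `period`
    # steps: a column [1, 2, 3, 4] shifted by 2 becomes [NaN, NaN, 1, 2], so the
    # slice [period:] drops the NaN rows and the model predicts y at time t from
    # features observed at time t - period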
X_shift = X.shift(period)[period:]
y_shift = y[period:]
# split original features and target on index
X_train, X_test = X_shift[:split_index], X_shift[split_index:]
y_train, y_test = y_shift[:split_index], y_shift[split_index:]
return X_train, y_train, X_test, y_test
models_from_files[10][1].best_params_
models_from_files[10][1].estimator
# pipeline definition
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.9)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('lr', LinearRegression())
]
pipeline = Pipeline(steps=steps)
lr_preds = []
for period in range(1,8):
X_train, y_train, X_test, y_test = prepare_data(X, y, period=period, split_ratio=0.8)
params = {
'transform': [None],
'scale': [None],
'pca__n_components': [None],
'polynomialfeatures__degree': [1],
'lr__normalize': [False],
}
cv = TimeSeriesSplit(n_splits=5).split(X_train)
grid_lr = GridSearchCV(pipeline, param_grid=params, cv=cv, verbose=0)
grid_lr.fit(X_train, y_train)
lr_model = grid_lr.best_estimator_
score = np.sqrt(metrics.mean_squared_error(lr_model.predict(X_test), y_test))
pred = lr_model.predict(X.tail(1))[0][0]
test_val = y_test.tail(1).values[0]
lr_preds.append(pred)
print(f'Period: {period}')
print(f'Forecast: {round(pred)} +/- {score}')
y_test.tail(1)
df_pl.tail(5)
models_from_files[14][1].best_params_
###Output
_____no_output_____
###Markdown
Forecast for SVR - RBF
###Code
# pipeline definition
steps = [
('transform', PowerTransformer(method='yeo-johnson')),
('scale', MinMaxScaler()),
('pca', PCA(n_components=0.9)), # to remove highly correlated features
('polynomialfeatures', PolynomialFeatures(degree=2)),
('svr', SVR(kernel='rbf',C=1))
]
pipeline = Pipeline(steps=steps)
svr_preds = []
for period in range(1,8):
X_train, y_train, X_test, y_test = prepare_data(X, y, period=period, split_ratio=0.8)
params = {
'transform': [None],
'scale': [StandardScaler()],
'pca__n_components': [0.8],
'polynomialfeatures__degree': [1],
'svr__C': [10000],
'svr__gamma': [0.0001]
}
cv = TimeSeriesSplit(n_splits=5).split(X_train)
grid_svr = GridSearchCV(pipeline, param_grid=params, cv=cv, verbose=0)
grid_svr.fit(X_train, y_train)
svr_model = grid_svr.best_estimator_
    score_svr = np.sqrt(metrics.mean_squared_error(svr_model.predict(X_test), y_test))
pred_svr = svr_model.predict(X.tail(1))[0]
test_val = y_test.tail(1).values[0]
svr_preds.append(pred_svr)
print(f'Period: {period}')
print(f'Forecast: {round(pred_svr)} +/- {score_svr}')
###Output
Period: 1
Forecast: 45864 +/- 108.26315018194815
Period: 2
Forecast: 45818 +/- 113.81211113076493
Period: 3
Forecast: 45816 +/- 171.70899022846044
Period: 4
Forecast: 45809 +/- 218.92122370317472
Period: 5
Forecast: 45792 +/- 259.93159611416513
Period: 6
Forecast: 45803 +/- 295.5348683648941
Period: 7
Forecast: 45810 +/- 328.12383539279244
###Markdown
Comparison
###Code
lr_result = np.array(lr_preds[:2])
lr_result
svr_result = np.array(svr_preds[:2])
svr_result
ARIMA_0_1_1 = np.array([45267.42154732, 45268.0400499])
ARIMA_0_1_1
#real data
real = [i[0] for i in df_pl.tail(2).values.tolist()]
real = np.array(real)
real
print(f'Differences for LR model {np.abs(lr_result - real)}')
print(f'Differences for SVR model {np.abs(svr_result - real)}')
print(f'Differences for ARIMA_0_1_1 model {np.abs(ARIMA_0_1_1 - real)}')
###Output
Differences for LR model [265.98383904 265.98383904]
Differences for SVR model [ 601.05085464 1147.87323634]
Differences for ARIMA_0_1_1 model [ 4.42154732 598.0400499 ]
|
assessment_09/LogisticRegression_on_Iris.ipynb | ###Markdown
Logistic Regression on Iris-Virginica---------- Step (1): Environment SetupIn order to find out if the Iris given by its properties *[ 4.8,2.5,5.3,2.4 ]* is an *Iris-Virginica*, let's set up the environment first:
###Code
%matplotlib inline
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
# load the iris data set
iris = datasets.load_iris()
###Output
_____no_output_____
###Markdown
---------- Step (2): Examining the Iris data setThe Iris data set contains the following keys to retrieve information from:
###Code
iris.keys()
###Output
_____no_output_____
###Markdown
... and contains the following features:
###Code
iris['feature_names']
###Output
_____no_output_____
###Markdown
---------- Step (3): Feature Regression CurvesNow that we know what the iris data set consists of, we need to find out which features we can actually use for our regression.For this purpose, we will split up all features from the iris data set and examine them separately. For this, let's define our dependent variable **`y`** first, which we will get from the *`target`* section of the Iris data set.Our selection will be whether a target entry is an *Iris Virginica* (value **`2`** in the Iris data set in this case) or not.The result will be an array telling us whether an entry is an *Iris Virginica* or not `[1=yes, 0=no]`:
###Code
y = (iris["target"] == 2).astype(int)
###Output
_____no_output_____
###Markdown
Now let's create the training data set from each feature.For this purpose, we select all the data from our four identified features *`sepal length`*, *`sepal width`*, *`petal length`* and *`petal width`* and combine those data sets into one array:
###Code
X = np.empty(shape=(4, 150, 1))
X[0] = iris['data'][:, 0:1].reshape(-1, 1) # sepal length data
X[1] = iris['data'][:, 1:2].reshape(-1, 1) # sepal width data
X[2] = iris['data'][:, 2:3].reshape(-1, 1) # petal length data
X[3] = iris['data'][:, 3:4].reshape(-1, 1) # petal width data
###Output
_____no_output_____
###Markdown
In our next step, we are going to train our *Logistic Regression* models - one model per feature - using the target **`y`** as our dependent variable and each feature's data set stored in **`X`**:
###Code
# define logistic regression instances for each feature
log_regs = np.array([\
LogisticRegression(),\
LogisticRegression(),\
LogisticRegression(),\
LogisticRegression()\
])
# trigger 'learning' process for each feature
for i in range(0, X.shape[0]):
x = X[i]
log_regs[i].fit(x, y)
###Output
_____no_output_____
###Markdown
Having all *Logistic Regression models* trained, let's determine the minimum and maximum for each feature, in order to create correct testing samples from those ranges:
###Code
X_minmax = np.empty(shape=(4,2))
for i in range(0, X.shape[0]):
X_minmax[i] = np.array([X[i].min(), X[i].max()])
X_minmax
###Output
_____no_output_____
###Markdown
Now that we have all feature minima and maxima in place, we can calculate samples for each feature matching those bounds (`1000 samples` per feature range):
###Code
sample_size = 1000
X_samples = np.empty(shape=(4, 1000, 1))
for i in range(0, X_minmax.shape[0]):
X_samples[i] = np.linspace(np.floor(X_minmax[i][0]), np.ceil(X_minmax[i][1]), sample_size).reshape(-1, 1)
###Output
_____no_output_____
###Markdown
With all those feature samples calculated, we can move further and calculate the target probabilities for each feature-sample set, matching the case that the dependent variable **`y`** is `True` (remember: we want to determine if the given Iris is an *Iris-Virginica*)
###Code
y_probs = np.empty(shape=(4, 1000, 2))
for i in range(0, log_regs.shape[0]):
samples = X_samples[i]
log_reg = log_regs[i]
y_probs[i] = log_reg.predict_proba(samples)
###Output
_____no_output_____
###Markdown
At this point we are almost done finding out which features we can use for our final regression. We have everything calculated so far to be able to draw our diagrams visualizing the regression curve for each feature:
###Code
plt.figure(figsize=(24, 16))
y_test = (iris["target"] == 2).astype(int)
for i in range(0, len(iris['feature_names'])):
boundary_samples = X_samples[i][y_probs[i][:, 1] >= 0.5]
decision_boundary = 0
x_axis_min = np.floor(X_minmax[i][0])
x_axis_max = np.ceil(X_minmax[i][1])
plt.subplot(2, 2, i+1)
plt.plot(X[i][y_test==1], y_test[y_test==1], marker='^', color='blueviolet', linestyle='None') # markers for Iris-Virginica
plt.plot(X[i][y_test==0], y_test[y_test==0], marker="s", color='deepskyBlue', linestyle='None') # markers for NO Iris-Virginica
plt.plot(X_samples[i], y_probs[i][:, 1], color='blueviolet', linewidth=2, label='Iris-Virginica')
plt.plot(X_samples[i], y_probs[i][:, 0], 'deepskyblue', linewidth=2, label='Not Iris-Virginica')
# determine if we have a decision boundary available
if (len(boundary_samples) > 0):
decision_boundary = boundary_samples[0]
plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2)
plt.text(decision_boundary+0.02, 0.15, 'Decision boundary\n (%.2f)' % decision_boundary, fontsize=16, color="k", ha="center")
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='blueviolet', ec='blueviolet')
plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='deepskyBlue', ec='deepskyBlue')
else:
# no decision boundary available --> align axis to better illustrate that both curves don't intersect
x_axis_min = 0
x_axis_max = 8
plt.axis([x_axis_min, x_axis_max, -0.02, 1.02])
plt.legend(loc='center left', fontsize=16)
plt.xlabel(iris['feature_names'][i], fontsize=16)
plt.ylabel('Probability', fontsize=14)
###Output
_____no_output_____
###Markdown
The diagrams above indicate that all features - except *`sepal width`* - have a decision boundary.This means that we can use the features *`sepal length`*, *`petal length`* and *`petal width`* to predict whether the Iris **`[4.8, 2.5, 5.3, 2.4]`** is an *Iris-Virginica* or not: ------------- Step (4): Performing the actual RegressionNow that we know which features we can use for our regression, let's re-define our dependent variable **`y`** as well as the training data set **`X`**, which will include the features *`sepal length`*, *`petal length`* and *`petal width`*:
###Code
y = iris.target
X = np.column_stack((iris.data[:, 0:1], iris.data[:, 2:4]))  # sepal length, petal length, petal width (sepal width is dropped)
X[0:5, :]
###Output
_____no_output_____
###Markdown
With this information in place, we can perform our regression on the givenIris sample ***`[4.8, 2.5, 5.3, 2.4]`*** which we want to classify:
###Code
# instantiate the Logistic Regression classifier
log_reg_classifier = LogisticRegression()
log_reg_classifier.fit(X, y)
# instantiate the Iris sample [4.8, 2.5, 5.3, 2.4], leaving out the 2.5 (sepal width)
iris_sample = np.array([4.8, 5.3, 2.4]).reshape(1, -1)
iris_sample
###Output
_____no_output_____
###Markdown
Having the regression classifier as well as the the Iris Sample to be examined instantiated, we can now calculate the probabilities for this sample:
###Code
iris_sample_probs = log_reg_classifier.predict_proba(iris_sample)
iris_sample_probs
###Output
_____no_output_____
###Markdown
... and can retrieve a prediction from the Logistic Regression classifier what Iris class the sample might be:
###Code
sample_prediction = log_reg_classifier.predict(iris_sample)
sample_prediction
###Output
_____no_output_____
###Markdown
------ Result of ValidationSo let's answer the question: is the Iris defined by its properties ***`[4.8, 2.5, 5.3, 2.4]`*** an *Iris-Virginica*?
###Code
# compare predictions for sepal length, petal length and petal width
# <-- sepal width has no decision boundary, so it provides no evidence for the decision
predicted_sample_class_name = iris.target_names[int(sample_prediction[0])]
predicted_sample_class_name
###Output
_____no_output_____ |
Learning_Tensorflow/Advanced_Tensorflow/Core_tf/advanced_tf_autograd_training.ipynb | ###Markdown
AutoGrad API in Tensorflow
###Code
%tensorflow_version 2.x
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Gradient Tape TensorFlow provides the tf.GradientTape API for automatic differentiation - computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a "recorded" computation using reverse mode differentiation.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as tape:
tape.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
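# z = (sum(x))**2 with x a 2x2 matrix of ones, so dz/dx = 2 * sum(x) = 8.0 for every element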
dz_dx = tape.gradient(z, x)
print(dz_dx)
###Output
tf.Tensor(
[[8. 8.]
[8. 8.]], shape=(2, 2), dtype=float32)
###Markdown
You can also request gradients of the output with respect to intermediate values computed during a "recorded" tf.GradientTape context.
###Code
x = tf.ones((2, 2))
with tf.GradientTape() as tape:
tape.watch(x)
y = tf.reduce_sum(x)
z = tf.multiply(y, y)
dz_dy = tape.gradient(z, y)
print(dz_dy)
###Output
tf.Tensor(8.0, shape=(), dtype=float32)
###Markdown
By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method as resources are released when the tape object is garbage collected.
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
tape.watch(x)
y = x * x
z = y * y
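# with x = 3: y = x**2 = 9 and z = y**2 = x**4 = 81, so dz/dx = 4*x**3 = 108, dz/dy = 2*y = 18, dy/dx = 2*x = 6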
dz_dx = tape.gradient(z, x)
dz_dy = tape.gradient(z, y)
dy_dx = tape.gradient(y, x)
print(dz_dx, dz_dy, dy_dx)
del tape
###Output
_____no_output_____
###Markdown
Recording control flowBecause tapes record operations as they are executed, Python control flow (using ifs and whiles for example) is naturally handled:
###Code
def f(x, y):
output = 1.0
for i in range(y):
if i > 1 and i < 5:
output = tf.multiply(output, x)
return output
def grad(x, y):
with tf.GradientTape() as tape:
tape.watch(x)
out = f(x, y)
return tape.gradient(out, x)
x = tf.convert_to_tensor(2.0)
x
print(grad(x, 6))
print(grad(x, 5))
print(grad(x, 4))
###Output
tf.Tensor(4.0, shape=(), dtype=float32)
###Markdown
Higher Order Gradients Operations inside of the GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
###Code
x = tf.Variable(1.0)
with tf.GradientTape() as tape1:
with tf.GradientTape() as tape2:
y = x * x * x
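    # y = x**3 at x = 1, so dy/dx = 3*x**2 = 3 and d2y/dx2 = 6*x = 6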
dy_dx = tape2.gradient(y, x)
d2y_dx2 = tape1.gradient(dy_dx, x)
print(dy_dx)
print(d2y_dx2)
###Output
tf.Tensor(3.0, shape=(), dtype=float32)
tf.Tensor(6.0, shape=(), dtype=float32)
###Markdown
Using @tf.function In TensorFlow 2.0, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability.To get peak performance and to make your model deployable anywhere, use tf.function to make graphs out of your programs. Thanks to AutoGraph, a surprising amount of Python code just works with tf.function, but there are still pitfalls to be wary of.The main takeaways and recommendations are: Don't rely on Python side effects like object mutation or list appends. tf.function works best with TensorFlow ops, rather than NumPy ops or Python primitives. When in doubt, use the `for x in y` idiom. (A small example of the list-append pitfall follows the helper code below.)
###Code
import traceback
import contextlib
# Some helper code to demonstrate the kinds of errors you might encounter.
@contextlib.contextmanager
def assert_raises(error_class):
try:
yield
except error_class as e:
print('Caught expected exception \n {}:'.format(error_class))
traceback.print_exc(limit=2)
except Exception as e:
raise e
else:
raise Exception('Expected {} to be raised but no error was raised!'.format(
error_class))
###Output
_____no_output_____
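###Markdown
As a small, illustrative sketch of the list-append pitfall mentioned above (the function and variable names here are made up for the example): Python side effects run only while the function is being traced, not on every call.
###Code
external_list = []

@tf.function
def append_and_increment(x):
    external_list.append(x)  # Python side effect: executed only during tracing
    return x + 1

append_and_increment(tf.constant(1))
append_and_increment(tf.constant(2))

# Both calls share one trace (same input signature), so the list ends up with a
# single symbolic tensor instead of one entry per call.
print(len(external_list))
###Output
_____no_output_____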
###Markdown
BasicsA tf.function you define is just like a core TensorFlow operation: You can execute it eagerly; you can use it in a graph; it has gradients; and so on.
###Code
@tf.function
def add(a, b):
return a + b
add(tf.ones([2, 2]), tf.ones([2, 2]))
v = tf.Variable(1.0)
with tf.GradientTape() as tape:
result = add(v, 10)
tape.gradient(result, v)
###Output
_____no_output_____
###Markdown
You can use functions inside functions
###Code
@tf.function
def dense_layer(x, w, b):
return add(tf.matmul(x, w), b)
dense_layer(tf.ones([3, 2]), tf.ones([2, 2]), tf.ones([2]))
###Output
_____no_output_____ |
notebooks/chap14.ipynb | ###Markdown
Survival Analysis Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted.To describe these "survival times", we'll use the Weibull distribution. The Weibull DistributionThe [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution) is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range.SciPy provides several versions of the Weibull distribution; the one we'll use is called `weibull_min`.To make the interface consistent with our notation, I'll wrap it in a function that takes as parameters $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape.
###Code
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
###Output
_____no_output_____
###Markdown
As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
###Code
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
###Output
_____no_output_____
###Markdown
The result is an object that represents the distribution.Here's what the Weibull CDF looks like with those parameters.
###Code
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time',
ylabel='CDF',
title='CDF of a Weibull distribution')
###Output
_____no_output_____
###Markdown
`actual_dist` provides `rvs`, which we can use to generate a random sample from this distribution.
###Code
np.random.seed(17)
data = actual_dist.rvs(10)
data
###Output
_____no_output_____
###Markdown
So, given the parameters of the distribution, we can generate a sample.Now let's see if we can go the other way: given the sample, we'll estimate the parameters.Here's a uniform prior distribution for $\lambda$:
###Code
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
###Output
_____no_output_____
###Markdown
And a uniform prior for $k$:
###Code
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
I'll use `make_joint` to make a joint prior distribution for the two parameters.
###Code
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
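###Markdown
Under the hood, `make_joint` (from the book's `utils` module) essentially computes the outer product of the two marginal distributions. Here is a sketch of that idea, written as a separate function so it is clearly an illustration rather than the library code:
###Code
import numpy as np
import pandas as pd

def make_joint_sketch(pmf1, pmf2):
    """Outer product of two Pmfs: pmf1's quantities become the columns,
    pmf2's quantities become the rows."""
    X, Y = np.meshgrid(pmf1, pmf2)
    return pd.DataFrame(X * Y, columns=pmf1.qs, index=pmf2.qs)
###Output
_____no_output_____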
###Markdown
The result is a `DataFrame` that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.Now I'll use `meshgrid` to make a 3-D mesh with $\lambda$ on the first axis (`axis=0`), $k$ on the second axis (`axis=1`), and the data on the third axis (`axis=2`).
###Code
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
###Output
_____no_output_____
###Markdown
Now we can use `weibull_dist` to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
###Code
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
###Output
_____no_output_____
###Markdown
The likelihood of the data is the product of the probability densities along `axis=2`.
###Code
likelihood = densities.prod(axis=2)
likelihood.sum()
###Output
_____no_output_____
###Markdown
Now we can compute the posterior distribution in the usual way.
###Code
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.It takes a joint prior distribution and the data, and returns a joint posterior distribution.
###Code
def update_weibull(prior, data):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's how we use it.
###Code
posterior = update_weibull(prior, data)
###Output
_____no_output_____
###Markdown
And here's a contour plot of the joint posterior distribution.
###Code
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
###Output
_____no_output_____
###Markdown
It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8. Marginal DistributionsTo be more precise about these ranges, we can extract the marginal distributions:
###Code
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
###Output
_____no_output_____
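###Markdown
For reference, `marginal` (also from `utils`) simply sums the joint distribution along one axis and wraps the result in a `Pmf`. A sketch:
###Code
from empiricaldist import Pmf

def marginal_sketch(joint, axis):
    """Sum a joint DataFrame along one axis and return the result as a Pmf.

    With lambda across the columns and k down the rows, axis=0 sums over k
    (leaving the distribution of lambda) and axis=1 sums over lambda
    (leaving the distribution of k).
    """
    return Pmf(joint.sum(axis=axis))
###Output
_____no_output_____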
###Markdown
And compute the posterior means and 90% credible intervals.
###Code
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
###Output
_____no_output_____
###Markdown
The vertical gray line shows the actual value of $\lambda$.Here's the marginal posterior distribution for $k$.
###Code
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely.But for both parameters, the actual value falls in the credible interval.
###Code
print(lam, posterior_lam.credible_interval(0.9))
print(k, posterior_k.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
Incomplete DataIn the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know).But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future.As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted.Some dogs might be snapped up immediately; others might have to wait longer.The people who operate the shelter might want to make inferences about the distribution of these residence times.Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval.I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
###Code
np.random.seed(19)
start = np.random.uniform(0, 8, size=10)
start
###Output
_____no_output_____
###Markdown
Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.We can generate a sample from that distribution like this:
###Code
np.random.seed(17)
duration = actual_dist.rvs(10)
duration
###Output
_____no_output_____
###Markdown
I'll use these values to construct a `DataFrame` that contains the arrival and departure times for each dog, called `start` and `end`.
###Code
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
###Output
_____no_output_____
###Markdown
For display purposes, I'll sort the rows of the `DataFrame` by arrival time.
###Code
obs = obs.sort_values(by='start', ignore_index=True)
obs
###Output
_____no_output_____
###Markdown
Notice that several of the lifelines extend past the observation window of 8 weeks.So if we observed this system at the beginning of Week 8, we would have incomplete information.Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.I'll simulate this incomplete data by identifying the lifelines that extend past the observation window:
###Code
censored = obs['end'] > 8
###Output
_____no_output_____
###Markdown
`censored` is a Boolean Series that is `True` for lifelines that extend past Week 8.Data that is not available is sometimes called "censored" in the sense that it is hidden from us.But in this case it is hidden because we don't know the future, not because someone is censoring it.For the lifelines that are censored, I'll modify `end` to indicate when they are last observed and `status` to indicate that the observation is incomplete.
###Code
obs.loc[censored, 'end'] = 8
obs.loc[censored, 'status'] = 0
###Output
_____no_output_____
###Markdown
Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
###Code
def plot_lifelines(obs):
"""Plot a line for each observation.
obs: DataFrame
"""
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index',
title='Lifelines showing censored and uncensored observations')
plt.gca().invert_yaxis()
plot_lifelines(obs)
###Output
_____no_output_____
###Markdown
And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
###Code
obs['T'] = obs['end'] - obs['start']
###Output
_____no_output_____
###Markdown
What we have simulated is the data that would be available at the beginning of Week 8. Using Incomplete DataNow, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.First I'll split the data into two sets: `data1` contains residence times for dogs whose arrival and departure times are known; `data2` contains incomplete residence times for dogs who were not adopted during the observation interval.
###Code
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
data1
data2
###Output
_____no_output_____
###Markdown
For the complete data, we can use `update_weibull`, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
###Code
posterior1 = update_weibull(prior, data1)
###Output
_____no_output_____
###Markdown
For the incomplete data, we have to think a little harder.At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than `T`.And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds `T`.The following function is identical to `update_weibull` except that it uses `sf`, which computes the survival function, rather than `pdf`.
###Code
def update_weibull_incomplete(prior, data):
"""Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# evaluate the survival function
probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = probs.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's the update with the incomplete data.
###Code
posterior2 = update_weibull_incomplete(posterior1, data2)
###Output
_____no_output_____
###Markdown
And here's what the joint posterior distribution looks like after both updates.
###Code
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
###Output
_____no_output_____
###Markdown
Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.We can see that more clearly by looking at the marginal distributions.
###Code
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
###Code
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
###Output
_____no_output_____
###Markdown
The distribution with some incomplete data is substantially wider.As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.Here's the posterior marginal distribution for $k$:
###Code
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.This example is based on data I generated; in the next section we'll do a similar analysis with real data. Light BulbsIn 2007 [researchers ran an experiment](https://www.researchgate.net/publication/225450325_Renewal_Rate_of_Filament_Lamps_Theory_and_Experiment) to characterize the distribution of lifetimes for light bulbs.Here is their description of the experiment:> An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m.>> The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed.
###Code
download('https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv')
###Output
_____no_output_____
###Markdown
We can load the data into a `DataFrame` like this:
###Code
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Column `h` contains the times when bulbs failed in hours; Column `f` contains the number of bulbs that failed at each time.We can represent these values and frequencies using a `Pmf`, like this:
###Code
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
###Output
_____no_output_____
###Markdown
Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously. The average lifetime is about 1400 h.
###Code
pmf_bulb.mean()
###Output
_____no_output_____
###Markdown
Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.Again, I'll start with uniform priors for $\lambda$ and $k$:
###Code
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.They will run faster with fewer values, but the results will be less precise.As usual, we can use `make_joint` to make the prior joint distribution.
###Code
prior_bulb = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
###Markdown
Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.We can use `np.repeat` to transform the data.
###Code
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
###Output
_____no_output_____
###Markdown
Now we can use `update_weibull` to do the update.
###Code
posterior_bulb = update_weibull(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
Here's what the posterior joint distribution looks like:
###Code
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
To summarize this joint posterior distribution, we'll compute the posterior mean lifetime. Posterior MeansTo compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
###Code
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
###Output
_____no_output_____
###Markdown
Now for each pair of parameters we'll use `weibull_dist` to compute the mean.
###Code
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
###Output
_____no_output_____
###Markdown
The result is an array with the same dimensions as the joint distribution.Now we need to weight each mean with the corresponding probability from the joint posterior.
###Code
prod = means * posterior_bulb
###Output
_____no_output_____
###Markdown
Finally we compute the sum of the weighted means.
###Code
prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
Based on the posterior distribution, we think the mean lifetime is about 1413 hours.The following function encapsulates these steps:
###Code
def joint_weibull_mean(joint):
"""Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
Incomplete InformationThe previous update was not quite right, because it assumed each light bulb died at the instant we observed it. According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
###Code
def update_weibull_between(prior, data, dt=12):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
cdf1 = dist.cdf(data_mesh)
    cdf2 = dist.cdf(data_mesh-dt)
likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.Here's how we run the update.
###Code
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
Visually this result is almost identical to what we got using the PDF.And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.To see whether it makes any difference at all, let's check the posterior means.
###Code
joint_weibull_mean(posterior_bulb)
joint_weibull_mean(posterior_bulb2)
###Output
_____no_output_____
###Markdown
When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval. Posterior Predictive DistributionSuppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead?If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution.For example, if we know that $\lambda=1550$ and $k=4.25$, we can use `weibull_dist` to compute the probability that a bulb dies before you return:
###Code
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
###Output
_____no_output_____
###Markdown
If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
###Code
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
###Output
_____no_output_____
###Markdown
But that's based on the assumption that we know $\lambda$ and $k$, and we don't.Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.We can use `make_mixture` to compute the posterior predictive distribution. It doesn't work with joint distributions, but we can convert the `DataFrame` that represents a joint distribution to a `Series`, like this:
###Code
posterior_series = posterior_bulb.stack()
posterior_series.head()
###Output
_____no_output_____
###Markdown
The result is a `Series` with a `MultiIndex` that contains two "levels": the first level contains the values of `k`; the second contains the values of `lam`.With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair.
###Code
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
###Output
_____no_output_____
###Markdown
Now we can use `make_mixture`, passing as parameters the posterior probabilities in `posterior_series` and the sequence of binomial distributions in `pmf_seq`.
###Code
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
###Output
_____no_output_____
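###Markdown
Conceptually, `make_mixture` weights each predictive `Pmf` by the corresponding posterior probability and adds them up. A sketch of that computation (intended as an illustration of the idea, not necessarily the exact library code):
###Code
import numpy as np
import pandas as pd
from empiricaldist import Pmf

def make_mixture_sketch(pmf, pmf_seq):
    """Weighted sum of the Pmfs in pmf_seq, using pmf as the weights."""
    df = pd.DataFrame(pmf_seq).fillna(0).transpose()  # one column per hypothesis
    df *= np.array(pmf)                               # scale by the posterior probabilities
    return Pmf(df.sum(axis=1))                        # add up the weighted columns
###Output
_____no_output_____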
###Markdown
Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
###Code
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
###Output
_____no_output_____
###Markdown
The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs. SummaryThis chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval.These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime.The methods in this chapter work with any distribution with two parameters.In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena.And in the next chapter we'll move on to models with three parameters! Exercises **Exercise:** Using data about the lifetimes of light bulbs, we computed the posterior distribution from the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours.Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs.Update the posterior distribution based on this data.How much does it change the posterior mean? Suggestions:1. Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters.2. For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100.3. Use those likelihoods to update the posterior distribution.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle.Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day.According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter gamma distribution.When we worked with the one-parameter gamma distribution in a previous chapter, we used the Greek letter $\alpha$ for the parameter.For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or `theta` for the "scale parameter". The following function takes these parameters and returns a `gamma` object from SciPy.
###Code
import scipy.stats
def gamma_dist(k, theta):
"""Makes a gamma object.
k: shape parameter
theta: scale parameter
returns: gamma object
"""
return scipy.stats.gamma(k, scale=theta)
###Output
_____no_output_____
###Markdown
Now we need some data.The following cell downloads data I collected from the National Oceanic and Atmospheric Administration ([NOAA](http://www.ncdc.noaa.gov/cdo-web/search)) for Seattle, Washington in May 2020.
###Code
# Load the data file
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv')
###Output
_____no_output_____
###Markdown
Now we can load it into a `DataFrame`:
###Code
weather = pd.read_csv('2203951.csv')
weather.head()
###Output
_____no_output_____
###Markdown
I'll make a Boolean Series to indicate which days it rained.
###Code
rained = weather['PRCP'] > 0
rained.sum()
###Output
_____no_output_____
###Markdown
And select the total rainfall on the days it rained.
###Code
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
###Output
_____no_output_____
###Markdown
Here's what the CDF of the data looks like.
###Code
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
###Output
_____no_output_____
###Markdown
The maximum is 1.14 inches of rain in one day.To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model. I suggest you proceed in the following steps:1. Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0.2. Use the observed rainfalls to update the distribution of parameters.3. Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 14Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code from previous chapters
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
init, t0, t_end = system.init, system.t0, system.t_end
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
def sweep_beta(beta_array, gamma):
"""Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep
def sweep_parameters(beta_array, gamma_array):
"""Sweep a range of values for beta and gamma.
beta_array: array of infection rates
gamma_array: array of recovery rates
returns: SweepFrame with one row for each beta
and one column for each gamma
"""
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
###Output
_____no_output_____
###Markdown
Contact number Here's the `SweepFrame` from the previous chapter, with one row for each value of `beta` and one column for each value of `gamma`.
###Code
beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1]
gamma_array = [0.2, 0.4, 0.6, 0.8]
frame = sweep_parameters(beta_array, gamma_array)
frame.head()
frame.shape
###Output
_____no_output_____
###Markdown
The following loop shows how we can loop through the columns and rows of the `SweepFrame`. With 11 rows and 4 columns, there are 44 elements.
###Code
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
print(beta, gamma, frac_infected)
###Output
0.1 0.2 0.010756340768063644
0.2 0.2 0.11898421353185373
0.3 0.2 0.5890954199973404
0.4 0.2 0.8013385277185551
0.5 0.2 0.8965769637207062
0.6 0.2 0.942929291399791
0.7 0.2 0.966299311298026
0.8 0.2 0.9781518959989762
0.9 0.2 0.9840568957948106
1.0 0.2 0.9868823507202488
1.1 0.2 0.988148177093735
0.1 0.4 0.0036416926514175607
0.2 0.4 0.010763463373360094
0.3 0.4 0.030184952469116566
0.4 0.4 0.131562924303259
0.5 0.4 0.3964094037932606
0.6 0.4 0.5979016626615987
0.7 0.4 0.7284704154876106
0.8 0.4 0.8144604459153759
0.9 0.4 0.8722697237137128
1.0 0.4 0.9116692168795855
1.1 0.4 0.9386802509510287
0.1 0.6 0.002190722188881611
0.2 0.6 0.005446688837466351
0.3 0.6 0.010771139974975585
0.4 0.6 0.020916599304195316
0.5 0.6 0.04614035896610047
0.6 0.6 0.13288938996079536
0.7 0.6 0.3118432512847451
0.8 0.6 0.47832565854255393
0.9 0.6 0.605687582114665
1.0 0.6 0.7014254793376209
1.1 0.6 0.7738176405451065
0.1 0.8 0.0015665254038139675
0.2 0.8 0.003643953969662994
0.3 0.8 0.006526163529085194
0.4 0.8 0.010779807499500693
0.5 0.8 0.017639902596349066
0.6 0.8 0.030291868201986594
0.7 0.8 0.05882382948158804
0.8 0.8 0.13358889291095588
0.9 0.8 0.2668895539427739
1.0 0.8 0.40375121210421994
1.1 0.8 0.519583469821867
###Markdown
Now we can wrap that loop in a function and plot the results. For each element of the `SweepFrame`, we have `beta`, `gamma`, and `frac_infected`, and we plot `beta/gamma` on the x-axis and `frac_infected` on the y-axis.
###Code
def plot_sweep_frame(frame):
"""Plot the values from a SweepFrame.
For each (beta, gamma), compute the contact number,
beta/gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
plot(beta/gamma, frac_infected, 'ro')
frame
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
plot_sweep_frame(frame)
decorate(xlabel='Contact number (beta/gamma)',
ylabel='Fraction infected')
savefig('figs/chap14-fig01.pdf')
###Output
Saving figure to file figs/chap14-fig01.pdf
###Markdown
It turns out that the ratio `beta/gamma`, called the "contact number", is sufficient to predict the total number of infections; we don't have to know `beta` and `gamma` separately. We can see that in the previous plot: when we plot the fraction infected versus the contact number, the results fall close to a curve. Analysis In the book we figured out the relationship between $c$ and $s_{\infty}$ analytically. Now we can compute it for a range of values:
###Code
s_inf_array = linspace(0.0001, 0.9999, 101);
c_array = log(s_inf_array) / (s_inf_array - 1);
###Output
_____no_output_____
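###Markdown
An aside (not in the original text): the computation of `c_array` above implements the SIR final-size relation. Assuming almost everyone is susceptible at $t=0$, the final susceptible fraction $s_{\infty}$ satisfies $\log s_{\infty} = c\,(s_{\infty} - 1)$, so $c = \log(s_{\infty}) / (s_{\infty} - 1)$. Given a contact number, we can look up the corresponding $s_{\infty}$ by finding the closest entry; `c = 2.0` below is just an example value.
###Code
# A minimal sketch (an aside): look up s_inf for a given contact number c
# by finding the closest entry in c_array.
c = 2.0
i = np.argmin(np.abs(c_array - c))
s_inf_array[i], 1 - s_inf_array[i]  # fraction still susceptible, fraction infected
###Output
_____no_output_____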
###Markdown
`total_infected` is the change in $s$ from the beginning to the end.
###Code
frac_infected = 1 - s_inf_array
frac_infected_series = Series(frac_infected, index=c_array);
###Output
_____no_output_____
###Markdown
Now we can plot the analytic results and compare them to the simulations.
###Code
plot_sweep_frame(frame)
plot(frac_infected_series, label='Analysis')
decorate(xlabel='Contact number (c)',
ylabel='Fraction infected')
savefig('figs/chap14-fig02.pdf')
###Output
Saving figure to file figs/chap14-fig02.pdf
###Markdown
The agreement is generally good, except for values of `c` less than 1. Exercises **Exercise:** If we didn't know about contact numbers, we might have explored other possibilities, like the difference between `beta` and `gamma`, rather than their ratio.Write a version of `plot_sweep_frame`, called `plot_sweep_frame_difference`, that plots the fraction infected versus the difference `beta-gamma`.What do the results look like, and what does that imply?
###Code
frame
def plot_sweep_frame_difference(frame):
    """Plot fraction infected versus the difference beta - gamma.
    frame: SweepFrame with one row per beta, one column per gamma
    """
    for gamma in frame.columns:
        column = frame[gamma]
        for beta in column.index:
            frac_infected = column[beta]
            plot(beta - gamma, frac_infected, 'ro')
plot_sweep_frame_difference(frame)
decorate(xlabel='Difference (beta - gamma)',
         ylabel='Fraction infected')
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point.What is your best estimate of `c`?Hint: if you print `frac_infected_series`, you can read off the answer.
###Code
print(frac_infected_series)
abs(frac_infected_series-0.26).idxmin()
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 14Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code from previous chapters
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
    beta: contact rate per day
    gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
init, t0, t_end = system.init, system.t0, system.t_end
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
def sweep_beta(beta_array, gamma):
"""Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep
def sweep_parameters(beta_array, gamma_array):
"""Sweep a range of values for beta and gamma.
beta_array: array of infection rates
gamma_array: array of recovery rates
returns: SweepFrame with one row for each beta
and one column for each gamma
"""
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
###Output
_____no_output_____
###Markdown
Contact number Here's the `SweepFrame` from the previous chapter, with one row for each value of `beta` and one column for each value of `gamma`.
###Code
beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1]
gamma_array = [0.2, 0.4, 0.6, 0.8]
frame = sweep_parameters(beta_array, gamma_array)
frame.head()
frame.shape
###Output
_____no_output_____
###Markdown
The following loop shows how we can loop through the columns and rows of the `SweepFrame`. With 11 rows and 4 columns, there are 44 elements.
###Code
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
print(beta, gamma, frac_infected)
###Output
_____no_output_____
###Markdown
Now we can wrap that loop in a function and plot the results. For each element of the `SweepFrame`, we have `beta`, `gamma`, and `frac_infected`, and we plot `beta/gamma` on the x-axis and `frac_infected` on the y-axis.
###Code
def plot_sweep_frame(frame):
"""Plot the values from a SweepFrame.
For each (beta, gamma), compute the contact number,
beta/gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
plot(beta/gamma, frac_infected, 'ro')
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
plot_sweep_frame(frame)
decorate(xlabel='Contact number (beta/gamma)',
ylabel='Fraction infected')
savefig('figs/chap14-fig01.pdf')
###Output
_____no_output_____
###Markdown
It turns out that the ratio `beta/gamma`, called the "contact number" is sufficient to predict the total number of infections; we don't have to know `beta` and `gamma` separately.We can see that in the previous plot: when we plot the fraction infected versus the contact number, the results fall close to a curve. Analysis In the book we figured out the relationship between $c$ and $s_{\infty}$ analytically. Now we can compute it for a range of values:
###Code
s_inf_array = linspace(0.0001, 0.9999, 101);
c_array = log(s_inf_array) / (s_inf_array - 1);
###Output
_____no_output_____
###Markdown
`total_infected` is the change in $s$ from the beginning to the end.
###Code
frac_infected = 1 - s_inf_array
frac_infected_series = Series(frac_infected, index=c_array);
###Output
_____no_output_____
###Markdown
Now we can plot the analytic results and compare them to the simulations.
###Code
plot_sweep_frame(frame)
plot(frac_infected_series, label='Analysis')
decorate(xlabel='Contact number (c)',
ylabel='Fraction infected')
savefig('figs/chap14-fig02.pdf')
###Output
_____no_output_____
###Markdown
The agreement is generally good, except for values of `c` less than 1. Exercises **Exercise:** If we didn't know about contact numbers, we might have explored other possibilities, like the difference between `beta` and `gamma`, rather than their ratio.Write a version of `plot_sweep_frame`, called `plot_sweep_frame_difference`, that plots the fraction infected versus the difference `beta-gamma`.What do the results look like, and what does that imply?
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point.What is your best estimate of `c`?Hint: if you print `frac_infected_series`, you can read off the answer.
###Code
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Survival Analysis Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted.To describe these "survival times", we'll use the Weibull distribution. The Weibull DistributionThe [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution) is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range.SciPy provides several versions of the Weibull distribution; the one we'll use is called `weibull_min`.To make the interface consistent with our notation, I'll wrap it in a function that takes as parameters $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape.
###Code
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
###Output
_____no_output_____
###Markdown
As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
###Code
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
###Output
_____no_output_____
###Markdown
The result is an object that represents the distribution.Here's what the Weibull CDF looks like with those parameters.
###Code
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time',
ylabel='CDF',
title='CDF of a Weibull distribution')
###Output
_____no_output_____
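###Markdown
An aside, not in the original text: with this wrapper, `weibull_dist(lam, k)` has CDF $1 - \exp(-(x/\lambda)^k)$, because `weibull_min` takes the shape parameter first and the scale as a keyword. Here's a quick check at an arbitrary point:
###Code
# Quick check (an aside): the wrapped CDF should match the closed form.
x = 2.0
actual_dist.cdf(x), 1 - np.exp(-(x / lam) ** k)
###Output
_____no_output_____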
###Markdown
`actual_dist` provides `rvs`, which we can use to generate a random sample from this distribution.
###Code
np.random.seed(17)
data = actual_dist.rvs(10)
data
###Output
_____no_output_____
###Markdown
So, given the parameters of the distribution, we can generate a sample.Now let's see if we can go the other way: given the sample, we'll estimate the parameters.Here's a uniform prior distribution for $\lambda$:
###Code
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
###Output
_____no_output_____
###Markdown
And a uniform prior for $k$:
###Code
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
I'll use `make_joint` to make a joint prior distribution for the two parameters.
###Code
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.Now I'll use `meshgrid` to make a 3-D mesh with $\lambda$ on the first axis (`axis=0`), $k$ on the second axis (`axis=1`), and the data on the third axis (`axis=2`).
###Code
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
###Output
_____no_output_____
###Markdown
Now we can use `weibull_dist` to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
###Code
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
###Output
_____no_output_____
###Markdown
The likelihood of the data is the product of the probability densities along `axis=2`.
###Code
likelihood = densities.prod(axis=2)
likelihood.sum()
###Output
_____no_output_____
###Markdown
Now we can compute the posterior distribution in the usual way.
###Code
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.It takes a joint prior distribution and the data, and returns a joint posterior distribution.
###Code
def update_weibull(prior, data):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's how we use it.
###Code
posterior = update_weibull(prior, data)
###Output
_____no_output_____
###Markdown
And here's a contour plot of the joint posterior distribution.
###Code
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
###Output
_____no_output_____
###Markdown
It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8. Marginal DistributionsTo be more precise about these ranges, we can extract the marginal distributions:
###Code
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
###Output
_____no_output_____
###Markdown
And compute the posterior means and 90% credible intervals.
###Code
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
###Output
_____no_output_____
###Markdown
The vertical gray line shows the actual value of $\lambda$. Here's the marginal posterior distribution for $k$.
###Code
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely. But for both parameters, the actual value falls in the credible interval.
###Code
print(lam, posterior_lam.credible_interval(0.9))
print(k, posterior_k.credible_interval(0.9))
###Output
_____no_output_____
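###Markdown
As an additional summary (an aside, not in the original text), we can also compute the posterior means of the marginal distributions, using the `mean` method the `Pmf` objects provide.
###Code
# Posterior means of the marginal distributions (an aside).
posterior_lam.mean(), posterior_k.mean()
###Output
_____no_output_____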
###Markdown
Incomplete DataIn the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know).But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future.As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted.Some dogs might be snapped up immediately; others might have to wait longer.The people who operate the shelter might want to make inferences about the distribution of these residence times.Suppose you monitor arrivals and departures over 8 weeks and 10 dogs arrive during that interval.I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
###Code
np.random.seed(19)
start = np.random.uniform(0, 8, size=10)
start
###Output
_____no_output_____
###Markdown
Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.We can generate a sample from that distribution like this:
###Code
np.random.seed(17)
duration = actual_dist.rvs(10)
duration
###Output
_____no_output_____
###Markdown
I'll use these values to construct a `DataFrame` that contains the arrival and departure times for each dog, called `start` and `end`.
###Code
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
###Output
_____no_output_____
###Markdown
For display purposes, I'll sort the rows of the `DataFrame` by arrival time.
###Code
obs = obs.sort_values(by='start', ignore_index=True)
obs
###Output
_____no_output_____
###Markdown
Notice that several of the lifelines extend past the observation window of 8 weeks.So if we observed this system at the beginning of Week 8, we would have incomplete information.Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.I'll simulate this incomplete data by identifying the lifelines that extend past the observation window:
###Code
censored = obs['end'] > 8
###Output
_____no_output_____
###Markdown
`censored` is a Boolean Series that is `True` for lifelines that extend past Week 8.Data that is not available is sometimes called "censored" in the sense that it is hidden from us.But in this case it is hidden because we don't know the future, not because someone is censoring it.For the lifelines that are censored, I'll modify `end` to indicate when they are last observed and `status` to indicate that the observation is incomplete.
###Code
obs.loc[censored, 'end'] = 8
obs.loc[censored, 'status'] = 0
###Output
_____no_output_____
###Markdown
Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
###Code
def plot_lifelines(obs):
"""Plot a line for each observation.
obs: DataFrame
"""
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index',
title='Lifelines showing censored and uncensored observations')
plt.gca().invert_yaxis()
plot_lifelines(obs)
###Output
_____no_output_____
###Markdown
And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
###Code
obs['T'] = obs['end'] - obs['start']
###Output
_____no_output_____
###Markdown
What we have simulated is the data that would be available at the beginning of Week 8. Using Incomplete DataNow, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.First I'll split the data into two sets: `data1` contains residence times for dogs whose arrival and departure times are known; `data2` contains incomplete residence times for dogs who were not adopted during the observation interval.
###Code
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
data1
data2
###Output
_____no_output_____
###Markdown
For the complete data, we can use `update_weibull`, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
###Code
posterior1 = update_weibull(prior, data1)
###Output
_____no_output_____
###Markdown
For the incomplete data, we have to think a little harder.At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than `T`.And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds `T`.The following function is identical to `update_weibull` except that it uses `sf`, which computes the survival function, rather than `pdf`.
###Code
def update_weibull_incomplete(prior, data):
"""Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# evaluate the survival function
probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = probs.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
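###Markdown
As a sanity check (an aside, not in the original text), the survival function is the complement of the CDF, so for any threshold `t`, `sf(t)` should equal `1 - cdf(t)`:
###Code
# Sanity check (an aside): sf is the complement of cdf.
check_dist = weibull_dist(3, 0.8)
t_check = 2.5
check_dist.sf(t_check), 1 - check_dist.cdf(t_check)
###Output
_____no_output_____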
###Markdown
Here's the update with the incomplete data.
###Code
posterior2 = update_weibull_incomplete(posterior1, data2)
###Output
_____no_output_____
###Markdown
And here's what the joint posterior distribution looks like after both updates.
###Code
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
###Output
_____no_output_____
###Markdown
Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.We can see that more clearly by looking at the marginal distributions.
###Code
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
###Code
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
###Output
_____no_output_____
###Markdown
The distribution with some incomplete data is substantially wider.As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.Here's the posterior marginal distribution for $k$:
###Code
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.This example is based on data I generated; in the next section we'll do a similar analysis with real data. Light BulbsIn 2007 [researchers ran an experiment](https://www.researchgate.net/publication/225450325_Renewal_Rate_of_Filament_Lamps_Theory_and_Experiment) to characterize the distribution of lifetimes for light bulbs.Here is their description of the experiment:> An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m.>> The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed.
###Code
import os
datafile = 'lamps.csv'
if not os.path.exists(datafile):
!wget https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv
###Output
_____no_output_____
###Markdown
We can load the data into a `DataFrame` like this:
###Code
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Column `h` contains the times when bulbs failed in hours; Column `f` contains the number of bulbs that failed at each time.We can represent these values and frequencies using a `Pmf`, like this:
###Code
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
###Output
_____no_output_____
###Markdown
Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously. The average lifetime is about 1400 h.
###Code
pmf_bulb.mean()
###Output
_____no_output_____
###Markdown
Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.Again, I'll start with uniform priors for $\lambda$ and $k$:
###Code
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.They will run faster with fewer values, but the results will be less precise.As usual, we can use `make_joint` to make the prior joint distribution.
###Code
prior_bulb = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
###Markdown
Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.We can use `np.repeat` to transform the data.
###Code
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
###Output
_____no_output_____
###Markdown
Now we can use `update_weibull` to do the update.
###Code
posterior_bulb = update_weibull(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
Here's what the posterior joint distribution looks like:
###Code
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
To summarize this joint posterior distribution, we'll compute the posterior mean lifetime. Posterior MeansTo compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
###Code
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
###Output
_____no_output_____
###Markdown
Now for each pair of parameters we'll use `weibull_dist` to compute the mean.
###Code
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
###Output
_____no_output_____
###Markdown
The result is an array with the same dimensions as the joint distribution.Now we need to weight each mean with the corresponding probability from the joint posterior.
###Code
prod = means * posterior_bulb
###Output
_____no_output_____
###Markdown
Finally we compute the sum of the weighted means.
###Code
prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
Based on the posterior distribution, we think the mean lifetime is about 1413 hours.The following function encapsulates these steps:
###Code
def joint_weibull_mean(joint):
"""Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
###Output
_____no_output_____
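###Markdown
As a quick usage check (an aside, not in the original text), calling the function on the posterior should reproduce the value we just computed step by step.
###Code
# Usage check (an aside): should match the weighted sum above.
joint_weibull_mean(posterior_bulb)
###Output
_____no_output_____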
###Markdown
Incomplete InformationThe previous update was not quite right, because it assumed each light bulb died at the instant we observed it. According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
###Code
def update_weibull_between(prior, data, dt=12):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
    cdf1 = dist.cdf(data_mesh)       # P(failure by this check)
    cdf2 = dist.cdf(data_mesh - dt)  # P(failure by the previous check)
    likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.Here's how we run the update.
###Code
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
Visually this result is almost identical to what we got using the PDF.And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.To see whether it makes any difference at all, let's check the posterior means.
###Code
joint_weibull_mean(posterior_bulb)
joint_weibull_mean(posterior_bulb2)
###Output
_____no_output_____
###Markdown
When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval. Posterior Predictive DistributionSuppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead?If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution.For example, if we know that $\lambda=1550$ and $k=4.25$, we can use `weibull_dist` to compute the probability that a bulb dies before you return:
###Code
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
###Output
_____no_output_____
###Markdown
If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
###Code
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
###Output
_____no_output_____
###Markdown
But that's based on the assumption that we know $\lambda$ and $k$, and we don't.Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.We can use `make_mixture` to compute the posterior predictive distribution. It doesn't work with joint distributions, but we can convert the `DataFrame` that represents a joint distribution to a `Series`, like this:
###Code
posterior_series = posterior_bulb.stack()
posterior_series.head()
###Output
_____no_output_____
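###Markdown
A small check (an aside, not in the original text): stacking only reshapes the joint distribution, so the probabilities in the resulting `Series` should still sum to 1.
###Code
# The stacked Series should still be a normalized distribution (an aside).
posterior_series.sum()
###Output
_____no_output_____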
###Markdown
The result is a `Series` with a `MultiIndex` that contains two "levels": the first level contains the values of `k`; the second contains the values of `lam`.With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair.
###Code
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
###Output
_____no_output_____
###Markdown
Now we can use `make_mixture`, passing as parameters the posterior probabilities in `posterior_series` and the sequence of binomial distributions in `pmf_seq`.
###Code
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
###Output
_____no_output_____
###Markdown
Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
###Code
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
###Output
_____no_output_____
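###Markdown
To quantify the difference (an aside, not in the original text), we can compare the means and standard deviations of the two predictive distributions; the mixture should have a similar mean but a larger spread. This assumes the `Pmf` objects provide `mean` and `std` methods.
###Code
# Rough comparison of the two predictive distributions (an aside).
(dist_num_dead.mean(), dist_num_dead.std(),
 post_pred.mean(), post_pred.std())
###Output
_____no_output_____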
###Markdown
The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs. SummaryThis chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval.These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime.The methods in this chapter work with any distribution with two parameters.In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena.And in the next chapter we'll move on to models with three parameters! Exercises **Exercise:** Using data about the lifetimes of light bulbs, we computed the posterior distribution from the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours.Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs.Update the posterior distribution based on this data.How much does it change the posterior mean? Suggestions:1. Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters.2. For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100.3. Use those likelihoods to update the posterior distribution.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle. Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day. According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter gamma distribution. When we worked with the one-parameter gamma distribution in an earlier chapter, we used the Greek letter $\alpha$ for the parameter. For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or `theta` for the "scale parameter". The following function takes these parameters and returns a `gamma` object from SciPy.
###Code
import scipy.stats
def gamma_dist(k, theta):
"""Makes a gamma object.
k: shape parameter
theta: scale parameter
returns: gamma object
"""
return scipy.stats.gamma(k, scale=theta)
###Output
_____no_output_____
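###Markdown
As a quick check (an aside, not part of the exercise), with this parameterization the mean of the distribution should be $k\theta$:
###Code
# With shape k and scale theta, the gamma distribution has mean k * theta.
gamma_dist(2, 3).mean(), 2 * 3
###Output
_____no_output_____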
###Markdown
Now we need some data.The following cell downloads data I collected from the National Oceanic and Atmospheric Administration ([NOAA](http://www.ncdc.noaa.gov/cdo-web/search)) for Seattle, Washington in May 2020.
###Code
# Load the data file
datafile = '2203951.csv'
if not os.path.exists(datafile):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv
###Output
_____no_output_____
###Markdown
Now we can load it into a `DataFrame`:
###Code
weather = pd.read_csv('2203951.csv')
weather.head()
###Output
_____no_output_____
###Markdown
I'll make a Boolean Series to indicate which days it rained.
###Code
rained = weather['PRCP'] > 0
rained.sum()
###Output
_____no_output_____
###Markdown
And select the total rainfall on the days it rained.
###Code
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
###Output
_____no_output_____
###Markdown
Here's what the CDF of the data looks like.
###Code
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
###Output
_____no_output_____
###Markdown
The maximum is 1.14 inches of rain in one day. To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model. I suggest you proceed in the following steps:
1. Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0.
2. Use the observed rainfalls to update the distribution of parameters.
3. Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 14Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code from previous chapters
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
    beta: contact rate per day
    gamma: recovery rate per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
init, t0, t_end = system.init, system.t0, system.t_end
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
def sweep_beta(beta_array, gamma):
"""Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep
def sweep_parameters(beta_array, gamma_array):
"""Sweep a range of values for beta and gamma.
beta_array: array of infection rates
gamma_array: array of recovery rates
returns: SweepFrame with one row for each beta
and one column for each gamma
"""
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
###Output
_____no_output_____
###Markdown
Contact number Here's the `SweepFrame` from the previous chapter, with one row for each value of `beta` and one column for each value of `gamma`.
###Code
beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 , 1.1]
gamma_array = [0.2, 0.4, 0.6, 0.8]
frame = sweep_parameters(beta_array, gamma_array)
frame.head()
frame.shape
###Output
_____no_output_____
###Markdown
The following loop shows how we can loop through the columns and rows of the `SweepFrame`. With 11 rows and 4 columns, there are 44 elements.
###Code
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
print(beta, gamma, frac_infected)
###Output
_____no_output_____
###Markdown
Now we can wrap that loop in a function and plot the results. For each element of the `SweepFrame`, we have `beta`, `gamma`, and `frac_infected`, and we plot `beta/gamma` on the x-axis and `frac_infected` on the y-axis.
###Code
def plot_sweep_frame(frame):
"""Plot the values from a SweepFrame.
For each (beta, gamma), compute the contact number,
beta/gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
plot(beta/gamma, frac_infected, 'ro')
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
plot_sweep_frame(frame)
decorate(xlabel='Contact number (beta/gamma)',
ylabel='Fraction infected')
savefig('figs/chap14-fig01.pdf')
###Output
_____no_output_____
###Markdown
It turns out that the ratio `beta/gamma`, called the "contact number" is sufficient to predict the total number of infections; we don't have to know `beta` and `gamma` separately.We can see that in the previous plot: when we plot the fraction infected versus the contact number, the results fall close to a curve. Analysis In the book we figured out the relationship between $c$ and $s_{\infty}$ analytically. Now we can compute it for a range of values:
###Code
s_inf_array = linspace(0.0001, 0.9999, 101);
c_array = log(s_inf_array) / (s_inf_array - 1);
###Output
_____no_output_____
###Markdown
`total_infected` is the change in $s$ from the beginning to the end.
###Code
frac_infected = 1 - s_inf_array
frac_infected_series = Series(frac_infected, index=c_array);
###Output
_____no_output_____
###Markdown
Now we can plot the analytic results and compare them to the simulations.
###Code
plot_sweep_frame(frame)
plot(frac_infected_series, label='Analysis')
decorate(xlabel='Contact number (c)',
ylabel='Fraction infected')
savefig('figs/chap14-fig02.pdf')
###Output
_____no_output_____
###Markdown
The agreement is generally good, except for values of `c` less than 1. Exercises **Exercise:** If we didn't know about contact numbers, we might have explored other possibilities, like the difference between `beta` and `gamma`, rather than their ratio.Write a version of `plot_sweep_frame`, called `plot_sweep_frame_difference`, that plots the fraction infected versus the difference `beta-gamma`.What do the results look like, and what does that imply?
###Code
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point.What is your best estimate of `c`?Hint: if you print `frac_infected_series`, you can read off the answer.
###Code
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Survival Analysis Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted.To describe these "survival times", we'll use the Weibull distribution. The Weibull distributionThe [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution) is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range.SciPy provides several versions of the Weibull distribution; the one we'll use is called `weibull_min`.To make it easier to use, I'll wrap it in a function that takes two parameters: $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape.
###Code
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
###Output
_____no_output_____
###Markdown
As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
###Code
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
###Output
_____no_output_____
###Markdown
The result is an object that represents the distribution.Here's what the Weibull CDF looks like with those parameters.
###Code
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time', ylabel='CDF')
###Output
_____no_output_____
###Markdown
`actual_dist` provides `rvs`, which we can use to generate a random sample from this distribution.
###Code
np.random.seed(17)
data = actual_dist.rvs(10)
data
###Output
_____no_output_____
###Markdown
So, given the parameters of the distribution, we can generate a sample.Now let's see if we can go the other way: given the sample, we'll estimate the parameters.Here's a uniform prior distribution for $\lambda$:
###Code
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
###Output
_____no_output_____
###Markdown
And a uniform prior for $k$:
###Code
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
I'll use `make_joint` to make a joint prior distribution for the two parameters.
###Code
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.Now I'll use `meshgrid` to make a 3-D mesh with $\lambda$ on the first axis (`axis=0`), $k$ on the second axis (`axis=1`), and the data on the third axis (`axis=2`).
###Code
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
###Output
_____no_output_____
###Markdown
Now we can use `weibull_dist` to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
###Code
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
###Output
_____no_output_____
###Markdown
The likelihood of the data is the product of the probability densities along `axis=2`.
###Code
likelihood = densities.prod(axis=2)
likelihood.sum()
###Output
_____no_output_____
###Markdown
Now we can compute the posterior distribution in the usual way.
###Code
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.It takes a joint prior distribution and the data, and returns a joint posterior distribution.
###Code
def update_weibull(prior, data):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's how we use it.
###Code
posterior = update_weibull(prior, data)
###Output
_____no_output_____
###Markdown
And here's a contour plot of the joint posterior distribution.
###Code
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
###Output
_____no_output_____
###Markdown
It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8. Marginal distributionsTo be more precise about these ranges, we can extract the marginal distributions:
###Code
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
###Output
_____no_output_____
###Markdown
And compute the posterior means and 90% credible intervals.
###Code
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
###Output
_____no_output_____
###Markdown
The vertical gray line shows the actual value of $\lambda$. Here's the marginal posterior distribution for $k$.
###Code
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely. But for both parameters, the actual value falls in the credible interval.
###Code
print(lam, posterior_lam.credible_interval(0.9))
print(k, posterior_k.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
Incomplete Data In the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know). But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future. As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted. Some dogs might be snapped up immediately; others might have to wait longer. The people who operate the shelter might want to make inferences about the distribution of these residence times. Suppose you monitor arrivals and departures over 8 weeks, and 10 dogs arrive during that interval. I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
###Code
np.random.seed(19)
start = np.random.uniform(0, 8, size=10)
start
###Output
_____no_output_____
###Markdown
Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.We can generate a sample from that distribution like this:
###Code
np.random.seed(17)
duration = actual_dist.rvs(10)
duration
###Output
_____no_output_____
###Markdown
I'll use these values to construct a `DataFrame` that contains the arrival and departure times for each dog, called `start` and `end`.
###Code
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
###Output
_____no_output_____
###Markdown
For display purposes, I'll sort the rows of the `DataFrame` by arrival time.
###Code
obs = obs.sort_values(by='start', ignore_index=True)
obs
###Output
_____no_output_____
###Markdown
Notice that several of the lifelines extend past the observation window of 8 weeks.So if we observed this system at the beginning of Week 8, we would have incomplete information.Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.I'll simulate this incomplete data by identifying the lifelines that extend past the observation window:
###Code
censored = obs['end'] > 8
###Output
_____no_output_____
###Markdown
`censored` is a Boolean Series that is `True` for lifelines that extend past Week 8.Data that is not available is sometimes called "censored" in the sense that it is hidden from us.But in this case it is hidden because we don't know the future, not because someone is censoring it.For the lifelines that are censored, I'll modify `end` to indicate when they are last observed and `status` to indicate that the observation is incomplete.
###Code
obs.loc[censored, 'end'] = 8
obs.loc[censored, 'status'] = 0
###Output
_____no_output_____
###Markdown
Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
###Code
def plot_lifelines(obs):
"""Plot a line for each observation.
obs: DataFrame
"""
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index')
plt.gca().invert_yaxis()
plot_lifelines(obs)
###Output
_____no_output_____
###Markdown
And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
###Code
obs['T'] = obs['end'] - obs['start']
###Output
_____no_output_____
###Markdown
What we have simulated is the data that would be available at the beginning of Week 8. Using Incomplete DataNow, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.First I'll split the data into two sets: `data1` contains residence times for dogs whose arrival and departure times are known; `data2` contains incomplete residence times for dogs who were not adopted during the observation interval.
###Code
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
data1
data2
###Output
_____no_output_____
###Markdown
For the complete data, we can use `update_weibull`, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
###Code
posterior1 = update_weibull(prior, data1)
###Output
_____no_output_____
###Markdown
For the incomplete data, we have to think a little harder.At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than `T`.And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds `T`.The following function is identical to `update_weibull` except that it uses `sf`, which computes the survival function, rather than `pdf`.
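As a quick aside, here is a minimal check (assuming `actual_dist` from earlier is still in scope) that the survival function is just the complement of the CDF:
###Code
import numpy as np

# The survival function sf(t) should equal 1 - cdf(t) at every t.
ts = np.array([1, 2, 4, 8])
np.allclose(actual_dist.sf(ts), 1 - actual_dist.cdf(ts))
###Output
_____no_output_____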
###Code
def update_weibull_incomplete(prior, data):
"""Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's the update with the incomplete data.
###Code
posterior2 = update_weibull_incomplete(posterior1, data2)
###Output
_____no_output_____
###Markdown
And here's what the joint posterior distribution looks like after both updates.
###Code
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
###Output
_____no_output_____
###Markdown
Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.We can see that more clearly by looking at the marginal distributions.
###Code
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
###Code
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
###Output
_____no_output_____
###Markdown
The distribution with some incomplete data is substantially wider.As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.Here's the posterior marginal distribution for $k$:
###Code
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.This example is based on data I generated; in the next section we'll do a similar analysis with real data. Light BulbsIn 2007 [researchers ran an experiment](https://www.researchgate.net/publication/225450325_Renewal_Rate_of_Filament_Lamps_Theory_and_Experiment) to characterize the distribution of lifetimes for light bulbs.Here is their description of the experiment:> An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m.>> The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed.
###Code
import os
datafile = 'lamps.csv'
if not os.path.exists(datafile):
!wget https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv
###Output
_____no_output_____
###Markdown
We can load the data into a `DataFrame` like this:
###Code
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Column `h` contains the times when bulbs failed in hours; Column `f` contains the number of bulbs that failed at each time.We can represent these values and frequencies using a `Pmf`, like this:
###Code
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
###Output
_____no_output_____
###Markdown
Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously. The average lifetime is about 1400 h.
###Code
pmf_bulb.mean()
###Output
_____no_output_____
###Markdown
Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.Again, I'll start with uniform priors for $\lambda$ and $k$:
###Code
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.They will run faster with fewer values, but the results will be less precise.As usual, we can use `make_joint` to make the prior joint distribution.
###Code
prior_bulb = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
###Markdown
Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.We can use `np.repeat` to transform the data.
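As a small illustration of `np.repeat` (with made-up values, not the light bulb data):
###Code
import numpy as np

# Each value in the first argument is repeated the corresponding number of times,
# giving array([100, 200, 200, 300, 300, 300]).
np.repeat([100, 200, 300], [1, 2, 3])
###Output
_____no_output_____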
###Code
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
###Output
_____no_output_____
###Markdown
Now we can use `update_weibull` to do the update.
###Code
posterior_bulb = update_weibull(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
Here's what the posterior joint distribution looks like:
###Code
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
To summarize this joint posterior distribution, we'll compute the posterior mean lifetime. Posterior MeansTo compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
###Code
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
###Output
_____no_output_____
###Markdown
Now for each pair of parameters we'll use `weibull_dist` to compute the mean.
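As a brief aside, the Weibull mean has a closed form, $\lambda \, \Gamma(1 + 1/k)$; here is a minimal check against SciPy, using arbitrary example values:
###Code
from scipy.special import gamma as gamma_func

# Closed-form mean of a Weibull distribution: lam * Gamma(1 + 1/k).
lam_check, k_check = 1500, 4.0
lam_check * gamma_func(1 + 1/k_check), weibull_dist(lam_check, k_check).mean()
###Output
_____no_output_____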
###Code
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
###Output
_____no_output_____
###Markdown
The result is an array with the same dimensions as the joint distribution.Now we need to weight each mean with the corresponding probability from the joint posterior.
###Code
prod = means * posterior_bulb
###Output
_____no_output_____
###Markdown
Finally we compute the sum of the weighted means.
###Code
prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps:
###Code
def joint_weibull_mean(joint):
"""Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
Incomplete InformationThe previous update was not quite right, because it assumed each light bulb died at the instant we observed it. According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
###Code
def update_weibull_between(prior, data, dt=12):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
cdf1 = dist.cdf(data_mesh)
cdf2 = dist.cdf(data_mesh - dt)
likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.Here's how we run the update.
###Code
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
Visually this result is almost identical to what we got using the PDF.And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.To see whether it makes any difference at all, let's check the posterior means.
###Code
joint_weibull_mean(posterior_bulb)
joint_weibull_mean(posterior_bulb2)
###Output
_____no_output_____
###Markdown
When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval. Posterior Predictive DistributionSuppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead?If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution.For example, if we know that $\lambda=1550$ and $k=4.25$, we can use `weibull_dist` to compute the probability that a bulb dies before you return:
###Code
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
###Output
_____no_output_____
###Markdown
If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
###Code
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
###Output
_____no_output_____
###Markdown
But that's based on the assumption that we know $\lambda$ and $k$, and we don't.Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.We can use `make_mixture` to compute the posterior predictive distribution. It doesn't work with joint distributions, but we can convert the `DataFrame` that represents a joint distribution to a `Series`, like this:
###Code
posterior_series = posterior_bulb.stack()
posterior_series.head()
###Output
_____no_output_____
###Markdown
The result is a `Series` with a `MultiIndex` that contains two "levels": the first level contains the values of `k`; the second contains the values of `lam`.With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair.
###Code
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
###Output
_____no_output_____
###Markdown
Now we can use `make_mixture`, passing as parameters the posterior probabilities in `posterior_series` and the sequence of binomial distributions in `pmf_seq`.
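`make_mixture` is a helper from the book's `utils` module; conceptually, a mixture is the probability-weighted sum of the component distributions. A rough sketch of that idea (a hypothetical helper, not necessarily the actual `utils` implementation):
###Code
import pandas as pd
from empiricaldist import Pmf

def mixture_sketch(weights, pmf_seq):
    """Probability-weighted sum of component Pmfs (conceptual sketch only)."""
    # One column per component distribution, aligned on the outcomes.
    df = pd.DataFrame({i: pmf for i, pmf in enumerate(pmf_seq)}).fillna(0)
    # Weight each column by its posterior probability and sum across columns.
    return Pmf((df * weights.to_numpy()).sum(axis=1))
###Output
_____no_output_____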
###Code
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
###Output
_____no_output_____
###Markdown
Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
###Code
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
###Output
_____no_output_____
###Markdown
The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs. SummaryThis chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval.These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime.The methods in this chapter work with any distribution with two parameters.In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena.And in the next chapter we'll move on to models with three parameters! Exercises **Exercise:** Using data about the lifetimes of light bulbs, we computed the posterior distribution from the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours.Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs. Update the posterior distribution based on this data.How much does it change the posterior mean? Suggestions:1. Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters.2. For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100.3. Use those likelihoods to update the posterior distribution.
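A possible starting point for suggestions 1 and 2, as a sketch that assumes `posterior_bulb` and `weibull_dist` from above (the full solution belongs in the cells below):
###Code
import numpy as np
from scipy.stats import binom

# Step 1: probability that a bulb is dead at t=1000 h, for each (lam, k) pair.
lam_mesh, k_mesh = np.meshgrid(posterior_bulb.columns, posterior_bulb.index)
prob_dead_mesh = weibull_dist(lam_mesh, k_mesh).cdf(1000)

# Step 2: likelihood of finding 20 dead bulbs out of 100 for each pair.
likelihood_20 = binom.pmf(20, 100, prob_dead_mesh)
likelihood_20.shape
###Output
_____no_output_____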
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle.Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day.According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter gamma distribution.When we worked with the one-parameter gamma distribution in Chapter xxx, we used the Greek letter $\alpha$ for the parameter.For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or `theta` for the "scale parameter". The following function takes these parameters and returns a `gamma` object from SciPy.
###Code
import scipy.stats
def gamma_dist(k, theta):
"""Makes a gamma object.
k: shape parameter
theta: scale parameter
returns: gamma object
"""
return scipy.stats.gamma(k, scale=theta)
###Output
_____no_output_____
###Markdown
Now we need some data.The following cell downloads data I collected from the National Oceanic and Atmospheric Administration ([NOAA](http://www.ncdc.noaa.gov/cdo-web/search)) for Seattle, Washington in May 2020.
###Code
# Load the data file
datafile = '2203951.csv'
if not os.path.exists(datafile):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv
###Output
_____no_output_____
###Markdown
Now we can load it into a `DataFrame`:
###Code
weather = pd.read_csv('2203951.csv')
weather.head()
###Output
_____no_output_____
###Markdown
I'll make a Boolean Series to indicate which days it rained.
###Code
rained = weather['PRCP'] > 0
rained.sum()
###Output
_____no_output_____
###Markdown
And select the total rainfall on the days it rained.
###Code
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
###Output
_____no_output_____
###Markdown
Here's what the CDF of the data looks like.
###Code
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
###Output
_____no_output_____
###Markdown
The maximum is 1.14 inches of rain in one day.To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model. I suggest you proceed in the following steps:
1. Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0.
2. Use the observed rainfalls to update the distribution of parameters.
3. Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Survival Analysis Think Bayes, Second Edition. Copyright 2020 Allen B. Downey. License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.In the context of medicine it is literally about survival, but it can be applied to the time until any kind of event, or instead of time it can be about space or other dimensions.Survival analysis is challenging because the data we have are often incomplete. But as we'll see, Bayesian methods are particularly good at working with incomplete data.As examples, we'll consider two applications that are a little less serious than life and death: the time until light bulbs fail and the time until dogs in a shelter are adopted.To describe these "survival times", we'll use the Weibull distribution. The Weibull DistributionThe [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution) is often used in survival analysis because it is a good model for the distribution of lifetimes for manufactured products, at least over some parts of the range.SciPy provides several versions of the Weibull distribution; the one we'll use is called `weibull_min`.To make the interface consistent with our notation, I'll wrap it in a function that takes as parameters $\lambda$, which mostly affects the location or "central tendency" of the distribution, and $k$, which affects the shape.
###Code
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
###Output
_____no_output_____
###Markdown
As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
###Code
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
###Output
_____no_output_____
###Markdown
The result is an object that represents the distribution.Here's what the Weibull CDF looks like with those parameters.
###Code
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time',
ylabel='CDF',
title='CDF of a Weibull distribution')
###Output
_____no_output_____
###Markdown
`actual_dist` provides `rvs`, which we can use to generate a random sample from this distribution.
###Code
np.random.seed(17)
data = actual_dist.rvs(10)
data
###Output
_____no_output_____
###Markdown
So, given the parameters of the distribution, we can generate a sample.Now let's see if we can go the other way: given the sample, we'll estimate the parameters.Here's a uniform prior distribution for $\lambda$:
###Code
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
###Output
_____no_output_____
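###Markdown
`make_uniform` is a helper from the book's `utils` module; conceptually it builds a `Pmf` that assigns equal probability to each quantity. A rough sketch of that idea (hypothetical, not necessarily the actual `utils` implementation):
###Code
from empiricaldist import Pmf

def make_uniform_sketch(qs, name=None):
    """Pmf with equal probability for every quantity in qs (conceptual sketch)."""
    pmf = Pmf(1.0, qs)   # same weight for every quantity
    pmf.normalize()      # scale so the probabilities sum to 1
    if name:
        pmf.index.name = name
    return pmf
###Output
_____no_output_____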
###Markdown
And a uniform prior for $k$:
###Code
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
I'll use `make_joint` to make a joint prior distribution for the two parameters.
###Code
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
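###Markdown
`make_joint` is also a `utils` helper; conceptually it takes the outer product of two marginal `Pmf` objects, with the first argument across the columns and the second down the rows. A rough sketch of that idea (hypothetical, not necessarily the actual implementation):
###Code
import numpy as np
import pandas as pd

def make_joint_sketch(pmf1, pmf2):
    """Outer product of two Pmfs as a DataFrame (conceptual sketch only)."""
    X, Y = np.meshgrid(pmf1, pmf2)
    return pd.DataFrame(X * Y, columns=pmf1.index, index=pmf2.index)
###Output
_____no_output_____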
###Markdown
The result is a `DataFrame` that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.Now I'll use `meshgrid` to make a 3-D mesh with $\lambda$ on the first axis (`axis=0`), $k$ on the second axis (`axis=1`), and the data on the third axis (`axis=2`).
###Code
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
###Output
_____no_output_____
###Markdown
Now we can use `weibull_dist` to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
###Code
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
###Output
_____no_output_____
###Markdown
The likelihood of the data is the product of the probability densities along `axis=2`.
###Code
likelihood = densities.prod(axis=2)
likelihood.sum()
###Output
_____no_output_____
###Markdown
Now we can compute the posterior distribution in the usual way.
###Code
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
###Output
_____no_output_____
###Markdown
The following function encapsulates these steps.It takes a joint prior distribution and the data, and returns a joint posterior distribution.
###Code
def update_weibull(prior, data):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's how we use it.
###Code
posterior = update_weibull(prior, data)
###Output
_____no_output_____
###Markdown
And here's a contour plot of the joint posterior distribution.
###Code
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
###Output
_____no_output_____
###Markdown
It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.And the range for $k$ is about 0.5 to 1.5, which contains the actual value, 0.8. Marginal DistributionsTo be more precise about these ranges, we can extract the marginal distributions:
###Code
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
###Output
_____no_output_____
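###Markdown
`marginal` is another `utils` helper; conceptually it sums the joint distribution along one axis and wraps the result in a `Pmf`. A rough sketch (hypothetical, not necessarily the actual implementation):
###Code
from empiricaldist import Pmf

def marginal_sketch(joint, axis):
    """Marginal of a joint DataFrame (conceptual sketch only).

    axis=0 sums down the rows, giving the distribution of the column labels;
    axis=1 sums across the columns, giving the distribution of the row labels.
    """
    return Pmf(joint.sum(axis=axis))
###Output
_____no_output_____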
###Markdown
And compute the posterior means and 90% credible intervals.
###Code
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
###Output
_____no_output_____
###Markdown
The vertical gray line shows the actual value of $\lambda$.Here's the marginal posterior distribution for $k$.
###Code
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely.But for both parameters, the actual value falls in the credible interval.
###Code
print(lam, posterior_lam.credible_interval(0.9))
print(k, posterior_k.credible_interval(0.9))
###Output
_____no_output_____
###Markdown
Incomplete DataIn the previous example we were given 10 random values from a Weibull distribution, and we used them to estimate the parameters (which we pretended we didn't know).But in many real-world scenarios, we don't have complete data; in particular, when we observe a system at a point in time, we generally have information about the past, but not the future.As an example, suppose you work at a dog shelter and you are interested in the time between the arrival of a new dog and when it is adopted.Some dogs might be snapped up immediately; others might have to wait longer.The people who operate the shelter might want to make inferences about the distribution of these residence times.Suppose you monitor arrivals and departures over 8 weeks, and 10 dogs arrive during that interval.I'll assume that their arrival times are distributed uniformly, so I'll generate random values like this.
###Code
np.random.seed(19)
start = np.random.uniform(0, 8, size=10)
start
###Output
_____no_output_____
###Markdown
Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.We can generate a sample from that distribution like this:
###Code
np.random.seed(17)
duration = actual_dist.rvs(10)
duration
###Output
_____no_output_____
###Markdown
I'll use these values to construct a `DataFrame` that contains the arrival and departure times for each dog, called `start` and `end`.
###Code
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
###Output
_____no_output_____
###Markdown
For display purposes, I'll sort the rows of the `DataFrame` by arrival time.
###Code
obs = obs.sort_values(by='start', ignore_index=True)
obs
###Output
_____no_output_____
###Markdown
Notice that several of the lifelines extend past the observation window of 8 weeks.So if we observed this system at the beginning of Week 8, we would have incomplete information.Specifically, we would not know the future adoption times for Dogs 6, 7, and 8.I'll simulate this incomplete data by identifying the lifelines that extend past the observation window:
###Code
censored = obs['end'] > 8
###Output
_____no_output_____
###Markdown
`censored` is a Boolean Series that is `True` for lifelines that extend past Week 8.Data that is not available is sometimes called "censored" in the sense that it is hidden from us.But in this case it is hidden because we don't know the future, not because someone is censoring it.For the lifelines that are censored, I'll modify `end` to indicate when they are last observed and `status` to indicate that the observation is incomplete.
###Code
obs.loc[censored, 'end'] = 8
obs.loc[censored, 'status'] = 0
###Output
_____no_output_____
###Markdown
Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
###Code
def plot_lifelines(obs):
"""Plot a line for each observation.
obs: DataFrame
"""
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index',
title='Lifelines showing censored and uncensored observations')
plt.gca().invert_yaxis()
plot_lifelines(obs)
###Output
_____no_output_____
###Markdown
And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
###Code
obs['T'] = obs['end'] - obs['start']
###Output
_____no_output_____
###Markdown
What we have simulated is the data that would be available at the beginning of Week 8. Using Incomplete DataNow, let's see how we can use both kinds of data, complete and incomplete, to infer the parameters of the distribution of residence times.First I'll split the data into two sets: `data1` contains residence times for dogs whose arrival and departure times are known; `data2` contains incomplete residence times for dogs who were not adopted during the observation interval.
###Code
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
data1
data2
###Output
_____no_output_____
###Markdown
For the complete data, we can use `update_weibull`, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
###Code
posterior1 = update_weibull(prior, data1)
###Output
_____no_output_____
###Markdown
For the incomplete data, we have to think a little harder.At the end of the observation interval, we don't know what the residence time will be, but we can put a lower bound on it; that is, we can say that the residence time will be greater than `T`.And that means that we can compute the likelihood of the data using the survival function, which is the probability that a value from the distribution exceeds `T`.The following function is identical to `update_weibull` except that it uses `sf`, which computes the survival function, rather than `pdf`.
###Code
def update_weibull_incomplete(prior, data):
"""Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# evaluate the survival function
probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = probs.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
Here's the update with the incomplete data.
###Code
posterior2 = update_weibull_incomplete(posterior1, data2)
###Output
_____no_output_____
###Markdown
And here's what the joint posterior distribution looks like after both updates.
###Code
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
###Output
_____no_output_____
###Markdown
Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.We can see that more clearly by looking at the marginal distributions.
###Code
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
###Output
_____no_output_____
###Markdown
Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
###Code
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
###Output
_____no_output_____
###Markdown
The distribution with some incomplete data is substantially wider.As an aside, notice that the posterior distribution does not come all the way to 0 on the right side.That suggests that the range of the prior distribution is not wide enough to cover the most likely values for this parameter.If I were concerned about making this distribution more accurate, I would go back and run the update again with a wider prior.Here's the posterior marginal distribution for $k$:
###Code
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
###Output
_____no_output_____
###Markdown
In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.In summary, we have seen how to combine complete and incomplete data to estimate the parameters of a Weibull distribution, which is useful in many real-world scenarios where some of the data are censored.In general, the posterior distributions are wider when we have incomplete data, because less information leads to more uncertainty.This example is based on data I generated; in the next section we'll do a similar analysis with real data. Light BulbsIn 2007 [researchers ran an experiment](https://www.researchgate.net/publication/225450325_Renewal_Rate_of_Filament_Lamps_Theory_and_Experiment) to characterize the distribution of lifetimes for light bulbs.Here is their description of the experiment:> An assembly of 50 new Philips (India) lamps with the rating 40 W, 220 V (AC) was taken and installed in the horizontal orientation and uniformly distributed over a lab area 11 m x 7 m.>> The assembly was monitored at regular intervals of 12 h to look for failures. The instants of recorded failures were [recorded] and a total of 32 data points were obtained such that even the last bulb failed.
###Code
import os
datafile = 'lamps.csv'
if not os.path.exists(datafile):
!wget https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv
###Output
_____no_output_____
###Markdown
We can load the data into a `DataFrame` like this:
###Code
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Column `h` contains the times when bulbs failed in hours; Column `f` contains the number of bulbs that failed at each time.We can represent these values and frequencies using a `Pmf`, like this:
###Code
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
###Output
_____no_output_____
###Markdown
Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously. The average lifetime is about 1400 h.
###Code
pmf_bulb.mean()
###Output
_____no_output_____
###Markdown
Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.Again, I'll start with uniform priors for $\lambda$ and $k$:
###Code
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
###Output
_____no_output_____
###Markdown
For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.They will run faster with fewer values, but the results will be less precise.As usual, we can use `make_joint` to make the prior joint distribution.
###Code
prior_bulb = make_joint(prior_lam, prior_k)
###Output
_____no_output_____
###Markdown
Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.We can use `np.repeat` to transform the data.
###Code
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
###Output
_____no_output_____
###Markdown
Now we can use `update_weibull` to do the update.
###Code
posterior_bulb = update_weibull(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
Here's what the posterior joint distribution looks like:
###Code
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
To summarize this joint posterior distribution, we'll compute the posterior mean lifetime. Posterior MeansTo compute the posterior mean of a joint distribution, we'll make a mesh that contains the values of $\lambda$ and $k$.
###Code
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
###Output
_____no_output_____
###Markdown
Now for each pair of parameters we'll use `weibull_dist` to compute the mean.
###Code
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
###Output
_____no_output_____
###Markdown
The result is an array with the same dimensions as the joint distribution.Now we need to weight each mean with the corresponding probability from the joint posterior.
###Code
prod = means * posterior_bulb
###Output
_____no_output_____
###Markdown
Finally we compute the sum of the weighted means.
###Code
prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
Based on the posterior distribution, we think the mean lifetime is about 1413 hours.The following function encapsulates these steps:
###Code
def joint_weibull_mean(joint):
"""Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
###Output
_____no_output_____
###Markdown
Incomplete InformationThe previous update was not quite right, because it assumed each light bulb died at the instant we observed it. According to the report, the researchers only checked the bulbs every 12 hours. So if they see that a bulb has died, they know only that it died during the 12 hours since the last check.It is more strictly correct to use the following update function, which uses the CDF of the Weibull distribution to compute the probability that a bulb dies during a given 12 hour interval.
###Code
def update_weibull_between(prior, data, dt=12):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
cdf1 = dist.cdf(data_mesh)
cdf2 = dist.cdf(data_mesh - dt)
likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
###Output
_____no_output_____
###Markdown
The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.Here's how we run the update.
###Code
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
###Output
_____no_output_____
###Markdown
And here are the results.
###Code
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
###Output
_____no_output_____
###Markdown
Visually this result is almost identical to what we got using the PDF.And that's good news, because it suggests that using the PDF can be a good approximation even if it's not strictly correct.To see whether it makes any difference at all, let's check the posterior means.
###Code
joint_weibull_mean(posterior_bulb)
joint_weibull_mean(posterior_bulb2)
###Output
_____no_output_____
###Markdown
When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.And that makes sense: if we assume that a bulb is equally likely to expire at any point in the interval, the average would be the midpoint of the interval. Posterior Predictive DistributionSuppose you install 100 light bulbs of the kind in the previous section, and you come back to check on them after 1000 hours. Based on the posterior distribution we just computed, what is the distribution of the number of bulbs you find dead?If we knew the parameters of the Weibull distribution for sure, the answer would be a binomial distribution.For example, if we know that $\lambda=1550$ and $k=4.25$, we can use `weibull_dist` to compute the probability that a bulb dies before you return:
###Code
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
###Output
_____no_output_____
###Markdown
If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
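`make_binomial` comes from `utils`; conceptually it wraps SciPy's binomial PMF in a `Pmf` over the outcomes 0 to n. A rough sketch (hypothetical, not necessarily the actual implementation):
###Code
import numpy as np
from scipy.stats import binom
from empiricaldist import Pmf

def make_binomial_sketch(n, p):
    """Binomial(n, p) as a Pmf over 0..n (conceptual sketch only)."""
    ks = np.arange(n + 1)
    return Pmf(binom.pmf(ks, n, p), ks)
###Output
_____no_output_____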
###Code
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
###Output
_____no_output_____
###Markdown
But that's based on the assumption that we know $\lambda$ and $k$, and we don't.Instead, we have a posterior distribution that contains possible values of these parameters and their probabilities.So the posterior predictive distribution is not a single binomial; instead it is a mixture of binomials, weighted with the posterior probabilities.We can use `make_mixture` to compute the posterior predictive distribution. It doesn't work with joint distributions, but we can convert the `DataFrame` that represents a joint distribution to a `Series`, like this:
###Code
posterior_series = posterior_bulb.stack()
posterior_series.head()
###Output
_____no_output_____
###Markdown
The result is a `Series` with a `MultiIndex` that contains two "levels": the first level contains the values of `k`; the second contains the values of `lam`.With the posterior in this form, we can iterate through the possible parameters and compute a predictive distribution for each pair.
###Code
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
###Output
_____no_output_____
###Markdown
Now we can use `make_mixture`, passing as parameters the posterior probabilities in `posterior_series` and the sequence of binomial distributions in `pmf_seq`.
###Code
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
###Output
_____no_output_____
###Markdown
Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
###Code
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
###Output
_____no_output_____
###Markdown
The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs. SummaryThis chapter introduces survival analysis, which is used to answer questions about the time until an event, and the Weibull distribution, which is a good model for "lifetimes" (broadly interpreted) in a number of domains.We used joint distributions to represent prior probabilities for the parameters of the Weibull distribution, and we updated them three ways: knowing the exact duration of a lifetime, knowing a lower bound, and knowing that a lifetime fell in a given interval.These examples demonstrate a feature of Bayesian methods: they can be adapted to handle incomplete, or "censored", data with only small changes. As an exercise, you'll have a chance to work with one more type of censored data, when we are given an upper bound on a lifetime.The methods in this chapter work with any distribution with two parameters.In the exercises, you'll have a chance to estimate the parameters of a two-parameter gamma distribution, which is used to describe a variety of natural phenomena.And in the next chapter we'll move on to models with three parameters! Exercises **Exercise:** Using data about the lifetimes of light bulbs, we computed the posterior distribution from the parameters of a Weibull distribution, $\lambda$ and $k$, and the posterior predictive distribution for the number of dead bulbs, out of 100, after 1000 hours.Now suppose you do the experiment: You install 100 light bulbs, come back after 1000 hours, and find 20 dead light bulbs. Update the posterior distribution based on this data.How much does it change the posterior mean? Suggestions:1. Use a mesh grid to compute the probability of finding a bulb dead after 1000 hours for each pair of parameters.2. For each of those probabilities, compute the likelihood of finding 20 dead bulbs out of 100.3. Use those likelihoods to update the posterior distribution.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** In this exercise, we'll use one month of data to estimate the parameters of a distribution that describes daily rainfall in Seattle.Then we'll compute the posterior predictive distribution for daily rainfall and use it to estimate the probability of a rare event, like more than 1.5 inches of rain in a day.According to hydrologists, the distribution of total daily rainfall (for days with rain) is well modeled by a two-parameter gamma distribution.When we worked with the one-parameter gamma distribution in a previous chapter, we used the Greek letter $\alpha$ for the parameter.For the two-parameter gamma distribution, we will use $k$ for the "shape parameter", which determines the shape of the distribution, and the Greek letter $\theta$ or `theta` for the "scale parameter". The following function takes these parameters and returns a `gamma` object from SciPy.
###Code
import scipy.stats
def gamma_dist(k, theta):
"""Makes a gamma object.
k: shape parameter
theta: scale parameter
returns: gamma object
"""
return scipy.stats.gamma(k, scale=theta)
###Output
_____no_output_____
###Markdown
Now we need some data.The following cell downloads data I collected from the National Oceanic and Atmospheric Administration ([NOAA](http://www.ncdc.noaa.gov/cdo-web/search)) for Seattle, Washington in May 2020.
###Code
# Load the data file
datafile = '2203951.csv'
if not os.path.exists(datafile):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv
###Output
_____no_output_____
###Markdown
Now we can load it into a `DataFrame`:
###Code
weather = pd.read_csv('2203951.csv')
weather.head()
###Output
_____no_output_____
###Markdown
I'll make a Boolean Series to indicate which days it rained.
###Code
rained = weather['PRCP'] > 0
rained.sum()
###Output
_____no_output_____
###Markdown
And select the total rainfall on the days it rained.
###Code
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
###Output
_____no_output_____
###Markdown
Here's what the CDF of the data looks like.
###Code
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
###Output
_____no_output_____
###Markdown
The maximum is 1.14 inches of rain in one day.To estimate the probability of more than 1.5 inches, we need to extrapolate from the data we have, so our estimate will depend on whether the gamma distribution is really a good model. I suggest you proceed in the following steps:
1. Construct a prior distribution for the parameters of the gamma distribution. Note that $k$ and $\theta$ must be greater than 0.
2. Use the observed rainfalls to update the distribution of parameters.
3. Compute the posterior predictive distribution of rainfall, and use it to estimate the probability of getting more than 1.5 inches of rain in one day.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python, Chapter 14. Copyright 2017 Allen Downey. License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Code from previous chapters
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def update_func(state, t, system):
"""Update the SIR model.
state: State (s, i, r)
t: time
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
init, t0, t_end = system.init, system.t0, system.t_end
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
def calc_total_infected(results):
"""Fraction of population infected during the simulation.
results: DataFrame with columns S, I, R
returns: fraction of population
"""
return get_first_value(results.S) - get_last_value(results.S)
def sweep_beta(beta_array, gamma):
"""Sweep a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
results = run_simulation(system, update_func)
sweep[system.beta] = calc_total_infected(results)
return sweep
def sweep_parameters(beta_array, gamma_array):
"""Sweep a range of values for beta and gamma.
beta_array: array of infection rates
gamma_array: array of recovery rates
returns: SweepFrame with one row for each beta
and one column for each gamma
"""
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
###Output
_____no_output_____
###Markdown
Contact number Here's the `SweepFrame` from the previous chapter, with one row for each value of `beta` and one column for each value of `gamma`.
###Code
beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1]
gamma_array = [0.2, 0.4, 0.6, 0.8]
frame = sweep_parameters(beta_array, gamma_array)
frame.head()
frame.shape
###Output
_____no_output_____
###Markdown
The following loop shows how we can loop through the columns and rows of the `SweepFrame`. With 11 rows and 4 columns, there are 44 elements.
###Code
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
print(beta, gamma, frac_infected)
###Output
0.1 0.2 0.010756340768063644
0.2 0.2 0.11898421353185373
0.3 0.2 0.5890954199973404
0.4 0.2 0.8013385277185551
0.5 0.2 0.8965769637207062
0.6 0.2 0.942929291399791
0.7 0.2 0.966299311298026
0.8 0.2 0.9781518959989762
0.9 0.2 0.9840568957948106
1.0 0.2 0.9868823507202488
1.1 0.2 0.988148177093735
0.1 0.4 0.0036416926514175607
0.2 0.4 0.010763463373360094
0.3 0.4 0.030184952469116566
0.4 0.4 0.131562924303259
0.5 0.4 0.3964094037932606
0.6 0.4 0.5979016626615987
0.7 0.4 0.7284704154876106
0.8 0.4 0.8144604459153759
0.9 0.4 0.8722697237137128
1.0 0.4 0.9116692168795855
1.1 0.4 0.9386802509510287
0.1 0.6 0.002190722188881611
0.2 0.6 0.005446688837466351
0.3 0.6 0.010771139974975585
0.4 0.6 0.020916599304195316
0.5 0.6 0.04614035896610047
0.6 0.6 0.13288938996079536
0.7 0.6 0.3118432512847451
0.8 0.6 0.47832565854255393
0.9 0.6 0.605687582114665
1.0 0.6 0.7014254793376209
1.1 0.6 0.7738176405451065
0.1 0.8 0.0015665254038139675
0.2 0.8 0.003643953969662994
0.3 0.8 0.006526163529085194
0.4 0.8 0.010779807499500693
0.5 0.8 0.017639902596349066
0.6 0.8 0.030291868201986594
0.7 0.8 0.05882382948158804
0.8 0.8 0.13358889291095588
0.9 0.8 0.2668895539427739
1.0 0.8 0.40375121210421994
1.1 0.8 0.519583469821867
###Markdown
Now we can wrap that loop in a function and plot the results. For each element of the `SweepFrame`, we have `beta`, `gamma`, and `frac_infected`, and we plot `beta/gamma` on the x-axis and `frac_infected` on the y-axis.
###Code
def plot_sweep_frame(frame):
"""Plot the values from a SweepFrame.
For each (beta, gamma), compute the contact number,
beta/gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
plot(beta/gamma, frac_infected, 'ro')
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
plot_sweep_frame(frame)
decorate(xlabel='Contact number (beta/gamma)',
ylabel='Fraction infected')
savefig('figs/chap14-fig01.pdf')
###Output
Saving figure to file figs/chap14-fig01.pdf
###Markdown
It turns out that the ratio `beta/gamma`, called the "contact number" is sufficient to predict the total number of infections; we don't have to know `beta` and `gamma` separately.We can see that in the previous plot: when we plot the fraction infected versus the contact number, the results fall close to a curve. Analysis In the book we figured out the relationship between $c$ and $s_{\infty}$ analytically. Now we can compute it for a range of values:
###Code
s_inf_array = linspace(0.0001, 0.9999, 101);
c_array = log(s_inf_array) / (s_inf_array - 1);
###Output
_____no_output_____
###Markdown
`total_infected` is the change in $s$ from the beginning to the end.
###Code
frac_infected = 1 - s_inf_array
frac_infected_series = Series(frac_infected, index=c_array);
###Output
_____no_output_____
###Markdown
Now we can plot the analytic results and compare them to the simulations.
###Code
plot_sweep_frame(frame)
plot(frac_infected_series, label='Analysis')
decorate(xlabel='Contact number (c)',
ylabel='Fraction infected')
savefig('figs/chap14-fig02.pdf')
###Output
Saving figure to file figs/chap14-fig02.pdf
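###Markdown
As a cross-check of the analytic relationship, we can solve $\log(s_{\infty}) / (s_{\infty} - 1) = c$ numerically for a single contact number; here is a minimal sketch using SciPy's root finder (valid for $c$ greater than 1):
###Code
import numpy as np
from scipy.optimize import brentq

def s_inf_for_c(c):
    """Solve log(s) / (s - 1) = c for s in (0, 1); assumes c > 1."""
    return brentq(lambda s: np.log(s) / (s - 1) - c, 1e-6, 1 - 1e-6)

# With a contact number of 2, roughly 80% of the population gets infected.
1 - s_inf_for_c(2)
###Output
_____no_output_____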
###Markdown
The agreement is generally good, except for values of `c` less than 1. Exercises **Exercise:** If we didn't know about contact numbers, we might have explored other possibilities, like the difference between `beta` and `gamma`, rather than their ratio.Write a version of `plot_sweep_frame`, called `plot_sweep_frame_difference`, that plots the fraction infected versus the difference `beta-gamma`.What do the results look like, and what does that imply?
###Code
def plot_sweep_frame_difference(frame):
"""Plot the values from a SweepFrame.
For each (beta, gamma), compute the difference,
beta - gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
column = frame[gamma]
for beta in column.index:
frac_infected = column[beta]
plot(beta-gamma, frac_infected, 'ro')
plot_sweep_frame_difference(frame)
decorate(xlabel='Beta - Gamma',
ylabel='Fraction infected')
###Output
_____no_output_____
###Markdown
This implies that as long as the recovery rate is higher than the contact rate (`beta - gamma < 0`), the fraction infected stays close to zero. **Exercise:** Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point.What is your best estimate of `c`?Hint: if you print `frac_infected_series`, you can read off the answer.
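One programmatic way to estimate `c` is to interpolate the inverse mapping from fraction infected back to contact number; here is a minimal sketch, assuming `frac_infected_series` from above (the cells below read the same answer off a plot instead):
###Code
import numpy as np
from scipy.interpolate import interp1d

# Interpolate contact number as a function of fraction infected.
inverse = interp1d(frac_infected_series.to_numpy(), frac_infected_series.index)
float(inverse(0.26))
###Output
_____no_output_____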
###Code
plt.figure(figsize=(30,5))
label = 'Frac Infected = .26'
plt.plot(frac_infected_series, label=label)
plt.plot(frac_infected_series.index, [.26]*len(frac_infected_series), label='26% Infections')
plt.stem(frac_infected_series.index, frac_infected_series)
decorate(xlabel='Contact rate (beta)',
ylabel='Fraction infected',
loc='upper left');
plt.legend(bbox_to_anchor=(1.02, 1.02));
###Output
/Users/jitsen/anaconda/envs/ml-env/lib/python3.7/site-packages/ipykernel_launcher.py:5: UserWarning: In Matplotlib 3.3 individual lines on a stem plot will be added as a LineCollection instead of individual lines. This significantly improves the performance of a stem plot. To remove this warning and switch to the new behaviour, set the "use_line_collection" keyword argument to True.
"""
###Markdown
This is hard to read, so let's narrow down our search
###Code
plt.figure(figsize=(30,10))
label = 'Frac Infected = .26'
plt.plot(frac_infected_series.tail(40), label=label)
plt.plot(frac_infected_series.tail(40).index, [.26]*len(frac_infected_series.tail(40)), label='26% Infections')
plt.stem(frac_infected_series.tail(40).index, frac_infected_series.tail(40))
decorate(xlabel='Contact number (c)',
ylabel='Fraction infected',
loc='upper left');
plt.legend(bbox_to_anchor=(1.02, 1.02));
###Output
/Users/jitsen/anaconda/envs/ml-env/lib/python3.7/site-packages/ipykernel_launcher.py:5: UserWarning: In Matplotlib 3.3 individual lines on a stem plot will be added as a LineCollection instead of individual lines. This significantly improves the performance of a stem plot. To remove this warning and switch to the new behaviour, set the "use_line_collection" keyword argument to True.
"""
|
Code/Financial Terms Extractor/Web Extractor - Financial Terms.ipynb | ###Markdown
Exploration: Domain Extractor for Financial-Related Terms Author: Runting Shao
###Code
#Import python package
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
import requests
from tqdm import tqdm
import ahocorasick
train_data = pd.read_csv('TCCSocialMediaData_combined_clean_emotes.csv')
#test_data = pd.read_csv('Data/Cleaned_Data/TCCSocialMediaData_test_clean.csv')
unique_domain_train = pd.unique(train_data.domain)
#unique_domain_test = pd.unique(test_data.domain)
#l1 = [x for x in unique_domain_train]
#l2 = [x for x in unique_domain_test]
#unique_domain = np.unique(l1 + l2).tolist()
print(len(unique_domain_train))
domains = unique_domain_train
search_terms = {
1: ["donation","donate","patron"],# Donate, Be a Patron etc.
2: ["store","shop"], # Shop, Shoping, Shop with us etc.
3: ["subscribe","subscription","membership"],
4: ["advertis"], # Advertise, Advertising, Advertisement
5: ["sale","deal","discount","% off","low price","coupon"],
6: ["free","no cost"],
7: ["money","cash","dollar"],
8: ["pay","buy","earn"],
9: ["newsletter"]
}
#This function uses the Aho-Corasick Algorithm to count the existence of a category of terms in a text string
def ahocorasickCount(terms, text):
count = 0
# Make a searcher
searcher = ahocorasick.Automaton()
for i, term in enumerate(terms):
searcher.add_word(term, i)
searcher.make_automaton()
# Add up all counts for a category of terms
for _ in searcher.iter(text):
count = count + 1
return count
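# Quick illustrative check of ahocorasickCount (the sample string below is an assumption
# made up for demonstration; it is not part of the original dataset):
sample_text = "big sale today: 50% off everything, plus a coupon for new customers"
print(ahocorasickCount(search_terms[5], sample_text))  # counts matches of the "sale/deal/% off/coupon" category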
class finicialTermsExtractor():
def __init__(self, userAgent):
self.userAgent = userAgent
    '''Read through all the domains and get all available htmls
       Parse the htmls with BeautifulSoup
       Input: domains - list of domains
       Output: 1. accessable_domain - a dictionary with domain (key) and its parsed html (value)
               2. errors - a list of domains that could not be opened'''
def htmlCrawler(self, domains):
errors = []
accessable_domain = {}
        headers = {'User-Agent': self.userAgent}
print("Html Crawl Progress - Getting html for all domains:")
for d in tqdm(domains):
fulllink = "http://www." + d
try:
                req = requests.get(fulllink, headers=headers, timeout=5)
soup = BeautifulSoup(req.text, "html.parser")
accessable_domain.update({d:soup})
except Exception as e:
errors.append(d)
return accessable_domain, errors
    '''Check how often the terms in "search_terms" appear in the sublink text of the domain
       Input: 1. accessable_domains - a dictionary with domain (key) and its parsed html (value)
              2. search_terms - dictionary of term categories to be checked
       Output: result - a dictionary with a category (key) and, per domain, the count of its terms found in the link text (value)'''
def checkSublink(self, accessable_domains, search_terms):
result = search_terms.copy()
#Initialize a result dict
for k in result.keys():
result.update({k: []})
for domain in tqdm(accessable_domains):
domain_name = domain
if '.'in domain:
domain_name = domain[:domain.index('.')]
soup = accessable_domains.get(domain)
all_link_txt = soup.get_text()
#Get all link text and search for terms
for link in soup.find_all('a'):
href = link.get("href")
                # Keep hrefs that exist, belong to the domain (relative, anchor or same-domain links) and have link text
if(href and (href[0] == '/' or href[0] =='#' or domain_name in href) and link.string):
all_link_txt = all_link_txt + link.string.lower()
for category in search_terms:
                terms = search_terms.get(category) # list of financial-related terms
count = ahocorasickCount(terms, all_link_txt)# count terms existed in all_link_txt
result.get(category).append(count)
return result
    '''Check how often the terms in "search_terms" appear in the text from third-party (ad) links
       Input: 1. accessable_domains - a dictionary with domain (key) and its parsed html (value)
              2. search_terms - dictionary of term categories to be checked
       Output: result - a dictionary with a category (key) and, per domain, the count of its terms found in the ad text (value)'''
def checkAdContent(self, accessable_domains, search_terms):
#Initialize result
result = search_terms.copy()
for k in result.keys():
result.update({k: []})
for domain in tqdm(accessable_domains):
#Get domain name - domain: oann.com, domain name: oann
domain_name = domain
if '.'in domain:
domain_name = domain[:domain.index('.')]
soup = accessable_domains.get(domain)
ad_txt = ""
for link in soup.find_all('a'):
href = link.get("href")
# Define if a href exists and not belong to the domain
if(href and href[0] != '/' and href[0] !='#' and domain_name not in href):
# if the text exists
if(link.string):
ad_txt = ad_txt + link.string.lower()
# if img-alt exists
for img in link.find_all('img', alt= True):
ad_txt = ad_txt + img['alt'].lower()
for category in search_terms:
                terms = search_terms.get(category) # list of financial-related terms
count = ahocorasickCount(terms, ad_txt)# count terms existed in ad_txt
result.get(category).append(count)
return result
#Apply user agent
userAgent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53'
crawler = finicialTermsExtractor(userAgent)
#Getting parsed htmls
accessable_domain, errors = crawler.htmlCrawler(domains)
print(accessable_domain.keys())
result_sublink = crawler.checkSublink(accessable_domain,search_terms)
result_adcontent = crawler.checkAdContent(accessable_domain,search_terms)
df_sublink = pd.DataFrame.from_dict(result_sublink)
df_sublink.insert(loc=0, column='domain', value = accessable_domain.keys())
df_sublink.columns = ["sublink_" + str(c) for c in list(df_sublink.columns)]
df_sublink = df_sublink.rename(columns = {'sublink_domain': 'domain'})
df_sublink.head()
df_adcontent = pd.DataFrame.from_dict(result_adcontent)
df_adcontent.insert(loc=0, column='domain', value = accessable_domain.keys())
df_adcontent.columns = ["adcontent_" + str(c) for c in list(df_adcontent.columns)]
df_adcontent = df_adcontent.rename(columns = {'adcontent_domain': 'domain'})
df_adcontent.head()
df_sublink.to_csv("data_sublink.csv")
df_adcontent.to_csv("data_adcontent.csv")
print(errors)
###Output
['seattletimes.com', 'lawenforcementtoday.com', 'cnsnews.com', 'washingtonpost.com', 'gopdailybrief.com', 'pittsburgh.cbslocal.com', 'bipartisanreport.com', 'politi.co', 'disrn.com', 'miamiherald.com', 'losangeles.cbslocal.com', 'newsobserver.com', 'thepatriotjournal.com', 'wtop.com', 'sonsoflibertymedia.com', '2020electioncenter.com', 'americanindependent.com', 'people.com', 'americanjournaldaily.com', 'charlotteobserver.com', 'mcclatchydc.com', 'wired.com', 'sacbee.com', 'usnews.com', 'currently.att.yahoo.com', 'chicago.cbslocal.com', 'heritage.org', 'newyork.cbslocal.com', 'kansascity.com', 'bearingarms.com', 'baltimore.cbslocal.com', 'fresnobee.com', 'mol.im', 'sacramento.cbslocal.com', 'news.sky.com', 'spectator.co.uk', 'modbee.com', 'theduran.com', 'philadelphia.cbslocal.com', 'star-telegram.com', 'gazettenet.com', 'forbiddenknowledgetv.net', 'idahostatesman.com', 'kentucky.com', 'georgiastarnews.com', 'jewishpress.com', 'hereistheevidence.com', 'thestate.com', 'boston.cbslocal.com', 'madamenoire.com', 'miami.cbslocal.com', 'communityimpact.com', 'denver.cbslocal.com', 'postandcourier.com', 'gothamist.com', 'kansas.com', 'corrieredellumbria.corr.it', 'connectingvets.radio.com', 'thenewstribune.com', 'dfw.cbslocal.com', 'minnesota.cbslocal.com', 'uk.finance.yahoo.com', 'bradenton.com', 'trtworld.com', 'ledger-enquirer.com', 'es-us.noticias.yahoo.com', 'olympics.nbcsports.com', 'gwinnettdailypost.com', 'myrtlebeachonline.com', 'blacknewschannel.com', 'sanfrancisco.cbslocal.com', 'uk.reuters.com', 'christiannews.net', 'graphics.reuters.com', 'news8000.com', 'thestranger.com', 'ca.sports.yahoo.com', 'ca.finance.yahoo.com', 'them.us', 'jacksonfreepress.com', 'heraldsun.com', 'tri-cityherald.com', 'leadertelegram.com', 'unionleader.com', 'theolympian.com', 'health.clevelandclinic.org', 'gooddaysacramento.cbslocal.com', 'record-eagle.com', 'english.alaraby.co.uk', 'bnd.com', 'shareblue.com', 'news.wbfo.org', 'onmilwaukee.com', 'centredaily.com', 'pressgazette.co.uk', 'macon.com', 'flkeysnews.com', 'bellinghamherald.com', 'sanluisobispo.com', 'wwaytv3.com', 'conservativeangle.com', 'es.noticias.yahoo.com', 'buzzsawpolitics.com', 'peta.org', 'columbian.com', 'uk.sports.yahoo.com', 'highline.huffingtonpost.com', 'gnews.org', 'swvatoday.com', '1010wins.radio.com', 'sunherald.com', 'au.sports.yahoo.com', 'my.clevelandclinic.org', 'durangoherald.com', 'lasvegas.cbslocal.com', 'wenatcheeworld.com', 'profootballtalk.nbcsports.com', 'portlandmercury.com', 'au.finance.yahoo.com', 'heraldonline.com', 'de.rt.com', 'postnewsera.com', 'catholicherald.co.uk', 'allianceforscience.cornell.edu', 'sg.finance.yahoo.com', 'it.businessinsider.com', 'godfatherpolitics.com', 'nytimespost.com', 'sptnkne.ws', 'thewashingtontime.com', 'collider.com', 'mercedsunstar.com', 'islandpacket.com']
|
04 - New Consumer Group.ipynb | ###Markdown
🦀 New Consumer Group --- Let's create a consumer with another `group_id` (`my_NEW_pizza_group`) and with `auto_offset_reset='earliest'`
###Code
from kafka import KafkaConsumer
import json
from config.kafka_config import *
consumer_new_group = KafkaConsumer(
## NEW group_id #########
group_id='my_NEW_pizza_group',
#########################
bootstrap_servers=hostname+":"+str(port),
security_protocol="SSL",
ssl_cafile=cert_folder+"/ca.pem",
ssl_certfile=cert_folder+"/service.cert",
ssl_keyfile=cert_folder+"/service.key",
value_deserializer = lambda v: json.loads(v.decode('ascii')),
key_deserializer = lambda v: json.loads(v.decode('ascii')),
auto_offset_reset='earliest',
max_poll_records = 10
)
consumer_new_group.subscribe(topics=[topic_name])
consumer_new_group.subscription()
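# Illustrative aside (not part of the original walkthrough): poll() is a non-blocking
# alternative to the iterator used below; it returns at most max_poll_records messages
# and hands control back after timeout_ms. Note that any messages it consumes here will
# not be seen again by the loop below.
first_batch = consumer_new_group.poll(timeout_ms=5000)
for tp, messages in first_batch.items():
    for message in messages:
        print("%d:%d: key=%s value=%s" % (message.partition, message.offset, message.key, message.value))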
for message in consumer_new_group:
print ("%d:%d: key=%s value=%s" % (message.partition,
message.offset,
message.key,
message.value))
###Output
_____no_output_____ |
examples/tutorials/translations/português/Parte 10 - Aprendizagem Federada com Agregação Segura.ipynb | ###Markdown
Part 10: Federated Learning with Encrypted Gradient Aggregation In the last sections, we have been learning about encrypted computation by building several simple programs. In this section, we return to the [Federated Learning demo from Part 4](https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/translations/portugu%C3%AAs/Parte%2004%20-%20Aprendizado%20Federado%20por%20meio%20de%20Agregador%20Confi%C3%A1vel.ipynb), where we had a "trusted aggregator" responsible for averaging the model updates from several _workers_. We will now use our new tools for encrypted computation to remove this trusted aggregator, because it is less than ideal: it assumes we can find someone trustworthy enough to have access to this sensitive information, and that is not always the case. So, in this part of the tutorial, we will show how SMPC can be used to perform secure aggregation so that we don't need a "trusted aggregator". Authors: - Theo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel) - Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) Translation: - Jeferson Silva - Github: [@jefersonf](https://github.com/jefersonf) Section 1: Vanilla Federated Learning First, here is some code that performs classic federated learning on the _Boston Housing Dataset_. This section of the code is split into several parts. Setup
###Code
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
class Parser:
"""Parameters for training"""
def __init__(self):
self.epochs = 10
self.lr = 0.001
self.test_batch_size = 8
self.batch_size = 8
self.log_interval = 10
self.seed = 1
args = Parser()
torch.manual_seed(args.seed)
kwargs = {}
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
with open('../data/BostonHousing/boston_housing.pickle','rb') as f:
((X, y), (X_test, y_test)) = pickle.load(f)
X = torch.from_numpy(X).float()
y = torch.from_numpy(y).float()
X_test = torch.from_numpy(X_test).float()
y_test = torch.from_numpy(y_test).float()
# preprocessing
mean = X.mean(0, keepdim=True)
dev = X.std(0, keepdim=True)
mean[:, 3] = 0. # the feature at column 3 is binary,
dev[:, 3] = 1. # so we don't standardize it
X = (X - mean) / dev
X_test = (X_test - mean) / dev
train = TensorDataset(X, y)
test = TensorDataset(X_test, y_test)
train_loader = DataLoader(train, batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = DataLoader(test, batch_size=args.test_batch_size, shuffle=True, **kwargs)
###Output
_____no_output_____
###Markdown
Neural Network Structure
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(13, 32)
self.fc2 = nn.Linear(32, 24)
self.fc3 = nn.Linear(24, 1)
def forward(self, x):
x = x.view(-1, 13)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Net()
optimizer = optim.SGD(model.parameters(), lr=args.lr)
###Output
_____no_output_____
###Markdown
Creating a PyTorch Hook
###Code
import syft as sy
hook = sy.TorchHook(torch)
bob = sy.VirtualWorker(hook, id="bob")
alice = sy.VirtualWorker(hook, id="alice")
james = sy.VirtualWorker(hook, id="james")
compute_nodes = [bob, alice]
###Output
_____no_output_____
###Markdown
**Send data to the workers** Normally they would already have it; we only send it manually here for demonstration purposes.
###Code
train_distributed_dataset = []
for batch_idx, (data,target) in enumerate(train_loader):
data = data.send(compute_nodes[batch_idx % len(compute_nodes)])
target = target.send(compute_nodes[batch_idx % len(compute_nodes)])
train_distributed_dataset.append((data, target))
###Output
_____no_output_____
###Markdown
Training Function
###Code
def train(epoch):
model.train()
for batch_idx, (data,target) in enumerate(train_distributed_dataset):
worker = data.location
model.send(worker)
optimizer.zero_grad()
# update the model
pred = model(data)
loss = F.mse_loss(pred.view(-1), target)
loss.backward()
optimizer.step()
model.get()
if batch_idx % args.log_interval == 0:
loss = loss.get()
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * data.shape[0], len(train_loader),
100. * batch_idx / len(train_loader), loss.item()))
###Output
_____no_output_____
###Markdown
Test Function
###Code
def test():
model.eval()
test_loss = 0
for data, target in test_loader:
output = model(data)
test_loss += F.mse_loss(output.view(-1), target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}\n'.format(test_loss))
###Output
_____no_output_____
###Markdown
Training the Model
###Code
import time
t = time.time()
for epoch in range(1, args.epochs + 1):
train(epoch)
total_time = time.time() - t
print('Total', round(total_time, 2), 's')
###Output
_____no_output_____
###Markdown
Computing the Performance
###Code
test()
###Output
_____no_output_____
###Markdown
Section 2: Adding Encrypted Aggregation Now we will modify this example slightly so that gradients are aggregated using encryption. The main difference is one or two lines of code in the `train()` function, which we will point out. For the moment, let's re-process our data and initialize a model for bob and alice.
###Code
remote_dataset = (list(),list())
train_distributed_dataset = []
for batch_idx, (data,target) in enumerate(train_loader):
data = data.send(compute_nodes[batch_idx % len(compute_nodes)])
target = target.send(compute_nodes[batch_idx % len(compute_nodes)])
remote_dataset[batch_idx % len(compute_nodes)].append((data, target))
def update(data, target, model, optimizer):
model.send(data.location)
optimizer.zero_grad()
pred = model(data)
loss = F.mse_loss(pred.view(-1), target)
loss.backward()
optimizer.step()
return model
bobs_model = Net()
alices_model = Net()
bobs_optimizer = optim.SGD(bobs_model.parameters(), lr=args.lr)
alices_optimizer = optim.SGD(alices_model.parameters(), lr=args.lr)
models = [bobs_model, alices_model]
params = [list(bobs_model.parameters()), list(alices_model.parameters())]
optimizers = [bobs_optimizer, alices_optimizer]
###Output
_____no_output_____
###Markdown
Building our Training Logic The only **real** difference is inside this training method. Let's walk through it step by step. Part A: Training:
###Code
# this is selecting which batch to train on
data_index = 0
# update remote models
# we could iterate this multiple times before proceeding, but we're only iterating once per worker here
for remote_index in range(len(compute_nodes)):
data, target = remote_dataset[remote_index][data_index]
models[remote_index] = update(data, target, models[remote_index], optimizers[remote_index])
###Output
_____no_output_____
###Markdown
Part B: Encrypted Aggregation
###Code
# create a list where we'll deposit our encrypted model average
new_params = list()
# iterate through each parameter
for param_i in range(len(params[0])):
# for each worker
spdz_params = list()
for remote_index in range(len(compute_nodes)):
# select the identical parameter from each worker and copy it
copy_of_parameter = params[remote_index][param_i].copy()
# since SMPC can only work with integers (not floats), we need
# to use Integers to store decimal information. In other words,
# we need to use "Fixed Precision" encoding.
fixed_precision_param = copy_of_parameter.fix_precision()
# now we encrypt it on the remote machine. Note that
# fixed_precision_param is ALREADY a pointer. Thus, when
# we call share, it actually encrypts the data that the
# data is pointing TO. This returns a POINTER to the
# MPC secret shared object, which we need to fetch.
encrypted_param = fixed_precision_param.share(bob, alice, crypto_provider=james)
# now we fetch the pointer to the MPC shared value
param = encrypted_param.get()
# save the parameter so we can average it with the same parameter
# from the other workers
spdz_params.append(param)
# average params from multiple workers, fetch them to the local machine
# decrypt and decode (from fixed precision) back into a floating point number
new_param = (spdz_params[0] + spdz_params[1]).get().float_precision()/2
# save the new averaged parameter
new_params.append(new_param)
###Output
_____no_output_____
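###Markdown
As a quick sanity check (a minimal sketch, assuming the `bob`, `alice` and `james` workers defined above), we can run the same fixed-precision encode/share/reconstruct round trip on a small tensor and confirm we get the original values back, up to the fixed-precision rounding.
###Code
x = torch.tensor([0.1, 0.2, 0.3])
x_enc = x.fix_precision().share(bob, alice, crypto_provider=james)  # secret-share the fixed-precision tensor
x_dec = x_enc.get().float_precision()                               # reconstruct and decode
print(x_dec)
###Output
_____no_output_____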
###Markdown
Part C: Cleanup
###Code
with torch.no_grad():
for model in params:
for param in model:
param *= 0
for model in models:
model.get()
for remote_index in range(len(compute_nodes)):
for param_index in range(len(params[remote_index])):
params[remote_index][param_index].set_(new_params[param_index])
###Output
_____no_output_____
###Markdown
Let's Put It All Together!! Now that we know each step, we can put them all together into a single training loop.
###Code
def train(epoch):
for data_index in range(len(remote_dataset[0])-1):
# update remote models
for remote_index in range(len(compute_nodes)):
data, target = remote_dataset[remote_index][data_index]
models[remote_index] = update(data, target, models[remote_index], optimizers[remote_index])
# encrypted aggregation
new_params = list()
for param_i in range(len(params[0])):
spdz_params = list()
for remote_index in range(len(compute_nodes)):
spdz_params.append(params[remote_index][param_i].copy().fix_precision().share(bob, alice, crypto_provider=james).get())
new_param = (spdz_params[0] + spdz_params[1]).get().float_precision()/2
new_params.append(new_param)
# cleanup
with torch.no_grad():
for model in params:
for param in model:
param *= 0
for model in models:
model.get()
for remote_index in range(len(compute_nodes)):
for param_index in range(len(params[remote_index])):
params[remote_index][param_index].set_(new_params[param_index])
def test():
models[0].eval()
test_loss = 0
for data, target in test_loader:
output = models[0](data)
test_loss += F.mse_loss(output.view(-1), target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
test_loss /= len(test_loader.dataset)
print('Test set: Average loss: {:.4f}\n'.format(test_loss))
t = time.time()
for epoch in range(args.epochs):
print(f"Epoch {epoch + 1}")
train(epoch)
test()
total_time = time.time() - t
print('Total', round(total_time, 2), 's')
###Output
_____no_output_____ |
heat/Heat.ipynb | ###Markdown
Heat Demand 1) Head Demand Dataset
###Code
# Import data set
import pandas as pd
hd = pd.read_pickle("extracted_heat_demand_20210816.pkl")
hd.head()
# Transform POSIX timestamp to date time and set it as index
hd = hd.drop_duplicates(subset=["timestamps"])  # drop_duplicates returns a new frame, so re-assign it
hd["timestamps"] = pd.to_datetime(hd["timestamps"], unit="s")
hd = hd.set_index("timestamps")
hd
# Format time to one -averaged- sample per day
# (to plot easily and get intuition on the data set)
hd_1d = hd.resample('1D').mean()
hd_1d.head()
hd_1d
###Output
_____no_output_____
###Markdown
The extracted heat demand dataset at this stage contains:- data for 762 days (2019-07-01 to 2021-07-31),- for 5 sites (105, 106, 107, 108, 109),- about *warm water* and *floor heating* demand,- both measured in *KW*
###Code
# Check variable type
hd_1d.info()
# Check amount of days
hd_1d.shape
# Check available sites and variables
hd_1d.columns
# Check no value is missing
hd.isnull().sum()
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(16, 6))
plt.title("Warm water demand over 2 years -daily average-")
plt.plot(hd_1d["r000105.m4r1.demand_warm_water_kw"], label='Rack 105 warm water demand (turned off at the end of 2020)')
plt.plot(hd_1d["r000106.m4r2.demand_warm_water_kw"], label='Rack 106 warm water demand (anomalous dataset)')
plt.plot(hd_1d["r000107.m4r3.demand_warm_water_kw"], label='Rack 107 warm water demand')
plt.plot(hd_1d["r000108.m4r4.demand_warm_water_kw"], label='Rack 108 warm water demand')
plt.ylabel("KW")
plt.legend()
plt.grid()
plt.show()
# We predict and study 108
hd108 = hd["r000108.m4r4.demand_warm_water_kw"].interpolate()
# Plot selected rack (108)
fig = plt.figure(figsize=(16, 6))
plt.title("Warm water demand over 2 years -15 minutes sample-")
plt.plot(hd108, label='Rack 108 warm water demand')
plt.ylabel("KW")
plt.legend()
plt.grid()
plt.show()
hd
import numpy as np
def prepare_heat_demand(hd, rack="r000108", node="m4r4", metric="demand_warm_water_kw"):
hd_resampled = hd.resample("15min").mean()
start = hd_resampled.index[0].timestamp() # Return POSIX timestamp
end = hd_resampled.index[-1].timestamp() # Return POSIX timestamp
step_seconds = 15 * 60 # 15 minutes in seconds
hd_resampled["timestamps"] = np.arange(start, end + step_seconds, step_seconds)
hd_resampled = hd_resampled[[f"{rack}.{node}.{metric}", "timestamps"]]
hd_resampled = hd_resampled.set_index("timestamps")
return hd_resampled
hd108_prep = prepare_heat_demand(hd)
hd108_prep
hd108_prep.replace(0, np.nan, inplace=True)
hd108_prep = hd108_prep.dropna()
hd108_prep # This is the target data frame
###Output
_____no_output_____
###Markdown
2) Weather Dataset
###Code
wd = pd.read_pickle("processed_weather_data_ham.pkl")
wd.head()
# Format time in index
wd.index = pd.to_datetime(wd.index, unit="s")
wd
wd.shape
# Check variable type
wd.info()
# Check no value is missing
wd.isnull().sum()
# Drop Weather_Type
wd = wd.drop(columns=["Weather_Type"])
# Interpolate missing NaN values
wd['Visibility'] = wd['Visibility'].interpolate()
wd['Wind_Gust'] = wd['Wind_Gust'].interpolate()
wd.head()
# Check no value is missing
wd.isnull().sum()
# Plot weather dataset
columns = ['Precipitation', 'Wind_Speed', 'Cloud_Cover', 'Relative_Humidity', 'effective_wind_chill', 'effective_heat_index']
fig, axs = plt.subplots(len(columns), 1, sharex=True, figsize=(16, len(columns)*3))
cmap = plt.cm.get_cmap('Dark2', len(columns))
for c, column in enumerate(columns):
axs[c].set_title(column)
axs[c].plot(wd[column], label=column, color=cmap(c))
axs[c].grid()
plt.show()
# Drop columns
wd = wd.drop(columns=["Visibility"])
wd = wd.drop(columns=["Wind_Gust"]) #
wd = wd.drop(columns=["Name"]) # Name of the region (Hamburg)
wd = wd.drop(columns=["Conditions"]) # Clear, Rain, Overcast, etc.
wd
def prepare_weather_data(wd, steps=20, step_size=5): # best 50
sampled_wd = wd
concat = sampled_wd.copy(deep=True)
for step in range(1, steps + 1):
shifted = sampled_wd.shift(step * step_size)
concat = concat.join(shifted, rsuffix=f"_{ -1 * step * step_size}")
return concat
steps=20
step_size=5
wd_prep = prepare_weather_data(wd, steps=steps, step_size=step_size)
wd_prep = wd_prep.dropna()
wd_prep
###Output
_____no_output_____
###Markdown
With the shifting we end up with 7 + 7 * 20 = 147 columns:
###Code
wd_prep.shape
# Plot shifted temperature
fig = plt.figure(figsize=(16, 6))
cmap = plt.cm.get_cmap('winter', steps)
plt.title("Temperature Shift ({} steps, {} step size)".format(steps, step_size))
plt.plot(wd_prep["Temperature"], label='Step {}'.format(0), color=cmap(0))
for step in range(1, steps + 1):
plt.plot(wd_prep["Temperature_{}".format(-1 * step * step_size)], label='Step {}'.format(step), color=cmap(step))
plt.ylabel("Degrees")
plt.xlim([wd_prep.index[0], wd_prep.index[1000]])
plt.legend()
plt.grid()
plt.show()
# Bring the datasets back to posix timestamp
wd_prep.index = pd.to_datetime(wd_prep.index).astype(int) / 10**9
###Output
/tmp/ipykernel_505/1377623296.py:2: FutureWarning: casting datetime64[ns] values to int64 with .astype(...) is deprecated and will raise in a future version. Use .view(...) instead.
wd_prep.index = pd.to_datetime(wd_prep.index).astype(int) / 10**9
###Markdown
3) Merge both heat demand and weather datasets
###Code
result = wd_prep.join(hd108_prep, how='outer')
result_wo_na = result.dropna()
result_wo_na
from datetime import date
print("first sample: {}".format(date.fromtimestamp(result_wo_na.index[0])))
print("last sample: {}".format(date.fromtimestamp(result_wo_na.index[-1])))
print("sample count: {}".format(result_wo_na.index.size))
# for validation purposes the last 20k rows are cut
from copy import deepcopy
X = deepcopy(result_wo_na.iloc[:-20000,:-1])
y = deepcopy(result_wo_na.iloc[:-20000,-1:])
print("sample count: X={}, y={}".format(len(X), len(y)))
# random split of 20%
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=2)
print("sample count: X_train={}, y_train={}".format(len(X_train), len(y_train)))
print("sample count: X_test={}, y_test={}".format(len(X_test), len(y_test)))
###Output
sample count: X_train=32314, y_train=32314
sample count: X_test=8079, y_test=8079
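###Markdown
The last 20,000 rows that were cut above can serve as a hold-out validation set. This is a minimal sketch; the variable names `X_val` and `y_val` are assumptions and do not appear in the original notebook.
###Code
X_val = result_wo_na.iloc[-20000:, :-1]
y_val = result_wo_na.iloc[-20000:, -1:]
print("validation sample count: X_val={}, y_val={}".format(len(X_val), len(y_val)))
###Output
_____no_output_____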
###Markdown
4.a) Training XGB (directly from the xgboost lib)
###Code
import xgboost as xgb
dtrain = xgb.DMatrix(X_train, label=y_train)
print("DMatrix dtrain num_col: {}, num_row: {}".format(
dtrain.num_col(),
dtrain.num_row()))
params = {'eta': 0.1}
model = xgb.train(params=params,
dtrain=dtrain,
num_boost_round=50,
verbose_eval=True)
# predict using the plaintext prediction
plaintext_predict = model.predict(xgb.DMatrix(X_test))
len(plaintext_predict)
###Output
DMatrix dtrain num_col: 147, num_row: 32314
###Markdown
4.b) Training XGBRegressor (from Scikit-Learn Wrapper interface for XGBoost)
###Code
# https://www.datatechnotes.com/2019/06/regression-example-with-xgbregressor-in.html
xgbr = xgb.XGBRegressor(verbosity=1, n_estimators=2500)
print(xgbr)
xgbr.fit(X_train, y_train)
score = xgbr.score(X_train, y_train)
print("Training score: ", score)
score = xgbr.score(X_test, y_test)
print("Training score: ", score)
model = xgbr.get_booster()
model.dump_model('heat.txt')
###Output
_____no_output_____
###Markdown
Encryption Preparation for the XGBoost Model: 1. Set up some metadata information for the dataset. 2. Set up the encryption materials. 3. Encrypt the model. 4. Encrypt the query. 5. Perform the prediction. 6. Decrypt the prediction.
###Code
# 1. parsing to internal tree data structure, and output feature set
from ppxgboost import BoosterParser as boostparser
min_max = boostparser.training_dataset_parser(X_test)
enc_tree, feature_set, min_max = boostparser.model_to_trees(model, min_max)
# 2. Set up encryption materials.
from secrets import token_bytes
prf_key = token_bytes(16)
from ppxgboost import PaillierAPI as paillier
public_key, private_key = paillier.he_key_gen()
import sys
sys.path.append('../third-party')
from ope.pyope.ope import OPE
encrypter = OPE(token_bytes(16))
from ppxgboost.PPKey import PPBoostKey
ppBoostKey = PPBoostKey(public_key, prf_key, encrypter)
# 3. process the tree into enc_tree
from ppxgboost import PPBooster as ppbooster
from ppxgboost.PPBooster import MetaData
ppbooster.enc_xgboost_model(ppBoostKey, enc_tree, MetaData(min_max))
# 4. Encrypts the input vector for prediction (using prf_key_hash and ope-encrypter) based on the feature set.
ppbooster.enc_input_vector(prf_key, encrypter, feature_set, X_test, MetaData(min_max))
# 5. privacy-preserving evaluation.
import time
start = time.time()
values = ppbooster.predict_binary(enc_tree, X_test)
end = time.time()
print("Elapsed Time: ", end - start)
# evaluate predictions with a regression metric (accuracy is not applicable to a
# regression task); we use the plaintext predictions computed before X_test was
# encrypted in place by enc_input_vector above
from sklearn.metrics import mean_squared_error
y_pred = plaintext_predict
mse_plaintext = mean_squared_error(y_test, y_pred)
print("MSE (plaintext predictions): %.4f" % mse_plaintext)
###Output
_____no_output_____
###Markdown
4.c) Training GBR
###Code
# training parameters
params = {'n_estimators': 2500, #4500
'max_depth': 16, #14 #16
'min_samples_split': 2, #2 #2
'learning_rate': 0.012, #0,015 #0.012
'loss': 'ls',
'subsample': 0.12, #0.13 #0.12
'max_features': 50, #48 #50
'verbose': 1}
# GBR model
from sklearn import datasets, ensemble
gbr = ensemble.GradientBoostingRegressor(**params)
# Train
# gbr.fit(X_train, y_train)
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(y_test, gbr.predict(X_test))
print("The mean squared error (MSE) on test set: {:.4f}".format(mse))
score = gbr.score(X_test,y_test)
print("The score on test set: {:.4f}".format(score))
plt.plot(y_test[50:200], label='realData')
plt.plot(y_pred[50:200], label='predicted')
plt.legend(loc="upper left",prop={'size': 20})
plt.grid()
plt.show()
###Output
_____no_output_____ |
chap09/09-textbook-clustering-methods-02.ipynb | ###Markdown
Chapter 9 - Clustering Methods In $k$-means clustering, we need to specify the number of clusters $k$ before running the algorithm ($k$ is thought to be the hyperparameter of this algorithm). Hierarchical clustering is an alternative approach which does not require us to commit to a particular choice of $k$. Also, hierarchical clustering results in an easy-to-interpret tree-based representation of the observations called a dendrogram. Interpreting a DendrogramConsider a dataset with 45 observations in a 2-dimensional space. We aim to perform hierarchical clustering of the data. The following is the scatterplot of the observations.The following is how the dataset is visualised in a dendrogram. Each observation is represented as a leaf at the bottom of the tree. As we traverse up the tree, leaves fuse to form a branch. As we continue to traverse up, branches & leaves fuse further. Observations that fuse at the lower levels of the tree are quite similar while observations that fuse at the higher levels of the tree are quite different. Specifically, the height where the observations fuse, measured by the position on the $y$-axis, indicates how different the two observations are. To identify clusters using the dendrogram, we make a horizontal cut across the dendrogram. The observations that belong to each branch are interpreted as one cluster. Observe that when we cut at position 9, we obtain two clusters, while if we cut at position 5, we get three clusters. As we cut at a lower position, we obtain more clusters. In other words, the height to cut the dendrogram serves the same role as determining $k$ in $k$-means clustering.Hierarchical clustering is so named because the clusters obtained by cutting the dendrogram at a given height are nested within the clusters obtained by cutting the dendrogram at a greater height. (Using this example, the clusters obtained at height 5 will fuse into the smaller number of clusters obtained at height 9). However, the assumption of a hierarchical structure might be unrealistic. Clusters could be created by slicing and dicing the data across features (e.g. gender, followed by race). Hence, sometimes hierarchical clustering might yield worse results than $k$-means clustering. The Hierarchical Clustering AlgorithmBefore we run the algorithm, we define a dissimilarity measure between each pair of observations. Usually the Euclidean distance is used. Then, from the bottom of the dendrogram, each of the $n$ observations is treated as its own cluster. The two clusters most similar to each other are then fused to form $n-1$ clusters. Then, the next two clusters most similar to each other are fused to form $n-2$ clusters. This proceeds until all observations are fused to form one cluster. Then the dendrogram is complete. The following is an example of the first few steps of the clustering.Dissimilarity needs to be extended beyond a pair of observations to a pair of groups (of observations). This extends dissimilarity to the idea of linkage, which defines the dissimilarity between two groups of observations. The four common types of linkage are complete, average, single and centroid, and also Ward from SKLearn.- Complete: Compute the pairwise dissimilarity for each observation in cluster $A$ and cluster $B$. Record the largest of the dissimilarities.- Single: Compute the pairwise dissimilarity for each observation in cluster $A$ and cluster $B$. Record the smallest of the dissimilarities.- Average: Compute the pairwise dissimilarity for all observations in cluster $A$ and cluster $B$. 
Record the average of the dissimilarities.- Centroid: Compute and record the dissimilarity for the centroid of cluster $A$ and the centroid of cluster $B$.- Ward: The distance between cluster $A$ and $B$ is measured as the amount the RSS will increase after they are merged. Find the centroids of clusters $A$ and $B$ respectively and sum the distance between each observation to their respective clusters to find $\sum_{A} + \sum_{B}$. Then, find the centroid of $A \cup B$. Sum the distance between all observations to this centroid to get $\sum_{A\cup B}$. Record the increase in distance, i.e. $\sum_{A\cup B} - (\sum_{A} + \sum_{B})$. The algorithm to use for hierarchical clustering is:```comments1. Begin with n observations and a measure (e.g. Euclidean distance)> Calculate the pairwise distance between all n(n-1)/2 pairs of observations. Treat each observation as 1 cluster2. Iterate until there is 1 cluster: 1. Find all pairwise intercluster dissimilarities and find the pair of clusters that is least dissimilar (most similar) 2. Fuse these clusters together```
###Code
import pickle
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn.cluster import AgglomerativeClustering
def load(fname):
mnist = None
try:
with open(fname, 'rb') as f:
mnist = pickle.load(f)
return mnist
except FileNotFoundError:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True)
with open(fname, 'wb') as f:
mnist = pickle.dump(mnist, f)
return mnist
# Ingest
mnist_data = load('mnist.data.pkl')
X, y = mnist_data['data'], mnist_data['target']
y_int = y.astype(int)
# Filter for only 2 and 7
y_2_idx = (y_int == 2)
y_7_idx = (y_int == 7)
X_2 = X[np.where(y_2_idx)]
X_7 = X[np.where(y_7_idx)]
lbl_2 = np.ones(X_2.shape[0])*2
lbl_7 = np.ones(X_7.shape[0])*7
Xsmall = np.concatenate([X_2, X_7])
ysmall = np.concatenate([lbl_2, lbl_7])
# Hierarchical clustering algorithm
clf = AgglomerativeClustering(n_clusters=2)
clf.fit(Xsmall)
# Note: Clustering took 1min 30s
y_predict = clf.labels_
# Check validity of clustering
y_df = pd.DataFrame({'y_test' : ysmall, 'y_predict' : y_predict})
y_df['y_predict'] = y_df['y_predict'].map({1:2,0:7})
y_df['y_test'] = y_df['y_test'].astype(int)
idx = y_df[~(y_df.y_test==y_df.y_predict)].sample(20).index
for i in idx:
print(y_df.loc[i].to_dict())
digit = Xsmall[i].reshape(28,28)
plt.imshow(digit, cmap=plt.cm.binary)
plt.show()
###Output
{'y_test': 7, 'y_predict': 2}
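###Markdown
As an illustrative aside (not from the textbook), we can also build a dendrogram explicitly with SciPy on a small random subsample of `Xsmall` and "cut" it into a chosen number of clusters with `fcluster`; the subsample size of 200 is an arbitrary assumption to keep the linkage computation fast.
###Code
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
rng = np.random.RandomState(0)
idx_demo = rng.choice(Xsmall.shape[0], size=200, replace=False)
Z = linkage(Xsmall[idx_demo], method='ward')      # same Ward linkage idea described above
dendrogram(Z, no_labels=True)                     # the dendrogram we would cut horizontally
plt.show()
labels_demo = fcluster(Z, t=2, criterion='maxclust')  # equivalent to cutting at a height that leaves 2 clusters
print(np.unique(labels_demo, return_counts=True))
###Output
_____no_output_____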
|
Spark-pubmed-abstracts-analysis.ipynb | ###Markdown
Pubmed abstracts analysis
###Code
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
glueContext = GlueContext(SparkContext.getOrCreate())
df=spark.read.json("s3n://aegovan-data/pubmed-json/")
df.printSchema()
df.count()
df.groupBy('pub_date.year').count().orderBy("year").show(n=500)
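# Illustrative follow-up (not in the original notebook): pull the per-year counts into a
# pandas DataFrame for local inspection or plotting.
year_counts = df.groupBy('pub_date.year').count().orderBy("year").toPandas()
year_counts.head()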
###Output
_____no_output_____ |
DS ao DEV - Modulo 5 - Exercicios.ipynb | ###Markdown
Classics
###Code
#API requests
url = 'http://books.toscrape.com/catalogue/category/books/classics_6'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
page = requests.get(url, headers=headers)
#Beautiful Soup Objects
soup = BeautifulSoup(page.text, 'html.parser')
books_classic = soup.find('ol', class_='row')
books_classic_list = books_classic.find_all('article', class_='product_pod')
#books_classic_list[0]
# ============= Scraping Name, Price, Availability ============== #
classic_attributes = [list(filter(None, p.get_text().split('\n'))) for p in books_classic_list]
book_classic_table = pd.DataFrame(classic_attributes)
book_classic_table.columns = ['Name', 'Price', 'del1', 'Availability', 'del2', 'del3']
book_classic_table = book_classic_table.drop(['del1', 'del2', 'del3'], axis=1)
#regex = '(\d+\.\d+)'
#book_classic_table['Price'] = book_classic_table['Price'].apply(lambda x: re.search(regex, x).group(1))
book_classic_table['Price'] = book_classic_table['Price'].apply(lambda x: x[2:])
# ============= Scraping Star-Rating ================== #
classic_rating = [list(p.find('p', class_='star-rating').get('class')) for p in books_classic_list]
classic_rating = pd.DataFrame(classic_rating, columns=['del1', 'Star Rating'])
classic_rating = classic_rating.drop('del1', axis=1)
# ============= Table Concat ============== #
book_classic_table = pd.concat([book_classic_table, classic_rating], axis=1)
# ============= Insert Datetime Scrapy Column ============== #
book_classic_table['Scrapy Datetime'] = datetime.now().strftime('%Y-%m-%d- %H:%M:%S')
book_classic_table['Catalog'] = 'Classics'
book_classic_table
###Output
_____no_output_____
###Markdown
Science Fiction
###Code
#API Requests
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
url = 'https://books.toscrape.com/catalogue/category/books/science-fiction_16'
page = requests.get(url, headers=headers)
#Beautiful Soup Objects
soup = BeautifulSoup(page.text, 'html.parser')
books_sf = soup.find('ol', class_='row')
books_sf_list = books_sf.find_all('article', class_='product_pod')
#books_sf_list[0]
# ============= Scraping Name, Price, Availability ============== #
sf_attributes = [list(filter(None, p.get_text().split('\n'))) for p in books_sf_list]
book_sf_table = pd.DataFrame(sf_attributes)
book_sf_table.columns = ['Name', 'Price', 'del1', 'Availability', 'del2', 'del3']
book_sf_table = book_sf_table.drop(['del1', 'del2', 'del3'], axis=1)
book_sf_table['Price'] = book_sf_table['Price'].apply(lambda x: x[2:])
#book_sf_table
# ============= Scraping Star-Rating ================== #
sf_rating = [list(p.find('p', class_='star-rating').get('class')) for p in books_sf_list]
sf_rating = pd.DataFrame(sf_rating, columns=['del1', 'Star Rating'])
sf_rating = sf_rating.drop('del1', axis=1)
# ============= Table Concat ============== #
book_sf_table = pd.concat([book_sf_table, sf_rating], axis=1)
# ============= Insert Datetime Scrapy Column ============== #
book_sf_table['Scrapy Datetime'] = datetime.now().strftime('%Y-%m-%d- %H:%M:%S')
book_sf_table['Catalog'] = 'Science Fiction'
book_sf_table
###Output
_____no_output_____
###Markdown
Humor
###Code
#API Requests
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
url = 'https://books.toscrape.com/catalogue/category/books/humor_30'
page = requests.get(url, headers=headers)
#Beautiful Soup Objects
soup = BeautifulSoup(page.text, 'html.parser')
books_humor = soup.find('ol', class_='row')
books_humor_list = books_humor.find_all('article', class_='product_pod')
#books_sf_list[0]
# ============= Scraping Name, Price, Availability ============== #
humor_attributes = [list(filter(None, p.get_text().split('\n'))) for p in books_humor_list]
book_humor_table = pd.DataFrame(humor_attributes)
book_humor_table.columns = ['Name', 'Price', 'del1', 'Availability', 'del2', 'del3']
book_humor_table = book_humor_table.drop(['del1', 'del2', 'del3'], axis=1)
book_humor_table['Price'] = book_humor_table['Price'].apply(lambda x: x[2:])
#book_humor_table
# ============= Scraping Star-Rating ================== #
humor_rating = [list(p.find('p', class_='star-rating').get('class')) for p in books_humor_list]
humor_rating = pd.DataFrame(humor_rating, columns=['del1', 'Star Rating'])
humor_rating = humor_rating.drop('del1', axis=1)
# ============= Table Concat ============== #
book_humor_table = pd.concat([book_humor_table, humor_rating], axis=1)
# ============= Insert Datetime Scrapy Column ============== #
book_humor_table['Scrapy Datetime'] = datetime.now().strftime('%Y-%m-%d- %H:%M:%S')
book_humor_table['Catalog'] = 'Humor'
book_humor_table
###Output
_____no_output_____
###Markdown
Business
###Code
#API Requests
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
url = 'https://books.toscrape.com/catalogue/category/books/business_35'
page = requests.get(url, headers=headers)
#Beautiful Soup Objects
soup = BeautifulSoup(page.text, 'html.parser')
books_business = soup.find('ol', class_='row')
books_business_list = books_business.find_all('article', class_='product_pod')
books_business_list[0]
# ============= Scraping Name, Price, Availability ============== #
business_attributes = [list(filter(None, p.get_text().split('\n'))) for p in books_business_list]
book_business_table = pd.DataFrame(business_attributes)
book_business_table.columns = ['Name', 'Price', 'del1', 'Availability', 'del2', 'del3']
book_business_table = book_business_table.drop(['del1', 'del2', 'del3'], axis=1)
book_business_table['Price'] = book_business_table['Price'].apply(lambda x: x[2:])
#book_humor_table
# ============= Scraping Star-Rating ================== #
business_rating = [list(p.find('p', class_='star-rating').get('class')) for p in books_business_list]
business_rating = pd.DataFrame(business_rating, columns=['del1', 'Star Rating'])
business_rating = business_rating.drop('del1', axis=1)
# ============= Table Concat ============== #
book_business_table = pd.concat([book_business_table, business_rating], axis=1)
# ============= Insert Datetime Scrapy Column ============== #
book_business_table['Scrapy Datetime'] = datetime.now().strftime('%Y-%m-%d- %H:%M:%S')
book_business_table['Catalog'] = 'Business'
book_business_table
###Output
_____no_output_____
###Markdown
Complete Catalog
###Code
book_catalog = pd.concat([book_classic_table, book_sf_table, book_humor_table, book_business_table], axis=0)
book_catalog.columns = ['name', 'price', 'availability', 'rating', 'datetime', 'catalog']
book_catalog.head()
book_catalog.to_csv('book_catalog.csv')
query_books_schema = """
CREATE TABLE books (
name TEXT,
price REAL,
availability TEXT,
rating TEXT,
datetime TEXT,
catalog TEXT
)
"""
conn = sqlite3.connect('book_catalog.sqlite')
cursor = conn.execute(query_books_schema)
conn.commit()
conn.close()
conn = create_engine('sqlite:///book_catalog.sqlite', echo=False)
book_catalog.to_sql('books', con=conn, if_exists='append', index=False)
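# Illustrative aside (not part of the original exercise): with the data in SQLite we can
# also run aggregate queries, e.g. book count and average price per catalog.
query_avg = """
SELECT catalog, COUNT(*) AS n_books, AVG(price) AS avg_price
FROM books
GROUP BY catalog
"""
print(pd.read_sql_query(query_avg, conn))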
query = """
SELECT * FROM books WHERE catalog = 'Humor'
"""
df = pd.read_sql_query(query, conn)
df
###Output
_____no_output_____ |
2. Improving Deep Neural Networks- Hyperparameter tuning- Regularization and Optimization/Initialization.ipynb | ###Markdown
InitializationWelcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can:- Speed up the convergence of gradient descent- Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify.
###Code
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
###Output
_____no_output_____
###Markdown
You would like a classifier to separate the blue dots from the red dots. 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument.- *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls.
###Code
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
"""
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
2 - Zero initializationThere are two types of parameters to initialize in a neural network:- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$**Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but lets try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
###Code
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0. 0. 0.]
[ 0. 0. 0.]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0. 0.]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 0. 0. 0.] [ 0. 0. 0.]] **b1** [[ 0.] [ 0.]] **W2** [[ 0. 0.]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using zeros initialization.
###Code
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453
###Markdown
The performance is really bad, and the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
###Code
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. **What you should remember**:- The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. 3 - Random initializationTo break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values. **Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
###Code
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
"""
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
    np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1])*10
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[ 0.]
[ 0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] **b1** [[ 0.] [ 0.]] **W2** [[-0.82741481 -6.27000677]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using random initialization.
###Code
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/jovyan/work/week5/Initialization/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
###Markdown
If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results. than before. The model is no longer outputting all 0s.
###Code
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.**In summary**:- Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! 4 - He initializationFinally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.)**Exercise**: Implement the following function to initialize your parameters with He initialization.**Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
###Code
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
"""
Arguments:
    layers_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2./layers_dims[l-1])
parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] **b2** [[ 0.]] Run the following code to train your model on 15,000 iterations using He initialization.
###Code
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
Optimization methods DEMO.ipynb | ###Markdown
Model with different optimization algorithms
###Code
# The cells in this demo assume the imports below (they are not shown elsewhere
# in this notebook). neural_net and performance are project-local helper modules
# that provide the MLNN class and compute_accuracy function used further down.
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import neural_net
import performance
def load_dataset():
np.random.seed(3)
train_X, train_Y = sklearn.datasets.make_moons(n_samples=300, noise=.2) #300 #0.2
# Visualize the data
plt.scatter(train_X[:, 0], train_X[:, 1], c=train_Y, s=40, cmap=plt.cm.Spectral)
train_X = train_X.T
train_Y = train_Y.reshape((1, train_Y.shape[0]))
return train_X, train_Y
def plot_decision_boundary(clf, X, y):
# Set min and max values and give it some padding
x_min, x_max = X[0, :].min() - 1, X[0, :].max() + 1
y_min, y_max = X[1, :].min() - 1, X[1, :].max() + 1
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = np.c_[xx.ravel(), yy.ravel()].T
Z = clf.predict(Z)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plot_data(X.T, y.T.ravel())
def plot_data(X, y):
plt.scatter(X[y == 0, 0], X[y == 0, 1], color="red", s=30, label="Cluster1")
plt.scatter(X[y == 1, 0], X[y == 1, 1], color="blue", s=30, label="Cluster2")
train_X, train_Y = load_dataset()
###Output
_____no_output_____
###Markdown
MINI-BATCH GRADIENT DESCENT
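All of the optimizers below are run on mini-batches rather than on the full training set at once. The exact batching logic lives inside `neural_net.MLNN` and is not shown in this notebook, but a typical way to build the mini-batches looks like the following sketch (illustrative only):
###Code
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """Illustrative sketch: shuffle the examples (columns), then slice them
    into consecutive chunks of size mini_batch_size."""
    np.random.seed(seed)
    m = X.shape[1]                      # number of examples
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation]
    mini_batches = []
    for k in range(0, m, mini_batch_size):
        mini_batch_X = shuffled_X[:, k:k + mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k:k + mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))
    return mini_batches
###Output
_____no_output_____
###Markdown
With plain ('gd') mini-batch gradient descent, each mini-batch simply produces the update $W^{[l]} \leftarrow W^{[l]} - \alpha \, dW^{[l]}$.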
###Code
layers_dims = [train_X.shape[0], 5, 2, 1]
init_method = 'he'
activations = ('relu', 'sigmoid')
lambd = 0
optimizer_name = 'gd'
learning_rate = 0.0007
num_epochs = 10000
clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs)
clf.train(train_X, train_Y)
predictions = clf.predict(train_X)
gd_accuracy = performance.compute_accuracy(train_Y, predictions)
print('Gradient descent optimization accuracy = ', gd_accuracy, '%')
###Output
Gradient descent optimization accuracy = 87.33333333333333 %
###Markdown
**GETTING THE MLNN PARAMS**
###Code
params = clf.get_params()
plt.plot(params['costs'])
plt.title('Cost per 100 epochs')
plot_decision_boundary(clf, train_X, train_Y)
###Output
_____no_output_____
###Markdown
MINI-BATCH GRADIENT DESCENT WITH MOMENTUM
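For reference, the update rule that a momentum optimizer typically applies is sketched below. This is only an illustrative sketch (the internals of `neural_net.MLNN` are not shown here); it assumes parameters and gradients stored in dictionaries keyed "W1", "b1", "dW1", "db1", and so on.
###Code
def update_parameters_with_momentum(parameters, grads, v, beta=0.9, learning_rate=0.0007):
    """Illustrative sketch: v keeps an exponentially weighted average of past
    gradients (the 'velocity'), and the parameters move along that average."""
    L = len(parameters) // 2            # number of layers
    for l in range(1, L + 1):
        v["dW" + str(l)] = beta * v["dW" + str(l)] + (1 - beta) * grads["dW" + str(l)]
        v["db" + str(l)] = beta * v["db" + str(l)] + (1 - beta) * grads["db" + str(l)]
        parameters["W" + str(l)] -= learning_rate * v["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * v["db" + str(l)]
    return parameters, v
###Output
_____no_output_____
###Markdown
The cell below trains the same network as before, only switching `optimizer_name` to 'momentum'.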
###Code
layers_dims = [train_X.shape[0], 5, 2, 1]
init_method = 'he'
activations = ('relu', 'sigmoid')
lambd = 0
optimizer_name = 'momentum'
learning_rate = 0.0007
num_epochs = 10000
clf = neural_net.MLNN(layers_dims, init_method, activations, lambd,
optimizer_name, learning_rate, num_epochs)
clf.train(train_X, train_Y)
predictions = clf.predict(train_X)
momentum_accuracy = performance.compute_accuracy(train_Y, predictions)
print('Momentum optimization accuracy = ', momentum_accuracy, '%')
plot_decision_boundary(clf, train_X, train_Y)
params = clf.get_params()
plt.plot(params['costs'])
plt.title('Cost per 100 epochs')
###Output
_____no_output_____
###Markdown
MINI-BATCH GRADIENT DESCENT WITH RMSPROP
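RMSProp scales each gradient component by a running average of its recent squared magnitude, so components with consistently large gradients take smaller steps. Again this is only an illustrative sketch of the usual update, under the same dictionary-key assumptions as above:
###Code
import numpy as np

def update_parameters_with_rmsprop(parameters, grads, s, beta2=0.999,
                                   learning_rate=0.0007, epsilon=1e-8):
    """Illustrative sketch: s keeps an exponentially weighted average of the
    squared gradients; each step is divided by its square root."""
    L = len(parameters) // 2
    for l in range(1, L + 1):
        s["dW" + str(l)] = beta2 * s["dW" + str(l)] + (1 - beta2) * np.square(grads["dW" + str(l)])
        s["db" + str(l)] = beta2 * s["db" + str(l)] + (1 - beta2) * np.square(grads["db" + str(l)])
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)] / (np.sqrt(s["dW" + str(l)]) + epsilon)
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)] / (np.sqrt(s["db" + str(l)]) + epsilon)
    return parameters, s
###Output
_____no_output_____
###Markdown
The training run below switches `optimizer_name` to 'rmsprop' while keeping every other hyperparameter fixed.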
###Code
layers_dims = [train_X.shape[0], 5, 2, 1]
init_method = 'he'
activations = ('relu', 'sigmoid')
lambd = 0
optimizer_name = 'rmsprop'
learning_rate = 0.0007
num_epochs = 10000
clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs)
clf.train(train_X, train_Y)
predictions = clf.predict(train_X)
rmsprop_accuracy = performance.compute_accuracy(train_Y, predictions)
print('RMSProp optimization accuracy = ', rmsprop_accuracy, '%')
plot_decision_boundary(clf, train_X, train_Y)
params = clf.get_params()
plt.plot(params['costs'])
plt.title('Cost per 100 epochs')
###Output
_____no_output_____
###Markdown
MINI-BATCH GRADIENT DESCENT WITH ADAM
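Adam combines the momentum and RMSProp ideas and adds bias correction for the early steps. As before, this is an illustrative sketch only, assuming the same "W1"/"dW1"-style dictionaries; `t` is the number of update steps taken so far.
###Code
import numpy as np

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.0007,
                                beta1=0.9, beta2=0.999, epsilon=1e-8):
    """Illustrative sketch: v is the momentum term, s the RMSProp term, and
    both are bias-corrected by the step counter t before the update."""
    L = len(parameters) // 2
    for l in range(1, L + 1):
        v["dW" + str(l)] = beta1 * v["dW" + str(l)] + (1 - beta1) * grads["dW" + str(l)]
        v["db" + str(l)] = beta1 * v["db" + str(l)] + (1 - beta1) * grads["db" + str(l)]
        s["dW" + str(l)] = beta2 * s["dW" + str(l)] + (1 - beta2) * np.square(grads["dW" + str(l)])
        s["db" + str(l)] = beta2 * s["db" + str(l)] + (1 - beta2) * np.square(grads["db" + str(l)])
        v_corrected_W = v["dW" + str(l)] / (1 - beta1 ** t)
        v_corrected_b = v["db" + str(l)] / (1 - beta1 ** t)
        s_corrected_W = s["dW" + str(l)] / (1 - beta2 ** t)
        s_corrected_b = s["db" + str(l)] / (1 - beta2 ** t)
        parameters["W" + str(l)] -= learning_rate * v_corrected_W / (np.sqrt(s_corrected_W) + epsilon)
        parameters["b" + str(l)] -= learning_rate * v_corrected_b / (np.sqrt(s_corrected_b) + epsilon)
    return parameters, v, s
###Output
_____no_output_____
###Markdown
Note that this run also switches `init_method` to 'xavier', so both the initialization and the optimizer differ from the previous experiments.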
###Code
layers_dims = [train_X.shape[0], 5, 2, 1]
init_method = 'xavier'
activations = ('relu', 'sigmoid')
lambd = 0
optimizer_name = 'adam'
learning_rate = 0.0007
num_epochs = 10000
clf = neural_net.MLNN(layers_dims, init_method, activations, lambd, optimizer_name, learning_rate, num_epochs)
clf.train(train_X, train_Y)
predictions = clf.predict(train_X)
adam_accuracy = performance.compute_accuracy(train_Y, predictions)
print('ADAM optimization accuracy = ', adam_accuracy, '%')
plot_decision_boundary(clf, train_X, train_Y)
params = clf.get_params()
plt.plot(params['costs'])
plt.title('Cost per 100 epochs')
###Output
_____no_output_____ |
#3 Data Manipulation & Visualization/Visualization/#3.2.11 - Customizing Ticks.ipynb | ###Markdown
Customizing Ticks Major and Minor TicksWithin each axis, there is the concept of a *major* tick mark, and a *minor* tick mark. As the names would imply, major ticks are usually bigger or more pronounced, while minor ticks are usually smaller. By default, Matplotlib rarely makes use of minor ticks, but one place you can see them is within logarithmic plots:
###Code
import matplotlib.pyplot as plt
plt.style.use('classic')
import numpy as np
ax = plt.axes(xscale='log', yscale='log')
ax.grid();
###Output
_____no_output_____
###Markdown
We see here that each major tick shows a large tickmark and a label, while each minor tick shows a smaller tickmark with no label.These tick properties (that is, the tick locations and labels) can be customized by setting the ``formatter`` and ``locator`` objects of each axis. Let's examine these for the x axis of the plot just shown:
###Code
print(ax.xaxis.get_major_locator())
print(ax.xaxis.get_minor_locator())
print(ax.xaxis.get_major_formatter())
print(ax.xaxis.get_minor_formatter())
###Output
<matplotlib.ticker.LogFormatterSciNotation object at 0x00000218A0133438>
<matplotlib.ticker.LogFormatterSciNotation object at 0x00000218A00EDDA0>
###Markdown
We see that both major and minor tick labels have their locations specified by a ``LogLocator`` (which makes sense for a logarithmic plot). Minor ticks, though, have their labels formatted by a ``NullFormatter``: this says that no labels will be shown.We'll now show a few examples of setting these locators and formatters for various plots. Hiding Ticks or LabelsPerhaps the most common tick/label formatting operation is the act of hiding ticks or labels.This can be done using ``plt.NullLocator()`` and ``plt.NullFormatter()``, as shown here:
###Code
ax = plt.axes()
ax.plot(np.random.rand(50))
ax.yaxis.set_major_locator(plt.NullLocator())
ax.xaxis.set_major_formatter(plt.NullFormatter())
###Output
_____no_output_____
###Markdown
Notice that we've removed the labels (but kept the ticks/gridlines) from the x axis, and removed the ticks (and thus the labels as well) from the y axis.Having no ticks at all can be useful in many situations—for example, when you want to show a grid of images.For instance, consider the following figure, which includes images of different faces, an example often used in supervised machine learning problems.
###Code
fig, ax = plt.subplots(5, 5, figsize=(5, 5))
fig.subplots_adjust(hspace=0, wspace=0)
# Get some face data from scikit-learn
from sklearn.datasets import fetch_olivetti_faces
faces = fetch_olivetti_faces().images
for i in range(5):
for j in range(5):
ax[i, j].xaxis.set_major_locator(plt.NullLocator())
ax[i, j].yaxis.set_major_locator(plt.NullLocator())
ax[i, j].imshow(faces[10 * i + j], cmap="bone")
###Output
_____no_output_____
###Markdown
Notice that each image has its own axes, and we've set the locators to null because the tick values (pixel number in this case) do not convey relevant information for this particular visualization. Reducing or Increasing the Number of TicksOne common problem with the default settings is that smaller subplots can end up with crowded labels.We can see this in the plot grid shown here:
###Code
fig, ax = plt.subplots(4, 4, sharex=True, sharey=True)
###Output
_____no_output_____
###Markdown
Particularly for the x ticks, the numbers nearly overlap and make them quite difficult to decipher.We can fix this with the ``plt.MaxNLocator()``, which allows us to specify the maximum number of ticks that will be displayed.Given this maximum number, Matplotlib will use internal logic to choose the particular tick locations:
###Code
# For every axis, set the x and y major locator
for axi in ax.flat:
axi.xaxis.set_major_locator(plt.MaxNLocator(3))
axi.yaxis.set_major_locator(plt.MaxNLocator(3))
fig
###Output
_____no_output_____
###Markdown
This makes things much cleaner. If you want even more control over the locations of regularly-spaced ticks, you might also use ``plt.MultipleLocator``, which we'll discuss in the following section. Fancy Tick FormatsMatplotlib's default tick formatting can leave a lot to be desired: it works well as a broad default, but sometimes you'd like to do something more.Consider this plot of a sine and a cosine:
###Code
# Plot a sine and cosine curve
fig, ax = plt.subplots()
x = np.linspace(0, 3 * np.pi, 1000)
ax.plot(x, np.sin(x), lw=3, label='Sine')
ax.plot(x, np.cos(x), lw=3, label='Cosine')
# Set up grid, legend, and limits
ax.grid(True)
ax.legend(frameon=False)
ax.axis('equal')
ax.set_xlim(0, 3 * np.pi);
###Output
_____no_output_____
###Markdown
There are a couple changes we might like to make. First, it's more natural for this data to space the ticks and grid lines in multiples of $\pi$. We can do this by setting a ``MultipleLocator``, which locates ticks at a multiple of the number you provide. For good measure, we'll add both major and minor ticks in multiples of $\pi/4$:
###Code
ax.xaxis.set_major_locator(plt.MultipleLocator(np.pi / 2))
ax.xaxis.set_minor_locator(plt.MultipleLocator(np.pi / 4))
fig
###Output
_____no_output_____
###Markdown
But now these tick labels look a little bit silly: we can see that they are multiples of $\pi$, but the decimal representation does not immediately convey this.To fix this, we can change the tick formatter. There's no built-in formatter for what we want to do, so we'll instead use ``plt.FuncFormatter``, which accepts a user-defined function giving fine-grained control over the tick outputs:
###Code
def format_func(value, tick_number):
# find number of multiples of pi/2
N = int(np.round(2 * value / np.pi))
if N == 0:
return "0"
elif N == 1:
return r"$\pi/2$"
elif N == 2:
return r"$\pi$"
elif N % 2 > 0:
return r"${0}\pi/2$".format(N)
else:
return r"${0}\pi$".format(N // 2)
ax.xaxis.set_major_formatter(plt.FuncFormatter(format_func))
fig
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/end_to_end_ml/solutions/prepare_data_babyweight.ipynb | ###Markdown
Prepare babyweight dataset.**Learning Objectives**1. Set up the environment1. Preprocess natality dataset1. Augment natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment the natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student solution notebook. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR PROJECT NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
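###Markdown
Before writing the preprocessing query, it can help to eyeball a few raw rows of the source table. The query below is not part of the original lab; it is just a quick look at the columns we are about to use.
###Code
%%bigquery
-- Preview a few rows of the raw natality data.
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    year,
    month
FROM
    publicdata.samples.natality
LIMIT 5
###Output
_____no_output_____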
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
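If you want to sanity-check the split before creating the tables, a quick aggregation over the hash buckets (not part of the original lab) shows roughly what fraction of rows falls into each bucket:
###Code
%%bigquery
-- Count rows per hash bucket; buckets 0-2 go to train, bucket 3 to eval.
SELECT
    ABS(MOD(hashmonth, 4)) AS bucket,
    COUNT(*) AS num_rows
FROM
    babyweight.babyweight_augmented_data
GROUP BY
    bucket
ORDER BY
    bucket
###Output
_____no_output_____
###Markdown
The train table below keeps buckets 0 through 2 (roughly 75% of the rows), and the eval table keeps bucket 3.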
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
###Markdown
Prepare babyweight dataset**Learning Objectives**1. Setup up the environment1. Preprocess natality dataset1. Augment natality dataset1. Create the train and eval tables in BigQuery1. Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform.In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format.Each learning objective will correspond to a __TODO__ in this student solution notebook. Set up environment variables and load necessary libraries Check that the Google BigQuery library is installed and if not, install it.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
###Output
Collecting google-cloud-bigquery==1.25.0
Downloading google_cloud_bigquery-1.25.0-py2.py3-none-any.whl (169 kB)
|████████████████████████████████| 169 kB 4.8 MB/s eta 0:00:01
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (3.13.0)
Requirement already satisfied: six<2.0.0dev,>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.15.0)
Requirement already satisfied: google-api-core<2.0dev,>=1.15.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.22.1)
Collecting google-resumable-media<0.6dev,>=0.5.0
Downloading google_resumable_media-0.5.1-py2.py3-none-any.whl (38 kB)
Requirement already satisfied: google-auth<2.0dev,>=1.9.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.20.1)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.1.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigquery==1.25.0) (1.3.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->google-cloud-bigquery==1.25.0) (49.6.0.post20200814)
Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.51.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.24.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= 3.5 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.2.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (1.25.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0dev,>=1.15.0->google-cloud-bigquery==1.25.0) (2020.6.20)
Requirement already satisfied: pyasn1>=0.1.3 in /opt/conda/lib/python3.7/site-packages (from rsa<5,>=3.1.4; python_version >= 3.5->google-auth<2.0dev,>=1.9.0->google-cloud-bigquery==1.25.0) (0.4.8)
Installing collected packages: google-resumable-media, google-cloud-bigquery
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
google-cloud-storage 1.30.0 requires google-resumable-media<2.0dev,>=0.6.0, but you'll have google-resumable-media 0.5.1 which is incompatible.
Successfully installed google-cloud-bigquery-1.25.0 google-resumable-media-0.5.1
###Markdown
**Note**: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
###Code
import os
from google.cloud import bigquery
###Output
_____no_output_____
###Markdown
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
###Code
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# TODO: Change environment variables
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT NAME
BUCKET = "BUCKET" # REPLACE WITH YOUR BUCKET NAME, DEFAULT BUCKET WILL BE PROJECT ID
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["BUCKET"] = PROJECT if BUCKET == "BUCKET" else BUCKET # DEFAULT BUCKET WILL BE PROJECT ID
os.environ["REGION"] = REGION
if PROJECT == "cloud-training-demos":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=publicdata&d=samples&t=natality&page=table) to access the dataset.The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. `weight_pounds` is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __babyweight__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
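If you want to eyeball a few raw rows of the natality source table before transforming it, a small query through the BigQuery Python client is enough. The snippet below is a minimal sketch rather than part of the lab flow; it assumes the google-cloud-bigquery package installed above and default credentials pointing at a project with BigQuery access.

```python
from google.cloud import bigquery

client = bigquery.Client()
preview_sql = """
    SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks
    FROM publicdata.samples.natality
    WHERE year > 2000
    LIMIT 5
"""
# to_dataframe() requires pandas; it returns the five sampled rows as a DataFrame
print(client.query(preview_sql).to_dataframe())
```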
###Code
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
###Output
_____no_output_____
###Markdown
Create the training and evaluation data tablesSince there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First, we are going to create a subset of the data limiting our columns to `weight_pounds`, `is_male`, `mother_age`, `plurality`, and `gestation_weeks` as well as some simple filtering and a column to hash on for repeatable splitting.* Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Preprocess and filter datasetWe have some preprocessing and filtering we would like to do to get our data in the right format for training.Preprocessing:* Cast `is_male` from `BOOL` to `STRING`* Cast `plurality` from `INTEGER` to `STRING` where `[1, 2, 3, 4, 5]` becomes `["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]`* Add a `hashmonth` column hashing on `year` and `month`Filtering:* Only want data for years later than `2000`* Only want baby weights greater than `0`* Only want mothers whose age is greater than `0`* Only want plurality to be greater than `0`* Only want the number of weeks of gestation to be greater than `0`
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
###Output
_____no_output_____
###Markdown
Augment dataset to simulate missing dataNow we want to augment our dataset with our simulated babyweight data by setting all gender information to `Unknown` and setting plurality of all non-single births to `Multiple(2+)`.
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
###Output
_____no_output_____
###Markdown
Split augmented dataset into train and eval setsUsing `hashmonth`, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset
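Before the SQL below, here is a minimal Python sketch of why hashing the year-month key and taking a modulo gives a repeatable, roughly 75/25 split. hashlib is only a stand-in for BigQuery's FARM_FINGERPRINT, so the actual bucket assignments differ, but the principle is the same: the same key always lands in the same bucket, no matter how often the split is rerun.

```python
import hashlib

def bucket(year, month, num_buckets=4):
    # Deterministic hash of the year-month key, reduced to one of num_buckets buckets
    key = "{}-{}".format(year, month).encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_buckets

months = [(y, m) for y in range(2001, 2009) for m in range(1, 13)]
train_months = [ym for ym in months if bucket(*ym) < 3]   # ~75% of the year-months
eval_months = [ym for ym in months if bucket(*ym) == 3]   # ~25% of the year-months
print(len(train_months), len(eval_months))  # rerunning always reproduces exactly the same split
```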
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
###Output
_____no_output_____
###Markdown
Split augmented dataset into eval dataset
###Code
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset and training data table.
###Code
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
###Output
_____no_output_____
###Markdown
Export from BigQuery to CSVs in GCSUse the BigQuery Python API to export our train and eval tables to Google Cloud Storage in CSV format to be used later for TensorFlow/Keras training. We'll use the dataset we've been working with above and repeat the process for both the training and evaluation data.
###Code
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
###Output
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_train to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/train*.csv
Exported qwiklabs-gcp-4b437f7e5bfff9dd:babyweight.babyweight_data_eval to gs://qwiklabs-gcp-4b437f7e5bfff9dd/babyweight/data/eval*.csv
###Markdown
Verify CSV creationVerify that we correctly created the CSV files in our bucket.
###Code
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5
%%bash
gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5
###Output
weight_pounds,is_male,mother_age,plurality,gestation_weeks
2.74916440714,false,44,Single(1),30
3.68833364326,true,42,Single(1),31
9.49971886958,false,15,Single(1),46
8.4437046346,Unknown,15,Single(1),31
|
dataset/Betta_Train_GAN.ipynb | ###Markdown
Your First GAN GoalIn this notebook, you're going to create your first generative adversarial network (GAN) for this course! Specifically, you will build and train a GAN that can generate hand-written images of digits (0-9). You will be using PyTorch in this specialization, so if you're not familiar with this framework, you may find the [PyTorch documentation](https://pytorch.org/docs/stable/index.html) useful. The hints will also often include links to relevant documentation. Learning Objectives1. Build the generator and discriminator components of a GAN from scratch.2. Create generator and discriminator loss functions.3. Train your GAN and visualize the generated images. Getting StartedYou will begin by importing some useful packages and the dataset you will use to build and train your GAN. You are also provided with a visualizer function to help you investigate the images your GAN will create.
###Code
import calendar
import time
import os
import torch
from torch import nn
from tqdm.auto import tqdm
import torchvision
from torchvision import transforms
from torchvision.utils import save_image
from torch.utils.data import DataLoader
from Dataset import BettaDataset # dataset
torch.manual_seed(0) # Set for testing purposes, please do not change!
def show_tensor_images(image_tensor, num_images=25, size=(3, 60, 60)):
'''
Function for visualizing images: Given a tensor of images, the number of images, and
the size per image, saves the batch as a uniform grid image (a timestamped .jpg) in the betta_60_generated directory.
'''
timestamp = calendar.timegm(time.gmtime())
image_unflat = image_tensor.detach().cpu().view(-1, *size)
save_image(image_unflat, 'betta_60_generated' + os.sep +'gen_' + str(timestamp) + '.jpg')
###Output
_____no_output_____
###Markdown
MNIST DatasetThe training images your discriminator will be using is from a dataset called [MNIST](http://yann.lecun.com/exdb/mnist/). It contains 60,000 images of handwritten digits, from 0 to 9, like these:You may notice that the images are quite pixelated -- this is because they are all only 28 x 28! The small size of its images makes MNIST ideal for simple training. Additionally, these images are also in black-and-white so only one dimension, or "color channel", is needed to represent them (more on this later in the course). TensorYou will represent the data using [tensors](https://pytorch.org/docs/stable/tensors.html). Tensors are a generalization of matrices: for example, a stack of three matrices with the amounts of red, green, and blue at different locations in a 64 x 64 pixel image is a tensor with the shape 3 x 64 x 64.Tensors are easy to manipulate and supported by [PyTorch](https://pytorch.org/), the machine learning library you will be using. Feel free to explore them more, but you can imagine these as multi-dimensional matrices or vectors! BatchesWhile you could train your model after generating one image, it is extremely inefficient and leads to less stable training. In GANs, and in machine learning in general, you will process multiple images per training step. These are called batches.This means that your generator will generate an entire batch of images and receive the discriminator's feedback on each before updating the model. The same goes for the discriminator, it will calculate its loss on the entire batch of generated images as well as on the reals before the model is updated. GeneratorThe first step is to build the generator component.You will start by creating a function to make a single layer/block for the generator's neural network. Each block should include a [linear transformation](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) to map to another shape, a [batch normalization](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html) for stabilization, and finally a non-linear activation function (you use a [ReLU here](https://pytorch.org/docs/master/generated/torch.nn.ReLU.html)) so the output can be transformed in complex ways. You will learn more about activations and batch normalization later in the course.
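As a quick aside before building the block, here is a tiny sketch of the tensor and batch shapes discussed above, using this notebook's 3 x 60 x 60 Betta images; the batch size of 128 is just an example value.

```python
import torch

single_image = torch.randn(3, 60, 60)          # (channels, height, width) for one RGB image
batch_of_images = torch.randn(128, 3, 60, 60)  # a batch adds a leading batch dimension
flattened = batch_of_images.view(128, -1)      # each image becomes a vector of 3 * 60 * 60 = 10800 values
print(single_image.shape, batch_of_images.shape, flattened.shape)
```

The flattened form is what the fully connected blocks in this notebook consume and produce.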
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_generator_block
def get_generator_block(input_dim, output_dim):
'''
Function for returning a block of the generator's neural network
given input and output dimensions.
Parameters:
input_dim: the dimension of the input vector, a scalar
output_dim: the dimension of the output vector, a scalar
Returns:
a generator neural network layer, with a linear transformation
followed by a batch normalization and then a relu activation
'''
return nn.Sequential(
nn.Linear(input_dim, output_dim),
nn.BatchNorm1d(output_dim),
nn.ReLU(inplace=True),
)
# Verify the generator block function
def test_gen_block(in_features, out_features, num_test=1000):
block = get_generator_block(in_features, out_features)
# Check the three parts
assert len(block) == 3
assert type(block[0]) == nn.Linear
assert type(block[1]) == nn.BatchNorm1d
assert type(block[2]) == nn.ReLU
# Check the output shape
test_input = torch.randn(num_test, in_features)
test_output = block(test_input)
assert tuple(test_output.shape) == (num_test, out_features)
assert test_output.std() > 0.55
assert test_output.std() < 0.65
test_gen_block(25, 12)
test_gen_block(15, 28)
print("Success!")
###Output
Success!
###Markdown
Now you can build the generator class. It will take 3 values:* The noise vector dimension* The image dimension* The initial hidden dimensionUsing these values, the generator will build a neural network from these blocks (in the version below, six hidden blocks followed by a final linear layer and a sigmoid). Beginning with the noise vector, the generator will apply non-linear transformations via the block function until the tensor is mapped to the size of the image to be outputted (the same size as the flattened real images). You will need to fill in the code for the final layer since it is different from the others. The final layer does not need a normalization or activation function, but does need to be scaled with a [sigmoid function](https://pytorch.org/docs/master/generated/torch.nn.Sigmoid.html). Finally, you are given a forward pass function that takes in a noise vector and generates an image of the output dimension using your neural network.Optional hints for Generator1. The output size of the final linear transformation should be im_dim, but remember you need to scale the outputs between 0 and 1 using the sigmoid function.2. [nn.Linear](https://pytorch.org/docs/master/generated/torch.nn.Linear.html) and [nn.Sigmoid](https://pytorch.org/docs/master/generated/torch.nn.Sigmoid.html) will be useful here.
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: Generator
class Generator(nn.Module):
'''
Generator Class
Values:
z_dim: the dimension of the noise vector, a scalar
im_dim: the dimension of the images, fitted for the dataset used, a scalar
(Betta images are 3 x 60 x 60 = 10800, so that is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, z_dim=10, im_dim=3600 * 3, hidden_dim=256):
super(Generator, self).__init__()
# Build the neural network
self.gen = nn.Sequential(
get_generator_block(z_dim, hidden_dim),
get_generator_block(hidden_dim * (2**0), hidden_dim * (2**1)),
get_generator_block(hidden_dim * (2**1), hidden_dim * (2**2)),
get_generator_block(hidden_dim * (2**2), hidden_dim * (2**3)),
get_generator_block(hidden_dim * (2**3), hidden_dim * (2**4)),
get_generator_block(hidden_dim * (2**4), hidden_dim * (2**5)),
nn.Linear(hidden_dim * (2**5), im_dim),
nn.Sigmoid()
)
def forward(self, noise):
'''
Function for completing a forward pass of the generator: Given a noise tensor,
returns generated images.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
'''
return self.gen(noise)
# Needed for grading
def get_gen(self):
'''
Returns:
the sequential model
'''
return self.gen
# Verify the generator class
def test_generator(z_dim, im_dim, hidden_dim, num_test=10000):
gen = Generator(z_dim, im_dim, hidden_dim).get_gen()
# Check there are eight modules in the sequential part
assert len(gen) == 8
assert str(gen.__getitem__(6)).replace(' ', '') == f'Linear(in_features={hidden_dim * (2**5)},out_features={im_dim},bias=True)'
assert str(gen.__getitem__(7)).replace(' ', '') == 'Sigmoid()'
test_input = torch.randn(num_test, z_dim)
test_output = gen(test_input)
# Check that the output shape is correct
assert tuple(test_output.shape) == (num_test, im_dim)
assert test_output.max() < 1, "Make sure to use a sigmoid"
assert test_output.min() > 0, "Make sure to use a sigmoid"
assert test_output.std() > 0.05, "Don't use batchnorm here"
assert test_output.std() < 0.15, "Don't use batchnorm here"
test_generator(5, 10, 20)
test_generator(20, 8, 24)
print("Success!")
###Output
Success!
###Markdown
NoiseTo be able to use your generator, you will need to be able to create noise vectors. The noise vector z has the important role of making sure the images generated from the same class don't all look the same -- think of it as a random seed. You will generate it randomly using PyTorch by sampling random numbers from the normal distribution. Since multiple images will be processed per pass, you will generate all the noise vectors at once.Note that whenever you create a new tensor using torch.ones, torch.zeros, or torch.randn, you either need to create it on the target device, e.g. `torch.ones(3, 3, device=device)`, or move it onto the target device using `torch.ones(3, 3).to(device)`. You do not need to do this if you're creating a tensor by manipulating another tensor or by using a variation that defaults the device to the input, such as `torch.ones_like`. In general, use `torch.ones_like` and `torch.zeros_like` instead of `torch.ones` or `torch.zeros` where possible.Optional hint for get_noise1. You will probably find [torch.randn](https://pytorch.org/docs/master/generated/torch.randn.html) useful here.
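Here is a small sketch of the device point above: `torch.ones_like` and `torch.zeros_like` inherit the shape, dtype, and device of their input, so they cannot cause a device mismatch. The device is set to 'cpu' purely for illustration.

```python
import torch

device = 'cpu'  # or 'cuda' if a GPU is available
preds = torch.randn(4, 1, device=device)
real_labels = torch.ones_like(preds)    # same shape, dtype, and device as preds
fake_labels = torch.zeros_like(preds)   # equivalent to torch.zeros(4, 1, device=preds.device)
print(real_labels.device == preds.device)  # True
```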
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_noise
def get_noise(n_samples, z_dim, device='cpu'):
'''
Function for creating noise vectors: Given the dimensions (n_samples, z_dim),
creates a tensor of that shape filled with random numbers from the normal distribution.
Parameters:
n_samples: the number of samples to generate, a scalar
z_dim: the dimension of the noise vector, a scalar
device: the device type
'''
return torch.randn(n_samples,z_dim,device=device)
# Verify the noise vector function
def test_get_noise(n_samples, z_dim, device='cpu'):
noise = get_noise(n_samples, z_dim, device)
# Make sure a normal distribution was used
assert tuple(noise.shape) == (n_samples, z_dim)
assert torch.abs(noise.std() - torch.tensor(1.0)) < 0.01
assert str(noise.device).startswith(device)
test_get_noise(1000, 100, 'cpu')
if torch.cuda.is_available():
test_get_noise(1000, 32, 'cuda')
print("Success!")
###Output
Success!
###Markdown
DiscriminatorThe second component that you need to construct is the discriminator. As with the generator component, you will start by creating a function that builds a neural network block for the discriminator.*Note: You use leaky ReLUs to prevent the "dying ReLU" problem, which refers to the phenomenon where the parameters stop changing due to consistently negative values passed to a ReLU, which result in a zero gradient. You will learn more about this in the following lectures!* Rectified Linear Unit (ReLU) | Leaky ReLU:-------------------------:|:-------------------------: | 
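A small numerical sketch of the difference, using the 0.2 negative slope you will use in the block below:

```python
import torch
from torch import nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.0])
print(nn.ReLU()(x))          # [0, 0, 0, 1]: every negative input is zeroed out
print(nn.LeakyReLU(0.2)(x))  # [-0.4, -0.1, 0, 1]: negatives keep a small, non-zero slope
```

Because the leaky version never has a completely flat region for negative inputs, gradients can still flow there.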
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_discriminator_block
def get_discriminator_block(input_dim, output_dim):
'''
Discriminator Block
Function for returning a neural network of the discriminator given input and output dimensions.
Parameters:
input_dim: the dimension of the input vector, a scalar
output_dim: the dimension of the output vector, a scalar
Returns:
a discriminator neural network layer, with a linear transformation
followed by an nn.LeakyReLU activation with negative slope of 0.2
(https://pytorch.org/docs/master/generated/torch.nn.LeakyReLU.html)
'''
return nn.Sequential(
nn.Linear(input_dim, output_dim), #Layer 1
nn.LeakyReLU(0.2, inplace=True)
)
# Verify the discriminator block function
def test_disc_block(in_features, out_features, num_test=10000):
block = get_discriminator_block(in_features, out_features)
# Check there are two parts
assert len(block) == 2
test_input = torch.randn(num_test, in_features)
test_output = block(test_input)
# Check that the shape is right
assert tuple(test_output.shape) == (num_test, out_features)
# Check that the LeakyReLU slope is about 0.2
assert -test_output.min() / test_output.max() > 0.1
assert -test_output.min() / test_output.max() < 0.3
assert test_output.std() > 0.3
assert test_output.std() < 0.5
assert str(block.__getitem__(0)).replace(' ', '') == f'Linear(in_features={in_features},out_features={out_features},bias=True)'
assert str(block.__getitem__(1)).replace(' ', '').replace(',inplace=True', '') == 'LeakyReLU(negative_slope=0.2)'
test_disc_block(25, 12)
test_disc_block(15, 28)
print("Success!")
###Output
Success!
###Markdown
Now you can use these blocks to make a discriminator! The discriminator class holds 2 values:* The image dimension* The hidden dimensionThe discriminator will build a neural network from these blocks (in the version below, eight blocks followed by a final linear layer). It will start with the image tensor and transform it until it returns a single number (1-dimension tensor) output. This output classifies whether an image is fake or real. Note that you do not need a sigmoid after the output layer since it is included in the loss function. Finally, to use your discriminator's neural network you are given a forward pass function that takes in an image tensor to be classified.
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: Discriminator
class Discriminator(nn.Module):
'''
Discriminator Class
Values:
im_dim: the dimension of the images, fitted for the dataset used, a scalar
(Betta images are 3 x 60 x 60 = 10800, so that is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, im_dim=3600 * 3, hidden_dim=64):
super(Discriminator, self).__init__()
self.disc = nn.Sequential(
get_discriminator_block(im_dim, hidden_dim * (2**7)),
get_discriminator_block(hidden_dim * (2**7), hidden_dim * (2**6)),
get_discriminator_block(hidden_dim * (2**6), hidden_dim * (2**5)),
get_discriminator_block(hidden_dim * (2**5), hidden_dim * (2**4)),
get_discriminator_block(hidden_dim * (2**4), hidden_dim * (2**3)),
get_discriminator_block(hidden_dim * (2**3), hidden_dim * (2**2)),
get_discriminator_block(hidden_dim * (2**2), hidden_dim * (2**1)),
get_discriminator_block(hidden_dim * (2**1), hidden_dim),
nn.Linear(hidden_dim, 1)
)
def forward(self, image):
'''
Function for completing a forward pass of the discriminator: Given an image tensor,
returns a 1-dimension tensor representing fake/real.
Parameters:
image: a flattened image tensor with dimension (im_dim)
'''
return self.disc(image)
# Needed for grading
def get_disc(self):
'''
Returns:
the sequential model
'''
return self.disc
# Verify the discriminator class
def test_discriminator(z_dim, hidden_dim, num_test=100):
disc = Discriminator(z_dim, hidden_dim).get_disc()
# Check there are nine parts: eight discriminator blocks and a final linear layer
assert len(disc) == 9
assert type(disc.__getitem__(8)) == nn.Linear
# Check the linear layer is correct
test_input = torch.randn(num_test, z_dim)
test_output = disc(test_input)
assert tuple(test_output.shape) == (num_test, 1)
test_discriminator(5, 10)
test_discriminator(20, 8)
print("Success!")
###Output
Success!
###Markdown
TrainingNow you can put it all together!First, you will set your parameters: * criterion: the loss function * n_epochs: the number of times you iterate through the entire dataset when training * z_dim: the dimension of the noise vector * display_step: how often to display/visualize the images * batch_size: the number of images per forward/backward pass * lr: the learning rate * device: the device type; the code below uses 'cpu', so switch it to 'cuda' if a GPU is availableNext, you will load the Betta image dataset as tensors using a dataloader.
###Code
# Set your parameters
criterion = nn.BCEWithLogitsLoss()
n_epochs = 999999
z_dim = 64
display_step = 100
batch_size = 128
lr = 0.0001
device = 'cpu'
# Load the Betta image dataset as tensors
dataloader = DataLoader(
BettaDataset(),
batch_size=batch_size,
shuffle=True)
###Output
Dataset contains 1356 betta images.
###Markdown
Now, you can initialize your generator, discriminator, and optimizers. Note that each optimizer only takes the parameters of one particular model, since we want each optimizer to optimize only one of the models.
###Code
gen = Generator(z_dim).to(device)
print("done")
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr)
disc = Discriminator().to(device)
print("done")
disc_opt = torch.optim.Adam(disc.parameters(), lr=lr)
###Output
done
done
###Markdown
Before you train your GAN, you will need to create functions to calculate the discriminator's loss and the generator's loss. This is how the discriminator and generator will know how they are doing and improve themselves. Since the generator is needed when calculating the discriminator's loss, you will need to call .detach() on the generator result to ensure that only the discriminator is updated!Remember that you have already defined a loss function earlier (`criterion`) and you are encouraged to use `torch.ones_like` and `torch.zeros_like` instead of `torch.ones` or `torch.zeros`. If you use `torch.ones` or `torch.zeros`, you'll need to pass `device=device` to them.
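A minimal, standalone sketch of what .detach() does to gradient flow, separate from the graded functions below:

```python
import torch

x = torch.randn(3, requires_grad=True)

# Without detach, gradients flow back to x
(x * 2).sum().backward()
print(x.grad)                  # tensor([2., 2., 2.])

# With detach, the result is cut out of the autograd graph
detached = (x * 2).detach()
print(detached.requires_grad)  # False -> no gradient would reach x through this tensor
```

This is exactly why the discriminator loss uses the detached fake images: the discriminator update should not push gradients into the generator.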
###Code
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_disc_loss
def get_disc_loss(gen, disc, criterion, real, num_images, z_dim, device):
'''
Return the loss of the discriminator given inputs.
Parameters:
gen: the generator model, which returns an image given z-dimensional noise
disc: the discriminator model, which returns a single-dimensional prediction of real/fake
criterion: the loss function, which should be used to compare
the discriminator's predictions to the ground truth reality of the images
(e.g. fake = 0, real = 1)
real: a batch of real images
num_images: the number of images the generator should produce,
which is also the length of the real images
z_dim: the dimension of the noise vector, a scalar
device: the device type
Returns:
disc_loss: a torch scalar loss value for the current batch
'''
noise_vector = get_noise(num_images, z_dim, device=device)
fake_image = gen(noise_vector)
disc_fake_pred = disc(fake_image.detach())
disc_fake_loss = criterion(disc_fake_pred, torch.zeros_like(disc_fake_pred))
disc_real_pred = disc(real)
disc_real_loss = criterion(disc_real_pred, torch.ones_like(disc_real_pred))
disc_loss = (disc_fake_loss + disc_real_loss) / 2
return disc_loss
def test_disc_reasonable(num_images=10):
z_dim = 64
gen = torch.zeros_like
disc = nn.Identity()
criterion = torch.mul # Multiply
real = torch.ones(num_images, 1)
disc_loss = get_disc_loss(gen, disc, criterion, real, num_images, z_dim, 'cpu')
assert tuple(disc_loss.shape) == (num_images, z_dim)
assert torch.all(torch.abs(disc_loss - 0.5) < 1e-5)
gen = torch.ones_like
disc = nn.Identity()
criterion = torch.mul # Multiply
real = torch.zeros(num_images, 1)
assert torch.all(torch.abs(get_disc_loss(gen, disc, criterion, real, num_images, z_dim, 'cpu')) < 1e-5)
def test_disc_loss(max_tests = 10):
z_dim = 64
gen = Generator(z_dim).to(device)
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr)
disc = Discriminator().to(device)
disc_opt = torch.optim.Adam(disc.parameters(), lr=lr)
num_steps = 0
for real, _ in dataloader:
cur_batch_size = len(real)
real = real.view(cur_batch_size, -1).to(device)
### Update discriminator ###
# Zero out the gradient before backpropagation
disc_opt.zero_grad()
# Calculate discriminator loss
disc_loss = get_disc_loss(gen, disc, criterion, real, cur_batch_size, z_dim, device)
print((disc_loss - 0.68).abs())
assert (disc_loss - 0.68).abs() < 0.05
# Update gradients
disc_loss.backward(retain_graph=True)
# Check that they detached correctly
assert gen.gen[0][0].weight.grad is None
# Update optimizer
old_weight = disc.disc[0][0].weight.data.clone()
disc_opt.step()
new_weight = disc.disc[0][0].weight.data
# Check that some discriminator weights changed
assert not torch.all(torch.eq(old_weight, new_weight))
num_steps += 1
if num_steps >= max_tests:
break
test_disc_reasonable()
test_disc_loss()
print("Success!")
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_gen_loss
def get_gen_loss(gen, disc, criterion, num_images, z_dim, device):
'''
Return the loss of the generator given inputs.
Parameters:
gen: the generator model, which returns an image given z-dimensional noise
disc: the discriminator model, which returns a single-dimensional prediction of real/fake
criterion: the loss function, which should be used to compare
the discriminator's predictions to the ground truth reality of the images
(e.g. fake = 0, real = 1)
num_images: the number of images the generator should produce,
which is also the length of the real images
z_dim: the dimension of the noise vector, a scalar
device: the device type
Returns:
gen_loss: a torch scalar loss value for the current batch
'''
fake_noise = get_noise(num_images, z_dim, device=device)
fake = gen(fake_noise)
disc_fake_pred = disc(fake)
gen_loss = criterion(disc_fake_pred, torch.ones_like(disc_fake_pred))
return gen_loss
def test_gen_reasonable(num_images=10):
z_dim = 64
gen = torch.zeros_like
disc = nn.Identity()
criterion = torch.mul # Multiply
gen_loss_tensor = get_gen_loss(gen, disc, criterion, num_images, z_dim, 'cpu')
assert torch.all(torch.abs(gen_loss_tensor) < 1e-5)
#Verify shape. Related to gen_noise parametrization
assert tuple(gen_loss_tensor.shape) == (num_images, z_dim)
gen = torch.ones_like
disc = nn.Identity()
criterion = torch.mul # Multiply
real = torch.zeros(num_images, 1)
gen_loss_tensor = get_gen_loss(gen, disc, criterion, num_images, z_dim, 'cpu')
assert torch.all(torch.abs(gen_loss_tensor - 1) < 1e-5)
#Verify shape. Related to gen_noise parametrization
assert tuple(gen_loss_tensor.shape) == (num_images, z_dim)
def test_gen_loss(num_images):
z_dim = 64
gen = Generator(z_dim).to(device)
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr)
disc = Discriminator().to(device)
disc_opt = torch.optim.Adam(disc.parameters(), lr=lr)
gen_loss = get_gen_loss(gen, disc, criterion, num_images, z_dim, device)
# Check that the loss is reasonable
assert (gen_loss - 0.7).abs() < 0.1
gen_loss.backward()
old_weight = gen.gen[0][0].weight.clone()
gen_opt.step()
new_weight = gen.gen[0][0].weight
assert not torch.all(torch.eq(old_weight, new_weight))
test_gen_reasonable(10)
test_gen_loss(18)
print("Success!")
###Output
Success!
###Markdown
Finally, you can put everything together! For each epoch, you will process the entire dataset in batches. For every batch, you will need to update the discriminator and generator using their loss. Batches are sets of images that will be predicted on before the loss functions are calculated (instead of calculating the loss function after each image). Note that you may see a loss to be greater than 1, this is okay since binary cross entropy loss can be any positive number for a sufficiently confident wrong guess. It’s also often the case that the discriminator will outperform the generator, especially at the start, because its job is easier. It's important that neither one gets too good (that is, near-perfect accuracy), which would cause the entire model to stop learning. Balancing the two models is actually remarkably hard to do in a standard GAN and something you will see more of in later lectures and assignments.After you've submitted a working version with the original architecture, feel free to play around with the architecture if you want to see how different architectural choices can lead to better or worse GANs. For example, consider changing the size of the hidden dimension, or making the networks shallower or deeper by changing the number of layers.<!-- In addition, be warned that this runs very slowly on a CPU. One way to run this more quickly is to use Google Colab: 1. Download the .ipynb2. Upload it to Google Drive and open it with Google Colab3. Make the runtime type GPU (under “Runtime” -> “Change runtime type” -> Select “GPU” from the dropdown)4. Replace `device = "cpu"` with `device = "cuda"`5. Make sure your `get_noise` function uses the right device -->But remember, don’t expect anything spectacular: this is only the first lesson. The results will get better with later lessons as you learn methods to help keep your generator and discriminator at similar levels. You should roughly expect to see this progression. On a GPU, this should take about 15 seconds per 500 steps, on average, while on CPU it will take roughly 1.5 minutes:
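As a quick sanity check of the point about losses above 1, here is what BCEWithLogitsLoss returns for an illustrative, confidently wrong prediction (a logit of 3.0 for an image whose true label is 0):

```python
import torch
from torch import nn

bce = nn.BCEWithLogitsLoss()
confidently_wrong = bce(torch.tensor([3.0]), torch.tensor([0.0]))
print(confidently_wrong)  # roughly 3.05 -- a single confident wrong guess already exceeds 1
```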
###Code
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION:
cur_step = 0
mean_generator_loss = 0
mean_discriminator_loss = 0
test_generator = True # Whether the generator should be tested
gen_loss = False
error = False
for epoch in range(n_epochs):
# Dataloader returns the batches
for real, _ in tqdm(dataloader):
cur_batch_size = len(real)
# Flatten the batch of real images from the dataset
real = real.view(cur_batch_size, -1).to(device)
### Update discriminator ###
# Zero out the gradients before backpropagation
disc_opt.zero_grad()
# Calculate discriminator loss
disc_loss = get_disc_loss(gen, disc, criterion, real, cur_batch_size, z_dim, device)
# Update gradients
disc_loss.backward(retain_graph=True)
# Update optimizer
disc_opt.step()
# For testing purposes, to keep track of the generator weights
if test_generator:
old_generator_weights = gen.gen[0][0].weight.detach().clone()
# Update generator #
gen_opt.zero_grad()
gen_loss = get_gen_loss(gen, disc, criterion, cur_batch_size, z_dim, device)
gen_loss.backward()
gen_opt.step()
# For testing purposes, to check that your code changes the generator weights
if test_generator:
try:
assert lr > 0.0000002 or (gen.gen[0][0].weight.grad.abs().max() < 0.0005 and epoch == 0)
assert torch.any(gen.gen[0][0].weight.detach().clone() != old_generator_weights)
except:
error = True
print("Runtime tests have failed")
# Keep track of the average discriminator loss
mean_discriminator_loss += disc_loss.item() / display_step
# Keep track of the average generator loss
mean_generator_loss += gen_loss.item() / display_step
### Visualization code ###
if cur_step % display_step == 0 and cur_step > 0:
print(f"Step {cur_step}: Generator loss: {mean_generator_loss}, discriminator loss: {mean_discriminator_loss}")
fake_noise = get_noise(cur_batch_size, z_dim, device=device)
fake = gen(fake_noise)
show_tensor_images(fake)
# show_tensor_images(real)
mean_generator_loss = 0
mean_discriminator_loss = 0
cur_step += 1
###Output
_____no_output_____ |
Chapter 7 - Data Preparation and Visualization/PCA.ipynb | ###Markdown
Try different method
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis  # pca_vals and df are assumed to come from earlier cells
lda = LinearDiscriminantAnalysis()
lda.fit(pca_vals, df.TOTAL_PAID)
plt.plot(np.cumsum(lda.explained_variance_ratio_));
###Output
_____no_output_____
###Markdown
Look at payment columns
###Code
import pandas as pd
import seaborn as sns
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

pmt = df[['MEDREIMB_IP', 'BENRES_IP', 'PPPYMT_IP', 'MEDREIMB_OP', 'BENRES_OP',
'PPPYMT_OP', 'MEDREIMB_CAR', 'BENRES_CAR', 'PPPYMT_CAR']]
pmt_norm = pd.DataFrame(scale(pmt))
pmt_norm.columns = pmt.columns
#pca_pmt = PCA(n_components=5)
pca_pmt = PCA()
pca_pmt.fit(pmt_norm)
pd.DataFrame(pca_pmt.transform(pmt_norm))
pmt_comp = pca_pmt.components_
pmt_comp_df = pd.DataFrame(pmt_comp)
pmt_comp_df.columns = pmt_norm.columns
pmt_comp_df
plt.plot(np.cumsum(pca_pmt.explained_variance_));
sns.heatmap(pmt_comp, annot=True);
###Output
_____no_output_____ |
examples/getting-started-movielens/inference-HugeCTR/Training-with-HugeCTR.ipynb | ###Markdown
OverviewIn this notebook, we want to provide an overview what HugeCTR framework is, its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt NVTabular workflow to provide input files to HugeCTR* Define HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy HugeCTR to Triton Inference Server Why using HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and estimate Click-Through Rates (CTRs).HugeCTR offers multiple advantages to train deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written C++. We experienced up to 10x speed up. HugeCTR on a NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the architecture Deep Learning Recommender Model (DLRM) developed by Facebook.2. **Scale**: HugeCTR supports model parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy-to-use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender systems architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTRHugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables with single nodes that don’t fit within the GPU or CPU memory (only required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader.* Supported data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabularWe use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import shutil
import gc
from os import path
import nvtabular as nvt
import numpy as np
from nvtabular.utils import download_file
# Get dataframe library - cudf or pandas
from nvtabular.dispatch import get_lib
df_lib = get_lib()
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = "/model/data/"
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", os.path.join(BASE_DIR, "ml-25m.zip")
)
###Output
_____no_output_____
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = df_lib.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and test datasets. We use a simple random split.
###Code
ratings = ratings.drop("timestamp", axis=1)
# shuffle the dataset
ratings = ratings.sample(len(ratings), replace=False)
# split the train_df as training and validation data sets.
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + "train.parquet")
valid.to_parquet(BASE_DIR + "valid.parquet")
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in that example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ["userId", "movieId"]
LABEL_COLUMNS = ["rating"]
###Output
_____no_output_____
###Markdown
Let's add Categorify op for our categorical features, userId, movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale from 1 to 5. We want to predict a binary target, where 1 corresponds to ratings >= 4 and 0 to ratings <= 3. We use the LambdaOp for this.
###Code
ratings = nvt.ColumnSelector(["rating"]) >> nvt.ops.LambdaOp(lambda col: (col > 3).astype("int8"))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features + ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize the NVTabular Datasets, using the part_size parameter of nvt.Dataset, which defines how much data is read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + "train.parquet", part_size="100MB")
valid_dataset = nvt.Dataset(BASE_DIR + "valid.parquet", part_size="100MB")
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 623 ms, sys: 219 ms, total: 842 ms
Wall time: 847 ms
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow.
###Code
dict_dtypes = {}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
train_dir = os.path.join(BASE_DIR, "train")
valid_dir = os.path.join(BASE_DIR, "valid")
if path.exists(train_dir):
shutil.rmtree(train_dir)
if path.exists(valid_dir):
shutil.rmtree(valid_dir)
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, `continuous` and which is the `label` columns. NVTabular will write metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(
output_path=BASE_DIR + "train/",
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
workflow.transform(valid_dataset).to_parquet(
output_path=BASE_DIR + "valid/",
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
###Output
_____no_output_____
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA++, we need to define the training pipeline and model architecture and execute it via the commandline. We will use the Python API, which is similar to Keras models. HugeCTR has three main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* DataReader: Specifies the training/evaludation data* Model: Specifies embeddings, and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look on the parameter for the `Solver`. We should be familiar from other frameworks for the hyperparameter.```solver = hugectr.CreateSolver(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used in the first node. GPUs 0 and 1 are used while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - batchsize: Minibatch size used in training- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of bathces in the evaluation dataset.If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely.On the other hand, with num_epochs, HugeCTR stops the evaluation if all the evaluation data is consumed - batchsize_eval: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of bathces in the evaluation dataset- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128,256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD - learning_rate: Learning Rate for optimizer)``` **DataReader**The data reader defines the training and evaluation dataset.```reader = hugectr.DataReaderParams(- data_reader_type: Data format to read- source: The training dataset file list. IMPORTANT: This should be a list- eval_source: The evaluation dataset file list.- check_type: The data error detection mechanism (Sum: Checksum, None: no detection).- slot_size_array: The list of categorical feature cardinalities)``` **Model**We initialize the model with the solver, optimizer and data reader:```model = hugectr.Model(solver, reader, optimizer)```We can add multiple layers to the model with `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. 
For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( label_dim: Number of label columns label_name: Name of label columns in network architecture dense_dim: Number of continuous columns dense_name: Name of contiunous columns in network architecture data_reader_sparse_param_array: Configuration how to read sparse data and its names)```SparseEmbedding:This layer defines embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables workspace_size_per_gpu_in_mb: Maximum embedding table size in MB embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op sparse_embedding_name: Layer name bottom_name: Input layer names optimizer: Optimizer to use)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type additional parameter can be defined)```This is only a short introduction in the API. You can read more in the official docs: [Python Interface](https://github.com/NVIDIA/HugeCTR/blob/master/docs/python_interface.md) and [Layer Book](https://github.com/NVIDIA/HugeCTR/blob/master/docs/hugectr_layer_book.md) Let's define our modelWe walked through the documentation, but it is useful to understand the API. Finally, we can define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'userId': (162542, 512), 'movieId': (56586, 512)}
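The first element of each tuple above is the cardinality of the feature, which is what goes into `slot_size_array` in the model below. A minimal sketch of deriving it, assuming the column order matches `CATEGORICAL_COLUMNS` (the order used when the parquet files were written):

```python
# Cardinalities in the same order as CATEGORICAL_COLUMNS = ["userId", "movieId"]
slot_size_array = [embeddings[col][0] for col in CATEGORICAL_COLUMNS]
print(slot_size_array)  # [162542, 56586]
```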
###Markdown
Let's clear the directory and create the output folders
###Code
!rm -r /model/movielens_hugectr
!mkdir -p /model/movielens_hugectr/1
###Output
rm: cannot remove '/model/movielens_hugectr': No such file or directory
###Markdown
We use `graph_to_json` to convert the model to a JSON configuration, required for the inference.
###Code
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
solver = hugectr.CreateSolver(
vvgpu=[[0]],
batchsize=2048,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
)
optimizer = hugectr.CreateOptimizer(optimizer_type=hugectr.Optimizer_t.Adam)
reader = hugectr.DataReaderParams(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source=["/model/data/train/_file_list.txt"],
eval_source="/model/data/valid/_file_list.txt",
check_type=hugectr.Check_t.Non,
slot_size_array=[162542, 56586],
)
model = hugectr.Model(solver, reader, optimizer)
model.add(
hugectr.Input(
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam("data1", nnz_per_slot=2, is_fixed_length=True, slot_num=2)
],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.LocalizedSlotSparseEmbeddingHash,
workspace_size_per_gpu_in_mb=100,
embedding_vec_size=16,
combiner="sum",
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
optimizer=optimizer,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.Reshape,
bottom_names=["sparse_embedding1"],
top_names=["reshape1"],
leading_dim=32,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["reshape1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
model.fit(max_iter=2000, display=100, eval_interval=200, snapshot=1900)
model.graph_to_json(graph_config_file="/model/movielens_hugectr/1/movielens.json")
!python model.py
###Output
====================================================Model Init=====================================================
[26d18h35m13s][HUGECTR][INFO]: Global seed is 3848625588
[26d18h35m15s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-SXM2-32GB
[26d18h35m15s][HUGECTR][INFO]: num of DataReader workers: 1
[26d18h35m15s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=1638400
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup Start
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup End
===================================================Model Compile===================================================
[26d18h35m17s][HUGECTR][INFO]: gpu0 start to init embedding
[26d18h35m17s][HUGECTR][INFO]: gpu0 init embedding done
===================================================Model Summary===================================================
Label Dense Sparse
label dense data1
(None, 1) (None, 0)
------------------------------------------------------------------------------------------------------------------
Layer Type Input Name Output Name Output Shape
------------------------------------------------------------------------------------------------------------------
LocalizedSlotSparseEmbeddingHash data1 sparse_embedding1 (None, 2, 16)
Reshape sparse_embedding1 reshape1 (None, 32)
InnerProduct reshape1 fc1 (None, 128)
ReLU fc1 relu1 (None, 128)
InnerProduct relu1 fc2 (None, 128)
ReLU fc2 relu2 (None, 128)
InnerProduct relu2 fc3 (None, 1)
BinaryCrossEntropyLoss fc3,label loss
------------------------------------------------------------------------------------------------------------------
=====================================================Model Fit=====================================================
[26d18h35m17s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[26d18h35m17s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[26d18h35m17s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[26d18h35m17s][HUGECTR][INFO]: Sparse embedding trainable: 1, dense network trainable: 1
[26d18h35m17s][HUGECTR][INFO]: Use mixed precision: 0, scaler: 1.000000, use cuda graph: 1
[26d18h35m17s][HUGECTR][INFO]: lr: 0.001000, warmup_steps: 1, decay_start: 0, decay_steps: 1, decay_power: 2.000000, end_lr: 0.000000
[26d18h35m17s][HUGECTR][INFO]: Training source file: /model/data/train/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Evaluation source file: /model/data/valid/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.136342s Loss: 0.579462 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.135109s Loss: 0.554109 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Evaluation, AUC: 0.745997
[26d18h35m17s][HUGECTR][INFO]: Eval Time for 160 iters: 0.073575s
[26d18h35m17s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.210037s Loss: 0.571327 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.132709s Loss: 0.546585 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.764737
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.070747s
[26d18h35m18s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.216137s Loss: 0.552045 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.133178s Loss: 0.541653 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.774266
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.069966s
[26d18h35m18s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.204785s Loss: 0.524283 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.133213s Loss: 0.530550 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.780663
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081221s
[26d18h35m18s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.216339s Loss: 0.541633 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.142290s Loss: 0.541528 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.786361
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.068574s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.203976s Loss: 0.528578 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.133187s Loss: 0.522433 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.788285
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.076724s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.213124s Loss: 0.524235 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.135103s Loss: 0.513423 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.793324
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081245s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.228339s Loss: 0.504689 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.133944s Loss: 0.515175 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795201
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071934s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.207204s Loss: 0.515042 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.135032s Loss: 0.498440 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795551
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071047s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.209134s Loss: 0.509593 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Dumping sparse weights to files, successful
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m21s][HUGECTR][INFO]: Dumping sparse optimzer states to files, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense optimizer states to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping untrainable weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Save the model graph to /model/movielens_hugectr/1/movielens.json, successful
###Markdown
We trained our model. After training terminates, we can see that multiple `.model` files and folders were generated. We need to move them into the `1` folder under the `movielens_hugectr` folder, which we created earlier. Now we move our saved `.model` files into the `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
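###Markdown
 As a quick sanity check, we can list the target directory to confirm the `.model` files landed where we expect. This is only an illustrative snippet assuming the paths used above; it is not part of the original workflow.
###Code
import os

# List whatever ended up in the Triton model version folder (illustrative check)
model_version_dir = "/model/movielens_hugectr/1/"
print(sorted(os.listdir(model_version_dir)))
###Output
_____no_output_____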
###Markdown
Now we can save our models to be deployed at the inference stage. To do so, we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we also create a `hugectr_params` dictionary and define parameters such as the path from which the `movielens.json` file will be read, `slots` (the number of categorical features), `embedding_vector_size`, `max_nnz`, and `n_outputs` (the number of outputs). The script below creates an ensemble Triton Server model, where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the path to the HugeCTR model that should be served. This path includes the `.model` files.- `name` is the base name of the various Triton models- `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64,
)
###Output
_____no_output_____
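###Markdown
 To see what `export_hugectr_ensemble` produced, a short directory walk over the chosen `output_path` can be helpful, for example to locate the auto-generated `config.pbtxt` files. This is only an illustrative sketch assuming the `/model/models/` path used above; the exact set of generated model folders depends on the NVTabular version.
###Code
import os

# Walk the ensemble output directory and print every generated file
for root, _, files in os.walk("/model/models/"):
    for f in files:
        print(os.path.join(root, f))
###Output
_____no_output_____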
###Markdown
OverviewIn this notebook, we provide an overview of what the HugeCTR framework is and of its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt the NVTabular workflow to provide input files to HugeCTR* Define a HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy the HugeCTR model to Triton Inference Server Why use HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and to estimate Click-Through Rates (CTRs).HugeCTR offers multiple advantages for training deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written in C++. We experienced up to a 10x speed-up. HugeCTR on an NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the Deep Learning Recommendation Model (DLRM) architecture developed by Facebook.2. **Scale**: HugeCTR supports model-parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy to use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender system architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTRHugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables that don't fit within the GPU or CPU memory of a single node (only the required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader* Support for data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabularWe use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import shutil
import gc
import nvtabular as nvt
import numpy as np
from os import path
from sklearn.model_selection import train_test_split
from nvtabular.utils import download_file
# Get dataframe library - cudf or pandas
from nvtabular.dispatch import get_lib
df_lib = get_lib()
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = "/model/data/"
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", os.path.join(BASE_DIR, "ml-25m.zip")
)
###Output
downloading ml-25m.zip: 262MB [03:23, 1.29MB/s]
unzipping files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.34files/s]
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = df_lib.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and test dataset. We use a simple random split.
###Code
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + "train.parquet")
valid.to_parquet(BASE_DIR + "valid.parquet")
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in this example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ["userId", "movieId"]
LABEL_COLUMNS = ["rating"]
###Output
_____no_output_____
###Markdown
Let's add Categorify op for our categorical features, userId, movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale from 1 to 5. We want to predict a binary target, where 1 corresponds to ratings >=4 and 0 to ratings <=3. We use a LambdaOp for this.
###Code
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features + ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize NVTabular Datasets and use the part_size parameter of nvt.Dataset, which defines how much data is read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + "train.parquet", part_size="100MB")
valid_dataset = nvt.Dataset(BASE_DIR + "valid.parquet", part_size="100MB")
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 1.01 s, sys: 315 ms, total: 1.32 s
Wall time: 1.39 s
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and the continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow accordingly.
###Code
dict_dtypes = {}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
train_dir = os.path.join(BASE_DIR, "train")
valid_dir = os.path.join(BASE_DIR, "valid")
if path.exists(train_dir):
shutil.rmtree(train_dir)
if path.exists(valid_dir):
shutil.rmtree(valid_dir)
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, which are `continuous`, and which are the `label` columns. NVTabular will write out metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(
output_path=BASE_DIR + "train/",
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
workflow.transform(valid_dataset).to_parquet(
output_path=BASE_DIR + "valid/",
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
###Output
_____no_output_____
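###Markdown
 Before moving on, it can be useful to peek at what NVTabular wrote out for HugeCTR. A minimal sketch, assuming the output paths above: the train/valid folders should contain parquet partitions plus helper files such as `_file_list.txt` and the metadata that HugeCTR reads.
###Code
import os

# Show the files NVTabular generated for the HugeCTR data reader (illustrative check)
for split in ["train", "valid"]:
    split_dir = os.path.join(BASE_DIR, split)
    print(split, "->", sorted(os.listdir(split_dir)))
###Output
_____no_output_____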
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA C++, we need to define the training pipeline and model architecture and execute it via the command line. We will use the Python API, which is similar to Keras models. HugeCTR has four main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* DataReader: Specifies the training/evaluation data* Model: Specifies embeddings and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look at the parameters for the `Solver`. We should be familiar with the hyperparameters from other frameworks.```solver = hugectr.CreateSolver(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used; in the first node, GPUs 0 and 1 are used, while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - batchsize: Minibatch size used in training- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of batches in the evaluation dataset.If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely.On the other hand, with num_epochs, HugeCTR stops the evaluation if all the evaluation data is consumed - batchsize_eval: Minibatch size used in evaluation- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128, 256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm used to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD - learning_rate: Learning rate for the optimizer)``` **DataReader**The data reader defines the training and evaluation datasets.```reader = hugectr.DataReaderParams(- data_reader_type: Data format to read- source: The training dataset file list. IMPORTANT: This should be a list- eval_source: The evaluation dataset file list.- check_type: The data error detection mechanism (Sum: Checksum, None: no detection).- slot_size_array: The list of categorical feature cardinalities)``` **Model**We initialize the model with the solver, optimizer and data reader:```model = hugectr.Model(solver, reader, optimizer)```We can add multiple layers to the model with the `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. 
For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( label_dim: Number of label columns label_name: Name of the label columns in the network architecture dense_dim: Number of continuous columns dense_name: Name of the continuous columns in the network architecture data_reader_sparse_param_array: Configuration of how to read the sparse data and its names)```SparseEmbedding:This layer defines the embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables workspace_size_per_gpu_in_mb: Maximum embedding table size in MB embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op sparse_embedding_name: Layer name bottom_name: Input layer names optimizer: Optimizer to use)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type, additional parameters can be defined)```This is only a short introduction to the API. You can read more in the official docs: [Python Interface](https://github.com/NVIDIA/HugeCTR/blob/master/docs/python_interface.md) and [Layer Book](https://github.com/NVIDIA/HugeCTR/blob/master/docs/hugectr_layer_book.md) Let's define our modelWe walked through the documentation to get familiar with the API. Now we can finally define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'userId': (162542, 512), 'movieId': (56586, 512)}
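###Markdown
 Rather than hard-coding the cardinalities, we could derive `slot_size_array` directly from the dictionary returned by `get_embedding_sizes`. A minimal sketch (the hard-coded list in `model.py` below stays as-is; this only shows where the numbers come from):
###Code
# The first element of each tuple is the cardinality; keep the order of CATEGORICAL_COLUMNS
slot_size_array = [embeddings[col][0] for col in CATEGORICAL_COLUMNS]
print(slot_size_array)  # [162542, 56586] for userId, movieId
###Output
_____no_output_____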
###Markdown
Let's clear the directory and create the output folders
###Code
!rm -r /model/movielens_hugectr
!mkdir -p /model/movielens_hugectr/1
###Output
rm: cannot remove '/model/movielens_hugectr': No such file or directory
###Markdown
We use `graph_to_json` to convert the model to a JSON configuration, required for the inference.
###Code
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
solver = hugectr.CreateSolver(
vvgpu=[[0]],
batchsize=2048,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
)
optimizer = hugectr.CreateOptimizer(optimizer_type=hugectr.Optimizer_t.Adam)
reader = hugectr.DataReaderParams(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source=["/model/data/train/_file_list.txt"],
eval_source="/model/data/valid/_file_list.txt",
check_type=hugectr.Check_t.Non,
slot_size_array=[162542, 56586],
)
model = hugectr.Model(solver, reader, optimizer)
model.add(
hugectr.Input(
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam("data1", nnz_per_slot=2, is_fixed_length=True, slot_num=2)
],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.LocalizedSlotSparseEmbeddingHash,
workspace_size_per_gpu_in_mb=100,
embedding_vec_size=16,
combiner="sum",
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
optimizer=optimizer,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.Reshape,
bottom_names=["sparse_embedding1"],
top_names=["reshape1"],
leading_dim=32,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["reshape1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
model.fit(max_iter=2000, display=100, eval_interval=200, snapshot=1900)
model.graph_to_json(graph_config_file="/model/movielens_hugectr/1/movielens.json")
!python model.py
###Output
====================================================Model Init=====================================================
[26d18h35m13s][HUGECTR][INFO]: Global seed is 3848625588
[26d18h35m15s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-SXM2-32GB
[26d18h35m15s][HUGECTR][INFO]: num of DataReader workers: 1
[26d18h35m15s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=1638400
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup Start
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup End
===================================================Model Compile===================================================
[26d18h35m17s][HUGECTR][INFO]: gpu0 start to init embedding
[26d18h35m17s][HUGECTR][INFO]: gpu0 init embedding done
===================================================Model Summary===================================================
Label Dense Sparse
label dense data1
(None, 1) (None, 0)
------------------------------------------------------------------------------------------------------------------
Layer Type Input Name Output Name Output Shape
------------------------------------------------------------------------------------------------------------------
LocalizedSlotSparseEmbeddingHash data1 sparse_embedding1 (None, 2, 16)
Reshape sparse_embedding1 reshape1 (None, 32)
InnerProduct reshape1 fc1 (None, 128)
ReLU fc1 relu1 (None, 128)
InnerProduct relu1 fc2 (None, 128)
ReLU fc2 relu2 (None, 128)
InnerProduct relu2 fc3 (None, 1)
BinaryCrossEntropyLoss fc3,label loss
------------------------------------------------------------------------------------------------------------------
=====================================================Model Fit=====================================================
[26d18h35m17s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[26d18h35m17s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[26d18h35m17s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[26d18h35m17s][HUGECTR][INFO]: Sparse embedding trainable: 1, dense network trainable: 1
[26d18h35m17s][HUGECTR][INFO]: Use mixed precision: 0, scaler: 1.000000, use cuda graph: 1
[26d18h35m17s][HUGECTR][INFO]: lr: 0.001000, warmup_steps: 1, decay_start: 0, decay_steps: 1, decay_power: 2.000000, end_lr: 0.000000
[26d18h35m17s][HUGECTR][INFO]: Training source file: /model/data/train/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Evaluation source file: /model/data/valid/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.136342s Loss: 0.579462 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.135109s Loss: 0.554109 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Evaluation, AUC: 0.745997
[26d18h35m17s][HUGECTR][INFO]: Eval Time for 160 iters: 0.073575s
[26d18h35m17s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.210037s Loss: 0.571327 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.132709s Loss: 0.546585 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.764737
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.070747s
[26d18h35m18s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.216137s Loss: 0.552045 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.133178s Loss: 0.541653 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.774266
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.069966s
[26d18h35m18s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.204785s Loss: 0.524283 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.133213s Loss: 0.530550 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.780663
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081221s
[26d18h35m18s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.216339s Loss: 0.541633 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.142290s Loss: 0.541528 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.786361
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.068574s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.203976s Loss: 0.528578 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.133187s Loss: 0.522433 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.788285
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.076724s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.213124s Loss: 0.524235 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.135103s Loss: 0.513423 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.793324
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081245s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.228339s Loss: 0.504689 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.133944s Loss: 0.515175 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795201
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071934s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.207204s Loss: 0.515042 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.135032s Loss: 0.498440 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795551
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071047s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.209134s Loss: 0.509593 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Dumping sparse weights to files, successful
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m21s][HUGECTR][INFO]: Dumping sparse optimzer states to files, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense optimizer states to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping untrainable weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Save the model graph to /model/movielens_hugectr/1/movielens.json, successful
###Markdown
We trained our model. After training terminates, we can see that multiple `.model` files and folders were generated. We need to move them into the `1` folder under the `movielens_hugectr` folder, which we created earlier. Now we move our saved `.model` files into the `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
###Markdown
Now we can save our models to be deployed at the inference stage. To do so, we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we also create a `hugectr_params` dictionary and define parameters such as the path from which the `movielens.json` file will be read, `slots` (the number of categorical features), `embedding_vector_size`, `max_nnz`, and `n_outputs` (the number of outputs). The script below creates an ensemble Triton Server model, where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the path to the HugeCTR model that should be served. This path includes the `.model` files.- `name` is the base name of the various Triton models- `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64,
)
###Output
_____no_output_____
###Markdown
OverviewIn this notebook, we provide an overview of what the HugeCTR framework is and of its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt the NVTabular workflow to provide input files to HugeCTR* Define a HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy the HugeCTR model to Triton Inference Server Why use HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and to estimate Click-Through Rates (CTRs).HugeCTR offers multiple advantages for training deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written in C++. We experienced up to a 10x speed-up. HugeCTR on an NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the Deep Learning Recommendation Model (DLRM) architecture developed by Facebook.2. **Scale**: HugeCTR supports model-parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy to use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender system architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTRHugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables that don't fit within the GPU or CPU memory of a single node (only the required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader* Support for data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabularWe use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import shutil
import gc
import nvtabular as nvt
import cudf
import numpy as np
from os import path
from sklearn.model_selection import train_test_split
from nvtabular.utils import download_file
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = "/model/data/"
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", os.path.join(BASE_DIR, "ml-25m.zip")
)
###Output
downloading ml-25m.zip: 262MB [00:09, 27.0MB/s]
unzipping files: 100%|██████████| 8/8 [00:10<00:00, 1.35s/files]
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = cudf.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and test dataset. We use a simple random split.
###Code
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + "train.parquet")
valid.to_parquet(BASE_DIR + "valid.parquet")
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in this example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ["userId", "movieId"]
LABEL_COLUMNS = ["rating"]
###Output
_____no_output_____
###Markdown
Let's add Categorify op for our categorical features, userId, movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale from 1 to 5. We want to predict a binary target, where 1 corresponds to ratings >=4 and 0 to ratings <=3. We use a LambdaOp for this.
###Code
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features + ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize NVTabular Datasets and use the part_size parameter of nvt.Dataset, which defines how much data is read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + "train.parquet", part_size="100MB")
valid_dataset = nvt.Dataset(BASE_DIR + "valid.parquet", part_size="100MB")
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 860 ms, sys: 275 ms, total: 1.13 s
Wall time: 1.19 s
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and the continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow accordingly.
###Code
dict_dtypes = {}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
train_dir = os.path.join(BASE_DIR, "train")
valid_dir = os.path.join(BASE_DIR, "valid")
if path.exists(train_dir):
shutil.rmtree(train_dir)
if path.exists(valid_dir):
shutil.rmtree(valid_dir)
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, which are `continuous`, and which are the `label` columns. NVTabular will write out metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(
output_path=BASE_DIR + "train/",
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
workflow.transform(valid_dataset).to_parquet(
output_path=BASE_DIR + "valid/",
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
###Output
_____no_output_____
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA C++, we need to define the training pipeline and model architecture and execute it via the command line. We will use the Python API, which is similar to Keras models. HugeCTR has three main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* Model: Specifies training/evaluation data (and their paths), embeddings, and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look at the parameters for the `Solver`. We should be familiar with the hyperparameters from other frameworks.```solver = hugectr.solver_parser_helper(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used; in the first node, GPUs 0 and 1 are used, while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - max_iter: Total number of training iterations- batchsize: Minibatch size used in training- display: Interval at which to print the loss on the screen- eval_interval: Evaluation interval in units of training iterations- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of batches in the evaluation dataset.If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely.On the other hand, with num_epochs, HugeCTR stops the evaluation if all the evaluation data is consumed - batchsize_eval: Minibatch size used in evaluation- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128, 256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm used to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD - learning_rate: Learning rate for the optimizer)``` **Model**We initialize the model with the solver and optimizer:```model = hugectr.Model(solver, optimizer)```We can add multiple layers to the model with the `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( data_reader_type: Data format to read source: The training dataset file list. eval_source: The evaluation dataset file list. check_type: The data error detection mechanism (Sum: Checksum, None: no detection). 
label_dim: Number of label columns label_name: Name of the label columns in the network architecture dense_dim: Number of continuous columns dense_name: Name of the continuous columns in the network architecture slot_size_array: The list of categorical feature cardinalities data_reader_sparse_param_array: Configuration of how to read the sparse data sparse_names: Name of the sparse/categorical columns in the network architecture)```SparseEmbedding:This layer defines the embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables max_vocabulary_size_per_gpu: Maximum vocabulary size or cardinality across all the input features embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op (0=sum, 1=average) sparse_embedding_name: Layer name bottom_name: Input layer names)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type, additional parameters can be defined)``` Let's define our modelWe walked through the documentation to get familiar with the API. Now we can finally define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'movieId': (56586, 512), 'userId': (162542, 512)}
###Markdown
In addition, we need the total cardinality, which will be assigned to the `max_vocabulary_size_per_gpu` parameter.
###Code
total_cardinality = embeddings["userId"][0] + embeddings["movieId"][0]
total_cardinality
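# Quick sanity check (illustrative, not part of the original notebook): the summed
# cardinality matches the max_vocabulary_size_per_gpu value (219128) used in the model below.
assert total_cardinality == 162542 + 56586 == 219128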
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
solver = hugectr.solver_parser_helper(
vvgpu=[[0]],
max_iter=2000,
batchsize=2048,
display=100,
eval_interval=200,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
snapshot=1900,
)
optimizer = hugectr.optimizer.CreateOptimizer(
optimizer_type=hugectr.Optimizer_t.Adam, use_mixed_precision=False
)
model = hugectr.Model(solver, optimizer)
model.add(
hugectr.Input(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source="/model/data/train/_file_list.txt",
eval_source="/model/data/valid/_file_list.txt",
check_type=hugectr.Check_t.Non,
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
slot_size_array=[162542, 56586],
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam(hugectr.DataReaderSparse_t.Distributed, 3, 1, 2)
],
sparse_names=["data1"],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
max_vocabulary_size_per_gpu=219128,
embedding_vec_size=16,
combiner=0,
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.Reshape,
bottom_names=["sparse_embedding1"],
top_names=["reshape1"],
leading_dim=32,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["reshape1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
model.fit()
!python model.py
###Output
===================================Model Init====================================
[21d15h00m55s][HUGECTR][INFO]: Global seed is 3138621309
[21d15h00m56s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-DGXS-16GB
[21d15h00m56s][HUGECTR][INFO]: num of DataReader workers: 1
[21d15h00m56s][HUGECTR][INFO]: num_internal_buffers 1
[21d15h00m56s][HUGECTR][INFO]: num_internal_buffers 1
[21d15h00m56s][HUGECTR][INFO]: Vocabulary size: 219128
[21d15h00m56s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=219128
[21d15h00m58s][HUGECTR][INFO]: gpu0 start to init embedding
[21d15h00m58s][HUGECTR][INFO]: gpu0 init embedding done
==================================Model Summary==================================
Label Name Dense Name Sparse Name
label dense data1
--------------------------------------------------------------------------------
Layer Type Input Name Output Name
--------------------------------------------------------------------------------
DistributedHash data1 sparse_embedding1
Reshape sparse_embedding1 reshape1
InnerProduct reshape1 fc1
ReLU fc1 relu1
InnerProduct relu1 fc2
ReLU fc2 relu2
InnerProduct relu2 fc3
BinaryCrossEntropyLoss fc3, label loss
--------------------------------------------------------------------------------
=====================================Model Fit====================================
[21d15h00m58s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[21d15h00m58s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[21d15h00m58s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[21d15h00m58s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.055490s Loss: 0.585059 lr:0.001000
[21d15h00m58s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.053898s Loss: 0.584555 lr:0.001000
[21d15h00m58s][HUGECTR][INFO]: Evaluation, AUC: 0.746342
[21d15h00m58s][HUGECTR][INFO]: Eval Time for 160 iters: 0.038284s
[21d15h00m58s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.094700s Loss: 0.560614 lr:0.001000
[21d15h00m58s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.054163s Loss: 0.538758 lr:0.001000
[21d15h00m58s][HUGECTR][INFO]: Evaluation, AUC: 0.764121
[21d15h00m58s][HUGECTR][INFO]: Eval Time for 160 iters: 0.038577s
[21d15h00m59s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.104862s Loss: 0.550330 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.054508s Loss: 0.550764 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Evaluation, AUC: 0.773638
[21d15h00m59s][HUGECTR][INFO]: Eval Time for 160 iters: 0.037368s
[21d15h00m59s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.093117s Loss: 0.551641 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.054593s Loss: 0.546905 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Evaluation, AUC: 0.779528
[21d15h00m59s][HUGECTR][INFO]: Eval Time for 160 iters: 0.046200s
[21d15h00m59s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.102006s Loss: 0.553355 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.064870s Loss: 0.537847 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Evaluation, AUC: 0.784503
[21d15h00m59s][HUGECTR][INFO]: Eval Time for 160 iters: 0.036215s
[21d15h00m59s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.091835s Loss: 0.546954 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.054377s Loss: 0.530833 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Evaluation, AUC: 0.785607
[21d15h00m59s][HUGECTR][INFO]: Eval Time for 160 iters: 0.036650s
[21d15h00m59s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.092484s Loss: 0.534738 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.054482s Loss: 0.512133 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Evaluation, AUC: 0.790324
[21d15h00m59s][HUGECTR][INFO]: Eval Time for 160 iters: 0.045695s
[21d15h00m59s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.111380s Loss: 0.527739 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.054460s Loss: 0.517671 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Evaluation, AUC: 0.793347
[21d15h00m59s][HUGECTR][INFO]: Eval Time for 160 iters: 0.036242s
[21d15h00m59s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.091824s Loss: 0.519957 lr:0.001000
[21d15h00m59s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.054464s Loss: 0.522904 lr:0.001000
[21d15h10m00s][HUGECTR][INFO]: Evaluation, AUC: 0.794346
[21d15h10m00s][HUGECTR][INFO]: Eval Time for 160 iters: 0.036052s
[21d15h10m00s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.091779s Loss: 0.541558 lr:0.001000
[21d15h10m00s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[21d15h10m00s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[21d15h10m00s][HUGECTR][INFO]: Done
###Markdown
We trained our model. After training terminates, we can see that two `.model` files were generated. We need to move them into the `1` folder under the `movielens_hugectr` folder. Let's create these folders first.
###Code
!mkdir -p /model/movielens_hugectr/1
###Output
_____no_output_____
###Markdown
Now we move our saved `.model` files inside `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
###Markdown
Note that these stored `.model` files will be used at inference. Now we have to create a JSON file for inference which has a similar configuration to our training file. We should remove the solver and optimizer clauses and add the inference clause to the JSON file. The paths of the stored dense model and sparse model(s) should be specified at dense_model_file and sparse_model_file within the inference clause. We need to make some modifications to the data layer in the layers clause. In addition, we need to change the last layer from BinaryCrossEntropyLoss to Sigmoid. The rest of the "layers" should be exactly the same as in the training model.py file.Now let's create a `movielens.json` file inside the `movielens_hugectr/1` folder. We have already retrieved the cardinality of each categorical column using the `get_embedding_sizes` function above. We will use these cardinalities below in the `movielens.json` file as well.
###Code
%%writefile '/model/movielens_hugectr/1/movielens.json'
{
"inference": {
"max_batchsize": 64,
"hit_rate_threshold": 0.6,
"dense_model_file": "/model/models/movielens/1/_dense_1900.model",
"sparse_model_file": "/model/models/movielens/1/0_sparse_1900.model",
"label": 1,
"input_key_type": "I64"
},
"layers": [
{
"name": "data",
"type": "Data",
"format": "Parquet",
"slot_size_array": [162542, 56586],
"source": "/model/data/train/_file_list.txt",
"eval_source": "/model/data/valid/_file_list.txt",
"check": "Sum",
"label": {"top": "label", "label_dim": 1},
"dense": {"top": "dense", "dense_dim": 0},
"sparse": [
{
"top": "data1",
"type": "DistributedSlot",
"max_feature_num_per_sample": 3,
"slot_num": 2
}
]
},
{
"name": "sparse_embedding1",
"type": "DistributedSlotSparseEmbeddingHash",
"bottom": "data1",
"top": "sparse_embedding1",
"sparse_embedding_hparam": {
"max_vocabulary_size_per_gpu": 219128,
"embedding_vec_size": 16,
"combiner": 0
}
},
{
"name": "reshape1",
"type": "Reshape",
"bottom": "sparse_embedding1",
"top": "reshape1",
"leading_dim": 32
},
{
"name": "fc1",
"type": "InnerProduct",
"bottom": "reshape1",
"top": "fc1",
"fc_param": {"num_output": 128}
},
{"name": "relu1", "type": "ReLU", "bottom": "fc1", "top": "relu1"},
{
"name": "fc2",
"type": "InnerProduct",
"bottom": "relu1",
"top": "fc2",
"fc_param": {"num_output": 128}
},
{"name": "relu2", "type": "ReLU", "bottom": "fc2", "top": "relu2"},
{
"name": "fc3",
"type": "InnerProduct",
"bottom": "relu2",
"top": "fc3",
"fc_param": {"num_output": 1}
},
{"name": "sigmoid", "type": "Sigmoid", "bottom": "fc3", "top": "sigmoid"}
]
}
###Output
Writing /model/movielens_hugectr/1/movielens.json
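###Markdown
 Since hand-written JSON is easy to break, a quick parse check is worthwhile before wiring the file into Triton. A minimal sketch, assuming the path we just wrote to:
###Code
import json

# Load the inference config back to make sure it is valid JSON
with open("/model/movielens_hugectr/1/movielens.json") as f:
    config = json.load(f)
print(config["inference"]["dense_model_file"])
print([layer["name"] for layer in config["layers"]])
###Output
_____no_output_____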
###Markdown
Now we can save our models to be deployed at the inference stage. To do so, we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we also create a `hugectr_params` dictionary and define parameters such as the path from which the `movielens.json` file will be read, `slots` (the number of categorical features), `embedding_vector_size`, `max_nnz`, and `n_outputs` (the number of outputs). The script below creates an ensemble Triton Server model, where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the path to the HugeCTR model that should be served. This path includes the `.model` files.- `name` is the base name of the various Triton models- `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64,
)
###Output
_____no_output_____
###Markdown
OverviewIn this notebook, we provide an overview of what the HugeCTR framework is and of its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt the NVTabular workflow to provide input files to HugeCTR* Define a HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy the HugeCTR model to Triton Inference Server Why use HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and to estimate Click-Through Rates (CTRs).HugeCTR offers multiple advantages for training deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written in C++. We experienced up to a 10x speed-up. HugeCTR on an NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the Deep Learning Recommendation Model (DLRM) architecture developed by Facebook.2. **Scale**: HugeCTR supports model-parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy to use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender system architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTRHugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables that don't fit within the GPU or CPU memory of a single node (only the required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader* Support for data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabularWe use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import shutil
import gc
import nvtabular as nvt
import numpy as np
from os import path
from sklearn.model_selection import train_test_split
from nvtabular.utils import download_file
# Get dataframe library - cudf or pandas
from nvtabular.dispatch import get_lib
df_lib = get_lib()
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = "/model/data/"
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", os.path.join(BASE_DIR, "ml-25m.zip")
)
###Output
downloading ml-25m.zip: 262MB [03:23, 1.29MB/s]
unzipping files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.34files/s]
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = df_lib.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and test dataset. We use a simple random split.
###Code
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + "train.parquet")
valid.to_parquet(BASE_DIR + "valid.parquet")
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in this example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ["userId", "movieId"]
LABEL_COLUMNS = ["rating"]
###Output
_____no_output_____
###Markdown
Let's add Categorify op for our categorical features, userId, movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale from 1 to 5. We want to predict a binary target, where 1 corresponds to ratings >=4 and 0 to ratings <=3. We use a LambdaOp for this.
###Code
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features + ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize NVTabular Datasets and use the part_size parameter of nvt.Dataset, which defines how much data is read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + "train.parquet", part_size="100MB")
valid_dataset = nvt.Dataset(BASE_DIR + "valid.parquet", part_size="100MB")
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 1.01 s, sys: 315 ms, total: 1.32 s
Wall time: 1.39 s
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and the continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow accordingly.
###Code
dict_dtypes = {}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
train_dir = os.path.join(BASE_DIR, "train")
valid_dir = os.path.join(BASE_DIR, "valid")
if path.exists(train_dir):
shutil.rmtree(train_dir)
if path.exists(valid_dir):
shutil.rmtree(valid_dir)
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, which are `continuous`, and which are the `label` columns. NVTabular will write out metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(
output_path=BASE_DIR + "train/",
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
workflow.transform(valid_dataset).to_parquet(
output_path=BASE_DIR + "valid/",
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
###Output
_____no_output_____
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA C++, we need to define the training pipeline and model architecture and execute it via the command line. We will use the Python API, which is similar to Keras models. HugeCTR has four main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* DataReader: Specifies the training/evaluation data* Model: Specifies embeddings and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look at the parameters for the `Solver`. We should be familiar with the hyperparameters from other frameworks.```solver = hugectr.CreateSolver(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used; in the first node, GPUs 0 and 1 are used, while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - batchsize: Minibatch size used in training- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of batches in the evaluation dataset.If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely.On the other hand, with num_epochs, HugeCTR stops the evaluation if all the evaluation data is consumed - batchsize_eval: Minibatch size used in evaluation- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128, 256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm used to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD - learning_rate: Learning rate for the optimizer)``` **DataReader**The data reader defines the training and evaluation datasets.```reader = hugectr.DataReaderParams(- data_reader_type: Data format to read- source: The training dataset file list. IMPORTANT: This should be a list- eval_source: The evaluation dataset file list.- check_type: The data error detection mechanism (Sum: Checksum, None: no detection).- slot_size_array: The list of categorical feature cardinalities)``` **Model**We initialize the model with the solver, optimizer and data reader:```model = hugectr.Model(solver, reader, optimizer)```We can add multiple layers to the model with the `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. 
For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( label_dim: Number of label columns label_name: Name of label columns in network architecture dense_dim: Number of continuous columns dense_name: Name of contiunous columns in network architecture data_reader_sparse_param_array: Configuration how to read sparse data and its names)```SparseEmbedding:This layer defines embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables workspace_size_per_gpu_in_mb: Maximum embedding table size in MB embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op sparse_embedding_name: Layer name bottom_name: Input layer names optimizer: Optimizer to use)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type additional parameter can be defined)```This is only a short introduction in the API. You can read more in the official docs: [Python Interface](https://github.com/NVIDIA/HugeCTR/blob/master/docs/python_interface.md) and [Layer Book](https://github.com/NVIDIA/HugeCTR/blob/master/docs/hugectr_layer_book.md) Let's define our modelWe walked through the documentation, but it is useful to understand the API. Finally, we can define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'userId': (162542, 512), 'movieId': (56586, 512)}
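###Markdown
The tuples are (cardinality, suggested embedding width). As a minimal sketch (assuming the fitted `workflow` and `CATEGORICAL_COLUMNS` from above), the `slot_size_array` used in the training script below can also be derived programmatically instead of hard-coding it:
###Code
from nvtabular.ops import get_embedding_sizes

# (cardinality, suggested embedding width) per categorical column
embeddings = get_embedding_sizes(workflow)

# Keep the same column order the data reader will use: ["userId", "movieId"]
slot_size_array = [embeddings[col][0] for col in CATEGORICAL_COLUMNS]
print(slot_size_array)  # expected: [162542, 56586]
###Output
_____no_output_____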
###Markdown
Let's clear the directory and create the output folders
###Code
!rm -r /model/movielens_hugectr
!mkdir -p /model/movielens_hugectr/1
###Output
rm: cannot remove '/model/movielens_hugectr': No such file or directory
###Markdown
We use `graph_to_json` to convert the model to a JSON configuration, required for the inference.
###Code
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
solver = hugectr.CreateSolver(
vvgpu=[[0]],
batchsize=2048,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
)
optimizer = hugectr.CreateOptimizer(optimizer_type=hugectr.Optimizer_t.Adam)
reader = hugectr.DataReaderParams(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source=["/model/data/train/_file_list.txt"],
eval_source="/model/data/valid/_file_list.txt",
check_type=hugectr.Check_t.Non,
slot_size_array=[162542, 56586],
)
model = hugectr.Model(solver, reader, optimizer)
model.add(
hugectr.Input(
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam("data1", nnz_per_slot=2, is_fixed_length=True, slot_num=2)
],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.LocalizedSlotSparseEmbeddingHash,
workspace_size_per_gpu_in_mb=100,
embedding_vec_size=16,
combiner="sum",
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
optimizer=optimizer,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.Reshape,
bottom_names=["sparse_embedding1"],
top_names=["reshape1"],
leading_dim=32,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["reshape1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
model.fit(max_iter=2000, display=100, eval_interval=200, snapshot=1900)
model.graph_to_json(graph_config_file="/model/movielens_hugectr/1/movielens.json")
!python model.py
###Output
====================================================Model Init=====================================================
[26d18h35m13s][HUGECTR][INFO]: Global seed is 3848625588
[26d18h35m15s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-SXM2-32GB
[26d18h35m15s][HUGECTR][INFO]: num of DataReader workers: 1
[26d18h35m15s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=1638400
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup Start
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup End
===================================================Model Compile===================================================
[26d18h35m17s][HUGECTR][INFO]: gpu0 start to init embedding
[26d18h35m17s][HUGECTR][INFO]: gpu0 init embedding done
===================================================Model Summary===================================================
Label Dense Sparse
label dense data1
(None, 1) (None, 0)
------------------------------------------------------------------------------------------------------------------
Layer Type Input Name Output Name Output Shape
------------------------------------------------------------------------------------------------------------------
LocalizedSlotSparseEmbeddingHash data1 sparse_embedding1 (None, 2, 16)
Reshape sparse_embedding1 reshape1 (None, 32)
InnerProduct reshape1 fc1 (None, 128)
ReLU fc1 relu1 (None, 128)
InnerProduct relu1 fc2 (None, 128)
ReLU fc2 relu2 (None, 128)
InnerProduct relu2 fc3 (None, 1)
BinaryCrossEntropyLoss fc3,label loss
------------------------------------------------------------------------------------------------------------------
=====================================================Model Fit=====================================================
[26d18h35m17s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[26d18h35m17s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[26d18h35m17s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[26d18h35m17s][HUGECTR][INFO]: Sparse embedding trainable: 1, dense network trainable: 1
[26d18h35m17s][HUGECTR][INFO]: Use mixed precision: 0, scaler: 1.000000, use cuda graph: 1
[26d18h35m17s][HUGECTR][INFO]: lr: 0.001000, warmup_steps: 1, decay_start: 0, decay_steps: 1, decay_power: 2.000000, end_lr: 0.000000
[26d18h35m17s][HUGECTR][INFO]: Training source file: /model/data/train/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Evaluation source file: /model/data/valid/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.136342s Loss: 0.579462 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.135109s Loss: 0.554109 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Evaluation, AUC: 0.745997
[26d18h35m17s][HUGECTR][INFO]: Eval Time for 160 iters: 0.073575s
[26d18h35m17s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.210037s Loss: 0.571327 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.132709s Loss: 0.546585 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.764737
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.070747s
[26d18h35m18s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.216137s Loss: 0.552045 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.133178s Loss: 0.541653 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.774266
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.069966s
[26d18h35m18s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.204785s Loss: 0.524283 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.133213s Loss: 0.530550 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.780663
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081221s
[26d18h35m18s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.216339s Loss: 0.541633 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.142290s Loss: 0.541528 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.786361
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.068574s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.203976s Loss: 0.528578 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.133187s Loss: 0.522433 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.788285
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.076724s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.213124s Loss: 0.524235 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.135103s Loss: 0.513423 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.793324
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081245s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.228339s Loss: 0.504689 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.133944s Loss: 0.515175 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795201
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071934s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.207204s Loss: 0.515042 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.135032s Loss: 0.498440 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795551
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071047s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.209134s Loss: 0.509593 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Dumping sparse weights to files, successful
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m21s][HUGECTR][INFO]: Dumping sparse optimzer states to files, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense optimizer states to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping untrainable weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Save the model graph to /model/movielens_hugectr/1/movielens.json, successful
###Markdown
We trained our model. After training terminates, we can see that multiple `.model` files and folders are generated. We need to move them inside the `1` folder under the `movielens_hugectr` folder, which we created above. Now we move our saved `.model` files inside the `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
###Markdown
Now we can save our models to be deployed at the inference stage. To do so we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we should also create a `hugectr_params` dictionary, and define parameters such as where the `movielens.json` file will be read from, `slots`, which corresponds to the number of categorical features, `embedding_vector_size`, `max_nnz`, and `n_outputs`, which is the number of outputs. The script below creates an ensemble Triton Server model where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the HugeCTR model that should be served. This path includes the `.model` files.- `name` is the base name of the various Triton models- `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64,
)
###Output
_____no_output_____
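###Markdown
As an optional check (a minimal sketch, assuming the `output_path` of `/model/models/` used above), we can walk the exported Triton model repository to confirm that the per-model folders and `config.pbtxt` files were generated:
###Code
import os

# Walk the Triton model repository produced by export_hugectr_ensemble and
# print every generated file, including the config.pbtxt of each model.
for root, _, files in os.walk("/model/models/"):
    for name in sorted(files):
        print(os.path.join(root, name))
###Output
_____no_output_____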
###Markdown
OverviewIn this notebook, we want to provide an overview of what the HugeCTR framework is, along with its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt the NVTabular workflow to provide input files to HugeCTR* Define a HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy HugeCTR to Triton Inference Server Why use HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and estimate Click-Through Rates (CTRs).HugeCTR offers multiple advantages for training deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written in C++. We experienced up to 10x speed-ups. HugeCTR on an NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the Deep Learning Recommendation Model (DLRM) architecture developed by Facebook.2. **Scale**: HugeCTR supports model-parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy-to-use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender system architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTRHugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables with single nodes that don’t fit within the GPU or CPU memory (only required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader* Support for data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabularWe use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import time
import gc
import nvtabular as nvt
import cudf
import numpy as np
from os import path
from sklearn.model_selection import train_test_split
from nvtabular.utils import download_file
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = '/model/data/'
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file("http://files.grouplens.org/datasets/movielens/ml-25m.zip",
os.path.join(BASE_DIR, "ml-25m.zip"))
###Output
downloading ml-25m.zip: 262MB [00:43, 6.09MB/s]
unzipping files: 100%|██████████| 8/8 [00:09<00:00, 1.19s/files]
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = cudf.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and validation datasets. We use a simple random split.
###Code
ratings = ratings.drop('timestamp', axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + 'train.parquet')
valid.to_parquet(BASE_DIR + 'valid.parquet')
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in this example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ['userId', 'movieId']
LABEL_COLUMNS = ['rating']
###Output
_____no_output_____
###Markdown
Let's add the Categorify op for our categorical features, userId and movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale between 1-5. We want to predict a binary target, with 1 for all ratings >=4 and 0 for all ratings <=3. We use a LambdaOp for this.
###Code
ratings = nvt.ColumnGroup(['rating']) >> (lambda col: (col>3).astype('int8'))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features+ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize the NVTabular Datasets, using the part_size parameter of nvt.Dataset, which defines the chunk size read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + 'train.parquet', part_size='100MB')
valid_dataset = nvt.Dataset(BASE_DIR + 'valid.parquet', part_size='100MB')
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 884 ms, sys: 333 ms, total: 1.22 s
Wall time: 1.32 s
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and the continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow.
###Code
dict_dtypes={}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
if path.exists(BASE_DIR + 'train'):
!rm -r $BASE_DIR/train
if path.exists(BASE_DIR + 'valid'):
!rm -r $BASE_DIR/valid
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, which are `continuous`, and which are the `label` columns. NVTabular will write metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(output_path=BASE_DIR + 'train/',
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes
)
workflow.transform(valid_dataset).to_parquet(output_path=BASE_DIR + 'valid/',
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes
)
###Output
_____no_output_____
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA C++, we need to define the training pipeline and model architecture and execute it via the command line. We will use the Python API, which is similar to Keras models. HugeCTR has three main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* Model: Specifies training/evaluation data (and their paths), embeddings, and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look at the parameters for the `Solver`. The hyperparameters should be familiar from other frameworks.```solver = hugectr.solver_parser_helper(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used. In the first node, GPUs 0 and 1 are used, while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - max_iter: Total number of training iterations- batchsize: Minibatch size used in training- display: Interval at which the loss is printed on the screen- eval_interval: Evaluation interval in units of training iterations- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of batches in the evaluation dataset. If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely. On the other hand, with num_epochs, HugeCTR stops the evaluation once all the evaluation data is consumed- batchsize_eval: Minibatch size used in evaluation- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128, 256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm used to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD- learning_rate: Learning rate for the optimizer)``` **Model**We initialize the model with the solver and optimizer:```model = hugectr.Model(solver, optimizer)```We can add multiple layers to the model with the `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( data_reader_type: Data format to read source: The training dataset file list eval_source: The evaluation dataset file list check_type: The data error detection mechanism (Sum: Checksum, None: no detection) 
label_dim: Number of label columns label_name: Name of label columns in network architecture dense_dim: Number of continuous columns dense_name: Name of continuous columns in network architecture slot_size_array: The list of categorical feature cardinalities data_reader_sparse_param_array: Configuration of how to read sparse data sparse_names: Name of sparse/categorical columns in network architecture)```SparseEmbedding:This layer defines the embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables max_vocabulary_size_per_gpu: Maximum vocabulary size or cardinality across all the input features embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op (0=sum, 1=average) sparse_embedding_name: Layer name bottom_name: Input layer names)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type, additional parameters can be defined)``` Let's define our modelWe walked through the documentation, which is useful for understanding the API. Now we can define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'movieId': (56586, 512), 'userId': (162542, 512)}
###Markdown
In addition, we need the total cardinality, which is assigned to the `max_vocabulary_size_per_gpu` parameter.
###Code
total_cardinality = embeddings['movieId'][0] + embeddings['userId'][0]
total_cardinality
%%writefile './model.py'
import hugectr
from mpi4py import MPI
solver = hugectr.solver_parser_helper(vvgpu = [[0]],
max_iter = 2000,
batchsize = 2048,
display = 100,
eval_interval = 200,
batchsize_eval = 2048,
max_eval_batches = 160,
i64_input_key = True,
use_mixed_precision = False,
repeat_dataset = True,
snapshot = 1900
)
optimizer = hugectr.optimizer.CreateOptimizer(
optimizer_type = hugectr.Optimizer_t.Adam,
use_mixed_precision = False
)
model = hugectr.Model(solver, optimizer)
model.add(
hugectr.Input(
data_reader_type = hugectr.DataReaderType_t.Parquet,
source = "/model/data/train/_file_list.txt",
eval_source = "/model/data/valid/_file_list.txt",
check_type = hugectr.Check_t.Non,
label_dim = 1,
label_name = "label",
dense_dim = 0,
dense_name = "dense",
slot_size_array = [56586, 162542],
data_reader_sparse_param_array = [
hugectr.DataReaderSparseParam(hugectr.DataReaderSparse_t.Distributed, 3, 1, 2)
],
sparse_names = ["data1"]
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type = hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
max_vocabulary_size_per_gpu = 219128,
embedding_vec_size = 16,
combiner = 0,
sparse_embedding_name = "sparse_embedding1",
bottom_name = "data1"
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.Reshape,
bottom_names = ["sparse_embedding1"],
top_names = ["reshape1"],
leading_dim=32
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.InnerProduct,
bottom_names = ["reshape1"],
top_names = ["fc1"],
num_output=128
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.ReLU,
bottom_names = ["fc1"],
top_names = ["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.InnerProduct,
bottom_names = ["relu1"],
top_names = ["fc2"],
num_output=128
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.ReLU,
bottom_names = ["fc2"],
top_names = ["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.InnerProduct,
bottom_names = ["relu2"],
top_names = ["fc3"],
num_output=1
)
)
model.add(
hugectr.DenseLayer(
layer_type = hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names = ["fc3", "label"],
top_names = ["loss"])
)
model.compile()
model.summary()
model.fit()
!python model.py
###Output
===================================Model Init====================================
[12d22h09m04s][HUGECTR][INFO]: Global seed is 2523917653
[12d22h09m06s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-DGXS-16GB
[12d22h09m06s][HUGECTR][INFO]: num of DataReader workers: 1
[12d22h09m06s][HUGECTR][INFO]: num_internal_buffers 1
[12d22h09m06s][HUGECTR][INFO]: num_internal_buffers 1
[12d22h09m06s][HUGECTR][INFO]: Vocabulary size: 219128
[12d22h09m06s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=219128
[12d22h09m07s][HUGECTR][INFO]: gpu0 start to init embedding
[12d22h09m07s][HUGECTR][INFO]: gpu0 init embedding done
==================================Model Summary==================================
Label Name Dense Name Sparse Name
label dense data1
--------------------------------------------------------------------------------
Layer Type Input Name Output Name
--------------------------------------------------------------------------------
DistributedHash data1 sparse_embedding1
Reshape sparse_embedding1 reshape1
InnerProduct reshape1 fc1
ReLU fc1 relu1
InnerProduct relu1 fc2
ReLU fc2 relu2
InnerProduct relu2 fc3
BinaryCrossEntropyLoss fc3, label loss
--------------------------------------------------------------------------------
=====================================Model Fit====================================
[12d22h90m70s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[12d22h90m70s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[12d22h90m70s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[12d22h90m70s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.052433s Loss: 0.584569 lr:0.001000
[12d22h90m70s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.050910s Loss: 0.574016 lr:0.001000
[12d22h90m70s][HUGECTR][INFO]: Evaluation, AUC: 0.742104
[12d22h90m70s][HUGECTR][INFO]: Eval Time for 160 iters: 0.037350s
[12d22h90m70s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.097618s Loss: 0.567825 lr:0.001000
[12d22h90m70s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.050943s Loss: 0.537596 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.759488
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.032945s
[12d22h90m80s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.096795s Loss: 0.542408 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.050967s Loss: 0.542498 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.773175
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.032986s
[12d22h90m80s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.085280s Loss: 0.537160 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.051053s Loss: 0.536568 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.778617
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.044035s
[12d22h90m80s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.096313s Loss: 0.522038 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.061872s Loss: 0.527347 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.784214
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.032451s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.084576s Loss: 0.539346 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.050991s Loss: 0.540385 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.785587
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.033604s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.085920s Loss: 0.526508 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.050974s Loss: 0.529692 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.790832
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.044729s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.108554s Loss: 0.512485 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.050959s Loss: 0.553773 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.792876
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.034639s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.086896s Loss: 0.511820 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.050913s Loss: 0.529587 lr:0.001000
[12d22h90m90s][HUGECTR][INFO]: Evaluation, AUC: 0.794456
[12d22h90m90s][HUGECTR][INFO]: Eval Time for 160 iters: 0.034695s
[12d22h90m90s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.086743s Loss: 0.520362 lr:0.001000
[12d22h90m90s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[12d22h90m90s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[12d22h90m90s][HUGECTR][INFO]: Done
###Markdown
We trained our model. After training terminates, we can see that two `.model` files are generated. We need to move them inside the `1` folder under the `movielens_hugectr` folder. Let's create these folders first.
###Code
!mkdir -p /model/movielens_hugectr/1
###Output
_____no_output_____
###Markdown
Now we move our saved `.model` files inside the `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
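###Markdown
As a quick check (a minimal sketch), we can list the version folder to confirm the dense and sparse model files are in place before referencing them in the inference configuration below:
###Code
import os

# The dense and sparse snapshot files (e.g. "_dense_1900.model" and
# "0_sparse_1900.model") should now sit in the version folder.
print(sorted(os.listdir("/model/movielens_hugectr/1/")))
###Output
_____no_output_____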
###Markdown
Note that these stored `.model` files will be used for inference. Now we have to create a JSON file for inference that has a similar configuration to our training file. We should remove the solver and optimizer clauses and add the inference clause to the JSON file. The paths of the stored dense model and sparse model(s) should be specified in dense_model_file and sparse_model_file within the inference clause. We need to make some modifications to the data layer in the layers clause. In addition, we need to change the last layer from BinaryCrossEntropyLoss to Sigmoid. The rest of the "layers" should be exactly the same as in the training model.py file. Now let's create a `movielens.json` file inside the `movielens_hugectr/1` folder. We have already retrieved the cardinality of each categorical column using the `get_embedding_sizes` function above. We will use these cardinalities below in the `movielens.json` file as well.
###Code
%%writefile '/model/movielens_hugectr/1/movielens.json'
{
"inference": {
"max_batchsize": 64,
"hit_rate_threshold": 0.6,
"dense_model_file": "/model/models/movielens/1/_dense_1900.model",
"sparse_model_file": "/model/models/movielens/1/0_sparse_1900.model",
"label": 1,
"input_key_type": "I64"
},
"layers": [
{
"name": "data",
"type": "Data",
"format": "Parquet",
"slot_size_array": [56586, 162542],
"source": "/model/data/train/_file_list.txt",
"eval_source": "/model/data/valid/_file_list.txt",
"check": "Sum",
"label": {
"top": "label",
"label_dim": 1
},
"dense": {
"top": "dense",
"dense_dim": 0
},
"sparse": [
{
"top": "data1",
"type": "DistributedSlot",
"max_feature_num_per_sample": 3,
"slot_num": 2
}
]
},
{
"name": "sparse_embedding1",
"type": "DistributedSlotSparseEmbeddingHash",
"bottom": "data1",
"top": "sparse_embedding1",
"sparse_embedding_hparam": {
"max_vocabulary_size_per_gpu": 219128,
"embedding_vec_size": 16,
"combiner": 0
}
},
{
"name": "reshape1",
"type": "Reshape",
"bottom": "sparse_embedding1",
"top": "reshape1",
"leading_dim": 32
},
{
"name": "fc1",
"type": "InnerProduct",
"bottom": "reshape1",
"top": "fc1",
"fc_param": {
"num_output": 128
}
},
{
"name": "relu1",
"type": "ReLU",
"bottom": "fc1",
"top": "relu1"
},
{
"name": "fc2",
"type": "InnerProduct",
"bottom": "relu1",
"top": "fc2",
"fc_param": {
"num_output": 128
}
},
{
"name": "relu2",
"type": "ReLU",
"bottom": "fc2",
"top": "relu2"
},
{
"name": "fc3",
"type": "InnerProduct",
"bottom": "relu2",
"top": "fc3",
"fc_param": {
"num_output": 1
}
},
{
"name": "sigmoid",
"type": "Sigmoid",
"bottom": "fc3",
"top": "sigmoid"
}
]
}
###Output
Overwriting /model/movielens_hugectr/1/movielens.json
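###Markdown
As a sanity check (a minimal sketch; the path matches the `%%writefile` target above), we can load the configuration back with the standard `json` module and confirm that the network now ends in the `Sigmoid` layer rather than the training loss:
###Code
import json

# Load the configuration back and confirm the network now ends in Sigmoid
# instead of the training loss.
with open("/model/movielens_hugectr/1/movielens.json") as f:
    config = json.load(f)

print([layer["name"] for layer in config["layers"]])
assert config["layers"][-1]["type"] == "Sigmoid"
###Output
_____no_output_____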
###Markdown
Now we can save our models to be deployed at the inference stage. To do so we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we should also create a `hugectr_params` dictionary, and define parameters such as where the `movielens.json` file will be read from, `slots`, which corresponds to the number of categorical features, `embedding_vector_size`, `max_nnz`, and `n_outputs`, which is the number of outputs. The script below creates an ensemble Triton Server model where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the HugeCTR model that should be served. This path includes the `.model` files.- `name` is the base name of the various Triton models- `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64)
###Output
_____no_output_____
###Markdown
OverviewIn this notebook, we want to provide an overview of what the HugeCTR framework is, along with its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt the NVTabular workflow to provide input files to HugeCTR* Define a HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy HugeCTR to Triton Inference Server Why use HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and estimate Click-Through Rates (CTRs).HugeCTR offers multiple advantages for training deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written in C++. We experienced up to 10x speed-ups. HugeCTR on an NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the Deep Learning Recommendation Model (DLRM) architecture developed by Facebook.2. **Scale**: HugeCTR supports model-parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy-to-use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender system architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTRHugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables with single nodes that don’t fit within the GPU or CPU memory (only required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader* Support for data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabularWe use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import shutil
import gc
import nvtabular as nvt
import cudf
import numpy as np
from os import path
from sklearn.model_selection import train_test_split
from nvtabular.utils import download_file
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = "/model/data/"
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", os.path.join(BASE_DIR, "ml-25m.zip")
)
###Output
downloading ml-25m.zip: 262MB [03:23, 1.29MB/s]
unzipping files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.34files/s]
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = cudf.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and validation datasets. We use a simple random split.
###Code
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + "train.parquet")
valid.to_parquet(BASE_DIR + "valid.parquet")
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in this example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ["userId", "movieId"]
LABEL_COLUMNS = ["rating"]
###Output
_____no_output_____
###Markdown
Let's add the Categorify op for our categorical features, userId and movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale between 1-5. We want to predict a binary target, with 1 for all ratings >=4 and 0 for all ratings <=3. We use a LambdaOp for this.
###Code
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features + ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize the NVTabular Datasets, using the part_size parameter of nvt.Dataset, which defines the chunk size read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + "train.parquet", part_size="100MB")
valid_dataset = nvt.Dataset(BASE_DIR + "valid.parquet", part_size="100MB")
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 1.01 s, sys: 315 ms, total: 1.32 s
Wall time: 1.39 s
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and the continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow.
###Code
dict_dtypes = {}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
train_dir = os.path.join(BASE_DIR, "train")
valid_dir = os.path.join(BASE_DIR, "valid")
if path.exists(train_dir):
shutil.rmtree(train_dir)
if path.exists(valid_dir):
shutil.rmtree(valid_dir)
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, which are `continuous`, and which are the `label` columns. NVTabular will write metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(
output_path=BASE_DIR + "train/",
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
workflow.transform(valid_dataset).to_parquet(
output_path=BASE_DIR + "valid/",
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
###Output
_____no_output_____
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA C++, we need to define the training pipeline and model architecture and execute it via the command line. We will use the Python API, which is similar to Keras models. HugeCTR has three main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* DataReader: Specifies the training/evaluation data* Model: Specifies embeddings and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look at the parameters for the `Solver`. The hyperparameters should be familiar from other frameworks.```solver = hugectr.CreateSolver(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used. In the first node, GPUs 0 and 1 are used, while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - batchsize: Minibatch size used in training- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of batches in the evaluation dataset. If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely. On the other hand, with num_epochs, HugeCTR stops the evaluation once all the evaluation data is consumed- batchsize_eval: Minibatch size used in evaluation- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128, 256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm used to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD- learning_rate: Learning rate for the optimizer)``` **DataReader**The data reader defines the training and evaluation datasets.```reader = hugectr.DataReaderParams(- data_reader_type: Data format to read- source: The training dataset file list. IMPORTANT: This should be a list- eval_source: The evaluation dataset file list- check_type: The data error detection mechanism (Sum: Checksum, None: no detection)- slot_size_array: The list of categorical feature cardinalities)``` **Model**We initialize the model with the solver, optimizer and data reader:```model = hugectr.Model(solver, reader, optimizer)```We can add multiple layers to the model with the `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. 
For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( label_dim: Number of label columns label_name: Name of label columns in network architecture dense_dim: Number of continuous columns dense_name: Name of continuous columns in network architecture data_reader_sparse_param_array: Configuration of how to read sparse data and its names)```SparseEmbedding:This layer defines the embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables workspace_size_per_gpu_in_mb: Maximum embedding table size in MB embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op sparse_embedding_name: Layer name bottom_name: Input layer names optimizer: Optimizer to use)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type, additional parameters can be defined)```This is only a short introduction to the API. You can read more in the official docs: [Python Interface](https://github.com/NVIDIA/HugeCTR/blob/master/docs/python_interface.md) and [Layer Book](https://github.com/NVIDIA/HugeCTR/blob/master/docs/hugectr_layer_book.md) Let's define our modelWe walked through the documentation, which is useful for understanding the API. Now we can define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'userId': (162542, 512), 'movieId': (56586, 512)}
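###Markdown
As a rough, hedged estimate (assuming the `embeddings` dict printed above and float32 weights; the Adam optimizer keeps extra state per parameter, so the actual workspace needs headroom beyond this), we can relate these cardinalities to the `workspace_size_per_gpu_in_mb=100` used in the training script below:
###Code
# Rough size of the raw embedding weights: total cardinality x vector size x 4 bytes.
embedding_vec_size = 16  # matches the value used in the training script below
total_cardinality = sum(cardinality for cardinality, _ in embeddings.values())
approx_mb = total_cardinality * embedding_vec_size * 4 / 1024 ** 2
print(f"~{approx_mb:.1f} MB of raw embedding weights for {total_cardinality} embedding rows")
###Output
_____no_output_____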
###Markdown
Let's clear the directory and create the output folders
###Code
!rm -r /model/movielens_hugectr
!mkdir -p /model/movielens_hugectr/1
###Output
rm: cannot remove '/model/movielens_hugectr': No such file or directory
###Markdown
We use `graph_to_json` to convert the model to a JSON configuration, required for the inference.
###Code
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
solver = hugectr.CreateSolver(
vvgpu=[[0]],
batchsize=2048,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
)
optimizer = hugectr.CreateOptimizer(optimizer_type=hugectr.Optimizer_t.Adam)
reader = hugectr.DataReaderParams(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source=["/model/data/train/_file_list.txt"],
eval_source="/model/data/valid/_file_list.txt",
check_type=hugectr.Check_t.Non,
slot_size_array=[162542, 56586],
)
model = hugectr.Model(solver, reader, optimizer)
model.add(
hugectr.Input(
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam("data1", nnz_per_slot=2, is_fixed_length=True, slot_num=2)
],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.LocalizedSlotSparseEmbeddingHash,
workspace_size_per_gpu_in_mb=100,
embedding_vec_size=16,
combiner="sum",
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
optimizer=optimizer,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.Reshape,
bottom_names=["sparse_embedding1"],
top_names=["reshape1"],
leading_dim=32,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["reshape1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
model.fit(max_iter=2000, display=100, eval_interval=200, snapshot=1900)
model.graph_to_json(graph_config_file="/model/movielens_hugectr/1/movielens.json")
!python model.py
###Output
====================================================Model Init=====================================================
[26d18h35m13s][HUGECTR][INFO]: Global seed is 3848625588
[26d18h35m15s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-SXM2-32GB
[26d18h35m15s][HUGECTR][INFO]: num of DataReader workers: 1
[26d18h35m15s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=1638400
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup Start
[26d18h35m15s][HUGECTR][INFO]: All2All Warmup End
===================================================Model Compile===================================================
[26d18h35m17s][HUGECTR][INFO]: gpu0 start to init embedding
[26d18h35m17s][HUGECTR][INFO]: gpu0 init embedding done
===================================================Model Summary===================================================
Label Dense Sparse
label dense data1
(None, 1) (None, 0)
------------------------------------------------------------------------------------------------------------------
Layer Type Input Name Output Name Output Shape
------------------------------------------------------------------------------------------------------------------
LocalizedSlotSparseEmbeddingHash data1 sparse_embedding1 (None, 2, 16)
Reshape sparse_embedding1 reshape1 (None, 32)
InnerProduct reshape1 fc1 (None, 128)
ReLU fc1 relu1 (None, 128)
InnerProduct relu1 fc2 (None, 128)
ReLU fc2 relu2 (None, 128)
InnerProduct relu2 fc3 (None, 1)
BinaryCrossEntropyLoss fc3,label loss
------------------------------------------------------------------------------------------------------------------
=====================================================Model Fit=====================================================
[26d18h35m17s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[26d18h35m17s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[26d18h35m17s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[26d18h35m17s][HUGECTR][INFO]: Sparse embedding trainable: 1, dense network trainable: 1
[26d18h35m17s][HUGECTR][INFO]: Use mixed precision: 0, scaler: 1.000000, use cuda graph: 1
[26d18h35m17s][HUGECTR][INFO]: lr: 0.001000, warmup_steps: 1, decay_start: 0, decay_steps: 1, decay_power: 2.000000, end_lr: 0.000000
[26d18h35m17s][HUGECTR][INFO]: Training source file: /model/data/train/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Evaluation source file: /model/data/valid/_file_list.txt
[26d18h35m17s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.136342s Loss: 0.579462 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.135109s Loss: 0.554109 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Evaluation, AUC: 0.745997
[26d18h35m17s][HUGECTR][INFO]: Eval Time for 160 iters: 0.073575s
[26d18h35m17s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.210037s Loss: 0.571327 lr:0.001000
[26d18h35m17s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.132709s Loss: 0.546585 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.764737
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.070747s
[26d18h35m18s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.216137s Loss: 0.552045 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.133178s Loss: 0.541653 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.774266
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.069966s
[26d18h35m18s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.204785s Loss: 0.524283 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.133213s Loss: 0.530550 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Evaluation, AUC: 0.780663
[26d18h35m18s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081221s
[26d18h35m18s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.216339s Loss: 0.541633 lr:0.001000
[26d18h35m18s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.142290s Loss: 0.541528 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.786361
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.068574s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.203976s Loss: 0.528578 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.133187s Loss: 0.522433 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.788285
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.076724s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.213124s Loss: 0.524235 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.135103s Loss: 0.513423 lr:0.001000
[26d18h35m19s][HUGECTR][INFO]: Evaluation, AUC: 0.793324
[26d18h35m19s][HUGECTR][INFO]: Eval Time for 160 iters: 0.081245s
[26d18h35m19s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.228339s Loss: 0.504689 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.133944s Loss: 0.515175 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795201
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071934s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.207204s Loss: 0.515042 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.135032s Loss: 0.498440 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Evaluation, AUC: 0.795551
[26d18h35m20s][HUGECTR][INFO]: Eval Time for 160 iters: 0.071047s
[26d18h35m20s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.209134s Loss: 0.509593 lr:0.001000
[26d18h35m20s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Dumping sparse weights to files, successful
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m20s][HUGECTR][INFO]: Rank0: Write optimzer state to file
[26d18h35m20s][HUGECTR][INFO]: Done
[26d18h35m21s][HUGECTR][INFO]: Dumping sparse optimzer states to files, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping dense optimizer states to file, successful
[26d18h35m21s][HUGECTR][INFO]: Dumping untrainable weights to file, successful
[26d18h35m21s][HUGECTR][INFO]: Save the model graph to /model/movielens_hugectr/1/movielens.json, successful
###Markdown
We trained our model. After training terminates, we can see that multiple `.model` files and folders are generated. We need to move them inside the `1` folder under the `movielens_hugectr` folder, which we created above. Now we move our saved `.model` files inside the `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
###Markdown
Now we can save our models to be deployed at the inference stage. To do so we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we should also create a `hugectr_params` dictionary, and define parameters such as where the `movielens.json` file will be read from, `slots`, which corresponds to the number of categorical features, `embedding_vector_size`, `max_nnz`, and `n_outputs`, which is the number of outputs. The script below creates an ensemble Triton Server model where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the HugeCTR model that should be served. This path includes the `.model` files.- `name` is the base name of the various Triton models- `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64,
)
###Output
_____no_output_____
###Markdown
Overview In this notebook, we want to provide an overview of what the HugeCTR framework is, its features and benefits. We will use HugeCTR to train a basic neural network architecture and deploy the saved model to Triton Inference Server. Learning Objectives:* Adopt the NVTabular workflow to provide input files to HugeCTR* Define a HugeCTR neural network architecture* Train a deep learning model with HugeCTR* Deploy HugeCTR to Triton Inference Server Why use HugeCTR?HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and estimate Click-Through Rates (CTRs). HugeCTR offers multiple advantages for training deep learning recommender systems:1. **Speed**: HugeCTR is a highly efficient framework written in C++. We experienced up to a 10x speed-up. HugeCTR on an NVIDIA DGX A100 system proved to be the fastest commercially available solution for training the Deep Learning Recommender Model (DLRM) architecture developed by Facebook.2. **Scale**: HugeCTR supports model-parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes. 3. **Easy-to-use**: Easy-to-use Python API similar to Keras. Examples for popular deep learning recommender system architectures (Wide&Deep, DLRM, DCN, DeepFM) are available. Other Features of HugeCTR HugeCTR is designed to scale deep learning models for recommender systems. It provides a list of other important features:* Proficiency in oversubscribing models to train embedding tables with single nodes that don't fit within the GPU or CPU memory (only required embeddings are prefetched from a parameter server per batch)* Asynchronous and multithreaded data pipelines* A highly optimized data loader* Supported data formats such as parquet and binary* Integration with Triton Inference Server for deployment to production Getting Started In this example, we will train a neural network with HugeCTR. We will use NVTabular for preprocessing. Preprocessing and Feature Engineering with NVTabular We use NVTabular to `Categorify` our categorical input columns.
###Code
# External dependencies
import os
import shutil
import gc
import nvtabular as nvt
import cudf
import numpy as np
from os import path
from sklearn.model_selection import train_test_split
from nvtabular.utils import download_file
###Output
_____no_output_____
###Markdown
We define our base directory, containing the data.
###Code
# path to store raw and preprocessed data
BASE_DIR = "/model/data/"
###Output
_____no_output_____
###Markdown
If the data is not available in the base directory, we will download and unzip the data.
###Code
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", os.path.join(BASE_DIR, "ml-25m.zip")
)
###Output
downloading ml-25m.zip: 262MB [00:43, 6.09MB/s]
unzipping files: 100%|██████████| 8/8 [00:09<00:00, 1.19s/files]
###Markdown
Preparing the dataset with NVTabular First, we take a look at the data. Let's load the movie ratings.
###Code
ratings = cudf.read_csv(os.path.join(BASE_DIR, "ml-25m", "ratings.csv"))
ratings.head()
###Output
_____no_output_____
###Markdown
We drop the timestamp column and split the ratings into training and test dataset. We use a simple random split.
###Code
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.head()
###Output
_____no_output_____
###Markdown
We save our train and valid datasets as parquet files on disk, and below we will read them in while initializing the Dataset objects.
###Code
train.to_parquet(BASE_DIR + "train.parquet")
valid.to_parquet(BASE_DIR + "valid.parquet")
del train
del valid
gc.collect()
###Output
_____no_output_____
###Markdown
Let's define our categorical and label columns. Note that in this example we do not have numerical columns.
###Code
CATEGORICAL_COLUMNS = ["userId", "movieId"]
LABEL_COLUMNS = ["rating"]
###Output
_____no_output_____
###Markdown
Let's add the Categorify op for our categorical features, userId and movieId.
###Code
cat_features = CATEGORICAL_COLUMNS >> nvt.ops.Categorify(cat_cache="device")
###Output
_____no_output_____
###Markdown
The ratings are on a scale from 1 to 5. We want to predict a binary target where 1 corresponds to ratings >=4 and 0 to ratings <=3. We use a LambdaOp for this.
###Code
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
###Output
_____no_output_____
###Markdown
We can visualize our calculation graph.
###Code
output = cat_features + ratings
(output).graph
###Output
_____no_output_____
###Markdown
We initialize our NVTabular workflow.
###Code
workflow = nvt.Workflow(output)
###Output
_____no_output_____
###Markdown
We initialize the NVTabular Datasets, using the part_size parameter of nvt.Dataset, which defines the size read into GPU memory at once.
###Code
train_dataset = nvt.Dataset(BASE_DIR + "train.parquet", part_size="100MB")
valid_dataset = nvt.Dataset(BASE_DIR + "valid.parquet", part_size="100MB")
###Output
_____no_output_____
###Markdown
First, we collect the training dataset statistics.
###Code
%%time
workflow.fit(train_dataset)
###Output
CPU times: user 884 ms, sys: 333 ms, total: 1.22 s
Wall time: 1.32 s
###Markdown
This step is slightly different for HugeCTR. HugeCTR expects the categorical input columns as `int64` and continuous/label columns as `float32`. We can define the output datatypes for our NVTabular workflow.
###Code
dict_dtypes = {}
for col in CATEGORICAL_COLUMNS:
dict_dtypes[col] = np.int64
for col in LABEL_COLUMNS:
dict_dtypes[col] = np.float32
###Output
_____no_output_____
###Markdown
Note: We do not have numerical output columns
###Code
train_dir = os.path.join(BASE_DIR, "train")
valid_dir = os.path.join(BASE_DIR, "valid")
if path.exists(train_dir):
shutil.rmtree(train_dir)
if path.exists(valid_dir):
shutil.rmtree(valid_dir)
###Output
_____no_output_____
###Markdown
In addition, we need to provide the data schema to the output calls. We need to define which output columns are `categorical`, which are `continuous`, and which is the `label` column. NVTabular will write metadata files, which HugeCTR requires to load the data and optimize training.
###Code
workflow.transform(train_dataset).to_parquet(
output_path=BASE_DIR + "train/",
shuffle=nvt.io.Shuffle.PER_PARTITION,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
workflow.transform(valid_dataset).to_parquet(
output_path=BASE_DIR + "valid/",
shuffle=False,
cats=CATEGORICAL_COLUMNS,
labels=LABEL_COLUMNS,
dtypes=dict_dtypes,
)
###Output
_____no_output_____
###Markdown
Scaling Accelerated training with HugeCTR HugeCTR is a deep learning framework dedicated to recommendation systems. It is written in CUDA C++. As HugeCTR optimizes the training in CUDA C++, we need to define the training pipeline and model architecture and execute it via the command line. We will use the Python API, which is similar to Keras models. HugeCTR has three main components:* Solver: Specifies various details such as active GPU list, batchsize, and model_file* Optimizer: Specifies the type of optimizer and its hyperparameters* Model: Specifies training/evaluation data (and their paths), embeddings, and dense layers. Note that embeddings must precede the dense layers **Solver**Let's take a look at the parameters for the `Solver`. The hyperparameters should be familiar from other frameworks.```solver = hugectr.solver_parser_helper(- vvgpu: GPU indices used in the training process, which has two levels. For example: [[0,1],[1,2]] indicates that two nodes are used; in the first node, GPUs 0 and 1 are used, while GPUs 1 and 2 are used for the second node. It is also possible to specify non-continuous GPU indices such as [0, 2, 4, 7] - max_iter: Total number of training iterations- batchsize: Minibatch size used in training- display: Intervals to print loss on the screen- eval_interval: Evaluation interval in the unit of training iterations- max_eval_batches: Maximum number of batches used in evaluation. It is recommended that the number is equal to or bigger than the actual number of batches in the evaluation dataset. If max_iter is used, the evaluation happens for max_eval_batches by repeating the evaluation dataset infinitely. On the other hand, with num_epochs, HugeCTR stops the evaluation if all the evaluation data is consumed - batchsize_eval: Minibatch size used in evaluation- mixed_precision: Enables mixed precision training with the scaler specified here. Only 128, 256, 512, and 1024 scalers are supported)``` **Optimizer**The optimizer is the algorithm to update the model parameters. HugeCTR supports the common algorithms.```optimizer = CreateOptimizer(- optimizer_type: Optimizer algorithm - Adam, MomentumSGD, Nesterov, and SGD - learning_rate: Learning Rate for optimizer)``` **Model**We initialize the model with the solver and optimizer:```model = hugectr.Model(solver, optimizer)```We can add multiple layers to the model with the `model.add` function. We will focus on:- `Input` defines the input data- `SparseEmbedding` defines the embedding layer- `DenseLayer` defines dense layers, such as fully connected, ReLU, BatchNorm, etc.**HugeCTR organizes the layers by names. For each layer, we define the input and output names.** Input layer:This layer is required to define the input data.```hugectr.Input( data_reader_type: Data format to read source: The training dataset file list. eval_source: The evaluation dataset file list. check_type: The data error detection mechanism (Sum: Checksum, None: no detection). 
label_dim: Number of label columns label_name: Name of label columns in network architecture dense_dim: Number of continuous columns dense_name: Name of continuous columns in network architecture slot_size_array: The list of categorical feature cardinalities data_reader_sparse_param_array: Configuration of how to read sparse data sparse_names: Name of sparse/categorical columns in network architecture)```SparseEmbedding:This layer defines the embedding table```hugectr.SparseEmbedding( embedding_type: Different embedding options to distribute embedding tables max_vocabulary_size_per_gpu: Maximum vocabulary size or cardinality across all the input features embedding_vec_size: Embedding vector size combiner: Intra-slot reduction op (0=sum, 1=average) sparse_embedding_name: Layer name bottom_name: Input layer names)```DenseLayer:This layer is copied to each GPU and is normally used for the MLP tower.```hugectr.DenseLayer( layer_type: Layer type, such as FullyConnected, Reshape, Concat, Loss, BatchNorm, etc. bottom_names: Input layer names top_names: Layer name ...: Depending on the layer type, additional parameters can be defined)``` Let's define our model We walked through the documentation, which is useful for understanding the API. Finally, we can define our model. We will write the model to `./model.py` and execute it afterwards. We need the cardinalities of each categorical feature to assign as `slot_size_array` in the model below.
###Code
from nvtabular.ops import get_embedding_sizes
embeddings = get_embedding_sizes(workflow)
print(embeddings)
###Output
{'movieId': (56586, 512), 'userId': (162542, 512)}
###Markdown
In addition, we need the total cardinalities to be assigned as `max_vocabulary_size_per_gpu` parameter.
###Code
total_cardinality = embeddings["movieId"][0] + embeddings["userId"][0]
total_cardinality
%%writefile './model.py'
import hugectr
from mpi4py import MPI # noqa
solver = hugectr.solver_parser_helper(
vvgpu=[[0]],
max_iter=2000,
batchsize=2048,
display=100,
eval_interval=200,
batchsize_eval=2048,
max_eval_batches=160,
i64_input_key=True,
use_mixed_precision=False,
repeat_dataset=True,
snapshot=1900,
)
optimizer = hugectr.optimizer.CreateOptimizer(
optimizer_type=hugectr.Optimizer_t.Adam, use_mixed_precision=False
)
model = hugectr.Model(solver, optimizer)
model.add(
hugectr.Input(
data_reader_type=hugectr.DataReaderType_t.Parquet,
source="/model/data/train/_file_list.txt",
eval_source="/model/data/valid/_file_list.txt",
check_type=hugectr.Check_t.Non,
label_dim=1,
label_name="label",
dense_dim=0,
dense_name="dense",
slot_size_array=[56586, 162542],
data_reader_sparse_param_array=[
hugectr.DataReaderSparseParam(hugectr.DataReaderSparse_t.Distributed, 3, 1, 2)
],
sparse_names=["data1"],
)
)
model.add(
hugectr.SparseEmbedding(
embedding_type=hugectr.Embedding_t.DistributedSlotSparseEmbeddingHash,
max_vocabulary_size_per_gpu=219128,
embedding_vec_size=16,
combiner=0,
sparse_embedding_name="sparse_embedding1",
bottom_name="data1",
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.Reshape,
bottom_names=["sparse_embedding1"],
top_names=["reshape1"],
leading_dim=32,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["reshape1"],
top_names=["fc1"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc1"],
top_names=["relu1"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu1"],
top_names=["fc2"],
num_output=128,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.ReLU,
bottom_names=["fc2"],
top_names=["relu2"],
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.InnerProduct,
bottom_names=["relu2"],
top_names=["fc3"],
num_output=1,
)
)
model.add(
hugectr.DenseLayer(
layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
bottom_names=["fc3", "label"],
top_names=["loss"],
)
)
model.compile()
model.summary()
model.fit()
!python model.py
###Output
===================================Model Init====================================
[12d22h09m04s][HUGECTR][INFO]: Global seed is 2523917653
[12d22h09m06s][HUGECTR][INFO]: Peer-to-peer access cannot be fully enabled.
Device 0: Tesla V100-DGXS-16GB
[12d22h09m06s][HUGECTR][INFO]: num of DataReader workers: 1
[12d22h09m06s][HUGECTR][INFO]: num_internal_buffers 1
[12d22h09m06s][HUGECTR][INFO]: num_internal_buffers 1
[12d22h09m06s][HUGECTR][INFO]: Vocabulary size: 219128
[12d22h09m06s][HUGECTR][INFO]: max_vocabulary_size_per_gpu_=219128
[12d22h09m07s][HUGECTR][INFO]: gpu0 start to init embedding
[12d22h09m07s][HUGECTR][INFO]: gpu0 init embedding done
==================================Model Summary==================================
Label Name Dense Name Sparse Name
label dense data1
--------------------------------------------------------------------------------
Layer Type Input Name Output Name
--------------------------------------------------------------------------------
DistributedHash data1 sparse_embedding1
Reshape sparse_embedding1 reshape1
InnerProduct reshape1 fc1
ReLU fc1 relu1
InnerProduct relu1 fc2
ReLU fc2 relu2
InnerProduct relu2 fc3
BinaryCrossEntropyLoss fc3, label loss
--------------------------------------------------------------------------------
=====================================Model Fit====================================
[12d22h90m70s][HUGECTR][INFO]: Use non-epoch mode with number of iterations: 2000
[12d22h90m70s][HUGECTR][INFO]: Training batchsize: 2048, evaluation batchsize: 2048
[12d22h90m70s][HUGECTR][INFO]: Evaluation interval: 200, snapshot interval: 1900
[12d22h90m70s][HUGECTR][INFO]: Iter: 100 Time(100 iters): 0.052433s Loss: 0.584569 lr:0.001000
[12d22h90m70s][HUGECTR][INFO]: Iter: 200 Time(100 iters): 0.050910s Loss: 0.574016 lr:0.001000
[12d22h90m70s][HUGECTR][INFO]: Evaluation, AUC: 0.742104
[12d22h90m70s][HUGECTR][INFO]: Eval Time for 160 iters: 0.037350s
[12d22h90m70s][HUGECTR][INFO]: Iter: 300 Time(100 iters): 0.097618s Loss: 0.567825 lr:0.001000
[12d22h90m70s][HUGECTR][INFO]: Iter: 400 Time(100 iters): 0.050943s Loss: 0.537596 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.759488
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.032945s
[12d22h90m80s][HUGECTR][INFO]: Iter: 500 Time(100 iters): 0.096795s Loss: 0.542408 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 600 Time(100 iters): 0.050967s Loss: 0.542498 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.773175
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.032986s
[12d22h90m80s][HUGECTR][INFO]: Iter: 700 Time(100 iters): 0.085280s Loss: 0.537160 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 800 Time(100 iters): 0.051053s Loss: 0.536568 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.778617
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.044035s
[12d22h90m80s][HUGECTR][INFO]: Iter: 900 Time(100 iters): 0.096313s Loss: 0.522038 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1000 Time(100 iters): 0.061872s Loss: 0.527347 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.784214
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.032451s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1100 Time(100 iters): 0.084576s Loss: 0.539346 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1200 Time(100 iters): 0.050991s Loss: 0.540385 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.785587
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.033604s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1300 Time(100 iters): 0.085920s Loss: 0.526508 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1400 Time(100 iters): 0.050974s Loss: 0.529692 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.790832
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.044729s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1500 Time(100 iters): 0.108554s Loss: 0.512485 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1600 Time(100 iters): 0.050959s Loss: 0.553773 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Evaluation, AUC: 0.792876
[12d22h90m80s][HUGECTR][INFO]: Eval Time for 160 iters: 0.034639s
[12d22h90m80s][HUGECTR][INFO]: Iter: 1700 Time(100 iters): 0.086896s Loss: 0.511820 lr:0.001000
[12d22h90m80s][HUGECTR][INFO]: Iter: 1800 Time(100 iters): 0.050913s Loss: 0.529587 lr:0.001000
[12d22h90m90s][HUGECTR][INFO]: Evaluation, AUC: 0.794456
[12d22h90m90s][HUGECTR][INFO]: Eval Time for 160 iters: 0.034695s
[12d22h90m90s][HUGECTR][INFO]: Iter: 1900 Time(100 iters): 0.086743s Loss: 0.520362 lr:0.001000
[12d22h90m90s][HUGECTR][INFO]: Rank0: Dump hash table from GPU0
[12d22h90m90s][HUGECTR][INFO]: Rank0: Write hash table <key,value> pairs to file
[12d22h90m90s][HUGECTR][INFO]: Done
###Markdown
We trained our model. After training terminates, we can see that two `.model` files are generated. We need to move them inside the `1` folder under the `movielens_hugectr` folder. Let's create this folder first.
###Code
!mkdir -p /model/movielens_hugectr/1
###Output
_____no_output_____
###Markdown
Now we move our saved `.model` files inside `1` folder.
###Code
!mv *.model /model/movielens_hugectr/1/
###Output
_____no_output_____
###Markdown
Note that these stored `.model` files will be used during inference. Now we have to create a JSON file for inference with a configuration similar to our training file. We should remove the solver and optimizer clauses and add the inference clause to the JSON file. The paths of the stored dense model and sparse model(s) should be specified in dense_model_file and sparse_model_file within the inference clause. We need to make some modifications to the data layer in the layers clause. In addition, we need to change the last layer from BinaryCrossEntropyLoss to Sigmoid. The rest of "layers" should be exactly the same as in the training model.py file. Now let's create a `movielens.json` file inside the `movielens/1` folder. We have already retrieved the cardinality of each categorical column using the `get_embedding_sizes` function above. We will use these cardinalities below in the `movielens.json` file as well.
###Code
%%writefile '/model/movielens_hugectr/1/movielens.json'
{
"inference": {
"max_batchsize": 64,
"hit_rate_threshold": 0.6,
"dense_model_file": "/model/models/movielens/1/_dense_1900.model",
"sparse_model_file": "/model/models/movielens/1/0_sparse_1900.model",
"label": 1,
"input_key_type": "I64"
},
"layers": [
{
"name": "data",
"type": "Data",
"format": "Parquet",
"slot_size_array": [56586, 162542],
"source": "/model/data/train/_file_list.txt",
"eval_source": "/model/data/valid/_file_list.txt",
"check": "Sum",
"label": {"top": "label", "label_dim": 1},
"dense": {"top": "dense", "dense_dim": 0},
"sparse": [
{
"top": "data1",
"type": "DistributedSlot",
"max_feature_num_per_sample": 3,
"slot_num": 2
}
]
},
{
"name": "sparse_embedding1",
"type": "DistributedSlotSparseEmbeddingHash",
"bottom": "data1",
"top": "sparse_embedding1",
"sparse_embedding_hparam": {
"max_vocabulary_size_per_gpu": 219128,
"embedding_vec_size": 16,
"combiner": 0
}
},
{
"name": "reshape1",
"type": "Reshape",
"bottom": "sparse_embedding1",
"top": "reshape1",
"leading_dim": 32
},
{
"name": "fc1",
"type": "InnerProduct",
"bottom": "reshape1",
"top": "fc1",
"fc_param": {"num_output": 128}
},
{"name": "relu1", "type": "ReLU", "bottom": "fc1", "top": "relu1"},
{
"name": "fc2",
"type": "InnerProduct",
"bottom": "relu1",
"top": "fc2",
"fc_param": {"num_output": 128}
},
{"name": "relu2", "type": "ReLU", "bottom": "fc2", "top": "relu2"},
{
"name": "fc3",
"type": "InnerProduct",
"bottom": "relu2",
"top": "fc3",
"fc_param": {"num_output": 1}
},
{"name": "sigmoid", "type": "Sigmoid", "bottom": "fc3", "top": "sigmoid"}
]
}
###Output
Overwriting /model/movielens_hugectr/1/movielens.json
###Markdown
Now we can save our models to be deployed at the inference stage. To do so, we will use the `export_hugectr_ensemble` method below. With this method, we can generate the `config.pbtxt` files automatically for each model. In doing so, we should also create a `hugectr_params` dictionary and define parameters such as where the `movielens.json` file will be read, `slots`, which corresponds to the number of categorical features, `embedding_vector_size`, `max_nnz`, and `n_outputs`, which is the number of outputs. The script below creates an ensemble Triton server model where - `workflow` is the NVTabular workflow used in preprocessing, - `hugectr_model_path` is the HugeCTR model that should be served. This path includes the `.model` files. - `name` is the base name of the various Triton models - `output_path` is the path where the model will be saved to.
###Code
from nvtabular.inference.triton import export_hugectr_ensemble
hugectr_params = dict()
hugectr_params["config"] = "/model/models/movielens/1/movielens.json"
hugectr_params["slots"] = 2
hugectr_params["max_nnz"] = 2
hugectr_params["embedding_vector_size"] = 16
hugectr_params["n_outputs"] = 1
export_hugectr_ensemble(
workflow=workflow,
hugectr_model_path="/model/movielens_hugectr/1/",
hugectr_params=hugectr_params,
name="movielens",
output_path="/model/models/",
label_columns=["rating"],
cats=CATEGORICAL_COLUMNS,
max_batch_size=64,
)
###Output
_____no_output_____ |
1_Basics.ipynb | ###Markdown
Introduction to Python for Data Sciences (Franck Iutzeler) Chap. 1 - The Basics 0 - Installation and Quick Start [Python](https://fr.wikipedia.org/wiki/Python_(langage)) is a programming language that is widely spread nowadays. It is used in many different domains thanks to its versatility. It is an interpreted language, meaning that the code is not compiled but *translated* by a running Python engine. Installation See https://www.python.org/about/gettingstarted/ for how to install Python (but it is probably already installed). In Data Science, it is common to use [Anaconda](https://www.anaconda.com/products/individual-d) to download and install Python and its environment (see also the [quickstart](https://docs.anaconda.com/anaconda/user-guide/getting-started/)). Writing Code Several options exist, more or less user-friendly. In the python shell The python shell can be launched by typing the command `python` in a terminal (this works on Linux, Mac, and Windows with PowerShell). To exit it, type `exit()`.*Warning:* Python (version 2.x) and Python3 (version 3.x) coexist in some systems as two different programs. The differences appear small but are real, and Python 2 is no longer supported; to be sure to have Python 3, you can type `python3`. From the shell, you can enter Python code that will be executed on the fly as you press Enter. As long as you are in the same shell, you keep your variables, but as soon as you exit it, everything is lost. It might not be the best option... From a file You can write your code in a file and then execute it with Python. The extension of Python files is typically `.py`. If you create a file `test.py` (using any text editor) containing the following code:
---
~~~
a = 10
a = a + 7
print(a)
~~~
---
Then, you can run it using the command `python test.py` in a terminal from the *same folder* as the file. This is a convenient solution to run some code but it is probably not the best way to code. Using an integrated development environment (IDE) You can edit your Python code files with IDEs that offer debuggers, syntax checking, etc. Two popular examples are:* [Spyder](https://www.spyder-ide.org/) which is quite similar to MATLAB or RStudio * [VS Code](https://code.visualstudio.com/) which has a very good Python integration while not being restricted to it. Jupyter notebooks [Jupyter notebooks](https://jupyter.org/) are browser-based notebooks for Julia, Python, and R; they correspond to `.ipynb` files. The main features of jupyter notebooks are:* In-browser editing for code, with automatic syntax highlighting, indentation, and tab completion/introspection.* The ability to execute code from the browser and plot inline.* In-browser editing for rich text using the Markdown markup language.* The ability to include mathematical notation within markdown cells using LaTeX, rendered natively by MathJax. Installation In a terminal, enter `python -m pip install notebook` or simply `pip install notebook`. *Note :* Anaconda directly comes with notebooks; they can be launched from the Navigator directly. Use To launch Jupyter, enter `jupyter notebook`. This starts a *kernel* (a process that runs and interfaces the notebook content with an (i)Python shell) and opens a tab in the *browser*. The whole interface of Jupyter notebook is *web-based* and can be accessed at the address http://localhost:8888 . Then, you can either create a new notebook or open a notebook (`.ipynb` file) of the current folder. 
*Note :* Closing the tab *does not terminate* the notebook; it can still be accessed at the above address. To terminate it, use the interface (File -> Close and Halt) or in the kernel terminal type `Ctrl+C`. Remote notebook execution Without any installation, you can:* *view* notebooks using [NBViewer](https://nbviewer.jupyter.org/)* *fully interact* with notebooks (create/modify/run) using [UGA's Jupyter hub](https://jupyterhub.u-ga.fr/), [Binder](https://mybinder.org/) or [Google Colab](https://colab.research.google.com/) Interface Notebook documents contain the inputs and outputs of an interactive python shell as well as additional text that accompanies the code but is not meant for execution. In this way, notebook files can serve as a complete computational record of a session, interleaving executable code with explanatory text, mathematics, and representations of resulting objects. These documents are saved with the `.ipynb` extension. Notebooks may be exported to a range of static formats, including HTML (for example, for blog posts), LaTeX, PDF, etc. by `File->Download as`. Accessing notebooks You can open a notebook from the file explorer in the *Home* (welcome) tab or using `File->Open` from an opened notebook. To create a new notebook, use the `New` button top-right of the *Home* (welcome) tab or use `File->New Notebook` from an opened notebook; the programming language will be asked. Editing notebooks You can modify the title (that is, the file name) by clicking on it next to the jupyter logo. The notebooks are a succession of *cells*, which can be of four types:* `code` for python code (as in ipython)* `markdown` for text in Markdown formatting (see this [Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet)). You may additionally use HTML and Latex math formulas.* `raw` and `heading` are less used, for raw text and titles Cells You can *edit* a cell by double-clicking on it. You can *run* a cell by using the menu or typing `Ctrl+Enter` (you can also run all cells, or all cells above a certain point). If it is a text cell, it will be formatted. If it is a code cell, it will run as if it was entered in an ipython shell, which means all previous actions, functions, and variables defined are persistent. To get a clean slate, you have to *restart the kernel* by using `Kernel->Restart`. Useful commands* `Tab` autocompletes* `Shift+Tab` gives the docstring of the input function* `?` returns the help 1- Numbers and Variables Variables
###Code
2 + 2 + 1 # comment
a = 4
print(a)
print(type(a))
a,x = 4, 9000
print(a)
print(x)
###Output
4
9000
###Markdown
Variable names can contain `a-z`, `A-Z`, `0-9` and some special characters such as `_` but must always begin with a letter. By convention, variable names are lowercase. Types Variables are *weakly typed* in Python, which means that their type is deduced from the context: the initialization or the types of the variables used to compute them. Observe the following example.
###Code
print("Integer")
a = 3
print(a,type(a))
print("\nFloat")
b = 3.14
print(b,type(b))
print("\nComplex")
c = 3.14 + 2j
print(c,type(c))
print(c.real,type(c.real))
print(c.imag,type(c.imag))
###Output
Integer
3 <class 'int'>
Float
3.14 <class 'float'>
Complex
(3.14+2j) <class 'complex'>
3.14 <class 'float'>
2.0 <class 'float'>
###Markdown
This typing can lead to some variables having unwanted types, which can be resolved by *casting*.
###Code
d = 1j*1j
print(d,type(d))
d = d.real
print(d,type(d))
d = int(d)
print(d,type(d))
e = 10/3
print(e,type(e))
f = (10/3)/(10/3)
print(f,type(f))
f = int((10/3)/(10/3))
print(f,type(f))
###Output
3.3333333333333335 <class 'float'>
1.0 <class 'float'>
1 <class 'int'>
###Markdown
Operations on numbers The usual operations are: * Multiplication and Division with respectively `*` and `/` * Exponent with `**` * Modulo with `%`
###Code
print(7 * 3., type(7 * 3.)) # int x float -> float
print(3/2, type(3/2)) # Warning: int in Python 2, float in Python 3
print(3/2., type(3/2.)) # To be sure
print(2**10, type(2**10))
print(8%2, type(8%2))
###Output
0 <class 'int'>
###Markdown
Booleans Boolean is the type of the variables `True` and `False` and is thus extremely useful when coding. * They can be obtained by comparisons `>`, `>=` (greater, greater or equal), `<`, `<=` (smaller, smaller or equal), or `==`, `!=` (equality, inequality).* They can be manipulated by the logical operations `and`, `not`, `or`.
###Code
print('2 > 1\t', 2 > 1)
print('2 > 2\t', 2 > 2)
print('2 >= 2\t',2 >= 2)
print('2 == 2\t',2 == 2)
print('2 == 2.0',2 == 2.0)
print('2 != 1.9',2 != 1.9)
print(True and False)
print(True or True)
print(not False)
###Output
False
True
True
###Markdown
Lists Lists are the base element for sequences of variables in Python; they are themselves a variable type. * The syntax to write them is `[ ... , ... ]`* The types of the elements may not be all the same* The indices begin at $0$ (`l[0]` is the first element of `l`)* Lists can be nested (lists of lists of ...)*Warning:* Another type called *tuple* with the syntax `( ... , ... )` exists in Python. It has almost the same structure as a list, with the notable exception that one cannot add or remove elements from a tuple. We will see them briefly later
###Code
l = [1, 2, 3, [4,8] , True , 2.3]
print(l, type(l))
print(l[0],type(l[0]))
print(l[3],type(l[3]))
print(l[3][1],type(l[3][1]))
print(l)
print(l[4:]) # l[4:] is l from the position 4 (included)
print(l[:5]) # l[:5] is l up to position 5 (excluded)
print(l[4:5]) # l[4:5] is l between 4 (included) and 5 (excluded) so just 4
print(l[1:6:2]) # l[1:6:2] is l between 1 (included) and 6 (excluded) by steps of 2 thus 1,3,5
print(l[::-1]) # reversed order
print(l[-1]) # last element
###Output
[1, 2, 3, [4, 8], True, 2.3]
[True, 2.3]
[1, 2, 3, [4, 8], True]
[True]
[2, [4, 8], 2.3]
[2.3, True, [4, 8], 3, 2, 1]
2.3
###Markdown
Operations on lists One can easily add, insert, remove, count, or test if an element is in a list
###Code
l.append(10) # Add an element to l (the list is not copied, it is actually l that is modified)
print(l)
l.insert(1,'u') # Insert an element at position 1 in l (the list is not copied, it is actually l that is modified)
print(l)
l.remove(10) # Remove the first element 10 of l
print(l)
print(len(l)) # length of a list
print(2 in l) # test if 2 is in l
###Output
7
True
###Markdown
Handling lists Lists are *pointer*-like types, meaning that if you write `l2=l`, you *do not copy* `l` to `l2` but rather copy the pointer, so modifying one will modify the other. The proper way to copy a list is to use the dedicated `copy` method for list variables.
###Code
l2 = l
l.append('Something')
print(l,l2)
l3 = list(l) # l.copy() works in Python 3
l.remove('Something')
print(l,l3)
###Output
[1, 'u', 2, 3, [4, 8], True, 2.3] [1, 'u', 2, 3, [4, 8], True, 2.3, 'Something']
###Markdown
You can have empty lists and concatenate lists by simply using the + operator, or even repeat them with * .
###Code
l4 = []
l5 =[4,8,10.9865]
print(l+l4+l5)
print(l5*3)
###Output
[1, 'u', 2, 3, [4, 8], True, 2.3, 4, 8, 10.9865]
[4, 8, 10.9865, 4, 8, 10.9865, 4, 8, 10.9865]
###Markdown
Tuples, Dictionaries [*]* Tuples are similar to lists but are created with `(...,...)` or simply commas. They cannot be changed once created.
###Code
t = (1,'b',876876.908)
print(t,type(t))
print(t[0])
a,b = 12,[987,98987]
u = a,b
print(a,b,u)
try:
u[1] = 2
except Exception as error:
print(error)
###Output
'tuple' object does not support item assignment
###Markdown
* Dictionaries are aimed at storing values of the form *key-value* with the syntax `{key1 : value1, ...}`. This type is often used as a return type in libraries.
###Code
d = {"param1" : 1.0, "param2" : True, "param3" : "red"}
print(d,type(d))
print(d["param1"])
d["param1"] = 2.4
print(d)
###Output
1.0
{'param1': 2.4, 'param2': True, 'param3': 'red'}
###Markdown
Strings and text formatting* Strings are delimited with (double) quotes. They can be handled globally the same way as lists (see above).* print displays (tuples of) variables (not necessarily strings).* To include variables into strings, it is preferable to use the format method.*Warning:* text formatting, and notably the `print` method, is one of the major differences between Python 2 and Python 3. The method presented here is clean and works in both versions.
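For instance, here is a small sketch of list-style indexing and slicing applied to a string (the string `word` below is defined only for this illustration):
```
word = "Python"
print(word[1:4])   # "yth": characters 1 (included) to 4 (excluded)
print(word[::-1])  # "nohtyP": reversed order
```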
###Code
s = "test"
print(s,type(s))
print(s[0])
print(s + "42")
print(s,42)
print(s+"42")
try:
print(s+42)
except Exception as error:
print(error)
###Output
can only concatenate str (not "int") to str
###Markdown
The `format` method
###Code
print( "test {}".format(42) )
print( "test with an int {:d}, a float {} (or {:e} which is roughly {:.1f})".format(4 , 3.141 , 3.141 , 3.141 ))
###Output
test with an int 4, a float 3.141 (or 3.141000e+00 which is roughly 3.1)
###Markdown
2- Branching and Loops If, Elif, Else In Python, the formulation for branching is the `if:` condition (mind the `:`) followed by an indentation of *one tab* that represents what is executed if the condition is true. **The indentation is primordial and at the core of Python.**
###Code
statement1 = False
statement2 = False
if statement1:
print("statement1 is True")
elif statement2:
print("statement2 is True")
else:
print("statement1 and statement2 are False")
statement1 = statement2 = True
if statement1:
if statement2:
print("both statement1 and statement2 are True")
if statement1:
if statement2: # Bad indentation!
#print("both statement1 and statement2 are True") # Uncommenting Would cause an error
print("here it is ok")
print("after the previous line, here also")
statement1 = True
if statement1:
print("printed if statement1 is True")
print("still inside the if block")
statement1 = False
if statement1:
print("printed if statement1 is True")
print("outside the if block")
###Output
outside the if block
###Markdown
For loop The syntax of `for` loops is `for x in something:` followed by an indentation of one tab which represents what will be executed. The `something` above can be of various natures: a list, a dictionary, etc.
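Besides lists (shown in the next cell), a loop can also run over a dictionary; here is a minimal sketch (the dictionary `d` is created only for this example, and `d.items()` yields its key/value pairs):
```
d = {"param1": 1.0, "param2": True}
for key, value in d.items():
    print(key, value)
```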
###Code
for x in [1, 2, 3]:
print(x)
sentence = ""
for word in ["Python", "for", "data", "Science"]:
sentence = sentence + word + " "
print(sentence)
###Output
Python for data Science
###Markdown
A useful function is `range`, which generates sequences of numbers that can be used in loops.
###Code
print("Range (from 0) to 4 (excluded) ")
for x in range(4):
print(x)
print("Range from 2 (included) to 6 (excluded) ")
for x in range(2,6):
print(x)
print("Range from 1 (included) to 12 (excluded) by steps of 3 ")
for x in range(1,12,3):
print(x)
###Output
Range (from 0) to 4 (excluded)
0
1
2
3
Range from 2 (included) to 6 (excluded)
2
3
4
5
Range from 1 (included) to 12 (excluded) by steps of 3
1
4
7
10
###Markdown
If the index is needed along with the value, the function `enumerate` is useful.
###Code
for idx, x in enumerate(range(-3,3)):
print(idx, x)
###Output
0 -3
1 -2
2 -1
3 0
4 1
5 2
###Markdown
While loop Similarly to `for` loops, the syntax is `while condition:` followed by an indentation of one tab which represents what will be executed.
###Code
i = 0
while i<5:
print(i)
i+=1
###Output
0
1
2
3
4
###Markdown
Try [*] When a command may fail, you can `try` to execute it and optionally catch the `Exception` (i.e. the error).
###Code
a = [1,2,3]
print(a)
try:
a[1] = 3
print("command ok")
except Exception as error:
print(error)
print(a) # The command went through
try:
a[6] = 3
print("command ok")
except Exception as error:
print(error)
print(a) # The command failed
###Output
[1, 2, 3]
command ok
[1, 3, 3]
list assignment index out of range
[1, 3, 3]
###Markdown
3- Functions In Python, a function is defined as `def function_name(function_arguments):` followed by an indentation representing what is inside the function. (No return arguments are provided a priori.)
###Code
def fun0():
print("\"fun0\" just prints")
fun0()
###Output
"fun0" just prints
###Markdown
A docstring can be added to document the function; it will appear when calling `help`.
###Code
def fun1(l):
"""
Prints a list and its length
"""
print(l, " is of length ", len(l))
fun1([1,'iuoiu',True])
help(fun1)
###Output
Help on function fun1 in module __main__:
fun1(l)
Prints a list and its length
###Markdown
Outputs `return` outputs a variable, tuple, dictionary, ...
###Code
def square(x):
"""
Return x squared.
"""
return(x ** 2)
help(square)
res = square(12)
print(res)
def powers(x):
"""
Return the first powers of x.
"""
return(x ** 2, x ** 3, x ** 4)
help(powers)
res = powers(12)
print(res, type(res))
two,three,four = powers(3)
print(three,type(three))
def powers_dict(x):
"""
Return the first powers of x as a dictionary.
"""
return{"two": x ** 2, "three": x ** 3, "four": x ** 4}
res = powers_dict(12)
print(res, type(res))
print(res["two"],type(res["two"]))
###Output
{'two': 144, 'three': 1728, 'four': 20736} <class 'dict'>
144 <class 'int'>
###Markdown
Arguments It is possible to * give the arguments in any order, provided that you write the corresponding argument variable name * set default values for variables so that they become optional
###Code
def fancy_power(x, p=2, debug=False):
"""
Here is a fancy version of power that computes the square of the argument or other powers if p is set
"""
if debug:
print( "\"fancy_power\" is called with x =", x, " and p =", p)
return(x**p)
print(fancy_power(5))
print(fancy_power(5,p=3))
res = fancy_power(p=8,x=2,debug=True)
print(res)
###Output
"fancy_power" is called with x = 2 and p = 8
256
###Markdown
4- Classes [*] Classes are at the core of *object-oriented* programming; they are used to represent an object with related *attributes* (variables) and *methods* (functions). They are defined like functions but with the keyword `class`, as in `class my_class(object):`, followed by an indentation. The definition of a class usually contains some methods:* The first argument of a method must be `self`, a reference to the object itself.* Some method names have a specific meaning: * `__init__`: method executed at the creation of the object * `__str__` : method executed to represent the object as a string, for instance when the object is passed to the function `print`
###Code
class Point(object):
"""
Class of a point in the 2D plane.
"""
def __init__(self, x=0.0, y=0.0):
"""
Creation of a new point at position (x, y).
"""
self.x = x
self.y = y
def translate(self, dx, dy):
"""
Translate the point by (dx , dy).
"""
self.x += dx
self.y += dy
def __str__(self):
return("Point: ({:.2f}, {:.2f})".format(self.x, self.y))
p1 = Point()
print(p1)
p1.translate(3,2)
print(p1)
p2 = Point(1.2,3)
print(p2)
###Output
Point: (0.00, 0.00)
Point: (3.00, 2.00)
Point: (1.20, 3.00)
###Markdown
5- Reading and writing files `open` returns a file object, and is most commonly used with two arguments: `open(filename, mode)`. The first argument is a string containing the filename. The second argument is another string containing a few characters describing the way in which the file will be used (optional; 'r' will be assumed if it's omitted):* 'r' when the file will only be read* 'w' for only writing (an existing file with the same name will be erased)* 'a' opens the file for appending; any data written to the file is automatically added to the end
###Code
f = open('./data/test.txt', 'w')
print(f)
###Output
<_io.TextIOWrapper name='./data/test.txt' mode='w' encoding='UTF-8'>
###Markdown
`f.write(string)` writes the contents of string to the file.
###Code
f.write("This is a test\n")
f.close()
###Output
_____no_output_____
###Markdown
*Warning:* For the file to actually be written, and to be able to open and modify it again without mistakes, it is essential to close the file handle with `f.close()`. `f.read()` will read an entire file and put the pointer at the end.
###Code
f = open('./data/text.txt', 'r')
f.read()
f.read()
###Output
_____no_output_____
###Markdown
The end of the file has been reached, so the command returns ''. To get back to the top, use `f.seek(offset, from_what)`. The position is computed by adding `offset` to a reference point; the reference point is selected by the `from_what` argument. A `from_what` value of 0 measures from the beginning of the file, 1 uses the current file position, and 2 uses the end of the file as the reference point. `from_what` can be omitted and defaults to 0, using the beginning of the file as the reference point. Thus `f.seek(0)` goes to the top.
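As a small sketch of the other reference points (reusing the same file object `f`; note that for a file opened in text mode, only offsets of 0, or positions returned by `f.tell()`, are reliably supported):
```
f.seek(0, 2)   # jump to the end of the file
f.read()       # returns '' since we are at the end
f.seek(0)      # back to the top (equivalent to f.seek(0, 0))
```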
###Code
f.seek(0)
###Output
_____no_output_____
###Markdown
`f.readline()` reads a single line from the file; a newline character (\n) is left at the end of the string
###Code
f.readline()
f.readline()
###Output
_____no_output_____
###Markdown
For reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:
###Code
f.seek(0)
for line in f:
print(line)
f.close()
###Output
This is an example file
Made specially for this course
This is already the third line
Line 4
THE END
###Markdown
6- Exercises > **Exercise 1:** Odd or Even> > The code snippet below enables the user to enter a number. Check if this number is odd or even. Optionally, handle bad inputs (characters, floats, signs, etc.)
###Code
num = input("Enter a number: ")
print(num)
###Output
Enter a number: 3
3
###Markdown
--- > **Exercise 2:** Fibonacci >> The Fibonacci sequence is a sequence of numbers where the next number in the sequence is the sum of the previous two numbers in the sequence. The sequence looks like this: 1, 1, 2, 3, 5, 8, 13. Write a function that generates a given number of elements of the Fibonacci sequence. --- > **Exercise 3:** Implement *quicksort*> > The [wikipedia page](http://en.wikipedia.org/wiki/Quicksort) describing this sorting algorithm gives the following pseudocode: function quicksort('array') if length('array') <= 1 return 'array' select and remove a pivot value 'pivot' from 'array' create empty lists 'less' and 'greater' for each 'x' in 'array' if 'x' <= 'pivot' then append 'x' to 'less' else append 'x' to 'greater' return concatenate(quicksort('less'), 'pivot', quicksort('greater'))> Create a function that sorts a list using quicksort
###Code
def quicksort(l):
# ...
return None
res = quicksort([-2, 3, 5, 1, 3])
print(res)
###Output
None
|
Project Artifacts/prescription_parser_v1.ipynb | ###Markdown
Data Input Upload a file with text.
###Code
from google.colab import files
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
Install Pre-requisites: - the tesseract Linux binary - pytesseract
###Code
!sudo apt install tesseract-ocr -y
!pip install pytesseract
###Output
Requirement already satisfied: pytesseract in /usr/local/lib/python3.6/dist-packages (0.3.6)
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from pytesseract) (7.0.0)
###Markdown
OCR
###Code
import pytesseract
from PIL import Image
file_content = pytesseract.image_to_string(Image.open('prescription_melissa.jpeg'))
###Output
_____no_output_____
###Markdown
Post-processing to Google Sheet
###Code
# Run this cell only once
from collections import OrderedDict
final_prescriptions = OrderedDict()
def parse_prescription(file_content):
    # split the OCR text into stripped lines
    file_content_str = [elem.strip() for elem in file_content.split('\n')]
    prescription_row = {}
    for elem in file_content_str:
        if ':' in elem:
            # split on the first colon only, so values containing ':' stay intact
            key, val = elem.split(':', 1)
            prescription_row[key.strip()] = val.strip()
    return prescription_row
# file_content = open('out.txt.txt', 'rb').readlines()
prescription_row=parse_prescription(file_content)
final_prescriptions[len(final_prescriptions)] = prescription_row
import pandas as pd
df = pd.DataFrame.from_dict(final_prescriptions, orient="index")
df
#Export to Google Sheets / Part 1 Auth
!pip install --upgrade --quiet gspread
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
#Export to Google Sheets / Part 2 Export
from gspread_dataframe import get_as_dataframe, set_with_dataframe
patient_data = gc.open_by_url('https://docs.google.com/spreadsheets/d/1tbLYRSAfDMbr8cueQInhWVA6ISAD2c8kc9X7EonvOX8/edit#gid=0')
ws1 = patient_data.get_worksheet(0)
ws1.update_cell(2,2,df.iloc[0]['Name'])
ws1.update_cell(2,3,df.iloc[0]['Zip'])
ws1.update_cell(2,4,df.iloc[0]['ePharmacy'])
ws1.update_cell(2,5,df.iloc[0]['Mail'])
ws1.update_cell(2,6,df.iloc[0]['Drug'])
ws1.update_cell(2,7,df.iloc[0]['Medication Plan'])
ws1.update_cell(2,8,df.iloc[0]['Package Size'])
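# Alternative sketch (left commented out): the `set_with_dataframe` helper imported
# above can write the whole DataFrame in one call instead of cell-by-cell updates,
# e.g. set_with_dataframe(ws1, df)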
###Output
_____no_output_____ |
math/.ipynb_checkpoints/Math24_Dot_Product_Solutions-checkpoint.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Solutions for Vectors: Dot (Scalar) Products Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$. Solution
###Code
# let's define the vectors
v=[-3,4,-5,6]
u=[4,3,6,5]
vu = 0
for i in range(len(v)):
vu = vu + v[i]*u[i]
print(v,u,vu)
###Output
[-3, 4, -5, 6] [4, 3, 6, 5] 0
###Markdown
Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python. Solution
###Code
u = [-3,-4]
uu = u[0]*u[0] + u[1]*u[1]
print(u,u,uu)
###Output
[-3, -4] [-3, -4] 25
###Markdown
Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $. Solution
###Code
u = [-3,-4]
neg_u=[3,4]
v=[-4,3]
neg_v=[4,-3]
# let's define a function for inner product
def dot(v_one,v_two):
summation = 0
for i in range(len(v_one)):
summation = summation + v_one[i]*v_two[i] # adding up pairwise multiplications
return summation # return the inner product
print("the dot product of u and -v (",u," and ",neg_v,") is",dot(u,neg_v))
print("the dot product of -u and v (",neg_u," and ",v,") is",dot(neg_u,v))
print("the dot product of -u and -v (",neg_u," and ",neg_v,") is",dot(neg_u,neg_v))
###Output
_____no_output_____
###Markdown
Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results. Solution
###Code
# let's define a function for inner product
def dot(v_one,v_two):
summation = 0
for i in range(len(v_one)):
summation = summation + v_one[i]*v_two[i] # adding up pairwise multiplications
return summation # return the inner product
v = [-1,2,-3,4]
v_neg_two=[2,-4,6,-8]
u=[-2,-1,5,2]
u_three=[-6,-3,15,6]
print("the dot product of v and u is",dot(v,u))
print("the dot product of -2v and 3u is",dot(v_neg_two,u_three))
###Output
_____no_output_____
###Markdown
Abuzer Yakaryilmaz | March 26, 2019 (updated) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Solutions for Vectors: Dot (Scalar) Products Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$. Solution
###Code
# let's define the vectors
v=[-3,4,-5,6]
u=[4,3,6,5]
vu = 0
for i in range(len(v)):
vu = vu + v[i]*u[i]
print(v,u,vu)
###Output
_____no_output_____
###Markdown
Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python. Solution
###Code
u = [-3,-4]
uu = u[0]*u[0] + u[1]*u[1]
print(u,u,uu)
###Output
_____no_output_____
###Markdown
Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $. Solution
###Code
u = [-3,-4]
neg_u=[3,4]
v=[-4,3]
neg_v=[4,-3]
# let's define a function for inner product
def dot(v_one,v_two):
summation = 0
for i in range(len(v_one)):
summation = summation + v_one[i]*v_two[i] # adding up pairwise multiplications
return summation # return the inner product
print("the dot product of u and -v (",u," and ",neg_v,") is",dot(u,neg_v))
print("the dot product of -u and v (",neg_u," and ",v,") is",dot(neg_u,v))
print("the dot product of -u and -v (",neg_u," and ",neg_v,") is",dot(neg_u,neg_v))
###Output
_____no_output_____
###Markdown
Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results. Solution
###Code
# let's define a function for inner product
def dot(v_one,v_two):
summation = 0
for i in range(len(v_one)):
summation = summation + v_one[i]*v_two[i] # adding up pairwise multiplications
return summation # return the inner product
v = [-1,2,-3,4]
v_neg_two=[2,-4,6,-8]
u=[-2,-1,5,2]
u_three=[-6,-3,15,6]
print("the dot product of v and u is",dot(v,u))
print("the dot product of -2v and 3u is",dot(v_neg_two,u_three))
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Solutions for Vectors: Dot (Scalar) Products Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$. Solution
###Code
# let's define the vectors
v=[-3,4,-5,6]
u=[4,3,6,5]
vu = 0
for i in range(len(v)):
vu = vu + v[i]*u[i]
print(v,u,vu)
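# Added alternative (same result): the dot product written as a one-line sum over paired entries.
print("one-liner check:", sum(a*b for a, b in zip(v, u)))  # should also print 0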
###Output
_____no_output_____
###Markdown
Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2-dimensional vector. Find $ \dot{u}{u} $ in Python. Solution
###Code
u = [-3,-4]
uu = u[0]*u[0] + u[1]*u[1]
print(u,u,uu)
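# Added note: dot(u,u) is the squared length of u, so its square root is the length (norm) of u.
from math import sqrt
print("the length of u is", sqrt(uu))  # sqrt(25) = 5.0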
###Output
_____no_output_____
###Markdown
Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $. Solution
###Code
u = [-3,-4]
neg_u=[3,4]
v=[-4,3]
neg_v=[4,-3]
# let's define a function for inner product
def dot(v_one,v_two):
summation = 0
for i in range(len(v_one)):
summation = summation + v_one[i]*v_two[i] # adding up pairwise multiplications
return summation # return the inner product
print("the dot product of u and -v (",u," and ",neg_v,") is",dot(u,neg_v))
print("the dot product of -u and v (",neg_u," and ",v,") is",dot(neg_u,v))
print("the dot product of -u and -v (",neg_u," and ",neg_v,") is",dot(neg_u,neg_v))
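# Optional cross-check (added; assumes numpy is installed): np.dot gives the same inner products.
import numpy as np
print("numpy check:", np.dot(u, neg_v), np.dot(neg_u, v), np.dot(neg_u, neg_v))  # all three should be 0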
###Output
_____no_output_____
###Markdown
Task 4 Find the dot product of $ v $ and $ u $ in Python:$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$ Find the dot product of $ -2v $ and $ 3u $ in Python. Compare both results. Solution
###Code
# let's define a function for inner product
def dot(v_one,v_two):
summation = 0
for i in range(len(v_one)):
summation = summation + v_one[i]*v_two[i] # adding up pairwise multiplications
return summation # return the inner product
v = [-1,2,-3,4]
v_neg_two=[2,-4,6,-8]
u=[-2,-1,5,2]
u_three=[-6,-3,15,6]
print("the dot product of v and u is",dot(v,u))
print("the dot product of -2v and 3u is",dot(v_neg_two,u_three))
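# Added check: the dot product is bilinear, so dot(-2v,3u) must equal (-2)*3 = -6 times dot(v,u).
print("is dot(-2v,3u) equal to -6*dot(v,u)?", dot(v_neg_two, u_three) == -6 * dot(v, u))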
###Output
_____no_output_____
notebooks/test_models_checklist.ipynb | ###Markdown
Random Seed 0 - Vanilla
###Code
# Assumed setup (not shown in this excerpt): BatchedInference, save_test_results, and test_suite_path
# come from earlier project-specific cells/modules; the imports below cover the third-party libraries
# used in this cell.
import numpy as np
import wandb
from checklist.test_suite import TestSuite
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs0-shuffle-train/albert-large-v2_6.pt"
pipeline = BatchedInference.from_model_name(
    model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the input sentences into batches so the whole test suite is not pushed through the model at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Stack batch results into a single 1-D label array and a 2-D (n_examples x n_classes) confidence matrix.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
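# Illustrative usage (added; the example sentences below are made up): the test suite calls this
# wrapper with a list of strings and gets back the predicted labels plus per-class confidences, e.g.:
#   preds, confs = pred_and_conf(["a great movie", "a terrible movie"])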
results_path = "results/checklist/rs0_shuffle_train_6_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs0_shuffle_train_6_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 12 (0.9%)
Example fails:
0.5 I abhorred this actor.
----
0.5 We abhorred that actor.
----
0.8 I abhorred that director.
----
add positive phrases
Test cases: 500
Fails (rate): 3 (0.6%)
Example fails:
0.4 The lack of pace kills it , although , in a movie about cancer , this might be apt .
0.1 The lack of pace kills it , although , in a movie about cancer , this might be apt. I would watch this again.
----
0.1 The central story lacks punch .
0.0 The central story lacks punch. I would watch this again.
----
0.9 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more .
0.1 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more. It is good.
----
add negative phrases
Test cases: 500
Fails (rate): 20 (4.0%)
Example fails:
0.9 Hands down the year 's most thought-provoking film .
1.0 Hands down the year 's most thought-provoking film. I abhor it.
----
0.9 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more .
1.0 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more. I hate it.
1.0 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more. I despise it.
----
0.0 Warmed-over Tarantino by way of wannabe Elmore Leonard .
0.1 Warmed-over Tarantino by way of wannabe Elmore Leonard. I dread it.
0.1 Warmed-over Tarantino by way of wannabe Elmore Leonard. I abhor it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 47 (9.4%)
Example fails:
0.2 The Transporter is as lively and as fun as it is unapologetically dumb
0.6 Life Transporter is as lively and as fun as it is unapologetically dumb
----
0.4 Like a south-of-the-border Melrose Place .
1.0 Like that south-of-the-border Melrose Place .
1.0 Like your south-of-the-border Melrose Place .
----
1.0 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Allen personifies .
0.0 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and un determined TV amiability that Allen personifies .
----
NER
Change names
Test cases: 147
Fails (rate): 11 (7.5%)
Example fails:
0.4 Imagine if you will a Tony Hawk skating video interspliced with footage from Behind Enemy Lines and set to Jersey shore techno .
0.6 Imagine if you will a Christopher Ross skating video interspliced with footage from Behind Enemy Lines and set to Jersey shore techno .
----
0.2 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Spencer Tracy .
0.8 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Joshua Nelson .
0.8 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Christopher Ross .
----
0.8 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Robin Williams 's lecture so I could listen to a teacher with humor , passion , and verve .
0.1 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Melissa Reyes lecture so I could listen to a teacher with humor , passion , and verve .
0.1 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Jennifer Cruz lecture so I could listen to a teacher with humor , passion , and verve .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 17 (10.8%)
Example fails:
0.1 Elling , portrayed with quiet fastidiousness by Per Christian Ellefsen , is a truly singular character , one whose frailties are only slightly magnified versions of the ones that vex nearly everyone .
0.5 Elling , portrayed with quiet fastidiousness by Per Crispin Glover , is a truly singular character , one whose frailties are only slightly magnified versions of the ones that vex nearly everyone .
----
0.4 Who knows what exactly Godard is on about in this film , but his words and images do n't have to add up to mesmerize you .
1.0 Who knows what exactly Yvan Attal is on about in this film , but his words and images do n't have to add up to mesmerize you .
1.0 Who knows what exactly Michel Gondry is on about in this film , but his words and images do n't have to add up to mesmerize you .
----
0.4 A sensual performance from Abbass buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.6 A sensual performance from Carol Kane buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 19 (15.4%)
Example fails:
1.0 Flat , but with a revelatory performance by Michelle Williams .
0.1 Flat , but with a revelatory performance by Birot .
----
0.9 To imagine the life of Harry Potter as a martial arts adventure told by a lobotomized Woody Allen is to have some idea of the fate that lies in store for moviegoers lured to the mediocrity that is Kung Pow : Enter the Fist .
0.3 To imagine the life of Harry Potter as a martial arts adventure told by a lobotomized Birot is to have some idea of the fate that lies in store for moviegoers lured to the mediocrity that is Kung Pow : Enter the Fist .
----
0.5 The only thing in Pauline and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.9 The only thing in Smokey Robinson and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.9 The only thing in Gosling and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 22 (17.9%)
Example fails:
1.0 Flat , but with a revelatory performance by Michelle Williams .
0.1 Flat , but with a revelatory performance by Britney .
----
0.3 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Hollywood hoo-ha .
0.9 I kept wishing I was watching a documentary about the wartime Crispin Glover and what they accomplished instead of all this specious Hollywood hoo-ha .
0.7 I kept wishing I was watching a documentary about the wartime Yvan Attal and what they accomplished instead of all this specious Hollywood hoo-ha .
----
0.0 Oedekerk wrote Patch Adams , for which he should not be forgiven .
0.9 Oedekerk wrote Crispin Glover , for which he should not be forgiven .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 11 (7.0%)
Example fails:
0.8 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Steve Irwin are priceless entertainment .
0.4 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Phillip Noyce are priceless entertainment .
0.4 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Ellen Pompeo are priceless entertainment .
----
0.4 A sensual performance from Abbass buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.8 A sensual performance from Gosling buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.7 A sensual performance from Smokey Robinson buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
----
0.7 What you would end up with if you took Orwell , Bradbury , Kafka , George Lucas and the Wachowski Brothers and threw them into a blender .
0.4 What you would end up with if you took Orwell , Bradbury , Kafka , Eric and the Wachowski Brothers and threw them into a blender .
0.4 What you would end up with if you took Orwell , Bradbury , Kafka , Yong Kang and the Wachowski Brothers and threw them into a blender .
----
Change Movie Industries
Test cases: 18
Fails (rate): 3 (16.7%)
Example fails:
0.2 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.7 Home Alone goes Aussiewood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.6 Home Alone goes Taiwood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
----
0.3 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Hollywood hoo-ha .
1.0 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Taiwood hoo-ha .
0.9 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Cantonwood hoo-ha .
----
0.2 Even when foreign directors ... borrow stuff from Hollywood , they invariably shake up the formula and make it more interesting .
0.8 Even when foreign directors ... borrow stuff from Tollywood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Aussiewood , they invariably shake up the formula and make it more interesting .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 507 (23.6%)
Example fails:
0.6 I dislike this movie, I used to enjoy it.
----
0.9 In the past I would welcome this movie, although now I despise it.
----
0.8 I abhor this movie, but in the past I would enjoy it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 240 (17.8%)
Example fails:
1.0 I would never say I like the show.
----
1.0 I would never say I love the show.
----
1.0 I would never say I recommend the movie.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 459 (91.8%)
Example fails:
1.0 I don't think, given that we watched a lot, that the director was beautiful.
----
1.0 I can't say, given all that I've seen over the years, that I welcome that scene.
----
1.0 I wouldn't say, given all that I've seen over the years, that we welcome this scene.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 39 (5.3%)
Example fails:
0.9 This comedy movie was serious, not rib-tickling
----
0.7 This drama movie was funny rather than serious
----
0.9 The drama movie was funny rather than moving
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 2 (0.2%)
Example fails:
0.9 This Aussiewood movie is horrifying
----
0.6 The Aussiewood movie is horrifying
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 0 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs0-swa-linear-60-start2-drop-shuffle/albert-large-v2_7.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the input sentences into batches so the whole test suite is not pushed through the model at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Stack batch results into a single 1-D label array and a 2-D (n_examples x n_classes) confidence matrix.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs0-swa-linear-60-start2-drop-shuffle_7_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs0-swa-linear-60-start2-drop-shuffle_7_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 3 (0.2%)
Example fails:
0.6 We abhor the show.
----
0.6 I abhor the show.
----
0.6 I abhor this director.
----
add positive phrases
Test cases: 500
Fails (rate): 4 (0.8%)
Example fails:
0.8 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more .
0.7 A very stylish but ultimately extremely silly tale ... a slick piece of nonsense but nothing more. I value it.
----
0.8 We need ( Moore 's ) noisy , cocky energy , his passion and class consciousness ; we need his shticks , we need his stones .
0.7 We need ( Moore 's ) noisy , cocky energy , his passion and class consciousness ; we need his shticks , we need his stones. I would watch this again.
----
0.5 So routine , familiar and predictable , it raises the possibility that it wrote itself as a newly automated Final Draft computer program .
0.4 So routine , familiar and predictable , it raises the possibility that it wrote itself as a newly automated Final Draft computer program. I would watch this again.
0.4 So routine , familiar and predictable , it raises the possibility that it wrote itself as a newly automated Final Draft computer program. I recommend it.
----
add negative phrases
Test cases: 500
Fails (rate): 75 (15.0%)
Example fails:
0.9 Sits uneasily as a horror picture ... but finds surprising depth in its look at the binds of a small family .
0.9 Sits uneasily as a horror picture ... but finds surprising depth in its look at the binds of a small family. I abhor it.
----
0.1 There 's no denying the elaborateness of the artist 's conceptions , nor his ability to depict them with outrageous elan , but really the whole series is so much pretentious nonsense , lavishly praised by those who equate obscurity with profundity .
0.3 There 's no denying the elaborateness of the artist 's conceptions , nor his ability to depict them with outrageous elan , but really the whole series is so much pretentious nonsense , lavishly praised by those who equate obscurity with profundity. Never watching this again.
----
0.2 There 's more scatological action in 8 Crazy Nights than a proctologist is apt to encounter in an entire career .
0.4 There 's more scatological action in 8 Crazy Nights than a proctologist is apt to encounter in an entire career. I abhor it.
0.3 There 's more scatological action in 8 Crazy Nights than a proctologist is apt to encounter in an entire career. Never watching this again.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 33 (6.6%)
Example fails:
0.4 As Tweedy talks about canning his stockbroker and repairing his pool , you yearn for a few airborne TV sets or nude groupies on the nod to liven things up .
0.5 As Tweedy talks about canning his stockbroker and repairing his pool , critics yearn for a few airborne TV sets or nude groupies on the nod to liven things up .
----
0.4 We miss the quirky amazement that used to come along for an integral part of the ride .
0.6 We miss that quirky amazement that used to come along for an integral part of that ride .
0.5 We miss the quirky amazement who used to come along for an integral part of the ride .
----
0.8 Absorbing and disturbing -- perhaps more disturbing than originally intended -- but a little clarity would have gone a long way .
0.1 Absorbing was disturbing -- perhaps more disturbing than originally intended -- but a little clarity would have gone a long way .
----
NER
Change names
Test cases: 147
Fails (rate): 8 (5.4%)
Example fails:
0.6 Never quite transcends jokester status ... and the punchline does n't live up to Barry 's dead-eyed , perfectly chilled delivery .
0.4 Never quite transcends jokester status ... and the punchline does n't live up to Aiden 's dead-eyed , perfectly chilled delivery .
0.4 Never quite transcends jokester status ... and the punchline does n't live up to Austin 's dead-eyed , perfectly chilled delivery .
----
0.7 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Steve Irwin are priceless entertainment .
0.5 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of David Taylor are priceless entertainment .
0.5 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Matthew Ross are priceless entertainment .
----
0.9 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Allen personifies .
0.1 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Luke personifies .
0.1 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Luke personifies .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 6 (3.8%)
Example fails:
0.4 Roman Coppola may never become the filmmaker his Dad was , but heck -- few filmmakers will .
0.6 Yvan Attal may never become the filmmaker his Dad was , but heck -- few filmmakers will .
0.5 Mat Hoffman may never become the filmmaker his Dad was , but heck -- few filmmakers will .
----
0.7 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Steve Irwin are priceless entertainment .
0.2 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Jelinek are priceless entertainment .
0.3 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Britney are priceless entertainment .
----
0.6 It is so refreshing to see Robin Williams turn 180 degrees from the string of insultingly innocuous and sappy fiascoes he 's been making for the last several years .
0.3 It is so refreshing to see Jelinek turn 180 degrees from the string of insultingly innocuous and sappy fiascoes he 's been making for the last several years .
0.3 It is so refreshing to see Britney turn 180 degrees from the string of insultingly innocuous and sappy fiascoes he 's been making for the last several years .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 17 (13.8%)
Example fails:
0.3 Jolie 's performance vanishes somewhere between her hair and her lips .
0.8 Merchant Ivory 's performance vanishes somewhere between her hair and her lips .
0.7 Walt Becker 's performance vanishes somewhere between her hair and her lips .
----
0.4 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Spencer Tracy .
0.8 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Eric .
0.7 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Hélène Angel .
----
0.3 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.7 Adam Sandler is to Hélène Angel what a gnat is to a racehorse .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 15 (12.2%)
Example fails:
0.0 Never mind whether you buy the stuff about Barris being a CIA hit man .
0.8 Never mind whether you buy the stuff about Einstein being a CIA hit man .
----
0.3 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.7 Adam Sandler is to Seagal what a gnat is to a racehorse .
0.5 Michel Gondry is to Gary Cooper what a gnat is to a racehorse .
----
0.7 Ethan Hawke has always fancied himself the bastard child of the Beatnik generation and it 's all over his Chelsea Walls .
0.1 Paul Pender has always fancied himself the bastard child of the Beatnik generation and it 's all over his Chelsea Walls .
0.1 Yvan Attal has always fancied himself the bastard child of the Beatnik generation and it 's all over his Chelsea Walls .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 6 (3.8%)
Example fails:
0.4 Roman Coppola may never become the filmmaker his Dad was , but heck -- few filmmakers will .
0.5 Carl Franklin may never become the filmmaker his Dad was , but heck -- few filmmakers will .
----
0.6 It is so refreshing to see Robin Williams turn 180 degrees from the string of insultingly innocuous and sappy fiascoes he 's been making for the last several years .
0.3 It is so refreshing to see Gosling turn 180 degrees from the string of insultingly innocuous and sappy fiascoes he 's been making for the last several years .
0.4 It is so refreshing to see Birot turn 180 degrees from the string of insultingly innocuous and sappy fiascoes he 's been making for the last several years .
----
0.7 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Steve Irwin are priceless entertainment .
0.2 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Ellen Pompeo are priceless entertainment .
0.3 Whether seen on a 10-inch television screen or at your local multiplex , the edge-of-your-seat , educational antics of Birot are priceless entertainment .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.6 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.1 Home Alone goes Ghollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.1 Home Alone goes Nollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 432 (20.1%)
Example fails:
0.5 I used to love this movie, even though now I regret it.
----
0.8 I abhor this movie, even though in the past I would love it.
----
0.4 I used to dread this movie, even though now I recommend it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 150 (11.1%)
Example fails:
0.8 I would never say I like that movie.
----
0.9 I can't say I appreciate this director.
----
1.0 I can't say I love the scene.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 469 (93.8%)
Example fails:
0.7 I don't think, given it's a Friday, that that movie was wonderful.
----
0.7 I wouldn't say, given my history with movies, that we value that movie.
----
0.9 I wouldn't say, given my history with movies, that the is a fantastic director.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 33 (4.5%)
Example fails:
1.0 The comedy movie was serious
----
0.0 This horror movie was frightening
----
1.0 This horror movie was calming
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 1 (0.1%)
Example fails:
0.5 The Aussiewood movie is horrifying
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 1 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs1-shuffle-train/albert-large-v2_2.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the input sentences into batches so the whole test suite is not pushed through the model at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Stack batch results into a single 1-D label array and a 2-D (n_examples x n_classes) confidence matrix.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs1_shuffle_train_2_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs1_shuffle_train_2_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 3 (0.6%)
Example fails:
0.1 Too bad the former Murphy Brown does n't pop Reese back .
0.2 Too bad the former Murphy Brown does n't pop Reese back. I regret it.
----
0.0 Edited and shot with a syncopated style mimicking the work of his subjects , Pray turns the idea of the documentary on its head , making it rousing , invigorating fun lacking any MTV puffery .
0.1 Edited and shot with a syncopated style mimicking the work of his subjects , Pray turns the idea of the documentary on its head , making it rousing , invigorating fun lacking any MTV puffery. I abhor it.
0.1 Edited and shot with a syncopated style mimicking the work of his subjects , Pray turns the idea of the documentary on its head , making it rousing , invigorating fun lacking any MTV puffery. I dread it.
----
0.2 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut .
0.3 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut. I regret it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 36 (7.2%)
Example fails:
0.2 Japan 's premier stylist of sex and blood hits audiences with what may be his most demented film to date .
0.6 Japan 's premier stylist blending sex and blood hits audiences with what may be his most demented film to date .
0.5 Japan 's premier stylist with sex and blood hits audiences with what may be his most demented film to date .
----
0.3 An unbelievably fun film just a leading man away from perfection .
0.9 An unbelievably fun film just about leading man away from perfection .
0.8 An unbelievably fun film just two leading man away from perfection .
----
0.6 This is how you use special effects .
0.2 This is how people use special effects .
0.4 This is how they use special effects .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.4 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.5 Based on a John Bailey story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.7 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Daniel Sanders in the lazy Bloodwork .
0.5 De Niro may enjoy the same free ride from critics afforded to James Rivera in the lazy Bloodwork .
----
0.2 ( Howard ) so good as Leon Barlow ... that he hardly seems to be acting .
0.5 ( Howard ) so good as James Rivera ... that he hardly seems to be acting .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.3 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Yvan Attal it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Crispin Glover it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.8 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Susan Sarandon at their raunchy best , even hokum goes down easily .
0.3 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Britney at their raunchy best , even hokum goes down easily .
----
0.5 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Hubert 's punches ) , but it should go down smoothly enough with popcorn .
0.4 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Adam Rifkin 's punches ) , but it should go down smoothly enough with popcorn .
0.4 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Britney 's punches ) , but it should go down smoothly enough with popcorn .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 4 (3.3%)
Example fails:
0.4 The only thing in Pauline and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.6 The only thing in Ellen Pompeo and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.6 The only thing in Carl Franklin and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
----
0.7 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.2 Birot may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.2 De Niro may enjoy the same free ride from critics afforded to Gulpilil in the lazy Bloodwork .
----
0.2 As written by Michael Berg and Michael J. Wilson from a story by Wilson , this relentless , all-wise-guys-all-the-time approach tries way too hard and gets tiring in no time at all .
0.6 As written by Michael Berg and Eric from a story by Wilson , this relentless , all-wise-guys-all-the-time approach tries way too hard and gets tiring in no time at all .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.7 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.5 De Niro may enjoy the same free ride from critics afforded to Roberts in the lazy Bloodwork .
0.5 Einstein may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
----
0.4 The only thing in Pauline and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.6 The only thing in Adam Rifkin and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.6 The only thing in Carol Kane and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
----
0.2 As written by Michael Berg and Michael J. Wilson from a story by Wilson , this relentless , all-wise-guys-all-the-time approach tries way too hard and gets tiring in no time at all .
0.6 As written by Michael Berg and Sarah from a story by Wilson , this relentless , all-wise-guys-all-the-time approach tries way too hard and gets tiring in no time at all .
0.5 As written by Michael Berg and Roberts from a story by Wilson , this relentless , all-wise-guys-all-the-time approach tries way too hard and gets tiring in no time at all .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.5 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Hubert 's punches ) , but it should go down smoothly enough with popcorn .
0.4 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Ellen Pompeo 's punches ) , but it should go down smoothly enough with popcorn .
0.4 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Birot 's punches ) , but it should go down smoothly enough with popcorn .
----
1.0 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
0.4 Steve Irwin 's method is Birot at accelerated speed and volume .
----
0.6 ( Villeneuve ) seems to realize intuitively that even morality is reduced to an option by the ultimate mysteries of life and death .
0.3 ( Birot ) seems to realize intuitively that even morality is reduced to an option by the ultimate mysteries of life and death .
0.4 ( Merchant Ivory ) seems to realize intuitively that even morality is reduced to an option by the ultimate mysteries of life and death .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 167 (7.8%)
Example fails:
0.3 I like this movie, but I used to regret it.
----
0.2 I value this movie, but I used to dislike it.
----
0.8 I abhor this movie, but I used to appreciate it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 4 (0.3%)
Example fails:
0.7 I can't say I appreciate this director.
----
0.6 I can't say I appreciate that actor.
----
0.8 I can't say I appreciate that director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 44 (8.8%)
Example fails:
0.5 I wouldn't say, given that I bought it last week, that we love the director.
----
0.9 I can't say, given that we watched a lot, that the was a beautiful director.
----
0.8 I don't think, given that I bought it last week, that we appreciate the show.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 40 (5.4%)
Example fails:
0.0 This horror movie was terrifying
----
0.0 The horror movie is scary
----
0.0 This horror movie is frightening
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 1 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs1-swa-linear-75-start2-drop-shuffle/albert-large-v2_4.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the input sentences into batches so the whole test suite is not pushed through the model at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Stack batch results into a single 1-D label array and a 2-D (n_examples x n_classes) confidence matrix.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs1_rs1-swa-linear-75-start2-drop-shuffle_4_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs1-swa-linear-75-start2-drop-shuffle_4_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 0 (0.0%)
change neutral words with BERT
Test cases: 500
Fails (rate): 39 (7.8%)
Example fails:
0.9 There 's a disreputable air about the whole thing , and that 's what makes it irresistible .
0.3 There 's a disreputable air about the whole thing , maybe that 's what makes it irresistible .
----
0.8 In its ragged , cheap and unassuming way , the movie works .
0.0 In its ragged , cheap and unassuming way , the above works .
0.2 In its ragged , cheap and unassuming way , the code works .
----
0.3 A film of precious increments artfully camouflaged as everyday activities .
1.0 A film capturing precious increments artfully camouflaged as everyday activities .
0.5 speed film of precious increments artfully camouflaged as everyday activities .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.8 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Daniel Sanders in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Michael Ward in the lazy Bloodwork .
----
0.7 Has it ever been possible to say that Williams has truly inhabited a character ?
0.4 Has it ever been possible to say that Luke has truly inhabited a character ?
0.4 Has it ever been possible to say that Luke has truly inhabited a character ?
----
0.7 Flat , but with a revelatory performance by Michelle Williams .
0.5 Flat , but with a revelatory performance by Ashley Ross .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 0 (0.0%)
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.4 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.6 Adam Sandler is to Hélène Angel what a gnat is to a racehorse .
----
0.7 Cho 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.5 Ellen Pompeo 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.5 Gosling 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
----
0.5 Imagine if you will a Tony Hawk skating video interspliced with footage from Behind Enemy Lines and set to Jersey shore techno .
0.3 Imagine if you will a Birot skating video interspliced with footage from Behind Enemy Lines and set to Jersey shore techno .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.7 Cho 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.5 Michel Gondry 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.5 Mat Hoffman 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
----
0.5 Imagine if you will a Tony Hawk skating video interspliced with footage from Behind Enemy Lines and set to Jersey shore techno .
0.3 Imagine if you will a Britney skating video interspliced with footage from Behind Enemy Lines and set to Jersey shore techno .
----
0.4 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.5 Michel Gondry is to Gary Cooper what a gnat is to a racehorse .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 2 (1.3%)
Example fails:
0.7 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that George Lucas has long forgotten .
0.3 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that Gosling has long forgotten .
----
0.9 More honest about Alzheimer 's disease , I think , than Iris .
0.4 More honest about Alzheimer 's disease , I think , than Gosling .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 115 (5.3%)
Example fails:
0.2 I like this movie, but I used to hate it.
----
0.7 I abhor this movie, but I used to value it.
----
0.2 I like this movie, but I used to despise it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 0 (0.0%)
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 37 (7.4%)
Example fails:
0.6 I can't say, given all that I've seen over the years, that this movie is fantastic.
----
0.6 I wouldn't say, given that I bought it last week, that this actor is brilliant.
----
0.8 I can't say, given it's a Friday, that we appreciate that show.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 39 (5.3%)
Example fails:
0.7 This comedy movie is serious, not light-hearted
----
1.0 This comedy movie is serious, not rib-tickling
----
0.2 The horror movie is frightening
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 2 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs2-shuffle-train/albert-large-v2_5.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the input sentences into batches so the whole test suite is not pushed through the model at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Stack batch results into a single 1-D label array and a 2-D (n_examples x n_classes) confidence matrix.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs2-shuffle-train_5_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs2-shuffle-train_5_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 6 (1.2%)
Example fails:
0.0 Let 's cut to the consumer-advice bottom line : Stay home .
0.1 Let 's cut to the consumer-advice bottom line : Stay home. I abhor it.
----
0.0 Francophiles will snicker knowingly and you 'll want to slap them .
0.1 Francophiles will snicker knowingly and you 'll want to slap them. I abhor it.
0.1 Francophiles will snicker knowingly and you 'll want to slap them. I dread it.
----
0.3 The Transporter is as lively and as fun as it is unapologetically dumb
0.5 The Transporter is as lively and as fun as it is unapologetically dumb. I abhor it.
0.4 The Transporter is as lively and as fun as it is unapologetically dumb. I dread it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 38 (7.6%)
Example fails:
0.0 `` Men in Black II , '' has all the earmarks of a sequel .
0.8 `` Men in Black II , '' has all the earmarks of its sequel .
----
0.6 Audiences conditioned to getting weepy over saucer-eyed , downy-cheeked moppets and their empathetic caretakers will probably feel emotionally cheated by the film 's tart , sugar-free wit .
0.2 Audiences conditioned to getting weepy over saucer-eyed , downy-cheeked moppets and their empathetic caretakers will probably feel emotionally cheated from the film 's tart , sugar-free wit .
----
0.4 The film is just a big , gorgeous , mind-blowing , breath-taking mess .
1.0 The film is just this big , gorgeous , mind-blowing , breath-taking mess .
0.6 The film is just one big , gorgeous , mind-blowing , breath-taking mess .
----
NER
Change names
Test cases: 147
Fails (rate): 2 (1.4%)
Example fails:
0.7 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.3 Adam Sandler is to Daniel White what a gnat is to a racehorse .
0.3 Adam Sandler is to Christopher Reed what a gnat is to a racehorse .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Daniel Sanders in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to David Taylor in the lazy Bloodwork .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.2 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Einstein it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.7 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.2 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Britney 's moist , deeply emotional eyes shine through this bogus veneer ...
0.3 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Einstein 's moist , deeply emotional eyes shine through this bogus veneer ...
----
0.4 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Hatfield and Hicks .
0.5 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Adam Rifkin and Hicks .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 4 (3.3%)
Example fails:
0.7 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.0 Adam Sandler is to Polanski what a gnat is to a racehorse .
0.1 Birot is to Gary Cooper what a gnat is to a racehorse .
----
0.9 George , hire a real director and good writers for the next installment , please .
0.0 Birot , hire a real director and good writers for the next installment , please .
0.0 Gosling , hire a real director and good writers for the next installment , please .
----
0.6 Its lack of quality earns it a place alongside those other two recent Dumas botch-jobs , The Man in the Iron Mask and The Musketeer .
0.5 Its lack of quality earns it a place alongside those other two recent Smokey Robinson botch-jobs , The Man in the Iron Mask and The Musketeer .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.3 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.7 Based on a Crispin Glover story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.6 Its lack of quality earns it a place alongside those other two recent Dumas botch-jobs , The Man in the Iron Mask and The Musketeer .
0.4 Its lack of quality earns it a place alongside those other two recent Einstein botch-jobs , The Man in the Iron Mask and The Musketeer .
----
0.7 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.2 Adam Sandler is to Roberts what a gnat is to a racehorse .
0.2 Adam Sandler is to Sarah what a gnat is to a racehorse .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.2 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
1.0 For Carl Franklin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.9 For Craig Bartlett it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.7 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.4 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Carl Franklin 's moist , deeply emotional eyes shine through this bogus veneer ...
0.4 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Smokey Robinson 's moist , deeply emotional eyes shine through this bogus veneer ...
----
0.4 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Hatfield and Hicks .
0.6 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Phillip Noyce and Hicks .
0.5 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Walt Becker and Hicks .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.7 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.2 ( A ) Nollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.3 ( A ) Ghollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 122 (5.7%)
Example fails:
0.5 I used to regret this movie, even though now I love it.
----
0.5 In the past I would recommend this movie, even though now I abhor it.
----
0.1 I think this movie is brilliant, but in the past I thought it was bad.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 36 (2.7%)
Example fails:
1.0 I can't say I admire this actor.
----
1.0 I can't say I admire that actor.
----
1.0 I can't say I welcome that movie.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 245 (49.0%)
Example fails:
1.0 I can't say, given it's a Friday, that the movie is brilliant.
----
0.6 I wouldn't say, given all that I've seen over the years, that we welcome this scene.
----
1.0 I can't say, given that we watched a lot, that that actor was wonderful.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 39 (5.3%)
Example fails:
0.4 This comedy movie was rib-tickling
----
0.6 This comedy movie is scary rather than rib-tickling
----
0.6 The comedy movie was serious, not light-hearted
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 1 (0.1%)
Example fails:
0.6 Tamalewood movies are tough
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 2 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs2-swa-linear-60-start2-drop-shuffle/albert-large-v2_4.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the input sentences into batches so the whole test suite is not pushed through the model at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Stack batch results into a single 1-D label array and a 2-D (n_examples x n_classes) confidence matrix.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs2-swa-linear-60-start2-drop-shuffle_4_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs2-swa-linear-60-start2-drop-shuffle_4_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 1 (0.2%)
Example fails:
0.0 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut .
0.1 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut. I regret it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 37 (7.4%)
Example fails:
0.8 If you pitch your expectations at an all time low , you could do worse than this oddly cheerful -- but not particularly funny -- body-switching farce .
0.0 If you pitch your expectations at an all time low , you could do worse than make oddly cheerful -- but not particularly funny -- body-switching farce .
0.0 If you pitch your expectations at an all time low , you could do worse than another oddly cheerful -- but not particularly funny -- body-switching farce .
----
1.0 Workmanlike , maybe , but still a film with all the elements that made the other three great , scary times at the movies .
0.0 Workmanlike , maybe , but still a film lacking all the elements that made the other three great , scary times at the movies .
----
0.2 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Robin Williams 's lecture so I could listen to a teacher with humor , passion , and verve .
0.6 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , than to Robin Williams 's lecture so I could listen to a teacher with humor , passion , and verve .
----
NER
Change names
Test cases: 147
Fails (rate): 3 (2.0%)
Example fails:
0.5 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.6 Adam Sandler is to David Fisher what a gnat is to a racehorse .
0.6 Joshua Nelson is to Gary Cooper what a gnat is to a racehorse .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Joshua Nelson in the lazy Bloodwork .
----
0.5 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Michael Ward story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Joshua Nelson story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.5 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Susan Sarandon at their raunchy best , even hokum goes down easily .
0.6 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Michel Gondry at their raunchy best , even hokum goes down easily .
0.6 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Yvan Attal at their raunchy best , even hokum goes down easily .
----
0.4 ( Fessenden ) is much more into ambiguity and creating mood than he is for on screen thrills
0.5 ( Crispin Glover ) is much more into ambiguity and creating mood than he is for on screen thrills
----
0.6 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Hubert 's punches ) , but it should go down smoothly enough with popcorn .
0.5 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Jelinek 's punches ) , but it should go down smoothly enough with popcorn .
0.5 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Adam Rifkin 's punches ) , but it should go down smoothly enough with popcorn .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.5 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.7 Walt Becker is to Gary Cooper what a gnat is to a racehorse .
----
0.5 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.2 Based on a Birot story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.3 Based on a Merchant Ivory story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.4 Ray Liotta and Jason Patric do some of their best work in their underwritten roles , but do n't be fooled : Nobody deserves any prizes here .
0.6 Ray Liotta and Eric do some of their best work in their underwritten roles , but do n't be fooled : Nobody deserves any prizes here .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.5 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.3 Based on a Britney story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Jelinek story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.4 Ray Liotta and Jason Patric do some of their best work in their underwritten roles , but do n't be fooled : Nobody deserves any prizes here .
0.5 Ray Liotta and Roberts do some of their best work in their underwritten roles , but do n't be fooled : Nobody deserves any prizes here .
----
0.5 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.6 Michel Gondry is to Gary Cooper what a gnat is to a racehorse .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.5 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Susan Sarandon at their raunchy best , even hokum goes down easily .
0.6 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Phillip Noyce at their raunchy best , even hokum goes down easily .
----
0.1 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.5 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.4 ( Fessenden ) is much more into ambiguity and creating mood than he is for on screen thrills
0.6 ( Carl Franklin ) is much more into ambiguity and creating mood than he is for on screen thrills
----
Change Movie Industries
Test cases: 18
Fails (rate): 2 (11.1%)
Example fails:
0.7 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.4 ( A ) Ghollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
----
0.2 Even when foreign directors ... borrow stuff from Hollywood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Taiwood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Cantonwood , they invariably shake up the formula and make it more interesting .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 96 (4.5%)
Example fails:
0.4 I used to regret this movie, even though now I appreciate it.
----
0.5 I value this movie, but I used to dislike it.
----
0.6 In the past I would value this movie, even though now I hate it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 23 (1.7%)
Example fails:
1.0 I can't say I welcome that actor.
----
1.0 I can't say I appreciate that show.
----
1.0 I can't say I appreciate this actor.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 14 (2.8%)
Example fails:
0.7 I wouldn't say, given that I bought it last week, that we love this actor.
----
0.6 I can't say, given that I bought it last week, that we appreciate that show.
----
0.7 I can't say, given all that I've seen over the years, that that is a wonderful director.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 34 (4.6%)
Example fails:
1.0 This comedy movie is serious
----
1.0 The drama movie is funny rather than serious
----
0.2 The horror movie was frightening
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 3 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs3-shuffle-train/albert-large-v2_1.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the incoming CheckList examples into fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Flatten the per-batch results: a single vector of predicted labels and a
    # stacked (n_examples, n_classes) matrix of confidences.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs3-shuffle-train_1_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs3-shuffle-train_1_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 0 (0.0%)
change neutral words with BERT
Test cases: 500
Fails (rate): 35 (7.0%)
Example fails:
0.2 A strong first quarter , slightly less so second quarter , and average second half .
0.5 really strong first quarter , slightly less so second quarter , and average second half .
----
0.9 Evokes a little of the fear that parents have for the possible futures of their children -- and the sometimes bad choices mothers and fathers make in the interests of doing them good .
0.0 Evokes very little of the fear that parents have for the possible futures of their children -- and the sometimes bad choices mothers and fathers make in the interests of doing them good .
0.0 Evokes is little of the fear that parents have for the possible futures of their children -- and the sometimes bad choices mothers and fathers make in the interests of doing them good .
----
0.5 Like a south-of-the-border Melrose Place .
1.0 Like this south-of-the-border Melrose Place .
0.9 Like our south-of-the-border Melrose Place .
----
NER
Change names
Test cases: 147
Fails (rate): 2 (1.4%)
Example fails:
0.5 Has it ever been possible to say that Williams has truly inhabited a character ?
0.3 Has it ever been possible to say that Luke has truly inhabited a character ?
0.3 Has it ever been possible to say that Luke has truly inhabited a character ?
----
0.3 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Joshua Nelson in the lazy Bloodwork .
0.5 De Niro may enjoy the same free ride from critics afforded to John Bailey in the lazy Bloodwork .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 1 (0.6%)
Example fails:
0.3 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
0.9 Steve Irwin 's method is Carol Kane at accelerated speed and volume .
0.9 Steve Irwin 's method is Einstein at accelerated speed and volume .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 3 (2.4%)
Example fails:
0.6 Warmed-over Tarantino by way of wannabe Elmore Leonard .
0.3 Warmed-over Tarantino by way of wannabe Foster .
0.3 Warmed-over Tarantino by way of wannabe Merchant Ivory .
----
0.5 Flat , but with a revelatory performance by Michelle Williams .
0.6 Flat , but with a revelatory performance by Birot .
0.6 Flat , but with a revelatory performance by Gosling .
----
0.5 Has it ever been possible to say that Williams has truly inhabited a character ?
0.2 Has it ever been possible to say that Birot has truly inhabited a character ?
0.2 Has it ever been possible to say that Merchant Ivory has truly inhabited a character ?
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.5 Has it ever been possible to say that Williams has truly inhabited a character ?
0.2 Has it ever been possible to say that Britney has truly inhabited a character ?
0.4 Has it ever been possible to say that Jelinek has truly inhabited a character ?
----
0.5 ( Director ) Byler may yet have a great movie in him , but Charlotte Sometimes is only half of one .
0.4 ( Director ) Britney may yet have a great movie in him , but Charlotte Sometimes is only half of one .
----
0.3 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Sarah in the lazy Bloodwork .
0.5 De Niro may enjoy the same free ride from critics afforded to Gary Fleder in the lazy Bloodwork .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 2 (1.3%)
Example fails:
0.8 More honest about Alzheimer 's disease , I think , than Iris .
0.4 More honest about Alzheimer 's disease , I think , than Birot .
----
0.3 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
1.0 Steve Irwin 's method is Carl Franklin at accelerated speed and volume .
0.9 Steve Irwin 's method is Foster at accelerated speed and volume .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 123 (5.7%)
Example fails:
0.4 I like this movie, but I used to abhor it.
----
0.5 In the past I would recommend this movie, even though now I hate it.
----
0.7 I abhor this movie, but in the past I would value it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 13 (1.0%)
Example fails:
0.8 I would never say I admire this director.
----
0.9 I can't say I appreciate the director.
----
1.0 I would never say I welcome this director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 244 (48.8%)
Example fails:
1.0 I can't say, given my history with movies, that the movie was fantastic.
----
0.9 I wouldn't say, given it's a Friday, that that scene was amazing.
----
0.9 I don't think, given that I bought it last week, that this was a beautiful show.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 33 (4.5%)
Example fails:
1.0 This horror movie is calming
----
1.0 The comedy movie is serious, not rib-tickling
----
0.9 This comedy movie was serious, not rib-tickling
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 3 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs3-swa-linear-60-start2-drop-shuffle/albert-large-v2_8.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the incoming CheckList examples into fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Flatten the per-batch results: a single vector of predicted labels and a
    # stacked (n_examples, n_classes) matrix of confidences.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs3-swa-linear-60-start2-drop-shuffle_8_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs3-swa-linear-60-start2-drop-shuffle_8_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 0 (0.0%)
change neutral words with BERT
Test cases: 500
Fails (rate): 40 (8.0%)
Example fails:
0.2 The 3D images only enhance the film 's otherworldly quality , giving it a strange combo of you-are-there closeness with the disorienting unreality of the seemingly broken-down fourth wall of the movie screen .
0.6 The 3D images only enhance the film 's otherworldly quality , giving it a strange combo of you-are-there closeness with the disorienting unreality of the seemingly broken-down fourth wall of the big screen .
----
1.0 The production values are up there .
0.0 The production values are up below .
----
0.9 Run , do n't walk , to see this barbed and bracing comedy on the big screen .
0.1 Run , do n't walk , to see more barbed and bracing comedy on the big screen .
----
NER
Change names
Test cases: 147
Fails (rate): 3 (2.0%)
Example fails:
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.3 Imagine the James Woods character from Videodrome making a home movie of Melissa White and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the James Woods character from Videodrome making a home movie of Ashley Hughes and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.5 More honest about Alzheimer 's disease , I think , than Iris .
0.4 More honest about Alzheimer 's disease , I think , than Katherine .
0.4 More honest about Alzheimer 's disease , I think , than Chelsea .
----
0.8 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
0.4 Matthew Ross method is Ernest Hemmingway at accelerated speed and volume .
0.4 Daniel Sanders method is Ernest Hemmingway at accelerated speed and volume .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.6 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.3 For Britney it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.8 Sam Jones became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.5 Britney became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.5 Einstein became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
----
0.5 More honest about Alzheimer 's disease , I think , than Iris .
0.2 More honest about Alzheimer 's disease , I think , than Einstein .
0.3 More honest about Alzheimer 's disease , I think , than Carol Kane .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 2 (1.6%)
Example fails:
0.4 While McFarlane 's animation lifts the film firmly above the level of other coming-of-age films ... it 's also so jarring that it 's hard to get back into the boys ' story .
0.6 While Craig Bartlett 's animation lifts the film firmly above the level of other coming-of-age films ... it 's also so jarring that it 's hard to get back into the boys ' story .
0.6 While Walt Becker 's animation lifts the film firmly above the level of other coming-of-age films ... it 's also so jarring that it 's hard to get back into the boys ' story .
----
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.2 Imagine the Birot character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.3 Imagine the James Woods character from Videodrome making a home movie of Doug Liman and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 4 (3.3%)
Example fails:
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the James Woods character from Videodrome making a home movie of Gary Fleder and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.5 Imagine the Michel Gondry character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.4 While McFarlane 's animation lifts the film firmly above the level of other coming-of-age films ... it 's also so jarring that it 's hard to get back into the boys ' story .
0.7 While Adam Rifkin 's animation lifts the film firmly above the level of other coming-of-age films ... it 's also so jarring that it 's hard to get back into the boys ' story .
0.6 While Einstein 's animation lifts the film firmly above the level of other coming-of-age films ... it 's also so jarring that it 's hard to get back into the boys ' story .
----
0.5 Where last time jokes flowed out of Cho 's life story , which provided an engrossing dramatic through line , here the comedian hides behind obviously constructed routines .
0.4 Where last time jokes flowed out of Britney 's life story , which provided an engrossing dramatic through line , here the comedian hides behind obviously constructed routines .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.6 With Danilo Donati 's witty designs and Dante Spinotti 's luscious cinematography , this might have made a decent children 's movie -- if only Benigni had n't insisted on casting himself in the title role .
0.4 With Danilo Donati 's witty designs and Dante Spinotti 's luscious cinematography , this might have made a decent children 's movie -- if only Gosling had n't insisted on casting himself in the title role .
----
0.5 More honest about Alzheimer 's disease , I think , than Iris .
0.0 More honest about Alzheimer 's disease , I think , than Birot .
0.2 More honest about Alzheimer 's disease , I think , than Gosling .
----
0.5 The whole cast looks to be having so much fun with the slapstick antics and silly street patois , tossing around obscure expressions like Bellini and Mullinski , that the compact 86 minutes breezes by .
0.4 The whole cast looks to be having so much fun with the slapstick antics and silly street patois , tossing around obscure expressions like Birot and Mullinski , that the compact 86 minutes breezes by .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.3 Even when foreign directors ... borrow stuff from Hollywood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Tamalewood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Aussiewood , they invariably shake up the formula and make it more interesting .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 33 (1.5%)
Example fails:
0.8 I abhor this movie, but in the past I would recommend it.
----
0.6 In the past I would enjoy this movie, even though now I dread it.
----
0.3 I value this movie, but I used to despise it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 0 (0.0%)
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 18 (3.6%)
Example fails:
0.9 I can't say, given it's a Friday, that the actor was beautiful.
----
0.9 I don't think, given that I bought it last week, that this actor was amazing.
----
0.7 I can't say, given my history with movies, that this actor is amazing.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 36 (4.9%)
Example fails:
0.1 This horror movie is frightening
----
0.6 This comedy movie was scary rather than rib-tickling
----
1.0 This drama movie was funny rather than serious
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 4 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs4-shuffle-train/albert-large-v2_1.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the incoming CheckList examples into fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Flatten the per-batch results: a single vector of predicted labels and a
    # stacked (n_examples, n_classes) matrix of confidences.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs4-shuffle-train_1_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs4-shuffle-train_1_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 3 (0.6%)
Example fails:
0.1 Where last time jokes flowed out of Cho 's life story , which provided an engrossing dramatic through line , here the comedian hides behind obviously constructed routines .
0.2 Where last time jokes flowed out of Cho 's life story , which provided an engrossing dramatic through line , here the comedian hides behind obviously constructed routines. Never watching this again.
0.2 Where last time jokes flowed out of Cho 's life story , which provided an engrossing dramatic through line , here the comedian hides behind obviously constructed routines. I regret it.
----
0.0 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut .
0.1 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut. I regret it.
----
0.0 The essential problem in Orange County is that , having created an unusually vivid set of characters worthy of its strong cast , the film flounders when it comes to giving them something to do .
0.1 The essential problem in Orange County is that , having created an unusually vivid set of characters worthy of its strong cast , the film flounders when it comes to giving them something to do. I regret it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 39 (7.8%)
Example fails:
0.3 George , hire a real director and good writers for the next installment , please .
0.6 George , hire a real director and good writers for your next installment , please .
----
0.2 There 's something fundamental missing from this story : something or someone to care about .
0.7 There 's something fundamental missing from every story : something or someone to care about .
----
0.4 A film of precious increments artfully camouflaged as everyday activities .
1.0 A film capturing precious increments artfully camouflaged as everyday activities .
0.7 A film in precious increments artfully camouflaged as everyday activities .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the James Woods character from Videodrome making a home movie of Melissa White and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.3 Sucking all the ` classic ' out of Robert Louis Stevenson 's Treasure Island and filling the void with sci-fi video game graphics and Disney-fied adolescent angst ...
0.5 Sucking all the ` classic ' out of James Rivera Treasure Island and filling the void with sci-fi video game graphics and Disney-fied adolescent angst ...
0.5 Sucking all the ` classic ' out of William Ross Treasure Island and filling the void with sci-fi video game graphics and Disney-fied adolescent angst ...
----
0.3 George , hire a real director and good writers for the next installment , please .
0.6 Austin , hire a real director and good writers for the next installment , please .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 2 (1.3%)
Example fails:
0.5 That Zhang would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.1 Carol Kane would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.1 Britney would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
----
0.5 Ms. Fulford-Wierzbicki is almost spooky in her sulky , calculating Lolita turn .
0.4 Ms. Jelinek is almost spooky in her sulky , calculating Lolita turn .
0.4 Ms. Britney is almost spooky in her sulky , calculating Lolita turn .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.5 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.4 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Birot would know how to do .
0.4 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Ellen Pompeo would know how to do .
----
0.3 George , hire a real director and good writers for the next installment , please .
0.7 Foster , hire a real director and good writers for the next installment , please .
0.5 Merchant Ivory , hire a real director and good writers for the next installment , please .
----
0.6 Sluggishly directed by episodic TV veteran Joe Zwick , it 's a sitcom without the snap-crackle .
0.5 Sluggishly directed by episodic TV veteran Merchant Ivory , it 's a sitcom without the snap-crackle .
0.5 Sluggishly directed by episodic TV veteran Birot , it 's a sitcom without the snap-crackle .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.3 George , hire a real director and good writers for the next installment , please .
0.6 Crispin Glover , hire a real director and good writers for the next installment , please .
0.6 Michel Gondry , hire a real director and good writers for the next installment , please .
----
0.8 Watching Haneke 's film is , aptly enough , a challenge and a punishment .
0.5 Watching Britney 's film is , aptly enough , a challenge and a punishment .
----
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the James Woods character from Videodrome making a home movie of Gary Fleder and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the Yvan Attal character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.8 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
0.5 Steve Irwin 's method is Birot at accelerated speed and volume .
----
0.7 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Kaputschnik . ''
0.4 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Birot . ''
0.4 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Merchant Ivory . ''
----
0.5 That Zhang would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.1 Birot would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.1 Gosling would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.5 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.2 Home Alone goes Hogawood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.3 Home Alone goes Ghollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 148 (6.9%)
Example fails:
0.8 In the past I would like this movie, even though now I dread it.
----
0.3 I think this movie is beautiful, but I used to think it was bad.
----
0.9 I despise this movie, but in the past I would recommend it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 0 (0.0%)
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 25 (5.0%)
Example fails:
0.7 I don't think, given it's a Friday, that we appreciate the actor.
----
1.0 I can't say, given it's a Friday, that we appreciate this director.
----
0.7 I wouldn't say, given that I bought it last week, that I appreciate that show.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 35 (4.8%)
Example fails:
0.9 This comedy movie is serious
----
0.8 The horror movie was calming
----
1.0 The comedy movie is serious
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 4 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs4-swa-linear-75-start2-drop-shuffle/albert-large-v2_6.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the incoming CheckList examples into fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Flatten the per-batch results: a single vector of predicted labels and a
    # stacked (n_examples, n_classes) matrix of confidences.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs4-swa-linear-75-start2-drop-shuffle_6_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs4-swa-linear-75-start2-drop-shuffle_6_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 2 (0.4%)
Example fails:
0.0 Without September 11 , Collateral Damage would have been just another bad movie .
0.1 Without September 11 , Collateral Damage would have been just another bad movie. Never watching this again.
----
0.2 Is this progress ?
0.4 Is this progress. Never watching this again.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 33 (6.6%)
Example fails:
0.9 The Transporter is as lively and as fun as it is unapologetically dumb
0.3 * Transporter is as lively and as fun as it is unapologetically dumb
----
0.6 If you pitch your expectations at an all time low , you could do worse than this oddly cheerful -- but not particularly funny -- body-switching farce .
0.1 If you pitch your expectations at an all time low , you could do worse than make oddly cheerful -- but not particularly funny -- body-switching farce .
0.1 If you pitch your expectations at an all time low , you could do worse than watching oddly cheerful -- but not particularly funny -- body-switching farce .
----
0.8 In its ragged , cheap and unassuming way , the movie works .
0.1 In its ragged , cheap and unassuming way , the above works .
0.2 In its ragged , cheap and unassuming way , the code works .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.5 George , hire a real director and good writers for the next installment , please .
0.4 Luke , hire a real director and good writers for the next installment , please .
0.4 Luke , hire a real director and good writers for the next installment , please .
----
0.7 Has it ever been possible to say that Williams has truly inhabited a character ?
0.3 Has it ever been possible to say that Luke has truly inhabited a character ?
0.3 Has it ever been possible to say that Luke has truly inhabited a character ?
----
0.5 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.3 Imagine the James Woods character from Videodrome making a home movie of Melissa White and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.3 Imagine the James Woods character from Videodrome making a home movie of Amanda Torres and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 2 (1.3%)
Example fails:
0.4 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.9 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.9 For Michel Gondry it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.5 ( Davis ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.4 ( Paul Pender ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.4 ( Einstein ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.7 Has it ever been possible to say that Williams has truly inhabited a character ?
0.1 Has it ever been possible to say that Birot has truly inhabited a character ?
0.2 Has it ever been possible to say that Merchant Ivory has truly inhabited a character ?
----
0.5 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.0 Imagine the Birot character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.3 Imagine the Merchant Ivory character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Gulpilil in the lazy Bloodwork .
0.4 Birot may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 8 (6.5%)
Example fails:
0.6 While Benigni ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
0.4 While Britney ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
0.4 While Einstein ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
----
0.5 George , hire a real director and good writers for the next installment , please .
0.3 Britney , hire a real director and good writers for the next installment , please .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.3 Einstein may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.4 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.9 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.9 For Craig Bartlett it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.5 ( Davis ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.3 ( Craig Bartlett ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.4 ( Walt Becker ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
----
0.9 More honest about Alzheimer 's disease , I think , than Iris .
0.1 More honest about Alzheimer 's disease , I think , than Birot .
0.5 More honest about Alzheimer 's disease , I think , than Gosling .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 49 (2.3%)
Example fails:
0.0 I think this movie is good, but I used to think it was terrible.
----
0.4 I like this movie, but I used to despise it.
----
0.6 I abhor this movie, but I used to recommend it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 3 (0.2%)
Example fails:
0.5 I would never say I appreciate the actor.
----
0.7 I can't say I appreciate the director.
----
0.9 I would never say I appreciate the director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 34 (6.8%)
Example fails:
0.9 I wouldn't say, given it's a Friday, that this director is beautiful.
----
0.9 I can't say, given it's a Friday, that this is a wonderful movie.
----
0.9 I can't say, given that I bought it last week, that we appreciate that show.
----
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 37 (5.0%)
Example fails:
0.6 This children movie is scary
----
1.0 This drama movie is funny rather than serious
----
0.9 The comedy movie is serious, not light-hearted
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 5 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs5-shuffle-train/albert-large-v2_4.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the incoming CheckList examples into fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Flatten the per-batch results: a single vector of predicted labels and a
    # stacked (n_examples, n_classes) matrix of confidences.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs5-shuffle-train_4_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs5-shuffle-train_4_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 39 (7.8%)
Example fails:
0.0 The issue of faith is not explored very deeply
0.1 The issue of faith is not explored very deeply. I despise it.
0.1 The issue of faith is not explored very deeply. I regret it.
----
0.0 Even legends like Alfred Hitchcock and John Huston occasionally directed trifles ... so it 's no surprise to see a world-class filmmaker like Zhang Yimou behind the camera for a yarn that 's ultimately rather inconsequential .
0.2 Even legends like Alfred Hitchcock and John Huston occasionally directed trifles ... so it 's no surprise to see a world-class filmmaker like Zhang Yimou behind the camera for a yarn that 's ultimately rather inconsequential. I despise it.
0.2 Even legends like Alfred Hitchcock and John Huston occasionally directed trifles ... so it 's no surprise to see a world-class filmmaker like Zhang Yimou behind the camera for a yarn that 's ultimately rather inconsequential. I dread it.
----
0.0 Too bad the former Murphy Brown does n't pop Reese back .
0.1 Too bad the former Murphy Brown does n't pop Reese back. I abhor it.
0.1 Too bad the former Murphy Brown does n't pop Reese back. I dread it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 43 (8.6%)
Example fails:
1.0 Here 's a British flick gleefully unconcerned with plausibility , yet just as determined to entertain you .
0.1 Here 's a British flick gleefully unconcerned with plausibility , yet just as determined to entertain Americans .
0.3 Here 's a British flick gleefully unconcerned with plausibility , yet just as determined to entertain itself .
----
1.0 This is how you use special effects .
0.3 This is how people use special effects .
0.4 This is how they use special effects .
----
0.3 He just wants them to be part of the action , the wallpaper of his chosen reality .
1.0 He just wants them to be part of that action , that wallpaper of his chosen reality .
0.8 He just wants them to be part of this action , this wallpaper of his chosen reality .
----
NER
Change names
Test cases: 147
Fails (rate): 3 (2.0%)
Example fails:
0.7 Has it ever been possible to say that Williams has truly inhabited a character ?
0.1 Has it ever been possible to say that Luke has truly inhabited a character ?
0.1 Has it ever been possible to say that Luke has truly inhabited a character ?
----
0.5 ` Abandon all hope , ye who enter here ' ... you should definitely let Dante 's gloomy words be your guide .
0.6 ` Abandon all hope , ye who enter here ' ... you should definitely let Samuel 's gloomy words be your guide .
----
0.4 Flat , but with a revelatory performance by Michelle Williams .
0.6 Flat , but with a revelatory performance by Elizabeth Nelson .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 6 (3.8%)
Example fails:
0.6 While not as aggressively impressive as its American counterpart , `` In the Bedroom , '' Moretti 's film makes its own , quieter observations
0.4 While not as aggressively impressive as its American counterpart , `` In the Bedroom , '' Britney 's film makes its own , quieter observations
----
0.5 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
1.0 Steve Irwin 's method is Crispin Glover at accelerated speed and volume .
1.0 Steve Irwin 's method is Carol Kane at accelerated speed and volume .
----
0.8 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that George Lucas has long forgotten .
0.4 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that Britney has long forgotten .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 8 (6.5%)
Example fails:
0.8 George , hire a real director and good writers for the next installment , please .
0.1 Birot , hire a real director and good writers for the next installment , please .
0.4 Gosling , hire a real director and good writers for the next installment , please .
----
0.5 Its lack of quality earns it a place alongside those other two recent Dumas botch-jobs , The Man in the Iron Mask and The Musketeer .
0.6 Its lack of quality earns it a place alongside those other two recent Phillip Noyce botch-jobs , The Man in the Iron Mask and The Musketeer .
----
0.7 Has it ever been possible to say that Williams has truly inhabited a character ?
0.1 Has it ever been possible to say that Merchant Ivory has truly inhabited a character ?
0.2 Has it ever been possible to say that Ellen Pompeo has truly inhabited a character ?
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.4 Flat , but with a revelatory performance by Michelle Williams .
0.5 Flat , but with a revelatory performance by Crispin Glover .
----
0.7 Has it ever been possible to say that Williams has truly inhabited a character ?
0.1 Has it ever been possible to say that Britney has truly inhabited a character ?
0.2 Has it ever been possible to say that Mat Hoffman has truly inhabited a character ?
----
0.8 George , hire a real director and good writers for the next installment , please .
0.2 Einstein , hire a real director and good writers for the next installment , please .
0.3 Britney , hire a real director and good writers for the next installment , please .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.6 More honest about Alzheimer 's disease , I think , than Iris .
0.4 More honest about Alzheimer 's disease , I think , than Birot .
0.5 More honest about Alzheimer 's disease , I think , than Ellen Pompeo .
----
0.5 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
1.0 Steve Irwin 's method is Carl Franklin at accelerated speed and volume .
1.0 Steve Irwin 's method is Foster at accelerated speed and volume .
----
0.5 Sam Jones became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.2 Birot became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.4 Gosling became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
----
Change Movie Industries
Test cases: 18
Fails (rate): 2 (11.1%)
Example fails:
0.4 Even when foreign directors ... borrow stuff from Hollywood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Aussiewood , they invariably shake up the formula and make it more interesting .
0.7 Even when foreign directors ... borrow stuff from Taiwood , they invariably shake up the formula and make it more interesting .
----
0.5 It 's getting harder and harder to ignore the fact that Hollywood is n't laughing with us , folks .
0.3 It 's getting harder and harder to ignore the fact that Hogawood is n't laughing with us , folks .
0.4 It 's getting harder and harder to ignore the fact that Kollywood is n't laughing with us , folks .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 49 (2.3%)
Example fails:
0.1 I welcome this movie, but I used to regret it.
----
0.3 I think this movie is fantastic, but I used to think it was terrible.
----
0.3 I recommend this movie, but I used to abhor it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 0 (0.0%)
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 43 (8.6%)
Example fails:
1.0 I can't say, given all that I've seen over the years, that this is a fantastic director.
----
1.0 I can't say, given that I bought it last week, that the actor is wonderful.
----
1.0 I can't say, given all that I've seen over the years, that that is a wonderful director.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 42 (5.7%)
Example fails:
1.0 The comedy movie is serious
----
0.7 This horror movie is laughable, not terrifying
----
0.8 This drama movie is funny, not moving
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 5 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs5-swa-linear-60-start2-drop-shuffle/albert-large-v2_3.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the incoming CheckList examples into fixed-size batches.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        preds = preds.numpy().tolist()
        confs = confs.numpy()
        predictions.append(preds)
        confidences.append(confs)
    # Flatten the per-batch results: a single vector of predicted labels and a
    # stacked (n_examples, n_classes) matrix of confidences.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs5-swa-linear-60-start2-drop-shuffle_3_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs5-swa-linear-60-start2-drop-shuffle_3_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 1 (0.2%)
Example fails:
0.9 Is n't it great ?
0.0 Is n't it great. I would watch this again.
0.1 Is n't it great. I value it.
----
add negative phrases
Test cases: 500
Fails (rate): 9 (1.8%)
Example fails:
0.0 You would be better off investing in the worthy EMI recording that serves as the soundtrack , or the home video of the 1992 Malfitano-Domingo production .
0.1 You would be better off investing in the worthy EMI recording that serves as the soundtrack , or the home video of the 1992 Malfitano-Domingo production. I regret it.
----
0.0 Schindler 's List it ai n't .
0.2 Schindler 's List it ai n't. I regret it.
----
0.1 This pep-talk for faith , hope and charity does little to offend , but if saccharine earnestness were a crime , the film 's producers would be in the clink for life .
0.2 This pep-talk for faith , hope and charity does little to offend , but if saccharine earnestness were a crime , the film 's producers would be in the clink for life. I dread it.
0.2 This pep-talk for faith , hope and charity does little to offend , but if saccharine earnestness were a crime , the film 's producers would be in the clink for life. I abhor it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 41 (8.2%)
Example fails:
0.9 Happily for Mr. Chin -- though unhappily for his subjects -- the invisible hand of the marketplace wrote a script that no human screenwriter could have hoped to match .
0.5 Happily for Mr. Chin -- though unhappily for his subjects -- some invisible hand of some marketplace wrote a script that no human screenwriter could have hoped to match .
----
1.0 Who knows what exactly Godard is on about in this film , but his words and images do n't have to add up to mesmerize you .
0.0 Who knows what exactly Godard is on about in this film , but his words and images do n't seem to add up to mesmerize you .
0.0 Who knows what exactly Godard is on about in this film , but his words and images do n't have to add up to mesmerize either .
----
0.7 Run , do n't walk , to see this barbed and bracing comedy on the big screen .
0.2 Run , do n't walk , to see more barbed and bracing comedy on the big screen .
----
NER
Change names
Test cases: 147
Fails (rate): 3 (2.0%)
Example fails:
0.5 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Michael Ward story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Matthew Ross story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.5 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
0.1 Daniel Sanders method is Ernest Hemmingway at accelerated speed and volume .
0.1 Michael Ward method is Ernest Hemmingway at accelerated speed and volume .
----
0.4 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.8 Joshua Nelson is to Gary Cooper what a gnat is to a racehorse .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 2 (1.3%)
Example fails:
0.5 Watching Beanie and his gang put together his slasher video from spare parts and borrowed materials is as much fun as it must have been for them to make it .
0.8 Watching Crispin Glover and his gang put together his slasher video from spare parts and borrowed materials is as much fun as it must have been for them to make it .
0.8 Watching Carol Kane and his gang put together his slasher video from spare parts and borrowed materials is as much fun as it must have been for them to make it .
----
0.3 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Jelinek it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 2 (1.6%)
Example fails:
0.4 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.7 Walt Becker is to Gary Cooper what a gnat is to a racehorse .
----
0.5 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.3 Based on a Merchant Ivory story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Birot story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 2 (1.6%)
Example fails:
0.5 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.3 Based on a Britney story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.4 Based on a Yvan Attal story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.4 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.6 Michel Gondry is to Gary Cooper what a gnat is to a racehorse .
0.6 Yvan Attal is to Gary Cooper what a gnat is to a racehorse .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.5 Watching Beanie and his gang put together his slasher video from spare parts and borrowed materials is as much fun as it must have been for them to make it .
0.9 Watching Walt Becker and his gang put together his slasher video from spare parts and borrowed materials is as much fun as it must have been for them to make it .
0.7 Watching Gosling and his gang put together his slasher video from spare parts and borrowed materials is as much fun as it must have been for them to make it .
----
0.3 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Carl Franklin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.5 More honest about Alzheimer 's disease , I think , than Iris .
0.3 More honest about Alzheimer 's disease , I think , than Birot .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.5 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.6 ( A ) Hogawood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 99 (4.6%)
Example fails:
0.2 I value this movie, but I used to hate it.
----
0.6 In the past I would admire this movie, even though now I hate it.
----
0.6 In the past I would like this movie, even though now I dislike it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 6 (0.4%)
Example fails:
0.6 I can't say I welcome this scene.
----
0.9 I can't say I welcome the scene.
----
0.8 I can't say I welcome this movie.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 80 (16.0%)
Example fails:
1.0 I can't say, given my history with movies, that this actor is amazing.
----
1.0 I can't say, given that I bought it last week, that the actor is wonderful.
----
0.8 I can't say, given that we watched a lot, that the was a beautiful director.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 39 (5.3%)
Example fails:
1.0 This comedy movie is serious, not rib-tickling
----
1.0 The comedy movie is serious
----
1.0 The comedy movie was serious, not rib-tickling
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
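###Markdown
The evaluation cells in this notebook all repeat the same boilerplate: build a `BatchedInference` pipeline for one checkpoint, define `pred_and_conf`, configure `wandb`, then run and save the test suite. Below is a minimal sketch of how that could be factored into one reusable helper. `evaluate_checkpoint` and its parameters are hypothetical names introduced here, and the sketch assumes the `BatchedInference`, `TestSuite`, `save_test_results`, and `test_suite_path` objects already defined earlier in this notebook.
###Code
import numpy as np
import wandb


def evaluate_checkpoint(model_name, checkpoint_path, run_name, results_path,
                        batch_size=32, seed=0):
    """Hypothetical helper wrapping the per-seed evaluation boilerplate of this notebook."""
    # Load the fine-tuned checkpoint into a batched inference pipeline.
    pipeline = BatchedInference.from_model_name(
        model_name, checkpoint_path=checkpoint_path, device="cuda"
    )

    def pred_and_conf(data):
        # Split the inputs into mini-batches and collect per-batch outputs.
        batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
        predictions, confidences = [], []
        for batch in batches:
            preds, confs = pipeline(batch)
            predictions.append(preds.numpy().tolist())
            confidences.append(confs.numpy())
        return np.hstack(predictions), np.vstack(confidences)

    # Same run-metadata layout as the surrounding cells.
    config = {
        "project_name": "checklist_evaluation",
        "run_name": run_name,
        "model": model_name,
        "checkpoint": checkpoint_path,
        "test_suite": test_suite_path,
        "results_path": results_path
    }
    wandb.init(config=config, project=config["project_name"], name=config["run_name"])

    test_suite = TestSuite.from_file(test_suite_path)
    test_suite.run(pred_and_conf, overwrite=True, seed=seed)
    save_test_results(config, test_suite)
    test_suite.summary()
    return test_suite

# Example call, reusing the paths of the "Random Seed 6 - Vanilla" run below:
# evaluate_checkpoint(
#     "albert-large-v2",
#     "model-outputs/final-models/rs6-shuffle-train/albert-large-v2_2.pt",
#     run_name="rs6-shuffle-train_2_testset_19_07_21",
#     results_path="results/checklist/rs6-shuffle-train_2_testset_19_07_21.json",
# )
###Output
_____no_output_____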
###Markdown
Random Seed 6 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs6-shuffle-train/albert-large-v2_2.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the inputs into mini-batches so the GPU pipeline never sees the whole suite at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        predictions.append(preds.numpy().tolist())
        confidences.append(confs.numpy())
    # Concatenate the per-batch results into single prediction and confidence arrays.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs6-shuffle-train_2_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs6-shuffle-train_2_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 11 (2.2%)
Example fails:
0.1 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Spencer Tracy .
0.2 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Spencer Tracy. I abhor it.
0.2 And in truth , cruel as it may sound , he makes Arnold Schwarzenegger look like Spencer Tracy. I despise it.
----
0.2 ( Director ) Byler may yet have a great movie in him , but Charlotte Sometimes is only half of one .
0.3 ( Director ) Byler may yet have a great movie in him , but Charlotte Sometimes is only half of one. I dread it.
----
0.1 Though the book runs only about 300 pages , it is so densely packed ... that even an ambitious adaptation and elaborate production like Mr. Schepisi 's seems skimpy and unclear .
0.2 Though the book runs only about 300 pages , it is so densely packed ... that even an ambitious adaptation and elaborate production like Mr. Schepisi 's seems skimpy and unclear. I abhor it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 46 (9.2%)
Example fails:
0.9 Must be seen to be believed .
0.5 Must be seen before be believed .
----
0.7 A film of precious increments artfully camouflaged as everyday activities .
0.5 making film of precious increments artfully camouflaged as everyday activities .
----
0.3 The stories here suffer from the chosen format .
0.9 The stories here suffer in the chosen format .
0.7 The stories here suffer through the chosen format .
----
NER
Change names
Test cases: 147
Fails (rate): 3 (2.0%)
Example fails:
0.4 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Robin Williams 's lecture so I could listen to a teacher with humor , passion , and verve .
0.7 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Sarah Bennett lecture so I could listen to a teacher with humor , passion , and verve .
0.6 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Jennifer Cruz lecture so I could listen to a teacher with humor , passion , and verve .
----
0.4 Arnold 's jump from little screen to big will leave frowns on more than a few faces .
0.5 Nathaniel 's jump from little screen to big will leave frowns on more than a few faces .
----
0.8 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Allen personifies .
0.3 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Luke personifies .
0.3 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Luke personifies .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.4 That Zhang would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.8 Adam Rifkin would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.8 Michel Gondry would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
----
0.4 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Michel Gondry it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Adam Rifkin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.4 ( Fessenden ) is much more into ambiguity and creating mood than he is for on screen thrills
0.5 ( Einstein ) is much more into ambiguity and creating mood than he is for on screen thrills
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the Birot character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.9 George , hire a real director and good writers for the next installment , please .
0.1 Birot , hire a real director and good writers for the next installment , please .
----
0.4 Arnold 's jump from little screen to big will leave frowns on more than a few faces .
0.7 Phillip Noyce 's jump from little screen to big will leave frowns on more than a few faces .
0.6 Smokey Robinson 's jump from little screen to big will leave frowns on more than a few faces .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 4 (3.3%)
Example fails:
0.8 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Allen personifies .
0.4 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Britney personifies .
0.4 Not so much funny as aggressively sitcom-cute , it 's full of throwaway one-liners , not-quite jokes , and a determined TV amiability that Einstein personifies .
----
0.4 While Benigni ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
0.5 While Michel Gondry ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
0.5 While Adam Rifkin ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
----
0.5 Director Brian Levant , who never strays far from his sitcom roots , skates blithely from one implausible situation to another , pausing only to tie up loose ends with more bows than you 'll find on a French poodle .
0.7 Director Yvan Attal , who never strays far from his sitcom roots , skates blithely from one implausible situation to another , pausing only to tie up loose ends with more bows than you 'll find on a French poodle .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.4 ( Fessenden ) is much more into ambiguity and creating mood than he is for on screen thrills
0.6 ( Carl Franklin ) is much more into ambiguity and creating mood than he is for on screen thrills
----
0.4 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Carl Franklin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.4 That Zhang would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.9 Walt Becker would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
0.8 Phillip Noyce would make such a strainingly cute film -- with a blind orphan at its center , no less -- indicates where his ambitions have wandered .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 151 (7.0%)
Example fails:
0.1 I recommend this movie, but I used to dislike it.
----
0.3 I appreciate this movie, but I used to dislike it.
----
1.0 I hate this movie, but I used to recommend it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 16 (1.2%)
Example fails:
1.0 I can't say I appreciate this director.
----
1.0 I can't say I appreciate the actor.
----
0.7 I can't say I admire this director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 88 (17.6%)
Example fails:
1.0 I can't say, given that I bought it last week, that this was a brilliant movie.
----
1.0 I wouldn't say, given all that I've seen over the years, that this actor is wonderful.
----
0.8 I don't think, given it's a Friday, that this director is amazing.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 42 (5.7%)
Example fails:
1.0 The comedy movie is serious, not rib-tickling
----
0.3 This comedy movie was rib-tickling
----
0.0 This horror movie was terrifying
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 6 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs6-swa-linear-60-start2-drop-shuffle/albert-large-v2_7.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the inputs into mini-batches so the GPU pipeline never sees the whole suite at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        predictions.append(preds.numpy().tolist())
        confidences.append(confs.numpy())
    # Concatenate the per-batch results into single prediction and confidence arrays.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs6-swa-linear-60-start2-drop-shuffle_7_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs6-swa-linear-60-start2-drop-shuffle_7_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 1 (0.2%)
Example fails:
1.0 Is n't it great ?
0.0 Is n't it great. I would watch this again.
0.0 Is n't it great. I value it.
----
add negative phrases
Test cases: 500
Fails (rate): 3 (0.6%)
Example fails:
0.0 Director Dirk Shafer and co-writer Greg Hinton ride the dubious divide where gay porn reaches for serious drama .
0.1 Director Dirk Shafer and co-writer Greg Hinton ride the dubious divide where gay porn reaches for serious drama. I abhor it.
----
0.0 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut .
0.1 Roger Michell , who did an appealing job directing Persuasion and Notting Hill in England , gets too artsy in his American debut. I abhor it.
----
0.0 This pep-talk for faith , hope and charity does little to offend , but if saccharine earnestness were a crime , the film 's producers would be in the clink for life .
0.1 This pep-talk for faith , hope and charity does little to offend , but if saccharine earnestness were a crime , the film 's producers would be in the clink for life. I abhor it.
0.1 This pep-talk for faith , hope and charity does little to offend , but if saccharine earnestness were a crime , the film 's producers would be in the clink for life. I regret it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 34 (6.8%)
Example fails:
0.3 It is not a mass-market entertainment but an uncompromising attempt by one artist to think about another .
0.8 It is not a mass-market entertainment but an uncompromising attempt by one artist to think with another .
0.7 It is not a mass-market entertainment but the uncompromising attempt by one artist to think about another .
----
0.9 In its ragged , cheap and unassuming way , the movie works .
0.4 In its ragged , cheap and unassuming way , the above works .
----
0.0 The film is just a big , gorgeous , mind-blowing , breath-taking mess .
1.0 The film is just this big , gorgeous , mind-blowing , breath-taking mess .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.3 George , hire a real director and good writers for the next installment , please .
1.0 Austin , hire a real director and good writers for the next installment , please .
1.0 Scott , hire a real director and good writers for the next installment , please .
----
0.6 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.3 John Bailey is to Gary Cooper what a gnat is to a racehorse .
0.3 Joshua Nelson is to Gary Cooper what a gnat is to a racehorse .
----
0.4 Arnold 's jump from little screen to big will leave frowns on more than a few faces .
0.7 Nathaniel 's jump from little screen to big will leave frowns on more than a few faces .
0.5 Luke 's jump from little screen to big will leave frowns on more than a few faces .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 7 (4.5%)
Example fails:
0.2 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Mat Hoffman it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.6 A sensual performance from Abbass buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.4 A sensual performance from Britney buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.4 A sensual performance from Crispin Glover buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
----
0.1 More honest about Alzheimer 's disease , I think , than Iris .
0.8 More honest about Alzheimer 's disease , I think , than Paul Pender .
0.6 More honest about Alzheimer 's disease , I think , than Adam Rifkin .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 6 (4.9%)
Example fails:
0.3 George , hire a real director and good writers for the next installment , please .
1.0 Foster , hire a real director and good writers for the next installment , please .
1.0 Walt Becker , hire a real director and good writers for the next installment , please .
----
0.4 Arnold 's jump from little screen to big will leave frowns on more than a few faces .
0.6 Walt Becker 's jump from little screen to big will leave frowns on more than a few faces .
0.6 Phillip Noyce 's jump from little screen to big will leave frowns on more than a few faces .
----
0.8 Has it ever been possible to say that Williams has truly inhabited a character ?
0.3 Has it ever been possible to say that Merchant Ivory has truly inhabited a character ?
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 9 (7.3%)
Example fails:
0.3 While Benigni ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
0.5 While Michel Gondry ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
0.5 While Adam Rifkin ( who stars and co-wrote ) seems to be having a wonderful time , he might be alone in that .
----
0.3 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.9 Based on a Crispin Glover story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.6 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.3 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Jelinek would know how to do .
0.3 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Crispin Glover would know how to do .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 7 (4.5%)
Example fails:
0.8 Sam Jones became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.2 Merchant Ivory became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
----
0.6 A sensual performance from Abbass buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.4 A sensual performance from Ellen Pompeo buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
0.4 A sensual performance from Carl Franklin buoys the flimsy story , but her inner journey is largely unexplored and we 're left wondering about this exotic-looking woman whose emotional depths are only hinted at .
----
0.1 More honest about Alzheimer 's disease , I think , than Iris .
0.7 More honest about Alzheimer 's disease , I think , than Craig Bartlett .
0.6 More honest about Alzheimer 's disease , I think , than Carl Franklin .
----
Change Movie Industries
Test cases: 18
Fails (rate): 2 (11.1%)
Example fails:
0.6 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.4 Home Alone goes Hogawood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
----
0.1 Even when foreign directors ... borrow stuff from Hollywood , they invariably shake up the formula and make it more interesting .
0.9 Even when foreign directors ... borrow stuff from Cantonwood , they invariably shake up the formula and make it more interesting .
0.9 Even when foreign directors ... borrow stuff from Taiwood , they invariably shake up the formula and make it more interesting .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 68 (3.2%)
Example fails:
0.0 I think this movie is brilliant, but I used to think it was bad.
----
0.7 I despise this movie, but in the past I would recommend it.
----
0.3 I value this movie, but I used to hate it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 4 (0.3%)
Example fails:
1.0 I can't say I appreciate this actor.
----
0.9 I can't say I appreciate that actor.
----
1.0 I can't say I appreciate the director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 50 (10.0%)
Example fails:
1.0 I can't say, given that I bought it last week, that this was a brilliant movie.
----
0.8 I can't say, given it's a Friday, that we appreciate this director.
----
1.0 I can't say, given my history with movies, that the actor is wonderful.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 37 (5.0%)
Example fails:
1.0 This comedy movie is serious
----
0.6 This comedy movie is scary rather than rib-tickling
----
0.0 The horror movie was frightening
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 4 (0.3%)
Example fails:
0.5 Hallyuwood movies are tough
----
0.6 Hogawood movies are tough
----
0.5 The Tamalewood movie is tough
----
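###Markdown
The printed summaries make the vanilla vs. SWA comparison hard to read at a glance. Below is a small, standard-library-only sketch for pulling per-test fail rates out of captured summary text; the regular expression simply follows the `Test cases: N` / `Fails (rate): K (P%)` layout printed by `test_suite.summary()`, and the example strings are excerpts of the random seed 6 outputs above.
###Code
import re

# Matches the "<test name> / Test cases: N / Fails (rate): K (P%)" blocks in a summary printout.
SUMMARY_RE = re.compile(
    r"^(?P<name>.+)\nTest cases: (?P<cases>\d+)\nFails \(rate\): (?P<fails>\d+) \((?P<rate>[\d.]+)%\)",
    re.MULTILINE,
)

def fail_rates(summary_text):
    """Return {test name: fail rate in percent} parsed from a captured summary printout."""
    return {m.group("name").strip(): float(m.group("rate"))
            for m in SUMMARY_RE.finditer(summary_text)}

# Excerpts of the "used to, but now" results for random seed 6 (vanilla vs. SWA) shown above.
vanilla_rs6 = "used to, but now\nTest cases: 2152\nFails (rate): 151 (7.0%)"
swa_rs6 = "used to, but now\nTest cases: 2152\nFails (rate): 68 (3.2%)"

swa = fail_rates(swa_rs6)
for name, rate in fail_rates(vanilla_rs6).items():
    print(f"{name}: vanilla {rate}% vs. SWA {swa[name]}%")
###Output
_____no_output_____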
###Markdown
Random Seed 7 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs7-shuffle-train/albert-large-v2_2.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the inputs into mini-batches so the GPU pipeline never sees the whole suite at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        predictions.append(preds.numpy().tolist())
        confidences.append(confs.numpy())
    # Concatenate the per-batch results into single prediction and confidence arrays.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs7-shuffle-train_2_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs7-shuffle-train_2_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 2 (0.4%)
Example fails:
0.0 Too bad the former Murphy Brown does n't pop Reese back .
0.1 Too bad the former Murphy Brown does n't pop Reese back. I regret it.
----
0.9 This is such a high-energy movie where the drumming and the marching are so excellent , who cares if the story 's a little weak .
0.9 This is such a high-energy movie where the drumming and the marching are so excellent , who cares if the story 's a little weak. I dread it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 39 (7.8%)
Example fails:
0.9 Disney 's live-action division has a history of releasing cinematic flotsam , but this is one occasion when they have unearthed a rare gem .
0.4 Disney 's live-action division has a history of releasing cinematic flotsam , but this is one occasion when they seemingly unearthed a rare gem .
----
1.0 An impressive if flawed effort that indicates real talent .
0.0 An impressive if flawed effort hardly indicates real talent .
0.1 An impressive if flawed effort rarely indicates real talent .
----
0.5 In its ragged , cheap and unassuming way , the movie works .
0.0 In its ragged , cheap and unassuming way , the above works .
0.1 In its ragged , cheap and unassuming way , the code works .
----
NER
Change names
Test cases: 147
Fails (rate): 1 (0.7%)
Example fails:
0.5 George , hire a real director and good writers for the next installment , please .
0.4 Luke , hire a real director and good writers for the next installment , please .
0.4 Luke , hire a real director and good writers for the next installment , please .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.5 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Einstein it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.0 Who knows what exactly Godard is on about in this film , but his words and images do n't have to add up to mesmerize you .
1.0 Who knows what exactly Crispin Glover is on about in this film , but his words and images do n't have to add up to mesmerize you .
----
0.2 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Kaputschnik . ''
1.0 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Mat Hoffman . ''
1.0 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Carol Kane . ''
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.8 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.2 Imagine the Birot character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the James Woods character from Videodrome making a home movie of Polanski and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.4 Warmed-over Tarantino by way of wannabe Elmore Leonard .
0.5 Warmed-over Tarantino by way of wannabe Walt Becker .
----
0.7 Watching Haneke 's film is , aptly enough , a challenge and a punishment .
0.5 Watching Birot 's film is , aptly enough , a challenge and a punishment .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.6 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.4 Adam Sandler is to Reginald Hudlin what a gnat is to a racehorse .
0.4 Adam Sandler is to Reginald Hudlin what a gnat is to a racehorse .
----
0.7 Watching Haneke 's film is , aptly enough , a challenge and a punishment .
0.4 Watching Britney 's film is , aptly enough , a challenge and a punishment .
----
0.6 I would have preferred a transfer down the hall to Mr. Holland 's class for the music , or to Robin Williams 's lecture so I could listen to a teacher with humor , passion , and verve .
0.5 I would have preferred a transfer down the hall to Mr. Yvan Attal 's class for the music , or to Robin Williams 's lecture so I could listen to a teacher with humor , passion , and verve .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.5 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.8 For Carl Franklin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.7 More honest about Alzheimer 's disease , I think , than Iris .
0.3 More honest about Alzheimer 's disease , I think , than Birot .
0.5 More honest about Alzheimer 's disease , I think , than Foster .
----
0.2 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Kaputschnik . ''
1.0 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Walt Becker . ''
1.0 Droll caper-comedy remake of `` Big Deal on Madonna Street '' that 's a sly , amusing , laugh-filled little gem in which the ultimate `` Bellini '' begins to look like a `` real Carl Franklin . ''
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 94 (4.4%)
Example fails:
0.3 I think this movie is fantastic, but I used to think it was terrible.
----
0.2 I appreciate this movie, but I used to hate it.
----
0.1 I admire this movie, but I used to hate it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 11 (0.8%)
Example fails:
1.0 I would never say I appreciate this director.
----
1.0 I can't say I appreciate this actor.
----
1.0 I would never say I appreciate the director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 70 (14.0%)
Example fails:
1.0 I wouldn't say, given all that I've seen over the years, that this actor is wonderful.
----
0.8 I can't say, given that I bought it last week, that that movie was amazing.
----
0.6 I wouldn't say, given that we watched a lot, that the director is wonderful.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 32 (4.3%)
Example fails:
1.0 The comedy movie is serious, not rib-tickling
----
0.8 The drama movie was funny rather than serious
----
0.9 This drama movie is funny rather than serious
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 7 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs7-swa-linear-60-start2-drop-shuffle/albert-large-v2_6.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the inputs into mini-batches so the GPU pipeline never sees the whole suite at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        predictions.append(preds.numpy().tolist())
        confidences.append(confs.numpy())
    # Concatenate the per-batch results into single prediction and confidence arrays.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs7-swa-linear-60-start2-drop-shuffle_6_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs7-swa-linear-60-start2-drop-shuffle_6_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 1 (0.2%)
Example fails:
0.0 Schindler 's List it ai n't .
0.2 Schindler 's List it ai n't. I regret it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 34 (6.8%)
Example fails:
0.8 Like a south-of-the-border Melrose Place .
0.1 Like another south-of-the-border Melrose Place .
0.5 Like in south-of-the-border Melrose Place .
----
1.0 Workmanlike , maybe , but still a film with all the elements that made the other three great , scary times at the movies .
0.0 Workmanlike , maybe , but still a film lacking all the elements that made the other three great , scary times at the movies .
----
1.0 In its ragged , cheap and unassuming way , the movie works .
0.3 In its ragged , cheap and unassuming way , the above works .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.4 Imagine the Joseph Phillips character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.5 Imagine the James Woods character from Videodrome making a home movie of Melissa White and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.6 ( Davis ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Luke ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Luke ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
----
0.6 The principals in this cast are all fine , but Bishop and Stevenson are standouts .
0.2 The principals in this cast are all fine , but Bishop and William are standouts .
0.4 The principals in this cast are all fine , but Bishop and Luke are standouts .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.8 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.2 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Einstein 's moist , deeply emotional eyes shine through this bogus veneer ...
0.3 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Britney 's moist , deeply emotional eyes shine through this bogus veneer ...
----
0.7 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.3 For Britney it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.6 ( Davis ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.3 ( Einstein ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Crispin Glover ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.6 Imagine the James Woods character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.2 Imagine the Birot character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
0.5 Imagine the Craig Bartlett character from Videodrome making a home movie of Audrey Rose and showing it to the kid from The Sixth Sense and you 've imagined The Ring .
----
0.3 Oedekerk wrote Patch Adams , for which he should not be forgiven .
0.8 Oedekerk wrote Carl Franklin , for which he should not be forgiven .
----
0.7 Its lack of quality earns it a place alongside those other two recent Dumas botch-jobs , The Man in the Iron Mask and The Musketeer .
0.5 Its lack of quality earns it a place alongside those other two recent Gosling botch-jobs , The Man in the Iron Mask and The Musketeer .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.5 The only thing in Pauline and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.8 The only thing in Jelinek and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.7 The only thing in Crispin Glover and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
----
0.2 Has it ever been possible to say that Williams has truly inhabited a character ?
0.7 Has it ever been possible to say that Michel Gondry has truly inhabited a character ?
----
0.3 Oedekerk wrote Patch Adams , for which he should not be forgiven .
0.8 Oedekerk wrote Einstein , for which he should not be forgiven .
0.5 Oedekerk wrote Crispin Glover , for which he should not be forgiven .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 3 (1.9%)
Example fails:
0.8 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.3 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Smokey Robinson 's moist , deeply emotional eyes shine through this bogus veneer ...
0.4 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Carl Franklin 's moist , deeply emotional eyes shine through this bogus veneer ...
----
0.6 ( Davis ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Birot ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Smokey Robinson ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
----
0.8 More honest about Alzheimer 's disease , I think , than Iris .
0.2 More honest about Alzheimer 's disease , I think , than Birot .
0.2 More honest about Alzheimer 's disease , I think , than Gosling .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.4 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.6 Home Alone goes Tollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.6 Home Alone goes Ghollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 90 (4.2%)
Example fails:
0.4 I welcome this movie, but I used to hate it.
----
0.9 I despise this movie, but I used to value it.
----
0.7 I hate this movie, but I used to appreciate it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 97 (7.2%)
Example fails:
1.0 I would never say I enjoy this actor.
----
1.0 I can't say I love this show.
----
1.0 I can't say I appreciate the director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 248 (49.6%)
Example fails:
1.0 I can't say, given all that I've seen over the years, that the show is good.
----
1.0 I wouldn't say, given that I bought it last week, that we love the director.
----
1.0 I can't say, given all that I've seen over the years, that I welcome that scene.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 34 (4.6%)
Example fails:
1.0 This drama movie was funny rather than serious
----
0.1 This horror movie was frightening
----
0.3 The horror movie was frightening
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 8 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs8-shuffle-train/albert-large-v2_3.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
    # Split the inputs into mini-batches so the GPU pipeline never sees the whole suite at once.
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    predictions = []
    confidences = []
    for batch in batches:
        preds, confs = pipeline(batch)
        predictions.append(preds.numpy().tolist())
        confidences.append(confs.numpy())
    # Concatenate the per-batch results into single prediction and confidence arrays.
    predictions = np.hstack(predictions)
    confidences = np.vstack(confidences)
    return predictions, confidences
results_path = "results/checklist/rs8-shuffle-train_3_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs8-shuffle-train_3_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 1 (0.2%)
Example fails:
0.9 Is n't it great ?
0.0 Is n't it great. I would watch this again.
0.4 Is n't it great. I recommend it.
----
add negative phrases
Test cases: 500
Fails (rate): 2 (0.4%)
Example fails:
0.1 The only thing that could possibly make them less interesting than they already are is for them to get full montied into a scrappy , jovial team .
0.2 The only thing that could possibly make them less interesting than they already are is for them to get full montied into a scrappy , jovial team. Never watching this again.
----
0.0 Jolie 's performance vanishes somewhere between her hair and her lips .
0.1 Jolie 's performance vanishes somewhere between her hair and her lips. Never watching this again.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 42 (8.4%)
Example fails:
0.2 A strong first quarter , slightly less so second quarter , and average second half .
0.9 A strong first quarter , slightly less so second quarter , below average second half .
0.8 extremely strong first quarter , slightly less so second quarter , and average second half .
----
0.8 If your senses have n't been dulled by slasher films and gorefests , if you 're a connoisseur of psychological horror , this is your ticket .
0.2 If your senses already n't been dulled by slasher films and gorefests , if you 're a connoisseur of psychological horror , this is your ticket .
----
0.4 Do n't plan on the perfect ending , but Sweet Home Alabama hits the mark with critics who escaped from a small town life .
0.8 Do n't plan on the perfect ending , but Sweet Home Alabama hits the mark in critics who escaped from a small town life .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.7 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.3 De Niro may enjoy the same free ride from critics afforded to Daniel Sanders in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Joseph Phillips in the lazy Bloodwork .
----
0.5 Tim Story 's not there yet - but ` Barbershop ' shows he 's on his way .
0.3 Matthew Ross 's not there yet - but ` Barbershop ' shows he 's on his way .
0.3 Michael Ward 's not there yet - but ` Barbershop ' shows he 's on his way .
----
0.6 ( Davis ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Luke ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
0.5 ( Luke ) has a bright , chipper style that keeps things moving , while never quite managing to connect her wish-fulfilling characters to the human race .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 8 (5.1%)
Example fails:
0.4 With Danilo Donati 's witty designs and Dante Spinotti 's luscious cinematography , this might have made a decent children 's movie -- if only Benigni had n't insisted on casting himself in the title role .
0.5 With Danilo Donati 's witty designs and Dante Spinotti 's luscious cinematography , this might have made a decent children 's movie -- if only Einstein had n't insisted on casting himself in the title role .
----
0.4 Not everything works , but the average is higher than in Mary and most other recent comedies .
0.5 Not everything works , but the average is higher than in Yvan Attal and most other recent comedies .
0.5 Not everything works , but the average is higher than in Crispin Glover and most other recent comedies .
----
0.4 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that George Lucas has long forgotten .
0.6 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that Einstein has long forgotten .
0.6 As lo-fi as the special effects are , the folks who cobbled Nemesis together indulge the force of humanity over hardware in a way that Crispin Glover has long forgotten .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.8 Flat , but with a revelatory performance by Michelle Williams .
0.5 Flat , but with a revelatory performance by Birot .
----
0.4 Being author Wells ' great-grandson , you 'd think filmmaker Simon Wells would have more reverence for the material .
0.5 Being author Walt Becker ' great-grandson , you 'd think filmmaker Simon Walt Becker would have more reverence for the material .
----
0.7 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.2 De Niro may enjoy the same free ride from critics afforded to Gulpilil in the lazy Bloodwork .
0.3 De Niro may enjoy the same free ride from critics afforded to Polanski in the lazy Bloodwork .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 3 (2.4%)
Example fails:
0.8 Watching Haneke 's film is , aptly enough , a challenge and a punishment .
0.3 Watching Michel Gondry 's film is , aptly enough , a challenge and a punishment .
----
0.8 Cho 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.5 Michel Gondry 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
----
0.7 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Seagal in the lazy Bloodwork .
0.4 De Niro may enjoy the same free ride from critics afforded to Reginald Hudlin in the lazy Bloodwork .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 9 (5.7%)
Example fails:
0.5 Ms. Fulford-Wierzbicki is almost spooky in her sulky , calculating Lolita turn .
0.4 Ms. Birot is almost spooky in her sulky , calculating Lolita turn .
----
0.4 With Danilo Donati 's witty designs and Dante Spinotti 's luscious cinematography , this might have made a decent children 's movie -- if only Benigni had n't insisted on casting himself in the title role .
0.5 With Danilo Donati 's witty designs and Dante Spinotti 's luscious cinematography , this might have made a decent children 's movie -- if only Merchant Ivory had n't insisted on casting himself in the title role .
----
0.5 Tim Story 's not there yet - but ` Barbershop ' shows he 's on his way .
0.2 Birot 's not there yet - but ` Barbershop ' shows he 's on his way .
0.2 Craig Bartlett 's not there yet - but ` Barbershop ' shows he 's on his way .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 157 (7.3%)
Example fails:
0.3 I enjoy this movie, but in the past I would hate it.
----
0.2 I like this movie, but in the past I would hate it.
----
0.1 I value this movie, but I used to regret it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 2 (0.1%)
Example fails:
0.5 I would never say I admire this director.
----
0.5 I would never say I appreciate this director.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 144 (28.8%)
Example fails:
0.8 I don't think, given all that I've seen over the years, that the scene is amazing.
----
0.6 I don't think, given my history with movies, that this movie was beautiful.
----
1.0 I don't think, given that I bought it last week, that we appreciate the show.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 34 (4.6%)
Example fails:
0.1 This drama movie was serious
----
1.0 This comedy movie is serious, not rib-tickling
----
0.0 The horror movie is terrifying
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 3 (0.2%)
Example fails:
0.6 Hogawood movies are tough
----
0.5 Tamalewood movies are tough
----
0.5 Hallyuwood movies are tough
----
###Markdown
Random Seed 8 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs8-swa-linear-60-start2-drop-shuffle/albert-large-v2_4.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
data = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
predictions = []
confidences = []
    for batch in data:  # `data` now holds the list of batches
        preds, confs = pipeline(batch)
preds = preds.numpy().tolist()
confs = confs.numpy()
predictions.append(preds)
confidences.append(confs)
predictions = np.hstack(predictions)
confidences = np.vstack(confidences)
return predictions, confidences
results_path = "results/checklist/rs8-swa-linear-60-start2-drop-shuffle_4_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs8-swa-linear-60-start2-drop-shuffle_4_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 0 (0.0%)
change neutral words with BERT
Test cases: 500
Fails (rate): 46 (9.2%)
Example fails:
1.0 The production values are up there .
0.0 The production values are up below .
0.3 The production values are up by .
----
1.0 Workmanlike , maybe , but still a film with all the elements that made the other three great , scary times at the movies .
0.0 Workmanlike , maybe , but still a film lacking all the elements that made the other three great , scary times at the movies .
----
1.0 Run , do n't walk , to see this barbed and bracing comedy on the big screen .
0.3 Run , do n't walk , to see more barbed and bracing comedy on the big screen .
----
NER
Change names
Test cases: 147
Fails (rate): 2 (1.4%)
Example fails:
0.4 Tim Story 's not there yet - but ` Barbershop ' shows he 's on his way .
0.5 Joseph Phillips 's not there yet - but ` Barbershop ' shows he 's on his way .
0.5 Joshua Nelson 's not there yet - but ` Barbershop ' shows he 's on his way .
----
0.5 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.7 John Bailey is to Gary Cooper what a gnat is to a racehorse .
0.6 Joshua Nelson is to Gary Cooper what a gnat is to a racehorse .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 6 (3.8%)
Example fails:
0.9 While not as aggressively impressive as its American counterpart , `` In the Bedroom , '' Moretti 's film makes its own , quieter observations
0.5 While not as aggressively impressive as its American counterpart , `` In the Bedroom , '' Carol Kane 's film makes its own , quieter observations
----
0.6 The whole cast looks to be having so much fun with the slapstick antics and silly street patois , tossing around obscure expressions like Bellini and Mullinski , that the compact 86 minutes breezes by .
0.5 The whole cast looks to be having so much fun with the slapstick antics and silly street patois , tossing around obscure expressions like Bellini and Seagal , that the compact 86 minutes breezes by .
0.5 The whole cast looks to be having so much fun with the slapstick antics and silly street patois , tossing around obscure expressions like Yvan Attal and Mullinski , that the compact 86 minutes breezes by .
----
0.4 Tim Story 's not there yet - but ` Barbershop ' shows he 's on his way .
0.9 Crispin Glover 's not there yet - but ` Barbershop ' shows he 's on his way .
0.9 Yvan Attal 's not there yet - but ` Barbershop ' shows he 's on his way .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.3 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.5 Based on a Carl Franklin story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
0.5 Based on a Walt Becker story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.4 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Hélène Angel in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Eric in the lazy Bloodwork .
----
0.5 Adam Sandler is to Gary Cooper what a gnat is to a racehorse .
0.7 Walt Becker is to Gary Cooper what a gnat is to a racehorse .
0.6 Craig Bartlett is to Gary Cooper what a gnat is to a racehorse .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.5 Being author Wells ' great-grandson , you 'd think filmmaker Simon Wells would have more reverence for the material .
0.6 Being author Paul Pender ' great-grandson , you 'd think filmmaker Simon Paul Pender would have more reverence for the material .
----
0.3 Based on a David Leavitt story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
1.0 Based on a Crispin Glover story , the film shares that writer 's usual blend of observant cleverness , too-facile coincidence and slightly noxious preciousness .
----
0.4 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Roberts in the lazy Bloodwork .
0.5 De Niro may enjoy the same free ride from critics afforded to Reginald Hudlin in the lazy Bloodwork .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.5 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Carl Franklin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.4 Tim Story 's not there yet - but ` Barbershop ' shows he 's on his way .
0.8 Merchant Ivory 's not there yet - but ` Barbershop ' shows he 's on his way .
0.8 Gosling 's not there yet - but ` Barbershop ' shows he 's on his way .
----
0.6 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Susan Sarandon at their raunchy best , even hokum goes down easily .
0.4 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Birot at their raunchy best , even hokum goes down easily .
0.4 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Merchant Ivory at their raunchy best , even hokum goes down easily .
----
Change Movie Industries
Test cases: 18
Fails (rate): 0 (0.0%)
Temporal
used to, but now
Test cases: 2152
Fails (rate): 66 (3.1%)
Example fails:
0.5 I admire this movie, but I used to despise it.
----
0.8 In the past I would recommend this movie, although now I regret it.
----
0.0 In the past I would dislike this movie, even though now I liked it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 0 (0.0%)
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 38 (7.6%)
Example fails:
0.6 I wouldn't say, given that we watched a lot, that the director is wonderful.
----
0.9 I wouldn't say, given that we watched a lot, that this movie is amazing.
----
1.0 I wouldn't say, given all that I've seen over the years, that this actor is wonderful.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 34 (4.6%)
Example fails:
0.7 This comedy movie was serious
----
1.0 This comedy movie is serious, not rib-tickling
----
1.0 The comedy movie was serious, not rib-tickling
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 9 - Vanilla
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs9-shuffle-train/albert-large-v2_4.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
data = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
predictions = []
confidences = []
    for batch in data:  # `data` now holds the list of batches
        preds, confs = pipeline(batch)
preds = preds.numpy().tolist()
confs = confs.numpy()
predictions.append(preds)
confidences.append(confs)
predictions = np.hstack(predictions)
confidences = np.vstack(confidences)
return predictions, confidences
results_path = "results/checklist/rs9-shuffle-train_4_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs9-shuffle-train_4_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 0 (0.0%)
change neutral words with BERT
Test cases: 500
Fails (rate): 34 (6.8%)
Example fails:
0.6 Like a south-of-the-border Melrose Place .
0.1 Like another south-of-the-border Melrose Place .
0.3 Like in south-of-the-border Melrose Place .
----
1.0 Workmanlike , maybe , but still a film with all the elements that made the other three great , scary times at the movies .
0.0 Workmanlike , maybe , but still a film lacking all the elements that made the other three great , scary times at the movies .
----
0.4 In its ragged , cheap and unassuming way , the movie works .
0.6 In its ragged , cheap and unassuming way , the technique works .
0.6 In its ragged , cheap and unassuming way , this movie works .
----
NER
Change names
Test cases: 147
Fails (rate): 1 (0.7%)
Example fails:
0.0 More honest about Alzheimer 's disease , I think , than Iris .
0.8 More honest about Alzheimer 's disease , I think , than Taylor .
0.7 More honest about Alzheimer 's disease , I think , than Maria .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.7 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Lohman 's moist , deeply emotional eyes shine through this bogus veneer ...
0.5 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Britney 's moist , deeply emotional eyes shine through this bogus veneer ...
0.5 ( A ) Hollywood sheen bedevils the film from the very beginning ... ( but ) Einstein 's moist , deeply emotional eyes shine through this bogus veneer ...
----
0.6 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Hatfield and Hicks .
0.4 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Britney and Hicks .
0.4 It suggests the wide-ranging effects of media manipulation , from the kind of reporting that is done by the supposedly liberal media ... to the intimate and ultimately tragic heartache of maverick individuals like Crispin Glover and Hicks .
----
0.3 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.6 For Einstein it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.6 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 4 (3.3%)
Example fails:
0.6 Watching Haneke 's film is , aptly enough , a challenge and a punishment .
0.3 Watching Foster 's film is , aptly enough , a challenge and a punishment .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.7 De Niro may enjoy the same free ride from critics afforded to Eric in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Yong Kang in the lazy Bloodwork .
----
0.2 Flat , but with a revelatory performance by Michelle Williams .
0.8 Flat , but with a revelatory performance by Gosling .
0.6 Flat , but with a revelatory performance by Merchant Ivory .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 5 (4.1%)
Example fails:
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.7 De Niro may enjoy the same free ride from critics afforded to Sarah in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Reginald Hudlin in the lazy Bloodwork .
----
0.6 Watching Haneke 's film is , aptly enough , a challenge and a punishment .
0.2 Watching Britney 's film is , aptly enough , a challenge and a punishment .
0.3 Watching Einstein 's film is , aptly enough , a challenge and a punishment .
----
0.8 The only thing in Pauline and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.5 The only thing in Britney and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 5 (3.2%)
Example fails:
0.0 More honest about Alzheimer 's disease , I think , than Iris .
0.9 More honest about Alzheimer 's disease , I think , than Carl Franklin .
0.8 More honest about Alzheimer 's disease , I think , than Phillip Noyce .
----
0.9 Benefits from a strong performance from Zhao , but it 's Dong Jie 's face you remember at the end .
0.3 Benefits from a strong performance from Birot , but it 's Dong Jie 's face you remember at the end .
----
0.9 Steve Irwin 's method is Ernest Hemmingway at accelerated speed and volume .
0.0 Steve Irwin 's method is Birot at accelerated speed and volume .
----
Change Movie Industries
Test cases: 18
Fails (rate): 3 (16.7%)
Example fails:
0.7 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Hollywood hoo-ha .
0.2 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Aussiewood hoo-ha .
0.2 I kept wishing I was watching a documentary about the wartime Navajos and what they accomplished instead of all this specious Peruliwood hoo-ha .
----
0.2 Even when foreign directors ... borrow stuff from Hollywood , they invariably shake up the formula and make it more interesting .
0.8 Even when foreign directors ... borrow stuff from Cantonwood , they invariably shake up the formula and make it more interesting .
0.8 Even when foreign directors ... borrow stuff from Aussiewood , they invariably shake up the formula and make it more interesting .
----
0.8 An epic of grandeur and scale that 's been decades gone from the popcorn pushing sound stages of Hollywood .
0.2 An epic of grandeur and scale that 's been decades gone from the popcorn pushing sound stages of Aussiewood .
0.2 An epic of grandeur and scale that 's been decades gone from the popcorn pushing sound stages of Hallyuwood .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 76 (3.5%)
Example fails:
0.3 I like this movie, but in the past I would regret it.
----
0.7 In the past I would like this movie, even though now I dread it.
----
0.3 I like this movie, but I used to despise it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 15 (1.1%)
Example fails:
0.7 I can't say I welcome that director.
----
1.0 I can't say I welcome this movie.
----
0.6 I can't say I welcome that actor.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 0 (0.0%)
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 37 (5.0%)
Example fails:
0.0 This horror movie was scary
----
0.7 This horror movie was calming
----
0.0 The horror movie was scary
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
###Markdown
Random Seed 9 - SWA
###Code
model_name = "albert-large-v2"
checkpoint_path = "model-outputs/final-models/rs9-swa-linear-75-start2-drop-shuffle/albert-large-v2_6.pt"
pipeline = BatchedInference.from_model_name(
model_name, checkpoint_path=checkpoint_path, device="cuda"
)
def pred_and_conf(data, batch_size=32):
data = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
predictions = []
confidences = []
    for batch in data:  # `data` now holds the list of batches
        preds, confs = pipeline(batch)
preds = preds.numpy().tolist()
confs = confs.numpy()
predictions.append(preds)
confidences.append(confs)
predictions = np.hstack(predictions)
confidences = np.vstack(confidences)
return predictions, confidences
results_path = "results/checklist/rs9-swa-linear-75-start2-drop-shuffle_6_testset_19_07_21.json"
config = {
"project_name": "checklist_evaluation",
"run_name": "rs9-swa-linear-75-start2-drop-shuffle_6_testset_19_07_21",
"model": "albert-large-v2",
"checkpoint": checkpoint_path,
"test_suite": test_suite_path,
"results_path": results_path
}
wandb.init(config=config, project=config["project_name"], name=config["run_name"])
test_suite = TestSuite.from_file(test_suite_path)
test_suite.run(pred_and_conf, overwrite=True, seed=0)
save_test_results(config, test_suite)
test_suite.summary()
###Output
Vocabulary
Single positive words
Test cases: 22
Fails (rate): 0 (0.0%)
Single negative words
Test cases: 14
Fails (rate): 0 (0.0%)
Sentiment-laden words in context
Test cases: 1350
Fails (rate): 0 (0.0%)
add positive phrases
Test cases: 500
Fails (rate): 0 (0.0%)
add negative phrases
Test cases: 500
Fails (rate): 1 (0.2%)
Example fails:
0.1 In the end , Punch-Drunk Love is one of those films that I wanted to like much more than I actually did .
0.3 In the end , Punch-Drunk Love is one of those films that I wanted to like much more than I actually did. I dread it.
----
change neutral words with BERT
Test cases: 500
Fails (rate): 39 (7.8%)
Example fails:
0.1 Ecks this one off your must-see list .
0.6 Ecks just one off your must-see list .
----
0.4 Must be seen to be believed .
1.0 Must be seen must be believed .
----
0.9 Absorbing and disturbing -- perhaps more disturbing than originally intended -- but a little clarity would have gone a long way .
0.0 Absorbing was disturbing -- perhaps more disturbing than originally intended -- but a little clarity would have gone a long way .
0.0 Absorbing is disturbing -- perhaps more disturbing than originally intended -- but a little clarity would have gone a long way .
----
NER
Change names
Test cases: 147
Fails (rate): 4 (2.7%)
Example fails:
0.2 After seeing SWEPT AWAY , I feel sorry for Madonna .
0.6 After seeing SWEPT AWAY , I feel sorry for Chelsea .
0.5 After seeing SWEPT AWAY , I feel sorry for Nicole .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.7 De Niro may enjoy the same free ride from critics afforded to John Bailey in the lazy Bloodwork .
0.6 De Niro may enjoy the same free ride from critics afforded to Joshua Nelson in the lazy Bloodwork .
----
0.2 George , hire a real director and good writers for the next installment , please .
0.6 Scott , hire a real director and good writers for the next installment , please .
----
Polarizing Negative Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.3 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.6 For Michel Gondry it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.6 For Paul Pender it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.6 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Hubert 's punches ) , but it should go down smoothly enough with popcorn .
0.4 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Yvan Attal 's punches ) , but it should go down smoothly enough with popcorn .
0.4 Wasabi is slight fare indeed , with the entire project having the feel of something tossed off quickly ( like one of Jelinek 's punches ) , but it should go down smoothly enough with popcorn .
----
0.7 Sam Jones became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.3 Einstein became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
0.3 Britney became a very lucky filmmaker the day Wilco got dropped from their record label , proving that one man 's ruin may be another 's fortune .
----
Polarizing Positive Names - Negative Instances
Test cases: 123
Fails (rate): 7 (5.7%)
Example fails:
0.6 Cho 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.4 Ellen Pompeo 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.4 Phillip Noyce 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
----
0.5 De Niro may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
0.7 Walt Becker may enjoy the same free ride from critics afforded to Clint Eastwood in the lazy Bloodwork .
----
0.4 The only thing in Pauline and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.6 The only thing in Ellen Pompeo and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
0.6 The only thing in Walt Becker and Paulette that you have n't seen before is a scene featuring a football field-sized Oriental rug crafted out of millions of vibrant flowers .
----
Polarizing Negative Names - Negative Instances
Test cases: 123
Fails (rate): 10 (8.1%)
Example fails:
0.2 After seeing SWEPT AWAY , I feel sorry for Madonna .
0.7 After seeing SWEPT AWAY , I feel sorry for Yvan Attal .
0.5 After seeing SWEPT AWAY , I feel sorry for Jelinek .
----
0.2 George , hire a real director and good writers for the next installment , please .
0.8 Paul Pender , hire a real director and good writers for the next installment , please .
0.8 Crispin Glover , hire a real director and good writers for the next installment , please .
----
0.6 Cho 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
0.4 Carol Kane 's fans are sure to be entertained ; it 's only fair in the interest of full disclosure to say that -- on the basis of this film alone -- I 'm not one of them .
----
Polarizing Positive Names - Positive Instances
Test cases: 157
Fails (rate): 4 (2.5%)
Example fails:
0.3 For Benigni it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Carl Franklin it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
0.7 For Walt Becker it was n't Shakespeare whom he wanted to define his career with but Pinocchio .
----
0.8 More honest about Alzheimer 's disease , I think , than Iris .
0.0 More honest about Alzheimer 's disease , I think , than Birot .
0.2 More honest about Alzheimer 's disease , I think , than Gosling .
----
0.6 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Susan Sarandon at their raunchy best , even hokum goes down easily .
0.4 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Walt Becker at their raunchy best , even hokum goes down easily .
0.4 When your leading ladies are a couple of screen-eating dominatrixes like Goldie Hawn and Carl Franklin at their raunchy best , even hokum goes down easily .
----
Change Movie Industries
Test cases: 18
Fails (rate): 1 (5.6%)
Example fails:
0.4 Home Alone goes Hollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.6 Home Alone goes Kollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
0.5 Home Alone goes Nollywood , a funny premise until the kids start pulling off stunts not even Steven Spielberg would know how to do .
----
Temporal
used to, but now
Test cases: 2152
Fails (rate): 84 (3.9%)
Example fails:
0.3 I think this movie is wonderful, but in the past I thought it was terrible.
----
0.4 I welcome this movie, but I used to dislike it.
----
0.6 I abhor this movie, but I used to like it.
----
Negation
Simple negations: negative
Test cases: 1350
Fails (rate): 159 (11.8%)
Example fails:
1.0 I would never say I welcome this director.
----
0.8 I can't say I value that scene.
----
1.0 I can't say I love the actor.
----
Hard: Negation of positive with neutral stuff in the middle (should be negative)
Test cases: 500
Fails (rate): 234 (46.8%)
Example fails:
1.0 I can't say, given that we watched a lot, that we welcome this director.
----
1.0 I can't say, given all that I've seen over the years, that we welcome the director.
----
1.0 I wouldn't say, given it's a Friday, that I welcome this show.
----
Synonym/Antonym
Movie sentiments
Test cases: 58
Fails (rate): 0 (0.0%)
Sentiment
Movie genre specific sentiments
Test cases: 736
Fails (rate): 38 (5.2%)
Example fails:
1.0 The horror movie is calming
----
0.6 This comedy movie is serious
----
1.0 This drama movie was funny rather than serious
----
Movie Industries specific sentiments
Test cases: 1200
Fails (rate): 0 (0.0%)
|
Week-2/digits_classification.ipynb | ###Markdown
MNIST digits classification with TensorFlow
###Code
import numpy as np
from sklearn.metrics import accuracy_score
from matplotlib import pyplot as plt
%matplotlib inline
import tensorflow as tf
print("We're using TF", tf.__version__)
import sys
sys.path.append("../..")
from collections import defaultdict
import numpy as np
from keras.models import save_model
import tensorflow as tf
import keras
from keras import backend as K
from matplotlib import pyplot as plt
from IPython.display import clear_output, display_html, HTML
import contextlib
import time
import io
import urllib
import base64
# import grading
# import matplotlib_utils
# from importlib import reload
# reload(matplotlib_utils)
# # import grading_utils
# # reload(grading_utils)
# import keras_utils
# from keras_utils import reset_tf_session
###Output
We're using TF 1.15.2
###Markdown
Fill in your Coursera token and email. To successfully submit your answers to our grader, please fill in your Coursera submission token and email.
###Code
grader = grading.Grader(assignment_key="XtD7ho3TEeiHQBLWejjYAA",
all_parts=["9XaAS", "vmogZ", "RMv95", "i8bgs", "rE763"])
# token expires every 30 min
COURSERA_TOKEN = "### YOUR TOKEN HERE ###"
COURSERA_EMAIL = "### YOUR EMAIL HERE ###"
###Output
_____no_output_____
###Markdown
Look at the data. In this task we have 50000 28x28 images of digits from 0 to 9. We will train a classifier on this data.
###Code
import keras
def load_dataset(flatten=False):
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
# normalize x
X_train = X_train.astype(float) / 255.
X_test = X_test.astype(float) / 255.
# we reserve the last 10000 training examples for validation
X_train, X_val = X_train[:-10000], X_train[-10000:]
y_train, y_val = y_train[:-10000], y_train[-10000:]
if flatten:
X_train = X_train.reshape([X_train.shape[0], -1])
X_val = X_val.reshape([X_val.shape[0], -1])
X_test = X_test.reshape([X_test.shape[0], -1])
return X_train, y_train, X_val, y_val, X_test, y_test
# import preprocessed_mnist
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset()
# X contains rgb values divided by 255
print("X_train [shape %s] sample patch:\n" % (str(X_train.shape)), X_train[1, 15:20, 5:10])
print("A closeup of a sample patch:")
plt.imshow(X_train[1, 15:20, 5:10], cmap="Greys")
plt.show()
print("And the whole sample:")
plt.imshow(X_train[1], cmap="Greys")
plt.show()
print("y_train [shape %s] 10 samples:\n" % (str(y_train.shape)), y_train[:10])
###Output
X_train [shape (50000, 28, 28)] sample patch:
[[0. 0.29803922 0.96470588 0.98823529 0.43921569]
[0. 0.33333333 0.98823529 0.90196078 0.09803922]
[0. 0.33333333 0.98823529 0.8745098 0. ]
[0. 0.33333333 0.98823529 0.56862745 0. ]
[0. 0.3372549 0.99215686 0.88235294 0. ]]
A closeup of a sample patch:
###Markdown
Linear model. Your task is to train a linear classifier $\vec{x} \rightarrow y$ with SGD using TensorFlow. You will need to calculate a logit (a linear transformation) $z_k$ for each class: $$z_k = \vec{x} \cdot \vec{w_k} + b_k \quad k = 0..9$$ And transform logits $z_k$ to valid probabilities $p_k$ with softmax: $$p_k = \frac{e^{z_k}}{\sum_{i=0}^{9}{e^{z_i}}} \quad k = 0..9$$ We will use a cross-entropy loss to train our multi-class classifier: $$\text{cross-entropy}(y, p) = -\sum_{k=0}^{9}{\log(p_k)[y = k]}$$ where $$[x]=\begin{cases} 1, \quad \text{if $x$ is true} \\ 0, \quad \text{otherwise} \end{cases}$$ Cross-entropy minimization pushes $p_k$ close to 1 when $y = k$, which is what we want. Here's the plan: * Flatten the images (28x28 -> 784) with `X_train.reshape((X_train.shape[0], -1))` to simplify our linear model implementation. * Use a matrix placeholder for flattened `X_train`. * Convert `y_train` to one-hot encoded vectors that are needed for cross-entropy. * Use a shared variable `W` for all weights (a column $\vec{w_k}$ per class) and `b` for all biases. * Aim for ~0.93 validation accuracy.
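For a quick numerical sanity check, softmax and the cross-entropy of the true class can also be computed directly with NumPy on a made-up logit vector; the sketch below uses arbitrary numbers purely for illustration.
###Code
# Illustrative sketch with made-up logits: what the TensorFlow ops below should reproduce.
z = np.array([2.0, 1.0, 0.1, -1.0, 0.5, 0.0, -0.3, 1.2, -2.0, 0.7])  # fake logits for the 10 classes
p = np.exp(z) / np.sum(np.exp(z))        # softmax: valid probabilities that sum to 1
true_class = 3                           # pretend the true label is 3
cross_entropy = -np.log(p[true_class])   # the sum collapses to -log(p_k) for the true class
print(p.sum(), cross_entropy)
###Output
_____no_output_____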
###Code
X_train_flat = X_train.reshape((X_train.shape[0], -1))
print(X_train_flat.shape)
X_val_flat = X_val.reshape((X_val.shape[0], -1))
print(X_val_flat.shape)
import keras
y_train_oh = keras.utils.to_categorical(y_train, 10)
y_val_oh = keras.utils.to_categorical(y_val, 10)
print(y_train_oh.shape)
print(y_train_oh[:3], y_train[:3])
def reset_tf_session():
curr_session = tf.get_default_session()
# close current session
if curr_session is not None:
curr_session.close()
# reset graph
K.clear_session()
# create new session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
s = tf.InteractiveSession(config=config)
K.set_session(s)
return s
# run this again if you remake your graph
s = reset_tf_session()
# Model parameters: W and b
W = tf.get_variable("W",shape=(784,10)) ### tf.get_variable(...) with shape[0] = 784
b = tf.get_variable("b",shape=(10,)) ### tf.get_variable(...)
# Placeholders for the input data
input_X = tf.placeholder(tf.float32,shape=(None,784)) ### tf.placeholder(...) for flat X with shape[0] = None for any batch size
input_y = tf.placeholder(tf.float32, shape=(None, 10)) ### tf.placeholder(...) for one-hot encoded true labels (float, as required by softmax_cross_entropy_with_logits)
# Compute predictions
logits = input_X @ W + b ### logits for input_X (note the bias term), resulting shape should be [input_X.shape[0], 10]
probas = tf.nn.softmax(logits) ### apply tf.nn.softmax to logits
classes = tf.argmax(probas, axis=1) ### apply tf.argmax to find the class index with highest probability
# Loss should be a scalar number: average loss over all the objects with tf.reduce_mean().
# Use tf.nn.softmax_cross_entropy_with_logits on top of one-hot encoded input_y and logits.
# It is identical to calculating cross-entropy on top of probas, but is more numerically friendly (read the docs).
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=input_y,logits=logits))### YOUR CODE HERE ### cross-entropy loss
# Use a default tf.train.AdamOptimizer to get an SGD step
step = tf.train.AdamOptimizer().minimize(loss) ### optimizer step that minimizes the loss
#!/usr/bin/env python
# -*- coding: utf-8 -*-
def clear_and_display_figure(fig, sleep=0.01):
img_data = io.BytesIO()
fig.savefig(img_data, format='jpeg')
img_data.seek(0)
uri = 'data:image/jpeg;base64,' + urllib.request.quote(base64.b64encode(img_data.getbuffer()))
img_data.close()
clear_output(wait=True)
display_html(HTML('<img src="' + uri + '">'))
time.sleep(sleep)
class SimpleMovieWriter(object):
"""
Usage example:
anim = animation.FuncAnimation(...)
anim.save(None, writer=SimpleMovieWriter(sleep=0.01))
"""
def __init__(self, sleep=0.1):
self.sleep = sleep
def setup(self, fig):
self.fig = fig
def grab_frame(self, **kwargs):
clear_and_display_figure(self.fig, self.sleep)
@contextlib.contextmanager
def saving(self, fig, *args, **kwargs):
self.setup(fig)
try:
yield self
finally:
pass
class SimpleTrainingCurves(object):
def __init__(self, loss_name, metric_name):
self.fig, (self.ax1, self.ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
self.ax1.set_title(loss_name)
self.ax2.set_title(metric_name)
self.train_loss_curve, = self.ax1.plot([], [], 'r', label='train', lw=2)
self.valid_loss_curve, = self.ax1.plot([], [], 'g', label='valid', lw=2)
self.train_metric_curve, = self.ax2.plot([], [], 'r', label='train', lw=2)
self.valid_metric_curve, = self.ax2.plot([], [], 'g', label='valid', lw=2)
self.iter = 0
self.y_limits_1 = [None, None]
self.y_limits_2 = [None, None]
plt.close(self.fig)
def _update_y_limits(self, limits, *values):
limits[0] = min(list(values) + ([limits[0]] if limits[0] else []))
limits[1] = max(list(values) + ([limits[1]] if limits[1] else []))
def _update_curve(self, curve, value, label):
x, y = curve.get_data()
curve.set_data(list(x) + [self.iter], list(y) + [value])
curve.set_label("{}: {}".format(label, value))
def _set_y_limits(self, ax, limits):
spread = limits[1] - limits[0]
ax.set_ylim(limits[0] - 0.05*spread, limits[1] + 0.05*spread)
def add(self, train_loss, valid_loss, train_metric, valid_metric):
self._update_curve(self.train_loss_curve, train_loss, "train")
self._update_curve(self.valid_loss_curve, valid_loss, "valid")
self._update_curve(self.train_metric_curve, train_metric, "train")
self._update_curve(self.valid_metric_curve, valid_metric, "valid")
self.ax1.set_xlim(0, self.iter)
self.ax2.set_xlim(0, self.iter)
self._update_y_limits(self.y_limits_1, train_loss, valid_loss)
self._update_y_limits(self.y_limits_2, train_metric, valid_metric)
self._set_y_limits(self.ax1, self.y_limits_1)
self._set_y_limits(self.ax2, self.y_limits_2)
clear_and_display_figure(self.fig)
self.ax1.legend()
self.ax2.legend()
self.iter += 1
s.run(tf.global_variables_initializer())
BATCH_SIZE = 512
EPOCHS = 40
# for logging the progress right here in Jupyter (for those who don't have TensorBoard)
simpleTrainingCurves = SimpleTrainingCurves("cross-entropy", "accuracy")
for epoch in range(EPOCHS): # we finish an epoch when we've looked at all training samples
batch_losses = []
for batch_start in range(0, X_train_flat.shape[0], BATCH_SIZE): # data is already shuffled
_, batch_loss = s.run([step, loss], {input_X: X_train_flat[batch_start:batch_start+BATCH_SIZE],
input_y: y_train_oh[batch_start:batch_start+BATCH_SIZE]})
# collect batch losses, this is almost free as we need a forward pass for backprop anyway
batch_losses.append(batch_loss)
train_loss = np.mean(batch_losses)
val_loss = s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}) # this part is usually small
train_accuracy = accuracy_score(y_train, s.run(classes, {input_X: X_train_flat})) # this is slow and usually skipped
valid_accuracy = accuracy_score(y_val, s.run(classes, {input_X: X_val_flat}))
simpleTrainingCurves.add(train_loss, val_loss, train_accuracy, valid_accuracy)
###Output
_____no_output_____
###Markdown
Submit a linear model
###Code
## GRADED PART, DO NOT CHANGE!
# Testing shapes
grader.set_answer("9XaAS", grading_utils.get_tensors_shapes_string([W, b, input_X, input_y, logits, probas, classes]))
# Validation loss
grader.set_answer("vmogZ", s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}))
# Validation accuracy
grader.set_answer("RMv95", accuracy_score(y_val, s.run(classes, {input_X: X_val_flat})))
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
###Output
_____no_output_____
###Markdown
MLP with hidden layers. Previously we've coded a dense layer with matrix multiplication by hand. But this is not convenient: you have to create a lot of variables and your code becomes a mess. In TensorFlow there's an easier way to make a dense layer: ```python hidden1 = tf.layers.dense(inputs, 256, activation=tf.nn.sigmoid) ``` That will create all the necessary variables automatically. Here you can also choose an activation function (remember that we need it for a hidden layer!). Now define the MLP with 2 hidden layers and restart training with the cell above. You're aiming for ~0.97 validation accuracy here.
###Code
# write the code here to get a new `step` operation and then run the cell with training loop above.
# name your variables in the same way (e.g. logits, probas, classes, etc) for safety.
### YOUR CODE HERE ###
hidden1 = tf.layers.dense(input_X, 1024, activation=tf.nn.sigmoid)
hidden2 = tf.layers.dense(hidden1, 512, activation=tf.nn.sigmoid)  # second hidden layer (width chosen arbitrarily) so the MLP has 2 hidden layers as the task requires
logits = tf.layers.dense(hidden2, 10)
probas = tf.nn.softmax(logits) ### apply tf.nn.softmax to logits
classes = tf.argmax(probas, axis=1) ### apply tf.argmax to find the class index with highest probability
# Loss should be a scalar number: average loss over all the objects with tf.reduce_mean().
# Use tf.nn.softmax_cross_entropy_with_logits on top of one-hot encoded input_y and logits.
# It is identical to calculating cross-entropy on top of probas, but is more numerically friendly (read the docs).
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=input_y,logits=logits))### YOUR CODE HERE ### cross-entropy loss
# Use a default tf.train.AdamOptimizer to get an SGD step
step = tf.train.AdamOptimizer().minimize(loss) ### optimizer step that minimizes the loss
###Output
_____no_output_____
###Markdown
Submit the MLP with 2 hidden layersRun these cells after training the MLP with 2 hidden layers
###Code
## GRADED PART, DO NOT CHANGE!
# Validation loss for MLP
grader.set_answer("i8bgs", s.run(loss, {input_X: X_val_flat, input_y: y_val_oh}))
# Validation accuracy for MLP
grader.set_answer("rE763", accuracy_score(y_val, s.run(classes, {input_X: X_val_flat})))
# you can make submission with answers so far to check yourself at this stage
grader.submit(COURSERA_EMAIL, COURSERA_TOKEN)
###Output
_____no_output_____ |
MNIST Project.ipynb | ###Markdown
We will predict digits of the MNIST dataset using a KNN classifier. Let's import the necessary libraries.
###Code
from sklearn.datasets import fetch_openml
from sklearn.neighbors import KNeighborsClassifier
import matplotlib
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
# To display the plots inline in the notebook
%matplotlib inline
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
matplotlib.rc('axes', labelsize=14)
matplotlib.rc('xtick', labelsize=12)
matplotlib.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "MNIST"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
# custom function to save the figures we are going to generate
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# We set as_frame=False so that we can work with NumPy arrays. If we exclude this parameter we get a pandas DataFrame as the dataset.
dataset = fetch_openml('mnist_784', version=1, as_frame=False)
# Let's see what fields this dataset contains! If you are using the pandas DataFrame, use head() instead of keys()
dataset.keys()
###Output
_____no_output_____
###Markdown
Scikit-learn gives us this dataset already prepared: the samples and the labels/targets come separated, and the data is already shuffled!
###Code
# data = Our sample data class
# target = Our labels
X,Y = dataset["data"], dataset["target"]
X.shape
Y.shape
###Output
_____no_output_____
###Markdown
Our main aim is to detect the digits from the images, which makes this a classification problem in ML terms. We will break this work into 2 parts: 1) We will select one image and train our model to detect that digit correctly (this is binary classification). 2) We will use the whole dataset to train our model and predict all the digits. In both cases we will use the KNN algorithm.
###Code
# Let's choose one image from our sample set (index 5)
target_digit = X[5]
target_digit_img = target_digit.reshape(28,28) # We are reshaping this data so that it can be plotted as an image
plt.imshow(target_digit_img, cmap = matplotlib.cm.binary, interpolation="nearest")
plt.axis("off")
save_fig("target_digit_plot")
plt.show()
Y[5] # We can see that our labels are stored as strings! We will convert them to integers so that we can use them for calculations later
Y = Y.astype(np.uint8) # Converting strings to integers
Y[5]
# Scikit-learn provides this dataset with a conventional train/test split: the first 60000 samples are for training and the rest are for testing. You can change this ratio if you want, but that might hamper your model training.
X_train, X_test, y_train, y_test = X[:60000], X[60000:], Y[:60000], Y[60000:]
y_train_2 = (y_train == 2) # We are taking our target label for both train and test set for now
y_test_2 = (y_test == 2)
knn_classifier = KNeighborsClassifier(n_neighbors=3)
knn_classifier.fit(X_train, y_train_2)
knn_classifier.predict([target_digit])
# We can see that our model could predict correctly! But we will run some evaluation tests for assurance
# We now run 3-fold cross-validation to estimate the accuracy of our model
cross_val_score(knn_classifier, X_train, y_train_2, cv=3, scoring="accuracy")
# The accuracy looks very high, but a single accuracy number can be misleading (for example, the model might be overfitting). Let's check the confusion matrix to inspect the predictions more precisely.
y_train_prediction = cross_val_predict(knn_classifier, X_train, y_train_2, cv=3)
confusion_matrix(y_train_2,y_train_prediction)
precision_score(y_train_2,y_train_prediction)
recall_score(y_train_2,y_train_prediction)
f1_score(y_train_2,y_train_prediction)
# The confusion matrix, precision, recall and F1 scores all look good, so the model handles this particular digit well. For further investigation, we will draw the ROC curve to see the relation between the true positive rate and the false positive rate.
y_knn_predictions = cross_val_predict(knn_classifier, X_train, y_train_2, cv=3,
method="predict_proba")
# We used method="predict_proba" to get predicted class probabilities for the training set; these scores are required for drawing the ROC curve.
y_knn_predictions.shape
# The result is a 2D array with one column per class, but we need a 1D array of scores for the curve.
y_scores = y_knn_predictions[:, 1] # keep only the positive-class probabilities (column index 1)
fpr, tpr, thresholds = roc_curve(y_train_2, y_scores)
# Let's define a helper function for plotting a nice-looking ROC curve
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--') # dashed diagonal
plt.axis([0, 1, 0, 1]) # Not shown in the book
plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16) # Not shown
plt.ylabel('True Positive Rate (Recall)', fontsize=16) # Not shown
plt.grid(True) # Not shown
plt.figure(figsize=(8, 6)) # Not shown
plot_roc_curve(fpr, tpr)
precisions, recalls, pr_thresholds = precision_recall_curve(y_train_2, y_scores)  # recall_90_precision was not defined above, so compute it here
recall_90_precision = recalls[np.argmax(precisions >= 0.90)]  # recall achieved at 90% precision (threshold assumed from the variable name)
fpr_90 = fpr[np.argmax(tpr >= recall_90_precision)]
plt.plot([fpr_90, fpr_90], [0., recall_90_precision], "r:") # Not shown
plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], "r:") # Not shown
plt.plot([fpr_90], [recall_90_precision], "ro") # Not shown
save_fig("roc_curve_plot") # Not shown
plt.show()
# So far this looks good. Since we are using a KNN classifier, this plot is not essential for training itself; the ROC curve is mainly used to pick a decision threshold for a binary classifier, or to compare two models by plotting their curves and comparing the AUC.
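# Added sketch (assuming a single AUC number is an acceptable summary here):
# roc_auc_score condenses the ROC curve into one value, handy for comparing models.
from sklearn.metrics import roc_auc_score
print("AUC:", roc_auc_score(y_train_2, y_scores))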
###Output
Saving figure roc_curve_plot
###Markdown
Now we will use the first 1000 training images to train our model and predict the digits
###Code
knn_classifier.fit(X_train[:1000], y_train[:1000])
knn_classifier.predict([target_digit])
# Our model predicts correctly! But still we will go for cross validation just like we did before.
target_digit_scores = knn_classifier.predict_proba([target_digit]) # We are taking the prediction scores for our target digit
target_digit_scores
cross_val_score(knn_classifier, X_train, y_train, cv=3, scoring="accuracy")
# We can see that our predictions are about 96% accurate on average!
# We will now scale the features so they are on a comparable range; this matters for a distance-based model like KNN and helps avoid fitting artifacts
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(knn_classifier, X_train_scaled, y_train, cv=3, scoring="accuracy")
# We can see that our prediction accuracy has decreased to 93% now. So we will check the confusion matrix
y_train_pred = cross_val_predict(knn_classifier, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
# This is a big array! Let's just plot a graph for better visualization
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
# Great! Our prediction looks good.
# To increase the accuracy we will now tune the hyperparameters of KNN. We will use grid search to determine the best parameters of KNN for this model.
param_grid = [{'weights': ["uniform", "distance"], 'n_neighbors': [3, 4, 5]}]
grid_search = GridSearchCV(knn_classifier, param_grid, cv=5, verbose=3)
grid_search.fit(X_train, y_train)
grid_search.best_params_
y_pred = grid_search.predict(X_test)
accuracy_score(y_test, y_pred)
# Great! Now we have an accuracy of about 97%, with scaled data and solid precision and recall scores. But as machine learning engineers, automation always suits us, right?
# To give our work a tidy finish, we will now automate this project as a single pipeline function.
def mnist_knn_pipeline( dataset_sample, dataset_label, target_digit_pos, cross_valid = False):
X,Y = dataset_sample, dataset_label
target_digit = X[target_digit_pos]
target_digit_img = target_digit.reshape(28,28)
plt.imshow(target_digit_img, cmap = matplotlib.cm.binary, interpolation="nearest")
plt.axis("off")
save_fig("target_digit_plot")
plt.show()
X_train, X_test, y_train, y_test = X[:60000], X[60000:], Y[:60000], Y[60000:]
knn_classifier = KNeighborsClassifier(n_neighbors=3)
knn_classifier.fit(X_train[:1000], y_train[:1000])
if (cross_valid == True):
cross_valid_score = cross_val_score(knn_classifier, X_train, y_train, cv=3, scoring="accuracy")
for c in cross_valid_score:
            if (c > 0.90):  # cross_val_score returns fractions in [0, 1], not percentages
                scaler = StandardScaler()
                X_train = scaler.fit_transform(X_train.astype(np.float64))
                cross_valid_score = cross_val_score(knn_classifier, X_train, y_train, cv=3, scoring="accuracy")
break
y_train_pred = cross_val_predict(knn_classifier, X_train, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
print("Confusion Matrix Plot")
plt.show()
param_grid = [{'weights': ["uniform", "distance"], 'n_neighbors': [3, 4, 5]}]
grid_search = GridSearchCV(knn_classifier, param_grid, cv=5, verbose=3)
grid_search.fit(X_train, y_train)
grid_search_best_params = grid_search.best_params_
y_pred = grid_search.predict(X_test)
target_predict = grid_search.predict([target_digit])
acc_score = accuracy_score(y_test, y_pred)
result = {"Predicted Digit: ": target_predict, "Accuracy: ": acc_score, "Cross Validation Score: ": cross_valid_score}
return result
Result = mnist_knn_pipeline(dataset["data"], dataset["target"], 5, cross_valid = True)
Result
###Output
Saving figure target_digit_plot
|
Learning_Tensorflow/Advanced_Tensorflow/Core_tf/distributed_training.ipynb | ###Markdown
Distributed training with TensorFlow tf.distribute.Strategy is a TensorFlow API to distribute training across multiple GPUs, multiple machines or TPUs. Using this API, you can distribute your existing models and training code with minimal code changes.tf.distribute.Strategy has been designed with these key goals in mind: Easy to use and support multiple user segments, including researchers, ML engineers, etc. Provide good performance out of the box. Easy switching between strategies.tf.distribute.Strategy can be used with a high-level API like Keras, and can also be used to distribute custom training loops (and, in general, any computation using TensorFlow).In TensorFlow 2.0, you can execute your programs eagerly, or in a graph using tf.function. tf.distribute.Strategy intends to support both these modes of execution. Although we discuss training most of the time in this guide, this API can also be used for distributing evaluation and prediction on different platforms.You can use tf.distribute.Strategy with very few changes to your code, because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
###Code
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import os
###Output
_____no_output_____
###Markdown
GPU Strategies tf.distribute.Strategy intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are:- Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, and aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously. Typically sync training is supported via all-reduce and async through parameter server architecture.- Hardware platform: You may want to scale your training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs.In order to support these use cases, there are six strategies available. Mirrored Strategy tf.distribute.MirroredStrategy supports synchronous distributed training on multiple GPUs on one machine. It creates one replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called MirroredVariable. These variables are kept in sync with each other by applying identical updates. Efficient all-reduce algorithms are used to communicate the variable updates across the devices. All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device. It’s a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation
###Code
mirrored_strategy = tf.distribute.MirroredStrategy()
###Output
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
###Markdown
This will create a MirroredStrategy instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.If you wish to use only some of the GPUs on your machine, you can do so like this:
###Code
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"])
###Output
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
###Markdown
If you wish to override the cross device communication, you can do so using the cross_device_ops argument by supplying an instance of tf.distribute.CrossDeviceOps. Currently, tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are two options other than tf.distribute.NcclAllReduce which is the default.
###Code
mirrored_strategy = tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
###Output
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
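###Markdown
A minimal usage sketch, assuming a small Keras model with arbitrary layer sizes and input shape: whichever strategy object you pick is normally consumed by creating and compiling the model inside `strategy.scope()`, so that its variables are created as distributed (mirrored) variables.
###Code
# Sketch only: build and compile a small Keras model under the strategy scope.
with mirrored_strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1)
    ])
    model.compile(loss="mse", optimizer="sgd")
###Output
_____no_output_____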
###Markdown
Central Storage Strategy tf.distribute.experimental.CentralStorageStrategy does synchronous training as well. Variables are not mirrored, instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU.
###Code
central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy()
###Output
INFO:tensorflow:ParameterServerStrategy (CentralStorageStrategy if you are using a single machine) with compute_devices = ('/device:GPU:0',), variable_device = '/device:GPU:0'
###Markdown
This will create a CentralStorageStrategy instance which will use all visible GPUs and CPU. Update to variables on replicas will be aggregated before being applied to variables. MultiWorkerMirroredStrategy tf.distribute.experimental.MultiWorkerMirroredStrategy is very similar to MirroredStrategy. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to MirroredStrategy, it creates copies of all variables in the model on each device across all workers.It uses CollectiveOps as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes.It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, you will be able to plugin algorithms that are better tuned for your hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather.
###Code
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
###Output
WARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled.
INFO:tensorflow:Using MirroredStrategy with devices ('/device:GPU:0',)
INFO:tensorflow:Single-worker MultiWorkerMirroredStrategy with local_devices = ('/device:GPU:0',), communication = CollectiveCommunication.AUTO
###Markdown
MultiWorkerMirroredStrategy currently allows you to choose between two different implementations of collective ops. CollectiveCommunication.RING implements ring-based collectives using gRPC as the communication layer. CollectiveCommunication.NCCL uses Nvidia's NCCL to implement collectives. CollectiveCommunication.AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
###Code
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(tf.distribute.experimental.CollectiveCommunication.NCCL)
###Output
WARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled.
INFO:tensorflow:Using MirroredStrategy with devices ('/device:GPU:0',)
INFO:tensorflow:Single-worker MultiWorkerMirroredStrategy with local_devices = ('/device:GPU:0',), communication = CollectiveCommunication.NCCL
###Markdown
TPUStrategy tf.distribute.experimental.TPUStrategy lets you run your TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. In terms of distributed training architecture, TPUStrategy is the same as MirroredStrategy - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in TPUStrategy.
###Code
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
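# After initialization, the strategy itself can be created from the resolver.
# (tf.distribute.experimental.TPUStrategy is the name used in TF 2.0/2.1;
# later releases expose it as tf.distribute.TPUStrategy.)
tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)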
###Output
WARNING:tensorflow:TPU system 10.103.142.26:8470 has already been initialized. Reinitializing the TPU can cause previously created variables on TPU to be lost.
###Markdown
The TPUClusterResolver instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it.If you want to use this for Cloud TPUs:- You must specify the name of your TPU resource in the tpu argument.- You must initialize the tpu system explicitly at the start of the program. This is required before TPUs can be used for computation. Initializing the tpu system also wipes out the TPU memory, so it's important to complete this step first in order to avoid losing state.
###Code
# Alternatively, if the above does not work (e.g. on Colab), build the resolver
# from the COLAB_TPU_ADDR environment variable explicitly.
import os
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver('grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
# Then build the model under `with strategy.scope():` and continue as usual;
# the computation is parallelized across the TPU cores automatically.
###Output
_____no_output_____
###Markdown
ParameterServerStrategy tf.distribute.experimental.ParameterServerStrategy supports parameter servers training on multiple machines. In this setup, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers.
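The worker/parameter-server split is also communicated through `TF_CONFIG`; a minimal sketch of such a cluster spec (hosts, ports, and the task index are placeholders, not from this guide):
###Code
# Hypothetical cluster with one chief, two workers and one parameter server;
# each process sets its own "task" entry accordingly.
import json, os
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "chief": ["chief.example.com:2222"],
        "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
        "ps": ["ps0.example.com:2222"],
    },
    "task": {"type": "worker", "index": 1}
})
###Output
_____no_output_____
###Markdown
The strategy object itself is then constructed as below: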
###Code
ps_strategy = tf.distribute.experimental.ParameterServerStrategy()
###Output
_____no_output_____
###Markdown
One Device Strategy tf.distribute.OneDeviceStrategy runs on a single device. This strategy will place any variables created in its scope on the specified device. Input distributed through this strategy will be prefetched to the specified device. Moreover, any functions called via strategy.run will also be placed on the specified device. You can use this strategy to test your code before switching to other strategies which actually distribute the work to multiple devices/machines.
###Code
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
###Output
_____no_output_____
###Markdown
Using with tf.Keras We've integrated tf.distribute.Strategy into tf.keras which is TensorFlow's implementation of the Keras API specification. tf.keras is a high-level API to build and train models. By integrating into tf.keras backend, we've made it seamless for you to distribute your training written in the Keras training framework.Here's what you need to change in your code:- Create an instance of the appropriate tf.distribute.Strategy- Move the creation and compiling of Keras model inside strategy.scope.We support all types of Keras models - sequential, functional and subclassed.
###Code
mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1, ))])
model.compile(loss='mse', optimizer='sgd')
###Output
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
###Markdown
In this example we used MirroredStrategy, so we can run this on a machine with multiple GPUs. strategy.scope() indicates which parts of the code to run distributed. Creating a model inside this scope creates mirrored variables instead of regular variables. Compiling under the scope lets TensorFlow know that the user intends to train this model with this strategy. Once this is set up, you can fit your model like you would normally. MirroredStrategy takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.
###Code
dataset = tf.data.Dataset.from_tensor_slices(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
import numpy as np
inputs, targets = np.ones((100, 1)), np.ones((100, 1))
history = model.fit(inputs, targets, epochs=2, batch_size=10)
###Output
Train on 100 samples
Epoch 1/2
100/100 [==============================] - 1s 12ms/sample - loss: 0.9645
Epoch 2/2
100/100 [==============================] - 0s 249us/sample - loss: 0.4263
###Markdown
In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using MirroredStrategy with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model. You can use strategy.num_replicas_in_sync to get the number of replicas.
###Code
BATCH_SIZE_PER_REPLICA = 5
global_batch_size = (BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync)
dataset = tf.data.Dataset.from_tensor_slices(([1.], [1.])).repeat(100)
dataset = dataset.batch(global_batch_size)
LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15}
learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size]
###Output
_____no_output_____
###Markdown
Support Currently, in the TF 2.0 release, MirroredStrategy, TPUStrategy, CentralStorageStrategy and MultiWorkerMirroredStrategy are supported in Keras. Apart from MirroredStrategy, the others are currently experimental and subject to change. Support for other strategies is coming soon. The API and usage are exactly the same as above. Simple Example
###Code
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os
datasets, info = tfds.load(name="mnist", with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
###Output
_____no_output_____
###Markdown
Create a MirroredStrategy object. This will handle distribution, and provides a context manager (tf.distribute.MirroredStrategy.scope) to build your model inside.
###Code
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
###Output
Number of devices: 1
###Markdown
When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly.
###Code
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
###Output
_____no_output_____
###Markdown
Apply this function to the training and test data, shuffle the training data, and batch it for training. Notice we are also keeping an in-memory cache of the training data to improve performance.
###Code
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Create and compile the Keras model in the context of strategy.scope.
###Code
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=tf.keras.optimizers.Adam(), metrics=['acc'])
###Output
_____no_output_____
###Markdown
The callbacks used here are:- TensorBoard: This callback writes a log for TensorBoard which allows you to visualize the graphs.- Model Checkpoint: This callback saves the model after every epoch.- Learning Rate Scheduler: Using this callback, you can schedule the learning rate to change after every epoch/batch.For illustrative purposes, add a print callback to display the learning rate in the notebook.
###Code
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >=3 and epoch <= 7:
return 1e-4
else:
return 1e-5
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('\nLearning rate for epoch {} is {}'.format(epoch + 1, model.optimizer.lr.numpy()))
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
###Output
_____no_output_____
###Markdown
Now, train the model in the usual way, calling fit on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.
###Code
model.fit(train_dataset, epochs=12, callbacks=callbacks)
###Output
_____no_output_____
###Markdown
Custom Training This tutorial demonstrates how to use tf.distribute.Strategy with custom training loops. We will train a simple CNN model on the fashion MNIST dataset. The fashion MNIST dataset contains 60000 train images of size 28 x 28 and 10000 test images of size 28 x 28. We are using custom training loops to train our model because they give us flexibility and greater control over training. Moreover, it is easier to debug the model and the training loop.
###Code
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = tf.expand_dims(train_images, axis=-1)
test_images = tf.expand_dims(test_images, axis=-1)
print(train_images.shape, test_images.shape)
train_images = train_images / np.float32(255)
test_images = test_images / np.float32(255)
###Output
_____no_output_____
###Markdown
Create a strategy to distribute the variables and the graph. How does the tf.distribute.MirroredStrategy strategy work? All the variables and the model graph are replicated on the replicas. Input is evenly distributed across the replicas. Each replica calculates the loss and gradients for the input it received. The gradients are synced across all the replicas by summing them. After the sync, the same update is made to the copies of the variables on each replica.
###Code
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
###Output
Number of devices: 1
###Markdown
Setup input pipeline You can also export the graph and the variables to the platform-agnostic SavedModel format; after your model is saved, you can load it back with or without the strategy scope (a short sketch of this is given at the end of this notebook).
###Code
BUFFER_SIZE = len(train_images)
BATCH_SIZE_PER_REPLICA = 64
GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
EPOCHS = 10
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)
train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)
def create_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(64, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
return model
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
###Output
_____no_output_____
###Markdown
Define the loss function Normally, on a single machine with 1 GPU/CPU, loss is divided by the number of examples in the batch of input.So, how should the loss be calculated when using a tf.distribute.Strategy?- For an example, let's say you have 4 GPU's and a batch size of 64. One batch of input is distributed across the replicas (4 GPUs), each replica getting an input of size 16.- The model on each replica does a forward pass with its respective input and calculates the loss. Now, instead of dividing the loss by the number of examples in its respective input (BATCH_SIZE_PER_REPLICA = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64). Why do this?- This needs to be done because after the gradients are calculated on each replica, they are synced across the replicas by summing them. How to do this in TensorFlow?- If you're writing a custom training loop, as in this tutorial, you should sum the per example losses and divide the sum by the GLOBAL_BATCH_SIZE: scale_loss = tf.reduce_sum(loss) * (1. / GLOBAL_BATCH_SIZE) or you can use tf.nn.compute_average_loss which takes the per example loss, optional sample weights, and GLOBAL_BATCH_SIZE as arguments and returns the scaled loss.- If you are using regularization losses in your model then you need to scale the loss value by number of replicas. You can do this by using the tf.nn.scale_regularization_loss function. - Using tf.reduce_mean is not recommended. Doing so divides the loss by actual per replica batch size which may vary step to step. - This reduction and scaling is done automatically in keras model.compile and model.fit - If using tf.keras.losses classes (as in the example below), the loss reduction needs to be explicitly specified to be one of NONE or SUM. AUTO and SUM_OVER_BATCH_SIZE are disallowed when used with tf.distribute.Strategy. AUTO is disallowed because the user should explicitly think about what reduction they want to make sure it is correct in the distributed case. SUM_OVER_BATCH_SIZE is disallowed because currently it would only divide by per replica batch size, and leave the dividing by number of replicas to the user, which might be easy to miss. So instead we ask the user do the reduction themselves explicitly.
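For the regularization case mentioned above, a minimal sketch of how the pieces would combine (this assumes a model whose layers define regularization losses; it is not used elsewhere in this tutorial, so it is left as comments):
###Code
# Illustrative only: average the per-example loss over the global batch and
# add the regularization losses scaled by the number of replicas.
# def compute_total_loss(labels, predictions):
#     per_example_loss = loss_object(labels, predictions)
#     loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
#     loss += tf.nn.scale_regularization_loss(tf.add_n(model.losses))
#     return loss
###Output
_____no_output_____
###Markdown
The loss actually used in this tutorial only needs the average-loss part: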
###Code
with strategy.scope():
# Set reduction to `none` so we can do the reduction afterwards and divide by
# global batch size.
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction=tf.keras.losses.Reduction.NONE)
def compute_loss(labels, predictions):
per_example_loss = loss_object(labels, predictions)
return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Define the metrics to track loss and accuracyThese metrics track the test loss and training and test accuracy. You can use .result() to get the accumulated statistics at any time.
###Code
with strategy.scope():
test_loss = tf.keras.metrics.Mean(name="test_loss")
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name="train_accuracy")
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name="test_accuracy")
###Output
_____no_output_____
###Markdown
Training Loop
###Code
with strategy.scope():
model = create_model()
optimizer = tf.keras.optimizers.Adam()
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
with strategy.scope():
def train_step(inputs):
images, labels = inputs
with tf.GradientTape() as tape:
predictions = model(images, training=True)
loss = compute_loss(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_accuracy.update_state(labels, predictions)
return loss
def test_step(inputs):
images, labels = inputs
predictions = model(images, training=False)
t_loss = loss_object(labels, predictions)
test_loss.update_state(t_loss)
test_accuracy.update_state(labels, predictions)
with strategy.scope():
# `run` replicates the provided computation and runs it
# with the distributed input.
@tf.function
def distributed_train_step(dataset_inputs):
per_replica_losses = strategy.experimental_run_v2(train_step, args=(dataset_inputs,))
return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,
axis=None)
@tf.function
def distributed_test_step(dataset_inputs):
return strategy.experimental_run_v2(test_step, args=(dataset_inputs,))
for epoch in range(EPOCHS):
# TRAIN LOOP
total_loss = 0.0
num_batches = 0
for x in train_dist_dataset:
total_loss += distributed_train_step(x)
num_batches += 1
train_loss = total_loss / num_batches
# TEST LOOP
for x in test_dist_dataset:
distributed_test_step(x)
if epoch % 2 == 0:
checkpoint.save(checkpoint_prefix)
template = ("Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, "
"Test Accuracy: {}")
print (template.format(epoch+1, train_loss,
train_accuracy.result()*100, test_loss.result(),
test_accuracy.result()*100))
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
###Output
_____no_output_____ |
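###Markdown
To round off the earlier remark about exporting, the trained model can be saved to the platform-agnostic SavedModel format and loaded back with or without the strategy scope. A minimal sketch (the path is a placeholder and this cell is not part of the recorded run, so it is left as comments):
###Code
# saved_model_path = "./fashion_mnist_saved_model"  # hypothetical location
# model.save(saved_model_path)
# with strategy.scope():
#     restored_model = tf.keras.models.load_model(saved_model_path)
###Output
_____no_output_____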
caos_2019-2020/sem24-http-libcurl-cmake/http_libcurl_cmake.ipynb | ###Markdown
HTTP, libcurl, cmake Video from the seminar → <img src="video.jpg" width="320" height="160" align="left" alt="Video from the seminar"> HTTP[HTTP (HyperText Transfer Protocol)](https://ru.wikipedia.org/wiki/HTTP) is an application/transport-level protocol for transferring data. It was originally created as an application-level protocol for transferring documents in HTML format (tags and all that), but it later caught on and is now also used for transferring arbitrary data, which is characteristic of the transport level. Sending an HTTP request: * from the terminal * with netcat or telnet at the TCP level, forming the HTTP request by hand * with curl at the HTTP level * from Python at the HTTP level * from a C program at the HTTP level * more varied uses of HTTP HTTP 1.1 and HTTP/2 In the seminar we will look at HTTP 1.1, but it is worth knowing that the current version of the protocol is considerably more efficient. [How HTTP/2 will make the web faster / Habr](https://habr.com/ru/company/nix/blog/304518/)

| HTTP 1.1 | HTTP/2 |
|----------|--------|
| one request per connection; as a consequence, forced concatenation, inlining and spriting of data | several requests per connection |
| all required headers are sent in full every time | header compression, so the same headers do not have to be resent every time |
| | the server can push data on its own initiative |
| text protocol | binary protocol |
| | stream prioritization: the client can tell the server what matters more to it |

[Yakovlev's reading](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/http-curl) libcurl A library that can do everything the curl utility can. cmake Solves the problem of cross-platform builds * a front end for the systems that actually perform the build * cmake is well integrated with many IDEs * CMakeLists.txt in the root of the source tree is the main configuration file and the main sign that the project is built with cmake. Examples: * a simple example * an example with libcurl [Introduction to CMake / Habr](https://habr.com/ru/post/155467/) [Yakovlev's reading](https://github.com/victor-yacovlev/mipt-diht-caos/blob/master/practice/linux_basics/cmake.md) [libcurl documentation](https://curl.haxx.se/libcurl/c/) Comments on the homework HTTP from the terminal At the TCP level
###Code
%%bash
# make request string
VAR=$(cat <<HEREDOC_END
GET / HTTP/1.1
Host: ejudge.atp-fivt.org
HEREDOC_END
)
# When working in a terminal, just run "nc ejudge.atp-fivt.org 80" and type the request
# ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓ - simulates typing into stdin. "-q1" keeps netcat from closing right after stdin closes
echo -e "$VAR\n" | nc -q1 ejudge.atp-fivt.org 80 | head -n 14
# ↑↑↑↑↑↑↑↑↑↑↑↑ - keep only the beginning of the output so we are not flooded with it
# You can also use telnet: "telnet ejudge.atp-fivt.org 80"
import time
a = TInteractiveLauncher("telnet ejudge.atp-fivt.org 80 | head -n 10")
a.write("""\
GET / HTTP/1.1
Host: ejudge.atp-fivt.org
""")
time.sleep(1)
a.close()
%%bash
VAR=$(cat <<HEREDOC_END
USER [email protected]
HEREDOC_END
)
# an attempt to fetch mail over the POP3 protocol (it will fail: you would have to bother with encryption)
echo -e "$VAR\n" | nc -q1 pop.yandex.ru 110
###Output
+OK POP Ya! na@8-b74d0a35481b xRKan3s0qW21
-ERR [AUTH] Working without SSL/TLS encryption is not allowed. Please visit https://yandex.ru/support/mail-new/mail-clients/ssl.html sc=xRKan3s0qW21_180928_8-b74d0a35481b
###Markdown
Directly at the HTTP level: curl lets you make arbitrary HTTP requests; wget is primarily meant for downloading files and, for example, can download a page recursively (see the sketch after the output below).
###Code
%%bash
curl ejudge.atp-fivt.org | head -n 10
%%bash
wget ejudge.atp-fivt.org -O - | head -n 10
###Output
<html>
<head>
<meta charset="utf-8"/>
<title>АКОС ФИВТ МФТИ</title>
</head>
<body>
<h1>Ejudge для АКОС на ФИВТ МФТИ</h1>
<h2>Весенний семестр</h2>
<h3>Группы ПМФ</h3>
<p><b>!!!!!!!!!!</b> <a href="/client?contest_id=19">Контрольная 15 мая 2019</a><b>!!!!!!!!!</b></p>
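###Markdown
The recursive download mentioned above looks roughly like this (a sketch, left commented out so that running the notebook does not actually crawl the site):
###Code
# Recursive download, one level deep, without ascending to the parent directory
# (wget flags: -r recursive, -l 1 depth limit, -np no-parent, -P output directory).
# !wget -r -l 1 -np -P ./mirror ejudge.atp-fivt.org
###Output
_____no_output_____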
###Markdown
HTTP from Python
###Code
import requests
data = requests.get("http://ejudge.atp-fivt.org").content.decode()
print(data[:200])
###Output
<html>
<head>
<meta charset="utf-8"/>
<title>АКОС ФИВТ МФТИ</title>
</head>
<body>
<h1>Ejudge для АКОС на ФИВТ МФТИ</h1>
<h2>Весенний семестр</h2>
<h3>Группы ПМФ</h3>
<p>
###Markdown
HTTP from C. Install with `sudo apt install libcurl4-openssl-dev`. Example from Yakovlev
###Code
%%cpp curl_easy.c
%run gcc -Wall curl_easy.c -lcurl -o curl_easy.exe
%run ./curl_easy.exe | head -n 5
#include <curl/curl.h>
#include <assert.h>
int main() {
CURL *curl = curl_easy_init();
assert(curl);
CURLcode res;
curl_easy_setopt(curl, CURLOPT_URL, "http://ejudge.atp-fivt.org");
res = curl_easy_perform(curl);
curl_easy_cleanup(curl);
assert(res == 0);
return 0;
}
###Output
_____no_output_____
###Markdown
Let's try out HTTP in more varied ways. Installation: https://install.advancedrestclient.com/ is a small program for conveniently sending all kinds of HTTP requests; `pip3 install --user wsgidav cheroot` installs a WebDAV server.
###Code
!mkdir webdav_dir 2>&1 | grep -v "File exists" || true
!echo "Hello!" > webdav_dir/file.txt
a = TInteractiveLauncher("wsgidav --port=9024 --root=./webdav_dir --auth=anonymous --host=0.0.0.0")
!curl localhost:9024 | head -n 4
!curl -X "PUT" localhost:9024/curl_added_file.txt --data-binary @curl_easy.c
!ls webdav_dir
!cat webdav_dir/curl_added_file.txt | grep main -C 2
!curl -X "DELETE" localhost:9024/curl_added_file.txt
!ls webdav_dir
os.kill(a.get_pid(), signal.SIGINT)
a.close()
###Output
_____no_output_____
###Markdown
libcurl Installation: `sudo apt-get install libcurl4-openssl-dev` (but that is not certain: these are year-old recollections, please let me know whether it works for you). Documentation: https://curl.haxx.se/libcurl/c/CURLOPT_WRITEFUNCTION.html Interesting fact: the chunk size is always 1. A modified example from Yakovlev
###Code
%%cpp curl_medium.c
%run gcc -Wall curl_medium.c -lcurl -o curl_medium.exe
%run ./curl_medium.exe "http://ejudge.atp-fivt.org" | head -n 5
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <assert.h>
#include <curl/curl.h>
typedef struct {
char *data;
size_t length;
size_t capacity;
} buffer_t;
static size_t callback_function(
    char *ptr, // buffer with the data that has been read
    size_t chunk_size, // size of one data chunk; always equals 1
    size_t nmemb, // number of data chunks
    void *user_data // arbitrary user data
) {
buffer_t *buffer = user_data;
size_t total_size = chunk_size * nmemb;
size_t required_capacity = buffer->length + total_size;
if (required_capacity > buffer->capacity) {
required_capacity *= 2;
buffer->data = realloc(buffer->data, required_capacity);
assert(buffer->data);
buffer->capacity = required_capacity;
}
memcpy(buffer->data + buffer->length, ptr, total_size);
buffer->length += total_size;
return total_size;
}
int main(int argc, char *argv[]) {
assert(argc == 2);
const char* url = argv[1];
CURL *curl = curl_easy_init();
assert(curl);
CURLcode res;
    // register the write callback function
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, callback_function);
    // the &buffer pointer will be passed to the callback function
    // as its void *user_data parameter
buffer_t buffer = {.data = NULL, .length = 0, .capacity = 0};
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);
curl_easy_setopt(curl, CURLOPT_URL, url);
res = curl_easy_perform(curl);
assert(res == 0);
write(STDOUT_FILENO, buffer.data, buffer.length);
free(buffer.data);
curl_easy_cleanup(curl);
}
###Output
_____no_output_____
###Markdown
cmake Installation: `apt-get install cmake cmake-extras`. A simple example. Source: [Introduction to CMake / Habr](https://habr.com/ru/post/155467/), where you can find many more interesting examples.
###Code
!mkdir simple_cmake_example 2>&1 | grep -v "File exists" || true
%%cmake simple_cmake_example/CMakeLists.txt
cmake_minimum_required(VERSION 2.8) # Check the CMake version.
                                    # If the installed version is older
                                    # than the one specified, abort with an error.
add_executable(main main.cpp) # Create an executable named main
                              # from the source file main.cpp
%%cpp simple_cmake_example/main.cpp
%run mkdir simple_cmake_example/build #// create a directory for the build files
%# // go into it and call cmake so that it generates the proper Makefile,
%# // then call make, which builds everything according to that Makefile
%run cd simple_cmake_example/build && cmake .. && make
%run simple_cmake_example/build/main #// run the built binary
%run ls -la simple_cmake_example #// see what is now in the main directory
%run ls -la simple_cmake_example/build #// ... and in the build directory
%run rm -r simple_cmake_example/build #// remove the directory with the build files
#include <iostream>
int main(int argc, char** argv)
{
std::cout << "Hello, World!" << std::endl;
return 0;
}
###Output
_____no_output_____
###Markdown
An example with libcurl
###Code
!mkdir curl_cmake_example || true
!cp curl_medium.c curl_cmake_example/main.c
%%cmake curl_cmake_example/CMakeLists.txt
%run mkdir curl_cmake_example/build
%run cd curl_cmake_example/build && cmake .. && make
%run curl_cmake_example/build/main "http://ejudge.atp-fivt.org" | head -n 5 #// run the built binary
%run rm -r curl_cmake_example/build
cmake_minimum_required(VERSION 2.8)
set(CMAKE_C_FLAGS "-std=gnu11") # extra C compiler options
# find the CURL library; the REQUIRED option means that
# the library is mandatory for building the project,
# and if the necessary files are not found, cmake
# terminates with an error
find_package(CURL REQUIRED)
# this library is not needed in the project; it is just an example of how to handle the case when a library is not found
find_package(SDL)
if(NOT SDL_FOUND)
message(">>>>> Failed to find SDL (not a problem)")
else()
message(">>>>> Managed to find SDL, can add include directories, add target libraries")
endif()
# this library is not needed in the project either; it is just an example of how to use the pkg-config integration module
find_package(PkgConfig REQUIRED)
# and FUSE, also unnecessary for this project, located via pkg-config
pkg_check_modules(
    FUSE # prefix for the names of the output variables
    # REQUIRED # can optionally be added to make the dependency required
    fuse3 # library name; a file fuse3.pc must exist
)
if(NOT FUSE_FOUND)
message(">>>>> Failed to find FUSE (not a problem)")
else()
message(">>>>> Managed to find FUSE, can add include directories, add target libraries")
endif()
# add a target that builds an executable from the listed source files
add_executable(main main.c)
# for the target main, add the list of include directories, which
# will be turned into -I compiler options for every directory
# listed in the CURL_INCLUDE_DIRECTORIES variable
target_include_directories(main PUBLIC ${CURL_INCLUDE_DIRECTORIES})
# include_directories(${CURL_INCLUDE_DIRECTORIES}) # this would work as well
# for the target, specify the libraries the program will be
# linked against (they end up as -l and -L options)
target_link_libraries(main ${CURL_LIBRARIES})
###Output
_____no_output_____ |
Prace_domowe/Praca_domowa5/Grupa3/WisniewskiJacek/pd5.ipynb | ###Markdown
Homework no. 5 Jacek Wiśniewski Introduction: In this work I will deal with the clustering problem. I will test 2 clustering methods and 2 ways of choosing the number of clusters.
###Code
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import silhouette_samples, silhouette_score
import pandas as pd
import os
import matplotlib.pyplot as plt
data = pd.read_csv("..\..\clustering.csv", header = None)
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], alpha = 0.7)
plt.title("Points from clustering.csv")
plt.show()
###Output
_____no_output_____
###Markdown
KmeansPierwszą metodą będzie metoda k najbliższych średnich. Do wyboru liczby klastrów wykorzystam wykres inercji oraz zasadę "łokcia".
###Code
inertias = []
centroids = []
labels = []
for k in range(1, 21):
model = KMeans(n_clusters = k)
model.fit(data)
inertias.append(model.inertia_)
centroids.append(model.cluster_centers_)
labels.append(model.predict(data))
plt.plot(range(1, 21), inertias, '-o')
plt.xlabel("Number of clusters")
plt.ylabel("Inertia")
plt.xticks(range(1, 21))
plt.show()
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], alpha = 0.7, c = labels[7])
plt.scatter(centroids[7][:, 0], centroids[7][:, 1], s = 50, c = 'red')
plt.show()
###Output
_____no_output_____
###Markdown
Agglomerative Clustering The second method will be agglomerative clustering, and to choose the number of clusters I will use the silhouette score.
###Code
centroids = []
labels = []
scores = []
for k in range(2, 21):
model = AgglomerativeClustering(n_clusters = k, linkage = 'ward')
prediction = model.fit_predict(data)
labels.append(prediction)
clf = NearestCentroid()
clf.fit(data, prediction)
centroids.append(clf.centroids_)
scores.append(silhouette_score(data, prediction))
plt.plot(range(2, 21), scores, '-o')
plt.xlabel("Number of clusters")
plt.ylabel("Silhouette score")
plt.xticks(range(2, 21))
plt.show()
plt.scatter(data.iloc[:, 0], data.iloc[:, 1], alpha = 0.7, c = labels[7])
plt.scatter(centroids[7][:, 0], centroids[7][:, 1], s = 50, c = 'red')
plt.show()
###Output
_____no_output_____ |
M3_W5_Project_v01.ipynb | ###Markdown
M3-W5 Project: Make your Data shine!-------- Assignment Steps:Step 1: In this module project your task is to pick a dataset from the link below Step 2: Load it to Python using an appropriate library (pandas, sqllite3, etc.) Step 3: Understand the issues (take a look at the issues section for each dataset on the given URL) Step 4: Clean the data (take care of outliers, missing values, data types, etc.) and provide explanations for all steps you took while cleaning the data Step 5: Explore and visualize your data Overarching: Submit your work as a Jupyter Notebook with all the code and narrative. Note: [Summary](sum)__________
###Code
import pandas as pd
import numpy as np
import sqlite3
import matplotlib.pyplot as plt
import seaborn as sns
import csv
import pandas_profiling
# Optional packages:
# import fuzzywuzzy
# import requests
# from bs4 import BeautifulSoup
# from datetime import datetime as dt
###Output
_____no_output_____
###Markdown
Step 1: Pick a dataset. Preliminary pick: The Database of The Metropolitan Museum of Art Open Access with the file "MetObjects.csv"____________ Step 2: Load the database to Python using an appropriate library (pandas, sqlite3, etc.). Since we are handling a CSV file, we will load it with pandas' `read_csv`.____________
###Code
file = "MetObjects.csv"
df = pd.read_csv(file, encoding='utf-8') # , na_values="Not Stated"
#df.head()
df.info()
#### Overview of missing data in a heatmap with seaborn - cool stuff ;-)
combined_updated = df.set_index('Object Number')
sns.heatmap(combined_updated.isnull(), cbar=False)
df.head()
!pip install pandas-profiling
# pandas_profiling.ProfileReport(df)
df.isnull().mean()
df.isnull().sum().sum()
###Output
_____no_output_____ |
Artificial Neural Network.ipynb | ###Markdown
process test images
###Code
test_images = mnist.test.images
test_labels = mnist.test.labels
mnist_test = process_data(test_images, test_labels)
###Output
_____no_output_____
###Markdown
process validation images
###Code
validation_images = mnist.validation.images
validation_labels = mnist.validation.labels
mnist_validation = process_data(validation_images, test_labels)
###Output
_____no_output_____
###Markdown
run the ANN
###Code
ann_mnist = ArtificialNeuralNetwork([784, 30, 10])
ann_mnist.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 3.0, epochs = 30, test = mnist_test)
###Output
Epoch 0: 9113 / 10000
Epoch 1: 9221 / 10000
Epoch 2: 9330 / 10000
Epoch 3: 9385 / 10000
Epoch 4: 9410 / 10000
Epoch 5: 9403 / 10000
Epoch 6: 9429 / 10000
Epoch 7: 9415 / 10000
Epoch 8: 9445 / 10000
Epoch 9: 9457 / 10000
Epoch 10: 9448 / 10000
Epoch 11: 9487 / 10000
Epoch 12: 9457 / 10000
Epoch 13: 9495 / 10000
Epoch 14: 9470 / 10000
Epoch 15: 9475 / 10000
Epoch 16: 9470 / 10000
Epoch 17: 9500 / 10000
Epoch 18: 9496 / 10000
Epoch 19: 9507 / 10000
Epoch 20: 9490 / 10000
Epoch 21: 9499 / 10000
Epoch 22: 9491 / 10000
Epoch 23: 9499 / 10000
Epoch 24: 9511 / 10000
Epoch 25: 9492 / 10000
Epoch 26: 9485 / 10000
Epoch 27: 9515 / 10000
Epoch 28: 9486 / 10000
Epoch 29: 9525 / 10000
###Markdown
1. peak accuracy 95.25% when run on a [784, 30, 10] network
###Code
ann_mnist2 = ArtificialNeuralNetwork([784, 100, 10])
ann_mnist2.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 3.0, epochs = 30, test = mnist_test)
###Output
Epoch 0: 6681 / 10000
Epoch 1: 7504 / 10000
Epoch 2: 8427 / 10000
Epoch 3: 8490 / 10000
Epoch 4: 8541 / 10000
Epoch 5: 8591 / 10000
Epoch 6: 8596 / 10000
Epoch 7: 9521 / 10000
Epoch 8: 9533 / 10000
Epoch 9: 9584 / 10000
Epoch 10: 9573 / 10000
Epoch 11: 9601 / 10000
Epoch 12: 9604 / 10000
Epoch 13: 9625 / 10000
Epoch 14: 9630 / 10000
Epoch 15: 9634 / 10000
Epoch 16: 9634 / 10000
Epoch 17: 9640 / 10000
Epoch 18: 9658 / 10000
Epoch 19: 9654 / 10000
Epoch 20: 9652 / 10000
Epoch 21: 9655 / 10000
Epoch 22: 9655 / 10000
Epoch 23: 9661 / 10000
Epoch 24: 9662 / 10000
Epoch 25: 9666 / 10000
Epoch 26: 9672 / 10000
Epoch 27: 9678 / 10000
Epoch 28: 9671 / 10000
Epoch 29: 9673 / 10000
###Markdown
2. peak accuracy 96.78% when run on a [784, 100, 10] network
###Code
ann_mnist3 = ArtificialNeuralNetwork([784, 10])
ann_mnist3.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 3.0, epochs = 30, test = mnist_test)
###Output
Epoch 0: 6444 / 10000
Epoch 1: 7172 / 10000
Epoch 2: 7221 / 10000
Epoch 3: 7250 / 10000
Epoch 4: 7219 / 10000
Epoch 5: 7264 / 10000
Epoch 6: 7262 / 10000
Epoch 7: 7280 / 10000
Epoch 8: 7272 / 10000
Epoch 9: 7282 / 10000
Epoch 10: 7269 / 10000
Epoch 11: 7291 / 10000
Epoch 12: 7270 / 10000
Epoch 13: 7297 / 10000
Epoch 14: 7285 / 10000
Epoch 15: 7285 / 10000
Epoch 16: 7273 / 10000
Epoch 17: 7290 / 10000
Epoch 18: 7277 / 10000
Epoch 19: 7293 / 10000
Epoch 20: 7287 / 10000
Epoch 21: 7294 / 10000
Epoch 22: 7299 / 10000
Epoch 23: 7313 / 10000
Epoch 24: 7272 / 10000
Epoch 25: 7285 / 10000
Epoch 26: 7281 / 10000
Epoch 27: 7270 / 10000
Epoch 28: 7289 / 10000
Epoch 29: 7306 / 10000
###Markdown
3. peak accuracy 73.13% when run on a [784, 10] network
###Code
ann_mnist4 = ArtificialNeuralNetwork([784, 200, 150, 100, 10])
###Output
_____no_output_____
###Markdown
WARNING: very long execution time, takes 2.5 min per epoch
###Code
ann_mnist4.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 3.0, epochs = 30, test = mnist_test)
###Output
Epoch 0: 7191 / 10000
Epoch 1: 7387 / 10000
Epoch 2: 8529 / 10000
Epoch 3: 8649 / 10000
Epoch 4: 8579 / 10000
Epoch 5: 8634 / 10000
Epoch 6: 8727 / 10000
Epoch 7: 8660 / 10000
Epoch 8: 8797 / 10000
Epoch 9: 9497 / 10000
Epoch 10: 9523 / 10000
Epoch 11: 9591 / 10000
Epoch 12: 9565 / 10000
Epoch 13: 9550 / 10000
Epoch 14: 9605 / 10000
Epoch 15: 9597 / 10000
Epoch 16: 9628 / 10000
Epoch 17: 9579 / 10000
Epoch 18: 9662 / 10000
Epoch 19: 9647 / 10000
Epoch 20: 9646 / 10000
Epoch 21: 9620 / 10000
Epoch 22: 9636 / 10000
Epoch 23: 9667 / 10000
Epoch 24: 9640 / 10000
Epoch 25: 9632 / 10000
Epoch 26: 9644 / 10000
Epoch 27: 9683 / 10000
Epoch 28: 9661 / 10000
Epoch 29: 9698 / 10000
###Markdown
4. peak accuracy 96.98% when run on a [784, 200, 150, 100, 10] network For practical purposes, use a [784, 100, 10] network.
###Code
ann_mnist5 = ArtificialNeuralNetwork([784, 100, 10])
ann_mnist5.track_progress(train = mnist_train, mini_batch_size = 32, alpha = 3.0, epochs = 30, test = mnist_test)
###Output
Epoch 0: 5408 / 10000
Epoch 1: 5619 / 10000
Epoch 2: 5675 / 10000
Epoch 3: 5716 / 10000
Epoch 4: 5736 / 10000
Epoch 5: 5742 / 10000
Epoch 6: 5763 / 10000
Epoch 7: 5774 / 10000
Epoch 8: 5777 / 10000
Epoch 9: 5783 / 10000
Epoch 10: 5790 / 10000
Epoch 11: 5793 / 10000
Epoch 12: 5794 / 10000
Epoch 13: 5801 / 10000
Epoch 14: 5813 / 10000
Epoch 15: 5807 / 10000
Epoch 16: 5817 / 10000
Epoch 17: 5900 / 10000
Epoch 18: 6703 / 10000
Epoch 19: 6718 / 10000
Epoch 20: 6723 / 10000
Epoch 21: 6722 / 10000
Epoch 22: 6731 / 10000
Epoch 23: 6719 / 10000
Epoch 24: 6744 / 10000
Epoch 25: 6744 / 10000
Epoch 26: 6743 / 10000
Epoch 27: 6758 / 10000
Epoch 28: 6749 / 10000
Epoch 29: 6748 / 10000
###Markdown
5. peak accuracy 67.58% when run on a [784, 100, 10] network with a mini-batch size of 32 Start using 1000 train images and validation data (instead of test data) of 5000 images to tune the hyperparameters.
###Code
ann_mnist7 = ArtificialNeuralNetwork([784, 100, 10])
ann_mnist7.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 3.0, epochs = 30,
test = mnist_validation)
ann_mnist1 = ArtificialNeuralNetwork([784, 30, 10])
ann_mnist1.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 3.0, epochs = 30, test = mnist_test)
ann_mnisti = ArtificialNeuralNetwork([784, 30, 10], cross_entropy = True)
ann_mnisti.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 0.5, epochs = 30, test = mnist_test)
ann_mnistl = ArtificialNeuralNetwork([784, 100, 10], cross_entropy = True)
ann_mnistl.track_progress(train = mnist_train, mini_batch_size = 10, alpha = 0.1, epochs = 60, lmbda = 5.0, test = mnist_test)
###Output
Epoch 1: 9300 / 10000
Epoch 2: 9487 / 10000
Epoch 3: 9556 / 10000
Epoch 4: 9612 / 10000
Epoch 5: 9632 / 10000
Epoch 6: 9676 / 10000
Epoch 7: 9691 / 10000
Epoch 8: 9705 / 10000
Epoch 9: 9707 / 10000
Epoch 10: 9728 / 10000
Epoch 11: 9739 / 10000
Epoch 12: 9741 / 10000
Epoch 13: 9733 / 10000
Epoch 14: 9741 / 10000
Epoch 15: 9739 / 10000
Epoch 16: 9761 / 10000
Epoch 17: 9772 / 10000
Epoch 18: 9765 / 10000
Epoch 19: 9768 / 10000
Epoch 20: 9766 / 10000
Epoch 21: 9771 / 10000
Epoch 22: 9773 / 10000
Epoch 23: 9771 / 10000
Epoch 24: 9783 / 10000
Epoch 25: 9774 / 10000
Epoch 26: 9787 / 10000
Epoch 27: 9784 / 10000
Epoch 28: 9784 / 10000
Epoch 29: 9786 / 10000
Epoch 30: 9780 / 10000
Epoch 31: 9791 / 10000
Epoch 32: 9788 / 10000
Epoch 33: 9796 / 10000
Epoch 34: 9789 / 10000
Epoch 35: 9789 / 10000
Epoch 36: 9786 / 10000
Epoch 37: 9797 / 10000
Epoch 38: 9792 / 10000
Epoch 39: 9795 / 10000
Epoch 40: 9792 / 10000
Epoch 41: 9788 / 10000
Epoch 42: 9794 / 10000
Epoch 43: 9799 / 10000
Epoch 44: 9790 / 10000
Epoch 45: 9800 / 10000
Epoch 46: 9798 / 10000
Epoch 47: 9789 / 10000
Epoch 48: 9793 / 10000
Epoch 49: 9799 / 10000
Epoch 50: 9787 / 10000
Epoch 51: 9799 / 10000
Epoch 52: 9782 / 10000
Epoch 53: 9793 / 10000
Epoch 54: 9804 / 10000
Epoch 55: 9796 / 10000
Epoch 56: 9803 / 10000
Epoch 57: 9796 / 10000
Epoch 58: 9804 / 10000
Epoch 59: 9799 / 10000
Epoch 60: 9803 / 10000
###Markdown
process raw data into train and test lists of tuples
###Code
a = np.array([[1,2,3],
[4,5,6],
[7,8,9],
[10,11,12],
[13,14,15]])
b = np.array([[11,22],
[33,44],
[55,66],
[77,88],
[99,101]])
a.shape[1]
emp_lis = []
for (ai,bi) in zip(a,b):
emp_lis.append((ai.reshape(3,1),
bi.reshape(2,1)))
emp_lis
def process_data(data_inputs, data_labels):
"""Function to package data into a list of tuples
consisting of input data and target output arrays.
Returns a list of data tuples.
`data_inputs`: (2D array)
`data_labels`: (2D array)."""
input_col = data_inputs.shape[1]
label_col = data_labels.shape[1]
data_list = []
for (di,dl) in zip(data_inputs, data_labels):
data_list.append((di.reshape(input_col,1),
dl.reshape(label_col,1)))
return (data_list)
###Output
_____no_output_____
###Markdown
process training images
###Code
train_images = mnist.train.images
train_labels = mnist.train.labels
mnist_train = process_data(train_images, train_labels)
###Output
_____no_output_____
###Markdown
Based on Demystifying Neural Networks [link](https://www.youtube.com/watch?v=UJwK6jAStmg&t=43.049197). X = [hourSleep, hourStudy], size 3x2; y = scoreTest, size 3x1.

| X | y |
|---|---|
| 3, 5 | 75 |
| 5, 1 | 82 |
| 10, 2 | 93 |

X = input, y = output, Z = hidden layer, W = weights, a = layer activity, delta = backpropagating error
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def sigmoid(z):
# Apply sigmoid activation function
return 1/(1+np.exp(-z))
testInput = np.arange(-6, 6, 0.01)
plt.plot(testInput, sigmoid(testInput), linewidth=2)
plt.grid(1)
class Neural_Network(object):
def __init__(self):
# Define HyperParameters
self.inputLayerSize = 2
self.outputLayerSize = 1
self.hiddenLayerSize = 3
# Weights (parameters)
self.W1 = np.random.randn(self.inputLayerSize, self.hiddenLayerSize)
self.W2 = np.random.randn(self.hiddenLayerSize, self.outputLayerSize)
def forward(self, X):
# Propagate inputs through Network
self.z2 = np.dot(X, self.W1)
self.a2 = self.sigmoid(self.z2)
self.z3 = np.dot(self.a2, self.W2)
yHat = self.sigmoid(self.z3)
return yHat
def sigmoid(self, z):
# Apply sigmoid activation function to scalar, vector or matrix
return 1/(1+np.exp(-z))
def sigmoidprime(self, z):
return self.sigmoid(z) * (1- self.sigmoid(z))
def costfunctionprime(self, X, y):
# compute derivative with respect to W1 and W2
self.yHat = self.forward(X)
delta3 = np.multiply(-(y-self.yHat), self.sigmoidprime(self.z3))
dJdW2 = np.dot(self.a2.T, delta3) # transpose multiplication means division
delta2 = np.dot(delta3, self.W2.T) * self.sigmoidprime(self.z2)
dJdW1 = np.dot(X.T, delta2)
return dJdW1, dJdW2
###Output
_____no_output_____ |
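###Markdown
A minimal usage sketch of the class above (the scaling and the learning rate are assumptions for illustration, not part of the original walkthrough): one plain gradient-descent loop driven by the gradients returned by `costfunctionprime`.
###Code
# Data from the table above, scaled into [0, 1] so the sigmoid outputs are comparable.
X = np.array([[3, 5], [5, 1], [10, 2]], dtype=float)
y = np.array([[75], [82], [93]], dtype=float)
X = X / np.amax(X, axis=0)
y = y / 100.0

nn = Neural_Network()
learning_rate = 1.0  # arbitrary choice, not tuned
for _ in range(1000):
    dJdW1, dJdW2 = nn.costfunctionprime(X, y)
    nn.W1 -= learning_rate * dJdW1
    nn.W2 -= learning_rate * dJdW2
print(nn.forward(X))  # predictions should move toward the scaled y
###Output
_____no_output_____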
examples/archive/Regression_Advance2.ipynb | ###Markdown
Regression with Orbit - Advance II Continuing from demo I, we revisit regression with multivariate regressors and observe the limits of each regression penalty.
###Code
import pandas as pd
import numpy as np
import gc
import warnings
warnings.filterwarnings('ignore')
import statsmodels.api as sm
import matplotlib.pyplot as plt
%matplotlib inline
from orbit.models.lgt import LGTMAP, LGTAggregated, LGTFull
from orbit.models.dlt import DLTMAP, DLTAggregated, DLTFull
from orbit.diagnostics.plot import plot_posterior_params
from orbit.constants.palette import QualitativePalette
from orbit.utils.simulation import make_ts_multiplicative
# randomization is using numpy with this version
print(np.__version__)
###Output
1.19.0
###Markdown
Simulation of Regression with Trend This time, we simulate the regressors from a `multivariate normal`, so that the observed regressor values come with a covariance structure.
###Code
# To scale regressor values in a nicer way
REG_BASE = 1000
COEFS = np.array([0.03, 0.08, -0.3, 0.35, 0.22], dtype=np.float64)
SEED = 2020
COVAR = np.array([[0.2, 0.3, 0.0, 0.0, 0.0],
[0.3, 0.2, 0.4, 0.0, 0.0],
[0.0, 0.4, 0.2, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.2, 0.1],
[0.0, 0.0, 0.0, 0.1, 0.2]], dtype=np.float64)
# Looks like the RuntimeWarning is not impactful
raw_df, trend, seas, coefs = make_ts_multiplicative(
series_len=200, seasonality=52, coefs=COEFS,
regressor_log_loc=0.0, noise_to_signal_ratio=1.0,
regressor_log_cov=COVAR,
regression_sparsity=0.5, obs_val_base=1000, regresspr_val_base=REG_BASE, trend_type='rw',
seas_scale=.05, response_col='response', seed=SEED
)
num_of_regressors = len(coefs)
coefs
###Output
_____no_output_____
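###Markdown
A quick peek at the simulated frame before any transformation (added here for illustration):
###Code
raw_df.head()
###Output
_____no_output_____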
###Markdown
Estimating Coefficients I - full relevance Assume we observe the data frame `df` and the scaler `REG_BASE`
###Code
df = raw_df.copy()
regressor_cols = [f"regressor_{x}" for x in range(1, num_of_regressors + 1)]
response_col = "response"
df[regressor_cols] = df[regressor_cols]/REG_BASE
df[regressor_cols] = df[regressor_cols].apply(np.log1p)
df[response_col] = np.log(df[response_col])
mod_auto_ridge = DLTFull(
response_col=response_col,
date_col="date",
regressor_col=regressor_cols,
seasonality=52,
seed=SEED,
is_multiplicative=False,
regression_penalty='auto_ridge',
num_warmup=4000,
num_sample=1000,
stan_mcmc_control={'adapt_delta':0.9},
)
mod_auto_ridge.fit(df=df)
mod_fixed_ridge1 = DLTFull(
response_col=response_col,
date_col="date",
regressor_col=regressor_cols,
seasonality=52,
seed=SEED,
is_multiplicative=False,
regression_penalty='fixed_ridge',
regressor_sigma_prior=[0.5] * num_of_regressors,
num_warmup=4000,
num_sample=1000,
)
mod_fixed_ridge1.fit(df=df)
mod_fixed_ridge2 = DLTFull(
response_col=response_col,
date_col="date",
regressor_col=regressor_cols,
seasonality=52,
seed=SEED,
is_multiplicative=False,
regression_penalty='fixed_ridge',
regressor_sigma_prior=[0.05] * num_of_regressors,
num_warmup=4000,
num_sample=1000,
)
mod_fixed_ridge2.fit(df=df)
coef_auto_ridge = np.median(mod_auto_ridge._posterior_samples['rr_beta'], axis=0)
coef_fixed_ridge1 =np.median(mod_fixed_ridge1._posterior_samples['rr_beta'], axis=0)
coef_fixed_ridge2 =np.median(mod_fixed_ridge2._posterior_samples['rr_beta'], axis=0)
###Output
_____no_output_____
###Markdown
A small `sigma_prior` may lead to over-regularization.
###Code
lw=3
plt.figure(figsize=(16, 8))
plt.title("Weights of the model")
plt.plot(coef_auto_ridge, color=QualitativePalette.Line4.value[0], linewidth=lw, label="Auto Ridge", alpha=0.8, linestyle='--')
plt.plot(coef_fixed_ridge1, color=QualitativePalette.Line4.value[1], linewidth=lw, label="Fixed Ridge1", alpha=0.8, linestyle='--')
plt.plot(coef_fixed_ridge2, color=QualitativePalette.Line4.value[2], linewidth=lw, label="Fixed Ridge2", alpha=0.8, linestyle='--')
plt.plot(coefs, color="black", linewidth=lw, label="Ground truth", alpha=0.5)
plt.legend()
plt.grid()
scale_priors = np.round(np.arange(0.05, 0.5 + 0.01, 0.05), 2)
print(scale_priors)
coef_sum_list = []
for idx, scale_prior in enumerate(scale_priors):
print(f"Fitting with scale prior: {scale_prior}")
# fit a fixed ridge
mod = DLTAggregated(
response_col=response_col,
date_col="date",
regressor_col=regressor_cols,
seasonality=52,
seed=SEED,
is_multiplicative=False,
regression_penalty='fixed_ridge',
regressor_sigma_prior=[scale_prior] * num_of_regressors,
num_sample=1000,
num_warmup=4000,
)
mod.fit(df=df)
temp = mod.get_regression_coefs()
temp['scale_prior'] = scale_prior
temp.rename(columns={'coefficient': 'fixed_ridge_estimate'}, inplace=True)
temp.drop(['regressor_sign'], inplace=True, axis=1)
# fit a auto ridge
mod = DLTAggregated(
response_col=response_col,
date_col="date",
regressor_col=regressor_cols,
seasonality=52,
seed=SEED,
is_multiplicative=False,
regression_penalty='auto_ridge',
auto_ridge_scale=scale_prior,
regressor_sigma_prior=[scale_prior] * num_of_regressors,
num_sample=1000,
num_warmup=4000,
stan_mcmc_control={'adapt_delta':0.9},
)
mod.fit(df=df)
temp2 = mod.get_regression_coefs()
temp['auto_ridge_estimate'] = temp2['coefficient'].values
coef_sum_list.append(temp)
coef_summary = pd.concat(coef_sum_list, axis=0)
del temp, coef_sum_list
gc.collect()
figsize=(12, 12)
fig, axes = plt.subplots(len(regressor_cols), 1, facecolor='w', figsize=figsize)
idx = 0
lw=3
# for idx, reg in enumerate(regressor_cols):
for ax, reg in zip(axes, regressor_cols):
sub_df = coef_summary[coef_summary['regressor'] == reg]
x = sub_df['scale_prior'].values
y = sub_df['fixed_ridge_estimate'].values
ax.plot(x, y, marker='.', color=QualitativePalette.Line4.value[0], label="fixed_ridge_estimate",
lw=lw, markersize=20, alpha=0.8)
y = sub_df['auto_ridge_estimate'].values
ax.plot(x, y, marker='.', color=QualitativePalette.Line4.value[1], label="auto_ridge_estimate",
lw=lw, markersize=20, alpha=0.8)
ax.axhline(y=coefs[idx], marker=None, color='black', label='ground_truth', lw=lw, alpha=0.5, linestyle='--')
ax.grid(True, which='both', c='gray', ls='-', lw=1, alpha=0.2)
ax.set_title(reg, fontsize=16)
ax.set_ylim(coefs[idx] - 0.15, coefs[idx] + 0.15)
idx += 1
handles, labels = ax.get_legend_handles_labels()
fig.legend(handles, labels, loc='lower right')
fig.tight_layout()
###Output
_____no_output_____ |
python/basic/symbol.ipynb | ###Markdown
Symbol TutorialBesides the tensor computation interface [NDArray](./ndarray.ipynb), another main object in MXNet is the `Symbol` provided by `mxnet.symbol`, or `mxnet.sym` for short. A symbol represents a multi-output symbolic expression. They are composited by operators, such as simple matrix operations (e.g. “+”), or a neural network layer (e.g. convolution layer). An operator can take several input variables, produce more than one output variables, and have internal state variables. A variable can be either free, which we can bind with value later, or an output of another symbol. Symbol Composition Basic OperatorsThe following example composites a simple expression `a+b`. We first create the placeholders `a` and `b` with names using `mx.sym.Variable`, and then construct the desired symbol by using the operator `+`. When the string name is not given during creating, MXNet will automatically generate a unique name for the symbol, which is the case for `c`.
###Code
import mxnet as mx
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = a + b
(a, b, c)
###Output
_____no_output_____
###Markdown
Most `NDArray` operators can be applied to `Symbol`, for example:
###Code
# elemental wise times
d = a * b
# matrix multiplication
e = mx.sym.dot(a, b)
# reshape
f = mx.sym.Reshape(d+e, shape=(1,4))
# broadcast
g = mx.sym.broadcast_to(f, shape=(2,4))
mx.viz.plot_network(symbol=g)
###Output
_____no_output_____
###Markdown
Basic Neural NetworksBesides the basic operators, `Symbol` has a rich set of neural network layers. The following code constructs a two-layer fully connected neural network and then visualizes the structure given the input data shape.
###Code
# Output may vary
net = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=net, name='relu1', act_type="relu")
net = mx.sym.FullyConnected(data=net, name='fc2', num_hidden=10)
net = mx.sym.SoftmaxOutput(data=net, name='out')
mx.viz.plot_network(net, shape={'data':(100,200)})
###Output
_____no_output_____
###Markdown
Modulelized Construction for Deep NetworksFor deep networks, such as the Google Inception, constructing layer by layer is painful given the large number of layers. For these networks, we often modularize the construction. Take the Google Inception as an example, we can first define a factory function to chain the convolution layer, batch normalization layer, and Relu activation layer together:
###Code
# Output may vary
def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''):
conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix))
bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix))
act = mx.symbol.Activation(data=bn, act_type='relu', name='relu_%s%s' %(name, suffix))
return act
prev = mx.symbol.Variable(name="Previos Output")
conv_comp = ConvFactory(data=prev, num_filter=64, kernel=(7,7), stride=(2, 2))
shape = {"Previos Output" : (128, 3, 28, 28)}
mx.viz.plot_network(symbol=conv_comp, shape=shape)
###Output
_____no_output_____
###Markdown
Then we define a function that constructs an Inception module based on `ConvFactory`
###Code
# Output may vary
def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name):
# 1x1
c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name))
# 3x3 reduce + 3x3
c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name))
# double 3x3 reduce + double 3x3
cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name))
cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name))
# pool + proj
pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name)))
cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name))
# concat
concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name)
return concat
prev = mx.symbol.Variable(name="Previos Output")
in3a = InceptionFactoryA(prev, 64, 64, 64, 64, 96, "avg", 32, name="in3a")
mx.viz.plot_network(symbol=in3a, shape=shape)
###Output
_____no_output_____
###Markdown
Finally we can obtain the whole network by chaining multiple inception modulas. A complete example is available at [mxnet/example/image-classification/symbol_inception-bn.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/symbol_inception-bn.py) Group Multiple SymbolsTo construct neural networks with multiple loss layers, we can use `mxnet.sym.Group` to group multiple symbols together. The following example group two outputs:
###Code
net = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=fc1, name='relu1', act_type="relu")
out1 = mx.sym.SoftmaxOutput(data=net, name='softmax')
out2 = mx.sym.LinearRegressionOutput(data=net, name='regression')
group = mx.sym.Group([out1, out2])
group.list_outputs()
###Output
_____no_output_____
###Markdown
Relations to NDArrayAs can be seen now, both Symbol and NDArray provide multi-dimensional array operations, such as `c=a+b` in MXNet. Sometimes users are confused about which one to use. We briefly clarify the difference here; a more detailed explanation is available [here](http://mxnet.readthedocs.io/en/latest/system/program_model.html). `NDArray` provides an imperative-programming-like interface, in which the computations are evaluated statement by statement, while `Symbol` is closer to declarative programming, in which we first declare the computation and then evaluate it with data. Examples in this category include regular expressions and SQL.The pros for `NDArray`:- straightforward- easy to work with other language features (for loop, if-else condition, ..) and libraries (numpy, ..)- easy to debug step by stepThe pros for `Symbol`:- provides almost all functionalities of NDArray, such as +, \*, sin, and reshape - provides a large number of neural-network-related operators such as Convolution, Activation, and BatchNorm- provides automatic differentiation - easy to construct and manipulate complex computations such as deep neural networks- easy to save, load, and visualize- easy for the backend to optimize the computation and memory usageWe will show in the [mixed programming tutorial](./mixed.ipynb) how these two interfaces can be used together to develop a complete training program. This tutorial will focus on the usage of Symbol. Symbol Manipulation *One important difference between `Symbol` and `NDArray` is that we first declare the computation and then bind it with data to run. In this section we introduce the functions to manipulate a symbol directly. Note that most of them are wrapped nicely by [`mx.module`](./module.ipynb), so this section can be skipped safely. Shape InferenceFor each symbol, we can query its inputs (or arguments) and outputs. We can also infer the output shape from a given input shape, which facilitates memory allocation.
###Code
arg_name = c.list_arguments() # get the names of the inputs
out_name = c.list_outputs() # get the names of the outputs
arg_shape, out_shape, _ = c.infer_shape(a=(2,3), b=(2,3))
{'input' : dict(zip(arg_name, arg_shape)),
'output' : dict(zip(out_name, out_shape))}
###Output
_____no_output_____
###Markdown
Bind with Data and EvaluateThe symbol `c` we constructed declares what computation should be run. To evaluate it, we need to feed arguments, namely free variables, with data first. We can do it by using the `bind` method, which accepts device context and a `dict` mapping free variable names to `NDArray`s as arguments and returns an executor. The executor provides method `forward` for evaluation and attribute `outputs` to get all results.
###Code
"""test_ndarray_element_value"""
import numpy as np
def test_val(nd_array, val):
for num in np.nditer(nd_array):
assert round(num, 7) == round(val, 7), "NDArray element value incorrect."
ex = c.bind(ctx=mx.cpu(), args={'a' : mx.nd.ones([2,3]),
'b' : mx.nd.ones([2,3])})
ex.forward()
test_val(ex.outputs[0].asnumpy(), 2)
print('number of outputs = %d\nthe first output = \n%s' % (
    len(ex.outputs), ex.outputs[0].asnumpy()))
###Output
number of outputs = 1
the first output =
[[ 2. 2. 2.]
[ 2. 2. 2.]]
###Markdown
We can evaluate the same symbol on GPU with different data
###Code
ex_gpu = c.bind(ctx=mx.gpu(), args={'a' : mx.nd.ones([3,4], mx.gpu())*2,
'b' : mx.nd.ones([3,4], mx.gpu())*3})
ex_gpu.forward()
ex_gpu.outputs[0].asnumpy()
###Output
_____no_output_____
###Markdown
Load and SaveSimilar to NDArray, we can either serialize a `Symbol` object using `pickle`, or use `save` and `load` directly. Unlike the binary format chosen by `NDArray`, `Symbol` uses the more readable JSON format for serialization. The `tojson` method returns the JSON string.
###Code
print(c.tojson())
c.save('symbol-c.json')
c2 = mx.symbol.load('symbol-c.json')
c.tojson() == c2.tojson()
###Output
{
"nodes": [
{
"op": "null",
"name": "a",
"inputs": []
},
{
"op": "null",
"name": "b",
"inputs": []
},
{
"op": "elemwise_add",
"name": "_plus0",
"inputs": [[0, 0, 0], [1, 0, 0]]
}
],
"arg_nodes": [0, 1],
"node_row_ptr": [0, 1, 2, 3],
"heads": [[2, 0, 0]],
"attrs": {"mxnet_version": ["int", 901]}
}
###Markdown
Customized Symbol *Most operators such as `mx.sym.Convolution` and `mx.sym.Reshape` are implemented in C++ for better performance. MXNet also allows users to write new operators using any frontend language such as Python, which often makes developing and debugging much easier. To implement an operator in Python, we just need to define the two computation methods `forward` and `backward`, together with several methods for querying the properties, such as `list_arguments` and `infer_shape`. `NDArray` is the default argument type in both `forward` and `backward`, so we often implement the computation with `NDArray` operations as well. To show the flexibility of MXNet, however, we will demonstrate an implementation of the `softmax` layer using NumPy. Although a NumPy-based operator can only run on the CPU and loses some optimizations that can be applied to NDArray, it enjoys the rich functionality provided by NumPy.We first create a subclass of `mx.operator.CustomOp` and then define `forward` and `backward`.
###Code
class Softmax(mx.operator.CustomOp):
def forward(self, is_train, req, in_data, out_data, aux):
x = in_data[0].asnumpy()
y = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))
y /= y.sum(axis=1).reshape((x.shape[0], 1))
self.assign(out_data[0], req[0], mx.nd.array(y))
def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        l = in_data[1].asnumpy().ravel().astype(np.int64)  # use a concrete integer dtype (np.int is deprecated in newer NumPy)
y = out_data[0].asnumpy()
y[np.arange(l.shape[0]), l] -= 1.0
self.assign(in_grad[0], req[0], mx.nd.array(y))
###Output
_____no_output_____
###Markdown
Here we use `asnumpy` to convert the `NDArray` inputs into `numpy.ndarray`. Then we use `CustomOp.assign` to assign the results back to `mxnet.NDArray`, based on the value of `req`, which could be "overwrite" or "add to". Next we create a subclass of `mx.operator.CustomOpProp` for querying the properties.
###Code
# register this operator into MXNet by name "softmax"
@mx.operator.register("softmax")
class SoftmaxProp(mx.operator.CustomOpProp):
def __init__(self):
# softmax is a loss layer so we don’t need gradient input
# from layers above.
super(SoftmaxProp, self).__init__(need_top_grad=False)
def list_arguments(self):
return ['data', 'label']
def list_outputs(self):
return ['output']
def infer_shape(self, in_shape):
data_shape = in_shape[0]
label_shape = (in_shape[0][0],)
output_shape = in_shape[0]
return [data_shape, label_shape], [output_shape], []
def create_operator(self, ctx, shapes, dtypes):
return Softmax()
###Output
_____no_output_____
###Markdown
Finally, we can use `mx.sym.Custom` with the registered name to use this operator:
```python
net = mx.symbol.Custom(data=prev_input, op_type='softmax')
``` Advanced Usages * Type CastMXNet uses 32-bit floats by default. Sometimes we want to use a lower-precision data type for a better accuracy-performance trade-off. For example, the NVIDIA Tesla Pascal GPUs (e.g. P100) have improved 16-bit float performance, while GTX Pascal GPUs (e.g. GTX 1080) are fast on 8-bit integers. We can use the `mx.sym.Cast` operator to convert the data type.
###Code
a = mx.sym.Variable('data')
b = mx.sym.Cast(data=a, dtype='float16')
arg, out, _ = b.infer_type(data='float32')
assert out[0] is np.float16, "Type cast failed."
print({'input':arg, 'output':out})
c = mx.sym.Cast(data=a, dtype='uint8')
arg, out, _ = c.infer_type(data='int32')
assert out[0] is np.uint8, "Type cast failed."
print({'input':arg, 'output':out})
###Output
{'input': [<type 'numpy.float32'>], 'output': [<type 'numpy.float16'>]}
{'input': [<type 'numpy.int32'>], 'output': [<type 'numpy.uint8'>]}
###Markdown
Variable SharingSometimes we want to share the contents between several symbols. This can be done simply by binding these symbols to the same array.
###Code
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = mx.sym.Variable('c')
d = a + b * c
data = mx.nd.ones((2,3))*2
ex = d.bind(ctx=mx.cpu(), args={'a':data, 'b':data, 'c':data})
ex.forward()
test_val(ex.outputs[0].asnumpy(), 6)
ex.outputs[0].asnumpy()
###Output
_____no_output_____
###Markdown
Symbol TutorialBesides the tensor computation interface [NDArray](./ndarray.ipynb), another main object in MXNet is the `Symbol` provided by `mxnet.symbol`, or `mxnet.sym` for short. A symbol represents a multi-output symbolic expression. Symbols are composed of operators, such as simple matrix operations (e.g. “+”) or a neural network layer (e.g. a convolution layer). An operator can take several input variables, produce more than one output variable, and have internal state variables. A variable can be either free, in which case we can bind it with a value later, or the output of another symbol. Symbol Composition Basic OperatorsThe following example composes a simple expression `a+b`. We first create the placeholders `a` and `b` with names using `mx.sym.Variable`, and then construct the desired symbol using the operator `+`. When a string name is not given during creation, MXNet automatically generates a unique name for the symbol, which is the case for `c`.
###Code
import mxnet as mx
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = a + b
(a, b, c)
###Output
_____no_output_____
###Markdown
Most `NDArray` operators can be applied to `Symbol`, for example:
###Code
# element-wise multiplication
d = a * b
# matrix multiplication
e = mx.sym.dot(a, b)
# reshape
f = mx.sym.Reshape(d+e, shape=(1,4))
# broadcast
g = mx.sym.broadcast_to(f, shape=(2,4))
mx.viz.plot_network(symbol=g)
###Output
_____no_output_____
###Markdown
Basic Neural NetworksBesides the basic operators, `Symbol` has a rich set of neural network layers. The following code constructs a two-layer fully connected neural network and then visualizes the structure for a given input data shape.
###Code
# Output may vary
net = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=net, name='relu1', act_type="relu")
net = mx.sym.FullyConnected(data=net, name='fc2', num_hidden=10)
net = mx.sym.SoftmaxOutput(data=net, name='out')
mx.viz.plot_network(net, shape={'data':(100,200)})
###Output
_____no_output_____
###Markdown
Modularized Construction for Deep NetworksFor deep networks, such as Google Inception, constructing the network layer by layer is painful given the large number of layers. For such networks, we often modularize the construction. Taking Google Inception as an example, we can first define a factory function to chain the convolution layer, batch normalization layer, and ReLU activation layer together:
###Code
# Output may vary
def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''):
conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix))
bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix))
act = mx.symbol.Activation(data=bn, act_type='relu', name='relu_%s%s' %(name, suffix))
return act
prev = mx.symbol.Variable(name="Previos Output")
conv_comp = ConvFactory(data=prev, num_filter=64, kernel=(7,7), stride=(2, 2))
shape = {"Previos Output" : (128, 3, 28, 28)}
mx.viz.plot_network(symbol=conv_comp, shape=shape)
###Output
_____no_output_____
###Markdown
Then we define a function that constructs an Inception module based on `ConvFactory`
###Code
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name):
# 1x1
c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name))
# 3x3 reduce + 3x3
c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce')
c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name))
# double 3x3 reduce + double 3x3
cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce')
cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name))
cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name))
# pool + proj
pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name)))
cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name))
# concat
concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name)
return concat
prev = mx.symbol.Variable(name="Previos Output")
in3a = InceptionFactoryA(prev, 64, 64, 64, 64, 96, "avg", 32, name="in3a")
mx.viz.plot_network(symbol=in3a, shape=shape)
###Output
_____no_output_____
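###Markdown
As a rough illustration of what such chaining looks like (a minimal sketch only, reusing `ConvFactory` and `InceptionFactoryA` defined above; the filter counts are made up for illustration and are not those of the referenced `symbol_inception-bn.py`):
```python
# Illustrative only: stem -> two Inception modules -> global pooling -> classifier.
body = InceptionFactoryA(conv_comp, 64, 64, 64, 64, 96, "avg", 32, name="in3a")
body = InceptionFactoryA(body, 64, 96, 96, 96, 96, "max", 32, name="in3b")
pool = mx.symbol.Pooling(data=body, kernel=(7, 7), pool_type="avg", name="global_pool")
flat = mx.symbol.Flatten(data=pool, name="flatten")
fc = mx.symbol.FullyConnected(data=flat, num_hidden=10, name="fc1")
net = mx.symbol.SoftmaxOutput(data=fc, name="softmax")
```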
###Markdown
Finally we can obtain the whole network by chaining multiple inception modules. A complete example is available at [mxnet/example/image-classification/symbol_inception-bn.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/symbol_inception-bn.py) Group Multiple SymbolsTo construct neural networks with multiple loss layers, we can use `mxnet.sym.Group` to group multiple symbols together. The following example groups two outputs:
###Code
net = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=fc1, name='relu1', act_type="relu")
out1 = mx.sym.SoftmaxOutput(data=net, name='softmax')
out2 = mx.sym.LinearRegressionOutput(data=net, name='regression')
group = mx.sym.Group([out1, out2])
group.list_outputs()
###Output
_____no_output_____
###Markdown
Relations to NDArrayAs we have seen, both Symbol and NDArray provide multi-dimensional array operations, such as `c=a+b` in MXNet. Sometimes users are confused about which one to use. We briefly clarify the difference here; a more detailed explanation is available [here](http://mxnet.readthedocs.io/en/latest/system/program_model.html). `NDArray` provides an imperative-programming-style interface, in which computations are evaluated statement by statement, while `Symbol` is closer to declarative programming, in which we first declare the computation and then evaluate it with data. Other examples in this category include regular expressions and SQL.The pros of `NDArray`:- straightforward- easy to combine with other language features (for loops, if-else conditions, ..) and libraries (numpy, ..)- easy to debug step by stepThe pros of `Symbol`:- provides almost all functionalities of NDArray, such as +, \*, sin, and reshape- provides a large number of neural-network-related operators such as Convolution, Activation, and BatchNorm- provides automatic differentiation- easy to construct and manipulate complex computations such as deep neural networks- easy to save, load, and visualize- easy for the backend to optimize the computation and memory usageWe will show in the [mixed programming tutorial](./mixed.ipynb) how these two interfaces can be used together to develop a complete training program. This tutorial focuses on the usage of Symbol. Symbol Manipulation *One important difference between `Symbol` and `NDArray` is that we first declare the computation and then bind it with data to run. In this section we introduce the functions to manipulate a symbol directly. Note that most of them are nicely wrapped by the [`mx.module`](./module.ipynb) package, so this section can safely be skipped. Shape InferenceFor each symbol, we can query its inputs (or arguments) and outputs. We can also infer the output shape given the input shape, which facilitates memory allocation.
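As a quick aside, here is a minimal side-by-side sketch of the imperative vs. declarative styles described above (the `bind`/`forward` calls used for the symbolic version are explained in detail further below):
```python
import mxnet as mx

# Imperative (NDArray): each statement is evaluated immediately.
a_nd = mx.nd.ones((2, 3))
b_nd = mx.nd.ones((2, 3))
c_nd = a_nd + b_nd              # values are available right away
print(c_nd.asnumpy())

# Declarative (Symbol): first declare the computation ...
a_sym = mx.sym.Variable('a')
b_sym = mx.sym.Variable('b')
c_sym = a_sym + b_sym
# ... then bind data and evaluate.
ex = c_sym.bind(ctx=mx.cpu(), args={'a': mx.nd.ones((2, 3)), 'b': mx.nd.ones((2, 3))})
print(ex.forward()[0].asnumpy())
```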
###Code
arg_name = c.list_arguments() # get the names of the inputs
out_name = c.list_outputs() # get the names of the outputs
arg_shape, out_shape, _ = c.infer_shape(a=(2,3), b=(2,3))
{'input' : dict(zip(arg_name, arg_shape)),
'output' : dict(zip(out_name, out_shape))}
###Output
_____no_output_____
###Markdown
Bind with Data and EvaluateThe symbol `c` we constructed declares what computation should be run. To evaluate it, we need to feed arguments, namely free variables, with data first. We can do it by using the `bind` method, which accepts device context and a `dict` mapping free variable names to `NDArray`s as arguments and returns an executor. The executor provides method `forward` for evaluation and attribute `outputs` to get all results.
###Code
ex = c.bind(ctx=mx.cpu(), args={'a' : mx.nd.ones([2,3]),
'b' : mx.nd.ones([2,3])})
ex.forward()
print('number of outputs = %d\nthe first output = \n%s' % (
    len(ex.outputs), ex.outputs[0].asnumpy()))
###Output
number of outputs = 1
the first output =
[[ 2. 2. 2.]
[ 2. 2. 2.]]
###Markdown
We can evaluate the same symbol on GPU with different data
###Code
ex_gpu = c.bind(ctx=mx.gpu(), args={'a' : mx.nd.ones([3,4], mx.gpu())*2,
'b' : mx.nd.ones([3,4], mx.gpu())*3})
ex_gpu.forward()
ex_gpu.outputs[0].asnumpy()
###Output
_____no_output_____
###Markdown
Load and SaveSimilar to NDArray, we can either serialize a `Symbol` object using `pickle`, or use `save` and `load` directly. Unlike the binary format chosen by `NDArray`, `Symbol` uses the more readable JSON format for serialization. The `tojson` method returns the JSON string.
###Code
print(c.tojson())
c.save('symbol-c.json')
c2 = mx.symbol.load('symbol-c.json')
c.tojson() == c2.tojson()
###Output
{
"nodes": [
{
"op": "null",
"name": "a",
"inputs": []
},
{
"op": "null",
"name": "b",
"inputs": []
},
{
"op": "elemwise_add",
"name": "_plus0",
"inputs": [[0, 0, 0], [1, 0, 0]]
}
],
"arg_nodes": [0, 1],
"node_row_ptr": [0, 1, 2, 3],
"heads": [[2, 0, 0]],
"attrs": {"mxnet_version": ["int", 901]}
}
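###Markdown
The `pickle` route mentioned above is not shown in the cell; a minimal sketch of what it could look like (assuming the same symbol `c`):
```python
import pickle

# Serialize and restore the symbol via pickle instead of save()/load().
with open('symbol-c.pkl', 'wb') as f:
    pickle.dump(c, f)
with open('symbol-c.pkl', 'rb') as f:
    c3 = pickle.load(f)
print(c3.tojson() == c.tojson())
```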
###Markdown
Customized Symbol *Most operators such as `mx.sym.Convolution` and `mx.sym.Reshape` are implemented in C++ for better performance. MXNet also allows users to write new operators using any frontend language such as Python, which often makes developing and debugging much easier. To implement an operator in Python, we just need to define the two computation methods `forward` and `backward`, together with several methods for querying the properties, such as `list_arguments` and `infer_shape`. `NDArray` is the default argument type in both `forward` and `backward`, so we often implement the computation with `NDArray` operations as well. To show the flexibility of MXNet, however, we will demonstrate an implementation of the `softmax` layer using NumPy. Although a NumPy-based operator can only run on the CPU and loses some optimizations that can be applied to NDArray, it enjoys the rich functionality provided by NumPy.We first create a subclass of `mx.operator.CustomOp` and then define `forward` and `backward`.
###Code
import numpy as np

class Softmax(mx.operator.CustomOp):
def forward(self, is_train, req, in_data, out_data, aux):
x = in_data[0].asnumpy()
y = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))
y /= y.sum(axis=1).reshape((x.shape[0], 1))
self.assign(out_data[0], req[0], mx.nd.array(y))
def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        l = in_data[1].asnumpy().ravel().astype(np.int64)  # use a concrete integer dtype (np.int is deprecated in newer NumPy)
y = out_data[0].asnumpy()
y[np.arange(l.shape[0]), l] -= 1.0
self.assign(in_grad[0], req[0], mx.nd.array(y))
###Output
_____no_output_____
###Markdown
Here we use `asnumpy` to convert the `NDArray` inputs into `numpy.ndarray`. Then we use `CustomOp.assign` to assign the results back to `mxnet.NDArray`, based on the value of `req`, which could be "overwrite" or "add to". Next we create a subclass of `mx.operator.CustomOpProp` for querying the properties.
###Code
# register this operator into MXNet by name "softmax"
@mx.operator.register("softmax")
class SoftmaxProp(mx.operator.CustomOpProp):
def __init__(self):
# softmax is a loss layer so we don’t need gradient input
# from layers above.
super(SoftmaxProp, self).__init__(need_top_grad=False)
def list_arguments(self):
return ['data', 'label']
def list_outputs(self):
return ['output']
def infer_shape(self, in_shape):
data_shape = in_shape[0]
label_shape = (in_shape[0][0],)
output_shape = in_shape[0]
return [data_shape, label_shape], [output_shape], []
def create_operator(self, ctx, shapes, dtypes):
return Softmax()
###Output
_____no_output_____
###Markdown
Finally, we can use `mx.sym.Custom` with the registered name to use this operator:
```python
net = mx.symbol.Custom(data=prev_input, op_type='softmax')
``` Advanced Usages * Type CastMXNet uses 32-bit floats by default. Sometimes we want to use a lower-precision data type for a better accuracy-performance trade-off. For example, the NVIDIA Tesla Pascal GPUs (e.g. P100) have improved 16-bit float performance, while GTX Pascal GPUs (e.g. GTX 1080) are fast on 8-bit integers. We can use the `mx.sym.Cast` operator to convert the data type.
###Code
a = mx.sym.Variable('data')
b = mx.sym.Cast(data=a, dtype='float16')
arg, out, _ = b.infer_type(data='float32')
print({'input':arg, 'output':out})
c = mx.sym.Cast(data=a, dtype='uint8')
arg, out, _ = c.infer_type(data='int32')
print({'input':arg, 'output':out})
###Output
{'input': [<type 'numpy.float32'>], 'output': [<type 'numpy.float16'>]}
{'input': [<type 'numpy.int32'>], 'output': [<type 'numpy.uint8'>]}
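###Markdown
A common pattern motivated by the hardware notes above is to cast to a lower precision for the bulk of the network and cast back before the loss. A minimal sketch (the layer size is arbitrary and purely illustrative):
```python
# Mixed-precision sketch: run the heavy layers in float16, keep the output in float32.
data = mx.sym.Variable('data')
h = mx.sym.Cast(data=data, dtype='float16')
h = mx.sym.FullyConnected(data=h, name='fc_fp16', num_hidden=64)
out = mx.sym.Cast(data=h, dtype='float32')
```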
###Markdown
Variable SharingSometimes we want to share the contents between several symbols. This can be done simply by binding these symbols to the same array.
###Code
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = mx.sym.Variable('c')
d = a + b * c
data = mx.nd.ones((2,3))*2
ex = d.bind(ctx=mx.cpu(), args={'a':data, 'b':data, 'c':data})
ex.forward()
ex.outputs[0].asnumpy()
###Output
_____no_output_____ |
DeepOnKHATT.ipynb | ###Markdown
---
**NOTE** \
After installing TensorFlow 1.15, you need to restart the runtime. You will be asked to do so by clicking the button shown upon installation completion.
This notebook is designed to run on Google Colab.
---
###Code
!pip3 install tensorflow==1.15.0
import tensorflow as tf
print(tf.__version__)
!git clone https://github.com/fakhralwajih/DeepOnKHATT.git
# change dir to DeepOnKHATT dir
import os
os.chdir('DeepOnKHATT')
#download pre-trained models
!gdown --id 1-YAltfi_4Klvu_-f72iSkHboM46-iH_t --output lm/trie
!gdown --id 1MqhnAcXMwT_nq_z-01CRhWKLYJYZBa1A --output lm/lm.binary
!gdown --id 1Z_gzzWVjskv_1JqErGuz8ZVfCSNaC3VY --output models/models.zip
!unzip models/models.zip -d models/
#install decoder
!pip3 install ds_ctcdecoder==0.6.1
from features.feature import calculate_feature_vector_sequence
from features.preprocessing import preprocess_handwriting
from rnn import BiRNN as BiRNN_model
from datasets import pad_sequences,sparse_tuple_from ,handwriting_to_input_vector
import argparse
import numpy as np
import tensorflow as tf
from ds_ctcdecoder import ctc_beam_search_decoder, Scorer
from text import Alphabet,get_arabic_letters,decodex
letters_ar=get_arabic_letters()
alphabet = Alphabet('alphabet.txt')
#convert this to a function
mapping={}
with open('arabic_mapping.txt','r', encoding='utf-8') as inf:
for line in inf:
key,val=line.split('\t')
mapping[key]=val.strip()
mapping[' ']=' '
###Output
_____no_output_____
###Markdown
imports for writing canvas
###Code
from IPython.display import HTML, Image
from google.colab.output import eval_js
from base64 import b64decode
from configparser import ConfigParser
config_file='neural_network.ini'
model_path='models/model.ckpt-14'
parser = ConfigParser()
parser.read('neural_network.ini')
config_header='nn'
network_type = parser.get(config_header , 'network_type')
n_context = parser.getint(config_header, 'n_context')
n_input = parser.getint(config_header, 'n_input')
beam_search_decoder = parser.get(config_header, 'beam_search_decoder')
#LM setting
config_header='lm'
lm_alpha=parser.getfloat(config_header , 'lm_alpha')
lm_beta=parser.getfloat(config_header , 'lm_beta')
beam_width=parser.getint(config_header , 'beam_width')
cutoff_prob=parser.getfloat(config_header , 'cutoff_prob')
cutoff_top_n= parser.getint(config_header , 'cutoff_top_n')
conf_path='neural_network.ini'
input_tensor = tf.placeholder(tf.float32, [None, None, n_input + (2 * n_input * n_context)], name='input')
seq_length = tf.placeholder(tf.int32, [None], name='seq_length')
logits, summary_op = BiRNN_model(conf_path,input_tensor,tf.to_int64(seq_length),n_input,n_context)
#if you need to try greedy decoder without LM
# decoded, log_prob = ctc_ops.ctc_greedy_decoder(logits, seq_length, merge_repeated=True)
lm_binary_path='lm/lm.binary'
lm_trie_path='lm/trie'
scorer=None
scorer = Scorer(lm_alpha, lm_beta,lm_binary_path, lm_trie_path,alphabet)
config_file='neural_network.ini'
saver = tf.train.Saver()
# create the session
sess = tf.Session()
saver.restore(sess, 'models/model.ckpt-14')
print('Model restored')
canvas_html = """
<canvas id="mycanvas" width=%d height=%d style="border:1px solid #000000;"></canvas>
<br />
<button>Recognize</button>
<script>
var canvas = document.getElementById('mycanvas')
var ctx = canvas.getContext('2d')
ctx.lineWidth = %d
ctx.canvas.style.touchAction = "none";
var button = document.querySelector('button')
var mouse = {x: 0, y: 0}
var points=[]
canvas.addEventListener('pointermove', function(e) {
mouse.x = e.pageX - this.offsetLeft
mouse.y = e.pageY - this.offsetTop
})
canvas.onpointerdown = ()=>{
ctx.beginPath()
ctx.moveTo(mouse.x, mouse.y)
canvas.addEventListener('pointermove', onPaint)
}
canvas.onpointerup = ()=>{
canvas.removeEventListener('pointermove', onPaint)
points.pop()
points.push([mouse.x,mouse.y,1])
}
var onPaint = ()=>{
ctx.lineTo(mouse.x, mouse.y)
ctx.stroke()
points.push([mouse.x,mouse.y,0])
}
var data = new Promise(resolve=>{
button.onclick = ()=>{
resolve(canvas.toDataURL('image/png'))
}
})
</script>
"""
def draw(filename='drawing.png', w=900, h=200, line_width=1):
display(HTML(canvas_html % (w, h, line_width)))
data = eval_js("data")
points=eval_js("points")
# strokes = Utils.Rearrange(strokes, 20);
points=np.array(points)
# points=rearrange(points)
# print("Points before pre",points.shape)
NORM_ARGS = ["origin","filp_h","smooth", "slope", "resample", "slant", "height"]
FEAT_ARGS = ["x_cor","y_cor","penup","dir", "curv", "vic_aspect", "vic_curl", "vic_line", "vic_slope", "bitmap"]
# print("Normalizing trajectory...")
traj = preprocess_handwriting(points, NORM_ARGS)
# print(traj)
# print("Calculating feature vector sequence...")
feat_seq_mat = calculate_feature_vector_sequence(traj, FEAT_ARGS)
feat_seq_mat=feat_seq_mat.astype('float32')
feat_seq_mat.shape
data = []
train_input=handwriting_to_input_vector(feat_seq_mat,20,9)
train_input = train_input.astype('float32')
data.append(train_input)
# data_len
data = np.asarray(data)
# data_len = np.asarray(train_input)
# Pad input to max_time_step of this batch
source, source_lengths = pad_sequences(data)
my_logits=sess.run(logits, feed_dict={
input_tensor: source,
seq_length: source_lengths}
)
my_logits = np.squeeze(my_logits)
maxT, _ = my_logits.shape # dim0=t, dim1=c
# apply softmax
res = np.zeros(my_logits.shape)
for t in range(maxT):
y = my_logits[t, :]
e = np.exp(y)
s = np.sum(e)
res[t, :] = e / s
decoded = ctc_beam_search_decoder(res, alphabet, beam_width,
scorer=scorer, cutoff_prob=cutoff_prob,
cutoff_top_n=cutoff_top_n)
print("Result : "+decodex(decoded[0][1],mapping))
###Output
_____no_output_____
###Markdown
**Note** \
You can run the next cell as many times as you wish, for example to write several samples. \
All the cells above should be run only once. \
Run the cell and write on the canvas while the cell is running. Then click the "Recognize" button to get the recognition result.
###Code
draw()
###Output
_____no_output_____ |
cs231n/assignment2/test/pt-nn-new-module.ipynb | ###Markdown
PyTorch: nn -- Define new ModulesA PyTorch Module is a neural net layer; it inputs and outputs Variables. Modules can contain weights (as Variables) or other Modules. You can define your own Modules using autograd!
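The example below contains only child modules; for the "weights" case, here is a minimal sketch (not part of the original assignment code) of a Module that registers its own weights directly via `torch.nn.Parameter`:
```python
import torch

class MyLinear(torch.nn.Module):
    # A Module owning its weights directly instead of wrapping child modules
    def __init__(self, D_in, D_out):
        super(MyLinear, self).__init__()
        self.w = torch.nn.Parameter(torch.randn(D_in, D_out) * 0.01)
        self.b = torch.nn.Parameter(torch.zeros(D_out))

    def forward(self, x):
        return x.mm(self.w) + self.b
```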
###Code
import torch
from torch.autograd import Variable
# Define our whole model as a single Module
class TwoLayerNet(torch.nn.Module):
# Initializer sets up two children(Modules can contain modules)
def __init__(self, D_in, H, D_out):
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
# Define forward pass using child modules and autograd ops on Variables
# No need to define backward -- autograd will handle it
def forward(self, x):
h_relu = self.linear1(x).clamp(min=0)
y_pred = self.linear2(h_relu)
return y_pred
N, D_in, H, D_out = 64, 1000, 100, 10
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)
model = TwoLayerNet(D_in, H, D_out)
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for t in range(500):
# Forward pass: feed data to model, and prediction to loss function
y_pred = model(x)
loss = criterion(y_pred, y)
# Backward pass: compute all gradients
optimizer.zero_grad()
loss.backward()
# Update all parameters after computing gradients
optimizer.step()
###Output
_____no_output_____ |
day4/transit/Photometry.ipynb | ###Markdown
Point-Source PhotometryIf there's one thing an observational astronomer can do, it's measuring the flux of something. The easiest of these somethings are stars. We will see why that is.When we talk about **flux** in the context of this notebook, what we mean is the number of photons recorded in an image like this:Fluxes are often converted to magnitudes, a logarithmic measure. This goes back to the practice of ancient Arabs, who classified the brightest stars with number 1, and the faintest one they could see with number 6. The most common convention today is the so-called AB magnitude:$$m_\text{AB} = -2.5 \log_{10} f - 48.60$$where $f$ is the *spectral* flux in units of $erg/s/cm^2/Hz$, i.e. the energy received by the instrument per time, collecting area, and frequency.This all boils down to counting photons. Exercise 1:Open the file `data/point-source.fits` with [astropy](https://docs.astropy.org/en/stable/io/fits/). Plot the image.
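If you get stuck, one possible starting point is sketched below (assuming the file path given in the exercise; adjust the HDU index if the image lives in an extension):
```python
import matplotlib.pyplot as plt
from astropy.io import fits

# Open the FITS file and grab the image data from the primary HDU
with fits.open("data/point-source.fits") as hdul:
    image = hdul[0].data

plt.imshow(image, origin="lower", cmap="gray")
plt.colorbar(label="counts")
plt.show()
```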
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____ |
Udacity - TensorFlow/Cats_vs_Dogs_Without_Data_Augmentation.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Dogs vs Cats Image Classification Without Image Augmentation In this tutorial, we will discuss how to classify images into pictures of cats or pictures of dogs. We'll build an image classifier using the `tf.keras.Sequential` model and load data using `tf.keras.preprocessing.image.ImageDataGenerator`. Specific concepts that will be covered:In the process, we will build practical experience and develop intuition around the following concepts:* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class — How can we efficiently work with data on disk to interface with our model? * _Overfitting_ - what is it and how to identify it? **Before you begin**Before running the code in this notebook, reset the runtime by going to **Runtime -> Reset all runtimes** in the menu above. If you have been working through several notebooks, this will help you avoid reaching Colab's memory limits. Importing packages Let's start by importing required packages:* os — to read files and directory structure* numpy — for some matrix math outside of TensorFlow* matplotlib.pyplot — to plot the graph and display images in our training and validation data
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
For the TensorFlow imports, we directly specify Keras symbols (Sequential, Dense, etc.). This enables us to refer to these names directly in our code without having to qualify their full names (for example, `Dense` instead of `tf.keras.layer.Dense`).
###Code
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
tf.logging.set_verbosity(tf.logging.ERROR)
###Output
_____no_output_____
###Markdown
Data Loading To build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of Dogs vs. Cats dataset from Kaggle (ultimately, this dataset is provided by Microsoft Research).In previous Colabs, we've used TensorFlow Datasets, which is a very easy and convenient way to use datasets. In this Colab however, we will make use of the class `tf.keras.preprocessing.image.ImageDataGenerator` which will read data from disk. We therefore need to directly download *Dogs vs. Cats* from a URL and unzip it to the Colab filesystem.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filterted.zip', origin=_URL, extract=True)
###Output
_____no_output_____
###Markdown
The dataset we have downloaded has the following directory structure.
```
cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]
```
We can list the directories with the following terminal command:
###Code
zip_dir_base = os.path.dirname(zip_dir)
!find $zip_dir_base -type d -print
###Output
_____no_output_____
###Markdown
We'll now assign variables with the proper file path for the training and validation sets.
###Code
base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understanding our data Let's look at how many cats and dogs images we have in our training and validation directory
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
Setting Model Parameters For convenience, we'll set up variables that will be used later while pre-processing our dataset and training our network.
###Code
BATCH_SIZE = 100 # Number of training examples to process before updating our models variables
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels
###Output
_____no_output_____
###Markdown
Data Preparation Images must be formatted into appropriately pre-processed floating point tensors before being fed into the network. The steps involved in preparing these images are:1. Read images from the disk2. Decode contents of these images and convert it into proper grid format as per their RGB content3. Convert them into floating point tensors4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, as neural networks prefer to deal with small input values.Fortunately, all these tasks can be done using the class **tf.keras.preprocessing.image.ImageDataGenerator**.We can set this up in a couple of lines of code.
###Code
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
###Output
_____no_output_____
###Markdown
After defining our generators for training and validation images, **flow_from_directory** method will load images from the disk, apply rescaling, and resize them using single line of code.
###Code
train_data_gen = train_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
directory=validation_dir,
shuffle=False,
target_size=(IMG_SHAPE,IMG_SHAPE), #(150,150)
class_mode='binary')
###Output
_____no_output_____
###Markdown
Visualizing Training images We can visualize our training images by getting a batch of images from the training generator, and then plotting a few of them using `matplotlib`.
###Code
sample_training_images, _ = next(train_data_gen)
###Output
_____no_output_____
###Markdown
The `next` function returns a batch from the dataset. One batch is a tuple of (*many images*, *many labels*). For right now, we're discarding the labels because we just want to look at the images.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
plotImages(sample_training_images[:5]) # Plot images 0-4
###Output
_____no_output_____
###Markdown
Model Creation Define the modelThe model consists of four convolution blocks with a max pool layer in each of them. Then we have a fully connected layer with 512 units, with a `relu` activation function. The model will output class probabilities for two classes — dogs and cats — using `softmax`.
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(2, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Compile the modelAs usual, we will use the `adam` optimizer. Since we are outputting a softmax categorization, we'll use `sparse_categorical_crossentropy` as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model SummaryLet's look at all the layers of our network using **summary** method.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model It's time we train our network. Since our batches are coming from a generator (`ImageDataGenerator`), we'll use `fit_generator` instead of `fit`.
###Code
EPOCHS = 100
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
epochs=EPOCHS,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)
###Output
_____no_output_____
###Markdown
Visualizing results of the training We'll now visualize the results we get after training our network.
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(EPOCHS)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.savefig('./foo.png')
plt.show()
###Output
_____no_output_____ |
Fiocruz.ipynb | ###Markdown
Test: Campinas (2019) Finbra (f)
###Code
import pandas as pd

finbra_rec17 = pd.read_csv('C:\\Users\\Gabriel\\Desktop\\FINBRA\\Contas anuais\\encode\\receitas_2017.csv', sep=';', error_bad_lines=False, skiprows=3, header=0)
finbra_rec18 = pd.read_csv('C:\\Users\\Gabriel\\Desktop\\FINBRA\\Contas anuais\\encode\\receitas_2018.csv', sep=';', error_bad_lines=False, skiprows=3, header=0)
finbra_rec19 = pd.read_csv('C:\\Users\\Gabriel\\Desktop\\FINBRA\\Contas anuais\\encode\\receitas_2019.csv', sep=';', error_bad_lines=False, skiprows=3, header=0)
finbra_rec20 = pd.read_csv('C:\\Users\\Gabriel\\Desktop\\FINBRA\\Contas anuais\\encode\\receitas_2020.csv', sep=';', error_bad_lines=False, skiprows=3, header=0)
###Output
_____no_output_____
###Markdown
Setting up the Finbra base
###Code
bases = [finbra_rec17, finbra_rec18, finbra_rec19, finbra_rec20]
f_rec = []
for df in bases:
df['Valor'] = df['Valor'].str.replace(',','.')
df['Valor'] = df['Valor'].astype('float')
df['Cod.IBGE'] = df['Cod.IBGE'].astype('str').str[:-1].astype('int64')
f_rec.append(df)
# Shorthand aliases for the cleaned yearly dataframes (used in the queries below)
f_rec17 = f_rec[0]
f_rec18 = f_rec[1]
f_rec19 = f_rec[2]
f_rec20 = f_rec[3]
#A) OWN-SOURCE REVENUES (IPTU, ISS, ITBI)
f_rec19[f_rec19['Cod.IBGE'] == 350950][f_rec19['Coluna'] == 'Receitas Brutas Realizadas'][f_rec19['Conta'].str.match('1.1.1.8.01.1.0 Imposto sobre a Propriedade Predial e Territorial Urbana')
| f_rec19['Conta'].str.match('1.1.1.8.01.4.0 Imposto sobre Transmissão ¿Inter Vivos¿ de Bens Imóveis e de Direitos Reais sobre Imóveis')
| f_rec19['Conta'].str.match('1.1.1.8.02.0.0 Impostos sobre a Produção, Circulação de Mercadorias e Serviços')]
#B) TRANSFERS (FPM quota-share + ITR quota-share + Lei Kandir quota-share + IRRF + ICMS quota-share +
#IPVA quota-share + IPI-Export quota-share)
f_rec19[f_rec19['Cod.IBGE'] == 350950][f_rec19['Coluna'] == 'Receitas Brutas Realizadas'][f_rec19['Conta'].str.match('1.7.1.8.01.2.0 Cota-Parte do Fundo de Participação dos Municípios - Cota Mensal')
| f_rec19['Conta'].str.match('1.7.1.8.01.5.0 Cota-Parte do Imposto Sobre a Propriedade Territorial Rural')
| f_rec19['Conta'].str.match('1.1.1.3.03.0.0 - Imposto sobre a Renda - Retido na Fonte')
| f_rec19['Conta'].str.match('1.7.2.8.01.1.0 Cota-Parte do ICMS')
| f_rec19['Conta'].str.match('1.7.2.8.01.2.0 Cota-Parte do IPVA')
| f_rec19['Conta'].str.match('1.7.2.8.01.3.0 Cota-Parte do IPI - Municípios')]
# Lei Kandir (Lei Complementar 87/96)
#C) OTHER CURRENT REVENUES: note the changes in how Finbra reports revenues from fines and interest on
#debt in 2017 versus 2018-2020 (Dos Santos et al., 2020)
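# (Illustrative sketch, not part of the original analysis.) The selections in A) and B)
# could be generalized into a helper that filters by municipality, report column, and a
# list of account-code prefixes, combining all conditions into a single boolean mask:
def select_revenues(df, cod_ibge, prefixes, coluna='Receitas Brutas Realizadas'):
    mask = (df['Cod.IBGE'] == cod_ibge) & (df['Coluna'] == coluna)
    conta_mask = df['Conta'].str.match(prefixes[0], na=False)
    for p in prefixes[1:]:
        conta_mask = conta_mask | df['Conta'].str.match(p, na=False)
    return df[mask & conta_mask]

# Example (own-source revenues, Campinas, Cod.IBGE 350950):
# select_revenues(f_rec19, 350950, ['1.1.1.8.01.1.0', '1.1.1.8.02.0.0'])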
###Output
_____no_output_____ |
docs/tutorials/3_understanding-event-data.ipynb | ###Markdown
Understanding Event Data IntroductionNeutron-scattering data may be recorded in "event mode":For each detected neutron a (pulse) timestamp and a time-of-flight is stored.This notebook will develop an understanding of how to work with this type of data.Our objective is *not* to demonstrate or develop a full reduction workflow.Instead we *develop understanding of data structures and opportunities* that event data provides.This tutorial contains exercises, but solutions are included directly.We encourage you to download this notebook and run through it step by step before looking at the solutions.Event data is a particularly challenging concept so make sure to understand every aspect before moving on.We recommend using a recent version of *JupyterLab*:The solutions are included as hidden cells and shown only on demand.We use data containing event data from the POWGEN powder diffractometer at SNS.Note that the data has been modified for the purpose of this tutorial and is not entirely in its original state.We begin by loading the file and plotting the raw data:
###Code
import scipp as sc
import scippneutron as scn
da = scn.data.tutorial_event_data()
da.plot()
###Output
_____no_output_____
###Markdown
We can see some diffraction lines, but they are oddly blurry.There is also an artifact from the prompt-pulse visible at $4000~\mu s$.This tutorial illustrates how event data gives us the power to understand and deal with the underlying issues.Before we start the investigation we cover some basics of working with event data. Inspecting event dataAs usual, to begin exploring a loaded file, we can inspect the HTML representation of a scipp object shown by Jupyter when typing a variable at the end of a cell (this can also be done using `sc.to_html(da)`, anywhere in a cell):
###Code
da
###Output
_____no_output_____
###Markdown
We can tell that this is binned (event) data from the `dtype` of the data (usually `DataArrayView`) as well as the inline preview, denoting that this is binned data with lists of given lengths.The meaning of these can best be understood using a graphical depiction of `da`, created using `sc.show`:
###Code
sc.show(da)
###Output
_____no_output_____
###Markdown
Each value (yellow cube with dots) is a small table containing event parameters such as pulse time, time-of-flight, and weights (usually 1 for raw data).**Definitions**:1. In scipp we refer to each of these cubes (containing a table of events) as a *bin*. We can think of this as a bin (or bucket) containing a number of records.2. An array of bins (such as the array a yellow cubes with dots, shown above) is referred to as *binned variable*. For example, `da.data` is a binned variable.3. A data array with data given by a binned variable is referred to as *binned data*. Binned data is a precursor to dense or histogrammed data.As we will see below binned data lets us do things that cannot or cannot properly be done with dense data, such as filtering or resampling.Each bin "contains" a small table, essentially a 1-D data array.For efficiency and consistency scipp does not actually store an individual data array for every bin.Instead each bin is a view to a section (slice) of a long table containing all the events from all bins combined.This explains the `dtype=DataArrayView` seen in the HTML representation above.For many practical purposes such a view of a data arrays behaves just like any other data array.The values of the bins can be accessed using the `values` property.For dense data this might give us a `float` value, for binned data we obtain a table.Here we access the 500th event list (counting from zero):
###Code
da.values[500]
###Output
_____no_output_____
###Markdown
ExerciseUse `sc.to_html()`, `sc.show()`, and `sc.table()` to explore and understand `da` as well as individual values of `da` such as `da.values[500]`. From binned data to dense dataWhile we often want to perform many operations on our data in event mode, a basic but important step is transformation of event data into dense data, since typically only the latter is suitable for data analysis software or plotting purposes.There are two options we can use for this transformation, described in the following. Option 1: Summing binsIf the existing binning is sufficient for our purpose we may simply sum over the rows of the tables making up the bin values:
###Code
da_bin_sum = da.bins.sum()
###Output
_____no_output_____
###Markdown
Here we used the special `bins` property of our data array to apply an operation to each of the bins in `da`.Once we have summed the bin values there are no more bins, and the `bins` property is `None`:
###Code
print(da_bin_sum.bins)
###Output
_____no_output_____
###Markdown
We can visualize the result, which dense (histogram) data.Make sure to compare the representations with those obtained above for binned data (`da`):
###Code
sc.to_html(da_bin_sum)
sc.show(da_bin_sum)
###Output
_____no_output_____
###Markdown
We can use `da_bins_sum` to, e.g., plot the total counts per spectrum by summing over the `tof` dimension:
###Code
da_bin_sum.sum('tof').plot(marker='.')
###Output
_____no_output_____
###Markdown
Note:In this case there is just a single time-of-flight bin so we could have used `da_bin_sum['tof', 0]` instead of `da_bin_sum.sum('tof')`. Option 2: HistogrammingFor performance and memory reasons binned data often contains the minimum number of bins that is "necessary" for a given purpose.In this case `da` only contains a single time-of-flight bin (essentially just as information what the lower and upper bounds are in which we can expect events), which is not practical for downstream applications such as data analysis or plotting.Instead of simply summing over all events in a bin we may thus *histogram* data.Note that scipp makes the distinction between binning data (preserving all events individually) and histogramming data (summing all events that fall inside a bin).For simplicity we consider only a single spectrum:
###Code
spec = da['spectrum', 8050]
sc.show(spec)
sc.table(spec.values[0]['event', :5])
###Output
_____no_output_____
###Markdown
Note the chained slicing above:We access the zeroth event list and select the first 5 slices along the `event` dimension (which is the only dimension, since the event list is a 1-D table).We use one of the [scipp functions for creating a new variable](https://scipp.github.io/reference/creation-functions.html) to define the desired bin edge of our histogram.In this case we use `sc.linspace` (another useful option is `sc.geomspace`):
###Code
tof_edges = sc.linspace(dim='tof', start=18.0, stop=17000, num=100, unit='us')
sc.histogram(spec, bins=tof_edges).plot()
###Output
_____no_output_____
###Markdown
ExerciseChange `tof_edges` to control what is plotted:- Change the number of bins, e.g., to a finer resolution.- Change the start and stop of the edges to plot only a smaller time-of-flight region. Solution
###Code
tof_edges = sc.linspace(dim='tof', start=2000.0, stop=15000, num=200, unit='us')
sc.histogram(spec, bins=tof_edges).plot()
###Output
_____no_output_____
###Markdown
Masking event data — Binning by existing parametersWhile quickly converting binned (event) data into dense (histogrammed) data has its applications, we may typically want to work with binned data as long as possible.We have learned in [Working with masks](2_working-with-masks.ipynb) how to mask dense, histogrammed, data.How can we mask a time-of-flight region, e.g., to mask a prompt-pulse, in *event mode*?Let us sum all spectra and define a dummy data array (named `prompt`) to illustrate the objective:
###Code
spec = da['spectrum', 8050].copy()
# Start and stop are fictitious and this prompt pulse is not actually present in the raw data from SNS
prompt_start = 4000.0 * sc.Unit('us')
prompt_stop = 5000.0 * sc.Unit('us')
prompt_tof_edges = sc.sort(
sc.concat([spec.coords['tof'], prompt_start, prompt_stop], 'tof'), 'tof'
)
prompt = sc.DataArray(
data=sc.array(dims=['tof'], values=[0, 11000, 0], unit='counts'),
coords={'tof': prompt_tof_edges},
)
tof_edges = sc.linspace(dim='tof', start=0.0, stop=17000, num=1701, unit='us')
spec_hist = sc.histogram(da.bins.concat('spectrum'), bins=tof_edges)
sc.plot({'spec': spec_hist, 'prompt': prompt})
###Output
_____no_output_____
###Markdown
Masking eventsWe now want to mask out the prompt-pulse, i.e., the peak with exponential falloff inside the region where `prompt` in the figure above is nonzero.We can do so by checking (for every event) whether the time-of-flight is within the region covered by the prompt-pulse.As above, we first consider only a single spectrum.The result can be stored as a new mask:
###Code
spec1 = da['spectrum', 8050].copy() # copy since we do some modifications below
event_tof = spec1.bins.coords['tof']
mask = (prompt_start <= event_tof) & (event_tof < prompt_stop)
spec1.bins.masks['prompt_pulse'] = mask
sc.plot({'original': da['spectrum', 8050], 'prompt_mask': spec1})
###Output
_____no_output_____
###Markdown
Here we have used the `bins` property once more.Take note of the following:- We can access coords "inside" the bins using the `coords` dict provided by the `bins` property. This provides access to "columns" of the event tables held by the bins such as `spec.bins.coords['tof']`.- We can do arithmetic (or other) computation with these "columns", in this case comparing with scalar (non-binned) variables.- New "columns" can be added, in this case we add a new mask column via `spec.bins.masks`.**Definitions**:For a data array `da` we refer to- coordinates such as `da.coords['tof']` as *bin coordinate* and- coordinates such as `da.bins.coords['tof']` as *event coordinate*.The table representation (`sc.table`) and `sc.show` illustrate this process of masking:
###Code
sc.table(spec1.values[0]['event', :5])
sc.show(spec1)
###Output
_____no_output_____
###Markdown
We have added a new column to the event table, defining *for every event* whether it is masked or not.The generally recommended solution is different though, since masking individual events has unnecessary overhead and forces masks to be applied when converting to dense data.A better approach is described in the next section. ExerciseTo get familiar with the `bins` property, try the following:- Compute the neutron velocities for all events in `spec1`. Note: The total flight path length can be computed using `scn.Ltotal(spec1, scatter=True)`.- Add the neutron velocity as a new event coordinate.- Use, e.g., `sc.show` to verify that the coordinate has been added as expected.- Use `del` to remove the event coordinate and verify that the coordinate was indeed removed. Solution
###Code
spec1.bins.coords['v'] = scn.Ltotal(spec1, scatter=True) / spec1.bins.coords['tof']
sc.show(spec1)
sc.to_html(spec1.values[0])
del spec1.bins.coords['v']
sc.to_html(spec1.values[0])
###Output
_____no_output_____
###Markdown
Masking binsRather than masking individual events, let us simply "sort" the events depending on whether they fall below, inside, or above the region of the prompt-pulse.We do not actually need to fully sort the events but rather use a *binning* procedure, using `sc.bin`:
###Code
spec2 = da['spectrum', 8050].copy() # copy since we do some modifications below
spec2 = sc.bin(spec2, edges=[prompt_tof_edges])
prompt_mask = sc.array(dims=spec2.dims, values=[False, True, False])
spec2.masks['prompt_pulse'] = prompt_mask
sc.show(spec2)
###Output
_____no_output_____
###Markdown
Compare this to the graphical representation for `spec1` above and to the figure of the prompt pulse.The start and stop of the prompt pulse are used to cut the total time-of-flight interval into three sections (bins).The center bin is masked:
###Code
spec2.masks['prompt_pulse']
###Output
_____no_output_____
###Markdown
We can also plot the two options of the masked spectrum for comparison.Note how in the second, recommended, option the mask is preserved in the plot, whereas in the first case the histogramming performed by `plot` necessarily has to apply the mask:
###Code
sc.plot({'event-mask': spec1, 'bin-mask (1.1x)': spec2 * sc.scalar(1.1)})
###Output
_____no_output_____
###Markdown
Bonus questionWhy did we not use a fine binning, e.g., with 1000 time-of-flight bins, and mask a range of bins, similar to how it would be done for histogrammed (non-event) data? Solution - This would add a lot of overhead from handling many bins. If our instrument had 1,000,000 pixels we would have 1,000,000,000 bins, which comes with significant memory overhead but, first and foremost, compute overhead. Binning by new parametersAfter having understood how to mask a prompt-pulse, we continue by considering the proton-charge log:
###Code
proton_charge = da.attrs['proton_charge'].value
proton_charge.plot(marker='.')
###Output
_____no_output_____
###Markdown
To mask a time-of-flight range, we have used `sc.bin` to adapt the binning along the *existing* `tof` dimension.`sc.bin` can also be used to introduce binning along a *new* dimension.We define our desired pulse-time edges:
###Code
tmin = proton_charge.coords['time'].min()
tmax = proton_charge.coords['time'].max()
pulse_time = sc.arange(
dim='pulse_time',
start=tmin.value,
stop=tmax.value,
step=(tmax.value - tmin.value) / 10,
)
pulse_time
###Output
_____no_output_____
###Markdown
As above we work with a single spectrum for now and then use `sc.bin`.The result has two dimensions, `tof` and `pulse_time`:
###Code
spec = da['spectrum', 8050]
binned_spec = sc.bin(spec, edges=[pulse_time])
binned_spec
###Output
_____no_output_____
###Markdown
We can plot the binned spectrum, resulting in a 2-D plot:
###Code
binned_spec.plot()
###Output
_____no_output_____
###Markdown
In this case the plot is not very readable since there are so few events in the spectrum that we resolve individual events as tiny dots.Note that this is independent of the bin sizes since `plot()` resamples dynamically and can thus resolve events within bins.We can use the `resolution` option to obtain a more useful plot:
###Code
binned_spec.plot(resolution={'x': 100, 'y': 20})
###Output
_____no_output_____
###Markdown
We may also ignore the `tof` dimension if we are simply interested in the time-evolution of the counts in this spectrum.We can do so by concatenating all bins along the `tof` dimension as follows:
###Code
binned_spec.bins.concat('tof').plot()
###Output
_____no_output_____
###Markdown
ExerciseUsing the same approach as for masking a time-of-flight bin in the previous section, mask the time period starting at about 16:30 where the proton charge is very low.- Define appropriate edges for pulse time (use as few bins as possible, not the 10 pulse-time bins from the binning example above).- Use `sc.bin` to apply the new binning. Make sure to combine this with the time-of-flight binning to mask the prompt pulse.- Set an appropriate bin mask.- Plot the result to confirm that the mask is set and defined as expected.Note:In practice masking bad pulses would usually be done on a pulse-by-pulse basis.This requires a slightly more complex approach and is beyond the scope of this introduction.Hint:Pulse time is stored as `datetime64`.A simple way to create these is using an offset from a known start time such as `tmin`:
###Code
tmin + sc.to_unit(sc.scalar(7, unit='min'), 'ns')
###Output
_____no_output_____
###Markdown
Solution
###Code
pulse_time_edges = tmin + sc.to_unit(
sc.array(dims=['pulse_time'], values=[0, 43, 55, 92], unit='min'), 'ns'
)
# Alternative solution to creating edges:
# t1 = tmin + sc.to_unit(43 * sc.Unit('min'), 'ns')
# t2 = tmin + sc.to_unit(55 * sc.Unit('min'), 'ns')
# pulse_time_edges = sc.array(dims=['pulse_time'], unit='ns', values=[tmin.value, t1.value, t2.value, tmax.value])
pulse_time_mask = sc.array(dims=['pulse_time'], values=[False, True, False])
binned_spec = sc.bin(spec, edges=[prompt_tof_edges, pulse_time_edges])
binned_spec.masks['prompt_pulse'] = prompt_mask
binned_spec.masks['bad_beam'] = pulse_time_mask
binned_spec.plot(resolution={'x': 100, 'y': 20})
sc.show(binned_spec)
###Output
_____no_output_____
###Markdown
Higher dimensions and cutsFor purposes of plotting, fitting, or data analysis in general we will typically need to convert binned data to dense data.We discussed the basic options for this in [From binned data to dense data](From-binned-data-to-dense-data).In particular when dealing with higher-dimensional data these options may not be sufficient.For example we may want to:- Create a 1-D or 2-D cut through a 3-D volume.- Create a 2-D cut but integrate over an interval in the remaining dimension.- Create multi-dimensional cuts that are not aligned with existing binning.All of the above can be achieved using tools we have already used, but not all of them are covered in this tutorial. ExerciseAdapt the above code used for binning and masking the *single spectrum* (`spec`) along `pulse_time` and `tof` to the *full data array* (`da`).Hint: This is trivial. Solution
###Code
binned_da = sc.bin(da, edges=[prompt_tof_edges, pulse_time_edges])
binned_da.masks['prompt_pulse'] = prompt_mask
binned_da.masks['bad_beam'] = pulse_time_mask
binned_da.transpose().plot()
###Output
_____no_output_____
###Markdown
Removing binned dimensionsLet us now convert our data to $d$-spacing (interplanar lattice spacing).This works just like for dense data:
###Code
import scippneutron as scn
da_dspacing = scn.convert(binned_da, 'tof', 'dspacing', scatter=True)
# `dspacing` is now a multi-dimensional coordinate, which makes plotting inconvenient, so we adapt the binning
dspacing = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=3.0, num=4)
da_dspacing = sc.bin(da_dspacing, edges=[dspacing])
da_dspacing
da_dspacing.transpose().plot()
###Output
_____no_output_____
###Markdown
After conversion to $d$-spacing we may want to combine data from all spectra.For dense data we would have used `da_dspacing.sum('spectrum')`.For binned data this is not possible (since the event lists in every spectrum have different lengths).Instead we need to *concatenate* the lists from bins across spectra:
###Code
da_dspacing_total = da_dspacing.bins.concat('spectrum')
da_dspacing_total.plot()
###Output
_____no_output_____
###Markdown
If we zoom in we can now understand the reason for the blurry diffraction lines observed at the very start of this tutorial:The lines are not horizontal, i.e., $d$-spacing appears to depend on the pulse time.Note that the effect depicted here was added artificially for the purpose of this tutorial and is likely much larger than what could be observed in practice from changes in sample environment parameters such as pressure or temperature.Our data has three pulse-time bins (set up earlier for masking an area with low proton charge).We can thus use slicing to compare the diffraction pattern at different times (used as a stand-in for a changing sample-environment parameter):
###Code
tmp = da_dspacing_total
lines = {}
lines['total'] = tmp.bins.concat('pulse_time')
for i in 0, 2:
lines[f'interval{i}'] = tmp['pulse_time', i]
sc.plot(lines, resolution=1000, norm='log')
###Output
_____no_output_____
###Markdown
How can we extract thinner `pulse_time` slices?We can use `sc.bin` with finer pulse-time binning, such that individual slices are thinner.Instead of manually setting up a `dict` of slices we can use `sc.collapse`:
###Code
pulse_time = sc.arange(
dim='pulse_time',
start=tmin.value,
stop=tmax.value,
step=(tmax.value - tmin.value) / 10,
)
split = sc.bin(da_dspacing_total, edges=[pulse_time])
sc.plot(sc.collapse(split, keep='dspacing'), resolution=5000)
###Output
_____no_output_____
###Markdown
Making a 1-D cutInstead of summing over all spectra we may want to group spectra based on a $2\theta$ interval they fall into.We first compute $2\theta$ for every spectrum and store it as a new coordinate:
###Code
da_dspacing.coords['two_theta'] = scn.two_theta(da_dspacing)
###Output
_____no_output_____
###Markdown
We can then define the boundaries we want to use for our "cut".Here we use just a single bin in each of the three dimensions:
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.4, stop=1.0, num=2)
# Do not use many bins, fewer is better for performance
dspacing_cut = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=2)
pulse_time_cut = tmin + sc.to_unit(
sc.array(dims=['pulse_time'], unit='s', values=[0, 10 * 60]), 'ns'
)
cut = sc.bin(
da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum']
)
cut
###Output
_____no_output_____
###Markdown
We can then use slicing (to remove unwanted dimensions) and `sc.histogram` to get the desired binning:
###Code
cut = cut['pulse_time', 0] # squeeze pulse time (dim of length 1)
cut = cut['two_theta', 0] # squeeze two_theta (dim of length 1)
dspacing_edges = sc.linspace(
dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=5000
)
cut = sc.histogram(cut, bins=dspacing_edges)
cut.plot()
###Output
_____no_output_____
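###Markdown
For instance (a sketch reusing the variables defined above), widening the `two_theta` limits produces a "thicker" cut, and the histogram edges control the resolution of the resulting curve:
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.3, stop=1.2, num=2)
cut = sc.bin(
    da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum']
)
cut = cut['pulse_time', 0]['two_theta', 0]  # squeeze the length-1 dimensions
dspacing_edges = sc.linspace(
    dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=1000
)
sc.histogram(cut, bins=dspacing_edges).plot()
###Output
_____no_output_____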
###Markdown
Exercise- Adjust the start and stop values in the cut edges above to adjust the "thickness" of the cut.- Adjust the edges used for histogramming. Making a 2-D cut Exercise- Adapt the code of the 1-D cut to create 100 `two_theta` bins.- Make a 2-D plot (with `dspacing` and `two_theta` on the axes). Solution
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.4, stop=1.0, num=101)
dspacing_cut = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=2)
pulse_time_cut = tmin + sc.to_unit(
sc.array(dims=['pulse_time'], unit='s', values=[0, 10 * 60]), 'ns'
)
cut = sc.bin(
da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum']
)
cut = cut['pulse_time', 0] # squeeze pulse time (dim of length 1)
dspacing_edges = sc.linspace(
dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=5000
)
cut = sc.histogram(cut, bins=dspacing_edges)
cut.plot()
###Output
_____no_output_____
###Markdown
Understanding Event Data IntroductionNeutron-scattering data may be recorded in "event mode":For each detected neutron a (pulse) timestamp and a time-of-flight is stored.This notebook will develop an understanding of how to work with this type of data.Our objective is *not* to demonstrate or develop a full reduction workflow.Instead we *develop understanding of data structures and opportunities* that event data provides.This tutorial contains exercises, but solutions are included directly.We encourage you to download this notebook and run through it step by step before looking at the solutions.Event data is a particularly challenging concept so make sure to understand every aspect before moving on.We recommend using a recent version of *JupyterLab*:The solutions are included as hidden cells and shown only on demand.We use data containing event data from the POWGEN powder diffractometer at SNS.Note that the data has been modified for the purpose of this tutorial and is not entirely in its original state.We begin by loading the file and plotting the raw data:
###Code
import scipp as sc
import scippneutron as scn
da = scn.data.tutorial_event_data()
da.plot()
###Output
_____no_output_____
###Markdown
We can see some diffraction lines, but they are oddly blurry.There is also an artifact from the prompt-pulse visible at $4000~\mu s$.This tutorial illustrates how event data gives us the power to understand and deal with the underlying issues.Before we start the investigation we cover some basics of working with event data. Inspecting event dataAs usual, to begin exploring a loaded file, we can inspect the HTML representation of a scipp object shown by Jupyter when typing a variable at the end of a cell (this can also be done using `sc.to_html(da)`, anywhere in a cell):
###Code
da
###Output
_____no_output_____
###Markdown
We can tell that this is binned (event) data from the `dtype` of the data (usually `DataArrayView`) as well as the inline preview, denoting that this is binned data with lists of given lengths.The meaning of these can best be understood using a graphical depiction of `da`, created using `sc.show`:
###Code
sc.show(da)
###Output
_____no_output_____
###Markdown
Each value (yellow cube with dots) is a small table containing event parameters such as pulse time, time-of-flight, and weights (usually 1 for raw data).**Definitions**:1. In scipp we refer to each of these cubes (containing a table of events) as a *bin*. We can think of this as a bin (or bucket) containing a number of records.2. An array of bins (such as the array of yellow cubes with dots, shown above) is referred to as a *binned variable*. For example, `da.data` is a binned variable.3. A data array with data given by a binned variable is referred to as *binned data*. Binned data is a precursor to dense or histogrammed data.As we will see below binned data lets us do things that cannot, or cannot properly, be done with dense data, such as filtering or resampling.Each bin "contains" a small table, essentially a 1-D data array.For efficiency and consistency scipp does not actually store an individual data array for every bin.Instead each bin is a view of a section (slice) of a long table containing all the events from all bins combined.This explains the `dtype=DataArrayView` seen in the HTML representation above.For many practical purposes such a view of a data array behaves just like any other data array.The values of the bins can be accessed using the `values` property.For dense data this might give us a `float` value, for binned data we obtain a table.Here we access the 500th event list (counting from zero):
###Code
da.values[500]
###Output
_____no_output_____
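###Markdown
The value we just accessed is itself a small data array, so it can be inspected with the same tools as any other scipp object. A minimal sketch (reusing `da` from above):
###Code
one_bin = da.values[500]        # the event table held by a single bin
sc.to_html(one_bin)             # HTML summary of the table
sc.show(one_bin)                # graphical representation
sc.table(one_bin['event', :5])  # first five events as a table
###Output
_____no_output_____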
###Markdown
ExerciseUse `sc.to_html()`, `sc.show()`, and `sc.table()` to explore and understand `da` as well as individual values of `da` such as `da.values[500]`. From binned data to dense dataWhile we often want to perform many operations on our data in event mode, a basic but important step is transformation of event data into dense data, since typically only the latter is suitable for data analysis software or plotting purposes.There are two options we can use for this transformation, described in the following. Option 1: Summing binsIf the existing binning is sufficient for our purpose we may simply sum over the rows of the tables making up the bin values:
###Code
da_bin_sum = da.bins.sum()
###Output
_____no_output_____
###Markdown
Here we used the special `bins` property of our data array to apply an operation to each of the bins in `da`.Once we have summed the bin values there are no more bins, and the `bins` property is `None`:
###Code
print(da_bin_sum.bins)
###Output
_____no_output_____
###Markdown
We can visualize the result, which is dense (histogram) data.Make sure to compare the representations with those obtained above for binned data (`da`):
###Code
sc.to_html(da_bin_sum)
sc.show(da_bin_sum)
###Output
_____no_output_____
###Markdown
We can use `da_bin_sum` to, e.g., plot the total counts per spectrum by summing over the `tof` dimension:
###Code
da_bin_sum.sum('tof').plot(marker='.')
###Output
_____no_output_____
###Markdown
Note:In this case there is just a single time-of-flight bin so we could have used `da_bin_sum['tof', 0]` instead of `da_bin_sum.sum('tof')`. Option 2: HistogrammingFor performance and memory reasons binned data often contains the minimum number of bins that is "necessary" for a given purpose.In this case `da` only contains a single time-of-flight bin (essentially just as information about the lower and upper bounds within which we can expect events), which is not practical for downstream applications such as data analysis or plotting.Instead of simply summing over all events in a bin we may thus *histogram* data.Note that scipp makes the distinction between binning data (preserving all events individually) and histogramming data (summing all events that fall inside a bin).For simplicity we consider only a single spectrum:
###Code
spec = da['spectrum', 8050]
sc.show(spec)
sc.table(spec.values[0]['event',:5])
###Output
_____no_output_____
###Markdown
Note the chained slicing above:We access the zeroth event list and select the first 5 slices along the `event` dimension (which is the only dimension, since the event list is a 1-D table).We use one of the [scipp functions for creating a new variable](https://scipp.github.io/reference/api.htmlcreation-functions) to define the desired bin edge of our histogram.In this case we use `sc.linspace` (another useful option is `sc.geomspace`):
###Code
tof_edges = sc.linspace(dim='tof', start=18.0, stop=17000, num=100, unit='us')
sc.histogram(spec, bins=tof_edges).plot()
###Output
_____no_output_____
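###Markdown
The text above mentions `sc.geomspace` as another useful option; a sketch with logarithmically spaced edges over the same time-of-flight range (assuming the same `spec`) could look like this:
###Code
tof_edges_log = sc.geomspace(dim='tof', start=18.0, stop=17000, num=100, unit='us')
sc.histogram(spec, bins=tof_edges_log).plot()
###Output
_____no_output_____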
###Markdown
ExerciseChange `tof_edges` to control what is plotted:- Change the number of bins, e.g., to a finer resolution.- Change the start and stop of the edges to plot only a smaller time-of-flight region. Solution
###Code
tof_edges = sc.linspace(dim='tof', start=2000.0, stop=15000, num=200, unit='us')
sc.histogram(spec, bins=tof_edges).plot()
###Output
_____no_output_____
###Markdown
Masking event data — Binning by existing parametersWhile quickly converting binned (event) data into dense (histogrammed) data has its applications, we may typically want to work with binned data as long as possible.We have learned in [Working with masks](2_working-with-masks.ipynb) how to mask dense, histogrammed, data.How can we mask a time-of-flight region, e.g., to mask a prompt-pulse, in *event mode*?Let us sum all spectra and define a dummy data array (named `prompt`) to illustrate the objective:
###Code
spec = da['spectrum', 8050].copy()
# Start and stop are fictitious and this prompt pulse is not actually present in the raw data from SNS
prompt_start = 4000.0 * sc.Unit('us')
prompt_stop = 5000.0 * sc.Unit('us')
prompt_edges = sc.concatenate(prompt_start, prompt_stop, 'tof')
prompt_tof_edges = sc.sort(sc.concatenate(spec.coords['tof'], prompt_edges, 'tof'), 'tof')
prompt = sc.DataArray(data=sc.Variable(dims=['tof'], values=[0,11000,0], unit='counts'),
coords={'tof':prompt_tof_edges})
tof_edges = sc.linspace(dim='tof', start=0.0, stop=17000, num=1701, unit='us')
spec_hist = sc.histogram(da.bins.concatenate('spectrum'), bins=tof_edges)
sc.plot({'spec':spec_hist, 'prompt':prompt})
###Output
_____no_output_____
###Markdown
Masking eventsWe now want to mask out the prompt-pulse, i.e., the peak with exponential falloff inside the region where `prompt` in the figure above is nonzero.We can do so by checking (for every event) whether the time-of-flight is within the region covered by the prompt-pulse.As above, we first consider only a single spectrum.The result can be stored as a new mask:
###Code
spec1 = da['spectrum', 8050].copy() # copy since we do some modifications below
event_tof = spec.bins.coords['tof']
spec1.bins.masks['prompt_pulse'] = (prompt_start <= event_tof) & (event_tof < prompt_stop)
sc.plot({'original': da['spectrum', 8050], 'prompt_mask': spec1})
###Output
_____no_output_____
###Markdown
Here we have used the `bins` property once more.Take note of the following:- We can access coords "inside" the bins using the `coords` dict provided by the `bins` property. This provides access to "columns" of the event tables held by the bins such as `spec.bins.coords['tof']`.- We can do arithmetic (or other) computation with these "columns", in this case comparing with scalar (non-binned) variables.- New "columns" can be added, in this case we add a new mask column via `spec.bins.masks`.**Definitions**:For a data array `da` we refer to- coordinates such as `da.coords['tof']` as *bin coordinate* and- coordinates such as `da.bins.coords['tof']` as *event coordinate*.The table representation (`sc.table`) and `sc.show` illustrate this process of masking:
###Code
sc.table(spec1.values[0]['event',:5])
sc.show(spec1)
###Output
_____no_output_____
###Markdown
We have added a new column to the event table, defining *for every event* whether it is masked or not.The generally recommended solution is different though, since masking individual events has unnecessary overhead and forces masks to be applied when converting to dense data.A better approach is described in the next section. ExerciseTo get familiar with the `bins` property, try the following:- Compute the neutron velocities for all events in `spec1`. Note: The total flight path length can be computed using `scn.Ltotal(spec1, scatter=True)`.- Add the neutron velocity as a new event coordinate.- Use, e.g., `sc.show` to verify that the coordinate has been added as expected.- Use `del` to remove the event coordinate and verify that the coordinate was indeed removed. Solution
###Code
spec1.bins.coords['v'] = scn.Ltotal(spec1, scatter=True) / spec1.bins.coords['tof']
sc.show(spec1)
sc.to_html(spec1.values[0])
del spec1.bins.coords['v']
sc.to_html(spec1.values[0])
###Output
_____no_output_____
###Markdown
Masking binsRather than masking individual events, let us simply "sort" the events depending on whether they fall below, inside, or above the region of the prompt-pulse.We do not actually need to fully sort the events but rather use a *binning* procedure, using `sc.bin`:
###Code
spec2 = da['spectrum', 8050].copy() # copy since we do some modifications below
spec2 = sc.bin(spec2, edges=[prompt_tof_edges])
prompt_mask = sc.array(dims=spec2.dims, values=[False, True, False])
spec2.masks['prompt_pulse'] = prompt_mask
sc.show(spec2)
###Output
_____no_output_____
###Markdown
Compare this to the graphical representation for `spec1` above and to the figure of the prompt pulse.The start and stop of the prompt pulse are used to cut the total time-of-flight interval into three sections (bins).The center bin is masked:
###Code
spec2.masks['prompt_pulse']
###Output
_____no_output_____
###Markdown
We can also plot the two options of the masked spectrum for comparison.Note how in the second, recommended, option the mask is preserved in the plot, whereas in the first case the histogramming performed by `plot` necessarily has to apply the mask:
###Code
sc.plot({'event-mask':spec1, 'bin-mask (1.1x)':spec2*sc.scalar(1.1)})
###Output
_____no_output_____
###Markdown
Bonus questionWhy did we not use a fine binning, e.g., with 1000 time-of-flight bins and mask a range of bins, similar to how it would be done for histogrammed (non-event) data? Solution - This would add a lot of overhead from handling many bins. If our instrument had 1,000,000 pixels we would have 1,000,000,000 bins (1000 time-of-flight bins for each pixel), which comes with significant memory overhead but first and foremost compute overhead. Binning by new parametersAfter having understood how to mask a prompt-pulse we continue by considering the proton-charge log:
###Code
proton_charge = da.attrs['proton_charge'].value
proton_charge.plot(marker='.')
###Output
_____no_output_____
###Markdown
To mask a time-of-flight range, we have used `sc.bin` to adapt the binning along the *existing* `tof` dimension.`sc.bin` can also be used to introduce binning along a *new* dimension.We define our desired pulse-time edges:
###Code
tmin = proton_charge.coords['time'].min()
tmax = proton_charge.coords['time'].max()
pulse_time = sc.arange(dim='pulse_time', start=tmin.value, stop=tmax.value, step=(tmax.value - tmin.value) / 10)
pulse_time
###Output
_____no_output_____
###Markdown
As above we work with a single spectrum for now and then use `sc.bin`.The result has two dimensions, `tof` and `pulse_time`:
###Code
spec = da['spectrum', 8050]
binned_spec = sc.bin(spec, edges=[pulse_time])
binned_spec
###Output
_____no_output_____
###Markdown
We can plot the binned spectrum, resulting in a 2-D plot:
###Code
binned_spec.plot()
###Output
_____no_output_____
###Markdown
In this case the plot is not very readable since there are so few events in the spectrum that we resolve individual events as tiny dots.Note that this is independent of the bin sizes since `plot()` resamples dynamically and can thus resolve events within bins.We can use the `resolution` option to obtain a more useful plot:
###Code
binned_spec.plot(resolution={'x':100, 'y':20})
###Output
_____no_output_____
###Markdown
We may also ignore the `tof` dimension if we are simply interested in the time-evolution of the counts in this spectrum.We can do so by concatenating all bins along the `tof` dimension as follows:
###Code
binned_spec.bins.concatenate('tof').plot()
###Output
_____no_output_____
###Markdown
ExerciseUsing the same approach as for masking a time-of-flight bin in the previous section, mask the time period starting at about 16:30 where the proton charge is very low.- Define appropriate edges for pulse time (use as few bins as possible, not the 10 pulse-time bins from the binning example above).- Use `sc.bin` to apply the new binning. Make sure to combine this with the time-of-flight binning to mask the prompt pulse.- Set an appropriate bin mask.- Plot the result to confirm that the mask is set and defined as expected.Note:In practice masking bad pulses would usually be done on a pulse-by-pulse basis.This requires a slightly more complex approach and is beyond the scope of this introduction.Hint:Pulse time is stored as `datetime64`.A simple way to create these is using an offset from a known start time such as `tmin`:
###Code
tmin + sc.to_unit(sc.scalar(7, unit='min'), 'ns')
###Output
_____no_output_____
###Markdown
Solution
###Code
pulse_time_edges = tmin + sc.to_unit(
sc.array(dims=['pulse_time'],
values=[0, 43, 55, 92], unit='min'), 'ns')
# Alternative solution to creating edges:
# t1 = tmin + sc.to_unit(43 * sc.Unit('min'), 'ns')
# t2 = tmin + sc.to_unit(55 * sc.Unit('min'), 'ns')
# pulse_time_edges = sc.array(dims=['pulse_time'], unit='ns', values=[tmin.value, t1.value, t2.value, tmax.value])
pulse_time_mask = sc.array(dims=['pulse_time'], values=[False, True, False])
binned_spec = sc.bin(spec, edges=[prompt_tof_edges, pulse_time_edges])
binned_spec.masks['prompt_pulse'] = prompt_mask
binned_spec.masks['bad_beam'] = pulse_time_mask
binned_spec.plot(resolution={'x':100, 'y':20})
sc.show(binned_spec)
###Output
_____no_output_____
###Markdown
Higher dimensions and cutsFor purposes of plotting, fitting, or data analysis in general we will typically need to convert binned data to dense data.We discussed the basic options for this in [From binned data to dense data](From-binned-data-to-dense-data).In particular when dealing with higher-dimensional data these options may not be sufficient.For example we may want to:- Create a 1-D or 2-D cut through a 3-D volume.- Create a 2-D cut but integrate over an interval in the remaining dimension.- Create multi-dimensional cuts that are not aligned with existing binning.All of the above can be achieved using tools we have already used, but not all of them are covered in this tutorial. ExerciseAdapt the above code used for binning and masking the *single spectrum* (`spec`) along `pulse_time` and `tof` to the *full data array* (`da`).Hint: This is trivial. Solution
###Code
binned_da = sc.bin(da, edges=[prompt_tof_edges, pulse_time_edges])
binned_da.masks['prompt_pulse'] = prompt_mask
binned_da.masks['bad_beam'] = pulse_time_mask
binned_da.transpose().plot()
###Output
_____no_output_____
###Markdown
Removing binned dimensionsLet us now convert our data to $d$-spacing (interplanar lattice spacing).This works just like for dense data:
###Code
import scippneutron as scn
da_dspacing = scn.convert(binned_da, 'tof', 'dspacing', scatter=True)
# `dspacing` is now a multi-dimensional coordinate, which makes plotting inconvenient, so we adapt the binning
dspacing = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=3.0, num=4)
da_dspacing = sc.bin(da_dspacing, edges=[dspacing])
da_dspacing
da_dspacing.transpose().plot()
###Output
_____no_output_____
###Markdown
After conversion to $d$-spacing we may want to combine data from all spectra.For dense data we would have used `da_dspacing.sum('spectrum')`.For binned data this is not possible (since the event lists in every spectrum have different lengths).Instead we need to *concatenate* the lists from bins across spectra:
###Code
da_dspacing_total = da_dspacing.bins.concatenate('spectrum')
da_dspacing_total.plot()
###Output
_____no_output_____
###Markdown
If we zoom in we can now understand the reason for the blurry diffraction lines observed at the very start of this tutorial:The lines are not horizontal, i.e., $d$-spacing appears to depend on the pulse time.Note that the effect depicted here was added artificially for the purpose of this tutorial and is likely much larger than what could be observed in practice from changes in sample environment parameters such as pressure or temperature.Our data has three pulse-time bins (set up earlier for masking an area with low proton charge).We can thus use slicing to compare the diffraction pattern at different times (used as a stand-in for a changing sample-environment parameter):
###Code
tmp = da_dspacing_total
lines = {}
lines['total'] = tmp.bins.concatenate('pulse_time')
for i in 0,2:
lines[f'interval{i}'] = tmp['pulse_time', i]
sc.plot(lines, resolution=1000, norm='log')
###Output
_____no_output_____
###Markdown
How can we extract thinner `pulse_time` slices?We can use `sc.bin` with finer pulse-time binning, such that individual slices are thinner.Instead of manually setting up a `dict` of slices we can use `sc.collapse`:
###Code
pulse_time = sc.arange(dim='pulse_time', start=tmin.value, stop=tmax.value, step=(tmax.value - tmin.value) / 10)
split = sc.bin(da_dspacing_total, edges=[pulse_time])
sc.plot(sc.collapse(split, keep='dspacing'), resolution=5000)
###Output
_____no_output_____
###Markdown
Making a 1-D cutInstead of summing over all spectra we may want to group spectra based on a $2\theta$ interval they fall into.We first compute $2\theta$ for every spectrum and store it as a new coordinate:
###Code
da_dspacing.coords['two_theta'] = scn.two_theta(da_dspacing)
###Output
_____no_output_____
###Markdown
We can then define the boundaries we want to use for our "cut".Here we use just a single bin in each of the three dimensions:
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.4, stop=1.0, num=2)
# Do not use many bins, fewer is better for performance
dspacing_cut = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=2)
pulse_time_cut = tmin + sc.to_unit(sc.array(dims=['pulse_time'], unit='s', values=[0,10*60]), 'ns')
cut = sc.bin(da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum'])
cut
###Output
_____no_output_____
###Markdown
We can then use slicing (to remove unwanted dimensions) and `sc.histogram` to get the desired binning:
###Code
cut = cut['pulse_time', 0] # squeeze pulse time (dim of length 1)
cut = cut['two_theta', 0] # squeeze two_theta (dim of length 1)
dspacing_edges = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=5000)
cut = sc.histogram(cut, bins=dspacing_edges)
cut.plot()
###Output
_____no_output_____
###Markdown
Exercise- Adjust the start and stop values in the cut edges above to adjust the "thickness" of the cut.- Adjust the edges used for histogramming. Making a 2-D cut Exercise- Adapt the code of the 1-D cut to create 100 `two_theta` bins.- Make a 2-D plot (with `dspacing` and `two_theta` on the axes). Solution
###Code
two_theta_cut = sc.linspace(dim='two_theta', unit='rad', start=0.4, stop=1.0, num=101)
dspacing_cut = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=2)
pulse_time_cut = tmin + sc.to_unit(sc.array(dims=['pulse_time'], unit='s', values=[0,10*60]), 'ns')
cut = sc.bin(da_dspacing, edges=[two_theta_cut, dspacing_cut, pulse_time_cut], erase=['spectrum'])
cut = cut['pulse_time', 0] # squeeze pulse time (dim of length 1)
dspacing_edges = sc.linspace(dim='dspacing', unit='Angstrom', start=0.0, stop=2.0, num=5000)
cut = sc.histogram(cut, bins=dspacing_edges)
cut.plot()
###Output
_____no_output_____ |
frank/nb2.ipynb | ###Markdown
Fourier component
###Code
import numpy as np
from numpy import fft
from matplotlib import pyplot as plt
def fourierExtrapolation(x, n_predict, n_harm):
n = x.size  # number of data points
t = np.arange(0, n)
x_freqdom = fft.fft(x)
f = fft.fftfreq(n) # frequencies
indexes = list(range(n))
# sort indexes by frequency
indexes.sort(key = lambda i: np.absolute(f[i]))
t = np.arange(0, n + n_predict)
restored_sig = np.zeros(t.size)
for i in indexes[:1 + n_harm * 2]:
ampli = np.absolute(x_freqdom[i]) / n # amplitude
phase = np.angle(x_freqdom[i]) # phase
restored_sig += ampli * np.cos(2 * np.pi * f[i] * t + phase)
return restored_sig
###Output
_____no_output_____
###Markdown
The function above performs an FFT on the data and uses the `n_harm` lowest-frequency components to reconstruct the data (ignoring the high-frequency part).
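A quick way to see what it does is to apply it to a toy signal first (a minimal sketch; the sine period, noise level, and number of harmonics below are arbitrary choices):
###Code
# toy example: noisy, slowly oscillating signal, extrapolated 50 samples ahead with 3 harmonics
np.random.seed(0)
toy = np.sin(2 * np.pi * np.arange(200) / 100.0) + 0.2 * np.random.randn(200)
toy_recon = fourierExtrapolation(toy, 50, 3)
plt.plot(toy, 'b.', label='toy data', alpha=0.5)
plt.plot(toy_recon, 'r', label='reconstruction + extrapolation', lw=1)
plt.legend(frameon=0)
plt.show()
###Output
_____no_output_____
###Markdown
Now turn to the monthly sunspot-number data: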
###Code
silso_monthly = np.loadtxt('SILSO_monthly_inJD.txt')
n_harm = 60      # number of harmonics kept in the reconstruction (also used below for the cut-off marker)
n_predict = 300  # number of extrapolated samples (months)
x = silso_monthly[:,2]
t = silso_monthly[:,1]
n = x.size  # number of data points
x_freqdom = fft.rfft(x)
f = fft.rfftfreq(n, d=(t[-1]-t[0])/len(t))
fig,axs = plt.subplots(1,2, figsize=(10,4), dpi=120)
plt.subplot(121)
plt.plot(f, np.abs(x_freqdom), lw=0.6, alpha=1)
plt.axvline(f[n_harm], ls='--', c='r', lw=1.5, alpha=0.7, label='Cut off=%1.2f Yr'%(1/f[n_harm]))
plt.xlim([0,2])
plt.ylim(bottom=3e2)
plt.yscale('log')
plt.legend(frameon=0)
plt.ylabel(r"Amplitude")
plt.xlabel(r"Frequency [1/Yr]")
plt.subplot(122)
plt.plot(f, np.abs(x_freqdom), lw=0.6, alpha=1)
plt.axvline(1./11, ls='--', c='C1', lw=1.5, alpha=0.7, label='11 Years')
plt.axvline(1./100, ls='--', c='C2', lw=1.5, alpha=0.7, label='100 Years')
plt.xlim([-.01,0.3])
plt.ylim(bottom=0)
# plt.yscale('log')
plt.legend(frameon=0)
plt.ylabel(r"Amplitude")
plt.xlabel(r"Frequency [1/Yr]")
plt.tight_layout()
# plt.savefig('FFT_spec.png')
plt.show()
extrapolation = fourierExtrapolation(x, n_predict, n_harm)
step = (t[-1]-t[0])/len(t)
extra_frac_year = np.concatenate((t, np.linspace(t[-1], t[-1]+n_predict*step+0.5*step, n_predict)) )
fig,ax = plt.subplots(figsize=(7,5), dpi=120)
plt.plot(t, x, 'b', label = 'SILSO Monthly SSN', lw=0.5, alpha=0.4)
plt.plot(extra_frac_year, extrapolation, 'r', label = 'FFT reconstruction and extrapolation', lw=0.8)
plt.legend(frameon=0)
plt.xlim([1750,2050])
plt.ylabel(r"SSN")
plt.xlabel(r"Year")
plt.savefig('FFT_extr.png')
plt.show()
fig,ax = plt.subplots(figsize=(4,5), dpi=120)
plt.plot(t, x, 'b', lw=0.5, alpha=0.8)
plt.plot(extra_frac_year, extrapolation, 'r', lw=1)
plt.xlim([1975,2035])
plt.axvline(2022, ls='--', c='C1', lw=1.5, alpha=0.7, label='2022, peak SSN = 140')
plt.legend(frameon=0)
plt.savefig('FFT_extr2.png')
plt.show()
###Output
_____no_output_____
###Markdown
Fourier component
###Code
import numpy as np
from numpy import fft
from matplotlib import pyplot as plt
def fourierExtrapolation(x, n_predict, n_harm):
n = x.size  # number of data points
t = np.arange(0, n)
p = np.polyfit(t, x, 1) # find linear trend in x
x_notrend = x - p[0] * t # detrended x
x_freqdom = fft.fft(x_notrend) # detrended x in frequency domain
f = fft.fftfreq(n) # frequencies
indexes = list(range(n))
# sort indexes by frequency
indexes.sort(key = lambda i: np.absolute(f[i]))
t = np.arange(0, n + n_predict)
restored_sig = np.zeros(t.size)
for i in indexes[:1 + n_harm * 2]:
ampli = np.absolute(x_freqdom[i]) / n # amplitude
phase = np.angle(x_freqdom[i]) # phase
restored_sig += ampli * np.cos(2 * np.pi * f[i] * t + phase)
return restored_sig + p[0] * t
###Output
_____no_output_____
###Markdown
The function above performs an FFT on the data and uses the `n_harm` lowest-frequency components to reconstruct the data (ignoring the high-frequency part).
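Unlike the first version, this one also removes a linear trend (via `np.polyfit`) before the FFT and adds it back after the reconstruction; the short sketch below, on an arbitrary toy signal, illustrates just that detrending step:
###Code
# illustrate the detrending step used inside this version of fourierExtrapolation
tt = np.arange(200)
demo = 0.05 * tt + np.sin(2 * np.pi * tt / 25.0)  # linear trend + oscillation
p_demo = np.polyfit(tt, demo, 1)                  # fit the linear trend
demo_notrend = demo - p_demo[0] * tt              # what is actually passed to fft.fft
plt.plot(demo, label='with trend')
plt.plot(demo_notrend, label='detrended')
plt.legend(frameon=0)
plt.show()
###Output
_____no_output_____
###Markdown
Now apply the function to the monthly sunspot numbers as before: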
###Code
silso_monthly = np.loadtxt('SILSO_monthly_inJD.txt')
n = 0
x = silso_monthly[:,2]
t = silso_monthly[:,1]
n_predict = 500
n_harm = 100
extrapolation = fourierExtrapolation(x, n_predict, n_harm)
step = (t[-1]-t[0])/len(t)
extra_frac_year = np.arange(t[0], t[-1]+n_predict*step, step)
fig,ax = plt.subplots(figsize=(10,5), dpi=120)
plt.plot(t, x, 'b', label = 'x', lw=0.5, alpha=0.3)
plt.plot(extra_frac_year, extrapolation, 'r', label = 'extrapolation', lw=1)
plt.legend(frameon=0)
plt.show()
fig,ax = plt.subplots(figsize=(10,5), dpi=120)
plt.plot(t, x, 'b', label = 'x', lw=0.5, alpha=0.6)
plt.plot(silso_monthly[:,1], silso_monthly[:,2], 'b', label = 'x', lw=0.5, alpha=0.6)
plt.plot(extra_frac_year, extrapolation, 'r', label = 'extrapolation', lw=1)
plt.xlim([1950,2050])
plt.legend(frameon=0)
plt.show()
###Output
_____no_output_____ |
Machine_Learning_WashingTon/Clustering & Retrieval/week6 hierarchical clustering/6_hierarchical_clustering_skl.ipynb | ###Markdown
Hierarchical Clustering **Hierarchical clustering** refers to a class of clustering methods that seek to build a **hierarchy** of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import packages The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).
###Code
import pandas as pd # see below for install instruction
import matplotlib.pyplot as plt
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans # we'll be using scikit-learn's KMeans for this assignment
from sklearn.metrics import pairwise_distances
from sklearn.preprocessing import normalize
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load the Wikipedia dataset
###Code
wiki = pd.read_csv('people_wiki.csv')
###Output
_____no_output_____
###Markdown
As we did in previous assignments, let's extract the TF-IDF features:
###Code
def load_sparse_csr(filename):
loader = np.load(filename)
data = loader['data']
indices = loader['indices']
indptr = loader['indptr']
shape = loader['shape']
return csr_matrix( (data, indices, indptr), shape)
tf_idf = load_sparse_csr('people_wiki_tf_idf.npz')
map_index_to_word = pd.read_json('people_wiki_map_index_to_word.json',typ='series')
###Output
_____no_output_____
###Markdown
To run k-means on this dataset, the data matrix needs to be in sparse format; it already is, since we loaded it with `load_sparse_csr` above. To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.
###Code
tf_idf = normalize(tf_idf)
###Output
_____no_output_____
###Markdown
Bipartition the Wikipedia dataset using k-means Recall our workflow for clustering text data with k-means:1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.2. Extract the data matrix from the dataframe.3. Run k-means on the data matrix with some value of k.4. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article).Let us modify the workflow to perform bipartitioning:1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.2. Extract the data matrix from the dataframe.3. Run k-means on the data matrix with k=2.4. Divide the data matrix into two parts using the cluster assignments.5. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization.6. Visualize the bipartition of data.We'd like to be able to repeat Steps 3-6 multiple times to produce a **hierarchy** of clusters such as the following:``` (root) | +------------+-------------+ | | Cluster Cluster +------+-----+ +------+-----+ | | | | Cluster Cluster Cluster Cluster```Each **parent cluster** is bipartitioned to produce two **child clusters**. At the very top is the **root cluster**, which consists of the entire dataset.Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster:* `dataframe`: a subset of the original dataframe that correspond to member rows of the cluster* `matrix`: same set of rows, stored in sparse matrix format* `centroid`: the centroid of the cluster (not applicable for the root cluster)Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters).
###Code
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
'''cluster: should be a dictionary containing the following keys
* dataframe: original dataframe
* matrix: same data, in matrix format
* centroid: centroid for this particular cluster'''
data_matrix = cluster['matrix']
dataframe = cluster['dataframe']
# Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=-1,verbose=1)
kmeans_model.fit(data_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
# Divide the data matrix into two parts using the cluster assignments.
data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
data_matrix[cluster_assignment==1]
# Divide the dataframe into two parts, again using the cluster assignments.
cluster_assignment_sa = np.array(cluster_assignment) # minor format conversion
dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
dataframe[cluster_assignment_sa==1]
# Package relevant variables for the child clusters
cluster_left_child = {'matrix': data_matrix_left_child,
'dataframe': dataframe_left_child,
'centroid': centroids[0]}
cluster_right_child = {'matrix': data_matrix_right_child,
'dataframe': dataframe_right_child,
'centroid': centroids[1]}
return (cluster_left_child, cluster_right_child)
###Output
_____no_output_____
###Markdown
The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.Note. For the purpose of the assignment, we set an explicit seed (`seed=1`) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs.
###Code
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=6, seed=1)
###Output
_____no_output_____
###Markdown
Let's examine the contents of one of the two clusters, which we call the `left_child`, referring to the tree visualization above.
###Code
left_child
###Output
_____no_output_____
###Markdown
And here is the content of the other cluster we named `right_child`.
###Code
right_child
###Output
_____no_output_____
###Markdown
Visualize the bipartition We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.
###Code
def display_single_tf_idf_cluster(cluster, map_index_to_word):
'''map_index_to_word: pandas Series specifying the mapping between words and column indices'''
wiki_subset = cluster['dataframe']
tf_idf_subset = cluster['matrix']
centroid = cluster['centroid']
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroid.argsort()[::-1]
for i in range(5):
print('{0:s}:{1:.3f}'.format(map_index_to_word.index[idx[i]], centroid[idx[i]])),
print('')
# Compute distances from the centroid to all data points in the cluster.
distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
# compute nearest neighbors of the centroid within the cluster.
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in range(8):
text = ' '.join(wiki_subset.iloc[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset.iloc[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('')
###Output
_____no_output_____
###Markdown
Let's visualize the two child clusters:
###Code
display_single_tf_idf_cluster(left_child, map_index_to_word)
display_single_tf_idf_cluster(right_child, map_index_to_word)
###Output
zwolsman:0.025 zx10r:0.017 zwigoff:0.012 zyuganovs:0.011 zyntherius:0.011
* Anita Kunz 0.97401
anita e kunz oc born 1956 is a canadianborn artist and illustratorkunz has lived in london
new york and toronto contributing to magazines and working
* Janet Jackson 0.97472
janet damita jo jackson born may 16 1966 is an american singer songwriter and actress know
n for a series of sonically innovative socially conscious and
* Madonna (entertainer) 0.97475
madonna louise ciccone tkoni born august 16 1958 is an american singer songwriter actress
and businesswoman she achieved popularity by pushing the boundaries of lyrical
* %C3%81ine Hyland 0.97536
ine hyland ne donlon is emeritus professor of education and former vicepresident of univer
sity college cork ireland she was born in 1942 in athboy co
* Jane Fonda 0.97621
jane fonda born lady jayne seymour fonda december 21 1937 is an american actress writer po
litical activist former fashion model and fitness guru she is
* Christine Robertson 0.97643
christine mary robertson born 5 october 1948 is an australian politician and former austra
lian labor party member of the new south wales legislative council serving
* Pat Studdy-Clift 0.97643
pat studdyclift is an australian author specialising in historical fiction and nonfictionb
orn in 1925 she lived in gunnedah until she was sent to a boarding
* Alexandra Potter 0.97646
alexandra potter born 1970 is a british author of romantic comediesborn in bradford yorksh
ire england and educated at liverpool university gaining an honors degree in
###Markdown
The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows: ``` Wikipedia + | +--------------------------+--------------------+ | | + + Athletes Non-athletes``` Is this hierarchy good enough? **When building a hierarchy of clusters, we must keep our particular application in mind.** For instance, we might want to build a **directory** for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the `athletes` and `non-athletes` clusters. Perform recursive bipartitioning Cluster of athletes To help identify the clusters we've built so far, let's give them easy-to-read aliases:
###Code
athletes = left_child
non_athletes = right_child
###Output
_____no_output_____
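###Markdown
Before subdividing, it can be handy to check how many articles ended up in each cluster; a small sketch using the `dataframe` entry that each cluster dictionary carries:
###Code
print(len(athletes['dataframe']), 'articles in the athletes cluster')
print(len(non_athletes['dataframe']), 'articles in the non-athletes cluster')
###Output
_____no_output_____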
###Markdown
Using the bipartition function, we produce two child clusters of the athlete cluster:
###Code
# Bipartition the cluster of athletes
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=6, seed=1)
###Output
_____no_output_____
###Markdown
The left child cluster mainly consists of baseball players:
###Code
display_single_tf_idf_cluster(left_child_athletes, map_index_to_word)
###Output
zvuku:0.054 zwerge:0.043 zumars:0.038 zx10rborn:0.035 zuidams:0.030
* Tony Smith (footballer, born 1957) 0.94677
anthony tony smith born 20 february 1957 is a former footballer who played as a central de
fender in the football league in the 1970s and
* Justin Knoedler 0.94746
justin joseph knoedler born july 17 1980 in springfield illinois is a former major league
baseball catcherknoedler was originally drafted by the st louis cardinals
* Chris Day 0.94849
christopher nicholas chris day born 28 july 1975 is an english professional footballer who
plays as a goalkeeper for stevenageday started his career at tottenham
* Todd Williams 0.94882
todd michael williams born february 13 1971 in syracuse new york is a former major league
baseball relief pitcher he attended east syracuseminoa high school
* Todd Curley 0.95007
todd curley born 14 january 1973 is a former australian rules footballer who played for co
llingwood and the western bulldogs in the australian football league
* Ashley Prescott 0.95015
ashley prescott born 11 september 1972 is a former australian rules footballer he played w
ith the richmond and fremantle football clubs in the afl between
* Tommy Anderson (footballer) 0.95037
thomas cowan tommy anderson born 24 september 1934 in haddington is a scottish former prof
essional footballer he played as a forward and was noted for
* Leslie Lea 0.95065
leslie lea born 5 october 1942 in manchester is an english former professional footballer
he played as a midfielderlea began his professional career with blackpool
###Markdown
On the other hand, the right child cluster is a mix of players in association football, Australian rules football and ice hockey:
###Code
display_single_tf_idf_cluster(right_child_athletes, map_index_to_word)
###Output
zowie:0.045 zuberi:0.043 zululand:0.035 zyiit:0.031 zygouli:0.031
* Alessandra Aguilar 0.93880
alessandra aguilar born 1 july 1978 in lugo is a spanish longdistance runner who specialis
es in marathon running she represented her country in the event
* Heather Samuel 0.93999
heather barbara samuel born 6 july 1970 is a retired sprinter from antigua and barbuda who
specialized in the 100 and 200 metres in 1990
* Viola Kibiwot 0.94037
viola jelagat kibiwot born december 22 1983 in keiyo district is a runner from kenya who s
pecialises in the 1500 metres kibiwot won her first
* Ayelech Worku 0.94052
ayelech worku born june 12 1979 is an ethiopian longdistance runner most known for winning
two world championships bronze medals on the 5000 metres she
* Krisztina Papp 0.94105
krisztina papp born 17 december 1982 in eger is a hungarian long distance runner she is th
e national indoor record holder over 5000 mpapp began
* Petra Lammert 0.94230
petra lammert born 3 march 1984 in freudenstadt badenwrttemberg is a former german shot pu
tter and current bobsledder she was the 2009 european indoor champion
* Morhad Amdouni 0.94231
morhad amdouni born 21 january 1988 in portovecchio is a french middle and longdistance ru
nner he was european junior champion in track and cross country
* Brian Davis (golfer) 0.94378
brian lester davis born 2 august 1974 is an english professional golferdavis was born in l
ondon he turned professional in 1994 and became a member
###Markdown
Our hierarchy of clusters now looks like this:``` Wikipedia + | +--------------------------+--------------------+ | | + + Athletes Non-athletes + | +-----------+--------+ | | | association football/ + Australian rules football/ baseball ice hockey``` Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, **we would like to achieve a similar level of granularity for all clusters.**Notice that the right child cluster is coarser than the left child cluster. The right cluster possesses a greater variety of topics than the left (ice hockey/association football/Australian football vs. baseball). So the right child cluster should be subdivided further to produce finer child clusters. Let's give the clusters aliases as well:
###Code
baseball = left_child_athletes
ice_hockey_football = right_child_athletes
###Output
_____no_output_____
###Markdown
Cluster of ice hockey players and football players In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights.Let us bipartition the cluster of ice hockey and football players.
###Code
left_child_ihs, right_child_ihs = bipartition(ice_hockey_football, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_ihs, map_index_to_word)
display_single_tf_idf_cluster(right_child_ihs, map_index_to_word)
###Output
zowie:0.064 zyiit:0.039 zwolsman:0.038 zealandamerican:0.038 zolecki:0.037
* Heather Samuel 0.91590
heather barbara samuel born 6 july 1970 is a retired sprinter from antigua and barbuda who
specialized in the 100 and 200 metres in 1990
* Krisztina Papp 0.91672
krisztina papp born 17 december 1982 in eger is a hungarian long distance runner she is th
e national indoor record holder over 5000 mpapp began
* Ayelech Worku 0.91892
ayelech worku born june 12 1979 is an ethiopian longdistance runner most known for winning
two world championships bronze medals on the 5000 metres she
* Viola Kibiwot 0.91906
viola jelagat kibiwot born december 22 1983 in keiyo district is a runner from kenya who s
pecialises in the 1500 metres kibiwot won her first
* Alessandra Aguilar 0.91955
alessandra aguilar born 1 july 1978 in lugo is a spanish longdistance runner who specialis
es in marathon running she represented her country in the event
* Antonina Yefremova 0.92054
antonina yefremova born 19 july 1981 is a ukrainian sprinter who specializes in the 400 me
tres yefremova received a twoyear ban in 2012 for using
* Marian Burnett 0.92251
marian joan burnett born 22 february 1976 in linden is a female middledistance runner from
guyana who specialises in the 800 metres she competed in
* Wang Xiuting 0.92377
wang xiuting chinese born 11 may 1965 in shandong is a chinese former longdistance runners
he rose to prominence with a victory in the 10000 metres
zuberi:0.118 zadnji:0.089 zimmermann:0.062 zekiye:0.060 zululand:0.054
* Bob Heintz 0.88057
robert edward heintz born may 1 1970 is an american professional golfer who plays on the n
ationwide tourheintz was born in syosset new york he
* Tim Conley 0.88274
tim conley born december 8 1958 is an american professional golfer who played on the pga t
our nationwide tour and most recently the champions tourconley
* Bruce Zabriski 0.88279
bruce zabriski born august 3 1957 is an american professional golfer who played on the pga
tour european tour and the nationwide tourzabriski joined the
* Sonny Skinner 0.88438
sonny skinner born august 18 1960 is an american professional golfer who plays on the cham
pions tourskinner was born in portsmouth virginia he turned professional
* Brian Davis (golfer) 0.88482
brian lester davis born 2 august 1974 is an english professional golferdavis was born in l
ondon he turned professional in 1994 and became a member
* Greg Chalmers 0.88501
greg j chalmers born 11 october 1973 is an australian professional golfer who has played o
n both the european tour and the pga tourchalmers was
* Todd Barranger 0.88552
todd barranger born october 19 1968 is an american professional golfer who played on the p
ga tour asian tour and the nationwide tourbarranger joined the
* Dick Mast 0.88830
richard mast born march 23 1951 is an american professional golfer who has played on the p
ga tour nationwide tour and champions tourmast was born
###Markdown
**Quiz Question**. Which diagram best describes the hierarchy right after splitting the `ice_hockey_football` cluster? Refer to the quiz form for the diagrams. **Caution**. The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.* **If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster.** Thus, we may be misled if we judge the purity of clusters solely by their top documents and words. * **Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization.** We may need to subdivide further to discover new topics. For instance, subdividing the `ice_hockey_football` cluster led to the appearance of runners and golfers. Cluster of non-athletes Now let us subdivide the cluster of non-athletes.
###Code
# Bipartition the cluster of non-athletes
left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word)
###Output
zwolsman:0.039 zx10r:0.030 zwigoff:0.023 zwacksalles:0.021 zupanprofessor:0.015
* Madonna (entertainer) 0.96092
madonna louise ciccone tkoni born august 16 1958 is an american singer songwriter actress
and businesswoman she achieved popularity by pushing the boundaries of lyrical
* Janet Jackson 0.96153
janet damita jo jackson born may 16 1966 is an american singer songwriter and actress know
n for a series of sonically innovative socially conscious and
* Cher 0.96540
cher r born cherilyn sarkisian may 20 1946 is an american singer actress and television ho
st described as embodying female autonomy in a maledominated industry
* Laura Smith 0.96600
laura smith is a canadian folk singersongwriter she is best known for her 1995 single shad
e of your love one of the years biggest hits
* Natashia Williams 0.96677
natashia williamsblach born august 2 1978 is an american actress and former wonderbra camp
aign model who is perhaps best known for her role as shane
* Anita Kunz 0.96716
anita e kunz oc born 1956 is a canadianborn artist and illustratorkunz has lived in london
new york and toronto contributing to magazines and working
* Maggie Smith 0.96747
dame margaret natalie maggie smith ch dbe born 28 december 1934 is an english actress she
made her stage debut in 1952 and has had
* Lizzie West 0.96752
lizzie west born in brooklyn ny on july 21 1973 is a singersongwriter her music can be des
cribed as a blend of many genres including
###Markdown
Neither of the clusters shows clear topics, apart from the genders. Let us divide them further.
###Code
male_non_athletes = left_child_non_athletes
female_non_athletes = right_child_non_athletes
###Output
_____no_output_____ |
EmisProc/ptse/ptseE.ipynb | ###Markdown
Generating the CAMx elevated point-source emission file Background- This step splits up the TEDS annual PM/VOCs emissions, multiplies them by the temporal allocation factors, and merges them into the photochemical model grid system.- The **temporal allocation factor** files for elevated point sources must first be [expanded](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/ptse/ptseE_ONS/) from the CEMS data.- For the overall emission-processing principles see the [processing outline](https://sinotec2.github.io/Focus-on-Air-Quality/EmsProc/處理程序總綱), the [point-source processing](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/ptse/) and the [reading of huge `.dbf` files](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/dbf2csv.py/), which are the preprocessing steps for this stage. The program also calls the subroutines in [ptse_sub](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/ptse/ptse_sub/) Subroutine description [ptse_sub](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/ptse/ptse_sub/) Cluster and merge the stack coordinates, as the name suggests. There are several reasons for the merging:- When neighboring plumes overlap at close range, the combined plume heat raises the final plume rise and therefore lowers the ground-level impact; this effect is not accounted for in the model's plume-in-grid treatment and must be handled before the model run.- Double-counting the smaller plumes has little effect on the concentrations but consumes a lot of storage and computing resources, so merging is necessary. - Even though the smaller point sources are split by height and merged into area sources, this mechanism is kept to prevent the number of point sources from growing without bound, which could amplify singularities caused by poor data quality. cluster_xy- Call `KMeans` from `sklearn` to perform the cluster analysis
###Code
kuang@node03 /nas1/TEDS/teds11/ptse
$ cat -n cluster_xy.py
def cluster_xy(df,C_NO):
from sklearn.cluster import KMeans
from pandas import DataFrame
import numpy as np
import sys
###Output
_____no_output_____
###Markdown
- Select from the database the stacks that share the same **control number (管編)**
###Code
b=df.loc[df.CP_NO.map(lambda x:x[:8]==C_NO)]
n=len(b)
if n==0:sys.exit('fail filtering of '+C_NO)
colb=b.columns
###Output
_____no_output_____
###Markdown
- Assemble the coordinates and heights of all stacks under this **control number** into one large matrix - Run the KMeans cluster analysis
###Code
x=[b.XY[i][0] for i in b.index]
y=[b.XY[i][1] for i in b.index]
z=b.HEI
M=np.array([x, y, z]).T
clt = KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300, \
n_clusters=10, n_init=10, n_jobs=-1, precompute_distances='auto', \
random_state=None, tol=0.0001, verbose=0)
kmeans=clt.fit(M)
###Output
_____no_output_____
###Markdown
- Discard the original coordinates and use the cluster mean positions instead
###Code
# np.array(clt.cluster_centers_) for group
b_lab=np.array(clt.labels_)
df.loc[b.index,'UTM_E']=[np.array(clt.cluster_centers_)[i][0] for i in b_lab]
df.loc[b.index,'UTM_N']=[np.array(clt.cluster_centers_)[i][1] for i in b_lab]
return df
###Output
_____no_output_____
###Markdown
XY_pivot- For plants with too many point sources (e.g., China Steel), before clustering the coordinates, use a pivot table to sum the emissions by stack (**stack ID (管煙)** = **facility ID (管編)** + **stack number (煙編)**) - Import the modules
###Code
#XY clustering in CSC before pivotting
#df=cluster_xy(df,'E5600841')
#pivotting
def XY_pivot(df,col_id,col_em,col_mn,col_mx):
from pandas import pivot_table,merge
import numpy as np
###Output
_____no_output_____
###Markdown
- Three kinds of columns use different `aggfunc`s - emissions (`col_em`): sum - stack height (`col_mx`): maximum - other stack parameters (`col_mn`): mean
###Code
df_pv1=pivot_table(df,index=col_id,values=col_em,aggfunc=np.sum).reset_index()
df_pv2=pivot_table(df,index=col_id,values=col_mn,aggfunc=np.mean).reset_index()
df_pv3=pivot_table(df,index=col_id,values=col_mx,aggfunc=max).reset_index()
###Output
_____no_output_____
###Markdown
- Merge and compute an equivalent diameter so that the volumetric flow rate is conserved, as sketched below
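Here the code's expression `np.sqrt(4/3.14159*q/60*(t+273)/273/v)` corresponds to $D=\sqrt{\frac{4}{\pi}\cdot\frac{Q_N}{60}\cdot\frac{T+273}{273}\cdot\frac{1}{V}}$, i.e. the summed flow is converted to actual m³/s and divided by the exit velocity to give the stack cross-section $\pi D^2/4$ (this assumes `ORI_QU1` is a normal-condition flow in Nm³/min, which is inferred from the 60 and 273 factors rather than stated in the source).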
###Code
df1=merge(df_pv1,df_pv2,on=col_id)
df=merge(df1,df_pv3,on=col_id)
df['DIA']=[np.sqrt(4/3.14159*q/60*(t+273)/273/v) for q,t,v in zip(df.ORI_QU1,df.TEMP,df.VEL)]
return df
###Output
_____no_output_____
###Markdown
Main program description Running the program- The program is run month by month. Because appending to the nc file becomes very slow once the time dimension has been expanded, the work is split into two stages: the main program (`ptseE.py`) and the output program (`wrtE.py`).
###Code
for m in 0{1..9} 1{0..2};do python ptseE.py 19$m;done
for m in 0{1..9} 1{0..2};do python wrtE.py 19$m;done
###Output
_____no_output_____
###Markdown
Basic program definitions, database file QC, and extension of the nc file- Import the modules - Because the processed database is not saved separately, the program still uses the subroutines `CORRECT`, `add_PMS`, `check_nan`, `check_landsea`, `FillNan`, `WGS_TWD`, and `Elev_YPM` from [ptse_sub](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/ptse/ptse_sub/)
###Code
#kuang@node03 /nas1/TEDS/teds11/ptse
#$ cat -n ptseE.py
#! coding = utf8
from pandas import *
import numpy as np
import os, sys, subprocess
import netCDF4
import twd97
import datetime
from calendar import monthrange
from scipy.io import FortranFile
from mostfreqword import mostfreqword
from ptse_sub import CORRECT, add_PMS, check_nan, check_landsea, FillNan, WGS_TWD, Elev_YPM
from ioapi_dates import jul2dt, dt2jul
from cluster_xy import cluster_xy, XY_pivot
###Output
_____no_output_____
###Markdown
- Program dependencies and the year/month definition (from the command-line argument) - `pncgen` and `ncks` are used in the `wrtE.py` stage
###Code
#Main
#locate the programs and root directory
pncg=subprocess.check_output('which pncgen',shell=True).decode('utf8').strip('\n')
ncks=subprocess.check_output('which ncks',shell=True).decode('utf8').strip('\n')
hmp=subprocess.check_output('pwd',shell=True).decode('utf8').strip('\n').split('/')[1]
P='./'
#time and space initiates
ym='1901' #sys.argv[1]
mm=ym[2:4]
mo=int(mm)
yr=2000+int(ym[:2]);TEDS='teds'+str(int((yr-2016)/3)+10)
###Output
_____no_output_____
###Markdown
- Use `Hs` to filter out the "elevated" point sources
###Code
Hs=10 #cutting height of stacks
###Output
_____no_output_____
###Markdown
- Start and end dates, and the center point of the modeling domain
###Code
ntm=(monthrange(yr,mo)[1]+2)*24+1
bdate=datetime.datetime(yr,mo,1)+datetime.timedelta(days=-1+8./24)
edate=bdate+datetime.timedelta(days=ntm/24)#monthrange(yr,mo)[1]+3)
Latitude_Pole, Longitude_Pole = 23.61000, 120.9900
Xcent, Ycent = twd97.fromwgs84(Latitude_Pole, Longitude_Pole)
###Output
_____no_output_____
###Markdown
- Apply and extend the nc template; note that `name` may be reserved and unchangeable in newer versions of `NCF`. (Handled separately in the `CAMx` source code)
###Code
#prepare the uamiv template
print('template applied')
NCfname='fortBE.413_'+TEDS+'.ptsE'+mm+'.nc'
try:
nc = netCDF4.Dataset(NCfname, 'r+')
except:
os.system('cp '+P+'template_v7.nc '+NCfname)
nc = netCDF4.Dataset(NCfname, 'r+')
V=[list(filter(lambda x:nc.variables[x].ndim==j, [i for i in nc.variables])) for j in [1,2,3,4]]
nt,nv,dt=nc.variables[V[2][0]].shape
nv=len([i for i in V[1] if i !='CP_NO'])
nc.SDATE,nc.STIME=dt2jul(bdate)
nc.EDATE,nc.ETIME=dt2jul(edate)
nc.NOTE='Point Emission'
nc.NOTE=nc.NOTE+(60-len(nc.NOTE))*' '
nc.NVARS=nv
#Name-names may encounter conflicts with newer versions of NCFs and PseudoNetCDFs.
#nc.name='PTSOURCE '
nc.NSTEPS=ntm
if 'ETFLAG' not in V[2]:
zz=nc.createVariable('ETFLAG',"i4",("TSTEP","VAR","DATE-TIME"))
if nt!=ntm or (nc.variables['TFLAG'][0,0,0]!=nc.SDATE and nc.variables['TFLAG'][0,0,1]!=nc.STIME):
for t in range(ntm):
sdate,stime=dt2jul(bdate+datetime.timedelta(days=t/24.))
nc.variables['TFLAG'][t,:,0]=[sdate for i in range(nv)]
nc.variables['TFLAG'][t,:,1]=[stime for i in range(nv)]
ndate,ntime=dt2jul(bdate+datetime.timedelta(days=(t+1)/24.))
nc.variables['ETFLAG'][t,:,0]=[ndate for i in range(nv)]
nc.variables['ETFLAG'][t,:,1]=[ntime for i in range(nv)]
nc.close()
#template OK
###Output
template applied
###Markdown
- Pollutant name mapping and variable group definitions
###Code
#item sets definitions
c2s={'NMHC':'NMHC','SOX':'SO2','NOX':'NO2','CO':'CO','PM':'PM'}
c2m={'SOX':64,'NOX':46,'CO':28,'PM':1}
cole=[i+'_EMI' for i in c2s]+['PM25_EMI']
XYHDTV=['UTM_E','UTM_N','HEI','DIA','TEMP','VEL']
colT=['HD1','DY1','HY1']
colc=['CCRS','FCRS','CPRM','FPRM']
###Output
_____no_output_____
###Markdown
- Read the point source database and run quality control. In the newer version the encoding only accepts `big5`
###Code
#Input the TEDS csv file
try:
df = read_csv('point.csv', encoding='big5')
except:
df = read_csv('point.csv')
df = check_nan(df)
df = check_landsea(df)
df = WGS_TWD(df)
df = Elev_YPM(df)
#only P??? stacks are taken into account
boo=(df.HEI>=Hs) & (df.NO_S.map(lambda x:x[0]=='P'))
df=df.loc[boo].reset_index(drop=True)
#delete the zero emission sources
df['SUM']=[i+j+k+l+m for i,j,k,l,m in zip(df.SOX_EMI,df.NOX_EMI,df.CO_EMI,df.PM_EMI,df.NMHC_EMI)]
df=df.loc[df.SUM>0].reset_index(drop=True)
df['DY1']=[i*j for i,j in zip(df.DW1,df.WY1)]
df['HY1']=[i*j for i,j in zip(df.HD1,df.DY1)]
df=CORRECT(df)
df['CP_NO'] = [i + j for i, j in zip(list(df['C_NO']), list(df['NO_S']))]
#
###Output
/opt/miniconda3/envs/geocat/lib/python3.9/site-packages/IPython/core/interactiveshell.py:3444: DtypeWarning: Columns (54) have mixed types.Specify dtype option on import or set low_memory=False.
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
- Coordinate transformation
###Code
#Coordinate translation
df.UTM_E=df.UTM_E-Xcent
df.UTM_N=df.UTM_N-Ycent
df.SCC=[str(int(i)) for i in df.SCC]
df.loc[df.SCC=='0','SCC']='0'*10
###Output
_____no_output_____
###Markdown
- Re-pivot the database on the **stack ID (管煙)** - Three kinds of columns use different `aggfunc`s - emissions (`col_em`): sum - stack height (`col_mx`): maximum - other stack parameters (`col_mn`): mean
###Code
#pivot table along the dimension of NO_S (P???)
df_cp=pivot_table(df,index='CP_NO',values=cole+['ORI_QU1'],aggfunc=sum).reset_index()
df_xy=pivot_table(df,index='CP_NO',values=XYHDTV+colT,aggfunc=np.mean).reset_index()
df_sc=pivot_table(df,index='CP_NO',values='SCC', aggfunc=mostfreqword).reset_index()
df1=merge(df_cp,df_xy,on='CP_NO')
df=merge(df1,df_sc,on='CP_NO')
df.head()#debugging
###Output
_____no_output_____
###Markdown
- Emission unit conversion - Because the hours have already been applied in `ons` (each stack's fractions sum to 1.), only the mass conversion needs to be considered; a quick check follows below. - The previous version of the calculation is kept, commented out, for comparison
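A quick arithmetic check, assuming the TEDS annual emissions are in metric tonnes: $1\,\mathrm{t}=10^6\,\mathrm{g}$, so the multiplication by $10^6$ gives g per year, and the later product with the normalized hourly fractions in `ons` (which sum to 1 over the year for each stack) yields grams in each hour, which is why the commented-out division by `DY1` and `HD1` is no longer needed.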
###Code
#T/year to g/hour
for c in cole:
df[c]=[i*1E6 for i in df[c]]
# df[c]=[i*1E6/j/k for i,j,k in zip(df[c],df.DY1,df.HD1)]
###Output
_____no_output_____
###Markdown
- The CAMx version difference is selected here - For the differences between versions 6 and 7 see [converting CMAQ/CAMx emission files](http://www.evernote.com/l/AH1z_n2U--lM-poNlQnghsjFBfDEY6FalgM/), - To target a different version, the template preparation stage should be redone
###Code
#determination of camx version
ver=7
if 'XSTK' in V[0]:ver=6
print('NMHC/PM splitting and expanding')
###Output
NMHC/PM splitting and expanding
###Markdown
Splitting VOCs and PM- Read the profile numbers and the cross-reference table
###Code
print('NMHC/PM splitting and expanding')
#prepare the profile and CBMs
fname='/'+hmp+'/SMOKE4.5/data/ge_dat/gsref.cmaq_cb05_soa.txt'
gsref=read_csv(fname,delimiter=';',header=None)
col='SCC Speciation_profile_number Pollutant_ID'.split()+['C'+str(i) for i in range(3,10)]
gsref.columns=col
for c in col[3:]:
del gsref[c]
fname='/'+hmp+'/SMOKE4.5/data/ge_dat/gspro.cmaq_cb05_soa.txt'
gspro=read_csv(fname,delimiter=';',header=None)
col=['Speciation_profile_number','Pollutant_ID','Species_ID','Split_factor','Divisor','Mass_Fraction']
gspro.columns=col
gspro.head() #debugging
###Output
NMHC/PM splitting and expanding
###Markdown
- Since TEDS9, many new (or revised) domestic SCCs are not in the database. They are handled simply here by mapping them to existing SCC codes. - Mapping approach: find the latest SCC database online and identify the process category of the new SCC - In the existing SCC profile number table, find SCCs whose leading digits (4~6) match; if the process is similar, assign it as the replacement - If nothing suitable can be found, fall back to the numerically closest SCC
###Code
#new SCC since TEDS9,erase and substude
sccMap={
'30111103':'30111199', #not in df_scc2
'30112401':'30112403', #Industrial Processes Chemical Manufacturing Chloroprene Chlorination Reactor
'30115606':'30115607',#Industrial Processes Chemical Manufacturing Cumene Aluminum Chloride Catalyst Process: DIPB Strip
'30118110':'30118109',#Industrial Processes Chemical Manufacturing Toluene Diisocyanate Residue Vacuum Distillation
'30120554':'30120553', #not known, 548~ Propylene Oxide Mixed Hydrocarbon Wash-Decant System Vent
'30117410':'30117421',
'30117411':'30117421',
'30117614':'30117612',
'30121125':'30121104',
'30201111':'30201121',
'30300508':'30300615',
'30301024':'30301014',
'30400213':'30400237',
'30120543':'30120502',
'40300215':'40300212'} #not known
for s in sccMap:
df.loc[df.SCC==s,'SCC']=sccMap[s]
###Output
_____no_output_____
###Markdown
- Keep only the relevant SCCs to shorten the tables and speed up the matching - Because some profile numbers contain letters, they are cleaned up first so that the format is consistent
###Code
#reduce gsref and gspro
dfV=df.loc[df.NMHC_EMI>0].reset_index(drop=True)
gsrefV=gsref.loc[gsref.SCC.map(lambda x:x in set(dfV.SCC))].reset_index(drop=True)
prof_alph=set([i for i in set(gsrefV.Speciation_profile_number) if i.isalpha()])
gsrefV=gsrefV.loc[gsrefV.Speciation_profile_number.map(lambda x:x not in prof_alph)].reset_index(drop=True)
gsproV=gspro.loc[gspro.Speciation_profile_number.map(lambda x:x in set(gsrefV.Speciation_profile_number))].reset_index(drop=True)
set(gsproV.Species_ID) #debugging
###Output
_____no_output_____
###Markdown
- Keep only the relevant profile numbers whose pollutants include `TOG`
###Code
pp=[]
for p in set(gspro.Speciation_profile_number):
a=gsproV.loc[gsproV.Speciation_profile_number==p]
if 'TOG' not in set(a.Pollutant_ID):pp.append(p)
boo=(gspro.Speciation_profile_number.map(lambda x:x not in pp)) & (gspro.Pollutant_ID=='TOG')
gsproV=gspro.loc[boo].reset_index(drop=True)
set(gsproV.Species_ID) #debugging
###Output
_____no_output_____
###Markdown
- Prepare the multiplier matrix with shape `(number of SCCs, number of CBM species)`
###Code
cbm=list(set([i for i in set(gsproV.Species_ID) if i in V[1]]))
idx=gsproV.loc[gsproV.Species_ID.map(lambda x:x in cbm)].index
sccV=list(set(dfV.SCC))
sccV.sort()
nscc=len(sccV)
prod=np.zeros(shape=(nscc,len(cbm)))
###Output
_____no_output_____
###Markdown
- Some SCCs are matched but have neither `TOG` nor `VOC` in the database. Record them and move on to the next SCC
###Code
#dfV but with PM scc(no TOG/VOC in gspro), modify those SCC to '0'*10 in dfV, drop the pro_no in gsproV
noTOG_scc=[]
for i in range(nscc):
s=sccV[i]
p=list(gsrefV.loc[gsrefV.SCC==s,'Speciation_profile_number'])[0]
a=gsproV.loc[gsproV.Speciation_profile_number==p]
if 'TOG' not in set(a.Pollutant_ID) and 'VOC' not in set(a.Pollutant_ID):
noTOG_scc.append(s)
continue
len(noTOG_scc),noTOG_scc[:5] #debugging
###Output
_____no_output_____
###Markdown
- Find the corresponding profile number and store the fractions in the multiplier matrix `prod`
###Code
for i in range(nscc):
    s=sccV[i]
    p=list(gsrefV.loc[gsrefV.SCC==s,'Speciation_profile_number'])[0]
    a=gsproV.loc[gsproV.Speciation_profile_number==p]
    if 'TOG' not in set(a.Pollutant_ID) and 'VOC' not in set(a.Pollutant_ID): continue
    boo=(gsproV.Speciation_profile_number==p) & (gsproV.Pollutant_ID=='TOG')
a=gsproV.loc[boo]
for c in a.Species_ID:
if c not in cbm:continue
j=cbm.index(c)
f=a.loc[a.Species_ID==c,'Mass_Fraction']
d=a.loc[a.Species_ID==c,'Divisor']
prod[i,j]+=f/d
print(sccV[5])
print([(cbm[i],prod[5,i]) for i in range(len(cbm))]) #debugging
###Output
10100601
[('FORM', 0.0), ('TOL', 0.0), ('ETHA', 0.003591739000299312), ('TERP', 0.0), ('ETH', 0.01240500192491409), ('ALD2', 0.0), ('OLE', 0.0004752826771990361), ('MEOH', 0.0), ('BENZ', 0.00394306622046861), ('XYL', 0.0), ('ETOH', 0.0), ('ISOP', 0.0), ('ALDX', 0.0), ('POA', 0.0), ('IOLE', 0.0), ('PAR', 0.004418340684538093)]
###Markdown
- Multiply the `NMHC_EMI` emissions by the multiplier matrix to obtain the `CBM` species emissions, as written out below
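Written out, with $P_{s,c}=\sum \mathrm{Mass\_Fraction}/\mathrm{Divisor}$ the TOG split factor accumulated in `prod` for SCC $s$ and CBM species $c$, the speciated emission of source $i$ is $E_{i,c}=E_{i,\mathrm{NMHC}}\cdot P_{s(i),c}$, which is exactly what the loop in the next cell computes.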
###Code
df.loc[df.SCC.map(lambda x:x in noTOG_scc),'SCC']='0'*10
for c in cbm:
df[c]=0.
for s in set(dfV.SCC):
i=sccV.index(s)
idx=df.loc[df.SCC==s].index
for c in cbm:
j=cbm.index(c)
df.loc[idx,c]=[prod[i,j]*k for k in df.loc[idx,'NMHC_EMI']]
df.loc[(df.SCC==sccV[5])&(df.NMHC_EMI>0)].head() #debugging
###Output
_____no_output_____
###Markdown
- PM splitting; see the simple PM-splitting subroutine in [ptse_sub](https://sinotec2.github.io/Focus-on-Air-Quality/EmisProc/ptse/ptse_sub/%E7%B0%A1%E5%96%AE%E7%9A%84pm%E5%8A%83%E5%88%86%E5%89%AF%E7%A8%8B%E5%BC%8F). A rough illustration follows before the actual call.
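The actual split is implemented in `add_PMS` (linked above) and is not reproduced here; purely as a rough, hypothetical illustration of the idea, and assuming only that `PM_EMI` is at least `PM25_EMI`, a naive coarse/fine split could look like this sketch:
###Code
#illustration only -- NOT the add_PMS routine; a naive fine/coarse split
def naive_pm_split(df):
    fine = df['PM25_EMI']                                    #everything reported as PM2.5
    coarse = (df['PM_EMI'] - df['PM25_EMI']).clip(lower=0)   #remainder of the total PM
    #put all fine mass in FPRM and all coarse mass in CPRM as placeholders;
    #the real routine also fills the crustal fractions FCRS/CCRS from speciation profiles
    df['FPRM'], df['CPRM'] = fine, coarse
    df['FCRS'], df['CCRS'] = 0.0, 0.0
    return df
###Output
_____no_output_____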
###Code
#PM splitting
df=add_PMS(df)
df.loc[0]
###Output
_____no_output_____
###Markdown
Multiplying the annual emissions by the temporal allocation factors- The **temporal allocation factor** files depend on the cutoff height (the record counts differ), so they must be generated first and their file names written into the program. - Here `fns0`~`fns30` are the **temporal allocation factor** files for cutoff heights 0~30 - Advantage: the mapping is recorded explicitly - Drawback: the code must be edited by hand, and the file names differ for every TEDS version
###Code
#pivot along the axis of XY coordinates
#def. of columns and dicts
fns0={
'CO' :'CO_ECP7496_MDH8760_ONS.bin',
'NMHC':'NMHC_ECP2697_MDH8760_ONS.bin',
'NOX' :'NOX_ECP13706_MDH8760_ONS.bin',
'PM' :'PM_ECP17835_MDH8760_ONS.bin',
'SOX' :'SOX_ECP8501_MDH8760_ONS.bin'}
fns10={
'CO' :'CO_ECP4919_MDH8784_ONS.bin',
'NMHC':'NMHC_ECP3549_MDH8784_ONS.bin',
'NOX' :'NOX_ECP9598_MDH8784_ONS.bin',
'PM' :'PM_ECP11052_MDH8784_ONS.bin',
'SOX' :'SOX_ECP7044_MDH8784_ONS.bin'}
fns30={
'CO' :'CO_ECP1077_MDH8784_ONS.bin',
'NMHC':'NMHC_ECP1034_MDH8784_ONS.bin',
'NOX' :'NOX_ECP1905_MDH8784_ONS.bin',
'PM' :'PM_ECP2155_MDH8784_ONS.bin',
'SOX' :'SOX_ECP1468_MDH8784_ONS.bin'}
F={0:fns0,10:fns10,30:fns30}
fns=F[Hs]
###Output
_____no_output_____
###Markdown
- Mapping tables from emission species names to air quality species names, and from emission species names to molecular weights
###Code
cols={i:[c2s[i]] for i in c2s}
cols.update({'NMHC':cbm,'PM':colc})
colp={c2s[i]:i+'_EMI' for i in fns}
colp.update({i:i for i in cbm+colc})
lspec=[i for i in list(colp) if i not in ['NMHC','PM']]
c2m={i:1 for i in colp}
c2m.update({'SO2':64,'NO2':46,'CO':28})
col_id=["C_NO","XY"]
col_em=list(colp.values())
col_mn=['TEMP','VEL','UTM_E', 'UTM_N','HY1','HD1','DY1']
col_mx=['HEI']
df['XY']=[(x,y) for x,y in zip(df.UTM_E,df.UTM_N)]
df["C_NO"]=[x[:8] for x in df.CP_NO]
###Output
_____no_output_____
###Markdown
- Read the **temporal allocation factors** and normalize them (the sum over the time axis equals 1.0); see the formula below - `SPECa` holds the hourly emissions for each stack
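In formula form: if $f_{s,t}$ is the raw factor for stack $s$ at hour $t$, the normalized factor is $\hat f_{s,t}=f_{s,t}/\sum_{t'}f_{s,t'}$ (so $\sum_t\hat f_{s,t}=1$) and the hourly emission of species $c$ is $E_{s,c,t}=E^{\mathrm{annual}}_{s,c}\,\hat f_{s,t}$; this is what the `ons=ons/s_ons[:,None]` normalization and the later `SPEC*ons` product compute.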
###Code
print('Time fraction multiplying and expanding')
#matching of the bin filenames
nopts=len(df)
SPECa=np.zeros(shape=(ntm,nopts,len(lspec)))
id365=365
if yr%4==0:id365=366
###Output
Time fraction multiplying and expanding
###Markdown
- Open the **temporal allocation factor** file for each pollutant in turn
###Code
for spe in list(fns)[:1]: #read the first one for eg.
fnameO=fns[spe]
with FortranFile(fnameO, 'r') as f:
cp = f.read_record(dtype=np.dtype('U12'))
mdh = f.read_record(dtype=np.int)
ons = f.read_record(dtype=float)
print(cp[:5],mdh[:5]) #debugging
###Output
/tmp/ipykernel_19354/1135795365.py:5: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
mdh = f.read_record(dtype=np.int)
###Markdown
- `FortranFile` records only carry the total length, not the shape, so a `reshape` is needed - Sum over the time axis
###Code
ons=ons.reshape(len(cp),len(mdh))
s_ons=np.sum(ons,axis=1)
print(ons[:5,:5]) #debugging
###Output
[[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]]
###Markdown
- Reorder and keep only the stacks with emissions (sum > 0) whose **stack IDs (管煙)** exist in the database (as a consistency check)
###Code
#only those CP with emission accounts
idx=np.where(s_ons>0)
cp1 = [i for i in cp[idx[0]] if i in list(df.CP_NO)]
len(idx[0]) #debugging
###Output
_____no_output_____
###Markdown
- Because repeated `index` lookups on a list are slow, build the result as an `array` first.
###Code
idx= np.array([list(cp).index(i) for i in cp1])
cp, ons, s_ons =cp1,ons[idx,:],s_ons[idx]
#normalize to be the fractions in a year
len(idx) #debugging
###Output
_____no_output_____
###Markdown
- Normalization
###Code
ons=ons/s_ons[:,None]
###Output
_____no_output_____
###Markdown
- Build index labels for the stacks and for the start and end times of the current month - Extract the current month's portion from the annual **temporal** allocation matrix and store it as the `ons2` matrix
###Code
idx_cp=[list(df.CP_NO).index(i) for i in cp]
ibdate=list(mdh).index(int(bdate.strftime('%m%d%H')))
iedate=list(mdh).index(int(edate.strftime('%m%d%H')))
ons2=np.zeros(shape=(nopts,ntm)) #time fractions for this month
if ibdate>iedate:
endp=id365*24-ibdate
ons2[idx_cp,:endp]=ons[:,ibdate:]
ons2[idx_cp,endp:ntm]=ons[:,:iedate]
else:
ons2[idx_cp,:]=ons[:,ibdate:iedate]
print(cp[:5]) #debugging
print(s_ons[:5]) #debugging
print(ons2[:5,:5]) #debugging
###Output
['A3400047P002', 'A3400109P001', 'A3400118P001', 'A3400172P001', 'A34A1392P001']
[5460. 8736. 8736. 8736. 8736.]
[[0.00018315 0.00018315 0.00018315 0.00018315 0.00018315]
[0.00011447 0.00011447 0.00011447 0.00011447 0.00011447]
[0.00011447 0.00011447 0.00011447 0.00011447 0.00011447]
[0.00011447 0.00011447 0.00011447 0.00011447 0.00011447]
[0.00011447 0.00011447 0.00011447 0.00011447 0.00011447]]
###Markdown
- `SPEC` holds the annual total emissions (constant along the time axis), with units converted to g-mole or g.
###Code
NREC,NC=nopts,len(cols[spe])
ons =np.zeros(shape=(ntm,NREC,NC))
SPEC=np.zeros(shape=(ntm,NREC,NC))
for c in cols[spe]:
ic=cols[spe].index(c)
for t in range(ntm):
SPEC[t,:,ic]=df[colp[c]]/c2m[c]
###Output
_____no_output_____
###Markdown
- Reuse the earlier `ons` array to store the transpose of `ons2`, then multiply the total emissions by the **temporal** allocation factors
###Code
OT=ons2.T[:,:]
for ic in range(NC):
ons[:,:,ic]=OT
#whole matrix production is faster than idx_cp selectively manupilated
for c in cols[spe]:
if c not in V[1]:continue
ic=cols[spe].index(c)
icp=lspec.index(c)
SPECa[:,:,icp]=SPEC[:,:,ic]*ons[:,:,ic]
###Output
_____no_output_____
###Markdown
- Build the new hourly table (`dfT`) - Re-sort the **stack ID (管煙)** list to obtain the ordinal label `CP_NOi` - Time label `idatetime`
###Code
print('pivoting along the C_NO axis')
#forming the DataFrame
CPlist=list(set(df.CP_NO))
CPlist.sort()
pwrt=int(np.log10(len(CPlist))+1)
CPdict={i:CPlist.index(i) for i in CPlist}
df['CP_NOi']=[CPdict[i] for i in df.CP_NO]
idatetime=np.array([i for i in range(ntm) for j in range(nopts)],dtype=int)
dfT=DataFrame({'idatetime':idatetime})
print(pwrt) #debugging
###Output
pivoting along the C_NO axis
5
###Markdown
- Expand the original emission table `df` into `dfT`
###Code
ctmp=np.zeros(shape=(ntm*nopts))
for c in col_mn+col_mx+['CP_NOi']+['ORI_QU1']:
clst=np.array(list(df[c]))
for t in range(ntm):
t1,t2=t*nopts,(t+1)*nopts
a=clst
if c=='CP_NOi':a=t*10**(pwrt)+clst
ctmp[t1:t2]=a
dfT[c]=ctmp
###Output
_____no_output_____
###Markdown
- Flatten the emission matrices and overwrite the original `df` with the pivoted result.
###Code
#dfT.C_NOi=np.array(dfT.C_NOi,dtype=int)
for c in lspec:
icp=lspec.index(c)
dfT[c]=SPECa[:,:,icp].flatten()
#usage: orig df, index, sum_cols, mean_cols, max_cols
df=XY_pivot(dfT,['CP_NOi'],lspec,col_mn+['ORI_QU1'],col_mx).reset_index()
df['CP_NO']=[int(j)%10**pwrt for j in df.CP_NOi]
print(df.head()) #debugging
df.tail() #debugging
###Output
index CP_NOi ALD2 ALDX BENZ CCRS CO CPRM ETH ETHA ... \
0 0 0.0 0.0 0.0 0.0 0.0 4.840398e+06 0.0 0.0 0.0 ...
1 1 1.0 0.0 0.0 0.0 0.0 1.300039e+06 0.0 0.0 0.0 ...
2 2 2.0 0.0 0.0 0.0 0.0 3.352302e+05 0.0 0.0 0.0 ...
3 3 3.0 0.0 0.0 0.0 0.0 2.972102e+06 0.0 0.0 0.0 ...
4 4 4.0 0.0 0.0 0.0 0.0 1.017955e+06 0.0 0.0 0.0 ...
HD1 HY1 ORI_QU1 TEMP UTM_E UTM_N VEL HEI \
0 15.0 5460.0 189.2 202.0 54060.527015 154494.049724 9.1 51.0
1 24.0 8736.0 30.0 120.0 54325.725374 154014.654113 7.3 35.0
2 24.0 8736.0 30.0 150.0 54179.197179 155026.404876 9.1 53.0
3 24.0 8736.0 3.0 150.0 54200.082535 155161.330165 9.1 76.0
4 24.0 8736.0 45.0 150.0 54383.208009 154491.108945 9.1 57.0
DIA CP_NO
0 0.876163 0
1 0.354318 1
2 0.329237 2
3 0.104114 3
4 0.403231 4
[5 rows x 36 columns]
###Markdown
- Consolidate with `pivot_table`
###Code
pv=XY_pivot(df,['CP_NO'],lspec,col_mn+['ORI_QU1'],col_mx).reset_index()
Bdict={CPdict[j]:[bytes(i,encoding='utf-8') for i in j] for j in CPlist}
pv['CP_NOb'] =[Bdict[i] for i in pv.CP_NO]
nopts=len(set(pv))
pv.head()
###Output
_____no_output_____
###Markdown
- Limit the stockpile (PY) source emissions - Save `df` as an `fth` (feather) file and release memory before continuing with the nc file output.
###Code
#blank the PY sources
PY=pv.loc[pv.CP_NOb.map(lambda x:x[8:10]==[b'P', b'Y'])]
nPY=len(PY)
a=np.zeros(ntm*nPY)
for t in range(ntm):
t1,t2=t*nPY,(t+1)*nPY
a[t1:t2]=t*nopts+np.array(PY.index,dtype=int)
for c in colc:
ca=df.loc[a,c]/5.
df.loc[a,c]=ca
df.to_feather('df'+mm+'.fth')
pv.set_index('CP_NO').to_csv('pv'+mm+'.csv')
sys.exit()
###Output
_____no_output_____ |
Youcheng/.ipynb_checkpoints/EntryClassForScraping_99-checkpoint.ipynb | ###Markdown
Entry Class For Scraping Using 9.9 公益 (99 Charity Day) as an example Warning: I may ramble a lot, but there may be key points hidden in the rambling... what can you do~~~ -Chen Xiaoli 0 Prelude~~~ (a.k.a. the warm-up) Web scraping is an important channel for modern data collection, and scraping techniques have kept evolving alongside the rapid growth of the internet. Most websites are not willing to watch their precious data get scraped away page by page, so the anti-scraping tricks in web technology keep improving too; scrapers and anti-scrapers keep raising each other's game in this back-and-forth. This time we make a start by solving the problem we face for the 9.9 Charity Day and expand a little from there, so that our friend Peng Xiaotiantian can see the big picture from a small case and, when meeting other types of pages later, will not panic and will know which direction to look for a solution; of course, looking to teacher **Chen Xiaoli** for a solution is always fine! Before talking about web scraping, we first need to talk about web pages. The data we scrape all comes off web pages, so what format is a web page, and what exactly are we scraping? The pages we see are what the browser renders after parsing the page's code files. There are several kinds of page code files; the basic ones are **html, css, javascript**, where 1. **html** is the overall structure 2. **css** formats that structure and makes it prettier and more expressive 3. **javascript** is the more flexible and powerful tool that carries out richer functionality. In my own words, html builds the skeleton, muscle and skin; css does the cosmetic work, the chin implant, the double eyelids, the hairdo and the dye job; and Javascript is the big wind machine that makes the whole person look more flowing, more dynamic, more charming~~~ OK, OK, looking at that seventeen or eighteen times is enough. Dreams are beautiful; reality is bony. When we talk about web scraping we mainly target html. Hey, hey, don't leave~~~~ Think of it this way: we are sincere people, we like to see the essence behind the surface and grab the key point; with the key point you get Li Yunlong, you get Aragaki Yui. Come on, let's see what html looks like 1 What HTML looks like
###Code
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>
(Notice that every part, once described, is closed with a slash tag, marking that the description of that part is finished, just like saying 'Over' at the end of a walkie-talkie call. Imagine "Director, director, this is Act One, over" - "Director, director, this is Act Two Scene 1, over".)
html is a highly structured language: one layer of structure nests inside another, link after link, and each link describes in detail what content that part should contain and how to write it. The html above builds the structure of a page: what the head is, what the body content is, what the heading (h1) inside the body is, and what the paragraph (p) content is. The browser parses this code, reads and builds the page's tree structure, and renders and paints the page we see.
The data we want to scrape is embedded in these hierarchical structures. (A short parsing sketch follows right after this cell.)
###Output
_____no_output_____
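###Markdown
To make this concrete, here is a minimal parsing sketch (not part of the original lesson) that feeds exactly the small page above to BeautifulSoup and walks its tree; it assumes the `lxml` parser is installed, the same parser used later in this notebook.
###Code
from bs4 import BeautifulSoup
html_doc = """
<html>
 <head><title>Page Title</title></head>
 <body>
 <h1>My First Heading</h1>
 <p>My first paragraph.</p>
 </body>
</html>
"""
soup_demo = BeautifulSoup(html_doc, 'lxml')  #build the parse tree
print(soup_demo.title.text)                  #Page Title
print(soup_demo.body.h1.text)                #My First Heading
print(soup_demo.body.p.text)                 #My first paragraph.
###Output
_____no_output_____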
###Markdown
This is the typical structure; if it looks unfamiliar, no worries, let me come up with an example
###Code
(This is only an analogy of mine; the real page is not like this. It is merely to pay my respects to Steve Jobs.)
For example, suppose the following were a web page made of an image and some text: Jobs' portrait plus the text information
Hypothetically now, the following is hypothetical:
<html>
<image>steve_jobs.jpg</image>
<heading font-size=large position=left> Steve Jobs</heading>
<paragraph font-size=medium position=left> 1955-2011</paragraph>
</html>
###Output
_____no_output_____
###Markdown
The hierarchy inside this one is even simpler
###Code
Now here is the problem: suppose I have 100,000 web pages, each with the portrait (image) of a deceased celebrity plus their name (text) and birth/death years (text), and we want to compile the birth/death information. I cannot possibly copy-paste them one by one by hand. So we need a web scraper to pick the key information out of the pages and store it in our own database (the raw information usually cannot be used directly; it needs further cleaning and organizing first). A sketch of such an extraction follows right after this cell.
###Output
_____no_output_____
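###Markdown
As a hedged sketch of what that extraction could look like: the tag and class names below (`h1.name`, `p.years`) are invented for illustration and are not taken from any real page; the point is only how BeautifulSoup pulls targeted fields out of the tree.
###Code
from bs4 import BeautifulSoup
#hypothetical page layout -- invented for illustration only
fake_page = ('<html><body><img src="steve_jobs.jpg"/>'
             '<h1 class="name">Steve Jobs</h1>'
             '<p class="years">1955-2011</p></body></html>')
def extract_person(html):
    s = BeautifulSoup(html, 'lxml')
    name = s.find('h1', class_='name').text.strip()
    born, died = s.find('p', class_='years').text.strip().split('-')
    return {'name': name, 'born': born, 'died': died}
print(extract_person(fake_page))  #{'name': 'Steve Jobs', 'born': '1955', 'died': '2011'}
###Output
_____no_output_____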
###Markdown
For instance, here we need the scraper to follow the html hierarchy, locate Jobs' name and birth/death years, and copy them out. But first we need to inspect the html code to determine whether this information can be scraped at all and what method to use, then locate the information manually, and then use other tools to turn that manual location into position features a tool can recognize, so the locating can be automated. So in the most basic web scraping, two aspects matter most: by inspecting the html page structure we must understand 1. **the type of data source** 2. **where the data is located** 2 Let's look at the page structure of the 99 Charity Day results page There are several knowledge points here 1. First, how do we view the html code of a given page? Here we go: taking Chrome as an example, copy the URL (http://ssl.gongyi.qq.com/m/201799/realtime.html?tp=2&o=1) into the address bar, hit Enter, then right-click on the page and choose **inspect**; Chrome takes you into the page-code viewer. So why do we see "为了更好的体验,请使用竖屏浏览" ("for a better experience, please browse in portrait mode") on the page? Because this page is designed for mobile phones. No problem: in **inspect** mode, click the "**toggle device toolbar**" (mobile viewing mode) button at the top right and that screen goes away, and we can see the ranking page we care about. 2.1 Inspect the html structure to determine where the data is and where it comes from Click the **Elements** tab at the top right to enter the HTML code view; we can clearly see the HTML hierarchy, click the small triangles to expand an item, and the page on the left highlights the part of the page that the code corresponds to. We can also move the mouse to a spot on the page on the left, right-click, and choose **inspect** from the menu to see the corresponding code in the panel on the right, which is very convenient. The key pieces of information we care about are: 1. foundation name 2. foundation rank 3. number of donors 4. total amount donated In the HTML we can see this information and where it sits, so let's grab this page's html and look carefully at how to scrape it. 2.2 Download and read the page HTML We will use a few classic python packages, including BeautifulSoup, urllib, or requests
###Code
from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import requests
# urllib.request
import re
import json
import json2html
import pandas as pd
#
url = "http://ssl.gongyi.qq.com/m/201799/realtime.html?tp=2&o=1"
page = urllib.request.urlopen(url)
soup = BeautifulSoup(page, 'lxml')
###Output
_____no_output_____
###Markdown
The page's HTML is now stored in the variable soup; let's take a look at what this HTML actually contains
###Code
soup #you can use ctrl+F to search for the keyword "携手" and then see the corresponding HTML code
###Output
_____no_output_____
###Markdown
Strangely, we do not see any numbers here: no concrete count of how many people "joined hands" (携手), no concrete donation totals, not even the foundation names. What is going on? What this shows is that the "携手" counts and donation amounts we see on the page are not static numbers; they must be results pulled from a database. In the HTML we just downloaded, the corresponding spots only contain the addresses and variables used to fetch those numbers from the database. This brings us to a few more knowledge points about web pages 2.3 Data interfaces The Tencent charity-day ranking page we are looking at ends in html, which normally means a static page, but its data must be fetched from a backend database, so the key is to find the interface that talks to that backend database. Such an interface is called an API (Application Programming Interface). So the question becomes: how do we find this API? I found it like this (unfortunately this approach no longer works; perhaps Tencent noticed me scraping their data and shut this API down): 1. Click the Network tab at the top of the inspector and you will see a series of files and their status (Name, Status, Type, Initiator, Size) 2. Click "查看更多" ("show more") at the bottom of the page on the left, and the browser shows the next 20 foundations; at that moment you can see new files and data arriving in the Network tab, so click the newest file. (This method no longer works: the newest file does not show up any more because Tencent changed the page, so you cannot find the API this way on the current page.) (On the old site) the following API could be found in the response headers: http://ssl.gongyi.qq.com/cgi-bin/1799_rank_ngo?type=ngobym&pg=1&md=9&jsoncallback=_martch99_sear_fn_ (now defunct). Here pg=1 stands for page one; change it to pg=2 and you get page two, so we can scrape all of the data this way!
###Code
url_page1='http://ssl.gongyi.qq.com/cgi-bin/1799_rank_ngo?type=ngobym&pg=1&md=9&jsoncallback=_martch99_sear_fn_'
url_page2='http://ssl.gongyi.qq.com/cgi-bin/1799_rank_ngo?type=ngobym&pg=2&md=9&jsoncallback=_martch99_sear_fn_'
url_page3='http://ssl.gongyi.qq.com/cgi-bin/1799_rank_ngo?type=ngobym&pg=3&md=9&jsoncallback=_martch99_sear_fn_'
url_page4='http://ssl.gongyi.qq.com/cgi-bin/1799_rank_ngo?type=ngobym&pg=4&md=9&jsoncallback=_martch99_sear_fn_'
page1 = requests.get(url_page1).text #this probably no longer works: the url is defunct, but the method itself is still valid
page1_json_text = re.search(r'{(.*)\}',page1).group() #extract the text at the position we want this way
#this technique is called a regular expression; its learning curve is a little steep.
#below is the content of the variable page1_json_text. Note the data type: it is json, of the form
#{"key1": value1, "key2": value2, ...}
page1_json_text = '{"code":0,"msg":"success","op_time":"1506283468","data":{"rd":{"tmm":30000051392,"money":30576863074,"projs":6239},"ngo":{"list":[{"id":"105","mn":4456857294,"tms":408417,"title":"%E4%B8%AD%E5%8D%8E%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb02c1d1e85c613266cee8519851c922856aaf089b341875caffb84f65f1a720d6ca8aba90545558a50","desc":"%E4%B8%AD%E5%8D%8E%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%85%B7%E6%9C%89%E5%85%AC%E5%8B%9F%E8%B5%84%E6%A0%BC%E7%9A%84%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E3%80%82%E6%8C%89%E7%85%A7%E6%B0%91%E9%97%B4%E6%80%A7%E3%80%81%E8%B5%84%E5%8A%A9%E5%9E%8B%E3%80%81%E5%90%88%E4%BD%9C%E5%8A%9E%E3%80%81%E5%85%A8%E9%80%8F%E6%98%8E%E7%9A%84%E6%96%B9%E9%92%88%EF%BC%8C%E5%AF%B9%E5%9B%B0%E5%A2%83%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E8%BF%9B%E8%A1%8C%E6%95%91%E5%8A%A9%E3%80%82","uin":"2724954300","rk":"1"},{"id":"100","mn":3059102635,"tms":337033,"title":"%E4%B8%AD%E5%9B%BD%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fe2f18f1858ae40342a1d6b32da1b99058fbef4f3973897f35771419640797199588681c158878e89a3fec12800d0dceb","desc":"%E6%88%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2005%E5%B9%B46%E6%9C%8814%E6%97%A5%EF%BC%8C%E5%8E%9F%E5%90%8D%E4%B8%AD%E5%9B%BD%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E6%95%99%E8%82%B2%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C2011%E5%B9%B47%E6%9C%8815%E6%97%A5%E7%BB%8F%E6%B0%91%E6%94%BF%E9%83%A8%E6%89%B9%E5%87%86%E6%9B%B4%E5%90%8D%E4%B8%BA%E2%80%9C%E4%B8%AD%E5%9B%BD%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E2%80%9D%E3%80%82%E8%8B%B1%E6%96%87%E8%AF%91%E5%90%8D%EF%BC%9AChina%20Social%20Welfare%20Foundation%EF%BC%8C%E7%BC%A9%E5%86%99%EF%BC%9ACSWF%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E6%80%A7%E8%B4%A8%EF%BC%9A%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%8F%91%E8%B5%B7%E4%BA%BA%EF%BC%9A%E6%B0%91%E6%94%BF%E9%83%A8%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E5%AE%97%E6%97%A8%EF%BC%9A%E4%BB%A5%E6%B0%91%E4%B8%BA%E6%9C%AC%E3%80%81%E5%85%B3%E6%B3%A8%E6%B0%91%E7%94%9F%E3%80%81%E6%89%B6%E5%8D%B1%E6%B5%8E%E5%9B%B0%E3%80%81%E5%85%B1%E4%BA%AB%E5%92%8C%E8%B0%90%EF%BC%8C%E6%9C%8D%E5%8A%A1%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E3%80%82","uin":"2693566221","rk":"2"},{"id":"83","mn":1692998810,"tms":228060,"title":"%E4%B8%AD%E5%9B%BD%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F11ad1700130bf2720d6d95d5548081508562893642109561f4f8442fdbe654a19f49e217381c5117","desc":"%E4%B8%AD%E5%9B%BD%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E5%9F%BA%E9%87%91%E4%BC%9A%28%E7%AE%80%E7%A7%B0%EF%BC%9A%E4%B8%AD%E5%9B%BD%E5%84%BF%E5%9F%BA%E4%BC%9A%29%E6%88%90%E7%AB%8B%E4%BA%8E1981%E5%B9%B47%E6%9C%8828%E6%97%A5%EF%BC%8C%E6%98%AF%E6%96%B0%E4%B8%AD%E5%9B%BD%E6%88%90%E7%AB%8B%E5%90%8E%E7%9A%84%E7%AC%AC%E4%B8%80%E5%AE%B6%E5%9B%BD%E5%AE%B6%E7%BA%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%B8%AD%E5%9B%BD%E5%84%BF%E5%9F%BA%E4%BC%9A%E4%BB%A5%E7%AB%AD%E8%AF%9A%E6%9C%8D%E5%8A%A1%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E6%95%99%E8%82%B2%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E3%80%81%E6%9C%8D%E5%8A%A1%E7%A4%BE%E4%BC%9A%E3%80%81%E6%9C%8D%E5%8A%A1%E5%A4%A7%E5%B1%80%E4%B8%BA%E5%AE%97%E6%97%A8%EF%BC%8C%E7%B2%BE%E5%BF%83%E6%89%93%E9%80%A0%E5%92%8C%E6%B7%B1%E5%8C%96%E6%8B%93%E5%B1%95%E4%BA%86%22%E
6%98%A5%E8%95%BE%E8%AE%A1%E5%88%92%22%E3%80%81%22%E5%AE%89%E5%BA%B7%E8%AE%A1%E5%88%92%22%E3%80%81%22%E5%84%BF%E7%AB%A5%E5%BF%AB%E4%B9%90%E5%AE%B6%E5%9B%AD%22%E3%80%81%E2%80%9CHELLO%E5%B0%8F%E5%AD%A9%E2%80%9D%E7%AD%89%E5%93%81%E7%89%8C%E9%A1%B9%E7%9B%AE%E3%80%82%E5%BD%A2%E6%88%90%E5%84%BF%E7%AB%A5%E6%95%99%E8%82%B2%E8%B5%84%E5%8A%A9%E3%80%81%E5%A4%A7%E7%97%85%E6%95%91%E5%8A%A9%E3%80%81%E5%AE%89%E5%85%A8%E5%81%A5%E5%BA%B7%E3%80%81%E7%81%BE%E5%90%8E%E7%B4%A7%E6%80%A5%E6%8F%B4%E5%8A%A9%E7%AB%8B%E4%BD%93%E5%8C%96%E8%B5%84%E5%8A%A9%E6%9C%8D%E5%8A%A1%E4%BD%93%E7%B3%BB%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E8%AF%84%E4%B8%BA5A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82","uin":"611990116","rk":"3"},{"id":"163","mn":1433104147,"tms":93774,"title":"%E4%B8%8A%E6%B5%B7%E4%BB%81%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb044e49b84f82fcafae0161c35900d02bce438319ba79f5180074efdadb8558a03ff3ab90c23039d52","desc":"%E4%B8%8A%E6%B5%B7%E4%BB%81%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E7%88%B1%E5%BE%B7%E5%8F%91%E8%B5%B7%EF%BC%8C2011%E5%B9%B412%E6%9C%88%E5%9C%A8%E4%B8%8A%E6%B5%B7%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E6%94%AF%E6%8C%81%E5%9E%8B%E6%B0%91%E9%97%B4%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%BB%81%E5%BE%B7%E7%AB%8B%E8%B6%B3%E4%B8%8A%E6%B5%B7%EF%BC%8C%E8%BE%90%E5%B0%84%E5%85%A8%E5%9B%BD%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%94%AF%E6%8C%81%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%EF%BC%8C%E6%8E%A8%E5%8A%A8%E5%85%AC%E7%9B%8A%E5%88%9B%E6%96%B0%EF%BC%8C%E4%BF%83%E8%BF%9B%E8%A1%8C%E4%B8%9A%E5%8F%91%E5%B1%95%E3%80%82","uin":"3253755055","rk":"4"},{"id":"245","mn":1367782623,"tms":593776,"title":"%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%88%B1%E4%BD%91%E6%9C%AA%E6%9D%A5%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F79d2cd69fead5672fb9f05be86aa5d74dfaf27ad20e3b59f6dc0916fb3f4e38812419bb79fcfe40895bdd78cf6aa1ddf","desc":"%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%88%B1%E4%BD%91%E6%9C%AA%E6%9D%A5%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%9C%A8%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%BA%8E%E4%B8%A4%E5%A4%A7%E5%85%AC%E7%9B%8A%E9%A2%86%E5%9F%9F%EF%BC%9A%E5%AF%B9%E5%A4%84%E4%BA%8E%E5%9B%B0%E5%A2%83%E5%84%BF%E7%AB%A5%E7%9A%84%E5%85%A8%E6%96%B9%E4%BD%8D%E6%95%91%E5%8A%A9%EF%BC%9B%E6%94%AF%E6%8C%81%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E9%A2%86%E5%9F%9F%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E7%9A%84%E5%8F%91%E5%B1%95%E3%80%82","uin":"3257591536","rk":"5"},{"id":"144","mn":1340834819,"tms":83629,"title":"%E6%97%A0%E9%94%A1%E7%81%B5%E5%B1%B1%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F1e38afedb5ae397e937c4ce112c51b2d0653d5c99cc64f45bda46fa99dc2f2a17667789348caaae2f76ca91f2f25fa03","desc":"%E8%B7%B5%E5%B1%A5%E4%BA%BA%E9%97%B4%E4%BD%9B%E6%95%99%EF%BC%8C%E5%87%80%E5%8C%96%E4%B8%96%E9%81%93%E4%BA%BA%E5%BF%83%E3%80%82%E7%81%B5%E5%B1%B1%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2004%E5%B9%B412%E6%9C%88%EF%BC%8C%E6%98%AF%E7%94%B1%E6%97%A0%E9%94%A1%E7%81%B5%E5%B1%B1%E6%96%87%E5%8C%96%E6%97%85%E6%B8%B8%E9%9B%86%E5%9B%A2%E6%9C%89%E9%99%90%E5%85%AC%E5%8F%B8%E5%92%8C%E6%97%A0%E9%94%A1%E5%B8%82%E7%A5%A5%E7%AC%A6%E7%A6%85%E5%AF%BA%E5%8F%91%E8%B5%B7%EF%BC%8C%E5%9C%A8%E6%B1%9F%E8%8B%8F%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%B3%A8%
E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9E%8B%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%8E%B0%E5%9B%B4%E7%BB%95%E4%BA%94%E5%A4%A7%E5%B9%B3%E5%8F%B0%E5%BC%80%E5%B1%95%E9%A1%B9%E7%9B%AE%EF%BC%9A%E9%9D%92%E5%B9%B4%E6%88%90%E9%95%BF%E5%B9%B3%E5%8F%B0%E3%80%81%E8%A1%8C%E4%B8%9A%E5%8F%91%E5%B1%95%E5%B9%B3%E5%8F%B0%E3%80%81%E7%A4%BE%E4%BC%9A%E5%88%9B%E6%96%B0%E5%B9%B3%E5%8F%B0%E3%80%81%E7%A4%BE%E5%8C%BA%E6%B2%BB%E7%90%86%E5%B9%B3%E5%8F%B0%E5%92%8C%E5%9B%BD%E9%99%85%E4%BA%A4%E6%B5%81%E6%8F%B4%E5%8A%A9%E5%B9%B3%E5%8F%B0%E3%80%82%E7%81%B5%E5%B1%B1%E7%9A%84%E5%B7%A5%E4%BD%9C%E5%8E%9F%E5%88%99%E6%98%AF%EF%BC%9A%E8%A7%84%E8%8C%83%E3%80%81%E4%B8%93%E4%B8%9A%E3%80%81%E5%93%81%E7%89%8C%E3%80%81%E9%80%8F%E6%98%8E%E3%80%822010%E5%B9%B4%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E8%AF%84%E4%B8%BA%E2%80%9C%E5%85%A8%E5%9B%BD%E5%85%88%E8%BF%9B%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E2%80%9D%E3%80%822015%E5%B9%B4%EF%BC%8C%E5%9C%A8%E7%AC%AC%E4%BA%94%E5%B1%8A%E4%B8%AD%E5%9B%BD%E5%85%AC%E7%9B%8A%E8%8A%82%E8%8E%B7%E5%BE%97%E2%80%9C2015%E5%B9%B4%E5%BA%A6%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%E5%A5%96%E2%80%9D%20%E3%80%82","uin":"2997007428","rk":"6"},{"id":"40","mn":1292478766,"tms":97226,"title":"%E7%88%B1%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3e28f14aa051684286b10ad99b1ac2b070f3e555d54ca27544c0477d961f15579aa6deebbe88ee1ee1256b4ead56b434","desc":"%E7%88%B1%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1985%E5%B9%B44%E6%9C%88%EF%BC%8C%E6%97%A8%E5%9C%A8%E4%BF%83%E8%BF%9B%E6%88%91%E5%9B%BD%E7%9A%84%E6%95%99%E8%82%B2%E3%80%81%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E3%80%81%E5%8C%BB%E7%96%97%E5%8D%AB%E7%94%9F%E3%80%81%E7%A4%BE%E5%8C%BA%E5%8F%91%E5%B1%95%E4%B8%8E%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4%E3%80%81%E7%81%BE%E5%AE%B3%E7%AE%A1%E7%90%86%E7%AD%89%E5%90%84%E9%A1%B9%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%EF%BC%8C%E8%BF%84%E4%BB%8A%E4%B8%BA%E6%AD%A2%EF%BC%8C%E9%A1%B9%E7%9B%AE%E5%8C%BA%E5%9F%9F%E7%B4%AF%E8%AE%A1%E8%A6%86%E7%9B%96%E5%85%A8%E5%9B%BD31%E4%B8%AA%E7%9C%81%E3%80%81%E5%B8%82%E3%80%81%E8%87%AA%E6%B2%BB%E5%8C%BA%EF%BC%8C%E9%80%BE%E5%8D%83%E4%B8%87%E4%BA%BA%E5%8F%97%E7%9B%8A%E3%80%82","uin":"95001117","rk":"7"},{"id":"21","mn":1162045890,"tms":115886,"title":"%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%A5%B3%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F18908ac1703cb32b08e108143e2737cd1f1d625578e0c18fbd4affc2b2230251b52b515ed7bbadc19607af09a6658a45","desc":"%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%A5%B3%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E7%AE%80%E7%A7%B0%E2%80%9C%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E2%80%9D%E8%8B%B1%E6%96%87%E5%90%8D%E7%A7%B0%EF%BC%9AChina%20Women%26amp%3Bamp%3B%23039%3Bs%20Development%20Foundation%EF%BC%8C%E7%BC%A9%E5%86%99%EF%BC%9A%20CWDF%E3%80%82%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E6%98%AF5A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C1988%E5%B9%B412%E6%9C%88%E7%94%B1%E5%85%A8%E5%9B%BD%E5%A6%87%E8%81%94%E5%8F%91%E8%B5%B7%E6%88%90%E7%AB%8B%E3%80%82%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E6%98%AF%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%85%B6%E9%9D%A2%E5%90%91%E5%85%AC%E4%BC%97%E5%8B%9F%E6%8D%90%E7%9A%84%E5%9C%B0%E5%9F%9F%E6%98%AF%E4%B8%AD%E5%9B%BD%E4%BB%A5%E5%8F%8A%E8%AE%B8%E5%8F%AF%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E5%8B%9F%E6%8D%90%E7%9A%84%E5%9B%BD%E5%AE%B6%E5%92%8C%E5%9C%B0%E5%8C%BA%E3%80%82%0A%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9
A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E7%BB%B4%E6%8A%A4%E5%A6%87%E5%A5%B3%E6%9D%83%E7%9B%8A%EF%BC%8C%E6%8F%90%E9%AB%98%E5%A6%87%E5%A5%B3%E7%B4%A0%E8%B4%A8%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%A6%87%E5%A5%B3%E5%92%8C%E5%A6%87%E5%A5%B3%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%EF%BC%8C%E4%B8%BA%E6%9E%84%E5%BB%BA%E5%92%8C%E8%B0%90%E7%A4%BE%E4%BC%9A%E4%BD%9C%E5%87%BA%E5%BA%94%E6%9C%89%E7%9A%84%E8%B4%A1%E7%8C%AE%E3%80%82%0A%E9%95%BF%E6%9C%9F%E4%BB%A5%E6%9D%A5%EF%BC%8C%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E7%9D%80%E7%9C%BC%E4%BA%8E%E5%A6%87%E5%A5%B3%E7%BE%A4%E4%BC%97%E6%9C%80%E5%85%B3%E5%BF%83%E3%80%81%E6%9C%80%E7%9B%B4%E6%8E%A5%E3%80%81%E6%9C%80%E7%8E%B0%E5%AE%9E%E7%9A%84%E5%88%A9%E7%9B%8A%E9%97%AE%E9%A2%98%EF%BC%8C%E5%9C%A8%E5%9B%B4%E7%BB%95%E5%A6%87%E5%A5%B3%E6%89%B6%E8%B4%AB%E3%80%81%E5%A6%87%E5%A5%B3%E5%81%A5%E5%BA%B7%E3%80%81%E5%A5%B3%E6%80%A7%E5%88%9B%E4%B8%9A%E7%AD%89%E6%96%B9%E9%9D%A2%EF%BC%8C%E5%AE%9E%E6%96%BD%E4%BA%86%E4%B8%80%E7%B3%BB%E5%88%97%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E9%A1%B9%E7%9B%AE%EF%BC%8C%E5%8F%96%E5%BE%97%E4%BA%86%E6%98%8E%E6%98%BE%E7%9A%84%E7%A4%BE%E4%BC%9A%E6%88%90%E6%95%88%EF%BC%8C%E7%BB%84%E7%BB%87%E5%AE%9E%E6%96%BD%E7%9A%84%E2%80%9C%E6%AF%8D%E4%BA%B2%E5%B0%8F%E9%A2%9D%E5%BE%AA%E7%8E%AF%E2%80%9D%E3%80%81%E2%80%9C%E6%AF%8D%E4%BA%B2%E5%81%A5%E5%BA%B7%E5%BF%AB%E8%BD%A6%E2%80%9D%E3%80%81%E2%80%9C%E6%AF%8D%E4%BA%B2%E6%B0%B4%E7%AA%96%E2%80%9D%E3%80%81%20%E2%80%9C%E8%B4%AB%E5%9B%B0%E8%8B%B1%E6%A8%A1%E6%AF%8D%E4%BA%B2%E8%B5%84%E5%8A%A9%E8%AE%A1%E5%88%92%E2%80%9D%E3%80%81%E2%80%9C%E6%AF%8D%E4%BA%B2%E9%82%AE%E5%8C%85%E2%80%9D5%E4%B8%AA%E9%A1%B9%E7%9B%AE%E5%88%86%E5%88%AB%E8%8E%B7%E5%BE%97%E4%B8%AD%E5%9B%BD%E6%94%BF%E5%BA%9C%E6%9C%80%E9%AB%98%E6%85%88%E5%96%84%E5%A5%96%E9%A1%B9%E2%80%94%E4%B8%AD%E5%8D%8E%E6%85%88%E5%96%84%E5%A5%96%E3%80%82","uin":"2081457189","rk":"8"},{"id":"78","mn":1128105177,"tms":213704,"title":"%E4%B8%AD%E5%9B%BD%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3c9f5dac15075332f0af7d7ef2e5d7f537068afaf6b01ae205da94bd0c97331eef8a40321163fdb39c4886bcee050fde","desc":"%E4%B8%AD%E5%9B%BD%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A%28%E8%8B%B1%E6%96%87%E5%90%8D%3AChina%20Foundation%20for%20Poverty%20Alleviation%EF%BC%8C%E7%BC%A9%E5%86%99%3ACFPA%29%E6%88%90%E7%AB%8B%E4%BA%8E1989%E5%B9%B43%E6%9C%88%EF%BC%8C%E7%94%B1%E5%9B%BD%E5%8A%A1%E9%99%A2%E6%89%B6%E8%B4%AB%E5%BC%80%E5%8F%91%E9%A2%86%E5%AF%BC%E5%B0%8F%E7%BB%84%E5%8A%9E%E5%85%AC%E5%AE%A4%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%98%AF%E5%AF%B9%E6%B5%B7%E5%86%85%E5%A4%96%E6%8D%90%E8%B5%A0%E5%9F%BA%E9%87%91%E8%BF%9B%E8%A1%8C%E7%AE%A1%E7%90%86%E7%9A%84%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%EF%BC%8C%E6%98%AF%E7%8B%AC%E7%AB%8B%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E6%B3%95%E4%BA%BA%E3%80%82","uin":"1162992508","rk":"9"},{"id":"102","mn":999433142,"tms":95211,"title":"%E4%B8%8A%E6%B5%B7%E7%9C%9F%E7%88%B1%E6%A2%A6%E6%83%B3%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0cdb413bd5e29bee196b6862e708ea7c1e3e2cd95b518a222a86e678416094c0d8c7d74c09c90eac3","desc":"%E4%B8%8A%E6%B5%B7%E7%9C%9F%E7%88%B1%E6%A2%A6%E6%83%B3%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E9%87%91%E8%9E%8D%E6%9C%BA%E6%9E%84%E5%92%8C%E4%B8%8A%E5%B8%82%E5%85%AC%E5%8F%B8%E7%9A%84%E4%B8%93%E4%B8%9A%E7%AE%A1%E7%90%86%E4%BA%BA%E5%91%98%E5%8F%91%E8%B5%B7%E5%92%8C%E8%BF%90%E8%90%A5%E7%9A%84%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E4%BF%83%E8%BF
%9B%E6%95%99%E8%82%B2%E5%9D%87%E8%A1%A1%EF%BC%8C%E5%8F%91%E5%B1%95%E7%B4%A0%E5%85%BB%E6%95%99%E8%82%B2%EF%BC%8C%E5%B8%AE%E5%8A%A9%E5%AD%A9%E5%AD%90%E8%87%AA%E4%BF%A1%E3%80%81%E4%BB%8E%E5%AE%B9%E3%80%81%E6%9C%89%E5%B0%8A%E4%B8%A5%E5%9C%B0%E6%88%90%E9%95%BF%E3%80%82%0A%E7%9C%9F%E7%88%B1%E6%A2%A6%E6%83%B3%E9%A6%96%E5%88%9B%E2%80%9C%E6%A2%A6%E6%83%B3%E4%B8%AD%E5%BF%83%E2%80%9D%E7%B4%A0%E5%85%BB%E6%95%99%E8%82%B2%E6%9C%8D%E5%8A%A1%E4%BD%93%E7%B3%BB%EF%BC%8C%E8%B7%A8%E7%95%8C%E5%85%B1%E5%88%9B%E6%95%99%E8%82%B2%E7%94%9F%E6%80%81%EF%BC%8C%E5%B7%B2%E5%9C%A8%E5%85%A8%E5%9B%BD31%E4%B8%AA%E7%9C%81%E4%B8%BA280%E4%B8%87%E5%B8%88%E7%94%9F%E6%8F%90%E4%BE%9B%E5%85%AC%E7%9B%8A%E4%BA%A7%E5%93%81%E5%92%8C%E6%9C%8D%E5%8A%A1%E3%80%82","uin":"2271431773","rk":"10"},{"id":"16","mn":835895133,"tms":99731,"title":"%E6%B7%B1%E5%9C%B3%E5%A3%B9%E5%9F%BA%E9%87%91%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F7767c9653cd14ee9e67532e45389742881e2c231861883f935f3b01d6beeb0f74b78a0c902325169","desc":"%E5%A3%B9%E5%9F%BA%E9%87%91%E6%98%AF%E6%9D%8E%E8%BF%9E%E6%9D%B0%E5%85%88%E7%94%9F2007%E5%B9%B44%E6%9C%88%E5%88%9B%E7%AB%8B%E7%9A%84%E5%88%9B%E6%96%B0%E5%9E%8B%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C2011%E5%B9%B41%E6%9C%88%E4%BD%9C%E4%B8%BA%E4%B8%AD%E5%9B%BD%E7%AC%AC%E4%B8%80%E5%AE%B6%E6%B0%91%E9%97%B4%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E8%90%BD%E6%88%B7%E6%B7%B1%E5%9C%B3%E3%80%82%E5%A3%B9%E5%9F%BA%E9%87%91%E4%BB%A5%E2%80%9C%E5%B0%BD%E6%88%91%E6%89%80%E8%83%BD%EF%BC%8C%E4%BA%BA%E4%BA%BA%E5%85%AC%E7%9B%8A%E2%80%9D%E4%B8%BA%E6%84%BF%E6%99%AF%EF%BC%8C%E6%90%AD%E5%BB%BA%E4%B8%93%E4%B8%9A%E9%80%8F%E6%98%8E%E7%9A%84%E5%85%AC%E7%9B%8A%E5%B9%B3%E5%8F%B0%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%BA%8E%E7%81%BE%E5%AE%B3%E6%95%91%E5%8A%A9%E3%80%81%E5%84%BF%E7%AB%A5%E5%85%B3%E6%80%80%E4%B8%8E%E5%8F%91%E5%B1%95%E3%80%81%E5%85%AC%E7%9B%8A%E6%94%AF%E6%8C%81%E4%B8%8E%E5%88%9B%E6%96%B0%E4%B8%89%E5%A4%A7%E9%A2%86%E5%9F%9F%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%88%90%E4%B8%BA%E4%B8%AD%E5%9B%BD%E5%85%AC%E7%9B%8A%E7%9A%84%E5%BC%80%E6%8B%93%E8%80%85%E3%80%81%E5%88%9B%E6%96%B0%E8%80%85%E5%92%8C%E6%8E%A8%E5%8A%A8%E8%80%85%E3%80%82%0A","uin":"95001115","rk":"11"},{"id":"103","mn":824865299,"tms":59315,"title":"%E5%8C%97%E4%BA%AC%E6%96%B0%E9%98%B3%E5%85%89%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F38ca03395aad2883343ff9c3884555db36c4eeebe849e78abb864fa61898f499cbaddcd6d2d7be63171de2c66ddcbba3","desc":"%E5%8C%97%E4%BA%AC%E6%96%B0%E9%98%B3%E5%85%89%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2009%E5%B9%B44%E6%9C%88%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%B8%93%E4%B8%9A%E6%8A%97%E5%87%BB%E7%99%BD%E8%A1%80%E7%97%85%EF%BC%8C%E4%B8%BA%E6%82%A3%E8%80%85%E6%8F%90%E4%BE%9B%E5%9B%BD%E9%99%85%E9%AA%A8%E9%AB%93%E9%85%8D%E5%9E%8B%E6%A3%80%E7%B4%A2%E3%80%81%E7%9B%B4%E6%8E%A5%E7%BB%8F%E6%B5%8E%E8%B5%84%E5%8A%A9%E3%80%81%E4%BF%A1%E6%81%AF%E6%9C%8D%E5%8A%A1%E3%80%81%E5%8C%BB%E5%AD%A6%E7%A0%94%E7%A9%B6%E5%92%8C%E5%8C%BB%E7%94%9F%E8%BF%9B%E4%BF%AE%E6%94%AF%E6%8C%81%E3%80%81%E6%94%BF%E7%AD%96%E5%80%A1%E5%AF%BC%E7%AD%89%E5%A4%9A%E7%A7%8D%E6%9C%8D%E5%8A%A1%E3%80%82%0A","uin":"2915144362","rk":"12"},{"id":"145","mn":772131120,"tms":188274,"title":"%E9%98%BF%E6%8B%89%E5%96%84SEE%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F64d34056c1f59af7416974df7d83f2639fbbe21f1ef6b0ee5a15c1ce9c7ac5e146484483c632a003fbac86c4529ab663","desc":"%E9%98%BF%E6%8B%89%E5%96%84SEE%E5%9F%BA%E9%87%91%E4%BC%9A%E8%87%B4%E5%
8A%9B%E4%BA%8E%E6%89%93%E9%80%A0%E4%BC%81%E4%B8%9A%E5%AE%B6%E3%80%81NGO%E3%80%81%E5%85%AC%E4%BC%97%E5%85%B1%E5%90%8C%E5%8F%82%E4%B8%8E%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%8C%96%E4%BF%9D%E6%8A%A4%E5%B9%B3%E5%8F%B0%EF%BC%8C%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%9C%B0%E4%BF%9D%E6%8A%A4%E7%94%9F%E6%80%81%E7%8E%AF%E5%A2%83%EF%BC%8C%E5%85%B1%E5%90%8C%E5%AE%88%E6%8A%A4%E7%A2%A7%E6%B0%B4%E8%93%9D%E5%A4%A9%E3%80%82","uin":"2382868980","rk":"13"},{"id":"101","mn":632056442,"tms":55558,"title":"%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%85%88%E5%96%84%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F748864bd25db5ee05df00b58f780b77c4053ec6b271b9b842e38020f445c787accf44ae89b2614a8","desc":"%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%85%88%E5%96%84%E4%BC%9A%EF%BC%88%E8%8B%B1%E6%96%87%E5%90%8DSHENZHEN%20CHARITY%20FEDERATION%EF%BC%89%E6%98%AF%E5%9C%A8%E5%B8%82%E5%A7%94%E3%80%81%E5%B8%82%E6%94%BF%E5%BA%9C%E9%AB%98%E5%BA%A6%E9%87%8D%E8%A7%86%E5%92%8C%E6%94%AF%E6%8C%81%E4%B8%8B%EF%BC%8C%E7%94%B1%E7%A4%BE%E4%BC%9A%E5%90%84%E7%95%8C%E7%83%AD%E5%BF%83%E4%BA%8E%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E7%9A%84%E6%9C%BA%E6%9E%84%E3%80%81%E5%9B%A2%E4%BD%93%E5%92%8C%E4%B8%AA%E4%BA%BA%E7%BB%84%E6%88%90%EF%BC%8C%E5%8F%91%E5%8A%A8%E5%92%8C%E6%8E%A5%E5%8F%97%E5%9B%BD%E5%86%85%E5%A4%96%E7%BB%84%E7%BB%87%E5%92%8C%E4%B8%AA%E4%BA%BA%EF%BC%8C%E8%87%AA%E6%84%BF%E5%90%91%E6%B7%B1%E5%9C%B3%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E6%8D%90%E8%B5%A0%E6%88%96%E8%B5%84%E5%8A%A9%E8%B4%A2%E4%BA%A7%E5%B9%B6%E8%BF%9B%E8%A1%8C%E7%AE%A1%E7%90%86%E5%92%8C%E8%BF%90%E7%94%A8%E7%9A%84%E3%80%81%E5%85%B7%E6%9C%89%E5%9B%BD%E5%AE%B6%E5%85%AC%E5%8B%9F%E8%B5%84%E8%B4%A8%E5%92%8C%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E5%85%AC%E7%9B%8A%E6%80%A7%E3%80%81%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E3%80%82","uin":"2779160918","rk":"14"},{"id":"79","mn":498402175,"tms":51523,"title":"%E4%B8%AD%E5%8D%8E%E6%80%9D%E6%BA%90%E5%B7%A5%E7%A8%8B%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F74371d8caf56a60900c64c4ca020f087b3d1098c9a209036b43efa2b6f367297c9ff53354646ec81","desc":"%E4%B8%AD%E5%8D%8E%E6%80%9D%E6%BA%90%E5%B7%A5%E7%A8%8B%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E4%B8%AD%E5%85%B1%E4%B8%AD%E5%A4%AE%E7%BB%9F%E6%88%98%E9%83%A8%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%B0%91%E5%BB%BA%E4%B8%AD%E5%A4%AE%E5%8F%91%E8%B5%B7%E5%B9%B6%E8%B4%9F%E8%B4%A3%E6%97%A5%E5%B8%B8%E7%AE%A1%E7%90%86%EF%BC%8C%E4%BA%8E2007%E5%B9%B43%E6%9C%88%E5%9C%A8%E6%B0%91%E6%94%BF%E9%83%A8%E6%AD%A3%E5%BC%8F%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%A8%E5%9B%BD%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%AE%83%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%E8%B5%84%E5%8A%A9%E4%BB%A5%E6%89%B6%E8%B4%AB%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E4%B8%BA%E4%B8%BB%E7%9A%84%E2%80%9C%E6%80%9D%E6%BA%90%E5%B7%A5%E7%A8%8B%E2%80%9D%E6%B4%BB%E5%8A%A8%EF%BC%8C%E5%B8%AE%E5%8A%A9%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E8%A7%A3%E5%86%B3%E7%94%9F%E4%BA%A7%E7%94%9F%E6%B4%BB%E5%9B%B0%E9%9A%BE%EF%BC%8C%E4%BF%83%E8%BF%9B%E4%B8%AD%E5%9B%BD%E8%B4%AB%E5%9B%B0%E5%9C%B0%E5%8C%BA%E7%BB%8F%E6%B5%8E%E5%92%8C%E7%A4%BE%E4%BC%9A%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E3%80%82%E7%8E%B0%E4%BB%BB%E7%90%86%E4%BA%8B%E9%95%BF%E4%B8%BA%E5%85%A8%E5%9B%BD%E4%BA%BA%E5%A4%A7%E5%B8%B8%E5%A7%94%E4%BC%9A%E5%89%AF%E5%A7%94%E5%91%98%E9%95%BF%E3%80%81%E6%B0%91%E5%BB%BA%E4%B8%AD%E5%A4%AE%E4%B8%BB%E5%B8%AD%E9%99%88%E6%98%8C%E6%99%BA%E3%80%82","uin":"2806409577","rk":"15"},{"id":"232","mn":476635139,"tms":35338,"title":"%E9%99%95%E8%A5%BF
%E7%9C%81%E6%85%88%E5%96%84%E5%8D%8F%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3dbf6daf6feea59d70db851052c4c0d16533dc3c208c7836b31a54d4738f63bbe421424200dcd425","desc":"%E5%9D%9A%E5%AE%88%E2%80%9C%E5%85%AC%E7%9B%8A%E8%87%B3%E4%B8%8A%E3%80%81%E4%BB%A5%E4%BA%BA%E4%B8%BA%E6%9C%AC%E2%80%9D%E7%9A%84%E7%90%86%E5%BF%B5%EF%BC%8C%E7%A7%89%E6%8C%81%E2%80%9C%E5%AE%89%E8%80%81%E6%8A%9A%E5%AD%A4%E3%80%81%E6%B5%8E%E8%B4%AB%E8%A7%A3%E5%9B%B0%E2%80%9D%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%8C%E4%BB%A5%E7%88%B1%E5%BF%83%E4%B8%BA%E5%8A%A8%E5%8A%9B%EF%BC%8C%E4%BB%A5%E5%8B%9F%E6%8D%90%E4%B8%BA%E6%89%8B%E6%AE%B5%EF%BC%8C%E4%BB%A5%E5%B8%AE%E5%8A%A9%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E4%B8%BA%E7%9B%AE%E7%9A%84%EF%BC%8C%E5%8D%93%E6%9C%89%E6%88%90%E6%95%88%E5%9C%B0%E5%BC%80%E5%B1%95%E4%BA%86%E5%90%84%E9%A1%B9%E6%85%88%E5%96%84%E6%B4%BB%E5%8A%A8%EF%BC%8C%E7%B4%AF%E8%AE%A1%E5%8B%9F%E9%9B%86%E5%96%84%E6%AC%BE%EF%BC%88%E5%8C%85%E6%8B%AC%E7%89%A9%E8%B5%84%E6%8A%98%E4%BB%B7%EF%BC%8910%E4%BA%BF%E4%BD%99%E5%85%83%E3%80%82%E6%83%A0%E5%8F%8A%E5%9B%B0%E9%9A%BE%E7%BE%A4%E4%BC%97700%E4%BD%99%E4%B8%87%E4%BA%BA%E3%80%82","uin":"157755130","rk":"16"},{"id":"149","mn":443166863,"tms":30858,"title":"%E6%88%90%E9%83%BD%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb09601def96e39bef44a172fd82d3d71e38e8b23da99f1c5c612c6a43959c1bb451b77a5cf1a4b5925","desc":"%E6%88%90%E9%83%BD%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E4%BA%8E2009%E5%B9%B412%E6%9C%88%E6%88%90%E7%AB%8B%EF%BC%8C%E5%AE%83%E7%9A%84%E5%89%8D%E8%BA%AB%E2%80%94%E6%88%90%E9%83%BD%E6%85%88%E5%96%84%E4%BC%9A%E4%BA%8E1995%E5%B9%B45%E6%9C%88%E6%88%90%E7%AB%8B%E3%80%82%E5%A4%9A%E5%B9%B4%E6%9D%A5%EF%BC%8C%E5%9C%A8%E5%81%9A%E5%A5%BD%E6%97%A5%E5%B8%B8%E6%89%B6%E8%80%81%E3%80%81%E5%8A%A9%E6%AE%8B%E3%80%81%E6%95%91%E5%AD%A4%E3%80%81%E6%B5%8E%E5%9B%B0%E3%80%81%E8%B5%88%E7%81%BE%E7%AD%89%E6%95%91%E5%8A%A9%E5%B7%A5%E4%BD%9C%E7%9A%84%E5%90%8C%E6%97%B6%EF%BC%8C%E5%9D%9A%E6%8C%81%E5%BC%80%E6%8B%93%E5%88%9B%E6%96%B0%EF%BC%8C%E7%9D%80%E5%8A%9B%E5%AE%9E%E6%96%BD%E4%BA%86%E4%BB%A5%E2%80%9C%E9%98%B3%E5%85%89%E2%80%9D%E5%91%BD%E5%90%8D%E7%9A%84%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E7%B3%BB%E5%88%97%E5%93%81%E7%89%8C%EF%BC%8C%E5%88%9D%E6%AD%A5%E5%BD%A2%E6%88%90%E4%BA%86%E4%BB%A5%E5%B8%AE%E5%9B%B0%E5%8A%A9%E5%AD%A6%E4%B8%BA%E4%B8%BB%EF%BC%8C%E6%B6%B5%E7%9B%96%E5%BB%BA%E6%88%BF%E3%80%81%E5%8A%A9%E8%80%81%E3%80%81%E6%89%B6%E8%B4%AB%E7%AD%89%E6%96%B9%E9%9D%A2%E7%9A%84%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E4%BD%93%E7%B3%BB%EF%BC%8C%E5%8F%91%E6%8C%A5%E4%BA%86%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E5%9C%A8%E7%A4%BE%E4%BC%9A%E4%BF%9D%E9%9A%9C%E4%BD%93%E7%B3%BB%E4%B8%AD%E7%9A%84%E9%87%8D%E8%A6%81%E8%A1%A5%E5%85%85%E4%BD%9C%E7%94%A8%EF%BC%8C%E8%AE%A9%E6%88%90%E9%83%BD%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E5%83%8F%E2%80%9C%E9%98%B3%E5%85%89%E2%80%9D%E4%B8%80%E6%A0%B7%E6%B8%A9%E6%9A%96%E7%9D%80%E5%85%A8%E5%B8%82%E5%9F%8E%E4%B9%A1%E8%B4%AB%E5%9B%B0%E7%BE%A4%E4%BD%93%EF%BC%8C%E4%B8%BA%E6%9E%84%E5%BB%BA%E5%92%8C%E8%B0%90%E7%A4%BE%E4%BC%9A%E4%BD%9C%E5%87%BA%E4%BA%86%E7%A7%AF%E6%9E%81%E8%B4%A1%E7%8C%AE%E3%80%82","uin":"2197847273","rk":"17"},{"id":"228","mn":441871280,"tms":22666,"title":"%E9%95%BF%E6%B2%99%E5%B8%82%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3e28f14aa0516842697a39f1ed4317f38c8ced93c7c2e5d5b4e4202be22474dfce0a879e14e5993fe74802bb32205da4","desc":"%E5%AE%97%E6%97%A8%EF%BC%9A%E7%BB%84%E7%BB%87%E5%92%8C%E5%9B%A2%E7%BB%93%E7%A4%BE%E4%BC%9A%E5%90%84%E7%95
%8C%E5%8A%9B%E9%87%8F%EF%BC%8C%E8%81%94%E7%B3%BB%E6%B5%B7%E5%86%85%E5%A4%96%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E5%92%8C%E7%9F%A5%E5%90%8D%E4%BA%BA%E5%A3%AB%EF%BC%8C%E5%8F%91%E6%89%AC%E4%BA%BA%E9%81%93%E4%B8%BB%E4%B9%89%E7%B2%BE%E7%A5%9E%EF%BC%8C%E5%BC%98%E6%89%AC%E4%B8%AD%E5%8D%8E%E6%B0%91%E6%97%8F%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E7%9A%84%E4%BC%98%E8%89%AF%E7%BE%8E%E5%BE%B7%EF%BC%8C%E5%BC%80%E5%B1%95%E5%A4%9A%E7%A7%8D%E5%BD%A2%E5%BC%8F%E7%9A%84%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E5%B7%A5%E4%BD%9C%EF%BC%8C%E4%BD%BF%E8%80%81%E6%9C%89%E6%89%80%E5%85%BB%E3%80%81%E7%97%85%E6%9C%89%E6%89%80%E5%8C%BB%E3%80%81%E5%B9%BC%E6%9C%89%E6%89%80%E6%89%98%E3%80%81%E6%AE%8B%E6%9C%89%E6%89%80%E9%9D%A0%E3%80%81%E5%9B%B0%E6%9C%89%E6%89%80%E5%B8%AE%E3%80%81%E8%B4%AB%E6%9C%89%E6%89%80%E6%89%B6%EF%BC%8C%E4%BF%83%E8%BF%9B%E7%A4%BE%E4%BC%9A%E5%92%8C%E8%B0%90%E8%BF%9B%E6%AD%A5%E3%80%82%E7%B2%BE%E7%A5%9E%EF%BC%9A%E5%9C%A8%E5%A5%89%E7%8C%AE%E4%BB%96%E4%BA%BA%E4%B8%AD%E6%88%90%E5%B0%B1%E8%87%AA%E5%B7%B1%EF%BC%81","uin":"32772014","rk":"18"},{"id":"97","mn":353420448,"tms":30631,"title":"%E4%B8%8A%E6%B5%B7%E8%81%94%E5%8A%9D%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F4d42a3acde967538f041f82f10bbf004fc85c369774df221fa9196850905acf49fb066f9b4d185e5c3294e292ee42922","desc":"%E4%B8%8A%E6%B5%B7%E8%81%94%E5%8A%9D%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E4%B8%80%E5%AE%B6%E6%B0%91%E9%97%B4%E5%8F%91%E8%B5%B7%E7%9A%84%E8%B5%84%E5%8A%A9%E5%9E%8B%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%BB%A5%E8%81%94%E5%90%88%E5%8A%9D%E5%8B%9F%EF%BC%8C%E6%94%AF%E6%8C%81%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%E4%B8%BA%E4%BD%BF%E5%91%BD%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E8%AE%A9%E4%B8%AD%E5%9B%BD%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%E6%8B%A5%E6%9C%89%E4%BA%92%E4%BF%A1%EF%BC%8C%E5%90%88%E4%BD%9C%EF%BC%8C%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%8F%91%E5%B1%95%E7%9A%84%E7%8E%AF%E5%A2%83%E3%80%82","uin":"1503328566","rk":"19"},{"id":"307","mn":334618870,"tms":22511,"title":"%E5%8C%97%E4%BA%AC%E5%A4%A9%E4%BD%BF%E5%A6%88%E5%A6%88%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A","logo":"http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb067d93eae12ea9834d1063b0935fa61b93b6c67f689f11c245fb0fecedfd1aa0b51e3f6d74b7e760c","desc":"%E5%A4%A9%E4%BD%BF%E5%A6%88%E5%A6%88%E5%A4%9A%E5%B9%B4%E6%9D%A5%E4%B8%80%E7%9B%B4%E6%B4%BB%E8%B7%83%E5%9C%A8%E5%84%BF%E7%AB%A5%E5%A4%A7%E7%97%85%E6%95%91%E5%8A%A9%E9%A2%86%E5%9F%9F%EF%BC%8C%E4%BB%A5%E6%B1%87%E8%81%9A%E7%88%B1%E5%BF%83%EF%BC%8C%E4%BF%9D%E6%8A%A4%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E7%9A%84%E7%94%9F%E5%91%BD%E3%80%81%E5%81%A5%E5%BA%B7%E3%80%81%E7%94%9F%E5%AD%98%E3%80%81%E5%8F%91%E5%B1%95%E6%9D%83%E5%88%A9%E4%B8%BA%E5%AE%97%E6%97%A8%EF%BC%8C%E4%B8%BB%E8%A6%81%E5%BC%80%E5%B1%95%E7%89%B9%E6%AE%8A%E7%BE%A4%E4%BD%93%E7%9A%84%E5%8C%BB%E7%96%97%E6%95%91%E5%8A%A9%E3%80%81%E5%BA%B7%E5%A4%8D%E5%85%B3%E6%80%80%E5%92%8C%E4%BF%A1%E6%81%AF%E5%92%A8%E8%AF%A2%E7%AD%89%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%E3%80%82%E5%A4%A9%E4%BD%BF%E5%A6%88%E5%A6%88%E5%9B%A2%E9%98%9F%E6%9B%BE%E4%BA%8E2008%E5%B9%B4%E5%92%8C2012%E5%B9%B4%E4%B8%A4%E6%AC%A1%E8%8E%B7%E5%BE%97%E4%B8%AD%E5%8D%8E%E6%85%88%E5%96%84%E5%A5%96%E3%80%82","uin":"2175409800","rk":"20"}],"havenext":"1"}}}'
page1_json_text
###Output
_____no_output_____
###Markdown
Since the data comes back as json like this, we should read it with a json parser
###Code
json_page1 = json.loads(page1_json_text)
###Output
_____no_output_____
###Markdown
This is the data for page one; in the same way we can read pages 2, 3, 4, 5, 6, ..., but we need to merge all of this data together (a generic looped sketch follows, then the page-by-page version actually used at the time)
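The looped sketch below assumes the old API were still reachable and that the number of pages `n_pages` were known; `n_pages` and `fetch_rank_page` are names invented here for illustration.
###Code
#sketch only: generic loop over the pg parameter of the (now defunct) ranking API
import re, json, requests
def fetch_rank_page(pg):
    url = ('http://ssl.gongyi.qq.com/cgi-bin/1799_rank_ngo'
           '?type=ngobym&pg=%d&md=9&jsoncallback=_martch99_sear_fn_' % pg)
    raw = requests.get(url).text                    #jsonp wrapper around the json body
    body = re.search(r'\{(.*)\}', raw).group()      #strip the callback wrapper
    return json.loads(body)['data']['ngo']['list']  #foundations listed on this page
#n_pages is an assumption; the original run stopped when 'havenext' became '0'
#combined = sum((fetch_rank_page(p) for p in range(1, n_pages + 1)), [])
###Output
_____no_output_____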
###Code
#fetch the html of the corresponding pages
page2 = requests.get(url_page2).text
page3 = requests.get(url_page3).text
page4 = requests.get(url_page4).text
#grab the json segment at the corresponding position
page2_json_text = re.search(r'\{(.*)\}',page2).group()
page3_json_text = re.search(r'\{(.*)\}',page3).group()
page4_json_text = re.search(r'\{(.*)\}',page4).group()
#parse each data string into json format
json_page2 = json.loads(page2_json_text)
json_page3 = json.loads(page3_json_text)
json_page4 = json.loads(page4_json_text)
#assign the important data items inside the json to separate variables; these are all lists
list_page1 = json_page1['data']['ngo']['list']
list_page2 = json_page2['data']['ngo']['list']
list_page3 = json_page3['data']['ngo']['list']
list_page4 = json_page4['data']['ngo']['list']
page2_json_text
#concatenate the list-typed variables
combined_list = list_page1 + list_page2 + list_page3 + list_page4
#all of this is information scraped earlier; the same method has since been blocked by Tencent
combined_list = [{'desc': '%E4%B8%AD%E5%8D%8E%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%85%B7%E6%9C%89%E5%85%AC%E5%8B%9F%E8%B5%84%E6%A0%BC%E7%9A%84%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E3%80%82%E6%8C%89%E7%85%A7%E6%B0%91%E9%97%B4%E6%80%A7%E3%80%81%E8%B5%84%E5%8A%A9%E5%9E%8B%E3%80%81%E5%90%88%E4%BD%9C%E5%8A%9E%E3%80%81%E5%85%A8%E9%80%8F%E6%98%8E%E7%9A%84%E6%96%B9%E9%92%88%EF%BC%8C%E5%AF%B9%E5%9B%B0%E5%A2%83%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E8%BF%9B%E8%A1%8C%E6%95%91%E5%8A%A9%E3%80%82',
'id': '105',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb02c1d1e85c613266cee8519851c922856aaf089b341875caffb84f65f1a720d6ca8aba90545558a50',
'mn': 4456857294,
'rk': '1',
'title': '%E4%B8%AD%E5%8D%8E%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 408417,
'uin': '2724954300'},
{'desc': '%E6%88%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2005%E5%B9%B46%E6%9C%8814%E6%97%A5%EF%BC%8C%E5%8E%9F%E5%90%8D%E4%B8%AD%E5%9B%BD%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E6%95%99%E8%82%B2%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C2011%E5%B9%B47%E6%9C%8815%E6%97%A5%E7%BB%8F%E6%B0%91%E6%94%BF%E9%83%A8%E6%89%B9%E5%87%86%E6%9B%B4%E5%90%8D%E4%B8%BA%E2%80%9C%E4%B8%AD%E5%9B%BD%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E2%80%9D%E3%80%82%E8%8B%B1%E6%96%87%E8%AF%91%E5%90%8D%EF%BC%9AChina%20Social%20Welfare%20Foundation%EF%BC%8C%E7%BC%A9%E5%86%99%EF%BC%9ACSWF%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E6%80%A7%E8%B4%A8%EF%BC%9A%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%8F%91%E8%B5%B7%E4%BA%BA%EF%BC%9A%E6%B0%91%E6%94%BF%E9%83%A8%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E5%AE%97%E6%97%A8%EF%BC%9A%E4%BB%A5%E6%B0%91%E4%B8%BA%E6%9C%AC%E3%80%81%E5%85%B3%E6%B3%A8%E6%B0%91%E7%94%9F%E3%80%81%E6%89%B6%E5%8D%B1%E6%B5%8E%E5%9B%B0%E3%80%81%E5%85%B1%E4%BA%AB%E5%92%8C%E8%B0%90%EF%BC%8C%E6%9C%8D%E5%8A%A1%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E3%80%82',
'id': '100',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fe2f18f1858ae40342a1d6b32da1b99058fbef4f3973897f35771419640797199588681c158878e89a3fec12800d0dceb',
'mn': 3059102635,
'rk': '2',
'title': '%E4%B8%AD%E5%9B%BD%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 337033,
'uin': '2693566221'},
{'desc': '%E4%B8%AD%E5%9B%BD%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E5%9F%BA%E9%87%91%E4%BC%9A%28%E7%AE%80%E7%A7%B0%EF%BC%9A%E4%B8%AD%E5%9B%BD%E5%84%BF%E5%9F%BA%E4%BC%9A%29%E6%88%90%E7%AB%8B%E4%BA%8E1981%E5%B9%B47%E6%9C%8828%E6%97%A5%EF%BC%8C%E6%98%AF%E6%96%B0%E4%B8%AD%E5%9B%BD%E6%88%90%E7%AB%8B%E5%90%8E%E7%9A%84%E7%AC%AC%E4%B8%80%E5%AE%B6%E5%9B%BD%E5%AE%B6%E7%BA%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%B8%AD%E5%9B%BD%E5%84%BF%E5%9F%BA%E4%BC%9A%E4%BB%A5%E7%AB%AD%E8%AF%9A%E6%9C%8D%E5%8A%A1%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E6%95%99%E8%82%B2%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E3%80%81%E6%9C%8D%E5%8A%A1%E7%A4%BE%E4%BC%9A%E3%80%81%E6%9C%8D%E5%8A%A1%E5%A4%A7%E5%B1%80%E4%B8%BA%E5%AE%97%E6%97%A8%EF%BC%8C%E7%B2%BE%E5%BF%83%E6%89%93%E9%80%A0%E5%92%8C%E6%B7%B1%E5%8C%96%E6%8B%93%E5%B1%95%E4%BA%86%22%E6%98%A5%E8%95%BE%E8%AE%A1%E5%88%92%22%E3%80%81%22%E5%AE%89%E5%BA%B7%E8%AE%A1%E5%88%92%22%E3%80%81%22%E5%84%BF%E7%AB%A5%E5%BF%AB%E4%B9%90%E5%AE%B6%E5%9B%AD%22%E3%80%81%E2%80%9CHELLO%E5%B0%8F%E5%AD%A9%E2%80%9D%E7%AD%89%E5%93%81%E7%89%8C%E9%A1%B9%E7%9B%AE%E3%80%82%E5%BD%A2%E6%88%90%E5%84%BF%E7%AB%A5%E6%95%99%E8%82%B2%E8%B5%84%E5%8A%A9%E3%80%81%E5%A4%A7%E7%97%85%E6%95%91%E5%8A%A9%E3%80%81%E5%AE%89%E5%85%A8%E5%81%A5%E5%BA%B7%E3%80%81%E7%81%BE%E5%90%8E%E7%B4%A7%E6%80%A5%E6%8F%B4%E5%8A%A9%E7%AB%8B%E4%BD%93%E5%8C%96%E8%B5%84%E5%8A%A9%E6%9C%8D%E5%8A%A1%E4%BD%93%E7%B3%BB%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E8%AF%84%E4%B8%BA5A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82',
'id': '83',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F11ad1700130bf2720d6d95d5548081508562893642109561f4f8442fdbe654a19f49e217381c5117',
'mn': 1692998810,
'rk': '3',
'title': '%E4%B8%AD%E5%9B%BD%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 228060,
'uin': '611990116'},
{'desc': '%E4%B8%8A%E6%B5%B7%E4%BB%81%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E7%88%B1%E5%BE%B7%E5%8F%91%E8%B5%B7%EF%BC%8C2011%E5%B9%B412%E6%9C%88%E5%9C%A8%E4%B8%8A%E6%B5%B7%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E6%94%AF%E6%8C%81%E5%9E%8B%E6%B0%91%E9%97%B4%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%BB%81%E5%BE%B7%E7%AB%8B%E8%B6%B3%E4%B8%8A%E6%B5%B7%EF%BC%8C%E8%BE%90%E5%B0%84%E5%85%A8%E5%9B%BD%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%94%AF%E6%8C%81%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%EF%BC%8C%E6%8E%A8%E5%8A%A8%E5%85%AC%E7%9B%8A%E5%88%9B%E6%96%B0%EF%BC%8C%E4%BF%83%E8%BF%9B%E8%A1%8C%E4%B8%9A%E5%8F%91%E5%B1%95%E3%80%82',
'id': '163',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb044e49b84f82fcafae0161c35900d02bce438319ba79f5180074efdadb8558a03ff3ab90c23039d52',
'mn': 1433104147,
'rk': '4',
'title': '%E4%B8%8A%E6%B5%B7%E4%BB%81%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 93774,
'uin': '3253755055'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%88%B1%E4%BD%91%E6%9C%AA%E6%9D%A5%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%9C%A8%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%BA%8E%E4%B8%A4%E5%A4%A7%E5%85%AC%E7%9B%8A%E9%A2%86%E5%9F%9F%EF%BC%9A%E5%AF%B9%E5%A4%84%E4%BA%8E%E5%9B%B0%E5%A2%83%E5%84%BF%E7%AB%A5%E7%9A%84%E5%85%A8%E6%96%B9%E4%BD%8D%E6%95%91%E5%8A%A9%EF%BC%9B%E6%94%AF%E6%8C%81%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E9%A2%86%E5%9F%9F%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E7%9A%84%E5%8F%91%E5%B1%95%E3%80%82',
'id': '245',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F79d2cd69fead5672fb9f05be86aa5d74dfaf27ad20e3b59f6dc0916fb3f4e38812419bb79fcfe40895bdd78cf6aa1ddf',
'mn': 1367782623,
'rk': '5',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%88%B1%E4%BD%91%E6%9C%AA%E6%9D%A5%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 593776,
'uin': '3257591536'},
{'desc': '%E8%B7%B5%E5%B1%A5%E4%BA%BA%E9%97%B4%E4%BD%9B%E6%95%99%EF%BC%8C%E5%87%80%E5%8C%96%E4%B8%96%E9%81%93%E4%BA%BA%E5%BF%83%E3%80%82%E7%81%B5%E5%B1%B1%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2004%E5%B9%B412%E6%9C%88%EF%BC%8C%E6%98%AF%E7%94%B1%E6%97%A0%E9%94%A1%E7%81%B5%E5%B1%B1%E6%96%87%E5%8C%96%E6%97%85%E6%B8%B8%E9%9B%86%E5%9B%A2%E6%9C%89%E9%99%90%E5%85%AC%E5%8F%B8%E5%92%8C%E6%97%A0%E9%94%A1%E5%B8%82%E7%A5%A5%E7%AC%A6%E7%A6%85%E5%AF%BA%E5%8F%91%E8%B5%B7%EF%BC%8C%E5%9C%A8%E6%B1%9F%E8%8B%8F%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9E%8B%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%8E%B0%E5%9B%B4%E7%BB%95%E4%BA%94%E5%A4%A7%E5%B9%B3%E5%8F%B0%E5%BC%80%E5%B1%95%E9%A1%B9%E7%9B%AE%EF%BC%9A%E9%9D%92%E5%B9%B4%E6%88%90%E9%95%BF%E5%B9%B3%E5%8F%B0%E3%80%81%E8%A1%8C%E4%B8%9A%E5%8F%91%E5%B1%95%E5%B9%B3%E5%8F%B0%E3%80%81%E7%A4%BE%E4%BC%9A%E5%88%9B%E6%96%B0%E5%B9%B3%E5%8F%B0%E3%80%81%E7%A4%BE%E5%8C%BA%E6%B2%BB%E7%90%86%E5%B9%B3%E5%8F%B0%E5%92%8C%E5%9B%BD%E9%99%85%E4%BA%A4%E6%B5%81%E6%8F%B4%E5%8A%A9%E5%B9%B3%E5%8F%B0%E3%80%82%E7%81%B5%E5%B1%B1%E7%9A%84%E5%B7%A5%E4%BD%9C%E5%8E%9F%E5%88%99%E6%98%AF%EF%BC%9A%E8%A7%84%E8%8C%83%E3%80%81%E4%B8%93%E4%B8%9A%E3%80%81%E5%93%81%E7%89%8C%E3%80%81%E9%80%8F%E6%98%8E%E3%80%822010%E5%B9%B4%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E8%AF%84%E4%B8%BA%E2%80%9C%E5%85%A8%E5%9B%BD%E5%85%88%E8%BF%9B%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E2%80%9D%E3%80%822015%E5%B9%B4%EF%BC%8C%E5%9C%A8%E7%AC%AC%E4%BA%94%E5%B1%8A%E4%B8%AD%E5%9B%BD%E5%85%AC%E7%9B%8A%E8%8A%82%E8%8E%B7%E5%BE%97%E2%80%9C2015%E5%B9%B4%E5%BA%A6%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%E5%A5%96%E2%80%9D%20%E3%80%82',
'id': '144',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F1e38afedb5ae397e937c4ce112c51b2d0653d5c99cc64f45bda46fa99dc2f2a17667789348caaae2f76ca91f2f25fa03',
'mn': 1340834819,
'rk': '6',
'title': '%E6%97%A0%E9%94%A1%E7%81%B5%E5%B1%B1%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 83629,
'uin': '2997007428'},
{'desc': '%E7%88%B1%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1985%E5%B9%B44%E6%9C%88%EF%BC%8C%E6%97%A8%E5%9C%A8%E4%BF%83%E8%BF%9B%E6%88%91%E5%9B%BD%E7%9A%84%E6%95%99%E8%82%B2%E3%80%81%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E3%80%81%E5%8C%BB%E7%96%97%E5%8D%AB%E7%94%9F%E3%80%81%E7%A4%BE%E5%8C%BA%E5%8F%91%E5%B1%95%E4%B8%8E%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4%E3%80%81%E7%81%BE%E5%AE%B3%E7%AE%A1%E7%90%86%E7%AD%89%E5%90%84%E9%A1%B9%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%EF%BC%8C%E8%BF%84%E4%BB%8A%E4%B8%BA%E6%AD%A2%EF%BC%8C%E9%A1%B9%E7%9B%AE%E5%8C%BA%E5%9F%9F%E7%B4%AF%E8%AE%A1%E8%A6%86%E7%9B%96%E5%85%A8%E5%9B%BD31%E4%B8%AA%E7%9C%81%E3%80%81%E5%B8%82%E3%80%81%E8%87%AA%E6%B2%BB%E5%8C%BA%EF%BC%8C%E9%80%BE%E5%8D%83%E4%B8%87%E4%BA%BA%E5%8F%97%E7%9B%8A%E3%80%82',
'id': '40',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3e28f14aa051684286b10ad99b1ac2b070f3e555d54ca27544c0477d961f15579aa6deebbe88ee1ee1256b4ead56b434',
'mn': 1292478766,
'rk': '7',
'title': '%E7%88%B1%E5%BE%B7%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 97226,
'uin': '95001117'},
{'desc': '%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%A5%B3%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E7%AE%80%E7%A7%B0%E2%80%9C%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E2%80%9D%E8%8B%B1%E6%96%87%E5%90%8D%E7%A7%B0%EF%BC%9AChina%20Women%26amp%3Bamp%3B%23039%3Bs%20Development%20Foundation%EF%BC%8C%E7%BC%A9%E5%86%99%EF%BC%9A%20CWDF%E3%80%82%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E6%98%AF5A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C1988%E5%B9%B412%E6%9C%88%E7%94%B1%E5%85%A8%E5%9B%BD%E5%A6%87%E8%81%94%E5%8F%91%E8%B5%B7%E6%88%90%E7%AB%8B%E3%80%82%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E6%98%AF%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%85%B6%E9%9D%A2%E5%90%91%E5%85%AC%E4%BC%97%E5%8B%9F%E6%8D%90%E7%9A%84%E5%9C%B0%E5%9F%9F%E6%98%AF%E4%B8%AD%E5%9B%BD%E4%BB%A5%E5%8F%8A%E8%AE%B8%E5%8F%AF%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E5%8B%9F%E6%8D%90%E7%9A%84%E5%9B%BD%E5%AE%B6%E5%92%8C%E5%9C%B0%E5%8C%BA%E3%80%82%0A%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E7%BB%B4%E6%8A%A4%E5%A6%87%E5%A5%B3%E6%9D%83%E7%9B%8A%EF%BC%8C%E6%8F%90%E9%AB%98%E5%A6%87%E5%A5%B3%E7%B4%A0%E8%B4%A8%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%A6%87%E5%A5%B3%E5%92%8C%E5%A6%87%E5%A5%B3%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%EF%BC%8C%E4%B8%BA%E6%9E%84%E5%BB%BA%E5%92%8C%E8%B0%90%E7%A4%BE%E4%BC%9A%E4%BD%9C%E5%87%BA%E5%BA%94%E6%9C%89%E7%9A%84%E8%B4%A1%E7%8C%AE%E3%80%82%0A%E9%95%BF%E6%9C%9F%E4%BB%A5%E6%9D%A5%EF%BC%8C%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%9F%BA%E4%BC%9A%E7%9D%80%E7%9C%BC%E4%BA%8E%E5%A6%87%E5%A5%B3%E7%BE%A4%E4%BC%97%E6%9C%80%E5%85%B3%E5%BF%83%E3%80%81%E6%9C%80%E7%9B%B4%E6%8E%A5%E3%80%81%E6%9C%80%E7%8E%B0%E5%AE%9E%E7%9A%84%E5%88%A9%E7%9B%8A%E9%97%AE%E9%A2%98%EF%BC%8C%E5%9C%A8%E5%9B%B4%E7%BB%95%E5%A6%87%E5%A5%B3%E6%89%B6%E8%B4%AB%E3%80%81%E5%A6%87%E5%A5%B3%E5%81%A5%E5%BA%B7%E3%80%81%E5%A5%B3%E6%80%A7%E5%88%9B%E4%B8%9A%E7%AD%89%E6%96%B9%E9%9D%A2%EF%BC%8C%E5%AE%9E%E6%96%BD%E4%BA%86%E4%B8%80%E7%B3%BB%E5%88%97%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E9%A1%B9%E7%9B%AE%EF%BC%8C%E5%8F%96%E5%BE%97%E4%BA%86%E6%98%8E%E6%98%BE%E7%9A%84%E7%A4%BE%E4%BC%9A%E6%88%90%E6%95%88%EF%BC%8C%E7%BB%84%E7%BB%87%E5%AE%9E%E6%96%BD%E7%9A%84%E2%80%9C%E6%AF%8D%E4%BA%B2%E5%B0%8F%E9%A2%9D%E5%BE%AA%E7%8E%AF%E2%80%9D%E3%80%81%E2%80%9C%E6%AF%8D%E4%BA%B2%E5%81%A5%E5%BA%B7%E5%BF%AB%E8%BD%A6%E2%80%9D%E3%80%81%E2%80%9C%E6%AF%8D%E4%BA%B2%E6%B0%B4%E7%AA%96%E2%80%9D%E3%80%81%20%E2%80%9C%E8%B4%AB%E5%9B%B0%E8%8B%B1%E6%A8%A1%E6%AF%8D%E4%BA%B2%E8%B5%84%E5%8A%A9%E8%AE%A1%E5%88%92%E2%80%9D%E3%80%81%E2%80%9C%E6%AF%8D%E4%BA%B2%E9%82%AE%E5%8C%85%E2%80%9D5%E4%B8%AA%E9%A1%B9%E7%9B%AE%E5%88%86%E5%88%AB%E8%8E%B7%E5%BE%97%E4%B8%AD%E5%9B%BD%E6%94%BF%E5%BA%9C%E6%9C%80%E9%AB%98%E6%85%88%E5%96%84%E5%A5%96%E9%A1%B9%E2%80%94%E4%B8%AD%E5%8D%8E%E6%85%88%E5%96%84%E5%A5%96%E3%80%82',
'id': '21',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F18908ac1703cb32b08e108143e2737cd1f1d625578e0c18fbd4affc2b2230251b52b515ed7bbadc19607af09a6658a45',
'mn': 1162045890,
'rk': '8',
'title': '%E4%B8%AD%E5%9B%BD%E5%A6%87%E5%A5%B3%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 115886,
'uin': '2081457189'},
{'desc': '%E4%B8%AD%E5%9B%BD%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A%28%E8%8B%B1%E6%96%87%E5%90%8D%3AChina%20Foundation%20for%20Poverty%20Alleviation%EF%BC%8C%E7%BC%A9%E5%86%99%3ACFPA%29%E6%88%90%E7%AB%8B%E4%BA%8E1989%E5%B9%B43%E6%9C%88%EF%BC%8C%E7%94%B1%E5%9B%BD%E5%8A%A1%E9%99%A2%E6%89%B6%E8%B4%AB%E5%BC%80%E5%8F%91%E9%A2%86%E5%AF%BC%E5%B0%8F%E7%BB%84%E5%8A%9E%E5%85%AC%E5%AE%A4%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%98%AF%E5%AF%B9%E6%B5%B7%E5%86%85%E5%A4%96%E6%8D%90%E8%B5%A0%E5%9F%BA%E9%87%91%E8%BF%9B%E8%A1%8C%E7%AE%A1%E7%90%86%E7%9A%84%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%EF%BC%8C%E6%98%AF%E7%8B%AC%E7%AB%8B%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E6%B3%95%E4%BA%BA%E3%80%82',
'id': '78',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3c9f5dac15075332f0af7d7ef2e5d7f537068afaf6b01ae205da94bd0c97331eef8a40321163fdb39c4886bcee050fde',
'mn': 1128105177,
'rk': '9',
'title': '%E4%B8%AD%E5%9B%BD%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 213704,
'uin': '1162992508'},
{'desc': '%E4%B8%8A%E6%B5%B7%E7%9C%9F%E7%88%B1%E6%A2%A6%E6%83%B3%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E9%87%91%E8%9E%8D%E6%9C%BA%E6%9E%84%E5%92%8C%E4%B8%8A%E5%B8%82%E5%85%AC%E5%8F%B8%E7%9A%84%E4%B8%93%E4%B8%9A%E7%AE%A1%E7%90%86%E4%BA%BA%E5%91%98%E5%8F%91%E8%B5%B7%E5%92%8C%E8%BF%90%E8%90%A5%E7%9A%84%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E4%BF%83%E8%BF%9B%E6%95%99%E8%82%B2%E5%9D%87%E8%A1%A1%EF%BC%8C%E5%8F%91%E5%B1%95%E7%B4%A0%E5%85%BB%E6%95%99%E8%82%B2%EF%BC%8C%E5%B8%AE%E5%8A%A9%E5%AD%A9%E5%AD%90%E8%87%AA%E4%BF%A1%E3%80%81%E4%BB%8E%E5%AE%B9%E3%80%81%E6%9C%89%E5%B0%8A%E4%B8%A5%E5%9C%B0%E6%88%90%E9%95%BF%E3%80%82%0A%E7%9C%9F%E7%88%B1%E6%A2%A6%E6%83%B3%E9%A6%96%E5%88%9B%E2%80%9C%E6%A2%A6%E6%83%B3%E4%B8%AD%E5%BF%83%E2%80%9D%E7%B4%A0%E5%85%BB%E6%95%99%E8%82%B2%E6%9C%8D%E5%8A%A1%E4%BD%93%E7%B3%BB%EF%BC%8C%E8%B7%A8%E7%95%8C%E5%85%B1%E5%88%9B%E6%95%99%E8%82%B2%E7%94%9F%E6%80%81%EF%BC%8C%E5%B7%B2%E5%9C%A8%E5%85%A8%E5%9B%BD31%E4%B8%AA%E7%9C%81%E4%B8%BA280%E4%B8%87%E5%B8%88%E7%94%9F%E6%8F%90%E4%BE%9B%E5%85%AC%E7%9B%8A%E4%BA%A7%E5%93%81%E5%92%8C%E6%9C%8D%E5%8A%A1%E3%80%82',
'id': '102',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0cdb413bd5e29bee196b6862e708ea7c1e3e2cd95b518a222a86e678416094c0d8c7d74c09c90eac3',
'mn': 999433142,
'rk': '10',
'title': '%E4%B8%8A%E6%B5%B7%E7%9C%9F%E7%88%B1%E6%A2%A6%E6%83%B3%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 95211,
'uin': '2271431773'},
{'desc': '%E5%A3%B9%E5%9F%BA%E9%87%91%E6%98%AF%E6%9D%8E%E8%BF%9E%E6%9D%B0%E5%85%88%E7%94%9F2007%E5%B9%B44%E6%9C%88%E5%88%9B%E7%AB%8B%E7%9A%84%E5%88%9B%E6%96%B0%E5%9E%8B%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C2011%E5%B9%B41%E6%9C%88%E4%BD%9C%E4%B8%BA%E4%B8%AD%E5%9B%BD%E7%AC%AC%E4%B8%80%E5%AE%B6%E6%B0%91%E9%97%B4%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E8%90%BD%E6%88%B7%E6%B7%B1%E5%9C%B3%E3%80%82%E5%A3%B9%E5%9F%BA%E9%87%91%E4%BB%A5%E2%80%9C%E5%B0%BD%E6%88%91%E6%89%80%E8%83%BD%EF%BC%8C%E4%BA%BA%E4%BA%BA%E5%85%AC%E7%9B%8A%E2%80%9D%E4%B8%BA%E6%84%BF%E6%99%AF%EF%BC%8C%E6%90%AD%E5%BB%BA%E4%B8%93%E4%B8%9A%E9%80%8F%E6%98%8E%E7%9A%84%E5%85%AC%E7%9B%8A%E5%B9%B3%E5%8F%B0%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%BA%8E%E7%81%BE%E5%AE%B3%E6%95%91%E5%8A%A9%E3%80%81%E5%84%BF%E7%AB%A5%E5%85%B3%E6%80%80%E4%B8%8E%E5%8F%91%E5%B1%95%E3%80%81%E5%85%AC%E7%9B%8A%E6%94%AF%E6%8C%81%E4%B8%8E%E5%88%9B%E6%96%B0%E4%B8%89%E5%A4%A7%E9%A2%86%E5%9F%9F%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%88%90%E4%B8%BA%E4%B8%AD%E5%9B%BD%E5%85%AC%E7%9B%8A%E7%9A%84%E5%BC%80%E6%8B%93%E8%80%85%E3%80%81%E5%88%9B%E6%96%B0%E8%80%85%E5%92%8C%E6%8E%A8%E5%8A%A8%E8%80%85%E3%80%82%0A',
'id': '16',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F7767c9653cd14ee9e67532e45389742881e2c231861883f935f3b01d6beeb0f74b78a0c902325169',
'mn': 835895133,
'rk': '11',
'title': '%E6%B7%B1%E5%9C%B3%E5%A3%B9%E5%9F%BA%E9%87%91%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 99731,
'uin': '95001115'},
{'desc': '%E5%8C%97%E4%BA%AC%E6%96%B0%E9%98%B3%E5%85%89%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2009%E5%B9%B44%E6%9C%88%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%B8%93%E4%B8%9A%E6%8A%97%E5%87%BB%E7%99%BD%E8%A1%80%E7%97%85%EF%BC%8C%E4%B8%BA%E6%82%A3%E8%80%85%E6%8F%90%E4%BE%9B%E5%9B%BD%E9%99%85%E9%AA%A8%E9%AB%93%E9%85%8D%E5%9E%8B%E6%A3%80%E7%B4%A2%E3%80%81%E7%9B%B4%E6%8E%A5%E7%BB%8F%E6%B5%8E%E8%B5%84%E5%8A%A9%E3%80%81%E4%BF%A1%E6%81%AF%E6%9C%8D%E5%8A%A1%E3%80%81%E5%8C%BB%E5%AD%A6%E7%A0%94%E7%A9%B6%E5%92%8C%E5%8C%BB%E7%94%9F%E8%BF%9B%E4%BF%AE%E6%94%AF%E6%8C%81%E3%80%81%E6%94%BF%E7%AD%96%E5%80%A1%E5%AF%BC%E7%AD%89%E5%A4%9A%E7%A7%8D%E6%9C%8D%E5%8A%A1%E3%80%82%0A',
'id': '103',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F38ca03395aad2883343ff9c3884555db36c4eeebe849e78abb864fa61898f499cbaddcd6d2d7be63171de2c66ddcbba3',
'mn': 824865299,
'rk': '12',
'title': '%E5%8C%97%E4%BA%AC%E6%96%B0%E9%98%B3%E5%85%89%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 59315,
'uin': '2915144362'},
{'desc': '%E9%98%BF%E6%8B%89%E5%96%84SEE%E5%9F%BA%E9%87%91%E4%BC%9A%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%89%93%E9%80%A0%E4%BC%81%E4%B8%9A%E5%AE%B6%E3%80%81NGO%E3%80%81%E5%85%AC%E4%BC%97%E5%85%B1%E5%90%8C%E5%8F%82%E4%B8%8E%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%8C%96%E4%BF%9D%E6%8A%A4%E5%B9%B3%E5%8F%B0%EF%BC%8C%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%9C%B0%E4%BF%9D%E6%8A%A4%E7%94%9F%E6%80%81%E7%8E%AF%E5%A2%83%EF%BC%8C%E5%85%B1%E5%90%8C%E5%AE%88%E6%8A%A4%E7%A2%A7%E6%B0%B4%E8%93%9D%E5%A4%A9%E3%80%82',
'id': '145',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F64d34056c1f59af7416974df7d83f2639fbbe21f1ef6b0ee5a15c1ce9c7ac5e146484483c632a003fbac86c4529ab663',
'mn': 772131120,
'rk': '13',
'title': '%E9%98%BF%E6%8B%89%E5%96%84SEE%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 188274,
'uin': '2382868980'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%85%88%E5%96%84%E4%BC%9A%EF%BC%88%E8%8B%B1%E6%96%87%E5%90%8DSHENZHEN%20CHARITY%20FEDERATION%EF%BC%89%E6%98%AF%E5%9C%A8%E5%B8%82%E5%A7%94%E3%80%81%E5%B8%82%E6%94%BF%E5%BA%9C%E9%AB%98%E5%BA%A6%E9%87%8D%E8%A7%86%E5%92%8C%E6%94%AF%E6%8C%81%E4%B8%8B%EF%BC%8C%E7%94%B1%E7%A4%BE%E4%BC%9A%E5%90%84%E7%95%8C%E7%83%AD%E5%BF%83%E4%BA%8E%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E7%9A%84%E6%9C%BA%E6%9E%84%E3%80%81%E5%9B%A2%E4%BD%93%E5%92%8C%E4%B8%AA%E4%BA%BA%E7%BB%84%E6%88%90%EF%BC%8C%E5%8F%91%E5%8A%A8%E5%92%8C%E6%8E%A5%E5%8F%97%E5%9B%BD%E5%86%85%E5%A4%96%E7%BB%84%E7%BB%87%E5%92%8C%E4%B8%AA%E4%BA%BA%EF%BC%8C%E8%87%AA%E6%84%BF%E5%90%91%E6%B7%B1%E5%9C%B3%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E6%8D%90%E8%B5%A0%E6%88%96%E8%B5%84%E5%8A%A9%E8%B4%A2%E4%BA%A7%E5%B9%B6%E8%BF%9B%E8%A1%8C%E7%AE%A1%E7%90%86%E5%92%8C%E8%BF%90%E7%94%A8%E7%9A%84%E3%80%81%E5%85%B7%E6%9C%89%E5%9B%BD%E5%AE%B6%E5%85%AC%E5%8B%9F%E8%B5%84%E8%B4%A8%E5%92%8C%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E5%85%AC%E7%9B%8A%E6%80%A7%E3%80%81%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E3%80%82',
'id': '101',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F748864bd25db5ee05df00b58f780b77c4053ec6b271b9b842e38020f445c787accf44ae89b2614a8',
'mn': 632056442,
'rk': '14',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%85%88%E5%96%84%E4%BC%9A',
'tms': 55558,
'uin': '2779160918'},
{'desc': '%E4%B8%AD%E5%8D%8E%E6%80%9D%E6%BA%90%E5%B7%A5%E7%A8%8B%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E4%B8%AD%E5%85%B1%E4%B8%AD%E5%A4%AE%E7%BB%9F%E6%88%98%E9%83%A8%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%B0%91%E5%BB%BA%E4%B8%AD%E5%A4%AE%E5%8F%91%E8%B5%B7%E5%B9%B6%E8%B4%9F%E8%B4%A3%E6%97%A5%E5%B8%B8%E7%AE%A1%E7%90%86%EF%BC%8C%E4%BA%8E2007%E5%B9%B43%E6%9C%88%E5%9C%A8%E6%B0%91%E6%94%BF%E9%83%A8%E6%AD%A3%E5%BC%8F%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%A8%E5%9B%BD%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%AE%83%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%E8%B5%84%E5%8A%A9%E4%BB%A5%E6%89%B6%E8%B4%AB%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E4%B8%BA%E4%B8%BB%E7%9A%84%E2%80%9C%E6%80%9D%E6%BA%90%E5%B7%A5%E7%A8%8B%E2%80%9D%E6%B4%BB%E5%8A%A8%EF%BC%8C%E5%B8%AE%E5%8A%A9%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E8%A7%A3%E5%86%B3%E7%94%9F%E4%BA%A7%E7%94%9F%E6%B4%BB%E5%9B%B0%E9%9A%BE%EF%BC%8C%E4%BF%83%E8%BF%9B%E4%B8%AD%E5%9B%BD%E8%B4%AB%E5%9B%B0%E5%9C%B0%E5%8C%BA%E7%BB%8F%E6%B5%8E%E5%92%8C%E7%A4%BE%E4%BC%9A%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E3%80%82%E7%8E%B0%E4%BB%BB%E7%90%86%E4%BA%8B%E9%95%BF%E4%B8%BA%E5%85%A8%E5%9B%BD%E4%BA%BA%E5%A4%A7%E5%B8%B8%E5%A7%94%E4%BC%9A%E5%89%AF%E5%A7%94%E5%91%98%E9%95%BF%E3%80%81%E6%B0%91%E5%BB%BA%E4%B8%AD%E5%A4%AE%E4%B8%BB%E5%B8%AD%E9%99%88%E6%98%8C%E6%99%BA%E3%80%82',
'id': '79',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F74371d8caf56a60900c64c4ca020f087b3d1098c9a209036b43efa2b6f367297c9ff53354646ec81',
'mn': 498402175,
'rk': '15',
'title': '%E4%B8%AD%E5%8D%8E%E6%80%9D%E6%BA%90%E5%B7%A5%E7%A8%8B%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 51523,
'uin': '2806409577'},
{'desc': '%E5%9D%9A%E5%AE%88%E2%80%9C%E5%85%AC%E7%9B%8A%E8%87%B3%E4%B8%8A%E3%80%81%E4%BB%A5%E4%BA%BA%E4%B8%BA%E6%9C%AC%E2%80%9D%E7%9A%84%E7%90%86%E5%BF%B5%EF%BC%8C%E7%A7%89%E6%8C%81%E2%80%9C%E5%AE%89%E8%80%81%E6%8A%9A%E5%AD%A4%E3%80%81%E6%B5%8E%E8%B4%AB%E8%A7%A3%E5%9B%B0%E2%80%9D%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%8C%E4%BB%A5%E7%88%B1%E5%BF%83%E4%B8%BA%E5%8A%A8%E5%8A%9B%EF%BC%8C%E4%BB%A5%E5%8B%9F%E6%8D%90%E4%B8%BA%E6%89%8B%E6%AE%B5%EF%BC%8C%E4%BB%A5%E5%B8%AE%E5%8A%A9%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E4%B8%BA%E7%9B%AE%E7%9A%84%EF%BC%8C%E5%8D%93%E6%9C%89%E6%88%90%E6%95%88%E5%9C%B0%E5%BC%80%E5%B1%95%E4%BA%86%E5%90%84%E9%A1%B9%E6%85%88%E5%96%84%E6%B4%BB%E5%8A%A8%EF%BC%8C%E7%B4%AF%E8%AE%A1%E5%8B%9F%E9%9B%86%E5%96%84%E6%AC%BE%EF%BC%88%E5%8C%85%E6%8B%AC%E7%89%A9%E8%B5%84%E6%8A%98%E4%BB%B7%EF%BC%8910%E4%BA%BF%E4%BD%99%E5%85%83%E3%80%82%E6%83%A0%E5%8F%8A%E5%9B%B0%E9%9A%BE%E7%BE%A4%E4%BC%97700%E4%BD%99%E4%B8%87%E4%BA%BA%E3%80%82',
'id': '232',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3dbf6daf6feea59d70db851052c4c0d16533dc3c208c7836b31a54d4738f63bbe421424200dcd425',
'mn': 476635139,
'rk': '16',
'title': '%E9%99%95%E8%A5%BF%E7%9C%81%E6%85%88%E5%96%84%E5%8D%8F%E4%BC%9A',
'tms': 35338,
'uin': '157755130'},
{'desc': '%E6%88%90%E9%83%BD%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E4%BA%8E2009%E5%B9%B412%E6%9C%88%E6%88%90%E7%AB%8B%EF%BC%8C%E5%AE%83%E7%9A%84%E5%89%8D%E8%BA%AB%E2%80%94%E6%88%90%E9%83%BD%E6%85%88%E5%96%84%E4%BC%9A%E4%BA%8E1995%E5%B9%B45%E6%9C%88%E6%88%90%E7%AB%8B%E3%80%82%E5%A4%9A%E5%B9%B4%E6%9D%A5%EF%BC%8C%E5%9C%A8%E5%81%9A%E5%A5%BD%E6%97%A5%E5%B8%B8%E6%89%B6%E8%80%81%E3%80%81%E5%8A%A9%E6%AE%8B%E3%80%81%E6%95%91%E5%AD%A4%E3%80%81%E6%B5%8E%E5%9B%B0%E3%80%81%E8%B5%88%E7%81%BE%E7%AD%89%E6%95%91%E5%8A%A9%E5%B7%A5%E4%BD%9C%E7%9A%84%E5%90%8C%E6%97%B6%EF%BC%8C%E5%9D%9A%E6%8C%81%E5%BC%80%E6%8B%93%E5%88%9B%E6%96%B0%EF%BC%8C%E7%9D%80%E5%8A%9B%E5%AE%9E%E6%96%BD%E4%BA%86%E4%BB%A5%E2%80%9C%E9%98%B3%E5%85%89%E2%80%9D%E5%91%BD%E5%90%8D%E7%9A%84%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E7%B3%BB%E5%88%97%E5%93%81%E7%89%8C%EF%BC%8C%E5%88%9D%E6%AD%A5%E5%BD%A2%E6%88%90%E4%BA%86%E4%BB%A5%E5%B8%AE%E5%9B%B0%E5%8A%A9%E5%AD%A6%E4%B8%BA%E4%B8%BB%EF%BC%8C%E6%B6%B5%E7%9B%96%E5%BB%BA%E6%88%BF%E3%80%81%E5%8A%A9%E8%80%81%E3%80%81%E6%89%B6%E8%B4%AB%E7%AD%89%E6%96%B9%E9%9D%A2%E7%9A%84%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E4%BD%93%E7%B3%BB%EF%BC%8C%E5%8F%91%E6%8C%A5%E4%BA%86%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E5%9C%A8%E7%A4%BE%E4%BC%9A%E4%BF%9D%E9%9A%9C%E4%BD%93%E7%B3%BB%E4%B8%AD%E7%9A%84%E9%87%8D%E8%A6%81%E8%A1%A5%E5%85%85%E4%BD%9C%E7%94%A8%EF%BC%8C%E8%AE%A9%E6%88%90%E9%83%BD%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E5%83%8F%E2%80%9C%E9%98%B3%E5%85%89%E2%80%9D%E4%B8%80%E6%A0%B7%E6%B8%A9%E6%9A%96%E7%9D%80%E5%85%A8%E5%B8%82%E5%9F%8E%E4%B9%A1%E8%B4%AB%E5%9B%B0%E7%BE%A4%E4%BD%93%EF%BC%8C%E4%B8%BA%E6%9E%84%E5%BB%BA%E5%92%8C%E8%B0%90%E7%A4%BE%E4%BC%9A%E4%BD%9C%E5%87%BA%E4%BA%86%E7%A7%AF%E6%9E%81%E8%B4%A1%E7%8C%AE%E3%80%82',
'id': '149',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb09601def96e39bef44a172fd82d3d71e38e8b23da99f1c5c612c6a43959c1bb451b77a5cf1a4b5925',
'mn': 443166863,
'rk': '17',
'title': '%E6%88%90%E9%83%BD%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 30858,
'uin': '2197847273'},
{'desc': '%E5%AE%97%E6%97%A8%EF%BC%9A%E7%BB%84%E7%BB%87%E5%92%8C%E5%9B%A2%E7%BB%93%E7%A4%BE%E4%BC%9A%E5%90%84%E7%95%8C%E5%8A%9B%E9%87%8F%EF%BC%8C%E8%81%94%E7%B3%BB%E6%B5%B7%E5%86%85%E5%A4%96%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E5%92%8C%E7%9F%A5%E5%90%8D%E4%BA%BA%E5%A3%AB%EF%BC%8C%E5%8F%91%E6%89%AC%E4%BA%BA%E9%81%93%E4%B8%BB%E4%B9%89%E7%B2%BE%E7%A5%9E%EF%BC%8C%E5%BC%98%E6%89%AC%E4%B8%AD%E5%8D%8E%E6%B0%91%E6%97%8F%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E7%9A%84%E4%BC%98%E8%89%AF%E7%BE%8E%E5%BE%B7%EF%BC%8C%E5%BC%80%E5%B1%95%E5%A4%9A%E7%A7%8D%E5%BD%A2%E5%BC%8F%E7%9A%84%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E5%B7%A5%E4%BD%9C%EF%BC%8C%E4%BD%BF%E8%80%81%E6%9C%89%E6%89%80%E5%85%BB%E3%80%81%E7%97%85%E6%9C%89%E6%89%80%E5%8C%BB%E3%80%81%E5%B9%BC%E6%9C%89%E6%89%80%E6%89%98%E3%80%81%E6%AE%8B%E6%9C%89%E6%89%80%E9%9D%A0%E3%80%81%E5%9B%B0%E6%9C%89%E6%89%80%E5%B8%AE%E3%80%81%E8%B4%AB%E6%9C%89%E6%89%80%E6%89%B6%EF%BC%8C%E4%BF%83%E8%BF%9B%E7%A4%BE%E4%BC%9A%E5%92%8C%E8%B0%90%E8%BF%9B%E6%AD%A5%E3%80%82%E7%B2%BE%E7%A5%9E%EF%BC%9A%E5%9C%A8%E5%A5%89%E7%8C%AE%E4%BB%96%E4%BA%BA%E4%B8%AD%E6%88%90%E5%B0%B1%E8%87%AA%E5%B7%B1%EF%BC%81',
'id': '228',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3e28f14aa0516842697a39f1ed4317f38c8ced93c7c2e5d5b4e4202be22474dfce0a879e14e5993fe74802bb32205da4',
'mn': 441871280,
'rk': '18',
'title': '%E9%95%BF%E6%B2%99%E5%B8%82%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 22666,
'uin': '32772014'},
{'desc': '%E4%B8%8A%E6%B5%B7%E8%81%94%E5%8A%9D%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E4%B8%80%E5%AE%B6%E6%B0%91%E9%97%B4%E5%8F%91%E8%B5%B7%E7%9A%84%E8%B5%84%E5%8A%A9%E5%9E%8B%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%BB%A5%E8%81%94%E5%90%88%E5%8A%9D%E5%8B%9F%EF%BC%8C%E6%94%AF%E6%8C%81%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%E4%B8%BA%E4%BD%BF%E5%91%BD%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E8%AE%A9%E4%B8%AD%E5%9B%BD%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%E6%8B%A5%E6%9C%89%E4%BA%92%E4%BF%A1%EF%BC%8C%E5%90%88%E4%BD%9C%EF%BC%8C%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%8F%91%E5%B1%95%E7%9A%84%E7%8E%AF%E5%A2%83%E3%80%82',
'id': '97',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F4d42a3acde967538f041f82f10bbf004fc85c369774df221fa9196850905acf49fb066f9b4d185e5c3294e292ee42922',
'mn': 353420448,
'rk': '19',
'title': '%E4%B8%8A%E6%B5%B7%E8%81%94%E5%8A%9D%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 30631,
'uin': '1503328566'},
{'desc': '%E5%A4%A9%E4%BD%BF%E5%A6%88%E5%A6%88%E5%A4%9A%E5%B9%B4%E6%9D%A5%E4%B8%80%E7%9B%B4%E6%B4%BB%E8%B7%83%E5%9C%A8%E5%84%BF%E7%AB%A5%E5%A4%A7%E7%97%85%E6%95%91%E5%8A%A9%E9%A2%86%E5%9F%9F%EF%BC%8C%E4%BB%A5%E6%B1%87%E8%81%9A%E7%88%B1%E5%BF%83%EF%BC%8C%E4%BF%9D%E6%8A%A4%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E7%9A%84%E7%94%9F%E5%91%BD%E3%80%81%E5%81%A5%E5%BA%B7%E3%80%81%E7%94%9F%E5%AD%98%E3%80%81%E5%8F%91%E5%B1%95%E6%9D%83%E5%88%A9%E4%B8%BA%E5%AE%97%E6%97%A8%EF%BC%8C%E4%B8%BB%E8%A6%81%E5%BC%80%E5%B1%95%E7%89%B9%E6%AE%8A%E7%BE%A4%E4%BD%93%E7%9A%84%E5%8C%BB%E7%96%97%E6%95%91%E5%8A%A9%E3%80%81%E5%BA%B7%E5%A4%8D%E5%85%B3%E6%80%80%E5%92%8C%E4%BF%A1%E6%81%AF%E5%92%A8%E8%AF%A2%E7%AD%89%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%E3%80%82%E5%A4%A9%E4%BD%BF%E5%A6%88%E5%A6%88%E5%9B%A2%E9%98%9F%E6%9B%BE%E4%BA%8E2008%E5%B9%B4%E5%92%8C2012%E5%B9%B4%E4%B8%A4%E6%AC%A1%E8%8E%B7%E5%BE%97%E4%B8%AD%E5%8D%8E%E6%85%88%E5%96%84%E5%A5%96%E3%80%82',
'id': '307',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb067d93eae12ea9834d1063b0935fa61b93b6c67f689f11c245fb0fecedfd1aa0b51e3f6d74b7e760c',
'mn': 334618870,
'rk': '20',
'title': '%E5%8C%97%E4%BA%AC%E5%A4%A9%E4%BD%BF%E5%A6%88%E5%A6%88%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 22511,
'uin': '2175409800'},
{'desc': '%E4%B8%AD%E5%8D%8E%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%822009%E5%B9%B41%E6%9C%88%E7%BB%8F%E4%B8%AD%E5%8D%8E%E4%BA%BA%E6%B0%91%E5%85%B1%E5%92%8C%E5%9B%BD%E6%B0%91%E6%94%BF%E9%83%A8%E6%89%B9%E5%87%86%E8%AE%BE%E7%AB%8B%E7%99%BB%E8%AE%B0%EF%BC%8C%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E9%83%A8%E9%97%A8%E4%B8%BA%E6%B0%91%E6%94%BF%E9%83%A8%E3%80%82',
'id': '43',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F9a223de4799cf0d59577d15717b8cc9045b0a602988828f4039a324e2d992e80346dc459434812cb',
'mn': 319019560,
'rk': '21',
'title': '%E4%B8%AD%E5%8D%8E%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 20427,
'uin': '2247243585'},
{'desc': '%E5%8C%97%E4%BA%AC%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1994%E5%B9%B4%EF%BC%8C%E6%98%AF%E4%B8%BA%E9%9D%92%E5%B0%91%E5%B9%B4%E7%BE%A4%E4%BD%93%E6%8F%90%E4%BE%9B%E5%B8%AE%E5%8A%A9%E3%80%81%E6%9C%8D%E5%8A%A1%E9%9D%92%E5%B0%91%E5%B9%B4%E5%81%A5%E5%BA%B7%E6%88%90%E9%95%BF%E7%9A%84%E9%9D%92%E5%B0%91%E5%B9%B4%E6%85%88%E5%96%84%E6%9C%BA%E6%9E%84%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C%E6%98%AF%E5%8C%97%E4%BA%AC%E5%85%B1%E9%9D%92%E5%9B%A2%E5%85%AC%E7%9B%8A%E6%9C%8D%E5%8A%A1%E5%B9%B3%E5%8F%B0%E3%80%82%E5%85%B6%E5%AE%97%E6%97%A8%E6%98%AF%E6%9C%8D%E5%8A%A1%E9%9D%92%E5%B0%91%E5%B9%B4%E5%81%A5%E5%BA%B7%E6%88%90%E9%95%BF%EF%BC%8C%E4%BF%83%E8%BF%9B%E9%9D%92%E5%B0%91%E5%B9%B4%E5%85%A8%E9%9D%A2%E5%8F%91%E5%B1%95%E3%80%82',
'id': '124',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb058d7a2def5943732960e18e4b4e31d2443486191a92726d5dcc3de22be90362cc034b74dfa7eef9d',
'mn': 226460854,
'rk': '22',
'title': '%E5%8C%97%E4%BA%AC%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 23292,
'uin': '2498890679'},
{'desc': '%E4%B8%8A%E6%B5%B7%E5%B8%82%E5%8D%8E%E4%BE%A8%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E5%AE%98%E7%BD%91%E7%94%B1%E4%B8%8A%E6%B5%B7%E5%B8%82%E5%8D%8E%E4%BE%A8%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E4%B8%BB%E5%8A%9E%E3%80%82%E7%BD%91%E7%AB%99%E4%B8%8D%E4%BB%85%E6%98%AF%E4%B8%8A%E6%B5%B7%E5%8D%8E%E4%BE%A8%E5%9F%BA%E9%87%91%E4%BC%9A%E5%AE%A3%E4%BC%A0%E4%B8%8E%E5%B1%95%E7%A4%BA%E7%9A%84%E7%AA%97%E5%8F%A3%EF%BC%8C%E4%B9%9F%E6%98%AF%E5%B9%BF%E5%A4%A7%E7%BD%91%E6%B0%91%E4%BA%86%E8%A7%A3%E4%B8%8A%E6%B5%B7%E4%BE%A8%E7%95%8C%E6%83%85%E5%86%B5%E7%9A%84%E4%BF%A1%E6%81%AF%E5%B9%B3%E5%8F%B0%E3%80%82%0A%E7%BD%91%E7%AB%99%E5%88%A9%E7%94%A8%E4%BA%92%E8%81%94%E7%BD%91%E4%BF%A1%E6%81%AF%E5%8C%96%E6%89%8B%E6%AE%B5%E4%B8%BA%E7%BD%91%E6%B0%91%E6%90%AD%E5%BB%BA%E8%B5%B7%E4%B8%80%E4%B8%AA%E7%BD%91%E7%BB%9C%E5%85%AC%E7%9B%8A%E4%BA%92%E5%8A%A8%E5%B9%B3%E5%8F%B0%E3%80%82%E9%80%9A%E8%BF%87%E8%AF%A5%E5%B9%B3%E5%8F%B0%EF%BC%8C%E7%BD%91%E6%B0%91%E5%8F%AF%E4%BB%A5%E9%9A%8F%E6%97%B6%E9%9A%8F%E5%9C%B0%E4%BA%86%E8%A7%A3%E5%88%B0%E4%B8%8A%E6%B5%B7%E5%8D%8E%E4%BA%BA%E5%8D%8E%E4%BE%A8%E7%9A%84%E6%83%85%E5%86%B5%E5%92%8C%E9%9C%80%E8%A6%81%E6%8D%90%E5%8A%A9%E7%9A%84%E9%A1%B9%E7%9B%AE%E6%83%85%E5%86%B5%EF%BC%8C%E5%B9%B6%E6%96%B9%E4%BE%BF%E4%BB%96%E4%BB%AC%E8%BF%9B%E8%A1%8C%E7%88%B1%E5%BF%83%E6%8D%90%E8%B5%A0%EF%BC%8C%E6%AD%A4%E6%96%B9%E5%BC%8F%E4%B8%8D%E4%BB%85%E6%89%93%E7%A0%B4%E4%BA%86%E4%BC%A0%E7%BB%9F%E7%9A%84%E5%AE%9E%E5%9C%B0%E6%8D%90%E6%AC%BE%E6%96%B9%E5%BC%8F%EF%BC%8C%E4%B9%9F%E7%AA%81%E7%A0%B4%E4%BA%86%E6%8D%90%E6%AC%BE%E4%BA%BA%E7%9A%84%E8%8C%83%E5%9B%B4%E9%99%90%E5%88%B6%EF%BC%8C%E6%9C%89%E9%92%B1%E6%B2%A1%E9%92%B1%E9%83%BD%E5%8F%AF%E4%BB%A5%E5%9C%A8%E8%BF%99%E4%B8%AA%E5%B9%B3%E5%8F%B0%E4%B8%8A%E6%96%BD%E8%A1%8C%E6%85%88%E5%96%84%E4%B9%8B%E4%B8%BE%EF%BC%8C%E8%BE%BE%E5%88%B0%E6%8D%90%E6%AC%BE%E7%9A%84%E7%9B%AE%E7%9A%84%E3%80%82',
'id': '297',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0d3244d79c169facb6d51638b8d6dd986d022edbf61ce3729900f4e4cbd4c1ddee2d0c5c84b43a1ab',
'mn': 224439197,
'rk': '23',
'title': '%E4%B8%8A%E6%B5%B7%E5%B8%82%E5%8D%8E%E4%BE%A8%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 4537,
'uin': '3447530094'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1992%E5%B9%B4%EF%BC%8C%E6%98%AF%E4%B8%80%E5%AE%B6%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%8E%A8%E5%8A%A8%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E5%81%A5%E5%BA%B7%E5%8F%91%E5%B1%95%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C2009%E5%B9%B4%E5%BC%80%E5%A7%8B%E7%A4%BE%E4%BC%9A%E5%8C%96%E8%BD%AC%E5%9E%8B%E4%BB%A5%E6%9D%A5%EF%BC%8C%E5%9C%A8%E5%9B%BD%E5%86%85%E7%8E%87%E5%85%88%E6%8F%90%E5%87%BA%E2%80%9C%E6%90%AD%E5%BB%BA%E5%85%AC%E7%9B%8A%E5%88%9B%E6%8A%95%E5%B9%B3%E5%8F%B0%EF%BC%8C%E6%94%AF%E6%8C%81%E7%A4%BE%E4%BC%9A%E5%88%9B%E6%96%B0%E9%A1%B9%E7%9B%AE%E2%80%9D%E7%9A%84%E5%8F%91%E5%B1%95%E6%88%98%E7%95%A5%EF%BC%8C%E5%85%B1%E8%AE%A1%E7%AD%B9%E9%9B%86%E5%96%84%E6%AC%BE%E8%B6%85%E8%BF%871%2E2%E4%BA%BF%E5%85%83%EF%BC%8C%E5%B7%B2%E8%B5%84%E5%8A%A9%E6%84%88500%E4%B8%AA%E5%85%AC%E7%9B%8A%E5%88%9B%E6%96%B0%E9%A1%B9%E7%9B%AE%EF%BC%8C%E5%9C%A8%E5%88%9B%E6%8A%95%E6%94%AF%E6%8C%81%E4%BD%93%E7%B3%BB%E3%80%81%E9%A1%B9%E7%9B%AE%E8%83%BD%E5%8A%9B%E5%BB%BA%E8%AE%BE%E3%80%81%E5%85%AC%E7%9B%8A%E8%B5%84%E6%BA%90%E5%AF%B9%E6%8E%A5%E7%AD%89%E6%96%B9%E9%9D%A2%E5%BB%BA%E7%AB%8B%E4%BA%86%E6%A0%B8%E5%BF%83%E7%AB%9E%E4%BA%89%E5%8A%9B%E3%80%82',
'id': '128',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fda85814796b1dcf37b818b711547858f1646d8cd2db902d74a58a35d69e2c6c3eb73d30dd046cf4309db41681107c93c',
'mn': 192813349,
'rk': '24',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 18010,
'uin': '979377360'},
{'desc': '%E4%B8%AD%E5%9B%BD%E5%90%AC%E5%8A%9B%E5%8C%BB%E5%AD%A6%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1995%E5%B9%B412%E6%9C%88%EF%BC%8C%E6%98%AF%E7%BB%8F%E4%B8%AD%E5%8D%8E%E4%BA%BA%E6%B0%91%E5%85%B1%E5%92%8C%E5%9B%BD%E5%8D%AB%E7%94%9F%E9%83%A8%E3%80%81%E4%B8%AD%E5%9B%BD%E4%BA%BA%E6%B0%91%E9%93%B6%E8%A1%8C%E6%89%B9%E5%87%86%EF%BC%8C%E6%B0%91%E6%94%BF%E9%83%A8%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E9%9D%9E%E8%90%A5%E5%88%A9%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C%E6%98%AF%E7%8B%AC%E7%AB%8B%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E6%B3%95%E4%BA%BA%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E4%BF%83%E8%BF%9B%E4%B8%AD%E5%9B%BD%E5%90%AC%E5%8A%9B%E5%8C%BB%E5%AD%A6%E3%80%81%E5%BA%B7%E5%A4%8D%E5%8C%BB%E7%96%97%E3%80%81%E7%A7%91%E7%A0%94%E6%95%99%E8%82%B2%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%EF%BC%8C%E7%AD%B9%E9%9B%86%E5%96%84%E6%AC%BE%EF%BC%8C%E5%B8%AE%E5%8A%A9%E6%9B%B4%E5%A4%9A%E7%9A%84%E8%81%8B%E4%BA%BA%E5%90%AC%E5%8A%9B%E3%80%81%E8%AF%AD%E8%A8%80%E5%BA%B7%E5%A4%8D%E3%80%82',
'id': '235',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb066ade11126c17765766ab9f44af2b3fdcf0ddf4e74e7f50eecf6d83425eeb003f8cbf1d868dee1e7',
'mn': 162978805,
'rk': '25',
'title': '%E4%B8%AD%E5%9B%BD%E5%90%AC%E5%8A%9B%E5%8C%BB%E5%AD%A6%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 12225,
'uin': '2306673571'},
{'desc': '%E6%9C%AC%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%9A%E9%81%B5%E5%AE%88%E5%AE%AA%E6%B3%95%E3%80%81%E6%B3%95%E5%BE%8B%E3%80%81%E6%B3%95%E8%A7%84%E5%92%8C%E5%9B%BD%E5%AE%B6%E6%94%BF%E7%AD%96%EF%BC%8C%E9%81%B5%E5%AE%88%E7%A4%BE%E4%BC%9A%E5%85%AC%E5%BE%B7%EF%BC%9B%E5%BC%98%E6%89%AC%E4%B8%AD%E5%8D%8E%E6%B0%91%E6%97%8F%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E6%95%91%E7%81%BE%E7%9A%84%E4%BC%A0%E7%BB%9F%E7%BE%8E%E5%BE%B7%EF%BC%8C%E5%80%A1%E5%AF%BC%E4%BA%BA%E9%81%93%E4%B8%BB%E4%B9%89%E7%B2%BE%E7%A5%9E%EF%BC%8C%E5%8A%A8%E5%91%98%E7%A4%BE%E4%BC%9A%E5%90%84%E6%96%B9%E5%8A%9B%E9%87%8F%EF%BC%8C%E7%AD%B9%E9%9B%86%E6%85%88%E5%96%84%E6%AC%BE%E7%89%A9%EF%BC%8C%E7%A7%AF%E6%9E%81%E5%BC%80%E5%B1%95%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E3%80%81%E6%89%B6%E5%8A%A9%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E6%B4%BB%E5%8A%A8%EF%BC%9B%E7%A7%AF%E6%9E%81%E4%B8%BA%E4%BC%9A%E5%91%98%E3%80%81%E4%BC%81%E4%B8%9A%E5%92%8C%E6%94%BF%E5%BA%9C%E6%8F%90%E4%BE%9B%E8%89%AF%E5%A5%BD%E7%9A%84%E6%9C%8D%E5%8A%A1%EF%BC%8C%E5%BD%93%E5%A5%BD%E6%94%BF%E5%BA%9C%E3%80%81%E4%BC%81%E4%B8%9A%E5%92%8C%E7%A4%BE%E4%BC%9A%E7%9A%84%E5%8F%82%E8%B0%8B%EF%BC%8C%E5%8F%91%E6%8C%A5%E6%A1%A5%E6%A2%81%E7%BA%BD%E5%B8%A6%E4%BD%9C%E7%94%A8%EF%BC%8C%E4%B8%BA%E4%BF%83%E8%BF%9B%E6%B9%96%E5%8D%97%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E5%92%8C%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%85%A8%E9%9D%A2%E5%8F%91%E5%B1%95%E5%92%8C%E7%A4%BE%E4%BC%9A%E7%9A%84%E5%92%8C%E8%B0%90%E7%A8%B3%E5%AE%9A%E4%BD%9C%E5%87%BA%E5%BA%94%E6%9C%89%E7%9A%84%E8%B4%A1%E7%8C%AE%E3%80%82',
'id': '295',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fc54b03b559dd2f5d7438de5ed13e4bce49a9d41f2fc83649b374cb576f1069560f52a0d467ef2f05',
'mn': 140734436,
'rk': '26',
'title': '%E6%B9%96%E5%8D%97%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 17745,
'uin': '452449757'},
{'desc': '%E6%B9%96%E5%8C%97%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1995%E5%B9%B49%E6%9C%88%EF%BC%8C%E5%B1%9E%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E5%85%A8%E7%9C%81%E6%80%A7%E5%85%AC%E7%9B%8A%E7%B1%BB%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%EF%BC%8C%E7%A7%89%E6%89%BF%E2%80%9C%E5%AE%89%E8%80%81%E3%80%81%E6%89%B6%E5%B9%BC%E3%80%81%E5%8A%A9%E5%AD%A6%E3%80%81%E6%B5%8E%E5%9B%B0%E3%80%81%E6%95%91%E7%81%BE%E2%80%9D%E7%9A%84%E5%AE%97%E6%97%A8%E4%BA%8E2016%E5%B9%B412%E6%9C%88%E8%8E%B7%E5%BE%97%E5%85%AC%E5%BC%80%E5%8B%9F%E6%8D%90%E8%B5%84%E6%A0%BC%E3%80%82',
'id': '302',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0bfd25eebfb911b03bce59a6c690fb7639d2e3c9d4d501308d2ea2fb804f02762cc8ced3657af4ef2',
'mn': 125762643,
'rk': '27',
'title': '%E6%B9%96%E5%8C%97%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 35086,
'uin': '2158130599'},
{'desc': '%E4%B8%AD%E5%9B%BD%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%8C%E5%A4%A7%E5%8A%9B%E6%99%AE%E5%8F%8A%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E7%90%86%E5%BF%B5%E3%80%81%E5%BC%98%E6%89%AC%E5%9B%A2%E7%BB%93%E3%80%81%E5%8F%8B%E7%88%B1%E3%80%81%E5%A5%89%E7%8C%AE%E7%9A%84%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E7%B2%BE%E7%A5%9E%E3%80%82%E4%B8%9A%E5%8A%A1%E8%8C%83%E5%9B%B4%E6%98%AF%EF%BC%9A%E8%B5%84%E5%8A%A9%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E3%80%81%E6%89%B6%E5%BC%B1%E5%8A%A9%E6%AE%8B%E3%80%81%E5%B8%AE%E8%80%81%E5%8A%A9%E5%B9%BC%E3%80%81%E6%94%AF%E6%95%99%E5%8A%A9%E5%AD%A6%E3%80%81%E6%8A%A2%E9%99%A9%E6%95%91%E7%81%BE%E3%80%81%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4%E3%80%81%E7%A7%91%E6%8A%80%E4%BC%A0%E6%92%AD%E3%80%81%E5%8C%BB%E7%96%97%E5%8D%AB%E7%94%9F%E3%80%81%E6%B2%BB%E5%AE%89%E9%98%B2%E8%8C%83%E3%80%81%E7%A4%BE%E5%8C%BA%E6%9C%8D%E5%8A%A1%E4%BB%A5%E5%8F%8A%E5%85%B6%E4%BB%96%E6%8E%A8%E5%8A%A8%E6%95%99%E8%82%B2%E3%80%81%E7%A7%91%E5%AD%A6%E3%80%81%E6%96%87%E5%8C%96%E3%80%81%E5%8D%AB%E7%94%9F%E3%80%81%E4%BD%93%E8%82%B2%E7%AD%89%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E7%9A%84%E5%90%84%E7%B1%BB%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%E3%80%82%20',
'id': '290',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb01842fd3937cac63c09172ee874b8d548f72474e5a0e0c9024011165572ef018a9c6fad60aca96bc2',
'mn': 109914515,
'rk': '28',
'title': '%E4%B8%AD%E5%9B%BD%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3056,
'uin': '1764093356'},
{'desc': '%E5%8C%97%E4%BA%AC%E6%98%A5%E8%8B%97%E5%84%BF%E7%AB%A5%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E4%BA%8E2010%E5%B9%B410%E6%9C%88%E4%BB%BD%E5%9C%A8%E5%8C%97%E4%BA%AC%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%EF%BC%8C%E6%98%AF%E4%B8%AA%E4%BA%BA%E5%8F%91%E8%B5%B7%E7%9A%84%E5%85%B7%E6%9C%89%E5%85%AC%E5%8B%9F%E8%B5%84%E8%B4%A8%E7%9A%84%E6%B0%91%E9%97%B4%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E3%80%82%E7%AE%80%E7%A7%B0%E2%80%9C%E6%98%A5%E8%8B%97%E5%9F%BA%E9%87%91%E4%BC%9A%E2%80%9D%E3%80%82%E6%98%A5%E8%8B%97%E4%BA%BA%E7%A7%89%E6%89%BF%E2%80%9C%E7%88%B1%E4%B8%8E%E4%B8%93%E4%B8%9A%E2%80%9D%E7%9A%84%E6%9C%8D%E5%8A%A1%E7%90%86%E5%BF%B5%EF%BC%8C%E4%B8%BA%E6%82%A3%E6%9C%89%E5%85%88%E5%A4%A9%E6%80%A7%E7%96%BE%E7%97%85%E7%9A%84%E5%9B%B0%E5%A2%83%E5%84%BF%E7%AB%A5%E6%8F%90%E4%BE%9B%E2%80%9C%E5%85%A8%E4%BA%BA%E5%A4%9A%E5%85%83%E2%80%9D%E7%A4%BE%E5%B7%A5%E6%9C%8D%E5%8A%A1%EF%BC%8C%E9%81%B5%E5%BE%AA%E4%BB%A5%E2%80%9C%E5%84%BF%E7%AB%A5%E4%B8%BA%E4%B8%AD%E5%BF%83%E2%80%9D%E7%9A%84%E5%8E%9F%E5%88%99%E5%BB%BA%E7%AB%8B%E8%BA%AF%E4%BD%93%E3%80%81%E5%BF%83%E7%90%86%E3%80%81%E7%A4%BE%E4%BC%9A%E7%9A%84%E5%84%BF%E7%AB%A5%E6%94%AF%E6%8C%81%E4%BD%93%E7%B3%BB%E3%80%82%E9%80%9A%E8%BF%87%E2%80%9C%E5%AE%9E%E8%B7%B5%EF%BC%8D%E6%95%99%E7%A0%94%EF%BC%8D%E6%8E%A8%E5%B9%BF%E2%80%9D%E6%A8%A1%E5%BC%8F%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%84%BF%E7%AB%A5%E6%95%91%E5%8A%A9%E6%9C%8D%E5%8A%A1%E4%BD%93%E7%B3%BB%E7%9A%84%E4%B8%8D%E6%96%AD%E5%AE%8C%E5%96%84%E3%80%822013%E5%B9%B4%E8%8E%B7%E8%AF%84%E5%8C%97%E4%BA%AC%E5%B8%825A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%0A%0A%E7%9B%AE%E5%89%8D%E5%BC%80%E5%B1%95%E7%9A%84%E9%A1%B9%E7%9B%AE%EF%BC%9A%E5%B0%8F%E8%8B%97%E5%8C%BB%E7%96%97%E9%A1%B9%E7%9B%AE%20%20%20%E5%B0%8F%E8%8A%B1%E5%85%B3%E7%88%B1%E9%A1%B9%E7%9B%AE%20%20%E5%B0%8F%E6%A0%91%E6%88%90%E9%95%BF%E9%A1%B9%E7%9B%AE%20%20%0A%E6%84%BF%E6%99%AF%EF%BC%9A%E6%84%BF%E6%AF%8F%E4%B8%80%E4%B8%AA%E5%AD%A9%E5%AD%90%E9%83%BD%E6%8B%A5%E6%9C%89%E5%AE%B6%E3%80%81%E5%81%A5%E5%BA%B7%E3%80%81%E5%85%B3%E7%88%B1%E3%80%81%E5%BF%AB%E4%B9%90%E5%92%8C%E5%B8%8C%E6%9C%9B%0A%E4%BD%BF%E5%91%BD%EF%BC%9A%E8%87%B4%E5%8A%9B%E4%BA%8E%E4%B8%BA%E5%84%BF%E7%AB%A5%E6%8F%90%E4%BE%9B%E5%8C%BB%E7%96%97%E3%80%81%E5%85%BB%E8%82%B2%E7%9A%84%E4%B8%93%E4%B8%9A%E6%9C%8D%E5%8A%A1%EF%BC%8C%E5%B8%AE%E5%8A%A9%E5%AD%A9%E5%AD%90%E4%BB%AC%E5%BF%AB%E4%B9%90%E6%88%90%E9%95%BF%E3%80%81%E8%9E%8D%E5%85%A5%E7%A4%BE%E4%BC%9A%0A%E4%BB%B7%E5%80%BC%E8%A7%82%EF%BC%9A%E4%B8%93%E4%B8%9A%20%20%20%E5%B0%8A%E9%87%8D%20%20%20%E5%85%AC%E6%AD%A3%20%20%20%E5%BF%AB%E4%B9%90%20%20%20%E5%90%88%E4%BD%9C',
'id': '291',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0a3db47aa0ff49333609985ab023437b842f33b8c558c298fefdc41ab682852544c54cf484e977f49',
'mn': 104078195,
'rk': '29',
'title': '%E5%8C%97%E4%BA%AC%E6%98%A5%E8%8B%97%E5%84%BF%E7%AB%A5%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3570,
'uin': '2298416946'},
{'desc': '%E6%B5%99%E6%B1%9F%E7%9C%81%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E6%9B%BE%E7%94%A8%E5%90%8D%EF%BC%9A%E6%B5%99%E6%B1%9F%E7%9C%81%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%81%E6%B5%99%E6%B1%9F%E7%9C%81%E5%84%BF%E7%AB%A5%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%89%E6%88%90%E7%AB%8B%E4%BA%8E1981%E5%B9%B4%EF%BC%8C%E6%98%AF%E4%B8%80%E4%B8%AA%E4%B8%BA%E7%89%B9%E5%AE%9A%E7%9A%84%E5%85%AC%E7%9B%8A%E7%9B%AE%E7%9A%84%E8%80%8C%E8%AE%BE%E7%AB%8B%E7%9A%84%E3%80%81%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E7%BA%AF%E5%85%AC%E7%9B%8A%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E6%8E%A8%E5%8A%A8%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E6%9D%83%E7%9B%8A%E7%9A%84%E4%BF%9D%E9%9A%9C%E5%92%8C%E5%B9%B3%E7%AD%89%E5%8F%91%E5%B1%95%EF%BC%8C%E4%B8%BA%E8%B4%AB%E5%9B%B0%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E6%8F%90%E4%BE%9B%E5%B8%AE%E5%8A%A9%EF%BC%8C%E4%B8%BA%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%8A%9E%E5%A5%BD%E4%BA%8B%EF%BC%8C%E5%8A%9E%E5%AE%9E%E4%BA%8B%E3%80%82',
'id': '206',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb04d06a51067661b97520411133e8c64aa09d4a0df939a99ea2eb7c2ee337002e679e3e624292c4f1a',
'mn': 101662118,
'rk': '30',
'title': '%E6%B5%99%E6%B1%9F%E7%9C%81%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 8184,
'uin': '2842798747'},
{'desc': '%E6%B5%B7%E5%8D%97%E7%9C%81%E6%AE%8B%E7%96%BE%E4%BA%BA%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E7%99%BB%E8%AE%B0%E7%AE%A1%E7%90%86%E6%9C%BA%E5%85%B3%E6%98%AF%E6%B5%B7%E5%8D%97%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%EF%BC%8C%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E6%98%AF%E6%B5%B7%E5%8D%97%E7%9C%81%E6%AE%8B%E7%96%BE%E4%BA%BA%E8%81%94%E5%90%88%E4%BC%9A%E3%80%82%E4%B8%9A%E5%8A%A1%E8%8C%83%E5%9B%B4%E4%B8%BB%E8%A6%81%E6%98%AF%E5%AE%A3%E4%BC%A0%E6%AE%8B%E7%96%BE%E4%BA%BA%E4%BA%8B%E4%B8%9A%EF%BC%8C%E7%AD%B9%E9%9B%86%E8%B5%84%E9%87%91%EF%BC%8C%E6%8E%A5%E5%8F%97%E6%8D%90%E6%AC%BE%EF%BC%8C%E7%AE%A1%E7%90%86%E5%92%8C%E4%BD%BF%E7%94%A8%E5%9F%BA%E9%87%91%EF%BC%8C%E8%B5%84%E5%8A%A9%E6%AE%8B%E7%96%BE%E4%BA%BA%E4%BA%8B%E4%B8%9A%E3%80%82',
'id': '140',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0f1d6909bb7edc18f585c4c75a2c6d32557948f4c632cfb2d0814f54344bd7230572e8da98c59de8d',
'mn': 100155340,
'rk': '31',
'title': '%E6%B5%B7%E5%8D%97%E7%9C%81%E6%AE%8B%E7%96%BE%E4%BA%BA%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3400,
'uin': '1919069378'},
{'desc': '%20%20%20%20%20%20%20%E6%B2%B3%E5%8D%97%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E6%98%AF%E5%85%A8%E7%9C%81%E6%80%A7%E5%85%AC%E7%9B%8A%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%EF%BC%8C2001%E5%B9%B49%E6%9C%88%E6%88%90%E7%AB%8B%EF%BC%8C%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E4%B8%BA%E6%B2%B3%E5%8D%97%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%EF%BC%8C%E7%99%BB%E8%AE%B0%E7%AE%A1%E7%90%86%E6%9C%BA%E5%85%B3%E4%B8%BA%E6%B2%B3%E5%8D%97%E7%9C%81%E6%B0%91%E9%97%B4%E7%BB%84%E7%BB%87%E7%AE%A1%E7%90%86%E5%B1%80%EF%BC%8C%E7%BB%84%E7%BB%87%E6%9C%BA%E6%9E%84%E4%BB%A3%E7%A0%81%E4%B8%BA72583847%2D6%EF%BC%8C%E6%B3%A8%E5%86%8C%E8%B5%84%E9%87%91%E4%BA%BA%E6%B0%91%E5%B8%8120%E4%B8%87%E5%85%83%EF%BC%8C%E6%B3%95%E5%AE%9A%E4%BB%A3%E8%A1%A8%E4%BA%BA%E4%B8%BA%E5%B8%88%E6%88%8C%E5%B9%B3%EF%BC%8C%E6%97%A5%E5%B8%B8%E5%8A%9E%E4%BA%8B%E6%9C%BA%E6%9E%84%E4%B8%BA%E7%A7%98%E4%B9%A6%E5%A4%84%E3%80%82%E6%80%BB%E4%BC%9A%E4%B8%BB%E8%A6%81%E8%81%8C%E8%83%BD%E6%98%AF%EF%BC%9A%E4%B8%BA%E7%88%B1%E5%BF%83%E7%BB%84%E7%BB%87%E5%92%8C%E4%B8%AA%E4%BA%BA%E6%8F%90%E4%BE%9B%E5%85%AC%E7%9B%8A%E7%AD%96%E5%88%92%EF%BC%8C%E6%90%AD%E5%BB%BA%E7%88%B1%E5%BF%83%E5%B9%B3%E5%8F%B0%EF%BC%9B%E5%B8%AE%E6%89%B6%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%EF%BC%8C%E6%95%91%E5%8A%A9%E5%9B%B0%E9%9A%BE%E7%BE%A4%E4%BC%97%EF%BC%9B%E4%B8%BA%E5%90%84%E7%B1%BB%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%E5%92%8C%E6%85%88%E5%96%84%E6%9C%BA%E6%9E%84%E5%AF%BB%E6%B1%82%E5%B8%AE%E5%8A%A9%E5%92%8C%E6%94%AF%E6%8C%81%E3%80%82%0A%20%20%20%20%20%E6%88%91%E4%BB%AC%E7%9A%84%E6%84%BF%E6%99%AF%0A%20%20%20%20%E6%89%93%E9%80%A0%E5%8F%97%E4%BA%BA%E5%B0%8A%E9%87%8D%E7%9A%84%E4%B8%80%E6%B5%81%E6%85%88%E5%96%84%E6%9C%BA%E6%9E%84%E3%80%82%0A%20%20%20%20%E2%80%94%E2%80%94%E4%BB%A5%E9%AB%98%E6%B0%B4%E5%87%86%E7%9A%84%E4%B8%93%E4%B8%9A%E6%9C%8D%E5%8A%A1%E5%B8%AE%E5%8A%A9%E6%AF%8F%E4%B8%AA%E4%BA%BA%E5%AE%9E%E7%8E%B0%E5%85%AC%E7%9B%8A%E6%A2%A6%E6%83%B3%EF%BC%9B%0A%20%20%20%20%E2%80%94%E2%80%94%E4%BB%A5%E9%AB%98%E6%95%88%E7%9A%84%E6%95%91%E5%8A%A9%E5%B9%B3%E5%8F%B0%E5%9C%A8%E7%AC%AC%E4%B8%80%E6%97%B6%E9%97%B4%E7%BB%99%E9%99%B7%E5%85%A5%E5%9B%B0%E5%A2%83%E7%9A%84%E4%BA%BA%E4%BB%AC%E4%BC%A0%E9%80%92%E6%B8%A9%E6%9A%96%EF%BC%9B%0A%20%20%20%20%E2%80%94%E2%80%94%E4%BB%A5%E6%BD%9C%E5%BF%83%E7%9A%84%E6%85%88%E5%96%84%E6%96%87%E5%8C%96%E4%BC%A0%E6%92%AD%E4%B8%BA%E5%BD%93%E4%BB%A3%E7%A4%BE%E4%BC%9A%E6%96%87%E5%8C%96%E6%B3%A8%E5%85%A5%E4%B8%B0%E5%AF%8C%E7%9A%84%E7%B2%BE%E7%A5%9E%E5%86%85%E6%B6%B5%E3%80%82%0A%20%20%20%20%E6%88%91%E4%BB%AC%E7%9A%84%E4%BD%BF%E5%91%BD%0A%20%20%20%20%E6%89%B6%E5%8D%B1%E5%8A%A9%E5%9B%B0%E3%80%81%E6%88%90%E5%B0%B1%E7%88%B1%E5%BF%83%EF%BC%8C%E5%BC%98%E6%89%AC%E6%85%88%E5%96%84%E6%96%87%E5%8C%96%E3%80%81%E5%80%A1%E5%AF%BC%E7%A4%BE%E4%BC%9A%E6%94%BF%E7%AD%96%EF%BC%8C%E5%8A%A9%E6%8E%A8%E7%A4%BE%E4%BC%9A%E5%92%8C%E8%B0%90%E3%80%82%0A%20%20%20%20%E6%88%91%E4%BB%AC%E7%9A%84%E4%BB%B7%E5%80%BC%E8%A7%82%0A%20%20%20%20%E5%B0%8A%E9%87%8D%20%E8%AF%9A%E4%BF%A1%20%E9%AB%98%E6%95%88%20%E5%88%9B%E6%96%B0%0A%20%20%20%20%E6%88%91%E4%BB%AC%E7%9A%84%E6%89%BF%E8%AF%BA%0A%20%20%20%20%E5%96%84%E6%AC%BE%E5%96%84%E7%94%A8%E3%80%81%E8%AF%9A%E4%BF%A1%E9%80%8F%E6%98%8E%E3%80%82',
'id': '233',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb003807e04be1ca1765afb7e58e77266024c9f130032449e92dc1ead4d721894aedbd91f3df4666f7a',
'mn': 82341031,
'rk': '32',
'title': '%E6%B2%B3%E5%8D%97%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 6759,
'uin': '1690729420'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E4%BB%A5%E4%B8%8B%E7%AE%80%E7%A7%B0%E6%B7%B1%E5%9C%B3%E9%9D%92%E5%9F%BA%E4%BC%9A%EF%BC%89%E6%88%90%E7%AB%8B%E4%BA%8E1988%E5%B9%B4%EF%BC%8C%E6%98%AF%E7%BB%8F%E5%B9%BF%E4%B8%9C%E7%9C%81%E6%B0%91%E9%97%B4%E7%BB%84%E7%BB%87%E7%AE%A1%E7%90%86%E5%B1%80%E6%B3%A8%E5%86%8C%EF%BC%88%E7%B2%A4%E5%9F%BA%E8%AF%81%E5%AD%97%E7%AC%AC0007%E5%8F%B7%EF%BC%89%EF%BC%8C%E7%94%B1%E5%85%B1%E9%9D%92%E5%9B%A2%E5%B9%BF%E4%B8%9C%E7%9C%81%E5%A7%94%E5%91%98%E4%BC%9A%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%B9%B6%E5%A7%94%E6%89%98%E5%85%B1%E9%9D%92%E5%9B%A2%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%A7%94%E5%91%98%E4%BC%9A%E7%AE%A1%E7%90%86%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%98%AF%E6%B7%B1%E5%9C%B3%E5%B8%82%E7%AC%AC%E4%B8%80%E5%AE%B6%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E6%9C%BA%E6%9E%84%E3%80%82%E4%B8%BB%E8%A6%81%E4%B8%9A%E5%8A%A1%E8%8C%83%E5%9B%B4%E6%98%AF%EF%BC%9A%E4%B8%BA%E4%BF%83%E8%BF%9B%E6%B7%B1%E5%9C%B3%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E5%90%84%E9%A1%B9%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%8F%91%E5%B1%95%E5%8B%9F%E9%9B%86%E3%80%81%E7%AE%A1%E7%90%86%E3%80%81%E4%BD%BF%E7%94%A8%E8%B5%84%E9%87%91%E3%80%82',
'id': '113',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fcf9bad811cae277b9ed11a225082993b2159f788fa6c43e7f9660b0b0fae3d19a63e236060149e4e1a0fb87f4100d45c',
'mn': 74977201,
'rk': '33',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3840,
'uin': '312660217'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E9%BE%99%E8%B6%8A%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%88%90%E7%AB%8B%E4%BA%8E2011%E5%B9%B411%E6%9C%8811%E6%97%A5%E3%80%822017%E5%B9%B41%E6%9C%886%E6%97%A5%EF%BC%8C%E4%BE%9D%E6%8D%AE%E3%80%8A%E6%85%88%E5%96%84%E6%B3%95%E3%80%8B%E5%8F%96%E5%BE%97%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E5%85%AC%E5%BC%80%E5%8B%9F%E6%8D%90%E8%B5%84%E6%A0%BC%E3%80%82%0A%0A%E6%B7%B1%E5%9C%B3%E5%B8%82%E9%BE%99%E8%B6%8A%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E4%BB%A5%E2%80%9C%E6%8A%9A%E6%85%B0%E6%88%98%E4%BA%89%E5%88%9B%E4%BC%A4%EF%BC%8C%E5%80%A1%E5%AF%BC%E4%BA%BA%E6%80%A7%E5%85%B3%E6%80%80%E2%80%9D%E4%B8%BA%E4%BD%BF%E5%91%BD%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E4%B8%BA%E6%88%98%E4%BA%89%E8%83%8C%E6%99%AF%E4%B8%8B%E7%9A%84%E4%B8%AA%E4%BD%93%E5%A3%AB%E5%85%B5%E6%8F%90%E4%BE%9B%E4%BA%BA%E6%80%A7%E5%85%B3%E6%80%80%EF%BC%8C%E4%B8%93%E6%B3%A8%E4%BA%8E%E8%80%81%E5%85%B5%E5%85%B3%E6%80%80%E8%AE%A1%E5%88%92%E3%80%81%E4%B8%A4%E5%B2%B8%E5%AF%BB%E4%BA%B2%E3%80%81%E9%98%B5%E4%BA%A1%E5%B0%86%E5%A3%AB%E9%81%97%E9%AA%B8%E5%AF%BB%E6%89%BE%E4%B8%8E%E5%BD%92%E8%91%AC%E7%AD%89%E9%A2%86%E5%9F%9F%E3%80%82',
'id': '289',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb07f213ebdf19208a0eab9723dfbe1775f96a73bcb1d413a4ff241a34c141e2c629eb5f383ffa77f14',
'mn': 69533858,
'rk': '34',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E9%BE%99%E8%B6%8A%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 14439,
'uin': '1330330815'},
{'desc': '%E4%B8%AD%E5%9B%BD%E6%AE%8B%E7%96%BE%E4%BA%BA%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%85%A8%E5%9B%BD%E6%80%A75A%E7%BA%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%88%90%E7%AB%8B%E4%BA%8E1984%E5%B9%B43%E6%9C%8815%E6%97%A5%E3%80%82%E5%AE%97%E6%97%A8%E6%98%AF%E5%BC%98%E6%89%AC%E4%BA%BA%E9%81%93%EF%BC%8C%E5%A5%89%E7%8C%AE%E7%88%B1%E5%BF%83%EF%BC%8C%E5%85%A8%E5%BF%83%E5%85%A8%E6%84%8F%E4%B8%BA%E6%AE%8B%E7%96%BE%E4%BA%BA%E6%9C%8D%E5%8A%A1%E3%80%82%E7%90%86%E5%BF%B5%E6%98%AF%E2%80%9C%E9%9B%86%E5%96%84%E2%80%9D%EF%BC%8C%E5%8D%B3%E9%9B%86%E5%90%88%E4%BA%BA%E9%81%93%E7%88%B1%E5%BF%83%EF%BC%8C%E5%96%84%E5%BE%85%E5%A4%A9%E4%B8%8B%E7%94%9F%E5%91%BD%E3%80%82%E5%B7%A5%E4%BD%9C%E7%9B%AE%E6%A0%87%E6%98%AF%E5%8A%AA%E5%8A%9B%E5%BB%BA%E8%AE%BE%E6%88%90%E4%B8%BA%E5%85%AC%E5%BC%80%E3%80%81%E9%80%8F%E6%98%8E%E3%80%81%E9%AB%98%E6%95%88%E7%8E%87%E5%92%8C%E9%AB%98%E5%85%AC%E4%BF%A1%E5%8A%9B%E7%9A%84%E4%B8%96%E7%95%8C%E4%B8%80%E6%B5%81%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82',
'id': '218',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb07aff2e7a92e05777a3e16afd414063e062a5a9c2b2796bd36d877240153b44948a4d01fb2eac184d',
'mn': 68704520,
'rk': '35',
'title': '%E4%B8%AD%E5%9B%BD%E6%AE%8B%E7%96%BE%E4%BA%BA%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 5243,
'uin': '2104458859'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%85%B3%E7%88%B1%E8%A1%8C%E5%8A%A8%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E8%8B%B1%E6%96%87%E5%90%8DShenzhen%20Project%20Care%20Foundation%EF%BC%8C%E4%BB%A5%E4%B8%8B%E7%AE%80%E7%A7%B0%E2%80%9C%E5%85%B3%E7%88%B1%E5%9F%BA%E9%87%91%E4%BC%9A%E2%80%9D%EF%BC%89%E6%88%90%E7%AB%8B%E4%BA%8E2011%E5%B9%B404%E6%9C%8828%E6%97%A5%E3%80%82%E5%85%B3%E7%88%B1%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%BB%8F%E6%B7%B1%E5%9C%B3%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%89%B9%E5%87%86%E6%88%90%E7%AB%8B%E7%9A%84%E5%9C%B0%E6%96%B9%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E4%B8%BA%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%A7%94%E5%AE%A3%E4%BC%A0%E9%83%A8%E3%80%82%E5%85%B6%E5%8E%9F%E5%A7%8B%E5%9F%BA%E9%87%91400%E4%B8%87%E5%85%83%E4%BA%BA%E6%B0%91%E5%B8%81%EF%BC%8C%E6%9D%A5%E6%BA%90%E4%BA%8E%E6%B7%B1%E5%9C%B3%E6%8A%A5%E4%B8%9A%E9%9B%86%E5%9B%A2%E6%8D%90%E8%B5%A0%E3%80%82%0A%0A%E3%80%80%E3%80%80%E5%85%B3%E7%88%B1%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E5%AE%A3%E4%BC%A0%E5%85%AC%E7%9B%8A%E7%90%86%E5%BF%B5%EF%BC%8C%E5%88%9B%E6%96%B0%E5%85%AC%E7%9B%8A%E6%A8%A1%E5%BC%8F%EF%BC%8C%E5%9F%B9%E8%82%B2%E5%85%AC%E7%9B%8A%E6%96%87%E5%8C%96%EF%BC%8C%E6%8E%A8%E5%8A%A8%E6%B7%B1%E5%9C%B3%E5%85%B3%E7%88%B1%E8%A1%8C%E5%8A%A8%E7%9A%84%E6%B7%B1%E5%85%A5%E5%8F%91%E5%B1%95%E3%80%82%E5%85%B3%E7%88%B1%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E4%B8%9A%E5%8A%A1%E8%8C%83%E5%9B%B4%E5%8C%85%E5%90%AB%EF%BC%9A%E8%B5%84%E5%8A%A9%E6%B7%B1%E5%9C%B3%E5%85%B3%E7%88%B1%E8%A1%8C%E5%8A%A8%E7%BB%84%E7%BB%87%E5%BC%80%E5%B1%95%E7%9A%84%E9%A1%B9%E7%9B%AE%EF%BC%8C%E5%8C%85%E6%8B%AC%E7%89%A9%E8%B4%A8%E5%85%B3%E7%88%B1%E3%80%81%E6%96%87%E5%8C%96%E5%85%B3%E7%88%B1%E3%80%81%E5%BF%83%E7%90%86%E5%85%B3%E7%88%B1%E3%80%81%E8%83%BD%E5%8A%9B%E5%BB%BA%E8%AE%BE%E3%80%81%E6%85%88%E5%96%84%E6%95%99%E8%82%B2%E3%80%81%E7%90%86%E8%AE%BA%E7%A0%94%E7%A9%B6%E3%80%81%E4%BA%A4%E6%B5%81%E5%90%88%E4%BD%9C%E7%AD%89%E9%A2%86%E5%9F%9F%E7%9A%84%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%EF%BC%9B%E7%BB%84%E7%BB%87%E5%AA%92%E4%BD%93%E5%BC%80%E5%B1%95%E5%85%AC%E7%9B%8A%E4%BF%A1%E6%81%AF%E4%BA%A4%E6%B5%81%EF%BC%8C%E5%AE%A3%E4%BC%A0%E5%85%AC%E7%9B%8A%E7%90%86%E5%BF%B5%E3%80%82%0A%0A%E3%80%80%E3%80%80%E9%95%BF%E8%BE%BE%E5%85%AB%E5%B9%B4%E7%9A%84%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E7%BB%8F%E9%AA%8C%E7%9A%84%E7%A7%AF%E7%B4%AF%EF%BC%8C%E7%AB%99%E5%9C%A8%E6%96%B0%E7%9A%84%E8%B5%B7%E7%82%B9%E4%B8%8A%EF%BC%8C%E5%85%B3%E7%88%B1%E5%9F%BA%E9%87%91%E4%BC%9A%E5%B0%B1%E6%98%AF%E8%A6%81%E5%9C%A8%E5%B7%A9%E5%9B%BA%E5%8E%9F%E6%9C%89%E7%9A%84%E5%85%B3%E7%88%B1%E6%A8%A1%E5%BC%8F%E5%92%8C%E5%81%9A%E6%B3%95%E7%9A%84%E5%90%8C%E6%97%B6%EF%BC%8C%E6%B5%B7%E7%BA%B3%E7%99%BE%E5%B7%9D%EF%BC%8C%E6%9B%B4%E5%B9%BF%E6%B3%9B%E5%9C%B0%E5%8A%A8%E5%91%98%E4%BC%81%E4%B8%9A%E3%80%81%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E3%80%81%E5%B8%82%E6%B0%91%E5%8F%82%E4%B8%8E%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%EF%BC%8C%E9%80%9A%E8%BF%87%E5%85%85%E6%B2%9B%E7%9A%84%E5%85%AC%E7%9B%8A%E8%B5%84%E9%87%91%E5%92%8C%E5%A4%A7%E9%87%8F%E7%9A%84%E5%85%AC%E7%9B%8A%E5%AE%A3%E4%BC%A0%E6%BF%80%E5%8F%91%E7%A4%BE%E4%BC%9A%E5%90%84%E7%95%8C%E7%9A%84%E5%85%AC%E7%9B%8A%E5%88%9B%E6%84%8F%EF%BC%8C%E5%B0%86%E5%85%A8%E7%A4%BE%E4%BC%9A%E7%9A%84%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E8%B5%84%E6%BA%90%E6%89%AD%E6%88%90%E4%B8%80%E8%82%A1%E7%BB%B3%EF%BC%8C%E5%BD%A2%E6%88%90%E5%90%88%E5%8A%9B%EF%BC%8C%E6%8E%A8%E5%8A%A8%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%8F%91%E5%B1%95%E3%80%82',
'id': '148',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F5eab0295b50a8e6b1e46e09df5b54d997e84fa3dc3efe5ac5d918d66027ccaeea1738f4d02cfa22d',
'mn': 64751852,
'rk': '36',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%85%B3%E7%88%B1%E8%A1%8C%E5%8A%A8%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 7788,
'uin': '908236568'},
{'desc': '%E4%B8%AD%E5%9B%BD%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E5%9F%BA%E9%87%91%E4%BC%9A%28%E8%8B%B1%E6%96%87%E7%BC%A9%E5%86%99CFCAC%20%29%E7%9A%84%E6%A0%87%E5%BF%97%E4%BB%A5%E7%BB%BF%E5%8F%B6%E3%80%81%E8%8A%B1%E8%95%BE%E4%B8%BA%E5%88%9B%E6%84%8F%E5%85%83%E7%B4%A0%EF%BC%8C%E7%BB%BF%E5%8F%B6%E8%B1%A1%E5%BE%81%E7%9D%80%E7%A4%BE%E4%BC%9A%E7%9A%84%E5%85%B3%E7%88%B1%EF%BC%8C%E8%8A%B1%E8%95%BE%E8%B1%A1%E5%BE%81%E7%9D%80%E5%AD%A9%E5%AD%90%EF%BC%8C%E7%BB%BF%E5%8F%B6%E5%8F%88%E5%83%8F%E4%B8%80%E5%8F%8C%E5%91%B5%E6%8A%A4%E5%84%BF%E7%AB%A5%E7%9A%84%E6%89%8B%EF%BC%8C%E4%BD%93%E7%8E%B0%E2%80%9C%E7%82%B9%E4%BA%AE%E7%94%9F%E5%91%BD%C2%B7%E5%88%9B%E9%80%A0%E6%9C%AA%E6%9D%A5%E2%80%9D%E7%9A%84%E6%A0%B8%E5%BF%83%E7%90%86%E5%BF%B5%E3%80%82%E6%95%B4%E4%BD%93%E6%9E%84%E5%9B%BE%E7%AE%80%E5%8D%95%E6%98%8E%E4%BA%86%EF%BC%8C%E4%BB%A5%E8%B1%A1%E5%BE%81%E7%94%9F%E5%91%BD%E7%9A%84%E7%BB%BF%E8%89%B2%E4%B8%BA%E4%B8%BB%E8%89%B2%EF%BC%8C%E5%AF%93%E6%84%8F%E5%9F%BA%E9%87%91%E4%BC%9A%E5%9C%A8%E5%85%A8%E7%A4%BE%E4%BC%9A%E7%88%B1%E5%BF%83%E4%BA%BA%E5%A3%AB%E7%9A%84%E6%94%AF%E6%8C%81%E4%B8%8B%E8%93%AC%E5%8B%83%E5%8F%91%E5%B1%95%E3%80%82%0A%0A%20%20%20%20%20%20%20%E4%B8%AD%E5%9B%BD%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%BB%BA%E7%AB%8B%E4%BA%8E1986%E5%B9%B4%EF%BC%8C%E6%98%AF%E4%B8%AD%E5%8D%8E%E4%BA%BA%E6%B0%91%E5%85%B1%E5%92%8C%E5%9B%BD%E6%96%87%E5%8C%96%E9%83%A8%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%B0%91%E6%94%BF%E9%83%A8%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E5%9B%A2%E4%BD%93%E3%80%82%20%0A%0A%E3%80%80%E3%80%80%E6%9C%AC%E4%BC%9A%E7%A7%89%E6%89%BF%E2%80%9C%E7%82%B9%E4%BA%AE%E5%B8%8C%E6%9C%9B%C2%B7%E5%88%9B%E9%80%A0%E6%9C%AA%E6%9D%A5%E2%80%9D%E7%9A%84%E6%A0%B8%E5%BF%83%E7%90%86%E5%BF%B5%EF%BC%8C%E5%9D%9A%E6%8C%81%E4%BB%A5%E2%80%9C%E4%B8%BA%E4%BA%86%E4%BF%83%E8%BF%9B%E4%B8%8E%E7%B9%81%E8%8D%A3%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E4%BA%8B%E4%B8%9A%EF%BC%8C%E7%94%A8%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E5%BD%A2%E5%BC%8F%E9%99%B6%E5%86%B6%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%83%85%E6%93%8D%EF%BC%8C%E5%BC%80%E5%8F%91%E5%85%B6%E6%99%BA%E5%8A%9B%EF%BC%8C%E4%BD%BF%E5%85%B6%E5%81%A5%E5%BA%B7%E6%88%90%E9%95%BF%E2%80%9D%E4%B8%BA%E5%AE%97%E6%97%A8%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%8A%8A%E6%88%91%E5%9B%BD%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E5%9F%B9%E5%85%BB%E6%88%90%E6%9C%89%E7%90%86%E6%83%B3%E3%80%81%E6%9C%89%E6%96%87%E5%8C%96%E3%80%81%E6%9C%89%E9%81%93%E5%BE%B7%E3%80%81%E6%9C%89%E7%BA%AA%E5%BE%8B%E7%9A%84%E4%B8%80%E4%BB%A3%E6%96%B0%E4%BA%BA%E3%80%82%20%20%0A%20%20%20%20%20%20%20%20%E6%9C%AC%E4%BC%9A%E7%9A%84%E4%B8%9A%E5%8A%A1%E8%8C%83%E5%9B%B4%E6%98%AF%EF%BC%9A%E6%A0%B9%E6%8D%AE%E5%9B%BD%E5%AE%B6%E6%9C%89%E5%85%B3%E8%A7%84%E5%AE%9A%E5%8B%9F%E9%9B%86%E3%80%81%E8%AE%BE%E7%AB%8B%E3%80%81%E7%AE%A1%E7%90%86%E3%80%81%E5%92%8C%E8%A7%84%E8%8C%83%E5%9C%B0%E4%BD%BF%E7%94%A8%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%AD%8C%E8%88%9E%E3%80%81%E6%88%8F%E5%89%A7%E3%80%81%E4%B9%A6%E7%94%BB%E7%AD%89%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E6%95%99%E8%82%B2%E4%B8%93%E9%A1%B9%E5%9F%BA%E9%87%91%2C%E5%8F%91%E5%B1%95%E6%96%87%E5%8C%96%E6%95%99%E8%82%B2%E4%BA%8B%E4%B8%9A%EF%BC%9B%E7%BB%84%E7%BB%87%E6%9C%89%E7%9B%8A%E4%BA%8E%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E8%BA%AB%E5%BF%83%E5%81%A5%E5%BA%B7%E7%9A%84%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E6%BC%94%E5%87%BA%E3%80%81%E5%B1%95%E8%A7%88%E3%80%81%E6%AF%94%E8%B5%9B%E3%80%81%E4%BA%A4%E6%B5%81%E3%80%81%E5%BD%B1%E8%A7%86%E5%88%B6%E4%BD%9C%E7%AD%89%E5%90%84%E9%A1%B9%E6%B4%BB%E5%8A%A8%EF%BC%9B%E8%B5%84%E5%8A%A9%E8%80%81%E3%80%81%E5%B0%91%E3%80%81%E8%BE%B9%E3%80%81%E7%A9%B7%E5%9C%B0%E5%8C%BA%E7%9A%84%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E6%B4%BB%E5%8A%A8%E5%92%8C%E7%9B%B8%E5%85%B3%E7%9A%84%E5%9C%BA%E6%89%80%E5%BB%BA%E8%AE%BE%EF%BC%9B%E9%80%89%E6%8B%94%E4%BA%BA%E6%89%8D%E3%80%81%E9%87%8D%E7%82%B9%E6%89%B6%E6%8C%81%EF%BC%8C%E7%A7%AF%E6%9E%81%E5%BC%80%E5%B1%95%E5%9B%BD%E5%86%85%E5%A4%96%E8%89%BA%E6%9C%AF%E4%BA%A4%E6%B5%81%E3%80%82%20%0A%20%20%E6%9C%AC%E4%BC%9A%E8%87%AA%E6%88%90%E7%AB%8B%E4%BB%A5%E6%9D%A5%EF%BC%8C%E5%9C%A8',
'id': '141',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0fa522571aadfccc82bbaaf432148a3940b12a9ae2c015d61b3b5fbf96fd95a534e0dccc7f58742b1',
'mn': 60355603,
'rk': '37',
'title': '%E4%B8%AD%E5%9B%BD%E5%B0%91%E5%B9%B4%E5%84%BF%E7%AB%A5%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2920,
'uin': '1582160795'},
{'desc': '%E4%B8%BA%E5%8F%91%E6%89%AC%E4%BA%BA%E9%81%93%E4%B8%BB%E4%B9%89%E7%B2%BE%E7%A5%9E%EF%BC%8C%E5%BC%98%E6%89%AC%E6%B8%A9%E5%B7%9E%E4%BA%BA%E4%B9%90%E5%96%84%E5%A5%BD%E6%96%BD%E7%9A%84%E4%BC%A0%E7%BB%9F%E7%BE%8E%E5%BE%B7%EF%BC%8C%E9%80%9A%E8%BF%87%E7%BD%91%E7%BB%9C%E4%B9%90%E6%8D%90%E5%B9%B3%E5%8F%B0%EF%BC%8C%E5%80%A1%E5%AF%BC%E2%80%9C%E6%97%A5%E8%A1%8C%E4%B8%80%E5%96%84%E2%80%9D%E7%9A%84%E6%85%88%E5%96%84%E7%90%86%E5%BF%B5%EF%BC%8C%E5%BC%80%E5%B1%95%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E3%80%81%E7%AD%B9%E5%8B%9F%E5%96%84%E6%AC%BE%E3%80%81%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E3%80%81%E8%B5%88%E7%81%BE%E6%95%91%E5%8A%A9%E3%80%81%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%E3%80%81%E6%85%88%E5%96%84%E5%AE%A3%E4%BC%A0%E3%80%81%E7%BD%91%E7%BB%9C%E4%B9%90%E6%8D%90%E7%AD%89%EF%BC%8C%E6%89%B6%E5%8A%A9%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%EF%BC%8C%E4%BF%83%E8%BF%9B%E7%A4%BE%E4%BC%9A%E5%85%AC%E5%85%B1%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%8F%91%E5%B1%95%EF%BC%8C%E6%8E%A8%E5%8A%A8%E7%A4%BE%E4%BC%9A%E4%B8%BB%E4%B9%89%E5%92%8C%E8%B0%90%E7%A4%BE%E4%BC%9A%E5%BB%BA%E8%AE%BE%E3%80%82',
'id': '182',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb020dc5ba5247ba913b3e09661b6e0f1c4022d5028beb79dbdd5e35a8ade7f61e60bb4c3097e17f998',
'mn': 59610370,
'rk': '38',
'title': '%E6%B8%A9%E5%B7%9E%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 4061,
'uin': '2089207055'},
{'desc': '%E8%87%AA2001%E5%B9%B4%E8%87%B32016%E5%B9%B4%E5%BA%95%EF%BC%8C%E9%80%8F%E8%BF%87%E6%8D%A1%E5%9B%9E%E7%8F%8D%E7%8F%A0%E8%AE%A1%E5%88%92%E5%9C%A8%E5%85%A8%E5%9B%BD%E4%B8%80%E5%AF%B9%E4%B8%80%E8%B5%84%E5%8A%A948459%E5%90%8D%E9%AB%98%E4%B8%AD%E7%8F%8D%E7%8F%A0%E7%94%9F%E3%80%81200%E5%90%8D%E5%88%9D%E4%B8%AD%E7%94%9F%EF%BC%8C9413%E5%90%8D%E5%A4%A7%E5%AD%A6%E7%94%9F%EF%BC%9B%E5%9C%A8%E5%9B%9B%E5%B7%9D%E7%9C%81%E5%B8%83%E6%8B%96%E5%8E%BF%E6%88%90%E7%AB%8B13%E4%B8%AA%E5%BD%9D%E6%97%8F%E5%84%BF%E7%AB%A5%E7%8F%AD%EF%BC%8C%E8%B5%84%E5%8A%A9702%E5%90%8D%E5%87%89%E5%B1%B1%E5%BD%9D%E6%97%8F%E5%84%BF%E7%AB%A5%EF%BC%9B%E6%8D%90%E5%BB%BA364%E6%89%80%E7%88%B1%E5%BF%83%E5%B0%8F%E5%AD%A6%EF%BC%9B%E8%AE%BE%E7%AB%8B157%E9%97%B4%E7%88%B1%E5%BF%83%E5%9B%BE%E4%B9%A6%E5%AE%A4%E3%80%82%E6%88%90%E7%AB%8B%E8%87%B3%E4%BB%8A%EF%BC%8C%E5%85%88%E5%90%8E%E5%BC%80%E5%B1%95%E4%BA%86%E6%8D%A1%E5%9B%9E%E7%8F%8D%E7%8F%A0%E8%AE%A1%E5%88%92%E3%80%81%E7%88%B1%E5%BF%83%E5%B0%8F%E5%AD%A6%E3%80%81%E7%88%B1%E5%BF%83%E5%9B%BE%E4%B9%A6%E5%AE%A4%E3%80%81%E5%BD%9D%E6%97%8F%E5%84%BF%E7%AB%A5%E7%8F%AD%E3%80%81512%E6%B1%B6%E5%B7%9D%E5%9C%B0%E9%9C%87%E6%8F%B4%E5%8A%A9%E4%B8%93%E9%A1%B9%E3%80%81%E4%B8%80%E4%B8%AA%E5%AD%A9%E5%AD%90%E4%B8%80%E4%B8%AA%E8%9B%8B%EF%BC%88%E7%BB%93%E9%A1%B9%EF%BC%89%E3%80%81%E8%A5%BF%E9%83%A8%E5%9C%B0%E5%8C%BA%E5%B8%88%E8%B5%84%E5%9F%B9%E8%AE%AD%EF%BC%88%E7%BB%93%E9%A1%B9%EF%BC%89%E3%80%82',
'id': '250',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb01b1bb4f46a8b59f42954a1f7d8fa2df587bdf0644134edb13ec140a991539d88197f05f85c039d8f',
'mn': 56978179,
'rk': '39',
'title': '%E6%B5%99%E6%B1%9F%E7%9C%81%E6%96%B0%E5%8D%8E%E7%88%B1%E5%BF%83%E6%95%99%E8%82%B2%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 7976,
'uin': '1193659955'},
{'desc': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E4%B8%80%E4%B8%AA%E5%85%A8%E6%96%B0%E7%9A%84%E4%B8%BA%E5%A6%87%E5%84%BF%E5%8F%91%E5%A3%B0%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%97%A8%E5%9C%A8%E6%94%B9%E5%96%84%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E7%94%9F%E5%AD%98%E7%8E%AF%E5%A2%83%EF%BC%8C%E6%8F%90%E9%AB%98%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E7%B4%A0%E8%B4%A8%EF%BC%8C%E6%95%B4%E5%90%88%E8%B5%84%E6%BA%90%E6%8E%A8%E5%8A%A8%E7%A4%BE%E4%BC%9A%E5%88%9B%E6%96%B0%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E3%80%82',
'id': '256',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F86ec43f40ccfebdefa050ffd403b3170893f49522da6746a0c4a7643d032c788927a850f251e04699a2cd314d50e388d',
'mn': 55605788,
'rk': '40',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 4969,
'uin': '3414335460'},
{'desc': '%E3%80%8E%E5%91%B5%E6%8A%A4%E5%96%84%E8%89%AF%EF%BC%8C%E6%BB%8B%E6%B6%A6%E5%A4%A7%E7%88%B1%E3%80%8F%E2%80%94%E2%80%94%E5%8A%A9%E5%8A%9B%E5%BF%97%E6%84%BF%E8%80%85%EF%BC%88%E4%B9%89%E5%B7%A5%EF%BC%89%E5%BC%80%E5%B1%95%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E5%9F%BA%E9%87%91%E4%BC%9A%E4%BA%8E2012%E5%B9%B411%E6%9C%8830%E6%97%A5%E6%88%90%E7%AB%8B%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E8%B5%84%E5%8A%A9%E2%80%9C%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E3%80%81%E5%B8%AE%E5%AD%A4%E5%8A%A9%E6%AE%8B%E3%80%81%E6%94%AF%E6%95%99%E5%8A%A9%E5%AD%A6%E3%80%81%E9%9D%92%E5%B0%91%E5%B9%B4%E6%8F%B4%E5%8A%A9%E3%80%81%E7%A7%91%E6%8A%80%E6%8E%A8%E5%B9%BF%E3%80%81%E5%8C%BB%E7%96%97%E5%8D%AB%E7%94%9F%E3%80%81%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4%E3%80%81%E7%A4%BE%E5%8C%BA%E5%BB%BA%E8%AE%BE%E3%80%81%E5%A4%A7%E5%9E%8B%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%E3%80%81%E5%BA%94%E6%80%A5%E6%95%91%E6%8F%B4%E2%80%9D%E7%AD%89%E5%BF%97%E6%84%BF%E5%85%AC%E7%9B%8A%E6%9C%8D%E5%8A%A1%E9%A1%B9%E7%9B%AE%E4%BB%A5%E5%8F%8A%E5%BF%97%E6%84%BF%E8%80%85%E5%9F%B9%E8%AE%AD%E3%80%81%E5%BF%97%E6%84%BF%E8%80%85%E6%9D%83%E7%9B%8A%E4%BF%9D%E9%9A%9C%E7%AD%89%E4%B8%8E%E5%BF%97%E6%84%BF%E8%80%85%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E6%9C%89%E5%85%B3%E7%9A%84%E9%A1%B9%E7%9B%AE%E3%80%82%E6%88%91%E4%BB%AC%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E4%BC%A0%E6%92%AD%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E7%90%86%E5%BF%B5%EF%BC%8C%E5%BC%98%E6%89%AC%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E7%B2%BE%E7%A5%9E%EF%BC%8C%E6%8F%90%E9%AB%98%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E6%B0%B4%E5%B9%B3%EF%BC%8C%E6%8E%A8%E5%8A%A8%E5%BF%97%E6%84%BF%E8%80%85%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E3%80%82',
'id': '88',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fd056e4e1655ecbb699dd99cdb2327504f6225b3e0b9c4dcd1869921767dc1a0bbe6e056250811a95',
'mn': 55013641,
'rk': '41',
'title': '%E6%B7%B1%E5%9C%B3%E5%B8%82%E5%BF%97%E6%84%BF%E6%9C%8D%E5%8A%A1%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 5721,
'uin': '1474127055'},
{'desc': '%E4%BA%91%E5%8D%97%E7%9C%81%E7%BB%BF%E8%89%B2%E7%8E%AF%E5%A2%83%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E7%AE%80%E7%A7%B0%E7%BB%BF%E8%89%B2%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%89%EF%BC%8C%E6%98%AF%E4%BA%8E2008%E5%B9%B41%E6%9C%88%E5%9C%A8%E4%BA%91%E5%8D%97%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%B3%A8%E5%86%8C%E6%88%90%E7%AB%8B%EF%BC%8C%E4%BB%A5%E5%A4%9A%E9%87%8D%E6%95%88%E7%9B%8A%E9%80%A0%E6%9E%97%E3%80%81%E4%BF%9D%E6%8A%A4%E7%94%9F%E7%89%A9%E5%A4%9A%E6%A0%B7%E6%80%A7%E3%80%81%E4%BF%83%E8%BF%9B%E7%A4%BE%E5%8C%BA%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%8F%91%E5%B1%95%E4%B8%BA%E5%AE%97%E6%97%A8%E7%9A%84%E7%8E%AF%E4%BF%9D%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%822013%E5%B9%B4%EF%BC%8C%E8%A2%AB%E8%AF%84%E4%B8%BA%E4%B8%AD%E5%9B%BD4A%E7%BA%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%EF%BC%8C%E5%B9%B6%E8%BF%9E%E7%BB%AD%E8%8E%B7%E5%BE%97%E5%85%AC%E7%9B%8A%E6%80%A7%E6%8D%90%E8%B5%A0%E7%A8%8E%E5%89%8D%E6%89%A3%E9%99%A4%E8%B5%84%E6%A0%BC%E3%80%82',
'id': '87',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Ff02254c802ff01cdcd6a050b1b83ae6995f7d59c8e4a5a60f36b43285150d4644e6b2cbf86ddd61fa068fa8bf224efce',
'mn': 52941385,
'rk': '42',
'title': '%E4%BA%91%E5%8D%97%E7%9C%81%E7%BB%BF%E8%89%B2%E7%8E%AF%E5%A2%83%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2266,
'uin': '2511343539'},
{'desc': '%E5%8C%97%E4%BA%AC%E8%81%94%E7%9B%8A%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%BB%8F%E5%8C%97%E4%BA%AC%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%802011%E5%B9%B4%E6%89%B9%E5%87%86%E8%AE%BE%E7%AB%8B%E7%9A%84%E5%85%A8%E9%A2%86%E5%9F%9F%E6%B0%91%E9%97%B4%E5%85%AC%E5%8B%9F%E6%85%88%E5%96%84%E5%B9%B3%E5%8F%B0%E3%80%82%E9%80%9A%E8%BF%87%E5%9C%A8%E6%95%99%E8%82%B2%E3%80%81%E6%89%B6%E8%B4%AB%E3%80%81%E7%8E%AF%E4%BF%9D%E3%80%81%E8%89%BA%E6%9C%AF%E3%80%81%E6%85%88%E5%96%84%E5%8F%82%E4%B8%8E%E4%BA%94%E4%B8%AA%E9%A2%86%E5%9F%9F%E5%BC%80%E5%B1%95%E6%B7%B1%E5%BA%A6%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%E6%88%96%E6%94%AF%E6%8C%81%E6%9C%89%E5%85%AC%E7%9B%8A%E7%90%86%E6%83%B3%E7%9A%84%E4%BA%BA%E6%88%96%E7%BB%84%E7%BB%87%EF%BC%8C%E8%81%94%E5%90%88%E5%AE%9E%E7%8E%B0%E2%80%9C%E9%80%8F%E6%98%8E%E3%80%81%E6%98%93%E8%A1%8C%E3%80%81%E6%9C%89%E6%95%88%E2%80%9D%E7%9A%84%E6%B7%B1%E5%BA%A6%E5%85%AC%E7%9B%8A%EF%BC%8C%E6%89%93%E9%80%A0%E4%B8%AD%E5%9B%BD%E6%B7%B1%E5%BA%A6%E5%85%AC%E7%9B%8A%E8%81%94%E5%90%88%E5%B9%B3%E5%8F%B0%E3%80%82',
'id': '132',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb08ff1ab3152d065dd9cedd9cf136a984aeac774d2e4c2624867542c46c2f72cc0d2d4e793c19ef55d',
'mn': 51741264,
'rk': '43',
'title': '%E5%8C%97%E4%BA%AC%E8%81%94%E7%9B%8A%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3164,
'uin': '2484300456'},
{'desc': '%E5%9F%BA%E9%87%91%E4%BC%9A%E5%86%85%E8%AE%BE%E7%90%86%E4%BA%8B%E4%BC%9A%E5%92%8C%E5%8A%9E%E5%85%AC%E5%AE%A4%E3%80%82%E7%90%86%E4%BA%8B%E4%BC%9A%E7%8E%B0%E6%9C%89%E7%90%86%E4%BA%8B15%E5%90%8D%EF%BC%8C%E5%85%B6%E4%B8%AD%E5%90%8D%E8%AA%89%E7%90%86%E4%BA%8B%E9%95%BF1%E5%90%8D%EF%BC%8C%E7%90%86%E4%BA%8B%E9%95%BF1%E5%90%8D%EF%BC%8C%E5%89%AF%E7%90%86%E4%BA%8B%E9%95%BF5%E5%90%8D%EF%BC%8C%E7%A7%98%E4%B9%A6%E9%95%BF1%E5%90%8D%E3%80%818%E5%90%8D%E7%90%86%E4%BA%8B%E3%80%82%E5%8F%A6%E6%9C%89%EF%BC%8C%E7%9B%91%E4%BA%8B3%E5%90%8D%EF%BC%8C%E7%9B%91%E7%9D%A3%E5%91%988%E5%90%8D%EF%BC%8C%E5%9F%BA%E9%87%91%E4%BC%9A%E4%B8%8B%E8%AE%BE%E5%8A%9E%E5%85%AC%E5%AE%A4%EF%BC%8C%E6%9C%89%E5%B7%A5%E4%BD%9C%E4%BA%BA%E5%91%983%E5%90%8D%E3%80%82',
'id': '171',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F2bcefe168ed32aa493fa26b02ab81a354374141e3c7ca58cbcad4e783f5af6ba29d7ffaec6031a5c',
'mn': 47705094,
'rk': '44',
'title': '%E7%94%98%E8%82%83%E7%9C%81%E7%8E%9B%E6%9B%B2%E5%8E%BF%E6%95%99%E7%83%AD%E6%95%99%E8%82%B2%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 5086,
'uin': '262725265'},
{'desc': '%E5%90%89%E6%9E%97%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%EF%BC%88%E7%AE%80%E7%A7%B0%E2%80%9C%E5%90%89%E6%9E%97%E7%9C%81%E9%9D%92%E5%9F%BA%E4%BC%9A%E2%80%9D%EF%BC%89%EF%BC%8C%E9%9A%B6%E5%B1%9E%E4%BA%8E%E5%85%B1%E9%9D%92%E5%9B%A2%E5%90%89%E6%9E%97%E7%9C%81%E5%A7%94%EF%BC%8C%E6%88%90%E7%AB%8B%E4%BA%8E1992%E5%B9%B4%EF%BC%8C%E6%98%AF%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%84%E5%85%A8%E7%9C%81%E6%80%A7%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E5%9C%B0%E6%96%B9%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%98%AF%E5%90%89%E6%9E%97%E7%9C%81%E5%B8%8C%E6%9C%9B%E5%B7%A5%E7%A8%8B%E7%9A%84%E5%94%AF%E4%B8%80%E5%90%88%E6%B3%95%E5%AE%9E%E6%96%BD%E6%9C%BA%E6%9E%84%E3%80%82',
'id': '104',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F74b5a1ab59fb6d4b683e86216e34b6c0d4a5b2a516af256038646febe21e0ccbe1439dbc778786a1',
'mn': 46570981,
'rk': '45',
'title': '%E5%90%89%E6%9E%97%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1906,
'uin': '822996407'},
{'desc': '%E9%87%8D%E5%BA%86%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1995%E5%B9%B4%EF%BC%8C%E6%98%AF%E7%94%B1%E7%83%AD%E5%BF%83%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%85%AC%E6%B0%91%E3%80%81%E6%B3%95%E4%BA%BA%E5%8F%8A%E5%85%B6%E4%BB%96%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E8%87%AA%E6%84%BF%E5%8F%82%E5%8A%A0%E7%9A%84%E5%85%A8%E5%B8%82%E6%80%A7%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E5%85%AC%E7%9B%8A%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%EF%BC%9B%E6%98%AF%E4%BE%9D%E6%B3%95%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E3%80%81%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E8%A1%8C%E4%B8%9A%E6%80%A7%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E3%80%82%0A%E6%9C%AC%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%9A%E9%81%B5%E5%AE%88%E5%9B%BD%E5%AE%B6%E5%AE%AA%E6%B3%95%E3%80%81%E6%B3%95%E5%BE%8B%E3%80%81%E6%B3%95%E8%A7%84%E5%92%8C%E6%9C%89%E5%85%B3%E6%94%BF%E7%AD%96%EF%BC%8C%E5%8F%91%E6%89%AC%E4%BA%BA%E9%81%93%E4%B8%BB%E4%B9%89%E7%B2%BE%E7%A5%9E%EF%BC%8C%E5%BC%98%E6%89%AC%E4%B8%AD%E5%8D%8E%E6%B0%91%E6%97%8F%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E3%80%81%E4%B9%90%E5%96%84%E5%A5%BD%E6%96%BD%E7%9A%84%E4%BC%A0%E7%BB%9F%E7%BE%8E%E5%BE%B7%EF%BC%8C%E5%80%A1%E5%AF%BC%E7%AC%A6%E5%90%88%E6%97%B6%E4%BB%A3%E7%89%B9%E5%BE%81%E7%9A%84%E7%A4%BE%E4%BC%9A%E9%81%93%E5%BE%B7%E9%A3%8E%E5%B0%9A%EF%BC%8C%E5%8F%91%E5%8A%A8%E7%A4%BE%E4%BC%9A%E5%90%84%E7%95%8C%E5%8A%9B%E9%87%8F%EF%BC%8C%E5%9C%A8%E5%9B%BD%E5%86%85%E5%A4%96%E5%8F%8A%E6%B8%AF%E3%80%81%E6%BE%B3%E3%80%81%E5%8F%B0%E5%9C%B0%E5%8C%BA%E7%9A%84%E8%87%AA%E7%84%B6%E4%BA%BA%E3%80%81%E6%B3%95%E4%BA%BA%E3%80%81%E6%88%96%E5%85%B6%E4%BB%96%E7%BB%84%E7%BB%87%E4%B8%AD%E7%AD%B9%E5%8B%9F%E6%85%88%E5%96%84%E8%B5%84%E9%87%91%EF%BC%8C%E5%BC%80%E5%B1%95%E5%AE%89%E8%80%81%E6%89%B6%E5%B9%BC%E3%80%81%E5%8A%A9%E6%AE%8B%E6%B5%8E%E5%9B%B0%E3%80%81%E8%B5%88%E7%81%BE%E6%95%91%E6%8F%B4%E3%80%81%E5%8A%A9%E5%AD%A6%E5%85%B4%E6%95%99%E3%80%81%E5%85%AC%E7%9B%8A%E6%8F%B4%E5%8A%A9%E6%B4%BB%E5%8A%A8%EF%BC%8C%E7%A7%AF%E6%9E%81%E5%8F%82%E4%B8%8E%E5%9B%BD%E5%86%85%E5%A4%96%E6%85%88%E5%96%84%E4%BA%A4%E6%B5%81%E5%90%88%E4%BD%9C%EF%BC%8C%E5%8F%91%E5%B1%95%E9%87%8D%E5%BA%86%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E3%80%82%0A%20%E6%9C%AC%E4%BC%9A%E7%9A%84%E4%BD%9C%E7%94%A8%EF%BC%9A%E5%85%85%E5%88%86%E5%8F%91%E6%8C%A5%E6%9C%AC%E4%BC%9A%E5%9C%A8%E4%BF%9D%E9%9A%9C%E7%BE%A4%E4%BC%97%E5%9F%BA%E6%9C%AC%E7%94%9F%E6%B4%BB%E3%80%81%E5%BB%BA%E7%AB%8B%E4%B8%8E%E7%A4%BE%E4%BC%9A%E4%BF%9D%E9%99%A9%E3%80%81%E7%A4%BE%E4%BC%9A%E6%95%91%E5%8A%A9%E3%80%81%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E3%80%81%E6%85%88%E5%96%84%E4%BA%8B%E4%B8%9A%E7%9B%B8%E8%A1%94%E6%8E%A5%E7%9A%84%E7%A4%BE%E4%BC%9A%E4%BF%9D%E9%9A%9C%E4%BD%93%E7%B3%BB%E4%B8%AD%E7%9A%84%E6%8B%BE%E9%81%97%E8%A1%A5%E7%BC%BA%E4%BD%9C%E7%94%A8%EF%BC%9B%E5%9C%A8%E6%95%91%E5%8A%A9%E4%B8%B4%E6%97%B6%E5%9B%B0%E9%9A%BE%E7%BE%A4%E4%BD%93%E4%B8%AD%E7%9A%84%E8%BE%85%E5%8A%A9%E6%80%A7%E4%BD%9C%E7%94%A8%EF%BC%9B%E5%9C%A8%E4%B8%BA%E6%9E%84%E5%BB%BA%E5%92%8C%E8%B0%90%E5%B9%B8%E7%A6%8F%E9%87%8D%E5%BA%86%E4%B8%AD%E7%9A%84%E5%8A%A9%E6%8E%A8%E4%BD%9C%E7%94%A8%E3%80%82%0A%20%20%20%20%20%20%E6%9C%AC%E4%BC%9A%E7%9A%84%E4%B8%9A%E5%8A%A1%E8%8C%83%E5%9B%B4%EF%BC%9A%20%E7%AD%B9%E5%8B%9F%E5%96%84%E6%AC%BE%E3%80%81%E8%B5%88%E7%81%BE%E6%95%91%E5%8A%A9%E3%80%81%E6%85%88%E5%96%84%E6%95%91%E5%8A%A9%E3%80%81%E5%85%AC%E7%9B%8A%E6%8F%B4%E5%8A%A9%E3%80%81%E5%85%B4%E5%8A%9E%E6%9C%BA%E6%9E%84%E3%80%81%E4%BA%A4%E6%B5%81%E5%90%88%E4%BD%9C%E3%80%81%E6%85%88%E5%96%84%E5%AE%A3%E4%BC%A0%E3%80%81%E8%A1%A8%E5%BD%B0%E5%A5%96%E5%8A%B1%E3%80%81%E5%8F%8D%E6%98%A0%E8%AF%89%E6%B1%82%E7%AD%89%E3%80%82',
'id': '212',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F90e9eb12c7a0110fb972988fd402374e806f192caa88502da34e7082d05cb0a48f9441accc3ba898',
'mn': 46411806,
'rk': '46',
'title': '%E9%87%8D%E5%BA%86%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 9210,
'uin': '921593235'},
{'desc': '%20%E5%8D%97%E6%98%8C%E9%9D%92%E5%9F%BA%E4%BC%9A%E6%98%AF%E7%94%B1%E5%85%B1%E9%9D%92%E5%9B%A2%E5%8D%97%E6%98%8C%E5%B8%82%E5%A7%94%E3%80%81%E5%8D%97%E6%98%8C%E5%B8%82%E9%9D%92%E5%B9%B4%E8%81%94%E5%90%88%E4%BC%9A%E3%80%81%E5%8F%91%E8%B5%B7%E5%88%9B%E5%8A%9E%E5%92%8C%E7%9B%B4%E6%8E%A5%E9%A2%86%E5%AF%BC%EF%BC%8C%E5%85%B7%E6%9C%89%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E3%80%82%E6%9C%AC%E5%9F%BA%E9%87%91%E4%BC%9A%E9%9D%A2%E5%90%91%E5%8D%97%E6%98%8C%E5%B8%82%E5%86%85%E5%85%AC%E4%BC%97%E5%8B%9F%E6%8D%90%EF%BC%8C%E5%90%8C%E6%97%B6%E6%8E%A5%E5%8F%97%E5%B8%82%E5%A4%96%E5%85%AC%E4%BC%97%E7%9A%84%E6%8D%90%E5%8A%A9%E3%80%82%0A%20%20%20%20%E6%9C%AC%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E9%81%B5%E5%AE%88%E5%AE%AA%E6%B3%95%E3%80%81%E6%B3%95%E5%BE%8B%E3%80%81%E6%B3%95%E8%A7%84%E5%92%8C%E5%9B%BD%E5%AE%B6%E6%94%BF%E7%AD%96%EF%BC%8C%E9%81%B5%E5%AE%88%E7%A4%BE%E4%BC%9A%E9%81%93%E5%BE%B7%E9%A3%8E%E5%B0%9A%E3%80%82%E4%BA%89%E5%8F%96%E6%B5%B7%E5%86%85%E5%A4%96%E5%85%B3%E5%BF%83%E5%8D%97%E6%98%8C%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%9B%A2%E4%BD%93%E3%80%81%E4%BA%BA%E5%A3%AB%E7%9A%84%E6%94%AF%E6%8C%81%E5%92%8C%E8%B5%9E%E5%8A%A9%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%8D%97%E6%98%8C%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E5%B7%A5%E4%BD%9C%E3%80%81%E7%A4%BE%E4%BC%9A%E6%95%99%E8%82%B2%E3%80%81%E7%A7%91%E6%8A%80%E3%80%81%E6%96%87%E5%8C%96%E3%80%81%E4%BD%93%E8%82%B2%E3%80%81%E5%8D%AB%E7%94%9F%E3%80%81%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E4%BA%8B%E4%B8%9A%E5%92%8C%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4%E7%AD%89%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%8F%91%E5%B1%95%EF%BC%8C%E5%A5%96%E5%8A%B1%E5%90%84%E7%B1%BB%E9%9D%92%E5%B0%91%E5%B9%B4%E4%BC%98%E7%A7%80%E4%BA%BA%E6%89%8D%EF%BC%8C%E6%89%B6%E6%8C%81%E9%9D%92%E5%B9%B4%E5%88%9B%E4%B8%9A%EF%BC%8C%E9%80%9A%E8%BF%87%E8%B5%84%E5%8A%A9%E6%9C%8D%E5%8A%A1%E3%80%81%E5%88%A9%E7%9B%8A%E8%A1%A8%E8%BE%BE%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%80%A1%E5%AF%BC%EF%BC%8C%E5%B8%AE%E5%8A%A9%E6%88%91%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E6%8F%90%E9%AB%98%E8%83%BD%E5%8A%9B%EF%BC%8C%E6%94%B9%E5%96%84%E9%9D%92%E5%B0%91%E5%B9%B4%E6%88%90%E9%95%BF%E7%8E%AF%E5%A2%83%E3%80%82%0A%20%20%20%20%E6%9C%AC%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%8E%9F%E5%A7%8B%E5%9F%BA%E9%87%91%E6%95%B0%E9%A2%9D%E4%B8%BA%E4%BA%BA%E6%B0%91%E5%B8%81400%E4%B8%87%E5%85%83%EF%BC%8C%E6%9D%A5%E6%BA%90%E4%BA%8E%E7%BB%84%E7%BB%87%E5%8B%9F%E6%8D%90%E7%9A%84%E6%94%B6%E5%85%A5%EF%BC%9B%E8%87%AA%E7%84%B6%E4%BA%BA%E3%80%81%E6%B3%95%E4%BA%BA%E6%88%96%E5%85%B6%E4%BB%96%E7%BB%84%E7%BB%87%E8%87%AA%E6%84%BF%E6%8D%90%E8%B5%A0%EF%BC%9B%E6%8A%95%E8%B5%84%E6%94%B6%E7%9B%8A%E5%8F%8A%E5%85%B6%E4%BB%96%E5%90%88%E6%B3%95%E6%94%B6%E5%85%A5%E7%AD%89%E3%80%82',
'id': '134',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0def17c33574b40afd15a94455a711911d32fd4197178c4654c5a4df305fdc4000bd85afc28638df1',
'mn': 45676459,
'rk': '47',
'title': '%E5%8D%97%E6%98%8C%E5%B8%82%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 5297,
'uin': '2846180452'},
{'desc': '%E5%B9%BF%E5%B7%9E%E5%B8%82%E5%8D%8E%E4%BE%A8%E6%96%87%E5%8C%96%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E4%BA%8E1995%E5%B9%B45%E6%9C%88%E7%BB%84%E5%BB%BA%EF%BC%8C%E6%98%AF%E4%B8%80%E4%B8%AA%E5%85%B7%E6%9C%89%E5%85%AC%E5%8B%9F%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%E3%80%82%0A%E7%88%B1%E5%BF%83%E5%A6%82%E6%98%A5%E9%9B%A8%EF%BC%8C%E7%82%B9%E6%BB%B4%E6%98%AF%E6%83%85%E8%B0%8A%E3%80%82%E5%B9%BF%E5%B7%9E%E5%B8%82%E5%8D%8E%E4%BE%A8%E6%96%87%E5%8C%96%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E6%9C%AC%E7%9D%80%E2%80%9C%E5%BC%98%E6%89%AC%E4%B8%AD%E5%8D%8E%E6%B0%91%E6%97%8F%E4%BC%98%E7%A7%80%E6%96%87%E5%8C%96%E3%80%81%E7%B9%81%E8%8D%A3%E5%8D%8E%E4%BE%A8%E6%96%87%E5%8C%96%E4%BA%8B%E4%B8%9A%E2%80%9D%E6%9C%8D%E5%8A%A1%E7%90%86%E5%BF%B5%EF%BC%8C%E5%87%9D%E8%81%9A%E4%BE%A8%E5%BF%83%EF%BC%8C%E6%B1%87%E9%9B%86%E4%BE%A8%E5%8A%9B%EF%BC%8C%E9%87%87%E5%8F%96%E7%81%B5%E6%B4%BB%E5%A4%9A%E6%A0%B7%E7%9A%84%E5%BD%A2%E5%BC%8F%EF%BC%8C%E7%B2%BE%E5%BF%83%E7%BB%84%E7%BB%87%E7%AD%B9%E5%88%92%E5%90%84%E9%A1%B9%E6%B4%BB%E5%8A%A8%E3%80%82',
'id': '309',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0956c74092172b121c684d1da8e87b760be1396fd3eb62527c2d873ac8ddcec1cfcc09d966b780d0c',
'mn': 45533268,
'rk': '48',
'title': '%E5%B9%BF%E5%B7%9E%E5%B8%82%E5%8D%8E%E4%BE%A8%E6%96%87%E5%8C%96%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1582,
'uin': '1156135681'},
{'desc': '%20%20%20%20%E7%BB%84%E7%BB%87%E5%BC%80%E5%B1%95%E5%90%84%E7%B1%BB%E6%8D%90%E8%B5%A0%EF%BC%9B%E8%B5%84%E5%8A%A9%E8%B4%AB%E5%9B%B0%E5%9C%B0%E5%8C%BA%E5%92%8C%E8%B4%AB%E5%9B%B0%E7%BE%A4%E4%BC%97%EF%BC%8C%E6%94%B9%E5%96%84%E5%85%B6%E7%94%9F%E4%BA%A7%E3%80%81%E7%94%9F%E6%B4%BB%E6%9D%A1%E4%BB%B6%EF%BC%9B%E5%AE%9E%E6%96%BD%E7%81%BE%E5%90%8E%E6%95%91%E6%8F%B4%EF%BC%9B%E4%BF%83%E8%BF%9B%E6%89%B6%E8%B4%AB%E5%BC%80%E5%8F%91%E4%BA%8B%E4%B8%9A%E7%9A%84%E4%BA%A4%E6%B5%81%E5%90%88%E4%BD%9C%E3%80%82%0A%20%20%20%20%E4%B8%BA%E8%B4%AB%E5%9B%B0%E4%BA%BA%E5%A3%AB%E6%91%86%E8%84%B1%E8%B4%AB%E5%9B%B0%E9%93%BA%E8%B7%AF%EF%BC%8C%E4%B8%BA%E7%88%B1%E5%BF%83%E4%BA%BA%E5%A3%AB%E8%A1%8C%E6%89%B6%E8%B4%AB%E5%96%84%E4%B8%BE%E6%9E%B6%E6%A1%A5%E3%80%82%0A%20%20%20%20%E5%BD%93%E5%A5%89%E7%8C%AE%E7%88%B1%E5%BF%83%E6%88%90%E4%B8%BA%E6%9B%B4%E5%A4%9A%E4%BA%BA%E7%9A%84%E4%B8%80%E7%A7%8D%E4%B9%A0%E6%83%AF%EF%BC%8C%E6%88%91%E4%BB%AC%E7%9A%84%E7%A4%BE%E4%BC%9A%E4%B8%8D%E4%BB%85%E4%BC%9A%E5%87%8F%E5%B0%91%E8%B4%AB%E5%9B%B0%EF%BC%8C%E8%BF%98%E4%BC%9A%E6%9B%B4%E5%8A%A0%E5%92%8C%E8%B0%90%E3%80%82',
'id': '292',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb05988fdae02116c3b26470640f848224b3c313a8c5b591ba92e1ecde23e291e0e8355035a0204a983',
'mn': 43241381,
'rk': '49',
'title': '%E5%B1%B1%E4%B8%9C%E7%9C%81%E6%89%B6%E8%B4%AB%E5%BC%80%E5%8F%91%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3152,
'uin': '2661992241'},
{'desc': '%E5%8C%97%E4%BA%AC%E6%89%B6%E8%80%81%E5%8A%A9%E6%AE%8B%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2015%E5%B9%B49%E6%9C%88%EF%BC%8C%E6%98%AF%E5%8C%97%E4%BA%AC%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%89%B9%E5%87%86%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E4%B9%9F%E6%98%AF%E5%8C%97%E4%BA%AC%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E4%B8%BB%E7%AE%A1%E7%9A%84%E6%A8%AA%E8%B7%A8%E5%85%BB%E8%80%81%E5%8A%A9%E6%AE%8B%E9%A2%86%E5%9F%9F%E7%9A%84%E6%9C%8D%E5%8A%A1%E6%9C%BA%E6%9E%84%E3%80%82%0A%E5%8C%97%E4%BA%AC%E6%89%B6%E8%80%81%E5%8A%A9%E6%AE%8B%E5%9F%BA%E9%87%91%E4%BC%9A%E6%8E%A5%E5%8F%97%E6%B0%91%E6%94%BF%E5%B1%80%E5%A7%94%E6%89%98%EF%BC%8C%E4%BD%9C%E4%B8%BA%E6%9C%AC%E5%B8%82%E7%89%B9%E6%AE%8A%E5%AE%B6%E5%BA%AD%E8%80%81%E5%B9%B4%E4%BA%BA%E5%85%A5%E4%BD%8F%E5%85%BB%E8%80%81%E6%9C%BA%E6%9E%84%E7%9A%84%E4%BB%A3%E7%90%86%E6%9C%8D%E5%8A%A1%E6%9C%BA%E6%9E%84%EF%BC%8C%E4%BD%9C%E4%B8%BA%E2%80%9C%E4%BB%A3%E7%90%86%E5%84%BF%E5%A5%B3%E2%80%9D%E4%B8%BA%E5%8C%97%E4%BA%AC%E5%B8%82%E7%89%B9%E6%AE%8A%E5%AE%B6%E5%BA%AD%E8%80%81%E5%B9%B4%E4%BA%BA%E6%8F%90%E4%BE%9B%E5%85%A5%E4%BD%8F%E5%85%BB%E8%80%81%E6%9C%BA%E6%9E%84%E5%8F%8A%E7%B4%A7%E6%80%A5%E7%9C%8B%E7%97%85%E5%B0%B1%E5%8C%BB%E7%AD%89%E6%8B%85%E4%BF%9D%E6%9C%8D%E5%8A%A1%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E9%80%9A%E8%BF%87%E5%8B%9F%E9%9B%86%E5%9F%BA%E9%87%91%EF%BC%8C%E7%BB%84%E7%BB%87%E6%89%B6%E8%80%81%E7%88%B1%E8%80%81%E3%80%81%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E3%80%81%E6%95%91%E5%AD%A4%E5%8A%A9%E6%AE%8B%E7%AD%89%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E6%B4%BB%E5%8A%A8%EF%BC%9B%E6%8E%A5%E5%8F%97%E7%A4%BE%E4%BC%9A%E4%BC%81%E4%B8%9A%E3%80%81%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%E3%80%81%E7%88%B1%E5%BF%83%E4%BA%BA%E5%A3%AB%E7%AD%89%E7%9A%84%E6%8D%90%E8%B5%A0%EF%BC%9B%E6%8E%A8%E8%BF%9B%E5%85%BB%E8%80%81%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%E5%8F%8A%E5%85%B6%E5%AE%83%E7%A4%BE%E4%BC%9A%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%E5%8F%91%E5%B1%95%E3%80%82',
'id': '329',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0614d9a64e0d1da706ace667f065adc34284a47023259f67050c4f7e8da7bb629e33c149c5d2cfb10',
'mn': 43059189,
'rk': '50',
'title': '%E5%8C%97%E4%BA%AC%E6%89%B6%E8%80%81%E5%8A%A9%E6%AE%8B%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2295,
'uin': '2909780269'},
{'desc': '%E5%8B%9F%E6%AC%BE%E6%9D%A5%E6%BA%90%EF%BC%9A%E5%90%91%E7%A4%BE%E4%BC%9A%E5%85%AC%E5%BC%80%E5%8B%9F%E6%8D%90%E4%BC%97%E7%AD%B9%E3%80%81%E6%8E%A5%E5%8F%97%E6%8D%90%E8%B5%A0%E3%80%81%E6%94%BF%E5%BA%9C%E8%B5%84%E5%8A%A9%0A%E7%89%B9%E8%89%B2%E9%A1%B9%E7%9B%AE%EF%BC%9A%E5%A4%A7%E7%97%85%E5%8C%BB%E7%96%97%E6%95%91%E5%8A%A9%E3%80%81%E6%95%91%E7%81%BE%E6%89%B6%E8%B4%AB%E3%80%81%E5%AE%89%E8%80%81%E6%89%B6%E5%B9%BC%E3%80%81%E5%8A%A9%E6%AE%8B%E5%B8%AE%E5%9B%B0%E3%80%81%E7%8E%AF%E4%BF%9D%E5%85%AC%E7%9B%8A%0A%E6%9C%8D%E5%8A%A1%E7%89%87%E5%8C%BA%EF%BC%9A%E9%83%B4%E5%B7%9E%E5%B8%82%E8%BE%96%E5%8C%BA%0A%E6%9C%BA%E6%9E%84%E6%84%BF%E6%99%AF%EF%BC%9A%E5%8F%91%E6%8C%A5%E2%80%9C%E4%BA%92%E8%81%94%E7%BD%91%2B%E5%85%AC%E7%9B%8A%E2%80%9D%E6%8A%80%E6%9C%AF%E4%BC%98%E5%8A%BF%EF%BC%8C%E5%81%9A%E5%A4%A7%E5%81%9A%E5%BC%BA%E5%93%81%E7%89%8C%E9%A1%B9%E7%9B%AE%EF%BC%8C%E5%8A%A9%E5%8A%9B%E7%B2%BE%E5%87%86%E6%89%B6%E8%B4%AB%0A%E5%B8%8C%E6%9C%9B%E8%A7%A3%E5%86%B3%E7%9A%84%E7%A4%BE%E4%BC%9A%E9%97%AE%E9%A2%98%EF%BC%9A%E7%89%B9%E5%9B%B0%E9%87%8D%E5%A4%A7%E7%96%BE%E7%97%85%E6%82%A3%E8%80%85%E5%8C%BB%E7%96%97%E6%95%91%E5%8A%A9%EF%BC%8C%E8%B4%AB%E5%9B%B0%E8%BE%B9%E8%BF%9C%E5%B1%B1%E5%8C%BA%E5%8C%BB%E7%96%97%E6%95%99%E8%82%B2%E6%8F%B4%E5%8A%A9%0A',
'id': '301',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fa2c318c50f394ac4ff1253821ff370590fd86c0dcd36603793f438096bf18ee403bbc65188dc8a6db4e3eb1cc3ca306e',
'mn': 42733687,
'rk': '51',
'title': '%E9%83%B4%E5%B7%9E%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 4781,
'uin': '3501659195'},
{'desc': '%E6%B1%9F%E8%8B%8F%E7%9C%81%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E4%BA%8E1984%E5%B9%B4%E6%88%90%E7%AB%8B%EF%BC%8C%E6%98%AF%E6%B1%9F%E8%8B%8F%E7%9C%81%E9%A6%96%E5%AE%B6%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%8233%E5%B9%B4%E6%9D%A5%EF%BC%8C%E7%A7%89%E6%89%BF%E2%80%9C%E6%9C%8D%E5%8A%A1%E5%84%BF%E7%AB%A5%EF%BC%8C%E9%80%A0%E7%A6%8F%E5%84%BF%E7%AB%A5%E2%80%9D%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%8C%E5%85%88%E5%90%8E%E8%8D%A3%E8%8E%B7%E5%85%A8%E5%9B%BD%E5%85%88%E8%BF%9B%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E3%80%81%E6%B1%9F%E8%8B%8F%E7%9C%81%E7%A4%BA%E8%8C%83%E6%80%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E7%AD%8920%E5%A4%9A%E9%A1%B9%E8%8D%A3%E8%AA%89%EF%BC%8C%E5%9C%A8%E5%85%A8%E5%9B%BD%E4%B8%AD%E5%9F%BA%E9%80%8F%E6%98%8E%E6%8C%87%E6%95%B0%E6%8E%92%E8%A1%8C%E6%A6%9C%E4%B8%AD%E6%8C%81%E7%BB%AD%E5%B9%B6%E5%88%97%E7%AC%AC%E4%B8%80%EF%BC%8C%E8%A2%AB%E8%AF%84%E5%AE%9A%E4%B8%BA%E6%B1%9F%E8%8B%8F%E7%9C%815A%E7%BA%A7%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%EF%BC%8C%E6%98%AF%E4%B8%80%E4%B8%AA%E5%85%B7%E6%9C%89%E6%85%88%E5%96%84%E7%90%86%E6%83%B3%E5%92%8C%E6%95%AC%E4%B8%9A%E7%B2%BE%E7%A5%9E%EF%BC%8C%E5%9B%A2%E7%BB%93%E8%BF%9B%E5%8F%96%E3%80%81%E6%B1%82%E7%9C%9F%E5%8A%A1%E5%AE%9E%E7%9A%84%E5%85%AC%E7%9B%8A%E5%9B%A2%E9%98%9F%E3%80%82',
'id': '93',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F369bd30e27bbdd55be7ad6bbca02419096e8c2dc872828c82ba045e3e98bfe0b38cbdf6a94782b61',
'mn': 40091946,
'rk': '52',
'title': '%E6%B1%9F%E8%8B%8F%E7%9C%81%E5%84%BF%E7%AB%A5%E5%B0%91%E5%B9%B4%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1168,
'uin': '822997750'},
{'desc': '%E4%B8%AD%E5%9B%BD%E7%BA%A2%E5%8D%81%E5%AD%97%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E4%B8%AD%E5%9B%BD%E7%BA%A2%E5%8D%81%E5%AD%97%E6%80%BB%E4%BC%9A%E5%8F%91%E8%B5%B7%E5%B9%B6%E4%B8%BB%E7%AE%A1%E3%80%81%E7%BB%8F%E6%B0%91%E6%94%BF%E9%83%A8%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%85%B6%E5%AE%97%E6%97%A8%E6%98%AF%E5%BC%98%E6%89%AC%E4%BA%BA%E9%81%93%E3%80%81%E5%8D%9A%E7%88%B1%E3%80%81%E5%A5%89%E7%8C%AE%E7%9A%84%E7%BA%A2%E5%8D%81%E5%AD%97%E7%B2%BE%E7%A5%9E%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E6%94%B9%E5%96%84%E4%BA%BA%E7%9A%84%E7%94%9F%E5%AD%98%E4%B8%8E%E5%8F%91%E5%B1%95%E5%A2%83%E5%86%B5%EF%BC%8C%E4%BF%9D%E6%8A%A4%E4%BA%BA%E7%9A%84%E7%94%9F%E5%91%BD%E4%B8%8E%E5%81%A5%E5%BA%B7%EF%BC%8C%E4%BF%83%E8%BF%9B%E4%B8%96%E7%95%8C%E5%92%8C%E5%B9%B3%E4%B8%8E%E7%A4%BE%E4%BC%9A%E8%BF%9B%E6%AD%A5%E3%80%822008%E5%B9%B4%EF%BC%8C2013%E5%B9%B4%E8%BF%9E%E7%BB%AD%E8%8E%B7%E8%AF%84%26quot%3B5A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%26quot%3B%E3%80%82%0A',
'id': '110',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F725cf34c703e1f8978362104e079f2516559c4e7526d21ed49ef1a74652aeee716ceaa97915612c7f1510d43c01f381b',
'mn': 34489150,
'rk': '53',
'title': '%E4%B8%AD%E5%9B%BD%E7%BA%A2%E5%8D%81%E5%AD%97%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 15997,
'uin': '822993184'},
{'desc': '%E4%B8%AD%E5%9B%BD%E4%BA%BA%E5%8F%A3%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1987%E5%B9%B46%E6%9C%8810%E6%97%A5%EF%BC%8C%E6%98%AF%E7%BB%8F%E6%B0%91%E6%94%BF%E9%83%A8%E6%B3%A8%E5%86%8C%EF%BC%8C%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E9%9D%9E%E8%90%A5%E5%88%A9%E5%85%AC%E7%9B%8A%E7%BB%84%E7%BB%87%EF%BC%8C%E6%B0%91%E6%94%BF%E9%83%A8%E5%85%AC%E5%B8%83%E9%A6%96%E6%89%B9%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%9A%E5%A2%9E%E8%BF%9B%E4%BA%BA%E5%8F%A3%E7%A6%8F%E5%88%A9%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%AE%B6%E5%BA%AD%E5%B9%B8%E7%A6%8F%E3%80%82%0A',
'id': '73',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fe41c91496b21fc1333d285d7c0b04271c2ef1df3edddeb0fc2b06ac75b68721a038589e7d0c0897dfeff4a9389bef91f',
'mn': 31817285,
'rk': '54',
'title': '%E4%B8%AD%E5%9B%BD%E4%BA%BA%E5%8F%A3%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1901,
'uin': '2209297650'},
{'desc': '%E8%B4%B5%E5%B7%9E%E7%9C%81%E5%90%8C%E5%BF%83%E5%85%89%E5%BD%A9%E4%BA%8B%E4%B8%9A%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF2011%E5%B9%B48%E6%9C%88%E5%9C%A8%E8%B4%B5%E5%B7%9E%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%B3%A8%E5%86%8C%E3%80%81%E7%94%B1%E8%B4%B5%E5%B7%9E%E7%9C%81%E5%A7%94%E7%BB%9F%E6%88%98%E9%83%A8%E4%B8%BB%E7%AE%A1%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E6%88%90%E7%AB%8B%E4%B8%BB%E8%A6%81%E6%98%AF%E4%B8%BA%E4%BA%86%E4%BF%83%E8%BF%9B%E8%B4%B5%E5%B7%9E%E5%85%89%E5%BD%A9%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%8F%91%E5%B1%95%E3%80%81%E6%94%AF%E6%8C%81%E8%B4%AB%E5%9B%B0%E5%9C%B0%E5%8C%BA%E7%9A%84%E6%89%B6%E8%B4%AB%E5%BC%80%E5%8F%91%E5%B9%B6%E5%AE%9E%E6%96%BD%E5%90%8C%E5%BF%83%E5%B7%A5%E7%A8%8B%E3%80%82',
'id': '238',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3580a1d41bf1913009fee8973db9253ec282668c74d9bad6396b3e424d0273574f95296d9814d14e',
'mn': 31782195,
'rk': '55',
'title': '%E8%B4%B5%E5%B7%9E%E7%9C%81%E5%90%8C%E5%BF%83%E5%85%89%E5%BD%A9%E4%BA%8B%E4%B8%9A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3634,
'uin': '810310202'},
{'desc': '%E4%B8%AD%E5%9B%BD%E5%8F%91%E5%B1%95%E7%A0%94%E7%A9%B6%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%E6%94%AF%E6%8C%81%E6%94%BF%E7%AD%96%E7%A0%94%E7%A9%B6%E3%80%81%E4%BF%83%E8%BF%9B%E7%A7%91%E5%AD%A6%E5%86%B3%E7%AD%96%E3%80%81%E6%9C%8D%E5%8A%A1%E4%B8%AD%E5%9B%BD%E5%8F%91%E5%B1%95%E3%80%82%20%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E8%B5%84%E9%87%91%E4%B8%BB%E8%A6%81%E6%9D%A5%E6%BA%90%E4%BA%8E%E5%9B%BD%E5%86%85%E5%A4%96%E4%BC%81%E4%B8%9A%E3%80%81%E6%9C%BA%E6%9E%84%E3%80%81%E4%B8%AA%E4%BA%BA%E7%9A%84%E6%8D%90%E8%B5%A0%E5%92%8C%E8%B5%9E%E5%8A%A9%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E8%B5%84%E9%87%91%E4%B8%BB%E8%A6%81%E7%94%A8%E4%BA%8E%E6%94%AF%E6%8C%81%E5%9B%BD%E5%AE%B6%E6%94%BF%E6%B2%BB%E3%80%81%E7%BB%8F%E6%B5%8E%E3%80%81%E7%A4%BE%E4%BC%9A%E7%AD%89%E6%96%B9%E9%9D%A2%E7%9A%84%E6%94%BF%E7%AD%96%E8%AF%95%E9%AA%8C%E3%80%81%E7%A0%94%E7%A9%B6%E5%8F%8A%E5%84%BF%E7%AB%A5%E5%8F%91%E5%B1%95%E7%B1%BB%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%E3%80%82',
'id': '312',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fb0bfba802c3dd8de63641441da8da6115db575fd845327d5e6a37293864e0398215054caa5be2d40',
'mn': 27338526,
'rk': '56',
'title': '%E4%B8%AD%E5%9B%BD%E5%8F%91%E5%B1%95%E7%A0%94%E7%A9%B6%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 19043,
'uin': '242494057'},
{'desc': '%E6%88%90%E9%83%BD%E5%B8%82%E9%94%A6%E6%B1%9F%E5%8C%BA%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E7%AE%80%E7%A7%B0%E2%80%9C%E9%94%A6%E5%9F%BA%E9%87%91%E2%80%9D%EF%BC%89%E7%94%B1%E9%94%A6%E6%B1%9F%E5%8C%BA%E5%8C%BA%E5%A7%94%E5%8C%BA%E6%94%BF%E5%BA%9C%E4%B8%BB%E5%AF%BC%EF%BC%8C%E4%BA%8E2011%E5%B9%B411%E6%9C%8830%E6%97%A5%E7%BB%8F%E5%9B%9B%E5%B7%9D%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%89%B9%E5%87%86%E6%88%90%E7%AB%8B%EF%BC%8C%E4%B8%BA%E5%85%A8%E5%9B%BD%E7%AC%AC%E4%B8%80%E5%AE%B6%E5%9C%A8%E5%8C%BA%E5%8E%BF%E5%BB%BA%E7%AB%8B%E7%9A%84%E4%B8%93%E9%97%A8%E4%B8%BA%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E5%8F%91%E5%B1%95%E6%8F%90%E4%BE%9B%E6%94%AF%E6%8C%81%E7%9A%84%E5%9C%B0%E6%96%B9%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%2C%E8%B5%84%E9%87%91%E4%B8%BB%E8%A6%81%E6%9D%A5%E6%BA%90%E4%BA%8E%E6%94%BF%E5%BA%9C%E6%8B%A8%E6%AC%BE%E3%80%81%E5%B7%A5%E5%95%86%E4%BC%81%E4%B8%9A%E3%80%81%E7%A4%BE%E4%BC%9A%E4%BC%81%E4%B8%9A%E4%BB%A5%E5%8F%8A%E4%B8%AA%E4%BA%BA%E6%8D%90%E8%B5%A0%E3%80%82',
'id': '150',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F417b4996fcdd1d58ec44f308a536adb515764ff5cc8d551d046079008bcb3f196315b7ab63225defb193d528baedead6',
'mn': 27296641,
'rk': '57',
'title': '%E6%88%90%E9%83%BD%E5%B8%82%E9%94%A6%E6%B1%9F%E5%8C%BA%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 3400,
'uin': '1980642523'},
{'desc': '%20%20%20%20%E6%B2%B3%E5%8D%97%E7%9C%81%E7%88%B1%E5%BF%83%E5%8A%A9%E8%80%81%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E8%87%B4%E5%8A%9B%E4%BA%8E%E4%B8%BA%E5%85%A8%E7%9C%81%E8%80%81%E5%B9%B4%E4%BA%BA%E6%8F%90%E4%BE%9B%E7%88%B1%E5%BF%83%E5%85%BB%E8%80%81%E6%9C%8D%E5%8A%A1%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%98%AF%E7%9B%AE%E5%89%8D%E5%94%AF%E4%B8%80%E4%B8%80%E4%B8%AA%E4%B8%BA%E8%B4%AB%E5%9B%B0%E5%AE%B6%E5%BA%AD%E5%A4%B1%E8%83%BD%E8%80%81%E4%BA%BA%E6%8F%90%E4%BE%9B%E5%85%8D%E8%B4%B9%E3%80%81%E4%B8%93%E4%B8%9A%E5%85%BB%E6%8A%A4%E6%9C%8D%E5%8A%A1%E7%9A%84%E7%88%B1%E5%BF%83%E6%9C%BA%E6%9E%84%EF%BC%8C%E6%9C%AC%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E2010%E5%B9%B44%E6%9C%88%EF%BC%8C%E7%BB%8F%E6%B2%B3%E5%8D%97%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E5%AE%A1%E6%A0%B8%E4%BE%9D%E6%B3%95%E6%88%90%E7%AB%8B%EF%BC%8C%E5%8E%9F%E5%A7%8B%E5%9F%BA%E9%87%91%E6%95%B0%E9%A2%9D%E4%B8%BA%E4%BA%BA%E6%B0%91%E5%B8%81%E8%82%86%E4%BD%B0%E4%B8%87%E5%85%83%EF%BC%8C%E6%B3%95%E5%AE%9A%E4%BB%A3%E8%A1%A8%E4%BA%BA%E4%B8%BA%E5%8F%B8%E6%A1%82%E6%98%8E%EF%BC%8C%E7%99%BB%E8%AE%B0%E7%AE%A1%E7%90%86%E6%9C%BA%E5%85%B3%E5%8F%8A%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E4%B8%BA%E6%B2%B3%E5%8D%97%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E6%84%BF%E6%99%AF%E6%98%AF%E6%89%93%E9%80%A0%E5%9B%BD%E9%99%85%E4%B8%80%E6%B5%81%E7%9A%84%E5%8A%A9%E8%80%81%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%9B%E5%9F%BA%E9%87%91%E4%BC%9A%E5%AE%97%E6%97%A8%E6%98%AF%E5%85%A8%E5%BF%83%E5%85%A8%E6%84%8F%E4%B8%BA%E8%80%81%E4%BA%BA%E6%9C%8D%E5%8A%A1%EF%BC%9B%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9B%AE%E6%A0%87%E6%98%AF%E8%AE%A9%E5%A4%A9%E4%B8%8B%E8%80%81%E4%BA%BA%E4%B9%90%E4%BA%AB%E6%99%9A%E5%B9%B4%EF%BC%81',
'id': '160',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb09a83232684e49477ba260426849f6d53489254882c377921c4833fcd7c0f2e0343aab5f0fc4bb8d0',
'mn': 24767206,
'rk': '58',
'title': '%E6%B2%B3%E5%8D%97%E7%9C%81%E7%88%B1%E5%BF%83%E5%8A%A9%E8%80%81%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1010,
'uin': '2968654570'},
{'desc': '%E4%B8%AD%E5%9B%BD%E7%94%9F%E7%89%A9%E5%A4%9A%E6%A0%B7%E6%80%A7%E4%BF%9D%E6%8A%A4%E4%B8%8E%E7%BB%BF%E8%89%B2%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E7%AE%80%E7%A7%B0%E2%80%9C%E4%B8%AD%E5%9B%BD%E7%BB%BF%E5%8F%91%E4%BC%9A%E2%80%9D%E3%80%81%E2%80%9C%E7%BB%BF%E4%BC%9A%E2%80%9D%EF%BC%89%EF%BC%8C%E6%98%AF%E7%BB%8F%E5%9B%BD%E5%8A%A1%E9%99%A2%E6%89%B9%E5%87%86%E6%88%90%E7%AB%8B%EF%BC%8C%E4%B8%AD%E5%9B%BD%E7%A7%91%E5%AD%A6%E6%8A%80%E6%9C%AF%E5%8D%8F%E4%BC%9A%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%B0%91%E6%94%BF%E9%83%A8%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E7%9B%8A%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%85%A8%E5%9B%BD%E6%80%A7%E4%B8%80%E7%BA%A7%E5%AD%A6%E4%BC%9A%EF%BC%8C%E9%95%BF%E6%9C%9F%E8%87%B4%E5%8A%9B%E4%BA%8E%E7%94%9F%E7%89%A9%E5%A4%9A%E6%A0%B7%E6%80%A7%E4%BF%9D%E6%8A%A4%E4%B8%8E%E7%BB%BF%E8%89%B2%E5%8F%91%E5%B1%95%E4%BA%8B%E4%B8%9A%E3%80%821985%E5%B9%B4%EF%BC%8C%E7%94%B1%E6%97%B6%E4%BB%BB%E5%85%A8%E5%9B%BD%E6%94%BF%E5%8D%8F%E5%89%AF%E4%B8%BB%E5%B8%AD%E5%90%95%E6%AD%A3%E6%93%8D%E3%80%81%E9%92%B1%E6%98%8C%E7%85%A7%E3%80%81%E5%8C%85%E5%B0%94%E6%B1%89%E7%AD%89%E5%90%8C%E5%BF%97%E5%88%9B%E5%8A%9E%EF%BC%8C%E7%8E%B0%E4%BB%BB%E7%90%86%E4%BA%8B%E9%95%BF%E4%B8%BA%E4%B8%AD%E5%85%B1%E4%B8%AD%E5%A4%AE%E7%BB%9F%E6%88%98%E9%83%A8%E5%8E%9F%E5%89%AF%E9%83%A8%E9%95%BF%E3%80%81%E5%85%A8%E5%9B%BD%E5%B7%A5%E5%95%86%E8%81%94%E5%8E%9F%E5%85%9A%E7%BB%84%E4%B9%A6%E8%AE%B0%E8%83%A1%E5%BE%B7%E5%B9%B3%E5%90%8C%E5%BF%97%EF%BC%8C%E7%8E%AF%E4%BF%9D%E9%83%A8%E5%8E%9F%E5%89%AF%E9%83%A8%E9%95%BF%E3%80%81%E4%B8%AD%E5%A4%AE%E7%8E%AF%E4%BF%9D%E7%9D%A3%E6%9F%A5%E7%BB%84%E7%BB%84%E9%95%BF%E5%91%A8%E5%BB%BA%E5%90%8C%E5%BF%97%E6%97%A5%E5%89%8D%E8%8E%B7%E6%89%B9%E6%8B%85%E4%BB%BB%E6%88%91%E4%BC%9A%E5%89%AF%E7%90%86%E4%BA%8B%E9%95%BF%E3%80%82',
'id': '155',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F07b4d6ced6765ce0723cb136a6f30fd9a3fd16b73609de6996981ab2fde5032183c715d46f391f539bd3c42377302979',
'mn': 24374160,
'rk': '59',
'title': '%E4%B8%AD%E5%9B%BD%E7%94%9F%E7%89%A9%E5%A4%9A%E6%A0%B7%E6%80%A7%E4%BF%9D%E6%8A%A4%E4%B8%8E%E7%BB%BF%E8%89%B2%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1658,
'uin': '3156039599'},
{'desc': '%E6%88%90%E7%BE%8E%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E6%88%90%E7%AB%8B%E4%BA%8E2010%E5%B9%B410%E6%9C%88%E7%9A%84%E5%9C%B0%E6%96%B9%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E6%88%90%E7%BE%8E%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E5%AE%9A%E4%BD%8D%E4%BA%8E%E5%8F%91%E5%B1%95%E6%88%90%E4%B8%BA%E4%B8%80%E4%B8%AA%E7%A4%BE%E4%BC%9A%E6%8A%95%E8%B5%84%E5%9E%8B%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E5%85%B3%E6%B3%A8%E6%95%99%E8%82%B2%E3%80%81%E5%8C%BB%E7%96%97%E3%80%81%E6%96%87%E5%8C%96%E3%80%81%E7%8E%AF%E4%BF%9D%E7%AD%89%E5%9B%9B%E5%A4%A7%E9%A2%86%E5%9F%9F%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%8F%91%E5%B1%95%E5%B7%A5%E4%BD%9C%EF%BC%8C%E5%B9%B6%E6%94%AF%E6%8C%81%E7%9B%B8%E5%85%B3%E9%A2%86%E5%9F%9F%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%88%9B%E6%96%B0%E5%8F%8A%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%8F%91%E5%B1%95%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%85%AC%E7%9B%8A%E6%9C%89%E6%95%88%E6%80%A7%E5%92%8C%E5%BD%B1%E5%93%8D%E5%8A%9B%E3%80%82%E4%B8%BA%E4%BA%86%E6%8E%A8%E5%8A%A8%E6%B5%B7%E5%8D%97%E6%9C%AC%E5%9C%9F%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E5%8F%91%E5%B1%95%EF%BC%8C%E4%BA%8E2012%E5%B9%B411%E6%9C%88%E5%8F%91%E8%B5%B7%E6%88%90%E7%AB%8B%E6%B5%B7%E5%8D%97%E6%88%90%E7%BE%8E%E5%85%AC%E7%9B%8A%E7%A0%94%E7%A9%B6%E6%9C%8D%E5%8A%A1%E4%B8%AD%E5%BF%83%E3%80%82',
'id': '117',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0ca53eeb8ba7a495af8654ace4ab0aea3170f47017b646923fbd1098542642e54b31257c222c41d72',
'mn': 20841018,
'rk': '60',
'title': '%E6%B5%B7%E5%8D%97%E6%88%90%E7%BE%8E%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2297,
'uin': '2406668371'},
{'desc': '%E8%BF%90%E7%94%A8%E4%BA%92%E8%81%94%E7%BD%91%E6%8A%80%E6%9C%AF%E5%8F%8A%E5%BC%80%E5%B1%95%E6%89%B6%E8%B4%AB%E3%80%81%E6%95%91%E5%8A%A9%E3%80%81%E6%95%99%E8%82%B2%E7%AD%89%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E6%B4%BB%E5%8A%A8%EF%BC%9B%E6%90%AD%E5%BB%BA%E5%85%AC%E7%9B%8A%E5%B9%B3%E5%8F%B0%E5%AD%B5%E5%8C%96%E9%9D%9E%E8%90%A5%E5%88%A9%E7%BB%84%E7%BB%87%E3%80%81%E7%A4%BE%E4%BC%9A%E4%BC%81%E4%B8%9A%EF%BC%8C%E6%8E%A8%E8%BF%9B%E5%85%AC%E5%B9%B3%E8%B4%B8%E6%98%93%EF%BC%9B%E7%BB%84%E7%BB%87%E5%85%AC%E7%9B%8A%E4%B8%BB%E9%A2%98%E5%9F%B9%E8%AE%AD%E3%80%81%E8%AE%BA%E5%9D%9B%E3%80%81%E5%9B%BD%E9%99%85%E4%BA%A4%E6%B5%81%E7%AD%89%E6%B4%BB%E5%8A%A8%E3%80%82',
'id': '179',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F51676abebfa7140a3288e4e48760b8cb6e7cd13892909da2c57b4889edf39138cc656aa3f7b41e22205abc6d645849d8',
'mn': 18594969,
'rk': '61',
'title': '%E5%AE%81%E6%B3%A2%E5%B8%82%E5%96%84%E5%9B%AD%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2495,
'uin': '3281793182'},
{'desc': '%E5%AE%81%E6%B3%A2%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BA%8E1998%E5%B9%B49%E6%9C%88%EF%BC%8C%E6%98%AF%E4%BE%9D%E6%B3%95%E6%A0%B8%E5%87%86%E7%99%BB%E8%AE%B0%E7%9A%84%E5%85%AC%E7%9B%8A%E6%80%A7%E9%9D%9E%E8%90%A5%E5%88%A9%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E6%B3%95%E4%BA%BA%EF%BC%8C%E5%85%B7%E6%9C%89%E5%85%AC%E5%8B%9F%E8%B5%84%E6%A0%BC%EF%BC%8C%E4%B8%BB%E8%A6%81%E5%B8%82%E5%90%91%E4%BC%81%E4%BA%8B%E4%B8%9A%E5%8D%95%E4%BD%8D%E3%80%81%E5%B1%85%E6%B0%91%E5%AE%A3%E4%BC%A0%E6%85%88%E5%96%84%E7%90%86%E5%BF%B5%EF%BC%8C%E6%8E%A5%E5%8F%97%E7%A4%BE%E4%BC%9A%E6%8D%90%E8%B5%A0%E3%80%82%E7%9B%AE%E5%89%8D%E4%B8%BB%E8%A6%81%E6%9C%89%E2%80%9C%E6%83%85%E6%9A%96%E4%B8%87%E5%AE%B6%E2%80%9D%E3%80%81%E2%80%9C%E5%8F%8C%E7%99%BE%E5%B8%AE%E6%89%B6%E2%80%9D%E3%80%81%E2%80%9C%E5%BD%A9%E8%99%B9%E5%8A%A9%E5%AD%A6%E2%80%9D%E3%80%81%E2%80%9C%E6%85%88%E7%88%B1%E5%8A%A9%E6%AE%8B%E2%80%9D%E3%80%81%E2%80%9C%E9%98%B3%E5%85%89%E6%95%AC%E8%80%81%E2%80%9D%E7%AD%89%E5%93%81%E7%89%8C%E9%A1%B9%E7%9B%AE%E3%80%82%E6%9C%BA%E6%9E%84%E4%B8%BB%E8%A6%81%E5%9C%A8%E5%AE%81%E6%B3%A2%E5%9C%B0%E5%8C%BA%E5%BC%80%E5%B1%95%E6%85%88%E5%96%84%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%EF%BC%8C%E5%8B%9F%E9%9B%86%E5%96%84%E6%AC%BE%EF%BC%8C%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E3%80%82',
'id': '229',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F170055e73503af0fb867943c5bb661afae78d360f3c165e7adddafd09b5ae075037f4b393a5d73bd',
'mn': 17955273,
'rk': '62',
'title': '%E5%AE%81%E6%B3%A2%E5%B8%82%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 6008,
'uin': '282505623'},
{'desc': '%20%20%20%20%20%20%20%E4%B8%AD%E5%9B%BD%E9%9D%92%E5%9F%BA%E4%BC%9A%E4%BA%8E1989%E5%B9%B410%E6%9C%88%E5%8F%91%E8%B5%B7%E5%AE%9E%E6%96%BD%E5%B8%8C%E6%9C%9B%E5%B7%A5%E7%A8%8B%EF%BC%8C%E6%98%AF%E6%88%91%E5%9B%BD%E7%A4%BE%E4%BC%9A%E5%8F%82%E4%B8%8E%E6%9C%80%E5%B9%BF%E6%B3%9B%E3%80%81%E6%9C%80%E5%AF%8C%E5%BD%B1%E5%93%8D%E7%9A%84%E6%B0%91%E9%97%B4%E5%85%AC%E7%9B%8A%E4%BA%8B%E4%B8%9A%E3%80%82%E4%B8%AD%E5%9B%BD%E9%9D%92%E5%9F%BA%E4%BC%9A%E7%9A%84%E4%BD%BF%E5%91%BD%E6%98%AF%EF%BC%9A%E9%80%9A%E8%BF%87%E8%B5%84%E5%8A%A9%E6%9C%8D%E5%8A%A1%E3%80%81%E5%88%A9%E7%9B%8A%E8%A1%A8%E8%BE%BE%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%80%A1%E5%AF%BC%EF%BC%8C%E5%B8%AE%E5%8A%A9%E9%9D%92%E5%B0%91%E5%B9%B4%E6%8F%90%E9%AB%98%E8%83%BD%E5%8A%9B%EF%BC%8C%E6%94%B9%E5%96%84%E9%9D%92%E5%B0%91%E5%B9%B4%E6%88%90%E9%95%BF%E7%8E%AF%E5%A2%83%E3%80%82',
'id': '6',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb09f47705d84b0e1330d73d64ee96b51efd05610ae68996568e491440efa9b8d2ada4176bbc0b92974',
'mn': 17633764,
'rk': '63',
'title': '%E4%B8%AD%E5%9B%BD%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 5019,
'uin': '3061663924'},
{'desc': '%E4%B8%AD%E5%85%B3%E6%9D%91%E7%B2%BE%E5%87%86%E5%8C%BB%E5%AD%A6%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E8%8B%B1%E6%96%87China%20zhongguancun%20Precision%20Medicine%20science%20and%20technology%20foundation%E8%8B%B1%E6%96%87%E7%BC%A9%E5%86%99CPMF%2C%E7%BD%91%E7%AB%99%EF%BC%9A%E4%B8%AD%E5%9B%BD%E7%B2%BE%E5%87%86%E5%8C%BB%E5%AD%A6%E7%BD%91%E3%80%81%E7%BD%91%E5%9D%80%3Awww%2Ecpm010%2Eorg%2Ecn%2C%E4%B8%AD%E5%85%B3%E6%9D%91%E7%B2%BE%E5%87%86%E5%8C%BB%E5%AD%A6%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%BB%8F%E5%8C%97%E4%BA%AC%E5%B8%82%E6%94%BF%E5%BA%9C%E6%89%B9%E5%87%86%EF%BC%8C%E4%B8%AD%E5%85%B3%E6%9D%91%E5%9B%BD%E5%AE%B6%E8%87%AA%E4%B8%BB%E5%88%9B%E6%96%B0%E7%A4%BA%E8%8C%83%E5%8C%BA%E7%AE%A1%E5%A7%94%E4%BC%9A%E4%B8%BB%E7%AE%A1%EF%BC%8C%E5%9B%BD%E5%AE%B6%E6%B0%91%E6%94%BF%E9%83%A8%E6%8E%88%E6%9D%83%E5%8C%97%E4%BA%AC%E5%B8%82%E6%B0%91%E6%94%BF%E5%B1%80%E6%B3%A8%E5%86%8C%E7%99%BB%E8%AE%B0%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E5%8D%B3%E9%9D%A2%E5%90%91%E5%85%AC%E4%BC%97%E5%8B%9F%E6%8D%90%E7%9A%84%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E7%AE%80%E7%A7%B0%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%98%AF%E9%9D%A2%E5%90%91%E5%85%AC%E4%BC%97%E5%8B%9F%E6%8D%90%E7%9A%84%E5%9C%B0%E5%9F%9F%E8%8C%83%E5%9B%B4%E6%98%AF%E5%9B%BD%E5%86%85%E5%A4%96%EF%BC%89%2C%E6%98%AF%E5%9B%BD%E5%AE%B6%E2%80%9C%E5%8D%81%E4%B8%89%E4%BA%94%E2%80%9D%E7%A7%91%E6%8A%80%E8%A7%84%E5%88%92%E2%80%9C%E4%B8%AD%E5%9B%BD%E7%B2%BE%E5%87%86%E5%8C%BB%E5%AD%A6%E7%A0%94%E7%A9%B6%E8%AE%A1%E5%88%92%E2%80%9D%E7%A7%91%E6%8A%80%E9%87%8D%E7%82%B9%E9%A1%B9%E7%9B%AE%E5%8D%95%E4%BD%8D%E3%80%82%E5%85%B6%E5%AE%97%E6%97%A8%E6%98%AF%E4%BB%A5%E5%88%9B%E6%96%B0%E9%A9%B1%E5%8A%A8%E4%B8%BA%E6%8C%87%E5%AF%BC%2C%E6%8E%A8%E5%8A%A8%E7%B2%BE%E5%87%86%E5%8C%BB%E5%AD%A6%E7%A7%91%E5%AD%A6%E6%8A%80%E6%9C%AF%E5%8F%91%E5%B1%95%E4%B8%BA%E7%9B%AE%E6%A0%87%2C%E4%B8%BA%E5%BB%BA%E7%AB%8B%E7%A7%91%E5%AD%A6%E7%9A%84%E5%8C%BB%E5%AD%A6%E7%A7%91%E5%AD%A6%E4%BD%93%E7%B3%BB%E4%BD%9C%E8%B4%A1%E7%8C%AE%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E5%B0%86%E4%B8%A5%E6%A0%BC%E9%81%B5%E5%AE%88%E5%9B%BD%E5%AE%B6%E7%9A%84%E5%AE%AA%E6%B3%95%E3%80%81%E6%B3%95%E5%BE%8B%E3%80%81%E6%B3%95%E8%A7%84%E3%80%81%E8%A7%84%E7%AB%A0%E5%92%8C%E5%9B%BD%E5%AE%B6%E6%94%BF%E7%AD%96%EF%BC%8C%E4%BE%9D%E7%85%A7%E7%AB%A0%E7%A8%8B%E4%BB%8E%E4%BA%8B%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%E3%80%82',
'id': '287',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb000f52da0e48b5e03dcbdce1ce761a966fb1ebe9d8d782ad272ccd2448c5f16d72524395051cc80a1',
'mn': 17623178,
'rk': '64',
'title': '%E4%B8%AD%E5%85%B3%E6%9D%91%E7%B2%BE%E5%87%86%E5%8C%BB%E5%AD%A6%E5%9F%BA%E9%87%91%E4%BC%9A%20',
'tms': 1573,
'uin': '3548841686'},
{'desc': '%E6%B3%B0%E5%AE%89%E5%B8%82%E6%B3%B0%E5%B1%B1%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E6%B3%B0%E5%AE%89%E7%AC%AC%E4%B8%80%E5%AE%B65A%E7%BA%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%B3%A8%E5%86%8C%E8%B5%84%E9%87%91600%E4%B8%87%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%EF%BC%8C%E4%BC%A0%E9%80%92%E7%88%B1%E5%BF%83%EF%BC%8C%E5%BC%98%E6%89%AC%E6%85%88%E5%96%84%E3%80%81%E4%BF%83%E8%BF%9B%E5%92%8C%E8%B0%90%E3%80%82%20%E5%9F%BA%E9%87%91%E4%BC%9A%E6%88%90%E7%AB%8B%E4%BB%A5%E2%80%9C%E5%AD%9D%E2%80%9D%E2%80%9C%E5%AD%A6%E2%80%9D%E2%80%9C%E8%AF%9A%E2%80%9D%E2%80%9C%E5%92%8C%E2%80%9D%E2%80%9C%E5%AE%B6%E2%80%9D%E4%BA%94%E5%A4%A7%E7%90%86%E5%BF%B5%E4%B8%BA%E6%A0%B8%E5%BF%83%EF%BC%8C%E8%87%B4%E5%8A%9B%E4%BA%8E%E8%80%81%E4%BA%BA%E3%80%81%E5%84%BF%E7%AB%A5%E3%80%81%E9%9D%92%E5%B0%91%E5%B9%B4%E7%BE%A4%E4%BD%93%E5%BC%80%E5%B1%95%E5%B8%AE%E6%89%B6%E9%A1%B9%E7%9B%AE%E3%80%82',
'id': '299',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0bd2fa9e625f7619f1a6694591b57840c4d8dfa8e4aafaaf8928cf125c885fcb81429785b13a53b79',
'mn': 16374493,
'rk': '65',
'title': '%E6%B3%B0%E5%AE%89%E5%B8%82%E6%B3%B0%E5%B1%B1%E6%85%88%E5%96%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1595,
'uin': '3552817621'},
{'desc': '%E4%B8%AD%E5%9B%BD%E7%BB%BF%E5%8C%96%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%9C%A8%E6%B0%91%E6%94%BF%E9%83%A8%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E6%98%AF%E5%9B%BD%E5%AE%B6%E6%9E%97%E4%B8%9A%E5%B1%80%EF%BC%8C%E4%BA%AB%E6%9C%89%E8%81%94%E5%90%88%E5%9B%BD%E7%BB%8F%E6%B5%8E%E7%A4%BE%E4%BC%9A%E7%90%86%E4%BA%8B%E4%BC%9A%E5%92%A8%E5%95%86%E5%9C%B0%E4%BD%8D%EF%BC%8C%E3%80%8A%E6%85%88%E5%96%84%E6%B3%95%E3%80%8B%E9%A2%81%E5%B8%83%E5%90%8E%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E8%AE%A4%E5%AE%9A%E4%B8%BA%E9%A6%96%E6%89%B916%E5%AE%B6%E5%85%B7%E6%9C%89%E5%85%AC%E5%BC%80%E5%8B%9F%E6%8D%90%E8%B5%84%E6%A0%BC%E7%9A%84%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E4%B9%8B%E4%B8%80%E3%80%82',
'id': '36',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F3e28f14aa0516842b06ddbb2ce692024a0a7024623bbf0812997772d66f5113a0682bfd6e7b7b84cce3f0104c13ce3d8',
'mn': 15244058,
'rk': '66',
'title': '%E4%B8%AD%E5%9B%BD%E7%BB%BF%E5%8C%96%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2472,
'uin': '95001132'},
{'desc': '%E9%9D%92%E6%B5%B7%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E4%BA%8E1996%E5%B9%B45%E6%9C%88%E7%BB%8F%E9%9D%92%E6%B5%B7%E7%9C%81%E4%BA%BA%E6%B0%91%E6%94%BF%E5%BA%9C%E6%89%B9%E5%87%86%E4%BE%9D%E6%B3%95%E5%9C%A8%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%B3%A8%E5%86%8C%E7%99%BB%E8%AE%B0%E6%88%90%E7%AB%8B%EF%BC%8C%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E3%80%82%E5%85%B6%E8%B4%A2%E4%BA%A7%E5%92%8C%E6%94%B6%E7%9B%8A%E4%B8%8D%E4%B8%BA%E4%BB%BB%E4%BD%95%E4%B8%AA%E4%BA%BA%E8%B0%8B%E5%8F%96%E7%A7%81%E5%88%A9%EF%BC%8C%E6%85%88%E5%96%84%E8%A1%8C%E4%B8%BA%E5%92%8C%E5%96%84%E6%AC%BE%E5%96%84%E7%89%A9%E5%8F%97%E5%9B%BD%E5%AE%B6%E6%B3%95%E5%BE%8B%E4%BF%9D%E6%8A%A4%E3%80%82%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E6%8E%A5%E5%8F%97%E4%B8%AD%E5%8D%8E%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E7%9A%84%E4%B8%9A%E5%8A%A1%E6%8C%87%E5%AF%BC%EF%BC%8C%E5%90%84%E5%B7%9E%E3%80%81%E5%9C%B0%EF%BC%88%E5%B8%82%EF%BC%89%E3%80%81%E5%8E%BF%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E6%8E%A5%E5%8F%97%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A%E7%9A%84%E4%B8%9A%E5%8A%A1%E6%8C%87%E5%AF%BC%E3%80%82',
'id': '209',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fa1fbd18c11e6ad34708cca12361d744bdd47fc218edfa01651829238e5fd17bb6e81a23a1572ca3499c020655a990990',
'mn': 14666389,
'rk': '67',
'title': '%E9%9D%92%E6%B5%B7%E7%9C%81%E6%85%88%E5%96%84%E6%80%BB%E4%BC%9A',
'tms': 464,
'uin': '1467234246'},
{'desc': '%E5%B9%BF%E4%B8%9C%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%B9%BF%E4%B8%9C%E5%9B%A2%E7%9C%81%E5%A7%94%E3%80%81%E7%9C%81%E9%9D%92%E8%81%94%E3%80%81%E7%9C%81%E5%AD%A6%E8%81%94%E3%80%81%E7%9C%81%E5%B0%91%E5%B7%A5%E5%A7%94%E5%85%B1%E5%90%8C%E5%88%9B%E5%8A%9E%E7%9A%84%E3%80%81%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E7%9A%84%E7%A4%BE%E4%BC%9A%E5%9B%A2%E4%BD%93%E3%80%82%E6%88%91%E4%BB%AC%E7%9A%84%E5%AE%97%E6%97%A8%E6%98%AF%EF%BC%9A%E4%BA%89%E5%8F%96%E6%B5%B7%E5%86%85%E5%A4%96%E5%85%B3%E5%BF%83%E9%9D%92%E5%B0%91%E5%B9%B4%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%9B%A2%E4%BD%93%E3%80%81%E4%B8%AA%E4%BA%BA%E7%9A%84%E6%94%AF%E6%8C%81%E5%92%8C%E6%8D%90%E5%8A%A9%EF%BC%8C%E6%8E%A8%E5%8A%A8%E9%9D%92%E5%B0%91%E5%B9%B4%E6%95%99%E8%82%B2%E3%80%81%E7%A7%91%E6%8A%80%E3%80%81%E6%96%87%E5%8C%96%E3%80%81%E4%BD%93%E8%82%B2%E3%80%81%E5%8D%AB%E7%94%9F%E3%80%81%E7%A4%BE%E4%BC%9A%E7%A6%8F%E5%88%A9%E5%92%8C%E7%8E%AF%E5%A2%83%E4%BF%9D%E6%8A%A4%E7%AD%89%E4%BA%8B%E4%B8%9A%E7%9A%84%E5%8F%91%E5%B1%95%E3%80%82',
'id': '82',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F36d96a115cffc9cee8e0b346b64a5ba7a3d438ecbdf4952034e994e4a5dfcbe704868f32f6dc7e3db5cc9d16154a1cbd',
'mn': 14459138,
'rk': '68',
'title': '%E5%B9%BF%E4%B8%9C%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 2735,
'uin': '2495557139'},
{'desc': '%E6%B5%99%E6%B1%9F%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%85%B1%E9%9D%92%E5%9B%A2%E6%B5%99%E6%B1%9F%E7%9C%81%E5%A7%94%E4%B8%BB%E7%AE%A1%EF%BC%8C%E6%B5%99%E6%B1%9F%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E7%99%BB%E8%AE%B0%E7%AE%A1%E7%90%86%EF%BC%8C%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%845A%E7%BA%A7%E7%9C%81%E7%BA%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E4%B8%BB%E8%A6%81%E7%AD%B9%E6%AC%BE%E6%9D%A5%E6%BA%90%E6%98%AF%E5%85%AC%E4%BC%97%E6%8D%90%E8%B5%A0%E5%92%8C%E4%BC%81%E4%B8%9A%E6%8D%90%E8%B5%A0%E3%80%82%E6%B5%99%E6%B1%9F%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E8%87%AA1991%E5%B9%B4%E5%AE%9E%E6%96%BD%E2%80%9C%E5%B8%8C%E6%9C%9B%E5%B7%A5%E7%A8%8B%E2%80%9D%E3%80%81%E6%B5%99%E6%B1%9F%E7%9C%81%E5%A4%A7%E5%AD%A6%E7%94%9F%E5%8A%A9%E5%AD%A6%E8%AE%A1%E5%88%92%E3%80%81%E4%BD%8E%E6%94%B6%E5%85%A5%E5%86%9C%E6%88%B7%E9%9D%92%E5%B0%91%E5%B9%B4%E5%85%B3%E7%88%B1%E8%A1%8C%E5%8A%A8%E7%AD%89%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%E4%BB%A5%E6%9D%A5%EF%BC%8C%E9%80%9A%E8%BF%87%E5%85%A8%E7%9C%81%E5%90%84%E7%BA%A7%E5%9B%A2%E7%BB%84%E7%BB%87%E5%92%8C%E5%B8%8C%E6%9C%9B%E5%B7%A5%E7%A8%8B%E5%AE%9E%E6%96%BD%E6%9C%BA%E6%9E%84%EF%BC%8C%E5%B7%A5%E4%BD%9C%E5%8F%96%E5%BE%97%E4%BA%86%E6%98%BE%E8%91%97%E7%9A%84%E6%88%90%E6%95%88%E3%80%82%E6%88%91%E4%BB%AC%E7%9A%84%E6%9C%8D%E5%8A%A1%E5%9C%B0%E5%8C%BA%E4%B8%BB%E8%A6%81%E6%98%AF%E6%B5%99%E6%B1%9F%E7%9C%81%E5%9C%B0%E5%8C%BA%E3%80%82%E6%B5%99%E6%B1%9F%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E5%B8%8C%E6%9C%9B%E9%80%9A%E8%BF%87%E8%B5%84%E5%8A%A9%E6%9C%8D%E5%8A%A1%EF%BC%8C%E6%94%B9%E5%96%84%E9%9D%92%E5%B0%91%E5%B9%B4%E6%88%90%E9%95%BF%E7%8E%AF%E5%A2%83%E5%92%8C%E6%8F%90%E9%AB%98%E4%B8%AA%E4%BA%BA%E8%83%BD%E5%8A%9B%EF%BC%9B%E9%80%9A%E8%BF%87%E5%88%A9%E7%9B%8A%E8%A1%A8%E8%BE%BE%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%80%A1%E5%AF%BC%EF%BC%8C%E7%BB%B4%E6%8A%A4%E9%9D%92%E5%B0%91%E5%B9%B4%E5%90%88%E6%B3%95%E6%9D%83%E7%9B%8A%E5%92%8C%E4%BC%A0%E6%92%AD%E6%89%B6%E8%B4%AB%E6%B5%8E%E5%9B%B0%E5%85%AC%E7%9B%8A%E7%90%86%E5%BF%B5%E3%80%82',
'id': '130',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0fdd26dcc9f6e6624454be3a73b639ab1bf13b347e05456f8d92a6db535c9e96e1ca9e1ebbc6910e5',
'mn': 13805214,
'rk': '69',
'title': '%E6%B5%99%E6%B1%9F%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1358,
'uin': '1873846785'},
{'desc': '%E5%AE%89%E5%BE%BD%E7%9C%81%E7%BA%A2%E5%8D%81%E5%AD%97%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%AE%89%E5%BE%BD%E7%9C%81%E7%BA%A2%E5%8D%81%E5%AD%97%E4%BC%9A%E4%B8%BB%E7%AE%A1%E3%80%81%E7%BB%8F%E6%B0%91%E6%94%BF%E9%83%A8%E9%97%A8%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E8%AF%A5%E5%9F%BA%E9%87%91%E4%BC%9A%E5%A5%89%E8%A1%8C%E2%80%9C%E4%BA%BA%E9%81%93%E3%80%81%E5%8D%9A%E7%88%B1%E3%80%81%E5%A5%89%E7%8C%AE%E2%80%9D%E7%9A%84%E7%BA%A2%E5%8D%81%E5%AD%97%E7%B2%BE%E7%A5%9E%EF%BC%8C%E4%BB%A5%E5%BC%80%E5%B1%95%E4%BA%BA%E9%81%93%E6%95%91%E5%8A%A9%E5%92%8C%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%EF%BC%8C%E6%94%B9%E5%96%84%E6%9C%80%E6%98%93%E5%8F%97%E6%8D%9F%E5%AE%B3%E7%BE%A4%E4%BD%93%E7%9A%84%E7%94%9F%E5%AD%98%E7%8A%B6%E5%86%B5%E5%8F%8A%E5%85%B6%E5%8F%91%E5%B1%95%E7%8E%AF%E5%A2%83%E4%B8%BA%E5%AE%97%E6%97%A8%E3%80%82',
'id': '324',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0b9d4c75b5984a11c756f2f0743e2f9797f08fc9763d1786c40667db26aa508fe0fff6d44ceb3b7a4',
'mn': 12470897,
'rk': '70',
'title': '%E5%AE%89%E5%BE%BD%E7%9C%81%E7%BA%A2%E5%8D%81%E5%AD%97%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 927,
'uin': '2686044007'},
{'desc': '%E5%8C%97%E4%BA%AC%E5%BD%93%E4%BB%A3%E8%89%BA%E6%9C%AF%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88BCAF%EF%BC%89%E4%BD%9C%E4%B8%BA%E4%B8%AD%E5%9B%BD%E5%94%AF%E4%B8%80%E4%B8%93%E6%B3%A8%E4%BA%8E%E5%BD%93%E4%BB%A3%E4%BA%BA%E6%96%87%E8%89%BA%E6%9C%AF%E5%8F%91%E5%B1%95%E7%9A%84%E5%85%AC%E5%8B%9F%E6%80%A7%E5%9F%BA%E9%87%91%E4%BC%9A%E5%92%8C%E6%96%87%E5%8C%96%E6%99%BA%E5%BA%93%EF%BC%8C%E4%BB%A5%E2%80%9C%E5%8F%91%E7%8E%B0%E6%96%87%E5%8C%96%E5%88%9B%E6%96%B0%EF%BC%8C%E6%8E%A8%E5%8A%A8%E8%89%BA%E6%9C%AF%E5%85%AC%E7%9B%8A%E2%80%9D%E4%B8%BA%E4%BD%BF%E5%91%BD%EF%BC%8C%E9%80%9A%E8%BF%87%E5%9C%A8%E6%96%87%E5%8C%96%E5%88%9B%E6%96%B0%E3%80%81%E8%89%BA%E6%9C%AF%E5%85%AC%E7%9B%8A%E5%92%8C%E6%99%BA%E5%BA%93%E7%A0%94%E7%A9%B6%E4%B8%89%E5%A4%A7%E9%A2%86%E5%9F%9F%E5%B9%BF%E6%B3%9B%E8%80%8C%E6%9C%89%E6%B4%BB%E5%8A%9B%E7%9A%84%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%EF%BC%8C%E8%AE%A9%E6%9B%B4%E5%A4%9A%E4%BA%BA%E8%87%AA%E7%94%B1%E5%B9%B3%E7%AD%89%E5%9C%B0%E5%88%86%E4%BA%AB%E6%96%87%E5%8C%96%E8%89%BA%E6%9C%AF%EF%BC%8C%E6%9E%84%E5%BB%BA%E5%88%9B%E6%96%B0%E4%B8%AD%E5%9B%BD%E4%B8%BA%E4%B8%BB%E7%BA%BF%E7%9A%84%E4%BA%BA%E6%96%87%E5%85%AC%E6%B0%91%E7%A4%BE%E4%BC%9A%E3%80%82',
'id': '227',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0333a96b0b3ec9b5ab0e1d2840a2a86b391c070fcbe3944be0b0814172d37d5692d9208329404bacc',
'mn': 10327825,
'rk': '71',
'title': '%E5%8C%97%E4%BA%AC%E5%BD%93%E4%BB%A3%E8%89%BA%E6%9C%AF%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 448,
'uin': '3356339623'},
{'desc': '%20%E6%B9%96%E5%8D%97%E7%9C%81%E6%AE%8B%E7%96%BE%E4%BA%BA%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E4%BA%8E2008%E5%B9%B47%E6%9C%8824%E6%97%A5%E6%AD%A3%E5%BC%8F%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E6%B9%96%E5%8D%97%E7%9C%81%E6%AE%8B%E7%96%BE%E4%BA%BA%E8%81%94%E5%90%88%E4%BC%9A%E3%80%82%E5%85%B6%E5%AE%97%E6%97%A8%E6%98%AF%E9%81%B5%E7%85%A7%E5%9B%BD%E5%AE%B6%E5%92%8C%E5%9C%B0%E6%96%B9%E6%94%BF%E5%BA%9C%E6%9C%89%E5%85%B3%E6%B3%95%E5%BE%8B%E3%80%81%E6%B3%95%E8%A7%84%E5%92%8C%E6%94%BF%E7%AD%96%EF%BC%8C%E5%B9%BF%E6%B3%9B%E5%9B%A2%E7%BB%93%E3%80%81%E5%8A%A8%E5%91%98%E4%B8%80%E5%88%87%E7%A4%BE%E4%BC%9A%E5%8A%9B%E9%87%8F%EF%BC%8C%E5%85%A8%E5%BF%83%E5%85%A8%E6%84%8F%E4%B8%BA%E6%AE%8B%E7%96%BE%E4%BA%BA%E6%9C%8D%E5%8A%A1%E3%80%82%0A',
'id': '139',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb013f0dcda9efeb1eab9fc516dfed8fb2bd89b30ecb4f22387ab2cf6f87589be434445a8717d5443c2',
'mn': 9193810,
'rk': '72',
'title': '%E6%B9%96%E5%8D%97%E7%9C%81%E6%AE%8B%E7%96%BE%E4%BA%BA%E7%A6%8F%E5%88%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 97,
'uin': '3064485955'},
{'desc': '%E4%B8%8A%E6%B5%B7%E5%AE%8B%E5%BA%86%E9%BE%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%94%B1%E5%AE%8B%E5%BA%86%E9%BE%84%E5%A5%B3%E5%A3%AB%E6%89%80%E5%88%9B%E5%8A%9E%E7%9A%84%E4%B8%AD%E5%9B%BD%E7%A6%8F%E5%88%A9%E4%BC%9A%E5%8F%91%E8%B5%B7%E7%9A%84%EF%BC%8C%E4%BA%8E1986%E5%B9%B4%E6%88%90%E7%AB%8B%E7%9A%84%E4%B8%80%E5%AE%B6%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E8%87%AA%E6%88%90%E7%AB%8B%E4%BB%A5%E6%9D%A5%EF%BC%8C%E5%9F%BA%E9%87%91%E4%BC%9A%E5%A7%8B%E7%BB%88%E7%A7%89%E6%89%BF%E5%AE%8B%E5%BA%86%E9%BE%84%E5%A5%B3%E5%A3%AB%E7%9A%84%E5%85%AC%E7%9B%8A%E6%85%88%E5%96%84%E7%B2%BE%E7%A5%9E%EF%BC%8C%E7%A7%AF%E6%9E%81%E5%9B%B4%E7%BB%95%E6%95%99%E8%82%B2%E3%80%81%E6%96%87%E5%8C%96%E3%80%81%E5%8C%BB%E7%96%97%E5%8D%AB%E7%94%9F%E5%92%8C%E7%A4%BE%E4%BC%9A%E5%8F%AF%E6%8C%81%E7%BB%AD%E5%8F%91%E5%B1%95%E9%A2%86%E5%9F%9F%E5%BC%80%E5%B1%95%E5%90%84%E7%B1%BB%E5%85%AC%E7%9B%8A%E6%B4%BB%E5%8A%A8%EF%BC%8C%E4%BB%8E%E8%80%8C%E5%85%A8%E9%9D%A2%E6%94%B9%E5%96%84%E5%8F%97%E5%8A%A9%E4%BA%BA%E7%BE%A4%E7%9A%84%E7%94%9F%E5%AD%98%E7%8A%B6%E5%86%B5%E3%80%82%20%0A%0A%E4%B8%8A%E6%B5%B7%E5%AE%8B%E5%BA%86%E9%BE%84%E5%9F%BA%E9%87%91%E4%BC%9A%E6%8B%A5%E6%9C%89%E4%B8%80%E6%94%AF%E4%B8%93%E4%B8%9A%E5%8C%96%E7%9A%84%E7%AE%A1%E7%90%86%E5%92%8C%E6%89%A7%E8%A1%8C%E5%9B%A2%E9%98%9F%EF%BC%8C%E6%98%AF%E4%B8%80%E5%AE%B6%E7%A7%AF%E6%9E%81%E5%80%A1%E5%AF%BC%E9%AB%98%E6%95%88%E3%80%81%E8%A7%84%E8%8C%83%E3%80%81%E9%80%8F%E6%98%8E%E7%9A%84%E5%85%AC%E7%9B%8A%E6%9C%BA%E6%9E%84%E3%80%82%E6%9C%AC%E7%9D%80%E5%AF%B9%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E7%9A%84%E7%89%B9%E5%88%AB%E5%85%B3%E6%B3%A8%EF%BC%8C%E4%B8%8A%E6%B5%B7%E5%AE%8B%E5%BA%86%E9%BE%84%E5%9F%BA%E9%87%91%E4%BC%9A%E5%B7%B2%E7%BB%8F%E5%9C%A8%E5%A6%87%E5%B9%BC%E4%BF%9D%E9%94%AE%E3%80%81%E5%8A%A9%E5%AD%A6%E5%8A%A9%E6%95%99%E3%80%81%E5%84%BF%E7%AB%A5%E6%96%87%E5%8C%96%E7%AD%89%E6%96%B9%E9%9D%A2%E8%AE%BE%E7%AB%8B%E4%BA%86%E5%A4%9A%E4%B8%AA%E9%A1%B9%E7%9B%AE%E5%9F%BA%E9%87%91%EF%BC%8C%E8%B6%B3%E8%BF%B9%E9%81%8D%E5%B8%83%E5%85%A8%E5%9B%BD%E5%90%84%E5%A4%A7%E7%9C%81%E5%B8%82%E8%87%AA%E6%B2%BB%E5%8C%BA%E3%80%82%E5%8F%98%E9%9D%A9%E5%8F%91%E5%B1%95%E4%B8%AD%E7%9A%84%E4%B8%8A%E6%B5%B7%E5%AE%8B%E5%BA%86%E9%BE%84%E5%9F%BA%E9%87%91%E4%BC%9A%E4%B9%9F%E6%98%AF%E4%B8%80%E5%AE%B6%E5%85%B7%E6%9C%89%E5%9B%BD%E9%99%85%E8%A7%86%E9%87%8E%E7%9A%84%E5%85%AC%E7%9B%8A%E6%9C%BA%E6%9E%84%EF%BC%8C%E5%85%B6%E6%9C%89%E6%95%88%E5%80%9F%E5%8A%A9%E8%87%AA%E8%BA%AB%E4%BC%98%E5%8A%BF%EF%BC%8C%E7%AB%8B%E8%B6%B3%E4%B8%AD%E5%9B%BD%EF%BC%8C%E6%94%BE%E7%9C%BC%E5%85%A8%E7%90%83%EF%BC%8C%E7%A7%AF%E6%9E%81%E6%8B%93%E5%B1%95%E5%9B%BD%E9%99%85%E9%97%B4%E7%9A%84%E9%A1%B9%E7%9B%AE%E5%90%88%E4%BD%9C%E4%B8%8E%E4%BA%A4%E6%B5%81%E3%80%82%E5%90%8C%E6%97%B6%EF%BC%8C%E5%8A%AA%E5%8A%9B%E5%8A%A0%E5%BC%BA%E5%92%8C%E6%8E%A8%E5%8A%A8%E8%B7%A8%E7%95%8C%E5%90%88%E4%BD%9C%EF%BC%8C%E9%80%9A%E8%BF%87%E4%B8%8D%E6%96%AD%E6%8E%A2%E7%B4%A2%E5%92%8C%E5%AE%9E%E8%B7%B5%EF%BC%8C%E8%87%B4%E5%8A%9B%E8%AE%A9%E5%AE%8B%E5%BA%86%E9%BE%84%E5%A5%B3%E5%A3%AB%E7%9A%84%E7%88%B1%E5%BF%83%E4%BA%8B%E4%B8%9A%E6%83%A0%E5%8F%8A%E6%9B%B4%E5%A4%9A%E4%BA%BA%E7%BE%A4%EF%BC%8C%E6%9C%80%E7%BB%88%E6%8E%A8%E5%8A%A8%E7%A4%BE%E4%BC%9A%E7%9A%84%E5%85%A8%E9%9D%A2%E5%8F%91%E5%B1%95%E3%80%82',
'id': '147',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb070a7ba415c342b47a82677522af09123d8242156cc6de96c02cc935f37b93c1463df54afbf90af90',
'mn': 9191871,
'rk': '73',
'title': '%E4%B8%8A%E6%B5%B7%E5%AE%8B%E5%BA%86%E9%BE%84%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1117,
'uin': '2209931292'},
{'desc': '%E4%BA%91%E5%8D%97%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%98%AF%E7%94%B1%E5%85%B1%E9%9D%92%E5%9B%A2%E4%BA%91%E5%8D%97%E7%9C%81%E5%A7%94%E3%80%81%E7%9C%81%E9%9D%92%E8%81%94%E3%80%81%E7%9C%81%E5%AD%A6%E8%81%94%E3%80%81%E7%9C%81%E5%B0%91%E5%B7%A5%E5%A7%94%E5%85%B1%E5%90%8C%E5%8F%91%E8%B5%B7%EF%BC%8C%E7%BB%8F%E4%BA%91%E5%8D%97%E7%9C%81%E4%BA%BA%E6%B0%91%E6%94%BF%E5%BA%9C%E6%89%B9%E5%87%86%E4%BA%8E1994%E5%B9%B4%E6%88%90%E7%AB%8B%EF%BC%8C%E5%9C%A8%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%EF%BC%8C%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%845A%E7%BA%A7%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%20%EF%BC%88NPO%EF%BC%89%E5%9C%B0%E6%96%B9%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%B8%8C%E6%9C%9B%E5%B7%A5%E7%A8%8B%E6%98%AF%E7%BB%8F%E5%9B%BD%E5%AE%B6%E5%B7%A5%E5%95%86%E5%B1%80%E6%B3%A8%E5%86%8C%E7%9A%84%E9%A6%96%E4%BE%8B%E5%85%AC%E7%9B%8A%E6%9C%8D%E5%8A%A1%E5%95%86%E6%A0%87%EF%BC%8C%E4%BA%91%E5%8D%97%E9%9D%92%E5%9F%BA%E4%BC%9A%E6%98%AF%E7%BB%8F%E4%B8%AD%E5%9B%BD%E9%9D%92%E5%9F%BA%E4%BC%9A%E5%85%AC%E7%9B%8A%E5%95%86%E6%A0%87%E6%8E%88%E6%9D%83%E5%9C%A8%E4%BA%91%E5%8D%97%E7%9C%81%E5%A2%83%E5%86%85%E6%8E%A5%E5%8F%97%E5%B8%8C%E6%9C%9B%E5%B7%A5%E7%A8%8B%E6%8D%90%E6%AC%BE%E7%9A%84%E5%94%AF%E4%B8%80%E5%90%88%E6%B3%95%E6%9C%BA%E6%9E%84%E3%80%82%E3%80%80%0A%0A%20%20%20%20%E4%BA%91%E5%8D%97%E9%9D%92%E5%9F%BA%E4%BC%9A%E5%86%85%E8%AE%BE%E7%A7%98%E4%B9%A6%E5%A4%84%E5%8A%9E%E5%85%AC%E5%AE%A4%E3%80%81%E9%A1%B9%E7%9B%AE%E7%AE%A1%E7%90%86%E9%83%A8%E3%80%81%E9%A1%B9%E7%9B%AE%E6%8B%93%E5%B1%95%E9%83%A8%E3%80%81%E7%9B%91%E5%AF%9F%E5%A7%94%E5%8A%9E%E5%85%AC%E5%AE%A4%E7%AD%89%E9%83%A8%E9%97%A8%0A%0A%E3%80%80%E4%BA%91%E5%8D%97%E9%9D%92%E5%9F%BA%E4%BC%9A%E7%9A%84%E6%9C%BA%E6%9E%84%E7%90%86%E5%BF%B5%EF%BC%9A%E2%80%9C%E4%BB%A5%E4%BA%BA%E4%B8%BA%E6%9C%AC%EF%BC%8C%E4%BB%A5%E6%96%87%E5%8C%96%E4%BA%BA%EF%BC%9B%E4%BC%A0%E9%80%92%E7%88%B1%E5%BF%83%EF%BC%8C%E5%8A%A9%E4%BA%BA%E8%87%AA%E5%8A%A9%EF%BC%9B%E5%88%9B%E9%80%A0%E8%BF%9B%E5%8F%96%EF%BC%8C%E8%BF%BD%E6%B1%82%E5%8D%93%E8%B6%8A%E2%80%9D%E3%80%82%0A',
'id': '112',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2Fd13c25b7d92690446d0edcf8d97022d833d776af3da8d4a7083306d4263b9fdc05a6f4dfd3f5231622bf1a4eb7d7d24f',
'mn': 8779047,
'rk': '74',
'title': '%E4%BA%91%E5%8D%97%E7%9C%81%E9%9D%92%E5%B0%91%E5%B9%B4%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1202,
'uin': '1919209582'},
{'desc': '%E4%B8%AD%E5%9B%BD%E4%B8%8B%E4%B8%80%E4%BB%A3%E6%95%99%E8%82%B2%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E6%98%AF2010%E5%B9%B47%E6%9C%889%E6%97%A5%E7%BB%8F%E5%9B%BD%E5%8A%A1%E9%99%A2%E6%89%B9%E5%87%86%EF%BC%8C%E5%9C%A8%E6%B0%91%E6%94%BF%E9%83%A8%E6%AD%A3%E5%BC%8F%E7%99%BB%E8%AE%B0%E6%B3%A8%E5%86%8C%E7%9A%84%E5%85%A8%E5%9B%BD%E6%80%A7%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%8F%91%E8%B5%B7%E5%8D%95%E4%BD%8D%E4%B8%AD%E5%9B%BD%E5%85%B3%E5%BF%83%E4%B8%8B%E4%B8%80%E4%BB%A3%E5%B7%A5%E4%BD%9C%E5%A7%94%E5%91%98%E4%BC%9A%EF%BC%8C%E4%B8%9A%E5%8A%A1%E4%B8%BB%E7%AE%A1%E5%8D%95%E4%BD%8D%E4%B8%AD%E5%8D%8E%E4%BA%BA%E6%B0%91%E5%85%B1%E5%92%8C%E5%9B%BD%E6%95%99%E8%82%B2%E9%83%A8%E3%80%82%E6%97%A8%E5%9C%A8%E9%80%9A%E8%BF%87%E7%A4%BE%E4%BC%9A%E5%80%A1%E5%AF%BC%E3%80%81%E5%8B%9F%E9%9B%86%E8%B5%84%E9%87%91%E3%80%81%E6%95%99%E8%82%B2%E5%9F%B9%E8%AE%AD%E3%80%81%E6%95%91%E5%8A%A9%E8%B5%84%E5%8A%A9%E3%80%81%E5%BC%80%E5%8F%91%E6%9C%8D%E5%8A%A1%E7%AD%89%E6%96%B9%E5%BC%8F%EF%BC%8C%E9%85%8D%E5%90%88%E6%94%BF%E5%BA%9C%E6%8E%A8%E5%8A%A8%E6%88%91%E5%9B%BD%E4%B8%8B%E4%B8%80%E4%BB%A3%E6%95%99%E8%82%B2%E4%BA%8B%E4%B8%9A%E7%9A%84%E7%A7%91%E5%AD%A6%E5%8F%91%E5%B1%95%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E8%87%AA%E6%88%90%E7%AB%8B%E4%BB%A5%E6%9D%A5%EF%BC%8C%E9%9D%A2%E5%90%91%E8%A5%BF%E9%83%A8%E3%80%81%E5%86%9C%E6%9D%91%E3%80%81%E6%B0%91%E6%97%8F%E5%9C%B0%E5%8C%BA%E5%92%8C%E9%9D%A9%E5%91%BD%E8%80%81%E5%8C%BA%EF%BC%8C%E7%B4%A7%E7%B4%A7%E5%9B%B4%E7%BB%95%E5%AD%A6%E5%89%8D%E6%95%99%E8%82%B2%E3%80%81%E6%A0%A1%E5%A4%96%E6%95%99%E8%82%B2%E5%92%8C%E5%AE%B6%E5%BA%AD%E6%95%99%E8%82%B2%E4%B8%89%E5%A4%A7%E9%A2%86%E5%9F%9F%E5%8F%8A%E7%95%99%E5%AE%88%E5%84%BF%E7%AB%A5%E6%95%99%E8%82%B2%E5%B8%AE%E6%89%B6%E8%A1%8C%E5%8A%A8%E3%80%81%E7%BA%A2%E7%83%9B%E8%A1%8C%E5%8A%A8%E3%80%81%E8%82%B2%E5%BE%B7%E8%A1%8C%E5%8A%A8%E3%80%81%E5%9C%86%E6%A2%A6%E8%A1%8C%E5%8A%A8%E5%9B%9B%E4%B8%AA%E6%96%B9%E9%9D%A2%EF%BC%8C%E4%BE%9D%E6%89%98%E5%90%84%E7%9C%81%E5%85%B3%E5%BF%83%E4%B8%8B%E4%B8%80%E4%BB%A3%E5%B7%A5%E4%BD%9C%E5%A7%94%E5%91%98%E4%BC%9A%E5%8F%8A%E6%95%99%E8%82%B2%E7%B3%BB%E7%BB%9F%E5%85%B3%E5%BF%83%E4%B8%8B%E4%B8%80%E4%BB%A3%E5%B7%A5%E4%BD%9C%E5%A7%94%E5%91%98%E4%BC%9A%EF%BC%8C%E5%BC%80%E5%B1%95%E4%BA%86%E4%B8%80%E7%B3%BB%E5%88%97%E5%90%84%E7%BA%A7%E6%94%BF%E5%BA%9C%E9%87%8D%E8%A7%86%E3%80%81%E6%95%99%E8%82%B2%E9%83%A8%E9%97%A8%E6%94%AF%E6%8C%81%E3%80%81%E5%BC%B1%E5%8A%BF%E7%BE%A4%E4%BD%93%E9%9C%80%E6%B1%82%E3%80%81%E7%A4%BE%E4%BC%9A%E5%B9%BF%E6%B3%9B%E5%85%B3%E6%B3%A8%E7%9A%84%E5%85%AC%E7%9B%8A%E9%A1%B9%E7%9B%AE%E5%8F%8A%E6%B4%BB%E5%8A%A8%EF%BC%8C%E9%A1%B9%E7%9B%AE%E5%8F%97%E7%9B%8A%E5%9C%B0%E5%8C%BA%E9%81%8D%E5%B8%83%E5%85%A8%E5%9B%BD31%E4%B8%AA%E7%9C%81%E5%8C%BA%E5%B8%82%E3%80%822014%E5%B9%B4%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E8%AF%84%E4%B8%BA4A%E7%BA%A7%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%9B2015%E5%B9%B4%2C%E7%BB%8F%E6%95%99%E8%82%B2%E9%83%A8%E6%8E%A8%E8%8D%90%EF%BC%8C%E8%A2%AB%E6%B0%91%E6%94%BF%E9%83%A8%E6%8E%88%E4%BA%88%E5%85%A8%E5%9B%BD%E5%85%88%E8%BF%9B%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E7%A7%B0%E5%8F%B7%E3%80%82',
'id': '118',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb04c9f135e5f4e3f7e0f563269e2083c1d33d18a2cee147018776249f99892b6dea48cb14360e9c2b8',
'mn': 7984107,
'rk': '75',
'title': '%E4%B8%AD%E5%9B%BD%E4%B8%8B%E4%B8%80%E4%BB%A3%E6%95%99%E8%82%B2%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1943,
'uin': '2796651737'},
{'desc': '%E4%B8%AD%E5%9B%BD%E6%B3%95%E5%BE%8B%E6%8F%B4%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E5%9C%A8%E6%B0%91%E6%94%BF%E9%83%A8%E4%BE%9D%E6%B3%95%E7%99%BB%E8%AE%B0%E6%88%90%E7%AB%8B%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%AE%97%E6%97%A8%E6%98%AF%E4%BF%9D%E9%9A%9C%E5%85%A8%E4%BD%93%E5%85%AC%E6%B0%91%E4%BA%AB%E5%8F%97%E5%B9%B3%E7%AD%89%E7%9A%84%E5%8F%B8%E6%B3%95%E4%BF%9D%E6%8A%A4%EF%BC%8C%E7%BB%B4%E6%8A%A4%E6%B3%95%E5%BE%8B%E8%B5%8B%E4%BA%88%E5%85%AC%E6%B0%91%E7%9A%84%E5%9F%BA%E6%9C%AC%E6%9D%83%E5%88%A9%E3%80%82%E4%B8%BB%E8%A6%81%E4%BB%BB%E5%8A%A1%E6%98%AF%E5%8B%9F%E9%9B%86%E6%B3%95%E5%BE%8B%E6%8F%B4%E5%8A%A9%E8%B5%84%E9%87%91%EF%BC%8C%E4%B8%BA%E5%AE%9E%E6%96%BD%E6%B3%95%E5%BE%8B%E6%8F%B4%E5%8A%A9%E6%8F%90%E4%BE%9B%E7%89%A9%E8%B4%A8%E6%94%AF%E6%8C%81%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%8F%B8%E6%B3%95%E5%85%AC%E6%AD%A3%EF%BC%8C%E7%BB%B4%E6%8A%A4%E7%A4%BE%E4%BC%9A%E5%85%AC%E5%B9%B3%E4%B8%8E%E6%AD%A3%E4%B9%89%E3%80%82%E8%BF%91%E5%B9%B4%E6%9D%A5%EF%BC%8C%E5%9C%A8%E5%8F%B8%E6%B3%95%E9%83%A8%E5%92%8C%E4%B8%AD%E5%9B%BD%E6%B3%95%E5%BE%8B%E6%8F%B4%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E7%90%86%E4%BA%8B%E4%BC%9A%E7%9A%84%E9%A2%86%E5%AF%BC%E4%B8%8B%EF%BC%8C%E6%A0%B9%E6%8D%AE%E4%B8%8D%E5%90%8C%E4%BA%BA%E7%BE%A4%E5%92%8C%E5%9C%B0%E5%8C%BA%E7%9A%84%E9%9C%80%E8%A6%81%EF%BC%8C%E5%85%88%E5%90%8E%E8%AE%BE%E7%AB%8B%E4%BA%8622%E4%B8%AA%E4%B8%93%E9%A1%B9%E5%9F%BA%E9%87%91%E6%94%AF%E6%8C%81%E6%B3%95%E5%BE%8B%E6%8F%B4%E5%8A%A9%E5%B7%A5%E4%BD%9C%E7%9A%84%E5%BC%80%E5%B1%95%E3%80%82',
'id': '326',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0549fbe193d04ef89b22d1346ae6f051d1cc3b1c2247dc387c3fe3112abb9e8058118d3369ba6838b',
'mn': 7781773,
'rk': '76',
'title': '%E4%B8%AD%E5%9B%BD%E6%B3%95%E5%BE%8B%E6%8F%B4%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 981,
'uin': '3245553409'},
{'desc': '%E5%AE%89%E5%BE%BD%E7%9C%81%E5%87%BA%E7%94%9F%E7%BC%BA%E9%99%B7%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A%E6%98%AF%E7%BB%8F%E5%AE%89%E5%BE%BD%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%E6%89%B9%E5%87%86%E6%88%90%E7%AB%8B%EF%BC%8C%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E8%B5%84%E6%A0%BC%E3%80%81%E9%9D%9E%E8%90%A5%E5%88%A9%E6%80%A7%E7%9A%84%E5%85%AC%E5%8B%9F%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%85%B6%E5%AE%97%E6%97%A8%E6%98%AF%E2%80%9C%E5%87%8F%E5%B0%91%E5%87%BA%E7%94%9F%E7%BC%BA%E9%99%B7%E4%BA%BA%E5%8F%A3%E6%AF%94%E7%8E%87%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%87%BA%E7%94%9F%E7%BC%BA%E9%99%B7%E6%82%A3%E8%80%85%E5%BA%B7%E5%A4%8D%EF%BC%8C%E6%8F%90%E9%AB%98%E6%95%91%E5%8A%A9%E5%AF%B9%E8%B1%A1%E7%94%9F%E6%B4%BB%E8%B4%A8%E9%87%8F%E3%80%82%E4%B8%BB%E8%A6%81%E6%94%AF%E6%8C%81%E4%B8%8E%E5%87%BA%E7%94%9F%E7%BC%BA%E9%99%B7%E6%9C%89%E5%85%B3%E7%9A%84%E4%BA%A4%E6%B5%81%E5%9F%B9%E8%AE%AD%E3%80%81%E5%AE%A3%E4%BC%A0%E5%80%A1%E5%AF%BC%E3%80%81%E6%A3%80%E6%B5%8B%E3%80%81%E8%AF%8A%E6%96%AD%E3%80%81%E6%B2%BB%E7%96%97%E3%80%81%E6%95%91%E5%8A%A9%E7%AD%89%E5%B7%A5%E4%BD%9C%E3%80%82',
'id': '275',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb069d19a464680b8615ad578a1c80b27f8ebc1540752c76bf529826039ad31411aebdbad10928ae94c',
'mn': 7744580,
'rk': '77',
'title': '%E5%AE%89%E5%BE%BD%E7%9C%81%E5%87%BA%E7%94%9F%E7%BC%BA%E9%99%B7%E6%95%91%E5%8A%A9%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 203,
'uin': '2913444943'},
{'desc': '%E4%B8%AD%E5%9B%BD%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88China%20Foundation%20For%20Cultural%20Heritage%20Conservation%EF%BC%8C%E7%BC%A9%E5%86%99%EF%BC%9ACFCHC%EF%BC%89%E5%88%9B%E7%AB%8B%E4%BA%8E1990%E5%B9%B4%EF%BC%8C%E6%98%AF%E7%BB%8F%E4%B8%AD%E5%8D%8E%E4%BA%BA%E6%B0%91%E5%85%B1%E5%92%8C%E5%9B%BD%E6%B0%91%E6%94%BF%E9%83%A8%E6%89%B9%E5%87%86%E3%80%81%E7%94%B1%E5%9B%BD%E5%AE%B6%E6%96%87%E7%89%A9%E5%B1%80%E4%B8%BB%E7%AE%A1%E7%9A%84%E5%85%B7%E6%9C%89%E7%8B%AC%E7%AB%8B%E6%B3%95%E4%BA%BA%E5%9C%B0%E4%BD%8D%E7%9A%84%E5%85%A8%E5%9B%BD%E5%85%AC%E5%8B%9F%E6%80%A7%E5%85%AC%E7%9B%8A%E5%9F%BA%E9%87%91%E7%BB%84%E7%BB%87%E3%80%82%E4%B8%AD%E5%9B%BD%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E5%9F%BA%E9%87%91%E4%BC%9A%E7%A7%89%E6%8C%81%E2%80%9C%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E5%85%A8%E6%B0%91%E5%8F%82%E4%B8%8E%E3%80%81%E4%BF%9D%E6%8A%A4%E6%88%90%E6%9E%9C%E5%85%A8%E6%B0%91%E5%85%B1%E4%BA%AB%E2%80%9D%E7%9A%84%E5%8F%91%E5%B1%95%E7%90%86%E5%BF%B5%EF%BC%8C%E8%B5%84%E5%8A%A9%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E4%BF%AE%E7%BC%AE%EF%BC%8C%E4%BF%83%E8%BF%9B%E6%96%87%E7%89%A9%E5%90%88%E7%90%86%E5%88%A9%E7%94%A8%EF%BC%9B%E5%BC%80%E5%B1%95%E6%96%87%E7%89%A9%E4%BB%B7%E5%80%BC%E7%9A%84%E7%A0%94%E7%A9%B6%E4%B8%8E%E4%BC%A0%E6%92%AD%EF%BC%8C%E6%8E%A8%E8%BF%9B%E6%96%87%E7%89%A9%E9%A2%86%E5%9F%9F%E7%9A%84%E5%85%AC%E5%85%B1%E6%96%87%E5%8C%96%E6%9C%8D%E5%8A%A1%EF%BC%9B%E8%81%94%E7%BB%9C%E5%85%A8%E5%9B%BD%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E5%92%8C%E5%BF%97%E6%84%BF%E8%80%85%EF%BC%8C%E6%8E%A8%E8%BF%9B%E7%A4%BE%E4%BC%9A%E5%8A%9B%E9%87%8F%E5%B9%BF%E6%B3%9B%E5%8F%82%E4%B8%8E%EF%BC%8C%E4%B8%BA%E5%8A%AA%E5%8A%9B%E8%B5%B0%E5%87%BA%E4%B8%80%E6%9D%A1%E7%AC%A6%E5%90%88%E5%9B%BD%E6%83%85%E7%9A%84%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E5%88%A9%E7%94%A8%E4%B9%8B%E8%B7%AF%E5%81%9A%E5%87%BA%E5%BA%94%E6%9C%89%E8%B4%A1%E7%8C%AE%E3%80%82%0A',
'id': '273',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F6a335576a1d92cb0b8e38dab23f8a497d91c2dfa19b87b87a9be33fb333cd096bdceeac815aa61c2c894ef509f521eb8',
'mn': 7539369,
'rk': '78',
'title': '%E4%B8%AD%E5%9B%BD%E6%96%87%E7%89%A9%E4%BF%9D%E6%8A%A4%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 54554,
'uin': '3312036814'},
{'desc': '%E8%87%AA2004%E5%B9%B4%E4%BB%A5%E6%9D%A5%EF%BC%8C%E5%A4%A9%E6%B4%A5%E5%B8%82%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E5%9B%B4%E7%BB%95%E2%80%9C%E5%85%B3%E7%88%B1%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%EF%BC%8C%E4%BF%83%E8%BF%9B%E5%85%A8%E9%9D%A2%E5%8F%91%E5%B1%95%E2%80%9D%E7%9A%84%E5%AE%97%E6%97%A8%EF%BC%8C%E9%80%9A%E8%BF%87%E7%A4%BE%E4%BC%9A%E5%8C%96%E8%BF%90%E4%BD%9C%E6%96%B9%E5%BC%8F%EF%BC%8C%E4%BB%A5%E2%80%9C%E5%9B%B0%E9%9A%BE%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E6%95%91%E5%8A%A9%E2%80%9D%E3%80%81%E2%80%9C%E4%BB%8A%E6%99%9A%E5%8A%A9%E5%AD%A6%E2%80%9D%E3%80%81%E2%80%9C%E5%81%A5%E5%BA%B7%E4%B8%8E%E6%88%91%E5%90%8C%E8%A1%8C%E2%80%9D%E7%AD%89%E6%B4%BB%E5%8A%A8%E4%B8%BA%E8%BD%BD%E4%BD%93%EF%BC%8C%E6%95%91%E5%8A%A9%E4%BA%86%E5%9B%B0%E9%9A%BE%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A510%E4%B8%87%E4%BD%99%E4%BA%BA%E6%AC%A1%EF%BC%8C%E5%BD%A2%E6%88%90%E4%BA%86%E2%80%9C%E9%98%B3%E5%85%89%E5%85%B3%E7%88%B1%E2%80%9D%E7%9A%84%E4%B8%80%E5%A4%A7%E6%95%91%E5%8A%A9%E5%93%81%E7%89%8C%E3%80%82%E7%89%B9%E5%88%AB%E6%98%AF2007%E5%B9%B4%E5%9C%A8%E5%B8%82%E8%B4%A2%E6%94%BF%E8%B5%84%E9%87%91%E6%94%AF%E6%8C%81%E4%B8%8B%EF%BC%8C%E5%BB%BA%E7%AB%8B%E4%BA%86%E2%80%9C%E5%8D%95%E4%BA%B2%E5%9B%B0%E9%9A%BE%E6%AF%8D%E4%BA%B2%E6%95%91%E5%8A%A9%E4%B8%93%E9%A1%B9%E5%9F%BA%E9%87%91%E2%80%9D%E5%90%8E%EF%BC%8C%E5%9C%A8%E5%8D%95%E4%BA%B2%E5%9B%B0%E9%9A%BE%E6%AF%8D%E4%BA%B2%E7%9A%84%E6%95%91%E5%8A%A9%E6%96%B9%E5%BC%8F%E4%B8%8E%E6%95%91%E5%8A%A9%E5%86%85%E5%AE%B9%E4%B8%8A%E5%A4%A7%E8%83%86%E6%8E%A2%E7%B4%A2%EF%BC%8C%E5%BD%A2%E6%88%90%E4%BA%86%E7%94%9F%E6%B4%BB%E6%95%91%E5%8A%A9%E4%B8%8E%E7%94%9F%E4%BA%A7%E5%8F%91%E5%B1%95%E7%9B%B8%E7%BB%93%E5%90%88%EF%BC%8C%E7%89%A9%E8%B4%A8%E6%95%91%E5%8A%A9%E4%B8%8E%E5%BF%83%E7%90%86%E6%8A%9A%E6%85%B0%E7%9B%B8%E7%BB%93%E5%90%88%EF%BC%8C%E5%81%A5%E5%BA%B7%E4%BF%9D%E9%9A%9C%E4%B8%8E%E5%A4%A7%E7%97%85%E6%95%91%E5%8A%A9%E7%9B%B8%E7%BB%93%E5%90%88%E7%9A%84%E6%9C%89%E6%9C%BA%E8%BF%90%E4%BD%9C%E6%A8%A1%E5%BC%8F%E3%80%82%E5%9C%A8%E8%B4%A2%E6%94%BF%E4%B8%93%E9%A1%B9%E8%B5%84%E9%87%91%E7%9A%84%E6%94%AF%E6%8C%81%E4%B8%8B%EF%BC%8C%E5%A4%9A%E6%8E%AA%E5%B9%B6%E4%B8%BE%E5%B9%BF%E6%B3%9B%E5%90%B8%E7%BA%B3%E7%A4%BE%E4%BC%9A%E8%B5%84%E9%87%91%EF%BC%8C%E4%B8%8D%E6%96%AD%E6%89%A9%E5%A4%A7%E5%8F%97%E5%8A%A9%E7%BE%A4%E4%BD%93%EF%BC%8C%E5%AE%8C%E5%96%84%E6%95%91%E5%8A%A9%E6%9C%BA%E5%88%B6%EF%BC%8C%E5%9C%A8%E7%A4%BE%E4%BC%9A%E4%B8%8A%E5%BC%95%E8%B5%B7%E4%BA%86%E8%89%AF%E5%A5%BD%E5%8F%8D%E5%93%8D%EF%BC%8C%E5%A4%9A%E6%AC%A1%E8%A2%AB%E4%B8%AD%E5%A4%AE%E5%92%8C%E5%9C%B0%E6%96%B9%E5%AA%92%E4%BD%93%E5%AE%A3%E4%BC%A0%EF%BC%8C%E5%9C%A8%E4%BF%83%E8%BF%9B%E5%A4%A9%E6%B4%A5%E7%9A%84%E5%92%8C%E8%B0%90%E7%A8%B3%E5%AE%9A%E4%B8%AD%E8%B5%B7%E5%88%B0%E4%BA%86%E9%87%8D%E8%A6%81%E4%BD%9C%E7%94%A8%E3%80%822008%E5%B9%B4%EF%BC%8C%E5%A4%A9%E6%B4%A5%E5%B8%82%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A%E8%8E%B7%E5%9B%BD%E5%8A%A1%E9%99%A2%E6%89%B6%E8%B4%AB%E5%BC%80%E5%8F%91%E9%A2%86%E5%AF%BC%E5%B0%8F%E7%BB%84%E9%A2%81%E5%8F%91%E7%9A%84%E2%80%9C%E5%85%A8%E5%9B%BD%E4%B8%9C%E8%A5%BF%E6%89%B6%E8%B4%AB%E5%8D%8F%E4%BD%9C%E5%85%88%E8%BF%9B%E5%8D%95%E4%BD%8D%E2%80%9D%E3%80%81%E5%85%A8%E5%9B%BD%E5%A6%87%E8%81%94%E9%A2%81%E5%8F%91%E7%9A%84%E2%80%9C%E6%8A%97%E9%9C%87%E6%95%91%E7%81%BE%E5%85%88%E8%BF%9B%E5%A6%87%E8%81%94%E7%BB%84%E7%BB%87%E2%80%9D%EF%BC%8C2009%E5%B9%B4%E8%8E%B7%E5%A4%A9%E6%B4%A5%E5%B8%82%E4%BA%BA%E6%B0%91%E6%94%BF%E5%BA%9C%E9%A2%81%E5%8F%91%E7%9A%84%E2%80%9C%E5%A4%A9%E6%B4%A5%E5%B8%82%E5%85%88%E8%BF%9B%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E2%80%9D%E3%80%822012%2D2
014%E5%B9%B4%EF%BC%8C%E8%BF%9E%E7%BB%AD%E4%B8%89%E5%B9%B4%E6%88%90%E5%8A%9F%E7%94%B3%E8%AF%B7%E4%B8%AD%E5%A4%AE%E8%B4%A2%E6%94%BF%E6%94%AF%E6%8C%81%E9%A1%B9%E7%9B%AE%EF%BC%8C%E6%98%AF%E6%88%91%E5%B8%82%E5%94%AF%E4%B8%80%E4%B8%80%E5%AE%B6%E8%8E%B7%E5%BE%97%E4%B8%AD%E5%A4%AE%E8%B4%A2%E6%94%BF%E8%B5%84%E9%87%91%E6%94%AF%E6%8C%81%E7%9A%84%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%8C%E5%9C%A8%E5%8F%91%E6%8C%A5%E7%A4%BE%E4%BC%9A%E7%BB%84%E7%BB%87%E6%89%BF%E6%8E%A5%E6%94%BF%E5%BA%9C%E8%81%8C%E8%83%BD%E4%B8%AD%E8%BF%88%E5%87%BA%E4%BA%86%E5%9D%9A%E5%AE%9E%E7%9A%84%E6%AD%A5%E4%BC%90%E3%80%82',
'id': '166',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F521d77b7abda0a1951facbe1fcdf05f6d7a8854f17dc4aa28d7bd1ad19cc65deca2412c4796514b454b0dab70bb4b141',
'mn': 6845561,
'rk': '79',
'title': '%E5%A4%A9%E6%B4%A5%E5%B8%82%E5%A6%87%E5%A5%B3%E5%84%BF%E7%AB%A5%E5%8F%91%E5%B1%95%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 651,
'uin': '1043909519'},
{'desc': '%E5%9B%9B%E5%B7%9D%E7%9C%81%E7%A7%91%E6%8A%80%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%88%E4%BB%A5%E4%B8%8B%E7%AE%80%E7%A7%B0%E5%9F%BA%E9%87%91%E4%BC%9A%EF%BC%89%E6%88%90%E7%AB%8B%E4%BA%8E2010%E5%B9%B46%E6%9C%88%EF%BC%8C%E7%99%BB%E8%AE%B0%E6%9C%BA%E5%85%B3%E6%98%AF%E5%9B%9B%E5%B7%9D%E7%9C%81%E6%B0%91%E6%94%BF%E5%8E%85%EF%BC%8C%E6%98%AF%E7%BB%8F%E7%99%BB%E8%AE%B0%E7%AE%A1%E7%90%86%E6%9C%BA%E5%85%B3%E8%AE%A4%E5%AE%9A%E7%9A%84%E5%85%B7%E6%9C%89%E5%85%AC%E5%8B%9F%E8%B5%84%E6%A0%BC%E7%9A%84%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E4%BA%8E2017%E5%B9%B43%E6%9C%881%E6%97%A5%E4%BD%9C%E4%B8%BA%E5%9B%9B%E5%B7%9D%E7%9C%81%E9%A6%96%E6%89%B9%EF%BC%8C%E5%85%A8%E7%9C%81%E9%A6%96%E4%B8%AA%E5%9F%BA%E9%87%91%E4%BC%9A%E6%8D%A2%E9%A2%86%E6%96%B0%E6%85%88%E5%96%84%E7%BB%84%E7%BB%87%E8%AF%81%E4%B9%A6%E4%BB%A5%E5%8F%8A%E5%85%AC%E5%8B%9F%E8%AF%81%E4%B9%A6%E3%80%82%E6%98%AF%E5%85%B7%E6%9C%89%E7%A8%8E%E5%8A%A1%E6%8A%B5%E6%89%A3%E8%B5%84%E6%A0%BC%E7%9A%84%E5%9F%BA%E9%87%91%E4%BC%9A%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E4%B8%BB%E8%A6%81%E7%9A%84%E5%B7%A5%E4%BD%9C%E9%A2%86%E5%9F%9F%E5%8C%85%E6%8B%AC%EF%BC%9A%E2%80%9C%E5%85%B3%E7%88%B1%E7%95%99%E5%AE%88%E5%84%BF%E7%AB%A5%E2%80%9D%E2%80%9C%E7%AD%91%E6%A2%A6%E5%B0%8F%E8%84%9A%E5%8D%B0%E5%85%AC%E7%9B%8A%E5%BE%92%E6%AD%A5%E6%B4%BB%E5%8A%A8%E2%80%9D%E2%80%9C%E5%B1%B1%E5%8C%BA%E5%AE%89%E5%85%A8%E9%A5%AE%E6%B0%B4%E2%80%9D%E2%80%9C%E6%95%91%E7%81%BE%E4%B8%8E%E7%81%BE%E5%90%8E%E9%87%8D%E5%BB%BA%E2%80%9D%E2%80%9C%E7%A7%91%E6%8A%80%E5%88%9B%E6%96%B0%E6%89%B6%E8%B4%AB%E2%80%9D%E2%80%9C%E5%8A%A9%E5%AD%A6%E4%B8%8E%E6%94%AF%E6%95%99%E2%80%9D%E7%AD%89%E3%80%82%E5%9F%BA%E9%87%91%E4%BC%9A%E7%9B%AE%E5%89%8D%E5%9C%A8%E5%9F%BA%E9%87%91%E4%BC%9A%E4%B8%AD%E5%BF%83%E7%BD%91%E5%85%A8%E5%9B%BD5900%E4%BD%99%E5%AE%B6%E5%9F%BA%E9%87%91%E4%BC%9A%E8%B4%A2%E5%8A%A1%E9%80%8F%E6%98%8E%E5%BA%A6%E4%BB%A5%E5%8F%8A%E4%BF%A1%E6%81%AF%E5%85%AC%E5%BC%80%E6%8A%AB%E9%9C%B2%E4%B8%8A%EF%BC%8C%E5%9D%87%E4%B8%BA%E6%BB%A1%E5%88%86100%E5%88%86%EF%BC%8C%E5%B9%B6%E5%88%97%E7%AC%AC%E4%B8%80%E3%80%82',
'id': '244',
'logo': 'http%3A%2F%2Fimgcdn%2Egongyi%2Eqq%2Ecom%2Fgongyi%2F9495f50263f61bed81a7140939a8a1b080f6c0c8af6e738629cd4d2331a7995226f9a0b0304df5f271fb1b8a8d37e015',
'mn': 6366826,
'rk': '80',
'title': '%E5%9B%9B%E5%B7%9D%E7%9C%81%E7%A7%91%E6%8A%80%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A',
'tms': 1560,
'uin': '2771213466'}]
len(combined_list)
# combined_list holds the donation information for 80 foundations
2**21
###Output
_____no_output_____
###Markdown
2.4 Decode the scraped fields We can see that the raw data is not really human-readable -- for example, one foundation's name appears as 'title': '%E5%9B%9B%E5%B7%9D%E7%9C%81%E7%A7%91%E6%8A%80%E6%89%B6%E8%B4%AB%E5%9F%BA%E9%87%91%E4%BC%9A'. That is because these fields are percent-encoded as hexadecimal UTF-8, so they need to be decoded first
###Code
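# tiny demo (illustrative only) of what urllib.parse.unquote does before we decode the whole list;
# '%E5%9F%BA%E9%87%91%E4%BC%9A' is the percent-encoded UTF-8 for the Chinese word for "foundation"
import urllib.parse
print(urllib.parse.unquote('%E5%9F%BA%E9%87%91%E4%BC%9A'))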
def decode_ngo_list(ngo_list):
for i in range(len(ngo_list)):
ngo_list[i]['desc'] = urllib.parse.unquote(ngo_list[i]['desc'])
ngo_list[i]['logo'] = urllib.parse.unquote(ngo_list[i]['logo'])
ngo_list[i]['title'] = urllib.parse.unquote(ngo_list[i]['title'])
return ngo_list
ngo_list = decode_ngo_list(combined_list)
ngo_list
###Output
_____no_output_____
###Markdown
2.5 Sort the list data
###Code
sorted_data_ngo_list = sorted(ngo_list,key = lambda x:int(x['rk']), reverse=False)
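# quick check (illustrative): after sorting, the entries should be in ascending rank order
print([item['rk'] for item in sorted_data_ngo_list[:5]])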
###Output
_____no_output_____
###Markdown
After the data has been cleaned up, the next step is to export it 3. Export the data 3.1 Write into a html table
###Code
from json2html import *
with open('NGO_Table.html','a') as html_file:  # note: 'a' appends, so re-running this cell adds duplicate tables
for i in range(len(sorted_data_ngo_list)):
html_table_script = json2html.convert(json = sorted_data_ngo_list[i])
html_file.write(html_table_script)
html_file.write('\n')
empty_line = '<br> \n <br>'
html_file.write(empty_line)
html_file.write('\n')
    # no explicit close() needed -- the with block closes the file automatically
###Output
_____no_output_____
###Markdown
The output includes a file named NGO_Table.html; double-click it to see the scraped results rendered as a series of simple HTML tables 3.2 Write into a csv file
###Code
DF_NGO = pd.DataFrame(sorted_data_ngo_list)
DF_NGO.columns
DF_NGO_Yuan = DF_NGO['mn']/100  # 'mn' appears to be recorded in cents (fen); divide by 100 to get yuan
# Reorder the columns
DF_NGO_result = pd.concat([DF_NGO['rk'],DF_NGO['id'],DF_NGO['title'],\
DF_NGO_Yuan,DF_NGO['tms'],\
DF_NGO['desc'],DF_NGO['uin']],axis = 1)
DF_NGO_result
# Write to a csv file
DF_NGO_result.to_csv('NGO_founding_record.csv')
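# optional sanity check (illustrative): read the file back to confirm it was written
pd.read_csv('NGO_founding_record.csv', index_col=0).head()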
###Output
_____no_output_____ |
arrays_strings/fizz_buzz/fizz_buzz_challenge.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
if num is None:
raise TypeError('num cannot be None')
if num < 1:
raise ValueError('num cannot be less than one')
result = []
for val in range(1, num+1):
string = str(val)
if val % 5 == 0 and val % 3 == 0:
string = 'FizzBuzz'
elif val % 3 == 0:
string = 'Fizz'
elif val % 5 == 0:
string = 'Buzz'
result.append(string)
return result
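# quick manual check (illustrative; the unit test below exercises n=15)
print(Solution().fizz_buzz(5))   # expected: ['1', '2', 'Fizz', '4', 'Buzz']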
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
import unittest
class TestFizzBuzz(unittest.TestCase):
def test_fizz_buzz(self):
solution = Solution()
self.assertRaises(TypeError, solution.fizz_buzz, None)
self.assertRaises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
self.assertEqual(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
    def fizz_buzz(self, num):
        if num is None:
            raise TypeError("must be a number")
        if num < 1:
            raise ValueError("must be >0")
        return [self._replace(item) for item in range(1, num+1)]
    def _replace(self, num):
        replace = ""
        if not num % 3:
            replace += "Fizz"
        if not num % 5:
            replace += "Buzz"
        # use != rather than `is not` for string comparison (avoids a SyntaxWarning)
        return replace if replace != "" else str(num)
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
import unittest
class TestFizzBuzz(unittest.TestCase):
def test_fizz_buzz(self):
solution = Solution()
self.assertRaises(TypeError, solution.fizz_buzz, None)
self.assertRaises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
self.assertEqual(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
if num is None:
raise TypeError("None is invalid")
if num < 1:
raise ValueError("number is invalid")
res = []
for i in range(1,num+1):
multiple_of_3 = i%3 == 0
multiple_of_5 = i%5 == 0
if multiple_of_3 and not multiple_of_5:
res.append("Fizz")
elif multiple_of_5 and not multiple_of_3:
res.append("Buzz")
elif multiple_of_5 and multiple_of_3:
res.append("FizzBuzz")
else:
res.append(f"{i}")
return res
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
import unittest
class TestFizzBuzz(unittest.TestCase):
def test_fizz_buzz(self):
solution = Solution()
self.assertRaises(TypeError, solution.fizz_buzz, None)
self.assertRaises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
self.assertEqual(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
from nose.tools import assert_equal, assert_raises
class TestFizzBuzz(object):
def test_fizz_buzz(self):
solution = Solution()
assert_raises(TypeError, solution.fizz_buzz, None)
assert_raises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
assert_equal(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
if num is None:
raise TypeError
elif num < 1:
raise ValueError
return ['Fizz' * (i % 3 == 0) + 'Buzz' * (i % 5 == 0) or str(i) for i in range (1, num + 1)]
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
from nose.tools import assert_equal, assert_raises
class TestFizzBuzz(object):
def test_fizz_buzz(self):
solution = Solution()
assert_raises(TypeError, solution.fizz_buzz, None)
assert_raises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
assert_equal(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
# TODO: Implement me
# print(num)
if num is None:
raise TypeError('Cannot have None')
if num < 1:
raise ValueError('Cannot be less than 1 ')
results=[]
for i in range(1, num+1):
if i%3 == 0 and i%5==0:
results.append("FizzBuzz")
elif i%5 == 0:
results.append("Buzz")
elif i%3 == 0:
results.append("Fizz")
else:
results.append(str(i))
print('results', results)
return results
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
import unittest
class TestFizzBuzz(unittest.TestCase):
def test_fizz_buzz(self):
solution = Solution()
self.assertRaises(TypeError, solution.fizz_buzz, None)
self.assertRaises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
self.assertEqual(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
results ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz']
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
if num is None:
raise TypeError
if num < 1:
raise ValueError
result = []
for i in range(1, num + 1):
possible_fizz = '' if i % 3 else 'Fizz'
possible_buzz = '' if i % 5 else 'Buzz'
result.append((possible_fizz + possible_buzz) or str(i))
return result
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
from nose.tools import assert_equal, assert_raises
class TestFizzBuzz(object):
def test_fizz_buzz(self):
solution = Solution()
assert_raises(TypeError, solution.fizz_buzz, None)
assert_raises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
assert_equal(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
if num is None:
raise TypeError
elif num < 1:
raise ValueError
else:
result = []
for i in range(1, num + 1):
if i % 15 == 0:
result.append('FizzBuzz')
elif i % 5 == 0:
result.append('Buzz')
elif i % 3 == 0:
result.append('Fizz')
else:
result.append(str(i))
return result
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
import unittest
class TestFizzBuzz(unittest.TestCase):
def test_fizz_buzz(self):
solution = Solution()
self.assertRaises(TypeError, solution.fizz_buzz, None)
self.assertRaises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
self.assertEqual(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
    def fizz_buzz(self, num):
        if num is None:
            raise TypeError
        if num < 1:
            raise ValueError
        answer = []
        for i in range(1, num+1):
            # use == for value comparison; `is` checks object identity and only
            # happens to work for small integers because of interning
            if i % 3 == 0 and i % 5 == 0:
                answer.append('FizzBuzz')
            elif i % 3 == 0:
                answer.append('Fizz')
            elif i % 5 == 0:
                answer.append('Buzz')
            else:
                answer.append(str(i))
        return answer
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
from nose.tools import assert_equal, assert_raises
class TestFizzBuzz(object):
def test_fizz_buzz(self):
solution = Solution()
assert_raises(TypeError, solution.fizz_buzz, None)
assert_raises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
assert_equal(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
Success: test_fizz_buzz
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement Fizz Buzz.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* What is fizz buzz? * Return the string representation of numbers from 1 to n * Multiples of 3 -> 'Fizz' * Multiples of 5 -> 'Buzz' * Multiples of 3 and 5 -> 'FizzBuzz'* Can we assume the inputs are valid? * No* Can we assume this fits memory? * Yes Test Cases* None -> Exception* Exception* 15 ->[ '1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz', 'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz'] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/fizz_buzz/fizz_buzz_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Solution(object):
def fizz_buzz(self, num):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_fizz_buzz.py
import unittest
class TestFizzBuzz(unittest.TestCase):
def test_fizz_buzz(self):
solution = Solution()
self.assertRaises(TypeError, solution.fizz_buzz, None)
self.assertRaises(ValueError, solution.fizz_buzz, 0)
expected = [
'1',
'2',
'Fizz',
'4',
'Buzz',
'Fizz',
'7',
'8',
'Fizz',
'Buzz',
'11',
'Fizz',
'13',
'14',
'FizzBuzz'
]
self.assertEqual(solution.fizz_buzz(15), expected)
print('Success: test_fizz_buzz')
def main():
test = TestFizzBuzz()
test.test_fizz_buzz()
if __name__ == '__main__':
main()
###Output
_____no_output_____ |
Data Visualization with Matplotlib.ipynb | ###Markdown
Qualitative Visuals for 1D Data Loading Libraries and Data **Load libraries**
###Code
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
!pip install --upgrade -q gspread
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
###Output
_____no_output_____
###Markdown
**Load datasets**
###Code
def get_df(file_name, location):
    # open the spreadsheet by name and grab the worksheet at the given index,
    # then build a DataFrame using the first row as the column headers
    worksheet = gc.open(file_name).get_worksheet(location)
    rows = worksheet.get_all_values()
    head = rows[0]
    data = rows[1:]
    df = pd.DataFrame.from_records(data, columns=head)
    return df
employees = get_df('Sales', 0)
customers = get_df('Sales', 1)
products = get_df('Sales', 2)
orders = get_df('Sales', 3)
suppliers = get_df('Sales', 4)
###Output
_____no_output_____
###Markdown
**Clean data**
###Code
def missing_fields(df, field):
    # values not listed in the dict map to NaN, and DataFrame.update() skips NaN,
    # so only the "missing" placeholders are replaced with 'Other'
    dict_department_names = {'': 'Other', 'N/A': 'Other', None: 'Other', ' ': 'Other'}
    new_df = df[field].map(dict_department_names)
    df.update(new_df)
    return df
employees = missing_fields(employees, 'Department')
products = missing_fields(products, 'Supplier Name')
suppliers = missing_fields(suppliers, 'Name')
def update_city_names(df, field, city_proper, city_names):
dict_city_names = {}
for city in city_names:
dict_city_names[city] = city_proper
new_df = df[field].map(dict_city_names)
df.update(new_df)
return df
sf = 'San Francisco'
sf_names = ["SAN FRANCISCO", "SANFRANCISCO", "SANFRANCISCO,CA", "SF", "sf", "FRISCO", "Frisco"]
oak = 'Oakland'
oak_names = ["OAKLAND", "oakland", "Oak Town"]
sj = 'San Jose'
sj_names = ["SAN JOSE", "san jose", "SANJOSE", "SANJOSE,CA", "SJ", "sj", "Jose"]
employees = update_city_names(employees, 'City', sf, sf_names)
employees = update_city_names(employees, 'City', oak, oak_names)
employees = update_city_names(employees, 'City', sj, sj_names)
customers = update_city_names(customers, 'City', sf, sf_names)
customers = update_city_names(customers, 'City', oak, oak_names)
customers = update_city_names(customers, 'City', sj, sj_names)
suppliers = update_city_names(suppliers, 'City', sf, sf_names)
suppliers = update_city_names(suppliers, 'City', oak, oak_names)
suppliers = update_city_names(suppliers, 'City', sj, sj_names)
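# quick check (illustrative): the City columns should now contain only the canonical names
print(customers['City'].unique())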
###Output
_____no_output_____
###Markdown
Pie Chart
###Code
labels = 'A', 'B', 'C', 'D'
freq = [5,10,9,3]
fig, ax = plt.subplots()
ax.pie(freq, labels=labels, autopct='%1.1f%%')
plt.show()
plt.pie(freq, labels=labels, autopct='%1.1f%%')
plt.show()
###Output
_____no_output_____
###Markdown
**Frequency of departments for employees**
###Code
department_table = employees['Department'].value_counts()
department_names = department_table.index
department_freq = (1.0*department_table.values)/department_table.sum()
plt.pie(department_freq, labels=department_names, autopct='%1.2f%%')
plt.show()
###Output
_____no_output_____
###Markdown
**Cities customers live**
###Code
city_table = customers['City'].value_counts()
city_names = city_table.index
city_freq = (1.0*city_table.values)/city_table.sum()
plt.pie(city_freq, labels=city_names, autopct='%1.2f%%')
plt.show()
###Output
_____no_output_____
###Markdown
Bar Graph
###Code
rain_avg = (18, 23, 28, 33, 32, 25, 21)
rain_std = (3, 2, 4, 1, 3, 2, 1)
ind = np.arange(len(rain_avg))
width = 0.9
fig, ax = plt.subplots()
ax.bar(ind, rain_avg, width, yerr=rain_std, color='SkyBlue', label='Rain')
ax.set_ylabel('Rain in Inches')
ax.set_title('Rain Per Month')
ax.set_xticks(ind)
ax.set_xticklabels(('Sep', 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar'))
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Number of products offered by supplier**
###Code
product_supplier_df = pd.merge(products, suppliers, how='left').drop_duplicates()
product_supplier_table = product_supplier_df['Supplier Name'].value_counts()
supplier_names = product_supplier_table.index
supplier_freq = product_supplier_table.values
ind = np.arange(len(supplier_freq))
width = 0.8
fig, ax = plt.subplots()
ax.barh(ind, supplier_freq, width, label='Suppliers')
ax.set_xlabel('Products')
ax.set_title('Products by Suppliers')
ax.set_yticks(ind)
ax.set_yticklabels(supplier_names)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Qualitative Visuals for 2D Data Side-by-side Pie Charts
###Code
from matplotlib.gridspec import GridSpec
labels1 = 'A', 'B', 'C', 'D'
freq1 = [5, 10, 9, 3]
labels2 = 'A', 'B', 'C', 'D'
freq2 = [6, 7, 4, 2]
the_grid = GridSpec(1, 2)
plt.subplot(the_grid[0, 0], aspect=1)
plt.pie(freq1, labels=labels1, autopct='%1.1f%%')
plt.subplot(the_grid[0, 1], aspect=1)
plt.pie(freq2, labels=labels2, autopct='%1.1f%%')
plt.show()
###Output
_____no_output_____
###Markdown
**Departments per city**
###Code
city_department_df = employees.groupby(['City', 'Department'])['Department'].count()
sf_departs = city_department_df.loc['San Francisco']
oak_departs = city_department_df.loc['Oakland']
sj_departs = city_department_df.loc['San Jose']
sf_department_names = sf_departs.index
oak_department_names = oak_departs.index
sj_department_names = sj_departs.index
the_grid = GridSpec(1, 3)
plt.subplot(the_grid[0, 0], aspect=1)
plt.pie(sf_departs, labels=sf_department_names, autopct='%1.1f%%')
plt.subplot(the_grid[0, 1], aspect=1)
plt.pie(oak_departs, labels=oak_department_names, autopct='%1.1f%%')
plt.subplot(the_grid[0, 2], aspect=1)
plt.pie(sj_departs, labels=sj_department_names, autopct='%1.1f%%')
plt.show()
###Output
_____no_output_____
###Markdown
Side-by-side Bar Graphs
###Code
sales17 = np.array([2921, 3178, 3721, 1126, 986, 1323, 2105])
sales18 = np.array([1761, 2185, 4821, 1942, 1346, 1689, 2632])
ind = np.arange(len(sales17))
width = 0.35
fig, ax = plt.subplots()
ax.bar(ind, sales17, width, color='r')
ax.bar(ind + width, sales18, width, color='g')
ax.set_ylabel('Sales')
ax.set_title('Sales by Month and Year')
ax.set_xticks(ind + width/2)
ax.set_xticklabels(('Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr'))
plt.show()
###Output
_____no_output_____
###Markdown
Stacked Bar Graphs
###Code
ind = np.arange(len(sales17))
width = 0.7
p1 = plt.bar(ind, sales17, width)
p2 = plt.bar(ind, sales18, width, bottom=sales17)
plt.ylabel('Sales')
plt.title('Sales by Month and Year')
plt.xticks(ind, ('Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr'))
plt.yticks(np.arange(0, 10000, 1000))
plt.legend((p1[0], p2[0]), ('17', '18'))
plt.show()
###Output
_____no_output_____
###Markdown
Quantitative Visuals for 1D Data Line Graphs
###Code
x = np.arange(-10, 10, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
**Stock prices**
###Code
stock_prices = np.array([11, 12, 15, 17, 18, 14, 12, 13, 9, 7, 13, 15, 14, 14, 17, 16, 15, 19, 21, 22, 26, 24, 21, 23, 20, 25, 23, 24, 23, 26, 28])
plt.plot(stock_prices, 'o')
plt.axis([-1, 30, -1, 30])
plt.ylabel('Price')
plt.xlabel('Day')
plt.title('Monthly Stock Price')
plt.vlines(np.arange(31), [0], stock_prices, linestyles='dotted')
plt.show()
###Output
_____no_output_____
###Markdown
Histogram
###Code
mu = 0
sigma = 10
x = mu + sigma*np.random.randn(10000)
# use density=True (the old `normed` argument and matplotlib.mlab.normpdf have been
# removed from recent matplotlib releases) and compute the normal pdf with numpy
n, bins, patches = plt.hist(x, 50, density=True, facecolor='green')
y = np.exp(-0.5*((bins - mu)/sigma)**2) / (sigma*np.sqrt(2*np.pi))
l = plt.plot(bins, y, 'r--', linewidth=1)
plt.axis([-40, 40, 0, 0.05])
plt.show()
###Output
_____no_output_____
###Markdown
Box Plot
###Code
mu = 100
sigma = 15
x = mu + sigma*np.random.randn(10000)
plt.boxplot(x)
plt.show()
###Output
_____no_output_____
###Markdown
Quantitative Visuals for 2D Data Line Graphs
###Code
stock_prices_A = stock_prices
stock_prices_B = np.array([13, 15, 14, 19, 21, 11, 10, 13, 9, 11, 12, 16, 17, 15, 18, 21, 22, 21, 21, 19, 22, 24, 25, 24, 26, 23, 25, 24, 27, 26, 28])
plt.plot(stock_prices_A, 'o')
plt.plot(stock_prices_B, 'o')
plt.axis([-1, 30, -1, 40])
plt.ylabel('Price')
plt.xlabel('Day')
plt.title('Monthly Stock Price')
#xticks = np.arange(0, 32, 2)
#plt.xticks(xticks)
plt.vlines(np.arange(31), [0], stock_prices_A, linestyles='dotted')
plt.vlines(np.arange(31), [0], stock_prices_B, linestyles='dotted')
plt.show()
###Output
_____no_output_____
###Markdown
Scatter Plot
###Code
np.random.seed(0)
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
plt.scatter(x, y)
plt.show()
np.random.seed(0)
mu = 100
sigma = 15
x = mu + sigma*np.random.randn(1000)
y = mu + sigma*np.random.randn(1000)
plt.scatter(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Stacked Area Charts
###Code
product_A = np.array([10, 11, 8, 14, 9, 13, 16, 19, 24, 21, 23, 14, 13, 16])
product_B = np.array([11, 14, 10, 15, 12, 15, 17, 20, 23, 21, 20, 22, 23, 24])
y = np.row_stack((product_A, product_B))
x = np.arange(14)
fig, ax = plt.subplots()
ax.stackplot(x, y)
plt.ylabel('Sales')
plt.xlabel('Day')
plt.title('Bi Weekly Products Sold')
plt.show()
###Output
_____no_output_____
###Markdown
**Stacked sales by month and year**
###Code
fig, ax = plt.subplots()
y = np.row_stack((sales17, sales18))
x = np.arange(len(sales17))
ax.stackplot(x, y)
plt.ylabel('Sales')
plt.xlabel('Month')
plt.xticks([0,1,2,3,4,5,6], ('Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr'))
plt.title('Sales Per Month')
plt.show()
###Output
_____no_output_____ |
week07/.ipynb_checkpoints/prep_notebook_week07-checkpoint.ipynb | ###Markdown
Activity 1: Heat maps* we'll start with building up a heat map based on some small, randomly generated data* we'll use this methodology to make our plot interactive & then move on to using "real" data
###Code
# lets import our usual stuff
import pandas as pd
import bqplot
import numpy as np
import traitlets
import ipywidgets
%matplotlib inline
# lets start thinking about heatmaps with some random data
data = np.random.random((10, 10))
data
# so we just have a 10 x 10 array here
# lets start by generating a quick heat map
# (1)
# create our first scale of our plot: just a color scale
col_sc = bqplot.ColorScale()
# now we'll use bqplot's gridheatmap function
# with our randomly generated data & our scales to
# make a heatmap like so:
heat_map = bqplot.GridHeatMap(color = data,
scales = {'color': col_sc})
# put our marks into our figure and lets go!
fig = bqplot.Figure(marks = [heat_map])
# (2) ok, this is fine and all, but lets add some reference for our
# color scheme with a colorbar & also lets choose a different
# color scheme
col_sc = bqplot.ColorScale(scheme = "Reds")
# lets plot some axes on our plot as well, in this case
# our axis will be a color bar, vertically on the right
# of our heatmap
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
# put it all together and lets take a look!
heat_map = bqplot.GridHeatMap(color = data,
scales = {'color': col_sc})
# generate fig!
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax])
# (3) finally, lets add some axes labels on the x & y axis,
# we need to add their scales first
# this scale will just count up the boxes in the vertical
# & horizontal direction
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# add our axes objects
x_ax = bqplot.Axis(scale = x_sc)
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical')
heat_map = bqplot.GridHeatMap(color = data,
scales = {'color': col_sc,
'row': y_sc,
'column':x_sc})
fig = bqplot.Figure(marks = [heat_map],
axes = [c_ax, y_ax, x_ax])
fig
# so, while this indeed a lovely heatmap, it isn't interactive in any way!
# boo to that!
# Lets start adding in some interactivity
# keep data from last time
# now add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme = "Reds")
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc)
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical')
# lets now re-do our heat map & add in some interactivity:
heat_map = bqplot.GridHeatMap(color = data,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'}, # to make our selection blue
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 0.8})
# stir and combine into 1 figure
fig = bqplot.Figure(marks = [heat_map],
axes = [c_ax, y_ax, x_ax])
fig
# Ok fine, but our selection isn't linked to anything!
# lets check out what heat_map selected is
heat_map.selected
# note if I select a different box & re-run this cell,
# I get out different values
# so now, lets write a little function that links the data value
# to the selected & lets print this in a little ipywidgets label
mySelectedLabel = ipywidgets.Label()
# (1)
# lets write our linking function
# there are a few ways to link this,
# here is a simple way first
def get_data_value(change):
i,j = heat_map.selected[0]
v = data[i,j] # grab data value
mySelectedLabel.value = str(v) # set our label
# (2) this is maybe inelegant as we are
# explicitly calling our original heat map!
# so, lets instead remind ourselves what "change" is here
def get_data_value(change):
print(change)
i,j = heat_map.selected[0]
v = data[i,j] # grab data value
mySelectedLabel.value = str(v) # set our label
# now we see when we click we get back a whole
# dictionary of information - if we recall,
# "owner" here is our heat_map which "owns"
# this change.
# If we want to be able to apply our function to
# this or any other heatmap figure we generate,
# we can re-write the above function as follows:
# (3)
#def get_data_value(change,mylab):
def get_data_value(change):
#print(change['owner'].selected)
i,j = change['owner'].selected[0]
v = data[i,j] # grab data value
mySelectedLabel.value = str(v) # set our label
#mylab.value = str(v) # set our label
# so, this now is applied to any map that we choose to input
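# (aside, a sketch of the commented-out idea above): to pass the label in
# explicitly, give the callback an extra argument and bind it with functools.partial
import functools
def get_data_value_for(change, mylab):
    i, j = change['owner'].selected[0]
    mylab.value = str(data[i, j])
# it would then be attached with something like:
# heat_map.observe(functools.partial(get_data_value_for, mylab=mySelectedLabel), 'selected')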
# regenerate our heatmap to use in our fig canvas
heat_map = bqplot.GridHeatMap(color = data,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 0.8})
# make sure we check out
heat_map.observe(get_data_value, 'selected')
#heat_map.observe(self, mySelectedLabel)
fig = bqplot.Figure(marks = [heat_map],
axes = [c_ax, y_ax, x_ax])
ipywidgets.VBox([mySelectedLabel, fig])
#fig
###Output
_____no_output_____
###Markdown
Activity 2: Preliminary dashboarding* we'll use a random dataset to explore how to make dashboard-like plots that change when things are updated
###Code
# now lets move on to making a preliminary
# dashboard for multi-dimensional datasets
# lets first start with some randomly generated data again
data = np.random.random((10, 10,20))
data
data.shape
data[0,0,:]
# we can see that now instead of 1 value, each "i,j" component
# has an array of values
# lets start building up linked plots
# first, lets re-do our plot above with our label printing
# out the sum along the 3rd axis of this 3-d array
# now add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme = "Reds")
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc)
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical')
# create label again
mySelectedLabel = ipywidgets.Label()
def get_data_value(change):
i,j = change['owner'].selected[0]
# if we run with this, our label is the 20 elements
#v = data[i,j] # grab data value
# but,lets sum instead
v = data[i,j].sum() # grab data value
mySelectedLabel.value = str(v) # set our label
# so, this now is applied to any map that we choose to input
# regenerate our heatmap to use in our fig canvas
# now, we want to plot the sum along our 3rd axis as well,
# so, lets do this with "np.sum" along our 3rd axis
heat_map = bqplot.GridHeatMap(color = np.sum(data,axis=2),
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 0.8})
# make sure we check out
heat_map.observe(get_data_value, 'selected')
#heat_map.observe(self, mySelectedLabel)
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
#(1)
#ipywidgets.VBox([mySelectedLabel, fig])
# (2)
# now, lets generate another figure that just plots the histogram of values in our 3rd axis
x_sch = bqplot.LinearScale()
y_sch = bqplot.LinearScale()
x_axh = bqplot.Axis(scale = x_sch, label = 'Sum of 3rd axis')
y_axh = bqplot.Axis(scale = y_sch,
orientation = 'vertical',
label='Frequency')
hist = bqplot.Hist(sample = data[0,0,:],
opacity = 0.1,
normalized = False, # normalized=False means we get counts in each bin
scales = {'sample': x_sch, 'count': y_sch},
bins = 5)
figh = bqplot.Figure(marks = [hist], axes = [x_axh, y_axh])
# ok, so side by side plots, but nothing updates!
#(3) so, we have to update what our heatmap has access to as
# far as being able to update both the label *AND* the
# histogram's data
def get_data_value2(change):
i,j = change['owner'].selected[0]
# if we run with this, our label is the 20 elements
#v = data[i,j] # grab data value
# but,lets sum instead
v = data[i,j].sum() # grab data value
mySelectedLabel.value = str(v) # set our label
hist.sample = data[i,j]
#print(data[i,j])
heat_map.observe(get_data_value2, 'selected')
# note here now the heat_map is in a sense "driving" our
# changes.
# *** DO EXAMPLE OF BACK AND FORTH ***
ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig,figh])] )
###Output
_____no_output_____
###Markdown
Activity 3: Dashboarding with "real" data* now we'll move on to the UFO dataset and start messing around with creating a dashboard for this dataset
###Code
# lets start by loading the UFO dataset
ufos = pd.read_csv("/Users/jillnaiman/Downloads/ufo-scrubbed-geocoded-time-standardized-00.csv",
names = ["date", "city", "state", "country",
"shape", "duration_seconds", "duration",
"comment", "report_date",
"latitude", "longitude"],
parse_dates = ["date", "report_date"])
###Output
_____no_output_____
###Markdown
Aside: downsampling* some folks reported having a tough time with interactivity of scatter plots with the UFO dataset* here we'll quickly go over some methods of downsampling that can be applied to decrease the size of our dataset
###Code
# you'll see the above takes a good long time to load on my computer
# the length of the dataset is quite large:
len(ufos)
# 80,000! So, to speed up our interactivity, we can
# randomly sample this dataset for plotting purposes
# lets down sample to 1000 samples:
nsamples = 1000
#nsamples = 5000
downSampleMask = np.random.randint(0,len(ufos)-1,nsamples)
downSampleMask
# so, downsample mask is now a list of random indices for
# the UFO dataset
# the above doesn't exclude repeats, but we can take
# care of this with a different call:
downSampleMask = np.random.choice(range(len(ufos)-1),
nsamples, replace=False)
# lets update:
ufosDS = ufos.loc[downSampleMask]
len(ufosDS)
# so much shorter
# we can also see that this is saved as a dataframe:
ufosDS
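# (aside, not in the original workflow): pandas can do the same downsample in
# one line, sampling without replacement by default
ufosDS_alt = ufos.sample(n=nsamples, random_state=0)
len(ufosDS_alt)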
# lets make a super quick scatter plot to remind ourselves what this looks like:
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label='Latitude')
#(1)
#scatters = bqplot.Scatter(x = ufosDS['longitude'],
# y = ufosDS['latitude'],
# scales = {'x': x_sc, 'y': y_sc})
# (2) recall we can also color by things like duration
c_sc = bqplot.ColorScale()
#c_ax = bqplot.ColorAxis(scale = c_sc, label='Duration in sec', orientation = 'vertical', side = 'right')
#scatters = bqplot.Scatter(x = ufosDS['longitude'],
# y = ufosDS['latitude'],
# color=ufosDS['duration_seconds'],
# scales = {'x': x_sc, 'y': y_sc, 'color':c_sc})
# (3) again, we recall that there is a large range in durations, so
# it makes sense that we have a muted color pattern - we want
# to use a log colorscale
# with bqplot we can do this with:
c_ax = bqplot.ColorAxis(scale = c_sc, label='log(sec)',
orientation = 'vertical', side = 'right')
scatters = bqplot.Scatter(x = ufosDS['longitude'],
y = ufosDS['latitude'],
color=np.log10(ufosDS['duration_seconds']),
scales = {'x': x_sc, 'y': y_sc, 'color':c_sc})
fig = bqplot.Figure(marks = [scatters], axes = [x_ax, y_ax, c_ax])
fig
# now we are going to use our heatmap idea to plot this data again
# note this will shmear out a lot of the nice map stuff we see above
# don't worry! We'll talk about making maps in the next class or so
# what should we color by? lets do by duration
# to get this to work with our heatmap, we're going
# to have to do some rebinning
# right now, our data is all in 1 long list
# we need to rebin things in a 2d histogram where
# the x axis is long & y is lat
# ***START WITH 10 EACH**
nlong = 20
nlat = 20
#(1)
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
ufos['latitude'],
weights=ufos['duration_seconds'],
bins=[nlong,nlat])
# this returns the TOTAL duration of ufo events in each bin
hist2d
# (2)
# to normalize the weighted histogram we can pass density=True
# (formerly normed=True; note this gives a probability density,
# not the average duration per bin)
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
                                 ufos['latitude'],
                                 weights=ufos['duration_seconds'],
                                 density=True,
                                 bins = [nlong,nlat])
hist2d
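# (aside, a sketch -- not the instructor's code): if we actually wanted the
# *average* duration per bin, one way is to divide the weighted histogram by
# the raw counts, leaving empty bins as NaN
sum2d, _, _ = np.histogram2d(ufos['longitude'], ufos['latitude'],
                             weights=ufos['duration_seconds'],
                             bins=[nlong, nlat])
counts2d, _, _ = np.histogram2d(ufos['longitude'], ufos['latitude'],
                                bins=[nlong, nlat])
avg2d = sum2d / np.where(counts2d > 0, counts2d, np.nan)
avg2d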
# (3) ok, lets go back to total duration
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
ufos['latitude'],
weights=np.log10(ufos['duration_seconds']),
bins = [nlong,nlat])
# note that the sizes of the edges & the hist are different:
hist2d.shape, long_edges.shape, lat_edges.shape
# this is because the edges are bin edges, not centers
# to get bin centers we can do:
# lets do some fancy in-line forloops
long_centers = [(long_edges[i]+long_edges[i+1])*0.5 for i in range(len(long_edges)-1)]
lat_centers = [(lat_edges[i]+lat_edges[i+1])*0.5 for i in range(len(lat_edges)-1)]
long_centers, lat_centers
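# (aside): the same bin centers can be computed without a loop by averaging
# neighbouring edges directly
long_centers_vec = 0.5 * (long_edges[:-1] + long_edges[1:])
lat_centers_vec = 0.5 * (lat_edges[:-1] + lat_edges[1:])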
# (4) note: we might want to control where our bins are, we can do this by
# specifying bin edges ourselves
long_bins = np.linspace(-150, 150, nlong+1)
lat_bins = np.linspace(-40, 70, nlat+1)
long_bins, long_bins.shape
lat_bins, lat_bins.shape
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
ufos['latitude'],
weights=ufos['duration_seconds'],
bins = [long_bins,lat_bins])
# this is because the edges are bin edges, not centers
long_centers = [(long_edges[i]+long_edges[i+1])*0.5 for i in range(len(long_edges)-1)]
lat_centers = [(lat_edges[i]+lat_edges[i+1])*0.5 for i in range(len(lat_edges)-1)]
# (5)
# again, we want to take the log scale of things
# we're going to do this by taking the log of hist2d
# but there are some zero values in this histogram
# if we just take the log we get -inf
np.log10(hist2d)
# this can mess up our color scheme mapping
# (6) so we are going to "trick" our color scheme like so
hist2d[hist2d <= 0] = np.nan # set zeros to NaNs
# then take log
hist2d = np.log10(hist2d)
hist2d
# (7) finally, our histogram is actually
# transposed - this is just how numpy outputs it,
# lets put the world right side up with:
hist2d = hist2d.T
# now that we have all that fancy binning out of the way,
# lets proceed as normal:
# add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')#,
#label='log(sec)')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
#***GO BACK AND PLAY WITH BIN SIZES***
# (2) lets add a label again to print duration
# create label again
mySelectedLabel = ipywidgets.Label()
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
# make sure we check out
heat_map.observe(get_data_value, 'selected')
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
#(1)
#fig
#(2)
ipywidgets.VBox([mySelectedLabel,fig])
# ok, now lets build up our dashboard
# again to also show how the duration of UFO sitings in each
# selected region changes with year
# we'll do this with the same methodology we applied before
# **copy paste above***
# (1)
# (I) For the heatmap
# add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
# (II) Scatter plot
# scales & ax in usual way
import datetime as dt
x_scl = bqplot.DateScale(min=dt.datetime(1950,1,1),max=dt.datetime(2020,1,1)) # note: for dates on x-axis
y_scl = bqplot.LogScale()
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Duration in Sec', scale=y_scl,
orientation='vertical', side='left')
# for the lineplot of duration in a region as a function of year
# lets start with a default region & year
i,j = 0,0
longs = [long_edges[i], long_edges[i+1]]
lats = [lat_edges[j],lat_edges[j+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
# lets plot the durations as a function of year there
duration_scatt = bqplot.Scatter(x = ufos['date'][region_mask],
y = ufos['duration_seconds'][region_mask],
scales={'x':x_scl, 'y':y_scl})
fig_dur = bqplot.Figure(marks = [duration_scatt], axes = [ax_xcl, ax_ycl])
# create label again
mySelectedLabel = ipywidgets.Label()
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
# make sure we connect to heatmap
#heat_map.observe(get_data_value, 'selected')
# (2) now again, we want our scatter plot to react to changes
# to what we've selected so:
def get_data_value2(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
    # note!! i & j are swapped here to match up with hist & selection
longs = [long_edges[j], long_edges[j+1]]
lats = [lat_edges[i],lat_edges[i+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
duration_scatt.x = ufos['date'][region_mask]
duration_scatt.y = ufos['duration_seconds'][region_mask]
#print(i,j)
#print(longs,lats)
#print(ufos['date'][region_mask])
# make sure we connect to heatmap
heat_map.observe(get_data_value2, 'selected')
ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig,fig_dur])])
# note that when I select a deep purple place, my scatter plot is
# very laggy, this makes me think we should do this with a
# histogram/bar type plot
# (I) For the heatmap
# add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
# (II) Bar plot
# scales & ax in usual way
x_scl = bqplot.LinearScale() # note we are back to linears
y_scl = bqplot.LinearScale()
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Total duration in Sec', scale=y_scl,
orientation='vertical', side='left')
# for the lineplot of duration in a region as a function of year
# lets start with a default region & year
i,j = 0,0
longs = [long_edges[i], long_edges[i+1]]
lats = [lat_edges[j],lat_edges[j+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
# lets plot the durations as a function of year there
ufos['year'] = ufos['date'].dt.year
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
# like before with our histograms
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
# make histogram by hand, weighting by duration
duration_hist = bqplot.Bars(x=dur_centers, y=dur,
scales={'x':x_scl, 'y':y_scl})
fig_dur = bqplot.Figure(marks = [duration_hist], axes = [ax_xcl, ax_ycl])
# to what we've selected so:
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
    # note!! i & j are swapped here to match up with hist & selection
longs = [long_edges[j], long_edges[j+1]]
lats = [lat_edges[i],lat_edges[i+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
if len(ufos['year'][region_mask]) > 0:
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
duration_hist.x = dur_centers
duration_hist.y = dur
else:
duration_hist.x = [0]; duration_hist.y = [0]
# make sure we connect to heatmap
heat_map.observe(get_data_value, 'selected')
fig.layout.min_width = '500px'
fig_dur.layout.min_width = '700px'
plots = ipywidgets.HBox([fig,fig_dur])
myout = ipywidgets.VBox([mySelectedLabel, plots])
myout
###Output
_____no_output_____
###Markdown
Might not get to this...
###Code
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
# (II) Bar plot for durations thorugh the years
# scales & ax in usual way
x_scl = bqplot.LinearScale() # note we are back to linears
y_scl = bqplot.LinearScale()
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Total duration in Sec', scale=y_scl,
orientation='vertical', side='left')
# for the lineplot of duration in a region as a function of year
# lets start with a default region & year
i,j = 0,0
longs = [long_edges[i], long_edges[i+1]]
lats = [lat_edges[j],lat_edges[j+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
# lets plot the durations as a function of year there
ufos['year'] = ufos['date'].dt.year
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
# like before with our histograms
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
# make histogram by hand, weighting by duration
duration_hist = bqplot.Bars(x=dur_centers, y=dur,
scales={'x':x_scl, 'y':y_scl})
fig_dur = bqplot.Figure(marks = [duration_hist], axes = [ax_xcl, ax_ycl])
# (III) histogram for shape
x_ord = bqplot.OrdinalScale()
y_ord = bqplot.LinearScale()
ax_xord = bqplot.Axis(label='Shape', scale=x_ord)
ax_yord = bqplot.Axis(label='Freq', scale=y_ord,
orientation='vertical',
side='left')
# histogram using pandas
# pull x & y from one value_counts() call so the category order matches the counts
# (unique() is in order of appearance, not frequency)
shape_counts = ufos['shape'][region_mask].value_counts()
hist_ord = bqplot.Bars(x=shape_counts.index.values,
                       y=shape_counts.values,
                       scales={'x':x_ord, 'y':y_ord})
fig_shape = bqplot.Figure(marks=[hist_ord], axes=[ax_xord,ax_yord])
# to what we've selected so:
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
    # note!! i & j are swapped here to match up with hist & selection
longs = [long_edges[j], long_edges[j+1]]
lats = [lat_edges[i],lat_edges[i+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
duration_hist.x = dur_centers
duration_hist.y = dur
# also update shapes
#print(ufos['shape'][region_mask])
    shape_counts = ufos['shape'][region_mask].value_counts()
    hist_ord.x = shape_counts.index.values
    hist_ord.y = shape_counts.values
# make sure we connect to heatmap
heat_map.observe(get_data_value, 'selected')
# lets make all the sizes look nice
fig_dur.layout.max_width = '400px'
fig_dur.layout.max_height= '300px'
fig_shape.layout.max_width = '400px'
fig_shape.layout.max_height= '300px'
fig.layout.min_width = '800px' # add to both
# change layout
ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig,ipywidgets.VBox([fig_shape,fig_dur])])])
#myout = ipywidgets.VBox([mySelectedLabel,
# ipywidgets.HBox([fig_shape,fig_dur]),
# fig])
#myout
###Output
_____no_output_____ |
OptiMen_safe_driving.ipynb | ###Markdown
Importing Data Files
###Code
test = pd.read_csv('test.csv')
train = pd.read_csv('train.csv')
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
train.head()
train.shape
test.shape
###Output
_____no_output_____
###Markdown
We are given that the missing values are indicated by -1. Let's replace -1 with NaN so that we can compute how many missing values are present.
###Code
train_data = train.copy() # make a true copy so the original frame stays untouched
train_data = train_data.replace(-1, np.nan)
test_data = test.copy()
test_data = test_data.replace(-1, np.nan)
###Output
_____no_output_____
###Markdown
Let's check if there are any duplicate values present or not.
###Code
train_data.shape
train_data = train_data.drop_duplicates()  # assign back, otherwise the original frame is unchanged and the shape check below proves nothing
train_data.shape
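# (aside, a sketch): a more direct check simply counts duplicated rows
train_data.duplicated().sum()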
###Output
_____no_output_____
###Markdown
No duplicate data is present in the dataset.
###Code
train_data.info()
test_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 178564 entries, 0 to 178563
Data columns (total 58 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 178564 non-null int64
1 ps_ind_01 178564 non-null int64
2 ps_ind_02_cat 178496 non-null float64
3 ps_ind_03 178564 non-null int64
4 ps_ind_04_cat 178536 non-null float64
5 ps_ind_05_cat 176802 non-null float64
6 ps_ind_06_bin 178564 non-null int64
7 ps_ind_07_bin 178564 non-null int64
8 ps_ind_08_bin 178564 non-null int64
9 ps_ind_09_bin 178564 non-null int64
10 ps_ind_10_bin 178564 non-null int64
11 ps_ind_11_bin 178564 non-null int64
12 ps_ind_12_bin 178564 non-null int64
13 ps_ind_13_bin 178564 non-null int64
14 ps_ind_14 178564 non-null int64
15 ps_ind_15 178564 non-null int64
16 ps_ind_16_bin 178564 non-null int64
17 ps_ind_17_bin 178564 non-null int64
18 ps_ind_18_bin 178564 non-null int64
19 ps_reg_01 178564 non-null float64
20 ps_reg_02 178564 non-null float64
21 ps_reg_03 146268 non-null float64
22 ps_car_01_cat 178533 non-null float64
23 ps_car_02_cat 178561 non-null float64
24 ps_car_03_cat 55519 non-null float64
25 ps_car_04_cat 178564 non-null int64
26 ps_car_05_cat 98627 non-null float64
27 ps_car_06_cat 178564 non-null int64
28 ps_car_07_cat 175170 non-null float64
29 ps_car_08_cat 178564 non-null int64
30 ps_car_09_cat 178384 non-null float64
31 ps_car_10_cat 178564 non-null int64
32 ps_car_11_cat 178564 non-null int64
33 ps_car_11 178560 non-null float64
34 ps_car_12 178563 non-null float64
35 ps_car_13 178564 non-null float64
36 ps_car_14 165766 non-null float64
37 ps_car_15 178564 non-null float64
38 ps_calc_01 178564 non-null float64
39 ps_calc_02 178564 non-null float64
40 ps_calc_03 178564 non-null float64
41 ps_calc_04 178564 non-null int64
42 ps_calc_05 178564 non-null int64
43 ps_calc_06 178564 non-null int64
44 ps_calc_07 178564 non-null int64
45 ps_calc_08 178564 non-null int64
46 ps_calc_09 178564 non-null int64
47 ps_calc_10 178564 non-null int64
48 ps_calc_11 178564 non-null int64
49 ps_calc_12 178564 non-null int64
50 ps_calc_13 178564 non-null int64
51 ps_calc_14 178564 non-null int64
52 ps_calc_15_bin 178564 non-null int64
53 ps_calc_16_bin 178564 non-null int64
54 ps_calc_17_bin 178564 non-null int64
55 ps_calc_18_bin 178564 non-null int64
56 ps_calc_19_bin 178564 non-null int64
57 ps_calc_20_bin 178564 non-null int64
dtypes: float64(20), int64(38)
memory usage: 79.0 MB
###Markdown
Both our training data and test data contain only numeric values. The features are named so that a suffix of 'bin' indicates binary data and a suffix of 'cat' indicates categorical data. Types of features in the dataset
###Code
def get_info(train_data):
data = []
for col in train_data.columns:
# Defining the role
if col == 'target':
role = 'target'
elif col == 'id':
role = 'id'
else:
role = 'input'
# Defining the level
if 'bin' in col or col == 'target':
level = 'binary'
elif 'cat' in col or col == 'id':
level = 'nominal'
elif train[col].dtype == np.float64:
level = 'interval'
elif train[col].dtype == np.int64:
level = 'ordinal'
# Defining the data type
dtype = train[col].dtype
# Creating a Dict that contains all the metadata for the variable
col_dict = {
'varname': col,
'role' : role,
'level' : level,
'dtype' : dtype
}
data.append(col_dict)
meta = pd.DataFrame(data, columns=['varname', 'role', 'level', 'dtype'])
meta.set_index('varname', inplace=True)
return meta
info = get_info(train)
info_counts = info\
.groupby(['role','level'])\
.agg({'dtype': lambda x: x.count()})\
.reset_index()
display(info_counts)
fig,ax = plt.subplots()
fig.set_size_inches(10,5)
sns.barplot(data=info_counts[(info_counts.role != 'target') & (info_counts.role != 'id') ],
x="level",
y="dtype",
ax=ax)
ax.set(xlabel='Variable Type', ylabel='Count',title="Variables Count Across Datatype")
###Output
_____no_output_____
###Markdown
The above plot shows how the input features are distributed across variable types. Feature Analysis
###Code
col_ordinal = info[(info.level == 'ordinal') ].index
col_nominal = info[(info.level == 'nominal') & (info.role != 'id')].index
col_internval = info[(info.level == 'interval')].index
col_binary = info[(info.level == 'binary') & (info.role != 'target')].index
# Visualizing interval/continuous features
plt.figure(figsize=(18,16))
plt.title('Pearson correlation of continuous (interval) features', y=1.05, size=15)
sns.heatmap(train[col_internval].corr(),
linewidths=0.1,
vmax=1.0,
square=True,
linecolor='white',
cmap = "coolwarm",
annot=True)
# Printing number of categories in each column
for i in col_nominal:
print (i,len(train_data[i].unique()))
# Visualizing columns having number of categories <= 8
for i in col_nominal:
n = len(train_data[i].unique())
if(n<=8):
ax = sns.countplot(x=i, hue="target", data=train_data)
ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right")
plt.figure(figsize = (50,50))
plt.show()
# Visualizing ordinal features
for i in col_ordinal:
n = len(train_data[i].unique())
ax = sns.countplot(x=i, hue="target", data=train_data,palette=['#432371',"#FAAE7B"])
ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right")
plt.figure(figsize = (50,50))
plt.show()
# Visualizing binary features
for i in col_binary:
n = len(train_data[i].unique())
ax = sns.countplot(x=i, hue="target", data=train_data,palette="Set1")
ax.set_xticklabels(ax.get_xticklabels(), rotation=40, ha="right")
plt.figure(figsize = (50,50))
plt.show()
###Output
_____no_output_____
###Markdown
Missing Values Missing values in training set
###Code
total_train = train_data.isnull().sum().sort_values(ascending=False)
percent_train = (train_data.isnull().sum()/train_data.isnull().count() * 100).sort_values(ascending=False)
missing_values_train = pd.concat([total_train, percent_train], axis=1, keys=['Total', 'Percent'])
missing_values_train.head(13)
###Output
_____no_output_____
###Markdown
We observe that ps_car_03_cat and ps_car_05_cat both have a very large percentage of missing data (greater than 40%), so we remove these features. Missing values in Test data set
###Code
total_test = test_data.isnull().sum().sort_values(ascending=False)
percent_test = (test_data.isnull().sum()/test_data.isnull().count() * 100).sort_values(ascending=False)
missing_values_test = pd.concat([total_test, percent_test], axis=1, keys=['Total', 'Percent'])
missing_values_test.head(14)
###Output
_____no_output_____
###Markdown
Let's see what types of features have missing values.
###Code
missing_features=np.array(missing_values_train.index,dtype=str)
missing_features=missing_features[2:12] # Names of the columns with missing values
train_data[missing_features].head()
###Output
_____no_output_____
###Markdown
We observe that we have:- Categorical missing features ( ending with 'cat' )- Continuous missing features ( features having numerical continuous values - 'ps_reg_03','ps_car_14' )- Ordinal missing feature ( which is neither continuous nor categorical - 'ps_car_11' ) Let's see missing data in test set
###Code
missing_features_test=np.array(missing_values_test.index,dtype=str)
missing_features_test=missing_features_test[2:13] # Names of the columns with missing values
test_data[missing_features_test].head()
###Output
_____no_output_____
###Markdown
We observe that we have:- Categorical missing features ( ending with 'cat' )- Continuous missing features ( features having numerical continuous values - 'ps_reg_03','ps_car_14','ps_car_12' )- Ordinal missing feature ( which is neither continuous nor categorical - 'ps_car_11' ) Filling Missing Data
###Code
train_data=train_data.drop(['ps_car_03_cat','ps_car_05_cat'],axis=1)
test_data=test_data.drop(['ps_car_03_cat','ps_car_05_cat'],axis=1)
print('Train data shape = ',train_data.shape)
print('Test data shape', test_data.shape)
###Output
Train data shape = (416648, 57)
Test data shape (178564, 56)
###Markdown
Let's store the parameters on which we will train the model
###Code
training_parameters = list(test.columns)
training_parameters.remove('id')
training_parameters.remove('ps_car_03_cat')
training_parameters.remove('ps_car_05_cat')
for x in missing_features:
training_parameters.remove(x)
###Output
_____no_output_____
###Markdown
We will predict the missing values as follows:- If the feature is continuous, we will use Linear Regression.- If the feature is categorical, we will use Logistic Regression. The reason for prediction rather than imputation is that imputing artificial values results in poor accuracy.
###Code
from sklearn.linear_model import LogisticRegression,LinearRegression
for feature in missing_features:
train_new = train_data[training_parameters+[feature]]
idx = train_data.loc[pd.isna(train_data[feature]), :].index
train_new = train_new.dropna()
y_new = train_new[feature]
train_new = train_new.drop([feature],axis=1)
    if info.loc[feature, 'level'] == 'interval': # check the feature metadata built by get_info for the data type
model = LinearRegression(n_jobs=-1)
model.fit(train_new,y_new)
for i in idx:
train_data[feature].loc[i] = model.predict(train_data[training_parameters].loc[i].values.reshape(1, -1)) #Predict and fill
#the missing values
if feature in missing_features_test:
idx = test_data.loc[pd.isna(test_data[feature]), :].index
for i in idx:
test_data[feature].loc[i] = model.predict(test_data[training_parameters].loc[i].values.reshape(1, -1))
else:
        model = LogisticRegression(class_weight='balanced', penalty='l2', n_jobs=-1)
model.fit(train_new,y_new)
for i in idx:
train_data[feature].loc[i] = model.predict(train_data[training_parameters].loc[i].values.reshape(1, -1))
if feature in missing_features_test:
idx = test_data.loc[pd.isna(test_data[feature]), :].index
for i in idx:
test_data[feature].loc[i] = model.predict(test_data[training_parameters].loc[i].values.reshape(1, -1))
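# (aside, a sketch under the same assumptions as above): predicting one row at a
# time inside a Python loop is slow on a few hundred thousand rows; the same fill
# could be done with a single vectorized predict per feature, e.g. with a helper
# like this (hypothetical, not used below)
def fill_missing_vectorized(df, feature, model, predictor_cols):
    """Fill NaNs of `feature` in df in place, using one predict call for all missing rows."""
    missing_idx = df.index[df[feature].isna()]
    if len(missing_idx) > 0:
        df.loc[missing_idx, feature] = model.predict(df.loc[missing_idx, predictor_cols])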
train_data.to_csv("train_without_mv.csv")
test_data.to_csv("test_without_mv.csv")
train_data = pd.read_csv('train_without_mv.csv')
test_data = pd.read_csv('test_without_mv.csv')
from sklearn.impute import SimpleImputer
fill_NaN = SimpleImputer(missing_values=np.nan, strategy='mean')
test_data = pd.DataFrame(fill_NaN.fit_transform(test_data),columns = test_data.columns)
train_data.isnull().sum()
test_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Thus, we have successfully filled missing values! Let's see what the target feature looks like
###Code
sns.countplot(train_data.target);
plt.xlabel('Is Filed Claim?');
plt.ylabel('Total Count');
plt.show()
imbalance = train_data['target'].value_counts()
print("% of People who claimed the insurance (Denoted by 1) = ", (imbalance[1]/train.shape[0])*100)
print("% of People who did not claim the insurance (Denoted by 0) = ", (imbalance[0]/train.shape[0])*100)
###Output
% of People who claimed the insurance (Denoted by 1) = 3.653203663524126
% of People who did not claim the insurance (Denoted by 0) = 96.34679633647588
###Markdown
We see that there is a severe class imbalance. We will handle this while building the models. Models Handling Imbalanced Data We will first try to oversample the data using SMOTE
###Code
del test_data['id']
del train_data['id']
X = train_data.iloc[:, 1:].values
Y = train_data['target']
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=2020,n_jobs=-1)
x_new, y_new = sm.fit_sample(X,Y)
print ('Shape of oversampled data: {}'.format(x_new.shape))
print ('Shape of Y: {}'.format(y_new.shape))
sns.countplot(y_new)
plt.title('Balanced training data')
plt.show()
temp = pd.read_csv('test_without_mv.csv')
test_id = temp['id'].values
###Output
_____no_output_____
###Markdown
Upon trying various models with SMOTE, we concluded that SMOTE degrades our models' performance, so we dropped it. Logistic Regression
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X,Y, test_size = 0.25, random_state = 2020)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 2020, penalty='l2')
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data)
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_lr_without_smote.csv')
submission
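# (aside, a sketch): the competition metric is the normalized Gini coefficient,
# which for a binary target is simply 2*AUC - 1; a quick local estimate on the
# held-out split (an approximation, not the leaderboard value) looks like this
from sklearn.metrics import roc_auc_score
def normalized_gini(y_true, y_score):
    return 2 * roc_auc_score(y_true, y_score) - 1
print('validation Gini:', normalized_gini(y_test, classifier.predict_proba(x_test)[:, 1]))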
###Output
_____no_output_____
###Markdown
The above gave a Gini score of 0.21561 upon submission. SVM
###Code
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', probability = True, random_state = 86) # probability=True is required for predict_proba below
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data)
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_svm_without_smote.csv')
submission
###Output
_____no_output_____
###Markdown
Random Forest
###Code
from sklearn.model_selection import train_test_split
x_train, x_cv, y_train, y_cv = train_test_split(X,Y, test_size = 0.25, random_state = 2020)
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 1500, random_state = 2020, n_jobs=-1,
max_depth=6, min_samples_split=120, min_samples_leaf=50)
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data)
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_12_rf_without_smote.csv')
submission
###Output
_____no_output_____
###Markdown
The above gave a Gini score of 0.26178 upon submission. Decision Tree
###Code
x_train, x_cv, y_train, y_cv = train_test_split(X,Y, test_size = 0.25, random_state = 2020)
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 2020, max_depth=6,min_samples_split=70,min_samples_leaf=30)
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data)
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_12_dc_without_smote.csv')
submission
###Output
_____no_output_____
###Markdown
The above gave a Gini Score of 0.21730 upon submission. XGBoost
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X,Y, test_size = 0.2, random_state = 2020)
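# (aside, an assumption -- not part of the original tuning): instead of SMOTE,
# XGBoost can reweight the positive class directly via its scale_pos_weight
# parameter, conventionally set to (# negatives / # positives)
neg, pos = np.bincount(y_train.astype(int))
print('suggested scale_pos_weight ~', neg / pos)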
from xgboost import XGBClassifier
classifier = XGBClassifier(learning_rate=0.01, n_estimators=1200, max_depth=6, gamma=0.7, subsample=0.8, colsample_bytree=0.3,
objective= 'binary:logistic', reg_alpha = 1,reg_lambda = 3,n_jobs=-1, seed=42)
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data.values) #have to pass as nd-array, not as dataframe here to xgb
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_10_xgb_without_smote_best.csv')
submission
###Output
_____no_output_____
###Markdown
The above gave a Gini score of 0.29023 on submission; this is our final model.
###Code
from xgboost import XGBClassifier
x_train, x_test, y_train, y_test = train_test_split(X,Y, test_size = 0.2, random_state = 2020)
classifier = XGBClassifier(learning_rate=0.01, n_estimators=1400, max_depth=8, gamma=0.5, subsample=0.6, colsample_bytree=0.5,
objective= 'binary:logistic', reg_alpha = 2,reg_lambda = 2,n_jobs=-1, seed=86)
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data.values) #have to pass as nd-array, not as dataframe here to xgb
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_7_xgb_without_smote.csv')
submission
###Output
_____no_output_____
###Markdown
The above gave a Gini score of 0.28615 on submission.
###Code
from xgboost import XGBClassifier
x_train, x_test, y_train, y_test = train_test_split(X,Y, test_size = 0.2, random_state = 2020)
classifier = XGBClassifier(learning_rate=0.01, n_estimators=1800, max_depth=10, gamma=0.7, subsample=0.7, colsample_bytree=0.7,
objective= 'binary:logistic', reg_alpha = 3,reg_lambda = 1,n_jobs=-1, seed=86)
classifier.fit(x_train, y_train)
y_pred = classifier.predict_proba(test_data.values) #have to pass as nd-array, not as dataframe here to xgb
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = y_pred[:, 1]
submission.to_csv('submission_8_xgb_without_smote.csv')
submission
###Output
_____no_output_____
###Markdown
The above gave a Gini score of 0.26510 on submission.
###Code
from xgboost import plot_importance
# plt.figure(figsize=(1,1))
ax = plot_importance(classifier)
# plt.rcParams["figure.figsize"] = (20,20)
plt.show()
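# (aside, an assumption about column ordering): because the model was fit on a bare
# numpy array, plot_importance labels the features f0, f1, ...  If the columns of X
# line up with train_data's columns minus the target, the importances can be mapped
# back to readable names like this
feature_cols = [c for c in train_data.columns if c != 'target']
imp = classifier.feature_importances_
importances = pd.Series(imp, index=feature_cols if len(feature_cols) == len(imp) else None)
importances.sort_values(ascending=False).head(15)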
###Output
_____no_output_____
###Markdown
Simple Ensembling
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
Max Voting
###Code
model1 = LogisticRegression(random_state=1)
model2 = DecisionTreeClassifier(criterion = 'entropy', random_state = 2000, max_depth=6,min_samples_split=70,min_samples_leaf=30)
model3 = GaussianNB()
model4 = RandomForestClassifier(n_estimators = 1500, random_state = 2020, n_jobs=-1,
max_depth=6, min_samples_split=120, min_samples_leaf=50)
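# (sketch, an assumption -- the VotingClassifier imported above is never actually
# fit in this notebook): max/soft voting over the four models could be wired up as
voting_clf = VotingClassifier(
    estimators=[('lr', model1), ('dt', model2), ('nb', model3), ('rf', model4)],
    voting='soft', # 'soft' averages predict_proba; 'hard' takes the majority class
    n_jobs=-1)
# voting_clf.fit(x_train, y_train)
# voting_pred = voting_clf.predict_proba(test_data)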
###Output
_____no_output_____
###Markdown
Weighted Averaging Without SMOTE
###Code
model1.fit(x_train, y_train)
model1.score(x_test, y_test)
model2.fit(x_train, y_train)
model2.score(x_test, y_test)
model3.fit(x_train, y_train)
model3.score(x_test, y_test)
model4.fit(x_train, y_train)
model4.score(x_test, y_test)
pred1 = model1.predict_proba(test_data)
pred2 = model2.predict_proba(test_data)
pred3 = model3.predict_proba(test_data)
pred4 = model4.predict_proba(test_data)
# apply the weights multiplicatively (0.28 + 0.28 + 0.16 + 0.28 = 1) for a true weighted average
weighted_prediction = 0.28*pred1 + 0.28*pred2 + 0.16*pred3 + 0.28*pred4
weighted_prediction
temp = pd.read_csv('test_without_mv.csv')
test_id = temp['id'].values
submission = pd.DataFrame(columns=['id', 'target'])
submission['id'] = test_id
submission['target'] = weighted_prediction[:, 1]
submission.to_csv('submission_weighted_avg.csv', index=False, header =1)
submission['target'].mean()
###Output
_____no_output_____
###Markdown
With Smote
###Code
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=2020,n_jobs=-1)
x_new, y_new = sm.fit_sample(X,Y)
print ('Shape of oversampled data: {}'.format(x_new.shape))
print ('Shape of Y: {}'.format(y_new.shape))
model1.fit(x_new, y_new)
model1.score(x_test, y_test)
model2.fit(x_new, y_new)
model2.score(x_test, y_test)
model3.fit(x_new, y_new)
model3.score(x_test, y_test)
model4.fit(x_new, y_new)
model4.score(x_test, y_test)
pred1 = model1.predict_proba(test_data)
pred2 = model2.predict_proba(test_data)
pred3 = model3.predict_proba(test_data)
pred4 = model4.predict_proba(test_data)
weighted_prediction = (pred1)*0.16+(pred2)*0.35+(pred3)*0.16+(pred4)*0.33
labelprediction = np.argmax(weighted_prediction, axis = 1)
submission_w_avg_smote = pd.DataFrame(columns=['id', 'target'])
submission_w_avg_smote['id'] = test_id
submission_w_avg_smote['target'] = weighted_prediction[:, 1]
submission_w_avg_smote.to_csv('submission_weighted_avg_smote.csv', index=False, header =1)
submission_w_avg_smote['target'].mean()
###Output
_____no_output_____
###Markdown
Power Averaging Without Smote
###Code
model1.fit(x_train, y_train)
model1.score(x_test, y_test)
model2.fit(x_train, y_train)
model2.score(x_test, y_test)
model3.fit(x_train, y_train)
model3.score(x_test, y_test)
model4.fit(x_train, y_train)
model4.score(x_test, y_test)
pred1 = model1.predict_proba(test_data)
pred2 = model2.predict_proba(test_data)
pred3 = model3.predict_proba(test_data)
pred4 = model4.predict_proba(test_data)
powered_prediction = ((pred1**2)+(pred2**2)+(pred3**2)+(pred4**2))/4
submission_p = pd.DataFrame(columns=['id', 'target'])
submission_p['id'] = test_id
submission_p['target'] = powered_prediction[:, 1]
submission_p.to_csv('submission_powered_avg.csv', index=False, header =1)
submission_p['target'].mean()
###Output
_____no_output_____ |