Gradient Boosted Trees: Model understanding

View on TensorFlow.org | Run in Google Colab | View source on GitHub

For an end-to-end walkthrough of training a Gradient Boosting model, check out the [boosted trees tutorial](./boosted_trees). In this tutorial you will:

* Learn how to interpret a Boosted Trees model both *locally* and *globally*
* Gain intuition for how a Boosted Trees model fits a dataset

How to interpret Boosted Trees models both locally and globally

Local interpretability refers to an understanding of a model's predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole. Such techniques can help machine learning (ML) practitioners detect bias and bugs during the model development stage.

For local interpretability, you will learn how to create and visualize per-instance contributions. To distinguish these from feature importances, we refer to these values as directional feature contributions (DFCs).

For global interpretability, you will retrieve and visualize gain-based feature importances and [permutation feature importances](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf), and also show aggregated DFCs.

Load the titanic dataset

You will be using the titanic dataset, where the (rather morbid) goal is to predict passenger survival, given characteristics such as gender, age, class, etc.
```python
from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
import pandas as pd
from IPython.display import clear_output

# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')

!pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
tf.random.set_seed(123)
```
For a description of the features, please review the prior tutorial.

Create feature columns, input_fn, and then train the estimator

Preprocess the data

Create the feature columns, using the original numeric columns as-is and one-hot encoding the categorical variables.
```python
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
                       'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']

def one_hot_cat_column(feature_name, vocab):
    return fc.indicator_column(
        fc.categorical_column_with_vocabulary_list(feature_name, vocab))

feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
    # Need to one-hot encode categorical features.
    vocabulary = dftrain[feature_name].unique()
    feature_columns.append(one_hot_cat_column(feature_name, vocabulary))

for feature_name in NUMERIC_COLUMNS:
    feature_columns.append(fc.numeric_column(feature_name, dtype=tf.float32))
```
Build the input pipeline

Create the input functions using the `from_tensor_slices` method in the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API to read in data directly from Pandas.
```python
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)

def make_input_fn(X, y, n_epochs=None, shuffle=True):
    def input_fn():
        dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
        if shuffle:
            dataset = dataset.shuffle(NUM_EXAMPLES)
        # For training, cycle through the dataset as many times as needed (n_epochs=None).
        dataset = (dataset
                   .repeat(n_epochs)
                   .batch(NUM_EXAMPLES))
        return dataset
    return input_fn

# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
```
Train the model
```python
params = {
    'n_trees': 50,
    'max_depth': 3,
    'n_batches_per_layer': 1,
    # You must enable center_bias = True to get DFCs. This will force the model to
    # make an initial prediction before using any features (e.g. use the mean of
    # the training labels for regression or log odds for classification when
    # using cross entropy loss).
    'center_bias': True
}

est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)

# Train model.
est.train(train_input_fn, max_steps=100)

# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
```
For performance reasons, when your data fits in memory, we recommend using in-memory training (the `train_in_memory=True` option shown below). However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the `tf.estimator.BoostedTreesClassifier` API shown above. When using the in-memory method, you should not batch your input data, as the method operates on the entire dataset.
```python
in_memory_params = dict(params)
in_memory_params['n_batches_per_layer'] = 1

# In-memory input_fn does not use batching.
def make_inmemory_train_input_fn(X, y):
    def input_fn():
        return dict(X), y
    return input_fn

train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)

# Train the model.
est = tf.estimator.BoostedTreesClassifier(
    feature_columns,
    train_in_memory=True,
    **in_memory_params)
est.train(train_input_fn)
print(est.evaluate(eval_input_fn))
```
Model interpretation and plotting
```python
import matplotlib.pyplot as plt
import seaborn as sns

sns_colors = sns.color_palette('colorblind')
```
Local interpretability

Next you will output the directional feature contributions (DFCs) to explain individual predictions, using the approach outlined in [Palczewska et al](https://arxiv.org/pdf/1312.1121.pdf) and by Saabas in [Interpreting Random Forests](http://blog.datadive.net/interpreting-random-forests/) (this method is also available in scikit-learn for Random Forests in the [`treeinterpreter`](https://github.com/andosa/treeinterpreter) package). The DFCs are generated with:

`pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))`

(Note: the method is named experimental as we may modify the API before dropping the experimental prefix.)
```python
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))

# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
```
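For comparison, a minimal sketch of the equivalent decomposition for a scikit-learn Random Forest with the `treeinterpreter` package mentioned above might look like the following (the dataset choice here is an arbitrary assumption for illustration, and the package must be installed separately):

```python
# Hedged sketch: decompose scikit-learn Random Forest predictions into
# bias + per-feature contributions (assumes `pip install treeinterpreter`).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# prediction == bias + contributions.sum(axis=1), analogous to DFCs + bias.
prediction, bias, contributions = ti.predict(rf, X[:5])
```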
A nice property of DFCs is that the sum of the contributions + the bias is equal to the prediction for a given example.
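In symbols, writing $b$ for the bias (the model's initial prediction) and $\text{DFC}_{ij}$ for the contribution of feature $j$ to example $i$, this property reads:

$$\hat{p}_i = b + \sum_{j} \text{DFC}_{ij}$$

which the next cell verifies numerically against the predicted probabilities.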
```python
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values, probs.values)
```
Plot DFCs for an individual passenger. Let's make the plot nice by color coding based on the contributions' directionality and adding the feature values to the figure.
```python
# Boilerplate code for plotting :)
def _get_color(value):
    """To make positive DFCs plot green, negative DFCs plot red."""
    green, red = sns.color_palette()[2:4]
    if value >= 0:
        return green
    return red

def _add_feature_values(feature_values, ax):
    """Display feature's values on left of plot."""
    x_coord = ax.get_xlim()[0]
    OFFSET = 0.15
    for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
        t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
        t.set_bbox(dict(facecolor='white', alpha=0.5))
    from matplotlib.font_manager import FontProperties
    font = FontProperties()
    font.set_weight('bold')
    t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
                 fontproperties=font, size=12)

def plot_example(example):
    TOP_N = 8  # View top 8 features.
    sorted_ix = example.abs().sort_values()[-TOP_N:].index  # Sort by magnitude.
    example = example[sorted_ix]
    colors = example.map(_get_color).tolist()
    ax = example.to_frame().plot(kind='barh',
                                 color=[colors],
                                 legend=None,
                                 alpha=0.75,
                                 figsize=(10, 6))
    ax.grid(False, axis='y')
    ax.set_yticklabels(ax.get_yticklabels(), size=14)
    # Add feature values.
    _add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
    return ax

# Plot results.
ID = 182
example = df_dfc.iloc[ID]  # Choose ith example from evaluation set.
TOP_N = 8  # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(
    ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
```
The larger magnitude contributions have a larger impact on the model's prediction. Negative contributions indicate the feature value for this given example reduced the model's prediction, while positive values contribute an increase in the prediction. You can also plot the example's DFCs compared with the entire distribution using a violin plot.
```python
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
    # Initialize plot.
    fig, ax = plt.subplots(1, 1, figsize=(10, 6))

    # Create example dataframe.
    TOP_N = 8  # View top 8 features.
    example = df_dfc.iloc[ID]
    ix = example.abs().sort_values()[-TOP_N:].index
    example = example[ix]
    example_df = example.to_frame(name='dfc')

    # Add contributions of entire distribution.
    parts = ax.violinplot([df_dfc[w] for w in ix],
                          vert=False,
                          showextrema=False,
                          widths=0.7,
                          positions=np.arange(len(ix)))
    face_color = sns_colors[0]
    alpha = 0.15
    for pc in parts['bodies']:
        pc.set_facecolor(face_color)
        pc.set_alpha(alpha)

    # Add feature values.
    _add_feature_values(dfeval.iloc[ID][sorted_ix], ax)

    # Add local contributions.
    ax.scatter(example,
               np.arange(example.shape[0]),
               color=sns.color_palette()[2],
               s=100,
               marker="s",
               label='contributions for example')

    # Legend.
    # Proxy plot, to show violinplot dist on legend.
    ax.plot([0, 0], [1, 1], label='eval set contributions\ndistributions',
            color=face_color, alpha=alpha, linewidth=10)
    legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
                       frameon=True)
    legend.get_frame().set_facecolor('white')

    # Format plot.
    ax.set_yticks(np.arange(example.shape[0]))
    ax.set_yticklabels(example.index)
    ax.grid(False, axis='y')
    ax.set_xlabel('Contribution to predicted probability', size=14)
```
Plot this example.
```python
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(
    ID, probs[ID], labels[ID]))
plt.show()
```
Finally, third-party tools, such as [LIME](https://github.com/marcotcr/lime) and [shap](https://github.com/slundberg/shap), can also help understand individual predictions for a model.

Global feature importances

Additionally, you might want to understand the model as a whole, rather than studying individual predictions. Below, you will compute and use:

1. Gain-based feature importances using `est.experimental_feature_importances`
2. Permutation importances
3. Aggregate DFCs using `est.experimental_predict_with_explanations`

Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set, shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.

In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories, and when features are correlated ([source](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-9-307)). Check out [this article](http://explained.ai/rf-importance/index.html) for an in-depth overview and great discussion on different feature importance types.

1. Gain-based feature importances

Gain-based feature importances are built into the TensorFlow Boosted Trees estimators using `est.experimental_feature_importances`.
```python
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)

# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
      .plot(kind='barh',
            color=sns_colors[0],
            title='Gain feature importances',
            figsize=(10, 6)))
ax.grid(False, axis='y')
```
2. Average absolute DFCs

You can also average the absolute values of DFCs to understand impact at a global level.
```python
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index  # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
                              color=sns_colors[1],
                              title='Mean |directional feature contributions|',
                              figsize=(10, 6))
ax.grid(False, axis='y')
```
You can also see how DFCs vary as a feature value varies.
```python
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(feature.index.values, feature.values, lowess=True)
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE)
ax.set_xlim(0, 100)
plt.show()
```
3. Permutation feature importance
```python
def permutation_importances(est, X_eval, y_eval, metric, features):
    """Column by column, shuffle values and observe effect on eval set.

    source: http://explained.ai/rf-importance/index.html
    A similar approach can be done during training. See "Drop-column importance"
    in the above article."""
    baseline = metric(est, X_eval, y_eval)
    imp = []
    for col in features:
        save = X_eval[col].copy()
        X_eval[col] = np.random.permutation(X_eval[col])
        m = metric(est, X_eval, y_eval)
        X_eval[col] = save
        imp.append(baseline - m)
    return np.array(imp)

def accuracy_metric(est, X, y):
    """TensorFlow estimator accuracy."""
    eval_input_fn = make_input_fn(X, y=y, shuffle=False, n_epochs=1)
    return est.evaluate(input_fn=eval_input_fn)['accuracy']

features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric, features)
df_imp = pd.Series(importances, index=features)

sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance')
plt.show()
```
Visualizing model fitting

Let's first simulate/create training data using the following formula:

$$z = x \cdot e^{-x^2 - y^2}$$

where $z$ is the dependent variable you are trying to predict and $x$ and $y$ are the features.
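As a quick sanity check of the surface's shape (a direct consequence of the formula itself: it is antisymmetric in $x$, peaking at $x = 1/\sqrt{2},\, y = 0$), you can evaluate it at a few points:

```python
import numpy as np

def z_fn(x, y):
    return x * np.exp(-x**2 - y**2)

print(z_fn(1 / np.sqrt(2), 0.0))   # ~0.4289, the maximum of the surface
print(z_fn(-1 / np.sqrt(2), 0.0))  # ~-0.4289, the minimum (antisymmetric in x)
print(z_fn(0.0, 0.0))              # 0.0 along the x = 0 line
```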
```python
from numpy.random import uniform, seed
from matplotlib.mlab import griddata  # Note: removed in matplotlib >= 3.0.

# Create fake data.
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x * np.exp(-x**2 - y**2)

# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})

xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi, yi = np.meshgrid(xi, yi)

df_predict = pd.DataFrame({
    'x': xi.flatten(),
    'y': yi.flatten(),
})
predict_shape = xi.shape

def plot_contour(x, y, z, **kwargs):
    # Grid the data.
    plt.figure(figsize=(10, 8))
    # Contour the gridded data, plotting dots at the nonuniform data points.
    CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
    # Note: relies on the global `zi` defined in the next cell.
    CS = plt.contourf(x, y, z, 15,
                      vmax=abs(zi).max(), vmin=-abs(zi).max(),
                      cmap='RdBu_r')
    plt.colorbar()  # Draw colorbar.
    # Plot data points.
    plt.xlim(-2, 2)
    plt.ylim(-2, 2)
```
You can visualize the function. Redder colors correspond to larger function values.
```python
zi = griddata(x, y, z, xi, yi, interp='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()

fc = [tf.feature_column.numeric_column('x'),
      tf.feature_column.numeric_column('y')]

def predict(est):
    """Predictions from a given estimator."""
    predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
    preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
    return preds.reshape(predict_shape)
```
First let's try to fit a linear model to the data.
```python
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500)
plot_contour(xi, yi, predict(est))
```
It's not a very good fit. Next, let's fit a GBDT model to it and try to understand how the model fits the function.
```python
n_trees = 22  #@param {type: "slider", min: 1, max: 80, step: 1}

est = tf.estimator.BoostedTreesRegressor(fc, n_batches_per_layer=1, n_trees=n_trees)
est.train(train_input_fn, max_steps=500)
clear_output()
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w',
         backgroundcolor='black', size=20)
plt.show()
```
Facial Keypoint Detection

This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces.

The first step in any challenge like this will be to load and visualize the data you'll be working with. Let's take a look at some examples of images and corresponding facial keypoints.

Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.

---

Load and Visualize Data

The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people in YouTube videos. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.

Training and Testing Data

This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.

* 3462 of these images are training images, for you to use as you create a model to predict keypoints.
* 2308 are test images, which will be used to test the accuracy of your model.

The information about the images and keypoints in this dataset is summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).

---

First, before we do anything, we have to load in our image data. This data is stored in a zip file and, in the below cell, we access it by its URL and unzip the data in a `/data/` directory that is separate from the workspace home directory.
```python
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data

# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
```
Then, let's load in our training data and display some stats about that data to make sure it's been loaded in correctly!
```python
key_pts_frame = pd.read_csv('/data/training_frames_keypoints.csv')

n = 0
image_name = key_pts_frame.iloc[n, 0]
# Note: .as_matrix() was removed in pandas 1.0; use .to_numpy() on newer versions.
key_pts = key_pts_frame.iloc[n, 1:].as_matrix()
key_pts = key_pts.astype('float').reshape(-1, 2)

print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))

# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
```
Number of images: 3462
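As a reference for the "ranges of points" mentioned earlier, here is a hedged sketch of the standard 68-landmark grouping (the dlib/iBUG convention; these exact index ranges are an assumption about this dataset's annotation scheme, not something stated in the CSV):

```python
# Hedged sketch: conventional grouping of the 68 facial landmarks.
KEYPOINT_GROUPS = {
    'jaw': range(0, 17),
    'right_eyebrow': range(17, 22),
    'left_eyebrow': range(22, 27),
    'nose': range(27, 36),
    'right_eye': range(36, 42),
    'left_eye': range(42, 48),
    'mouth': range(48, 68),
}

for part, idx in KEYPOINT_GROUPS.items():
    print('{:14s} points {}-{}'.format(part, idx.start, idx.stop - 1))
```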
Look at some images

Below is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
```python
def show_keypoints(image, key_pts):
    """Show image with keypoints"""
    plt.imshow(image)
    plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')

# Display a few different types of images by changing the index n

# select an image by index in our data frame
n = 15
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].as_matrix()
key_pts = key_pts.astype('float').reshape(-1, 2)

plt.figure(figsize=(5, 5))
show_keypoints(mpimg.imread(os.path.join('/data/training/', image_name)), key_pts)
plt.show()
```
Dataset class and Transformations

To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).

Dataset class

``torch.utils.data.Dataset`` is an abstract class representing a dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.

Your custom dataset should inherit ``Dataset`` and override the following methods:

- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
- ``__getitem__`` to support the indexing such that ``dataset[i]`` can be used to get the i-th sample of image/keypoint data.

Let's create a dataset class for our face keypoints dataset. We will read the CSV file in ``__init__`` but leave the reading of images to ``__getitem__``. This is memory efficient because all the images are not stored in memory at once but read as required.

A sample of our dataset will be a dictionary ``{'image': image, 'keypoints': key_pts}``. Our dataset will take an optional argument ``transform`` so that any required processing can be applied on the sample. We will see the usefulness of ``transform`` in the next section.
```python
from torch.utils.data import Dataset, DataLoader

class FacialKeypointsDataset(Dataset):
    """Face Landmarks dataset."""

    def __init__(self, csv_file, root_dir, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.key_pts_frame = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.key_pts_frame)

    def __getitem__(self, idx):
        image_name = os.path.join(self.root_dir,
                                  self.key_pts_frame.iloc[idx, 0])
        image = mpimg.imread(image_name)

        # if image has an alpha color channel, get rid of it
        if (image.shape[2] == 4):
            image = image[:, :, 0:3]

        key_pts = self.key_pts_frame.iloc[idx, 1:].as_matrix()
        key_pts = key_pts.astype('float').reshape(-1, 2)
        sample = {'image': image, 'keypoints': key_pts}

        if self.transform:
            sample = self.transform(sample)

        return sample
```
Now that we've defined this class, let's instantiate the dataset and display some images.
```python
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
                                      root_dir='/data/training/')

# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))

# Display a few of the images from the dataset
num_to_display = 3

for i in range(num_to_display):
    # define the size of images
    fig = plt.figure(figsize=(20, 10))

    # randomly select a sample
    rand_i = np.random.randint(0, len(face_dataset))
    sample = face_dataset[rand_i]

    # print the shape of the image and keypoints
    print(i, sample['image'].shape, sample['keypoints'].shape)

    ax = plt.subplot(1, num_to_display, i + 1)
    ax.set_title('Sample #{}'.format(i))

    # Using the same display function, defined earlier
    show_keypoints(sample['image'], sample['keypoints'])
```
0 (213, 201, 3) (68, 2)
1 (305, 239, 3) (68, 2)
2 (147, 143, 3) (68, 2)
Transforms

Now, the images above are not of the same size, and neural networks often expect images that are standardized; a fixed size, with a normalized range for color ranges and coordinates, and (for PyTorch) converted from numpy lists and arrays to Tensors.

Therefore, we will need to write some pre-processing code. Let's create four transforms:

- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
- ``Rescale``: to rescale an image to a desired size.
- ``RandomCrop``: to crop an image randomly.
- ``ToTensor``: to convert numpy images to torch images.

We will write them as callable classes instead of simple functions so that parameters of the transform need not be passed every time it's called. For this, we just need to implement the ``__call__`` method and (if we require parameters to be passed in) the ``__init__`` method. We can then use a transform like this:

    tx = Transform(params)
    transformed_sample = tx(sample)

Observe below how these transforms are generally applied to both the image and its keypoints.
```python
import torch
from torchvision import transforms, utils

# transforms

class Normalize(object):
    """Convert a color image to grayscale and normalize the color range to [0,1]."""

    def __call__(self, sample):
        image, key_pts = sample['image'], sample['keypoints']

        image_copy = np.copy(image)
        key_pts_copy = np.copy(key_pts)

        # convert image to grayscale
        image_copy = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

        # scale color range from [0, 255] to [0, 1]
        image_copy = image_copy / 255.0

        # scale keypoints to be centered around 0 with a range of [-1, 1]
        # mean = 100, std = 50, so, pts should be (pts - 100)/50
        key_pts_copy = (key_pts_copy - 100) / 50.0

        return {'image': image_copy, 'keypoints': key_pts_copy}

class Rescale(object):
    """Rescale the image in a sample to a given size.

    Args:
        output_size (tuple or int): Desired output size. If tuple, output is
            matched to output_size. If int, smaller of image edges is matched
            to output_size keeping aspect ratio the same.
    """

    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        self.output_size = output_size

    def __call__(self, sample):
        image, key_pts = sample['image'], sample['keypoints']

        h, w = image.shape[:2]
        if isinstance(self.output_size, int):
            if h > w:
                new_h, new_w = self.output_size * h / w, self.output_size
            else:
                new_h, new_w = self.output_size, self.output_size * w / h
        else:
            new_h, new_w = self.output_size

        new_h, new_w = int(new_h), int(new_w)

        img = cv2.resize(image, (new_w, new_h))

        # scale the pts, too
        key_pts = key_pts * [new_w / w, new_h / h]

        return {'image': img, 'keypoints': key_pts}

class RandomCrop(object):
    """Crop randomly the image in a sample.

    Args:
        output_size (tuple or int): Desired output size. If int, square crop
            is made.
    """

    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        if isinstance(output_size, int):
            self.output_size = (output_size, output_size)
        else:
            assert len(output_size) == 2
            self.output_size = output_size

    def __call__(self, sample):
        image, key_pts = sample['image'], sample['keypoints']

        h, w = image.shape[:2]
        new_h, new_w = self.output_size

        top = np.random.randint(0, h - new_h)
        left = np.random.randint(0, w - new_w)

        image = image[top: top + new_h,
                      left: left + new_w]

        key_pts = key_pts - [left, top]

        return {'image': image, 'keypoints': key_pts}

class ToTensor(object):
    """Convert ndarrays in sample to Tensors."""

    def __call__(self, sample):
        image, key_pts = sample['image'], sample['keypoints']

        # if image has no grayscale color channel, add one
        if (len(image.shape) == 2):
            # add that third color dim
            image = image.reshape(image.shape[0], image.shape[1], 1)

        # swap color axis because
        # numpy image: H x W x C
        # torch image: C X H X W
        image = image.transpose((2, 0, 1))

        return {'image': torch.from_numpy(image),
                'keypoints': torch.from_numpy(key_pts)}
```
Test out the transforms

Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image using a value larger than the original image (and the original images vary in size!), but, if you first rescale the original image, you can then crop it to any size smaller than the rescaled size. (A short sketch of this failure mode follows the next cell.)
```python
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
                               RandomCrop(224)])

# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]

fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
    transformed_sample = tx(sample)

    ax = plt.subplot(1, 3, i + 1)
    plt.tight_layout()
    ax.set_title(type(tx).__name__)
    show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])

plt.show()
```
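To make the ordering constraint concrete, here is a hedged sketch of the failure mode, using the transform classes defined above on a dummy image smaller than the crop size (the 180x160 shape is an arbitrary assumption for illustration):

```python
# A 180x160 dummy image is smaller than a 224-pixel crop, so cropping first fails.
small = {'image': np.zeros((180, 160, 3)), 'keypoints': np.zeros((68, 2))}

try:
    RandomCrop(224)(small)  # np.random.randint(0, h - new_h) gets a negative range
except ValueError as err:
    print('Crop-first fails:', err)

# Rescaling to 250 first makes the smaller edge large enough to crop from.
ok = RandomCrop(224)(Rescale(250)(small))
print('Rescale-then-crop shape:', ok['image'].shape)  # (224, 224, 3)
```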
Create the transformed dataset

Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
```python
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
                                     RandomCrop(224),
                                     Normalize(),
                                     ToTensor()])

# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='/data/training_frames_keypoints.csv',
                                             root_dir='/data/training/',
                                             transform=data_transform)

# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))

# make sure the sample tensors are the expected size
for i in range(5):
    sample = transformed_dataset[i]
    print(i, sample['image'].size(), sample['keypoints'].size())
```
Number of images:  3462
0 torch.Size([1, 224, 224]) torch.Size([68, 2])
1 torch.Size([1, 224, 224]) torch.Size([68, 2])
2 torch.Size([1, 224, 224]) torch.Size([68, 2])
3 torch.Size([1, 224, 224]) torch.Size([68, 2])
4 torch.Size([1, 224, 224]) torch.Size([68, 2])
STEP 1: IMPORT LIBRARIES AND DATASET
```python
import warnings
warnings.filterwarnings("ignore")

# import libraries
import pickle
import seaborn as sns
import pandas as pd                # Import Pandas for data manipulation using dataframes
import numpy as np                 # Import Numpy for data statistical analysis
import matplotlib.pyplot as plt   # Import matplotlib for data visualisation
import random

# Note: files.download sends files from the Colab VM to your machine; these
# lines assume test.p, valid.p, and train.p are already present in /content.
from google.colab import files
files.download('/content/test.p')
files.download('/content/valid.p')
files.download('/content/train.p')

# The pickle module implements binary protocols for serializing and
# de-serializing a Python object structure.
with open("/content/train.p", mode='rb') as training_data:
    train = pickle.load(training_data)
with open("/content/valid.p", mode='rb') as validation_data:
    valid = pickle.load(validation_data)
with open("/content/test.p", mode='rb') as testing_data:
    test = pickle.load(testing_data)

X_train, y_train = train['features'], train['labels']
X_validation, y_validation = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

X_train.shape
y_train.shape
```
STEP 2: IMAGE EXPLORATION
```python
i = 1001
plt.imshow(X_train[i])  # Show images are not shuffled
y_train[i]
```
STEP 3: DATA PREPARATION
```python
# Shuffle the dataset
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)

# Convert to grayscale by averaging the three color channels
X_train_gray = np.sum(X_train / 3, axis=3, keepdims=True)
X_test_gray = np.sum(X_test / 3, axis=3, keepdims=True)
X_validation_gray = np.sum(X_validation / 3, axis=3, keepdims=True)

# Normalize the pixel range to roughly [-1, 1]
X_train_gray_norm = (X_train_gray - 128) / 128
X_test_gray_norm = (X_test_gray - 128) / 128
X_validation_gray_norm = (X_validation_gray - 128) / 128

X_train_gray.shape

i = 610
plt.imshow(X_train_gray[i].squeeze(), cmap='gray')
plt.figure()
plt.imshow(X_train[i])
```
STEP 4: MODEL TRAINING

The model consists of the following layers:

STEP 1: THE FIRST CONVOLUTIONAL LAYER 1
* Input = 32x32x1
* Output = 28x28x6
* Output = (Input - filter + 1)/Stride* => (32 - 5 + 1)/1 = 28
* Used a 5x5 filter with input depth of 1 and output depth of 6
* Apply a RELU activation function to the output
* Pooling, with Input = 28x28x6 and Output = 14x14x6

*Stride is the amount by which the kernel is shifted when the kernel is passed over the image.

STEP 2: THE SECOND CONVOLUTIONAL LAYER 2
* Input = 14x14x6
* Output = 10x10x16
* Layer 2: Convolutional layer with Output = 10x10x16
* Output = (Input - filter + 1)/Stride => 10 = (14 - 5 + 1)/1
* Apply a RELU activation function to the output
* Pooling, with Input = 10x10x16 and Output = 5x5x16

STEP 3: FLATTENING THE NETWORK
* Flatten the network with Input = 5x5x16 and Output = 400

STEP 4: FULLY CONNECTED LAYER
* Layer 3: Fully connected layer with Input = 400 and Output = 120
* Apply a RELU activation function to the output

STEP 5: ANOTHER FULLY CONNECTED LAYER
* Layer 4: Fully connected layer with Input = 120 and Output = 84
* Apply a RELU activation function to the output

STEP 6: FULLY CONNECTED LAYER
* Layer 5: Fully connected layer with Input = 84 and Output = 43

A quick check of this size arithmetic follows.
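As a sanity check of the arithmetic above, a small sketch (the helper name `conv_output_size` is ours, not from the notebook) that reproduces each layer's spatial size:

```python
# Hedged sketch: verify the LeNet size arithmetic from the steps above.
def conv_output_size(n_in, kernel, stride=1):
    # Valid convolution: (input - filter + 1) / stride
    return (n_in - kernel + 1) // stride

c1 = conv_output_size(32, 5)   # 28  (conv1: 32x32 -> 28x28)
p1 = c1 // 2                   # 14  (2x2 average pooling)
c2 = conv_output_size(p1, 5)   # 10  (conv2: 14x14 -> 10x10)
p2 = c2 // 2                   # 5   (2x2 average pooling)
flat = p2 * p2 * 16            # 400 (flattened 5x5x16)
print(c1, p1, c2, p2, flat)    # 28 14 10 5 400
```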
```python
# Import train_test_split from scikit library
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Dense, Flatten, Dropout
from keras.optimizers import Adam
from keras.callbacks import TensorBoard
from sklearn.model_selection import train_test_split

image_shape = X_train_gray[i].shape

cnn_model = Sequential()
cnn_model.add(Conv2D(filters=6, kernel_size=(5, 5), activation='relu', input_shape=(32, 32, 1)))
cnn_model.add(AveragePooling2D())
cnn_model.add(Conv2D(filters=16, kernel_size=(5, 5), activation='relu'))
cnn_model.add(AveragePooling2D())
cnn_model.add(Flatten())
cnn_model.add(Dense(units=120, activation='relu'))
cnn_model.add(Dense(units=84, activation='relu'))
cnn_model.add(Dense(units=43, activation='softmax'))

cnn_model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=Adam(lr=0.001),
                  metrics=['accuracy'])

history = cnn_model.fit(X_train_gray_norm,
                        y_train,
                        batch_size=500,
                        epochs=50,
                        verbose=1,
                        validation_data=(X_validation_gray_norm, y_validation))
```
Epoch 1/50 70/70 [==============================] - 8s 10ms/step - loss: 3.4363 - accuracy: 0.1037 - val_loss: 2.5737 - val_accuracy: 0.3120 Epoch 2/50 70/70 [==============================] - 0s 6ms/step - loss: 1.8750 - accuracy: 0.4805 - val_loss: 1.4311 - val_accuracy: 0.5537 Epoch 3/50 70/70 [==============================] - 0s 5ms/step - loss: 1.0405 - accuracy: 0.6931 - val_loss: 1.0730 - val_accuracy: 0.6859 Epoch 4/50 70/70 [==============================] - 0s 5ms/step - loss: 0.7399 - accuracy: 0.7856 - val_loss: 0.8831 - val_accuracy: 0.7234 Epoch 5/50 70/70 [==============================] - 0s 5ms/step - loss: 0.5875 - accuracy: 0.8318 - val_loss: 0.8052 - val_accuracy: 0.7413 Epoch 6/50 70/70 [==============================] - 0s 5ms/step - loss: 0.4881 - accuracy: 0.8654 - val_loss: 0.7567 - val_accuracy: 0.7671 Epoch 7/50 70/70 [==============================] - 0s 5ms/step - loss: 0.4150 - accuracy: 0.8829 - val_loss: 0.7154 - val_accuracy: 0.7844 Epoch 8/50 70/70 [==============================] - 0s 5ms/step - loss: 0.3522 - accuracy: 0.9021 - val_loss: 0.6872 - val_accuracy: 0.8023 Epoch 9/50 70/70 [==============================] - 0s 6ms/step - loss: 0.3141 - accuracy: 0.9150 - val_loss: 0.6809 - val_accuracy: 0.7975 Epoch 10/50 70/70 [==============================] - 0s 5ms/step - loss: 0.2788 - accuracy: 0.9248 - val_loss: 0.6507 - val_accuracy: 0.8116 Epoch 11/50 70/70 [==============================] - 0s 7ms/step - loss: 0.2490 - accuracy: 0.9327 - val_loss: 0.6513 - val_accuracy: 0.8231 Epoch 12/50 70/70 [==============================] - 0s 5ms/step - loss: 0.2369 - accuracy: 0.9377 - val_loss: 0.6711 - val_accuracy: 0.8034 Epoch 13/50 70/70 [==============================] - 0s 5ms/step - loss: 0.2145 - accuracy: 0.9423 - val_loss: 0.6187 - val_accuracy: 0.8293 Epoch 14/50 70/70 [==============================] - 0s 5ms/step - loss: 0.2005 - accuracy: 0.9489 - val_loss: 0.6059 - val_accuracy: 0.8367 Epoch 15/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1789 - accuracy: 0.9525 - val_loss: 0.6724 - val_accuracy: 0.8249 Epoch 16/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1634 - accuracy: 0.9555 - val_loss: 0.6359 - val_accuracy: 0.8399 Epoch 17/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1572 - accuracy: 0.9603 - val_loss: 0.6481 - val_accuracy: 0.8367 Epoch 18/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1311 - accuracy: 0.9663 - val_loss: 0.6483 - val_accuracy: 0.8302 Epoch 19/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1302 - accuracy: 0.9680 - val_loss: 0.6580 - val_accuracy: 0.8306 Epoch 20/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1230 - accuracy: 0.9669 - val_loss: 0.6450 - val_accuracy: 0.8363 Epoch 21/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1083 - accuracy: 0.9738 - val_loss: 0.6795 - val_accuracy: 0.8390 Epoch 22/50 70/70 [==============================] - 0s 5ms/step - loss: 0.1068 - accuracy: 0.9726 - val_loss: 0.6792 - val_accuracy: 0.8381 Epoch 23/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0933 - accuracy: 0.9761 - val_loss: 0.7126 - val_accuracy: 0.8410 Epoch 24/50 70/70 [==============================] - 0s 6ms/step - loss: 0.0874 - accuracy: 0.9764 - val_loss: 0.6611 - val_accuracy: 0.8469 Epoch 25/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0773 - accuracy: 0.9809 - val_loss: 0.7272 - val_accuracy: 0.8413 Epoch 26/50 70/70 
[==============================] - 0s 5ms/step - loss: 0.0719 - accuracy: 0.9824 - val_loss: 0.7447 - val_accuracy: 0.8290 Epoch 27/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0755 - accuracy: 0.9814 - val_loss: 0.7347 - val_accuracy: 0.8322 Epoch 28/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0723 - accuracy: 0.9807 - val_loss: 0.7886 - val_accuracy: 0.8311 Epoch 29/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0611 - accuracy: 0.9847 - val_loss: 0.7606 - val_accuracy: 0.8345 Epoch 30/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0599 - accuracy: 0.9860 - val_loss: 0.8071 - val_accuracy: 0.8365 Epoch 31/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0595 - accuracy: 0.9848 - val_loss: 0.7790 - val_accuracy: 0.8404 Epoch 32/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0510 - accuracy: 0.9864 - val_loss: 0.7991 - val_accuracy: 0.8374 Epoch 33/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0484 - accuracy: 0.9876 - val_loss: 0.7773 - val_accuracy: 0.8442 Epoch 34/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0540 - accuracy: 0.9844 - val_loss: 0.8191 - val_accuracy: 0.8356 Epoch 35/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0438 - accuracy: 0.9887 - val_loss: 0.7977 - val_accuracy: 0.8522 Epoch 36/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0372 - accuracy: 0.9903 - val_loss: 0.7888 - val_accuracy: 0.8401 Epoch 37/50 70/70 [==============================] - 0s 6ms/step - loss: 0.0372 - accuracy: 0.9902 - val_loss: 0.8771 - val_accuracy: 0.8413 Epoch 38/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0385 - accuracy: 0.9897 - val_loss: 0.8986 - val_accuracy: 0.8438 Epoch 39/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0380 - accuracy: 0.9902 - val_loss: 0.8557 - val_accuracy: 0.8485 Epoch 40/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0370 - accuracy: 0.9900 - val_loss: 0.8356 - val_accuracy: 0.8478 Epoch 41/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0347 - accuracy: 0.9901 - val_loss: 0.8599 - val_accuracy: 0.8438 Epoch 42/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0334 - accuracy: 0.9905 - val_loss: 0.9633 - val_accuracy: 0.8388 Epoch 43/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0406 - accuracy: 0.9879 - val_loss: 0.9581 - val_accuracy: 0.8327 Epoch 44/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0321 - accuracy: 0.9915 - val_loss: 0.9337 - val_accuracy: 0.8415 Epoch 45/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0291 - accuracy: 0.9922 - val_loss: 0.8349 - val_accuracy: 0.8497 Epoch 46/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0278 - accuracy: 0.9923 - val_loss: 0.9275 - val_accuracy: 0.8506 Epoch 47/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0265 - accuracy: 0.9931 - val_loss: 0.9720 - val_accuracy: 0.8383 Epoch 48/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0287 - accuracy: 0.9918 - val_loss: 0.9064 - val_accuracy: 0.8580 Epoch 49/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0286 - accuracy: 0.9911 - val_loss: 0.8895 - val_accuracy: 0.8619 Epoch 50/50 70/70 [==============================] - 0s 5ms/step - loss: 0.0223 - accuracy: 0.9942 - val_loss: 0.8876 - val_accuracy: 0.8560
STEP 5: MODEL EVALUATION
```python
score = cnn_model.evaluate(X_test_gray_norm, y_test, verbose=0)
print('Test Accuracy : {:.4f}'.format(score[1]))

history.history.keys()

accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(accuracy))

plt.plot(epochs, accuracy, 'bo', label='Training Accuracy')
plt.plot(epochs, val_accuracy, 'b', label='Validation Accuracy')
plt.title('Training and Validation accuracy')
plt.legend()

plt.figure()  # start a new figure so the loss curves don't overwrite the accuracy plot
plt.plot(epochs, loss, 'ro', label='Training Loss')
plt.plot(epochs, val_loss, 'r', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

# get the predictions for the test data
predicted_classes = cnn_model.predict_classes(X_test_gray_norm)

# get the indices to be plotted
y_true = y_test

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, predicted_classes)
plt.figure(figsize=(25, 25))
sns.heatmap(cm, annot=True)

L = 7
W = 7
fig, axes = plt.subplots(L, W, figsize=(12, 12))
axes = axes.ravel()

for i in np.arange(0, L * W):
    axes[i].imshow(X_test[i])
    axes[i].set_title("Prediction={}\n True={}".format(predicted_classes[i], y_true[i]))
    axes[i].axis('off')

plt.subplots_adjust(wspace=1)
```
For task 2, I'll extract the temperature column.
```python
import json

temperature_column = weather_hourly_df.loc[:, 'temperature']
with open('files\\temperature.txt', 'w') as file:
    for temp in temperature_column:
        file.write(str(temp) + '\n')

path_to_passwords_json = "files\\passwords.json"
with open(path_to_passwords_json, 'r') as file:
    passwords = json.load(file)

with open("files\\passwords.txt", "w") as file:
    for password in passwords:
        file.write(password + '\n')

with open("files\\pbkdf2_bits.txt", "r") as file:
    pbkdf2 = list(file.readlines())

with open("files\\hkdf_bits.txt", "r") as file:
    hkdf = list(file.readlines())

hkdf = [int(x) for x in hkdf]
pbkdf2 = [int(x) for x in pbkdf2]

plt.figure(figsize=(10, 5))
plt.title('Histogram of HKDF bits')
plt.hist(hkdf, bins=10, color='g')
plt.savefig('hists\\HKDF_gist')

plt.figure(figsize=(10, 5))
plt.title('Histogram of PBKDF2 bits')
plt.hist(pbkdf2, bins=10, color='g')
plt.savefig('hists\\PBKDF2_gist')
```
View source on GitHub | Notebook Viewer | Run in Google Colab

Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.

The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
```python
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('Installing geemap ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

import ee
import geemap
```
Create an interactive map

The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```python
Map = geemap.Map(center=[40, -100], zoom=4)
Map
```
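For example, a hedged sketch of adding one of the additional basemaps mentioned above ('HYBRID' is one of geemap's built-in basemap keys; see the basemaps.py link above for the full list):

```python
# Add one of geemap's built-in basemaps to the interactive map.
Map.add_basemap('HYBRID')
Map
```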
Add Earth Engine Python script
```python
# Add Earth Engine dataset
# This function adds a band representing the image timestamp.
def addTime(image):
    return image.addBands(image.metadata('system:time_start'))

def conditional(image):
    return ee.Algorithms.If(ee.Number(image.get('SUN_ELEVATION')).gt(40),
                            image,
                            ee.Image(0))

# Load a Landsat 8 collection for a single path-row.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
    .filter(ee.Filter.eq('WRS_PATH', 44)) \
    .filter(ee.Filter.eq('WRS_ROW', 34))

# Map the function over the collection and display the result.
print(collection.map(addTime).getInfo())

# Load a Landsat 8 collection for a single path-row.
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA') \
    .filter(ee.Filter.eq('WRS_PATH', 44)) \
    .filter(ee.Filter.eq('WRS_ROW', 34))

# This function uses a conditional statement to return the image if
# the solar elevation > 40 degrees. Otherwise it returns a zero image.
# conditional = function(image) {
#   return ee.Algorithms.If(ee.Number(image.get('SUN_ELEVATION')).gt(40),
#                           image,
#                           ee.Image(0))
# }

# Map the function over the collection, convert to a List and print the result.
print('Expand this to see the result: ', collection.map(conditional).getInfo())
```
Display Earth Engine data layers
```python
Map.addLayerControl()  # This line is not needed for ipyleaflet-based Map.
Map
```
Correlation analysis (pre 0 vs pre 10)

ndcg_00 holds the results for pre 0, and ndcg_10 the results for pre 10.

Correlation between KC occurrence count and NDCG
```python
# 1.1
corrcoef = np.corrcoef(x=count_list, y=ndcg_00)[0][1]
print("Corr coef =", corrcoef)
ax = sns.jointplot(x=count_list, y=ndcg_00, kind='reg')
plt.xlabel("KC count")
plt.ylabel("NDCG (baseline)")
plt.show()

# 1.2
corrcoef = np.corrcoef(x=count_list, y=ndcg_10)
print("Corr coef =", corrcoef)
sns.jointplot(x=count_list, y=ndcg_10, kind='reg')
plt.xlabel("KC count")
plt.ylabel("NDCG (ours)")
plt.show()

# 1.6
_y = [n10 - n0 for n10, n0 in zip(ndcg_10, ndcg_00)]
corrcoef = np.corrcoef(x=count_list, y=_y)
print("Corr coef =", corrcoef)
sns.jointplot(x=count_list, y=_y, kind='reg')
plt.xlabel("KC count")
plt.ylabel("NDCG gain")
plt.show()
```
Corr coef = [[ 1.         -0.06907159]
             [-0.06907159  1.        ]]
For the correlation between KC accuracy and NDCG (1.3), you can see a tendency for NDCG to be somewhat lower on problems with low accuracy.
```python
# 1.3
sns.jointplot(x=[sum(l) / len(l) for l in count], y=ndcg_00, kind='reg')

# 1.4
sns.jointplot(x=[sum(l) / len(l) for l in count], y=ndcg_10, kind='reg')
```
Relationship between occurrence count and accuracy

Discussion of the results: is there a tendency for students to avoid attempting difficult problems and to repeat easy ones?
```python
# 1.5
print(np.corrcoef(x=[len(l) for l in count], y=[sum(l) / len(l) for l in count]))
sns.jointplot(x=[len(l) for l in count], y=[sum(l) / len(l) for l in count], kind='reg')
```
[[1.         0.27203797]
 [0.27203797 1.        ]]
Analysis Bike Share

Summary

* 85% of the trips are made by users who are subscribers over the last two years (2013-08 to 2015-08).
* This trend has been maintained over the last couple of months (86% are subscribers).
* The number of trips is variable through the days. The last couple of months follow the same trends.
* Subscribers: average number of trips per day 773. Average number of trips per day (last couple of months) 900.
* Subscribers use bikes more on weekdays. There is a big difference between weekdays and the weekend.
* Subscribers use the bike with greater frequency during rush hours. Morning: 7:00 AM - 9:00 AM. Evening: 16:00 - 18:00.
* Average number of trips during the weekday: 105.076. Average number of trips during the weekend: 20.682.
* The subscriber uses the bike for 8 minutes on average. The most frequently used range is [2, 15] minutes.
* The most frequent start station is San Francisco Caltrain (Townsend at 4th).
* The most frequent end station is San Francisco Caltrain (Townsend at 4th).
* Trip start-end most used: San Francisco Caltrain 2 (330 Townsend) --> Townsend at 7th (6216 trips).
* Some bikes are used with greater frequency.

User Experience

* According to the data, the user's profile is a worker. He leaves his house in the morning for a station to get on a bike and go to his work (nearest station). This trip takes 8 minutes on average (not a long distance). For the return it is the same idea.
* The user experience can be affected mainly for two reasons:
  1. Limited availability of bikes at rush hours.
  2. Bikes damaged by excessive use at the stations where there is more demand.

Experimentation Plan

* Go to the route with the most demand (San Francisco Caltrain 2 (330 Townsend) --> Townsend at 7th) and see what is happening.
* Try to quantify waiting for available bikes at the rush hours. This situation should be similar across all weekdays.
* Based on the above, increase the supply of bikes. This must be dynamic according to the demand at the rush hours.
* Detect the most used bikes (bike_id) and check their status.
* Based on the above, implement (dynamic) maintenance according to use.

Modules
```python
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np
```
Load Data
```python
trip = pd.read_csv('trip.csv')
```
Subscription Types (Users)
```python
trip['subscription_type'] = pd.Categorical(trip['subscription_type'])
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
trip['subscription_type'].value_counts().plot(kind='pie', autopct='%.2f', ax=ax[0])
stats = trip['subscription_type'].value_counts(dropna=True)
ax[1].set_ylabel('Number of trips')
ax[1].set_title('N trips')
barlist = ax[1].bar(stats.index.categories, [stats['Customer'], stats['Subscriber']], width=0.35)
barlist[0].set_color('#ff7f0e')
barlist[1].set_color('#1f77b4')
print("85% of the trips are made by users who are subscribers (use subscription plan)")

trip['start_date'] = pd.to_datetime(trip['start_date'])
trip['start'] = trip['start_date'].dt.date
trip['end_date'] = pd.to_datetime(trip['end_date'])
trip['end'] = trip['end_date'].dt.date
print('First Trip', trip['start'].min())
print('Last Trip', trip['end'].max())

from_last_months = pd.to_datetime('2015-06-01')
condition = trip['start'] >= from_last_months
trip.loc[condition, 'subscription_type'].value_counts().plot(kind='pie', autopct='%.2f', figsize=(6, 6))
print("86% of the trips are made by users who are subscribers (use subscription plan). Last couple of months.")

group = trip.groupby('start').count()
condition = (group.index >= from_last_months)
fig, ax = plt.subplots(figsize=(24, 6))
locator = mdates.AutoDateLocator()
formatter = mdates.ConciseDateFormatter(locator)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
ax.plot(group.index[~condition], group.id[~condition].values, color='blue', linewidth=2)
ax.plot(group.index[condition], group.id[condition].values, color='red', linewidth=2)
ax.set_title('N trips')
ax.legend(['N trips', 'N trips last months'])
print("The number of trips is variable through the days.")
print("Last couple of months follow the same trends. This is important.")
print("End of the year and early next there is a downward trend.")

group = trip.groupby(['start', 'subscription_type']).size().unstack(level=1, fill_value=0)
fig, ax = plt.subplots(figsize=(24, 6))
locator = mdates.AutoDateLocator()
formatter = mdates.ConciseDateFormatter(locator)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
ax.plot(group.index, group['Customer'].values, color='#ff7f0e', linewidth=2)
ax.set_title('N trips by Subscription Types: Customer')
ax.legend(['N trips Customer'])

fig, ax = plt.subplots(figsize=(24, 6))
condition = (group.index >= from_last_months)
locator = mdates.AutoDateLocator()
formatter = mdates.ConciseDateFormatter(locator)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
ax.plot(group.index[~condition], group['Subscriber'][~condition].values, color='#1f77b4', linewidth=2)
ax.plot(group.index[condition], group['Subscriber'][condition].values, color='red', linewidth=2)
ax.set_title('N trips by Subscription Types: Subscriber')
ax.legend(['N trips Subscriber', 'N trips last months'])

avg = int(group['Subscriber'].values.sum() / len(group.index))
print('Average number of trips per day', avg)
avg = int(group['Subscriber'][condition].values.sum() / len(group.index[condition]))
print('Average number of trips per day (Last couple of months)', avg)

def get_ordered_data(group):
    values = []
    avg_weekday, avg_weekend = 0.0, 0.0
    weekday = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
    weekend = ['Saturday', 'Sunday']
    week = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
    for day in week:
        if day in group.index:
            values.append(group[day])
            if day in weekday:
                avg_weekday = avg_weekday + group[day] / len(weekday)
            else:
                avg_weekend = avg_weekend + group[day] / len(weekend)
        else:
            values.append(0.0)
    return week, values, avg_weekend, avg_weekday

trip['day_name'] = trip['start_date'].dt.day_name()
group = trip.groupby(['day_name', 'subscription_type']).size().unstack(level=1, fill_value=0)
days, trips_subscriber, avg_weekend, avg_weekday = get_ordered_data(group.Subscriber)
_, trips_customer, _, _ = get_ordered_data(group.Customer)
trips_subscriber

fig, ax = plt.subplots(figsize=(8, 8))
ax.set_title('Number of trips by day')
ax.set_ylabel('Number of trips')
x = np.arange(len(days))
width = 0.35
ax.bar(x + width/2, trips_subscriber, width, label='Subscriber', color='#1f77b4')
ax.bar(x - width/2, trips_customer, width, label='Customer', color='#ff7f0e')
ax.set_xticks(x)
ax.set_xticklabels(days)
ax.legend()
print("Subscribers use bikes more on weekdays. There is a big difference between weekday and weekend.")
print("Average number of trips during the weekday:", avg_weekday)
print("Average number of trips during the weekend:", avg_weekend)

trip['hour'] = trip['start_date'].dt.hour
group = trip.groupby(['hour', 'subscription_type']).size().unstack(level=1, fill_value=0)
fig, ax = plt.subplots(figsize=(8, 8))
ax.set_title('Number of trips by hour')
ax.set_ylabel('Number of trips')
hours = group.index
x = np.arange(len(hours))
width = 0.35
ax.bar(x + width/2, group.Subscriber.values, width, label='Subscriber', color='#1f77b4')
ax.bar(x - width/2, group.Customer.values, width, label='Customer', color='#ff7f0e')
ax.set_xticks(x)
ax.set_xticklabels(hours)
ax.legend()
print("The Subscriber uses the bike in a greater proportion during rush hours.")
print("Morning: 7:00 AM - 9:00 AM")
print("Evening: 16:00 - 18:00")

trip['duration_min'] = (trip['duration'] / 60.0).apply(np.floor).astype(int)
group = trip.groupby(['duration_min', 'subscription_type']).size().unstack(level=1, fill_value=0)
fig, ax = plt.subplots(figsize=(20, 5))
condition = (group.Subscriber.index <= 60)
ax.set_title('Number of trips by first 60 minutes')
ax.set_ylabel('Number of trips')
mins = group.Subscriber[condition].index
x = np.arange(len(mins))
width = 0.35
ax.bar(x + width/2, group.Subscriber[condition].values, width, label='Subscriber', color='#1f77b4')
ax.bar(x - width/2, group.Customer[condition].values, width, label='Customer', color='#ff7f0e')
ax.set_xticks(x)
ax.set_xticklabels(mins)
ax.legend()
avg_time = (sum(group.Subscriber[condition].values * mins) / sum(group.Subscriber[condition].values))
print("The subscriber uses the bike {} minutes on average.".format(round(avg_time, 2)))
print("The most frequently used range is between: [2, 15] minutes.")

trip['start_station_name'] = pd.Categorical(trip['start_station_name'])
most_used = trip['start_station_name'].value_counts().nlargest(10)
fig, ax = plt.subplots(figsize=(15, 5))
ax.set_ylabel('Number of trips')
ax.set_title('Top 10 Most frequent start station')
ax.bar(most_used.index.values, most_used.values)
for tick in ax.get_xticklabels():
    tick.set_rotation(90)
print("The most frequent start station is: ", most_used.index.values[0])

trip['end_station_name'] = pd.Categorical(trip['end_station_name'])
most_used = trip['end_station_name'].value_counts().nlargest(10)
fig, ax = plt.subplots(figsize=(15, 5))
ax.set_ylabel('Number of trips')
ax.set_title('Top 10 Most frequent end station')
ax.bar(most_used.index.values, most_used.values)
for tick in ax.get_xticklabels():
    tick.set_rotation(90)
print("The most frequent end station is: ", most_used.index.values[0])

group = trip.groupby(['start_station_name', 'end_station_name']).size().unstack(level=1, fill_value=0)
cond = (group.index == group.max(axis=1).nlargest(1).index[0])
most_start_station = group.max(axis=1).nlargest(1).index[0]
most_end_station = group[cond].max().nlargest(1)
print('Trip start-end most used:', most_start_station, '-->', most_end_station.index[0],
      ', N Trips = ', most_end_station.values[0])

fig, ax = plt.subplots(figsize=(15, 5))
ax.set_ylabel('Number of trips')
ax.set_title('bike_id')
most_used = trip['bike_id'].value_counts()
ax.bar(most_used.index.values, most_used.values, color='blue')
print("Some bikes are used in greater frequency.")
```
Some bikes are used with greater frequency.
MIT
.ipynb_checkpoints/Bike_Share-checkpoint.ipynb
alan-toledo/bike-share-data-analysis
Amazon SageMaker Debugger Tutorial: How to Use the Built-in Debugging Rules [Amazon SageMaker Debugger](https://docs.aws.amazon.com/sagemaker/latest/dg/train-debugger.html) is a feature that lets you debug training jobs for your machine learning models and identify training problems in real time. Even while a training job appears to be working like a charm, the model might have common problems, such as loss not decreasing, overfitting, and underfitting. To understand what is happening, practitioners have to debug the training job, yet it can be challenging to track and analyze all of the output tensors. SageMaker Debugger covers the major deep learning frameworks (TensorFlow, PyTorch, and MXNet) and the XGBoost machine learning algorithm, so you can debug with minimal coding. Debugger automatically detects training problems through its built-in rules, and you can find a full list of the built-in rules for debugging at [List of Debugger Built-in Rules](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-built-in-rules.html). In this tutorial, you will learn how to use SageMaker Debugger and its built-in rules to debug your model. The workflow is as follows:* [Step 1: Import SageMaker Python SDK and the Debugger client library smdebug](step1)* [Step 2: Create a Debugger built-in rule list object](step2)* [Step 3: Construct a SageMaker estimator](step3)* [Step 4: Run the training job](step4)* [Step 5: Check training progress on Studio Debugger insights dashboard and the built-in rules evaluation status](step5)* [Step 6: Create a Debugger trial object to access the saved tensors](step6) Step 1: Import SageMaker Python SDK and the SMDebug client library **Important**: To use the new Debugger features, you need to upgrade the SageMaker Python SDK and the SMDebug library. In the following cell, make sure the third line sets `install_needed=True` and run the cell to upgrade the libraries.
import sys import IPython install_needed = True # Set to True to upgrade if install_needed: print("installing deps and restarting kernel") !{sys.executable} -m pip install -U sagemaker !{sys.executable} -m pip install smdebug matplotlib IPython.Application.instance().kernel.do_shutdown(True)
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Check the SageMaker Python SDK and the SMDebug library versions.
import sagemaker sagemaker.__version__ import smdebug smdebug.__version__
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Step 2: Create a Debugger built-in rule list object
from sagemaker.debugger import ( Rule, rule_configs, ProfilerRule, ProfilerConfig, FrameworkProfile, DetailedProfilingConfig, DataloaderProfilingConfig, PythonProfilingConfig, )
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
The following code cell shows how to configure a rule object for debugging and profiling. For more information about the Debugger built-in rules, see [List of Debugger Built-in Rules](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-built-in-rules.html). The following cell demonstrates how to configure system and framework profiling.
profiler_config = ProfilerConfig( system_monitor_interval_millis=500, framework_profile_params=FrameworkProfile( local_path="/opt/ml/output/profiler/", detailed_profiling_config=DetailedProfilingConfig(start_step=5, num_steps=3), dataloader_profiling_config=DataloaderProfilingConfig(start_step=5, num_steps=2), python_profiling_config=PythonProfilingConfig(start_step=9, num_steps=1), ), ) built_in_rules = [ Rule.sagemaker(rule_configs.overfit()), ProfilerRule.sagemaker(rule_configs.ProfilerReport()), ]
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Step 3: Construct a SageMaker estimatorUsing the rule object created in the previous cell, construct a SageMaker estimator. The estimator can be one of the SageMaker framework estimators, [TensorFlow](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.htmltensorflow-estimator), [PyTorch](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/sagemaker.pytorch.html), [MXNet](https://sagemaker.readthedocs.io/en/stable/frameworks/mxnet/sagemaker.mxnet.htmlmxnet-estimator), and [XGBoost](https://sagemaker.readthedocs.io/en/stable/frameworks/xgboost/xgboost.html), or the [SageMaker generic estimator](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.htmlsagemaker.estimator.Estimator). For more information about what framework versions are supported, see [Debugger-supported Frameworks and Algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/train-debugger.htmldebugger-supported-aws-containers).In this tutorial, the SageMaker TensorFlow estimator is constructed to run a TensorFlow training script with the Keras ResNet50 model and the cifar10 dataset.
import boto3 from sagemaker.tensorflow import TensorFlow session = boto3.session.Session() region = session.region_name estimator = TensorFlow( role=sagemaker.get_execution_role(), instance_count=1, instance_type="ml.g4dn.xlarge", image_uri=f"763104351884.dkr.ecr.{region}.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04", # framework_version='2.3.1', # py_version="py37", max_run=3600, source_dir="./src", entry_point="tf-resnet50-cifar10.py", # Debugger Parameters rules=built_in_rules, profiler_config=profiler_config )
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Step 4: Run the training job With the `wait=False` option, you could proceed to the next notebook cell without waiting for the training job logs to be printed out; this example passes `wait=True`, so the cell blocks and streams the logs until training finishes.
estimator.fit(wait=True)
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Step 5: Check training progress on Studio Debugger insights dashboard and the built-in rules evaluation status- **Option 1** - Use SageMaker Studio Debugger insights and Experiments. This is a non-coding approach.- **Option 2** - Use the following code cells. This is a code-based approach. Option 1 - Open the Studio Debugger insights dashboard to get insights into the training job Through the Debugger insights dashboard on Studio, you can check the training job status, system resource utilization, and suggestions for optimizing model performance. The following screenshot shows the Debugger insights dashboard interface. The following heatmap shows the `ml.c5.4xlarge` instance utilization while the training job is running or after the job has completed. To learn how to access the Debugger insights dashboard, see [Debugger on Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-on-studio.html) in the [SageMaker Debugger developer guide](https://docs.aws.amazon.com/sagemaker/latest/dg/train-debugger.html). Option 2 - Run the following scripts for the code-based option The following two code cells return the current training job name, status, and the rule status in real time. Print the training job name
job_name = estimator.latest_training_job.name print("Training job name: {}".format(job_name))
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Print the training job and rule evaluation status The following script polls the status every 30 seconds until the secondary training status changes to one of `Training`, `Stopped`, `Completed`, or `Failed`. Once the secondary status reaches `Training`, you will be able to retrieve tensors from the default S3 bucket.
import time client = estimator.sagemaker_session.sagemaker_client description = client.describe_training_job(TrainingJobName=job_name) if description["TrainingJobStatus"] != "Completed": while description["SecondaryStatus"] not in {"Training", "Stopped", "Completed", "Failed"}: description = client.describe_training_job(TrainingJobName=job_name) primary_status = description["TrainingJobStatus"] secondary_status = description["SecondaryStatus"] print( "Current job status: [PrimaryStatus: {}, SecondaryStatus: {}] | {} Rule Evaluation Status: {}".format( primary_status, secondary_status, estimator.latest_training_job.rule_job_summary()[0]["RuleConfigurationName"], estimator.latest_training_job.rule_job_summary()[0]["RuleEvaluationStatus"], ) ) time.sleep(30)
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
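The loop above prints only the first entry of `rule_job_summary()`. Since two rules were configured in Step 2 (Overfit and ProfilerReport), a small follow-up sketch like the following, using the same SageMaker SDK call as the cell above, lists the evaluation status of every configured rule:

for summary in estimator.latest_training_job.rule_job_summary():
    # Each summary is a dict carrying the rule's name and its current status.
    print("{}: {}".format(summary["RuleConfigurationName"], summary["RuleEvaluationStatus"]))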
Step 6: Create a Debugger trial object to access the saved model parameters To access the tensors saved by Debugger, use the `smdebug` client library to create a Debugger trial object. The following code cell sets up a `tutorial_trial` object, and waits until it finds available tensors from the default S3 bucket.
from smdebug.trials import create_trial tutorial_trial = create_trial(estimator.latest_job_debugger_artifacts_path())
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
The Debugger trial object accesses the SageMaker estimator's Debugger artifact path, and fetches the output tensors saved for debugging. Print the default S3 bucket URI where the Debugger output tensors are stored
tutorial_trial.path
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Print the Debugger output tensor names
tutorial_trial.tensor_names()
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Print the list of steps where the tensors are saved The smdebug `ModeKeys` class provides training phase mode keys that you can use to sort training (`TRAIN`) and validation (`EVAL`) steps and their corresponding values.
from smdebug.core.modes import ModeKeys tutorial_trial.steps(mode=ModeKeys.TRAIN) tutorial_trial.steps(mode=ModeKeys.EVAL)
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
Plot the loss curve The following script plots the loss and accuracy curves of the training and validation loops.
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import host_subplot

trial = tutorial_trial

def get_data(trial, tname, mode):
    tensor = trial.tensor(tname)
    steps = tensor.steps(mode=mode)
    vals = [tensor.value(s, mode=mode) for s in steps]
    return steps, vals

def plot_tensor(trial, tensor_name):
    steps_train, vals_train = get_data(trial, tensor_name, mode=ModeKeys.TRAIN)
    steps_eval, vals_eval = get_data(trial, tensor_name, mode=ModeKeys.EVAL)
    fig = plt.figure(figsize=(10, 7))
    host = host_subplot(111)
    par = host.twiny()
    host.set_xlabel("Steps (TRAIN)")
    par.set_xlabel("Steps (EVAL)")
    host.set_ylabel(tensor_name)
    (p1,) = host.plot(steps_train, vals_train, label=tensor_name)
    (p2,) = par.plot(steps_eval, vals_eval, label="val_" + tensor_name)
    leg = plt.legend()
    host.xaxis.get_label().set_color(p1.get_color())
    leg.texts[0].set_color(p1.get_color())
    par.xaxis.get_label().set_color(p2.get_color())
    leg.texts[1].set_color(p2.get_color())
    plt.ylabel(tensor_name)
    plt.show()

plot_tensor(trial, "loss")
plot_tensor(trial, "accuracy")
_____no_output_____
MIT
04_sagemaker_debugger/tf-mnist-builtin-rule.ipynb
tom5610/amazon_sagemaker_intermediate_workshop
This notebook was prepared by Marco Guajardo. For the license visit [github](https://github.com/donnemartin/interactive-coding-challenges) Solution notebook Problem: Given a string of words, return a string with each word reversed (word order preserved) * [Constraints](Constraint)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes* Is whitespace important? * No, the whitespace does not change* Is this case sensitive? * Yes* What if the string is empty? * Return None* Is the order of words important? * Yes Algorithm: Split the words into a list and reverse each word individually Steps:* Check if the string is empty* If not empty, split the string into a list of words * For each word in the list * reverse the word* Return the string representation of the list Complexity:* Time complexity is O(n) where n is the number of chars.* Space complexity is O(n) where n is the number of chars.
def reverse_words(S):
    if len(S) == 0:
        return None
    words = S.split()
    for i in range(len(words)):
        words[i] = words[i][::-1]
    return " ".join(words)

%%writefile reverse_words_solution.py
from nose.tools import assert_equal

class UnitTest(object):
    def testReverseWords(self, func):
        assert_equal(func('the sun is hot'), 'eht nus si toh')
        assert_equal(func(''), None)
        assert_equal(func('123 456 789'), '321 654 987')
        assert_equal(func('magic'), 'cigam')
        print('Success: reverse_words')

def main():
    test = UnitTest()
    test.testReverseWords(reverse_words)

if __name__ == "__main__":
    main()

run -i reverse_words_solution.py
Success: reverse_words
Apache-2.0
staging/arrays_strings/reverse_words/reverse_words_solution.ipynb
sophomore99/PythonInterective
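For reference, the same word-by-word reversal can be written as a single expression. This is an equivalent sketch, not part of the graded solution:

def reverse_words_concise(S):
    # Reverse the characters of each word while preserving word order;
    # an empty string still maps to None, matching the unit tests above.
    return " ".join(word[::-1] for word in S.split()) if S else None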
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Eager Execution Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*: Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*: Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*: Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. For a collection of examples running in eager execution, see: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
import tensorflow.compat.v1 as tf

tf.enable_eager_execution()  # enable eager mode once, at program startup
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Now you can run TensorFlow operations and the results will return immediately:
tf.executing_eagerly() x = [[2.]] m = tf.matmul(x, x) print("hello, {}".format(m))
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Enabling eager execution changes how TensorFlow operations behave: now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
a = tf.constant([[1, 2], [3, 4]]) print(a) # Broadcasting support b = tf.add(a, 1) print(b) # Operator overloading is supported print(a * b) # Use NumPy values import numpy as np c = np.multiply(a, b) print(c) # Obtain numpy value from a tensor: print(a.numpy()) # => [[1 2] # [3 4]]
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
def fizzbuzz(max_num): counter = tf.constant(0) max_num = tf.convert_to_tensor(max_num) for num in range(1, max_num.numpy()+1): num = tf.constant(num) if int(num % 3) == 0 and int(num % 5) == 0: print('FizzBuzz') elif int(num % 3) == 0: print('Fizz') elif int(num % 5) == 0: print('Buzz') else: print(num.numpy()) counter += 1 fizzbuzz(15)
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
This has conditionals that depend on tensor values and it prints these values at runtime. Build a model Many machine learning models are represented by composing layers. When using TensorFlow with eager execution you can either write your own layers or use a layer provided in the `tf.keras.layers` package. While you can use any Python object to represent a layer, TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit from it to implement your own layer:
class MySimpleLayer(tf.keras.layers.Layer): def __init__(self, output_units): super(MySimpleLayer, self).__init__() self.output_units = output_units def build(self, input_shape): # The build method gets called the first time your layer is used. # Creating variables on build() allows you to make their shape depend # on the input shape and hence removes the need for the user to specify # full shapes. It is possible to create variables during __init__() if # you already know their full shapes. self.kernel = self.add_variable( "kernel", [input_shape[-1], self.output_units]) def call(self, input): # Override call() instead of __call__ so we can perform some bookkeeping. return tf.matmul(input, self.kernel)
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has a superset of its functionality (it can also add a bias). When composing layers into models you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
model = tf.keras.Sequential([ tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape tf.keras.layers.Dense(10) ])
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Alternatively, organize models in classes by inheriting from `tf.keras.Model`. This is a container for layers that is a layer itself, allowing `tf.keras.Model` objects to contain other `tf.keras.Model` objects.
class MNISTModel(tf.keras.Model): def __init__(self): super(MNISTModel, self).__init__() self.dense1 = tf.keras.layers.Dense(units=10) self.dense2 = tf.keras.layers.Dense(units=10) def call(self, input): """Run the model.""" result = self.dense1(input) result = self.dense2(result) result = self.dense2(result) # reuse variables from dense2 layer return result model = MNISTModel()
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
It's not required to set an input shape for the `tf.keras.Model` class since the parameters are set the first time input is passed to the layer. `tf.keras.layers` classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. `tf.GradientTape` is an opt-in feature to provide maximal performance when not tracing. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
w = tf.Variable([[1.0]]) with tf.GradientTape() as tape: loss = w * w grad = tape.gradient(loss, w) print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
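Since a non-persistent `tf.GradientTape` is discarded after its first `gradient()` call, computing several gradients from one forward pass requires a persistent tape. A minimal sketch using the public `persistent=True` flag:

x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y = x * x
    z = y * y
print(tape.gradient(z, x))  # 4 * x^3 = 108
print(tape.gradient(y, x))  # 2 * x = 6
del tape  # release the tape's resources once all gradients are computed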
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
# Fetch and format the mnist data (mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data() dataset = tf.data.Dataset.from_tensor_slices( (tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32), tf.cast(mnist_labels,tf.int64))) dataset = dataset.shuffle(1000).batch(32) # Build the model mnist_model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16,[3,3], activation='relu'), tf.keras.layers.Conv2D(16,[3,3], activation='relu'), tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(10) ])
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Even without training, call the model and inspect the output in eager execution:
for images,labels in dataset.take(1): print("Logits: ", mnist_model(images[0:1]).numpy())
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
optimizer = tf.train.AdamOptimizer() loss_history = [] for (batch, (images, labels)) in enumerate(dataset.take(400)): if batch % 10 == 0: print('.', end='') with tf.GradientTape() as tape: logits = mnist_model(images, training=True) loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits) loss_history.append(loss_value.numpy()) grads = tape.gradient(loss_value, mnist_model.trainable_variables) optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables), global_step=tf.train.get_or_create_global_step()) import matplotlib.pyplot as plt plt.plot(loss_history) plt.xlabel('Batch #') plt.ylabel('Loss [entropy]')
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
class Model(tf.keras.Model): def __init__(self): super(Model, self).__init__() self.W = tf.Variable(5., name='weight') self.B = tf.Variable(10., name='bias') def call(self, inputs): return inputs * self.W + self.B # A toy dataset of points around 3 * x + 2 NUM_EXAMPLES = 2000 training_inputs = tf.random_normal([NUM_EXAMPLES]) noise = tf.random_normal([NUM_EXAMPLES]) training_outputs = training_inputs * 3 + 2 + noise # The loss function to be optimized def loss(model, inputs, targets): error = model(inputs) - targets return tf.reduce_mean(tf.square(error)) def grad(model, inputs, targets): with tf.GradientTape() as tape: loss_value = loss(model, inputs, targets) return tape.gradient(loss_value, [model.W, model.B]) # Define: # 1. A model. # 2. Derivatives of a loss function with respect to model parameters. # 3. A strategy for updating the variables based on the derivatives. model = Model() optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs))) # Training loop for i in range(300): grads = grad(model, training_inputs, training_outputs) optimizer.apply_gradients(zip(grads, [model.W, model.B]), global_step=tf.train.get_or_create_global_step()) if i % 20 == 0: print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs))) print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs))) print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed; the variable is then deleted.
if tf.test.is_gpu_available(): with tf.device("gpu:0"): v = tf.Variable(tf.random_normal([1000, 1000])) v = None # v no longer takes up GPU memory
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Object-based saving `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
x = tf.Variable(10.) checkpoint = tf.train.Checkpoint(x=x) x.assign(2.) # Assign a new value to the variables and save. checkpoint_path = './ckpt/' checkpoint.save('./ckpt/') x.assign(11.) # Change the variable after saving. # Restore values from the checkpoint checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path)) print(x) # => 2.0
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
import os import tempfile model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16,[3,3], activation='relu'), tf.keras.layers.GlobalAveragePooling2D(), tf.keras.layers.Dense(10) ]) optimizer = tf.train.AdamOptimizer(learning_rate=0.001) checkpoint_dir = tempfile.mkdtemp() checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") root = tf.train.Checkpoint(optimizer=optimizer, model=model, optimizer_step=tf.train.get_or_create_global_step()) root.save(checkpoint_prefix) root.restore(tf.train.latest_checkpoint(checkpoint_dir))
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Object-oriented metrics `tf.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.metrics.result` method, for example:
m = tf.keras.metrics.Mean("loss") m(0) m(5) m.result() # => 2.5 m([8, 9]) m.result() # => 5.5
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. TensorFlow 1 summaries only work in eager mode, but can be run with the `compat.v2` module:
from tensorflow.compat.v2 import summary global_step = tf.train.get_or_create_global_step() logdir = "./tb/" writer = summary.create_file_writer(logdir) writer.set_as_default() for _ in range(10): global_step.assign_add(1) # your model code goes here summary.scalar('global_step', global_step, step=global_step) !ls tb/
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
def line_search_step(fn, init_x, rate=1.0): with tf.GradientTape() as tape: # Variables are automatically recorded, but manually watch a tensor tape.watch(init_x) value = fn(init_x) grad = tape.gradient(value, init_x) grad_norm = tf.reduce_sum(grad * grad) init_value = value while value > init_value - rate * grad_norm: x = init_x - rate * grad value = fn(x) rate /= 2.0 return x, value
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Custom gradients Custom gradients are an easy way to override gradients in eager and graph execution. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
@tf.custom_gradient def clip_gradient_by_norm(x, norm): y = tf.identity(x) def grad_fn(dresult): return [tf.clip_by_norm(dresult, norm), None] return y, grad_fn
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
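As a quick usage sketch (with made-up values), the gradient that flows back through `clip_gradient_by_norm` is capped at the requested norm:

x = tf.constant([[10.0, 10.0]])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = clip_gradient_by_norm(x, 1.0)
    loss = tf.reduce_sum(3.0 * y)  # upstream gradient is 3.0 per element
print(tape.gradient(loss, x))  # clipped so its norm is at most 1.0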
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
def log1pexp(x): return tf.log(1 + tf.exp(x)) class Grad(object): def __init__(self, f): self.f = f def __call__(self, x): x = tf.convert_to_tensor(x) with tf.GradientTape() as tape: tape.watch(x) r = self.f(x) g = tape.gradient(r, x) return g grad_log1pexp = Grad(log1pexp) # The gradient computation works fine at x = 0. grad_log1pexp(0.).numpy() # However, x = 100 fails because of numerical instability. grad_log1pexp(100.).numpy()
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value of `tf.exp(x)` that is computed during the forward pass, making it more efficient by eliminating redundant calculations:
@tf.custom_gradient def log1pexp(x): e = tf.exp(x) def grad(dy): return dy * (1 - 1 / (1 + e)) return tf.log(1 + e), grad grad_log1pexp = Grad(log1pexp) # As before, the gradient computation works fine at x = 0. grad_log1pexp(0.).numpy() # And the gradient computation also works at x = 100. grad_log1pexp(100.).numpy()
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
import time def measure(x, steps): # TensorFlow initializes a GPU the first time it's used, exclude from timing. tf.matmul(x, x) start = time.time() for i in range(steps): x = tf.matmul(x, x) # tf.matmul can return before completing the matrix multiplication # (e.g., can return after enqueing the operation on a CUDA stream). # The x.numpy() call below will ensure that all enqueued operations # have completed (and will also copy the result to host memory, # so we're including a little more than just the matmul operation # time). _ = x.numpy() end = time.time() return end - start shape = (1000, 1000) steps = 200 print("Time to multiply a {} matrix by itself {} times:".format(shape, steps)) # Run on CPU: with tf.device("/cpu:0"): print("CPU: {} secs".format(measure(tf.random_normal(shape), steps))) # Run on GPU, if available: if tf.test.is_gpu_available(): with tf.device("/gpu:0"): print("GPU: {} secs".format(measure(tf.random_normal(shape), steps))) else: print("GPU: not found")
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
A `tf.Tensor` object can be copied to a different device to execute its operations:
if tf.test.is_gpu_available(): x = tf.random_normal([10, 10]) x_gpu0 = x.gpu() x_cpu = x.cpu() _ = tf.matmul(x_cpu, x_cpu) # Runs on CPU _ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
Benchmarks For compute-heavy models, such as [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) training on a GPU, eager execution performance is comparable to graph execution. But this gap grows larger for models with less computation, and there is work to be done on optimizing hot code paths for models with lots of small operations. Work with graphs While eager execution makes development and debugging more interactive, TensorFlow graph execution has advantages for distributed training, performance optimizations, and production deployment. However, writing graph code can feel different from writing regular Python code and can be more difficult to debug. For building and training graph-constructed models, the Python program first builds a graph representing the computation, then invokes `Session.run` to send the graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform-independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system). Deploying code written for eager execution is more difficult: either generate a graph from the model, or run the Python runtime and code directly on the server. Write compatible code The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled. Most TensorFlow operations work during eager execution, but there are some things to keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs, like `tf.keras.layers` and `tf.keras.Model`, since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution. It's best to write code for both eager execution *and* graph execution. This gives you eager's interactive experimentation and debuggability with the distributed performance benefits of graph execution. Write, debug, and iterate in eager execution, then import the model graph for production deployment. Use `tf.train.Checkpoint` to save and restore model variables; this allows movement between eager and graph execution environments. See the examples in: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environment Selectively enable eager execution in a TensorFlow graph environment using `tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not* been called.
def my_py_func(x): x = tf.matmul(x, x) # You can use tf ops print(x) # but it's eager! return x with tf.Session() as sess: x = tf.placeholder(dtype=tf.float32) # Call eager function in graph! pf = tf.py_func(my_py_func, [x], tf.float32) sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
_____no_output_____
Apache-2.0
site/en/r1/guide/eager.ipynb
PRUBHTEJ/docs-1
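To make the "write compatible code" advice above concrete, here is a hedged sketch: a function that uses only TensorFlow ops runs eagerly in this session, and the identical code would instead build graph nodes (to be evaluated with `Session.run`) in a session where eager execution was never enabled.

def double(x):
    # Only tf ops: behaves correctly under both eager and graph execution.
    return tf.multiply(x, 2.0)

print(double(tf.constant(3.0)))  # eager: prints a concrete tf.Tensor with value 6.0
# In a fresh session without tf.enable_eager_execution(), double() would return
# a symbolic tensor to be evaluated via tf.Session().run().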
Interactive Variant Annotation The following query retrieves variants from [DeepVariant-called Platinum Genomes](http://googlegenomics.readthedocs.io/en/latest/use_cases/discover_public_data/platinum_genomes_deepvariant.html) and interactively JOINs them with [ClinVar](http://googlegenomics.readthedocs.io/en/latest/use_cases/discover_public_data/clinvar_annotations.html). To run this on your own table of variants, change the table name and call_set_name in the `sample_variants` sub query below. For an ongoing investigation, you may wish to repeat this query each time a new version of ClinVar is released and [loaded into BigQuery](https://github.com/verilylifesciences/variant-annotation/tree/master/curation/tables/README.md) by changing the table name in the `rare_pathenogenic_variants` sub query. See also similar examples for GRCh37 in https://github.com/googlegenomics/bigquery-examples/tree/master/platinumGenomes
%%bq query #standardSQL -- -- Return variants for sample NA12878 that are: -- annotated as 'pathogenic' or 'other' in ClinVar -- with observed population frequency less than 5% -- WITH sample_variants AS ( SELECT -- Remove the 'chr' prefix from the reference name. REGEXP_EXTRACT(reference_name, r'chr(.+)') AS chr, start, reference_bases, alt, call.call_set_name FROM `genomics-public-data.platinum_genomes_deepvariant.single_sample_genome_calls` v, v.call call, v.alternate_bases alt WITH OFFSET alt_offset WHERE call_set_name = 'NA12878_ERR194147' -- Require that at least one genotype matches this alternate. AND EXISTS (SELECT gt FROM UNNEST(call.genotype) gt WHERE gt = alt_offset+1) ), -- -- rare_pathenogenic_variants AS ( SELECT -- ClinVar does not use the 'chr' prefix for reference names. reference_name AS chr, start, reference_bases, alt, CLNHGVS, CLNALLE, CLNSRC, CLNORIGIN, CLNSRCID, CLNSIG, CLNDSDB, CLNDSDBID, CLNDBN, CLNREVSTAT, CLNACC FROM `bigquery-public-data.human_variant_annotation.ncbi_clinvar_hg38_20170705` v, v.alternate_bases alt WHERE -- Variant Clinical Significance, 0 - Uncertain significance, 1 - not provided, -- 2 - Benign, 3 - Likely benign, 4 - Likely pathogenic, 5 - Pathogenic, -- 6 - drug response, 7 - histocompatibility, 255 - other EXISTS (SELECT sig FROM UNNEST(CLNSIG) sig WHERE REGEXP_CONTAINS(sig, '(4|5|255)')) -- TRUE if >5% minor allele frequency in 1+ populations AND G5 IS NULL ) -- -- SELECT * FROM sample_variants JOIN rare_pathenogenic_variants USING(chr, start, reference_bases, alt) ORDER BY chr, start, reference_bases, alt
_____no_output_____
Apache-2.0
interactive/InteractiveVariantAnnotation.ipynb
bashir2/variant-annotation
Dealing with a compound dataset Using dtypes, one can detect the field names for the dtype, copy the values into an array, and convert them to np.str. Then a pandas DataFrame can parse those properly as a table.
import pandas as pd
import numpy as np
import h5py

h5 = h5py.File('../../tests/historical_v82.h5', 'r')  # open read-only; newer h5py requires an explicit mode
x = h5.get('/hydro/geometry/reservoir_node_connect')
_____no_output_____
MIT
docs/html/notebooks/h5_compound_dataset_as_dataframe.ipynb
sainjacobs/pydsm
See below for how to use the dtype of a returned record to list the field names.
x[0].dtype.names
_____no_output_____
MIT
docs/html/notebooks/h5_compound_dataset_as_dataframe.ipynb
sainjacobs/pydsm
Now the names can be used to get the value for each field.
x[0]['res_name']
_____no_output_____
MIT
docs/html/notebooks/h5_compound_dataset_as_dataframe.ipynb
sainjacobs/pydsm
Using nested list comprehensions to get the values as arrays of arrays, with everything converted to strings.
pd.DataFrame([[v[name].astype(np.str) for name in v.dtype.names] for v in x])
_____no_output_____
MIT
docs/html/notebooks/h5_compound_dataset_as_dataframe.ipynb
sainjacobs/pydsm
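An alternative sketch for the same conversion: read the whole compound dataset into a NumPy structured array and let pandas build the columns from the field names, decoding byte strings afterwards. This assumes the string-like fields are fixed-width byte strings, which is how h5py typically returns them.

arr = x[:]  # read the full compound dataset into a structured ndarray
df = pd.DataFrame(arr)
for col in df.columns:
    if df[col].dtype == object:  # byte-string columns come back as object dtype
        df[col] = df[col].str.decode('utf-8')
df.head()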
4.3.4 Identify words in the extracted sentences that match a Japanese polarity dictionary and compute the tone From the extracted sentences, identify matching words using the Japanese sentiment polarity dictionary published by Inui and Suzuki (2008) and compute the tone. Here we use oseti, a Python library that performs polarity judgments based on that Japanese sentiment polarity dictionary.
import glob

def call_sample_dir_name(initial_name):
    if initial_name == "a":
        return "AfterSample"
    elif initial_name == "t":
        return "TransitionPeriodSample"
    else:
        return "BeforeSample"

def call_csv_files(sample_dir_name="AfterSample", data_frame_spec=None, industry_spec=None):
    if data_frame_spec is None:
        if industry_spec is None:
            csv_files = glob.glob('/home/jovyan/3FetchingMDandA' + f"/**/{sample_dir_name}/*.csv", recursive=True)
        else:
            csv_files = glob.glob('/home/jovyan/3FetchingMDandA' + f"/**/{industry_spec}/{sample_dir_name}/*.csv", recursive=True)
    else:
        if industry_spec is None:
            csv_files = glob.glob(f'/home/jovyan/3FetchingMDandA/{data_frame_spec}' + f"/**/{sample_dir_name}/*.csv", recursive=True)
        else:
            csv_files = glob.glob(f"/home/jovyan/3FetchingMDandA/{data_frame_spec}/{industry_spec}/{sample_dir_name}/*.csv", recursive=True)
    return csv_files

import glob
import pandas as pd
import os
import oseti
# analyzer = oseti.Analyzer()

def make_atb_li(atb_file, analyzer):
    atb_df = pd.read_csv(atb_file, index_col=0)
    if len(atb_df) < 1:
        return 0
    texts_joined = "".join(list(atb_df["Text"].values))
    # workaround for parse errors
    texts_joined = texts_joined.replace("\n", "")
    scores = analyzer.count_polarity(texts_joined)
    sum_plus = 0
    sum_minus = 0
    for score in scores:
        sum_plus += score["positive"]
        sum_minus += score["negative"]
    ret_val = (sum_plus - sum_minus)/(sum_plus + sum_minus)
    return ret_val

# analyze all companies this time
data_frame_spec = None
industry_spec = None
# before
dir_name_b = call_sample_dir_name("b")
before_csv_files = call_csv_files(dir_name_b, data_frame_spec, industry_spec)
# transition
dir_name_t = call_sample_dir_name("t")
transition_period_csv_files = call_csv_files(dir_name_t, data_frame_spec, industry_spec)
# after
dir_name_a = call_sample_dir_name("a")
after_csv_files = call_csv_files(dir_name_a, data_frame_spec, industry_spec)
print("-------- finished step 1 --------")

analyzer = oseti.Analyzer()  # create the analyzer once and reuse it for speed
before_li = []
f = before_li.append
for b_file in before_csv_files:
    tmp = make_atb_li(b_file, analyzer)
    f(tmp)
print("-------- finished step 2 --------")

transition_period_li = []
f = transition_period_li.append
for t_file in transition_period_csv_files:
    tmp = make_atb_li(t_file, analyzer)
    f(tmp)
print("-------- finished step 3 --------")

after_li = []
f = after_li.append
for a_file in after_csv_files:
    tmp = make_atb_li(a_file, analyzer)
    f(tmp)
print("-------- finished step 4 --------")

# sample size for each period
print(len(after_li), len(transition_period_li), len(before_li))

# histogram: before
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.ylabel("Number of companies")
plt.xlabel("TONE")
ax.hist(before_li, bins=50)

# histogram: transition
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.ylabel("Number of companies")
plt.xlabel("TONE")
ax.hist(transition_period_li, bins=50)

# histogram: after
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.ylabel("Number of companies")
plt.xlabel("TONE")
ax.hist(after_li, bins=50)

# normality tests: before
import scipy.stats as stats
print(stats.shapiro(before_li))
print(stats.kstest(before_li, "norm"))

# normality tests: transition
print(stats.shapiro(transition_period_li))
print(stats.kstest(transition_period_li, "norm"))

# normality tests: after
print(stats.shapiro(after_li))
print(stats.kstest(after_li, "norm"))

import numpy as np
# F-test for equality of variances
# e.g. A=before_li, B=transition_period_li
def exec_f_test(A, B):
    A_var = np.var(A, ddof=1)  # unbiased variance of A
    B_var = np.var(B, ddof=1)  # unbiased variance of B
    A_df = len(A) - 1  # degrees of freedom of A
    B_df = len(B) - 1  # degrees of freedom of B
    f = A_var / B_var  # F ratio
    one_sided_pval1 = stats.f.cdf(f, A_df, B_df)  # one-sided p-value 1
    one_sided_pval2 = stats.f.sf(f, A_df, B_df)   # one-sided p-value 2
    two_sided_pval = min(one_sided_pval1, one_sided_pval2) * 2  # two-sided p-value
    print('F: ', round(f, 3))
    print('p-value: ', round(two_sided_pval, 3))

A = before_li
B = transition_period_li
exec_f_test(A, B)
A = transition_period_li
B = after_li
exec_f_test(A, B)
A = before_li
B = after_li
exec_f_test(A, B)

import numpy
print("before_li: ", numpy.average(before_li))
print("transition_period_li: ", numpy.average(transition_period_li))
print("after_li: ", numpy.average(after_li))

# Welch's t-test
stats.ttest_ind(before_li, transition_period_li, equal_var=False)
# Student's t-test
stats.ttest_ind(transition_period_li, after_li, axis=0, equal_var=True, nan_policy='propagate')
# Student's t-test
stats.ttest_ind(before_li, after_li, axis=0, equal_var=True, nan_policy='propagate')
# Mann-Whitney U test
stats.mannwhitneyu(before_li, transition_period_li, alternative='two-sided')
stats.mannwhitneyu(transition_period_li, after_li, alternative='two-sided')
stats.mannwhitneyu(before_li, after_li, alternative='two-sided')
_____no_output_____
MIT
src/4AnalysingText/analyzing_text.ipynb
Densuke-fitness/MDandAAnalysisFlow
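As a minimal illustration of the tone formula used above, tone = (positives - negatives) / (positives + negatives), here is a hedged sketch on a single made-up Japanese sentence; the exact counts depend on the dictionary and oseti version:

import oseti

analyzer = oseti.Analyzer()
# count_polarity returns one dict of positive/negative counts per sentence.
scores = analyzer.count_polarity('遅いし高い。だけど、サービスは良い。')
pos = sum(s['positive'] for s in scores)
neg = sum(s['negative'] for s in scores)
print((pos - neg) / (pos + neg))  # ranges from -1 (all negative) to +1 (all positive)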
Publications markdown generator for academicpages Takes a TSV of publications with metadata and converts them for use with [academicpages.github.io](academicpages.github.io). This is an interactive Jupyter notebook ([see more info here](http://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html)). The core python code is also in `publications.py`. Run either from the `markdown_generator` folder after replacing `publications.tsv` with one containing your data. TODO: Make this work with BibTeX and other databases of citations, rather than Stuart's non-standard TSV format and citation style. Data format The TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top. - `excerpt` and `paper_url` can be blank, but the others must have values. - `pub_date` must be formatted as YYYY-MM-DD. - `url_slug` will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be `YYYY-MM-DD-[url_slug].md` and the permalink will be `https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]` This is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).
!cat publications.tsv
pub_date title venue excerpt citation url_slug paper_url 2012 The effect of surface wave propagation on neural responses to vibration in primate glabrous skin. PloS one Manfredi LR, Baker AT, Elias DO, Dammann III JF, Zielinski MC, Polashock VS, Bensmaia SJ. The effect of surface wave propagation on neural responses to vibration in primate glabrous skin. PloS one. 2012;7(2):e31203. Neuro-1 2012 Quantification of islet size and architecture. Islets Kilimnik G, Jo J, Periwal V, Zielinski MC, Hara M. Quantification of islet size and architecture. Islets. 2012;4(2):167–172. Diabetes-1 2013 Quantitative analysis of pancreatic polypeptide cell distribution in the human pancreas. PloS one Wang X, Zielinski MC, Misawa R, Wen P, Wang T-Y, Wang C-Z, Witkowski P, Hara M. Quantitative analysis of pancreatic polypeptide cell distribution in the human pancreas. PloS one. 2013;8(1):e55501. Diabetes-2 2013 Regional differences in islet distribution in the human pancreas--preferential beta-cell loss in the head region in patients with type 2 diabetes. PloS one Wang X, Misawa R, Zielinski MC, Cowen P, Jo J, Periwal V, Ricordi C, Khan A, Szust J, Shen J. Regional differences in islet distribution in the human pancreas--preferential beta-cell loss in the head region in patients with type 2 diabetes. PLoS One. 2013;8(6). Diabetes-3 2013 Distinct function of the head region of human pancreas in the pathogenesis of diabetes. Islets Savari O, Zielinski MC, Wang X, Misawa R, Millis JM, Witkowski P, Hara M. Distinct function of the head region of human pancreas in the pathogenesis of diabetes. Islets. 2013;5(5):226–228. Diabetes-4 2014 Natural scenes in tactile texture. Journal of neurophysiology Manfredi LR, Saal HP, Brown KJ, Zielinski MC, Dammann JF, Polashock VS, Bensmaia SJ. Natural scenes in tactile texture. Journal of neurophysiology. 2014;111(9):1792–1802. Neuro-2 2014 Improved coating of pancreatic islets with regulatory T cells to create local immunosuppression by using the biotin-polyethylene glycol-succinimidyl valeric acid ester molecule. Transplantation proceedings Gołąb K, Kizilel S, Bal T, Hara M, Zielinski M, Grose R, Savari O, Wang X-J, Wang L-J, Tibudan M. Improved Coating of Pancreatic Islets With Regulatory T cells to Create Local Immunosuppression by Using the Biotin-polyethylene Glycol-succinimidyl Valeric Acid Ester Molecule. Transplantation proceedings. 2014;46(6):1967–1971. Diabetes-4 2015 Evidence of non-pancreatic beta cell-dependent roles of Tcf7l2 in the regulation of glucose metabolism in mice. Human molecular genetics Bailey KA, Savic D, Zielinski M, Park S-Y, Wang L, Witkowski P, Brady M, Hara M, Bell GI, Nobrega MA. Evidence of non-pancreatic beta cell-dependent roles of Tcf7l2 in the regulation of glucose metabolism in mice. Human molecular genetics. 2015;24(6):1646–1654. Diabetes-5 2016 Stereological analyses of the whole human pancreas. Scientific reports A Poudel, JL Fowler, MC Zielinski, G Kilimnik, M Hara. Stereological analyses of the whole human pancreas. Scientific Reports. 2016;6:34049. Diabetes-6 2016 Interplay between Hippocampal Sharp-Wave-Ripple Events and Vicarious Trial and Error Behaviors in Decision Making. Neuron AE Papale, MC Zielinski, LM Frank, SP Jadhav, AD Redish. Interplay between hippocampal sharp wave ripple events and vicarious trial and error behaviors in decision making. Neuron. 2016;92(5):975-982. Neuro-3 2017 Preservation of Reduced Numbers of Insulin-Positive Cells in Sulfonylurea-Unresponsive KCNJ11-Related Diabetes. 
The Journal of clinical endocrinology and metabolism SA Greeley, MC Zielinski, A Poudel, H Ye, S Berry, JB Taxy, D Carmody, DF Steiner, LH Philipson, JR Wood, M Hara. Preservation of Reduced Numbers of Insulin-Positive Cells in Sulfonylurea-Unresponsive KCNJ11-Related Diabetes. Journal of Clinical Endocrinology and Metabolism. 2017;102(1):1-5. Diabetes-7 2017 The role of replay and theta sequences in mediating hippocampal-prefrontal interactions for memory and cognition. Hippocampus MC Zielinski, W Tang, SP Jadhav. The role of replay and theta sequences in mediating hippocampal-prefrontal interactions for memory and cognition Hippocampus. 2017;10.1002/hipo.22821 Neuro-4
MIT
markdown_generator/publications.ipynb
mczielinski/mczielinski.github.io
Import pandas We are using the very handy pandas library for dataframes.
import pandas as pd
_____no_output_____
MIT
markdown_generator/publications.ipynb
mczielinski/mczielinski.github.io
Import TSV Pandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or `\t`. I found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.
publications = pd.read_csv("publications.tsv", sep="\t", header=0) publications
_____no_output_____
MIT
markdown_generator/publications.ipynb
mczielinski/mczielinski.github.io
Escape special characters YAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML-encoded equivalents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.
html_escape_table = { "&": "&amp;", '"': "&quot;", "'": "&apos;" } def html_escape(text): """Produce entities within text.""" return "".join(html_escape_table.get(c,c) for c in text)
_____no_output_____
MIT
markdown_generator/publications.ipynb
mczielinski/mczielinski.github.io
Creating the markdown files This is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatenate a big string (```md```) that contains the markdown for each publication. It does the YAML metadata first, then does the description for the individual page.
import os for row, item in publications.iterrows(): md_filename = str(item.pub_date) + "-" + item.url_slug + ".md" html_filename = str(item.pub_date) + "-" + item.url_slug year = item.pub_date[:4] ## YAML variables md = "---\ntitle: \"" + item.title + '"\n' md += """collection: publications""" md += """\npermalink: /publication/""" + html_filename if len(str(item.excerpt)) > 5: md += "\nexcerpt: '" + html_escape(item.excerpt) + "'" md += "\ndate: " + str(item.pub_date) md += "\nvenue: '" + html_escape(item.venue) + "'" if len(str(item.paper_url)) > 5: md += "\npaperurl: '" + item.paper_url + "'" md += "\ncitation: '" + html_escape(item.citation) + "'" md += "\n---" ## Markdown description for individual page if len(str(item.excerpt)) > 5: md += "\n" + html_escape(item.excerpt) + "\n" if len(str(item.paper_url)) > 5: md += "\n[Download paper here](" + item.paper_url + ")\n" md += "\nRecommended citation: " + item.citation md_filename = os.path.basename(md_filename) with open("../_publications/" + md_filename, 'w') as f: f.write(md)
_____no_output_____
MIT
markdown_generator/publications.ipynb
mczielinski/mczielinski.github.io
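If you want to preview the generated markdown before writing anything to disk, here is a dry-run sketch (assuming the `publications` dataframe and `html_escape` from the cells above) that builds the front matter for the first row and prints it:

# Build and print the YAML front matter for row 0 without touching the filesystem.
item = publications.iloc[0]
preview = "---\ntitle: \"" + item.title + '"\n'
preview += "collection: publications"
preview += "\npermalink: /publication/" + str(item.pub_date) + "-" + item.url_slug
preview += "\nvenue: '" + html_escape(item.venue) + "'"
preview += "\ncitation: '" + html_escape(item.citation) + "'"
preview += "\n---"
print(preview)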
These files are in the `_publications` directory, one level up from where we're working.
!ls ../_publications/
!cat ../_publications/2009-10-01-paper-title-number-1.md
---
title: "Paper Title Number 1"
collection: publications
permalink: /publication/2009-10-01-paper-title-number-1
excerpt: 'This paper is about the number 1. The number 2 is left for future work.'
date: 2009-10-01
venue: 'Journal 1'
paperurl: 'http://academicpages.github.io/files/paper1.pdf'
citation: 'Your Name, You. (2009). &quot;Paper Title Number 1.&quot; <i>Journal 1</i>. 1(1).'
---
This paper is about the number 1. The number 2 is left for future work.

[Download paper here](http://academicpages.github.io/files/paper1.pdf)

Recommended citation: Your Name, You. (2009). "Paper Title Number 1." <i>Journal 1</i>. 1(1).
MIT
markdown_generator/publications.ipynb
mczielinski/mczielinski.github.io
**[Deep Learning Course Home Page](https://www.kaggle.com/learn/deep-learning)**

---

Introduction

You've seen how to build a model from scratch to identify handwritten digits. You'll now build a model to identify different types of clothing. To make models that train quickly, we'll work with very small (low-resolution) images. As an example, your model will take an image like this and identify it as a shoe:

![Imgur](https://i.imgur.com/GyXOnSB.png)

Data Preparation

This code is supplied, and you don't need to change it. Just run the cell below. A small sketch of what `prep_data` produces follows it.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

img_rows, img_cols = 28, 28
num_classes = 10

def prep_data(raw):
    # Column 0 holds the class label; one-hot encode it.
    y = raw[:, 0]
    out_y = keras.utils.to_categorical(y, num_classes)
    # The remaining 784 columns are pixels; reshape to 28x28x1 images and scale to [0, 1].
    x = raw[:, 1:]
    num_images = raw.shape[0]
    out_x = x.reshape(num_images, img_rows, img_cols, 1)
    out_x = out_x / 255
    return out_x, out_y

fashion_file = "../input/fashionmnist/fashion-mnist_train.csv"
fashion_data = np.loadtxt(fashion_file, skiprows=1, delimiter=',')
x, y = prep_data(fashion_data)

# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning.exercise_7 import *
print("Setup Complete")
Using TensorFlow version 2.1.0 Setup Complete
MIT
deep_learning/07-deep-learning-from-scratch.ipynb
drakearch/kaggle-courses
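To see what `prep_data` is doing, here is a small sketch on a fake two-image dataset (all values made up): column 0 is the class label, which gets one-hot encoded, and the remaining 784 pixel columns are reshaped into 28x28x1 images and scaled to [0, 1].

fake_raw = np.zeros((2, 1 + img_rows * img_cols))
fake_raw[0, 0] = 3     # class label lives in column 0
fake_raw[:, 1:] = 255  # pixel intensities fill the remaining 784 columns
demo_x, demo_y = prep_data(fake_raw)
print(demo_x.shape)    # (2, 28, 28, 1)
print(demo_x.max())    # 1.0 after dividing by 255
print(demo_y[0])       # one-hot vector with a 1 in position 3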
1) Start the model

Create a `Sequential` model called `fashion_model`. Don't add layers yet.
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D

# Your Code Here
fashion_model = Sequential()

q_1.check()
#q_1.solution()
_____no_output_____
MIT
deep_learning/07-deep-learning-from-scratch.ipynb
drakearch/kaggle-courses
2) Add the first layer

Add the first `Conv2D` layer to `fashion_model`. It should have 12 filters, a kernel_size of 3 and the `relu` activation function. The first layer always requires that you specify the `input_shape`. We have saved the number of rows and columns to the variables `img_rows` and `img_cols` respectively, so the input shape in this case is `(img_rows, img_cols, 1)`. A quick shape check is sketched after the code below.
# Your code here
fashion_model.add(Conv2D(12,
                         kernel_size=(3, 3),
                         activation='relu',
                         input_shape=(img_rows, img_cols, 1)))
q_2.check()
# q_2.hint()
#q_2.solution()
_____no_output_____
MIT
deep_learning/07-deep-learning-from-scratch.ipynb
drakearch/kaggle-courses
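To confirm what the layer did, `fashion_model.summary()` prints the output shape: a 3x3 kernel with no padding shrinks each 28-pixel dimension by 2, so the 12 filters produce a 26x26x12 output. A quick sketch:

fashion_model.summary()
# Expect something like: conv2d (Conv2D)  (None, 26, 26, 12)  120
# 120 parameters = 12 filters * (3*3*1 weights + 1 bias)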
3) Add the remaining layers

1. Add 2 more convolutional (`Conv2D`) layers with 20 filters each, 'relu' activation, and a kernel size of 3. Follow that with a `Flatten` layer, and then a `Dense` layer with 100 neurons.
2. Add your prediction layer to `fashion_model`. This is a `Dense` layer. We already have a variable called `num_classes`. Use this variable when specifying the number of nodes in this layer. The activation should be `softmax` (or you will have problems later).

An alternative one-call construction is sketched after the code below.
# Your code here
fashion_model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
fashion_model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
fashion_model.add(Flatten())
fashion_model.add(Dense(100, activation='relu'))
fashion_model.add(Dense(num_classes, activation='softmax'))

q_3.check()
# q_3.solution()
_____no_output_____
MIT
deep_learning/07-deep-learning-from-scratch.ipynb
drakearch/kaggle-courses
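As a sanity check on the architecture, here is a sketch of the same stack built in a single `Sequential` call (an alternative style, not required by the exercise):

alt_model = Sequential([
    Conv2D(12, kernel_size=(3, 3), activation='relu',
           input_shape=(img_rows, img_cols, 1)),
    Conv2D(20, kernel_size=(3, 3), activation='relu'),
    Conv2D(20, kernel_size=(3, 3), activation='relu'),
    Flatten(),
    Dense(100, activation='relu'),
    Dense(num_classes, activation='softmax'),
])
alt_model.summary()  # the final Dense layer should report output shape (None, 10)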
4) Compile Your Model

Compile `fashion_model` with the `compile` method. Specify the following arguments:

1. `loss = "categorical_crossentropy"`
2. `optimizer = 'adam'`
3. `metrics = ['accuracy']`
# Your code to compile the model in this cell
fashion_model.compile(loss=keras.losses.categorical_crossentropy,
                      optimizer='adam',
                      metrics=['accuracy'])
q_4.check()
# q_4.solution()
_____no_output_____
MIT
deep_learning/07-deep-learning-from-scratch.ipynb
drakearch/kaggle-courses
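Note that the cell above passes the callable `keras.losses.categorical_crossentropy`, while the instructions show the string name; Keras accepts either form, so the following sketch is equivalent (re-compiling is harmless here):

fashion_model.compile(loss="categorical_crossentropy",
                      optimizer='adam',
                      metrics=['accuracy'])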
5) Fit The Model

Run the command `fashion_model.fit`. The arguments you will use are:

1. The data used to fit the model. First comes the data holding the images, and second is the data with the class labels to be predicted. Look at the first code cell (which was supplied to you) where we called `prep_data` to find the variable names for these.
2. `batch_size = 100`
3. `epochs = 4`
4. `validation_split = 0.2`

When you run this command, you can watch your model start improving. You will see validation accuracies after each epoch. A sketch for inspecting the returned training history follows the code below.
# Your code to fit the model here
fashion_model.fit(x, y,
                  batch_size=100,
                  epochs=4,
                  validation_split=0.2)
q_5.check()
#q_5.solution()
_____no_output_____
MIT
deep_learning/07-deep-learning-from-scratch.ipynb
drakearch/kaggle-courses
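`fit` returns a `History` object whose `history` dict holds the per-epoch metrics printed during training. A sketch for inspecting validation accuracy afterwards (the key is `'val_accuracy'` in TF 2.x; older versions used `'val_acc'`; note that calling `fit` again continues training from the current weights):

history = fashion_model.fit(x, y, batch_size=100, epochs=4, validation_split=0.2)
print(history.history.keys())           # e.g. dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
print(history.history['val_accuracy'])  # one value per epoch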