markdown | code | output | license | path | repo_name
---|---|---|---|---|---
*Step 8: Visualization Check* | fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.set_title(str(channel_1_color + ' Filled'))
ax1.imshow(filled_c1, cmap='gray')
ax2 = fig.add_subplot(2,2,2)
ax2.set_title(str(channel_2_color + ' Filled'))
ax2.imshow(filled_c2, cmap='gray')
fig.set_size_inches(10.5, 10.5, forward=True) | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 9: Labeling Objects* | # Label connected components in each filled channel
label_objects1, nb_labels1 = ndi.label(filled_c1)
# Measure component sizes and keep only objects larger than 100 pixels (index 0 is the background)
sizes1 = np.bincount(label_objects1.ravel())
mask_sizes1 = sizes1 > 100
mask_sizes1[0] = 0
cells_cleaned_c1 = mask_sizes1[label_objects1]
label_objects2, nb_labels2 = ndi.label(filled_c2)
sizes2 = np.bincount(label_objects2.ravel())
mask_sizes2 = sizes2 > 100
mask_sizes2[0] = 0
cells_cleaned_c2 = mask_sizes2[label_objects2]
# Relabel the cleaned masks so object IDs are consecutive
labeled_c1, _ = ndi.label(cells_cleaned_c1)
labeled_c2, _ = ndi.label(cells_cleaned_c2) | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
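The size-based cleanup in Step 9 can also be expressed with scikit-image. The sketch below is an optional alternative and assumes `skimage` is installed in this environment; `remove_small_objects` keeps connected components of at least `min_size` pixels, which closely mirrors the `bincount` filter above.

```python
# Optional alternative (sketch): size filtering with scikit-image, assuming skimage is available
from skimage import morphology

cells_cleaned_c1_alt = morphology.remove_small_objects(label_objects1 > 0, min_size=100)
labeled_c1_alt, _ = ndi.label(cells_cleaned_c1_alt)
```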
*Step 10: Visualization Check* | fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.set_title(str(channel_1_color + ' Labeled'))
ax1.imshow(labeled_c1)
ax2 = fig.add_subplot(2,2,2)
ax2.set_title(str(channel_2_color + ' Labeled'))
ax2.imshow(labeled_c2)
fig.set_size_inches(10.5, 10.5, forward=True) | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 11: Get Region Props* | regionprops_c1 = measure.regionprops(labeled_c1)
regionprops_c2 = measure.regionprops(labeled_c2)
df = pd.DataFrame(columns=['centroid x', 'centroid y','equiv_diam'])
k = 1
for props in regionprops_c1:
#Get the properties that I need for areas
#Add them into a pandas dataframe that has the same number of rows as objects detected
#
centroid = props.centroid
centroid_x = centroid[0]
centroid_y = centroid[1]
equiv_diam = props.equivalent_diameter
df.loc[k] = [centroid_x, centroid_y, equiv_diam]
k = k + 1
df2 = pd.DataFrame(columns=['centroid x', 'centroid y','equiv_diam'])
k = 1
for props in regionprops_c2:
#Get the properties that I need for areas
#Add them into a pandas dataframe that has the same number of rows as objects detected
#
centroid = props.centroid
centroid_x = centroid[0]
centroid_y = centroid[1]
equiv_diam = props.equivalent_diameter
df2.loc[k] = [centroid_x, centroid_y, equiv_diam]
k = k + 1
count_c1 = df.shape[0]
print('Count ' + channel_1_color + ': ' + str(count_c1))
count_c2 = df2.shape[0]
print('Count ' + channel_2_color + ': ' + str(count_c2)) | Count Blue: 114
Count Green: 16
| MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
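The per-object loop in Step 11 can be replaced by a single vectorized call if scikit-image >= 0.16 is available; the sketch below is an assumption-based alternative, not part of the original script. Note that skimage centroids are returned as (row, col), which is what the loop above stores as 'centroid x' and 'centroid y'.

```python
# Vectorized alternative (sketch), assuming scikit-image >= 0.16
props_c1 = measure.regionprops_table(labeled_c1, properties=['centroid', 'equivalent_diameter'])
df_alt = pd.DataFrame(props_c1).rename(columns={'centroid-0': 'centroid x',
                                                'centroid-1': 'centroid y',
                                                'equivalent_diameter': 'equiv_diam'})
```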
E4 Sensor Concatenation. This sensor concatenation notebook compiles all .csv files of subjects by sensor type, adds a "Subject_ID" column, and arranges the data in ascending order of ID number. The output of this function is a .csv file. *** **Input:** Properly formatted .csv files from the E4FileFormatter (DBDP preprocessing folder) **Output:** Each .csv file will consist of only one type of sensor data, with a column added for subject ID. Data will be organized numerically by subject ID. Headers will be based on the column names input into the function. *** | import pandas as pd
import glob
import os
os.chdir('../00_source') | _____no_output_____ | Apache-2.0 | DigitalBiomarkers-HumanActivityRecognition/00_source/.ipynb_checkpoints/20_sensor_concat-checkpoint.ipynb | Big-Ideas-Lab/DBDP |
Import & Concatenate Sensor Data of Choice

**Functions:**
* $\underline{data\_concat()}$ - reads all files in the data directory (00_source) and concatenates those of one sensor type; adds a subject ID column to the resulting .csv file
> data = data type to be concatenated, as a string
> cols = column names in the resulting dataframe, as a list
> file_name = output .csv file name, as a string | # Select files of specific data and concat to one dataframe
def data_concat(data, cols, file_name):
"""
data = data type to be concatenated as a string
cols = column names in resulting dataframe as a list
file_name = output csv file name as a string
"""
all_filenames = [i for i in glob.glob(f'*{data}.csv')]
all_filenames = sorted(all_filenames)
df = pd.concat([pd.read_csv(f, header=None).assign(Subject_ID=os.path.basename(f))
for f in all_filenames])
df['Subject_ID'] = df['Subject_ID'].str[:6]
df.columns = cols
df.to_csv(f"../20_Intermediate_files/{file_name}.csv", index = False)
return df
cols = ['Time', 'TEMP', 'Subject_ID']
data_concat("TEMP", cols, "20_Temp_Combined") | _____no_output_____ | Apache-2.0 | DigitalBiomarkers-HumanActivityRecognition/00_source/.ipynb_checkpoints/20_sensor_concat-checkpoint.ipynb | Big-Ideas-Lab/DBDP |
variance | print(ind_data.info()) | <class 'pandas.core.frame.DataFrame'>
Index: 0 entries
Empty DataFrame
None
| MIT | jupyterexample/StudyPandas2.ipynb | newrey/QUANTAXIS |
Data. We assume that data has already been downloaded via notebook [1_data.ipynb](1_data.ipynb). Training data (for input `X` with associated label masks `Y`) can be provided via lists of numpy arrays, where each image can have a different size. Alternatively, a single numpy array can also be used if all images have the same size. Input images can either be two-dimensional (single-channel) or three-dimensional (multi-channel) arrays, where the channel axis comes last. Label images need to be integer-valued. | X = sorted(glob('data/dsb2018/train/images/*.tif'))
Y = sorted(glob('data/dsb2018/train/masks/*.tif'))
assert all(Path(x).name==Path(y).name for x,y in zip(X,Y))
X = list(map(imread,X))
Y = list(map(imread,Y))
n_channel = 1 if X[0].ndim == 2 else X[0].shape[-1] | _____no_output_____ | BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Normalize images and fill small label holes. | axis_norm = (0,1) # normalize channels independently
# axis_norm = (0,1,2) # normalize channels jointly
if n_channel > 1:
print("Normalizing image channels %s." % ('jointly' if axis_norm is None or 2 in axis_norm else 'independently'))
sys.stdout.flush()
X = [normalize(x,1,99.8,axis=axis_norm) for x in tqdm(X)]
Y = [fill_label_holes(y) for y in tqdm(Y)] | 100%|██████████| 447/447 [00:01<00:00, 462.35it/s]
100%|██████████| 447/447 [00:04<00:00, 111.61it/s]
| BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Split into train and validation datasets. | assert len(X) > 1, "not enough training data"
rng = np.random.RandomState(42)
ind = rng.permutation(len(X))
n_val = max(1, int(round(0.15 * len(ind))))
ind_train, ind_val = ind[:-n_val], ind[-n_val:]
X_val, Y_val = [X[i] for i in ind_val] , [Y[i] for i in ind_val]
X_trn, Y_trn = [X[i] for i in ind_train], [Y[i] for i in ind_train]
print('number of images: %3d' % len(X))
print('- training: %3d' % len(X_trn))
print('- validation: %3d' % len(X_val)) | number of images: 447
- training: 380
- validation: 67
| BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Training data consists of pairs of input image and label instances. | i = min(9, len(X)-1)
img, lbl = X[i], Y[i]
assert img.ndim in (2,3)
img = img if img.ndim==2 else img[...,:3]
plt.figure(figsize=(16,10))
plt.subplot(121); plt.imshow(img,cmap='gray'); plt.axis('off'); plt.title('Raw image')
plt.subplot(122); plt.imshow(lbl,cmap=lbl_cmap); plt.axis('off'); plt.title('GT labels')
None; | _____no_output_____ | BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Configuration. A `StarDist2D` model is specified via a `Config2D` object. | print(Config2D.__doc__)
# 32 is a good default choice (see 1_data.ipynb)
n_rays = 32
# Use OpenCL-based computations for data generator during training (requires 'gputools')
use_gpu = False and gputools_available()
# Predict on subsampled grid for increased efficiency and larger field of view
grid = (2,2)
conf = Config2D (
n_rays = n_rays,
grid = grid,
use_gpu = use_gpu,
n_channel_in = n_channel,
)
print(conf)
vars(conf)
if use_gpu:
from csbdeep.utils.tf import limit_gpu_memory
# adjust as necessary: limit GPU memory to be used by TensorFlow to leave some to OpenCL-based computations
limit_gpu_memory(0.8) | _____no_output_____ | BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
**Note:** The trained `StarDist2D` model will *not* predict completed shapes for partially visible objects at the image boundary if `train_shape_completion=False` (which is the default option). | model = StarDist2D(conf, name='stardist', basedir='models') | Using default values: prob_thresh=0.5, nms_thresh=0.4.
| BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Check if the neural network has a large enough field of view to see up to the boundary of most objects. | median_size = calculate_extents(list(Y), np.median)
fov = np.array(model._axes_tile_overlap('YX'))
if any(median_size > fov):
print("WARNING: median object size larger than field of view of the neural network.") | _____no_output_____ | BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Training. You can define a function/callable that applies augmentation to each batch of the data generator; an illustrative sketch follows this cell. | augmenter = None
# def augmenter(X_batch, Y_batch):
# """Augmentation for data batch.
# X_batch is a list of input images (length at most batch_size)
# Y_batch is the corresponding list of ground-truth label images
# """
# # ...
# return X_batch, Y_batch | _____no_output_____ | BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
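As a concrete illustration of the template above, here is a minimal augmenter sketch. It is an assumption (random flips/rotations plus a mild intensity scaling), not the augmentation used to produce any results in this notebook; geometric transforms must be applied identically to image and mask, while intensity changes touch the image only.

```python
# Minimal augmenter sketch (assumed, for illustration): 90-degree rotations, flips, intensity scaling
def random_fliprot(img, mask):
    k = np.random.randint(4)
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if np.random.rand() < 0.5:
        img, mask = np.flipud(img), np.flipud(mask)
    return img, mask

def augmenter(X_batch, Y_batch):
    X_aug, Y_aug = [], []
    for x, y in zip(X_batch, Y_batch):
        x, y = random_fliprot(x, y)
        x = x * np.random.uniform(0.8, 1.2)  # intensity change on the image only
        X_aug.append(x)
        Y_aug.append(y)
    return X_aug, Y_aug
```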
We recommend monitoring the progress during training with [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard). You can start it in the shell from the current working directory like this: `$ tensorboard --logdir=.` Then connect to [http://localhost:6006/](http://localhost:6006/) with your browser. | quick_demo = True
if quick_demo:
print (
"NOTE: This is only for a quick demonstration!\n"
" Please set the variable 'quick_demo = False' for proper (long) training.",
file=sys.stderr, flush=True
)
model.train(X_trn, Y_trn, validation_data=(X_val,Y_val), augmenter=augmenter,
epochs=2, steps_per_epoch=10)
print("====> Stopping training and loading previously trained demo model from disk.", file=sys.stderr, flush=True)
model = StarDist2D(None, name='2D_demo', basedir='../../models/examples')
model.basedir = None # to prevent files of the demo model to be overwritten (not needed for your model)
else:
model.train(X_trn, Y_trn, validation_data=(X_val,Y_val), augmenter=augmenter)
None; | NOTE: This is only for a quick demonstration!
Please set the variable 'quick_demo = False' for proper (long) training.
| BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Threshold optimization. While the default values for the probability and non-maximum suppression thresholds already yield good results in many cases, we still recommend adapting the thresholds to your data. The optimized threshold values are saved to disk and will be automatically loaded with the model. | model.optimize_thresholds(X_val, Y_val) | NMS threshold = 0.3: 80%|████████ | 16/20 [00:46<00:17, 4.42s/it, 0.485 -> 0.796]
NMS threshold = 0.4: 80%|████████ | 16/20 [00:46<00:17, 4.45s/it, 0.485 -> 0.796]
NMS threshold = 0.5: 80%|████████ | 16/20 [00:50<00:18, 4.63s/it, 0.485 -> 0.796]
| BSD-3-Clause | examples/2D/2_training.ipynb | feberhardt/stardist |
Install Earth Engine API and geemap. Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.

The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front end and the back end, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). | # Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize() | _____no_output_____ | MIT | Datasets/Vectors/landsat_wrs2_grid.ipynb | YuePanEdward/earthengine-py-notebooks |
Create an interactive map. The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. | Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map | _____no_output_____ | MIT | Datasets/Vectors/landsat_wrs2_grid.ipynb | YuePanEdward/earthengine-py-notebooks |
Add Earth Engine Python script | # Add Earth Engine dataset
dataset = ee.FeatureCollection('projects/google/wrs2_descending')
empty = ee.Image().byte()  # empty byte image used as a canvas to paint the grid outlines onto
Map.setCenter(-78, 36, 8)
Map.addLayer(empty.paint(dataset, 0, 2), {}, 'Landsat WRS-2 grid') | _____no_output_____ | MIT | Datasets/Vectors/landsat_wrs2_grid.ipynb | YuePanEdward/earthengine-py-notebooks |
Display Earth Engine data layers | Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map | _____no_output_____ | MIT | Datasets/Vectors/landsat_wrs2_grid.ipynb | YuePanEdward/earthengine-py-notebooks |
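If a more visible outline is wanted, the same grid can be painted with a wider stroke and a color palette. The call below is an illustrative assumption (the layer name and palette are chosen arbitrarily), not part of the original notebook.

```python
# Optional styling sketch: thicker, red grid outlines painted onto the same empty byte image
Map.addLayer(empty.paint(dataset, 1, 4), {'palette': ['red']}, 'Landsat WRS-2 grid (red)')
Map
```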
Title: Longest Palindromic Substring
Chapter: Dynamic Programming
Link: [YouTube](https://youtu.be/LYHFaO1lgYM)
ChapterLink: [PlayList](https://youtube.com/playlist?list=PLDV-cCQnUlIa0owhTLK-VT994Qh6XTy4v)
Problem: given a string s, return the longest palindromic substring. | def longestPalindrome(s: str) -> str:
str_length = len(s)
dp_table = [[0] * str_length for i in range(str_length)]
for idx in range (str_length):
dp_table[idx][idx] = 1
for idx in range (str_length -1):
start_char = s[idx]
end_char = s[idx+1]
if start_char == end_char:
dp_table[idx][idx+1] = 2
for idx in range (2, str_length):
row = 0
col = idx
while col < str_length:
start_char = s[row]
end_char = s[col]
prev_count = dp_table[row+1][col-1]
if start_char == end_char and prev_count != 0:
dp_table[row][col] = prev_count + 2
row += 1
col += 1
max_length = 0
start_idx = 0
end_idx = 0
for row in range (str_length):
for col in range (str_length):
crnt_length = dp_table[row][col]
if max_length < crnt_length:
max_length = crnt_length
start_idx = row
end_idx = col
sub_str = s[start_idx:end_idx+1]
return sub_str
print(longestPalindrome(s='baabc'))
| _____no_output_____ | MIT | dynamicProgramming/lgstPalSubstring.ipynb | NoCodeProgram/CodingTest |
Create Temporary Datasets for Analysis. Simulate the proofcheck dataset until you get access to it. | import pandas as pd
mov_meta = pd.read_csv('movie_metadata.csv')
mov_meta.head()
# For the sake of simplicity only look at colmns with numeric data
mov_meta_nrw=mov_meta._get_numeric_data()
mov_meta_nrw.head()
# variable of interest is gross and to make data similar to proofcheck will create binary variable
# 1 if movie gross is greater than budget and 0 if not
mov_meta_nrw['gross_bin'] = mov_meta_nrw['gross']>mov_meta_nrw['budget']
mov_meta_nrw.drop(['gross','budget'], axis=1,inplace=True)
mov_meta_nrw.head()
xtrain = mov_meta_nrw.iloc[:,:-1]
ytrain = mov_meta_nrw.iloc[:,-1]
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr_fit = lr.fit(xtrain,ytrain)
type(ytrain)
from sklearn import datasets
dataset = datasets.load_iris()
dataset | _____no_output_____ | MIT | jupyter/model_comparison.ipynb | mseinstein/Proofcheck |
This notebook shows:
* How to launch the [**StarGANv1**](https://arxiv.org/abs/1711.09020) model for inference
* Examples of results for both
  * attribute **detection**
  * new face **generation** with desired attributes

Here I use a [**PyTorch** implementation](https://github.com/yunjey/stargan) of the StarGANv1 model.

[StarGANv1](https://arxiv.org/abs/1711.09020) was chosen because:
* It provides the ability to generate images **conditionally**. One can control the "amount" of each desired feature via the input vector.
* It can **train (relatively) fast** on (relatively) small resources.

The model is pretty old though and has its own drawbacks:
* It works well only with small-resolution images (~128).
* For bigger images the artifacts are unavoidable. They sometimes happen even for 128x128 images.

The obvious improvement is to use a newer model, e.g., [StarGANv2](https://arxiv.org/abs/1912.01865), which was released in April 2020. It generates much better images at much higher resolution, but it requires both huge resources and lots of time to train.

Prior to running this notebook please download the pretrained models:
```
../scripts/get_models.sh
```

Imports. Import necessary libraries. | import os
import sys
os.environ["KMP_DUPLICATE_LIB_OK"] = "True"
sys.path.extend(["../code/", "../stargan/"])
import torch
import torchvision.transforms as T
from PIL import Image
import matplotlib.pyplot as plt
from config import get_config
from solver import Solver | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Load model. Let's first load the config for the model. It is mostly default except for:
* the model checkpoint path
* the style classes, their order and number

Note that in the original StarGANv1 model 5 classes are used: `[Black_Hair Blond_Hair Brown_Hair Male Young]`.

I retrained the model **4** times for different **face parts**. Each face part has several classes connected to it (see the `DataExploration` notebook):
* **nose**: `[Big_Nose, Pointy_Nose]`
* **mouth**: `[Mouth_Slightly_Open, Smiling]`
* **eyes**: `[Arched_Eyebrows, Bushy_Eyebrows, Bags_Under_Eyes, Eyeglasses, Narrow_Eyes]`
* **hair**: `[Black_Hair, Blond_Hair, Brown_Hair, Gray_Hair, Bald, Bangs, Receding_Hairline, Straight_Hair, Wavy_Hair]`

Here I show the examples only for the **eyes** classes, but all other classes work in the same way and prediction examples are shown in the repo and in other notebooks. | config = get_config("""
--model_save_dir ../models/celeba_128_eyes/
--test_iters 200000
--c_dim 5
--selected_attrs Arched_Eyebrows Bushy_Eyebrows Bags_Under_Eyes Eyeglasses Narrow_Eyes
""") | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Load the model architecture with the provided config. | model = Solver(None, None, config) | Generator(
(main): Sequential(
(0): Conv2d(8, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
(1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(4): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
(6): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(7): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(8): ReLU(inplace=True)
(9): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(10): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(11): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(12): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(13): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(14): ResidualBlock(
(main): Sequential(
(0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(4): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(15): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(16): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(17): ReLU(inplace=True)
(18): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(19): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(20): ReLU(inplace=True)
(21): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), bias=False)
(22): Tanh()
)
)
G
The number of parameters: 8430528
Discriminator(
(main): Sequential(
(0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(1): LeakyReLU(negative_slope=0.01)
(2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(3): LeakyReLU(negative_slope=0.01)
(4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(5): LeakyReLU(negative_slope=0.01)
(6): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(7): LeakyReLU(negative_slope=0.01)
(8): Conv2d(512, 1024, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(9): LeakyReLU(negative_slope=0.01)
(10): Conv2d(1024, 2048, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
(11): LeakyReLU(negative_slope=0.01)
)
(conv1): Conv2d(2048, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(conv2): Conv2d(2048, 5, kernel_size=(2, 2), stride=(1, 1), bias=False)
)
D
The number of parameters: 44762048
| MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Restore model weights. | model.restore_model(model.test_iters) | Loading the trained models from step 200000...
| MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Prediction example. Let's read a test image. Note that the **face position and size** should be comparable to what the model has seen in the training data (CelebA). Here I do not use any face detector and crop the faces manually, but in a production environment one needs to set up a face detector accordingly. | image = Image.open("../data/test.jpg")
image | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
The input to the network is a **3x128x128 image in the range [-1; 1]** (note that the channel axis comes first). Thus one needs to do the preprocessing in advance. | transform = []
transform.append(T.Resize(128))
transform.append(T.CenterCrop(128))
transform.append(T.ToTensor())
transform.append(T.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)))
transform = T.Compose(transform) | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Create a batch of 1 image | x_real = torch.stack([transform(image)])
x_real.shape | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Attributes prediction. Let's first predict the attributes of the image. To do so I use the **Discriminator** part of the network: in the StarGAN architecture it predicts not only the fake/real label but also the classes/attributes/styles of the image. Here I call this vector the **eigen style vector**. Note that due to the possible co-existence of multiple labels and the corresponding training procedure (Sigmoid + BCELoss instead of Softmax + CrossEntropyLoss), I use the sigmoid activation function here and treat the predicted labels separately (instead of softmax and one-of-all). | with torch.no_grad():
eigen_style_vector = torch.sigmoid(model.D(x_real)[1]) | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Below is the probability of each label. The photo indeed depicts a person with fairly big, slightly arched eyebrows. | for proba, tag in zip(eigen_style_vector.numpy()[0], model.selected_attrs):
print(f"{tag:20s}: {proba:.3f}") | Arched_Eyebrows : 0.334
Bushy_Eyebrows : 0.207
Bags_Under_Eyes : 0.054
Eyeglasses : 0.000
Narrow_Eyes : 0.081
| MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
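To turn these probabilities into a set of detected attributes, a simple cutoff can be applied; the 0.5 threshold below is an illustrative assumption. For this particular photo none of the eye attributes pass 0.5, which matches the mild values printed above.

```python
# Keep only attributes whose predicted probability exceeds 0.5 (illustrative threshold)
detected_attrs = [tag for proba, tag in zip(eigen_style_vector.numpy()[0], model.selected_attrs)
                  if proba > 0.5]
print(detected_attrs)
```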
Now let's look at how well the **Generator** model can recreate the face, without altering it, using the eigen style vector we just computed. | with torch.no_grad():
res_eigen = model.G(x_real, eigen_style_vector)
res_eigen.shape | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Plot the original face and the reconstructed one: | plt.figure(figsize=(9, 8))
plt.subplot(121)
_img = model.denorm(x_real).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Original", fontsize=16)
plt.subplot(122)
_img = model.denorm(res_eigen).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eigen style reconstruction", fontsize=16); | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Looks good enough. Face modification using new attributes: now let's try to modify the face starting from the eigen style vector. Let's say I want to **add eyeglasses**. To do so I set the corresponding style vector component to 1. | eigen_style_vector_modified_1 = eigen_style_vector.clone()
eigen_style_vector_modified_1[:, 3] = 1 | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Now the style vector looks like this: | for proba, tag in zip(eigen_style_vector_modified_1.numpy()[0], model.selected_attrs):
print(f"{tag:20s}: {proba:.3f}") | Arched_Eyebrows : 0.334
Bushy_Eyebrows : 0.207
Bags_Under_Eyes : 0.054
Eyeglasses : 1.000
Narrow_Eyes : 0.081
| MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Let's try to generate a face with this modified style vector: | with torch.no_grad():
res_modified_1 = model.G(x_real, eigen_style_vector_modified_1)
res_modified_1.shape | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Plot the faces: | plt.figure(figsize=(13.5, 8))
plt.subplot(131)
_img = model.denorm(x_real).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Original", fontsize=16)
plt.subplot(132)
_img = model.denorm(res_eigen).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eigen style reconstruction", fontsize=16);
plt.subplot(133)
_img = model.denorm(res_modified_1).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eyeglasses", fontsize=16); | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Now let's try to **change two attributes simultaneously**:
* make the eyes narrow
* add archness to the eyebrows | eigen_style_vector_modified_2 = eigen_style_vector.clone()
eigen_style_vector_modified_2[:, 0] = 1
eigen_style_vector_modified_2[:, 4] = 1 | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Now the style vector looks like this: | for proba, tag in zip(eigen_style_vector_modified_2.numpy()[0], model.selected_attrs):
print(f"{tag:20s}: {proba:.3f}") | Arched_Eyebrows : 1.000
Bushy_Eyebrows : 0.207
Bags_Under_Eyes : 0.054
Eyeglasses : 0.000
Narrow_Eyes : 1.000
| MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Let's try to generate a face with this modified style vector: | with torch.no_grad():
res_modified_2 = model.G(x_real, eigen_style_vector_modified_2)
res_modified_2.shape | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
Plot the faces: | plt.figure(figsize=(18, 8))
plt.subplot(141)
_img = model.denorm(x_real).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Original", fontsize=16)
plt.subplot(142)
_img = model.denorm(res_eigen).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eigen style reconstruction", fontsize=16);
plt.subplot(143)
_img = model.denorm(res_modified_1).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Eyeglasses", fontsize=16);
plt.subplot(144)
_img = model.denorm(res_modified_2).numpy()[0].transpose((1, 2, 0))
plt.imshow(_img)
plt.axis("off")
plt.title("Arched eyebrows + Narrow", fontsize=16); | _____no_output_____ | MIT | notebooks/11_InferenceEyes.ipynb | vladimir-chernykh/facestyle-gan |
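Since the generator is conditioned on a continuous style vector, the "amount" of an attribute mentioned in the introduction can also be interpolated rather than set to a hard 0 or 1. The sweep below is an illustrative sketch (the 0-to-1 grid and figure layout are assumptions); index 3 corresponds to 'Eyeglasses' in `model.selected_attrs`.

```python
# Sketch: sweep the 'Eyeglasses' component of the style vector from 0 to 1
strengths = [0.0, 0.25, 0.5, 0.75, 1.0]
plt.figure(figsize=(4.5 * len(strengths), 5))
for i, s_val in enumerate(strengths):
    style = eigen_style_vector.clone()
    style[:, 3] = s_val  # 3 -> 'Eyeglasses'
    with torch.no_grad():
        res = model.G(x_real, style)
    plt.subplot(1, len(strengths), i + 1)
    plt.imshow(model.denorm(res).numpy()[0].transpose((1, 2, 0)))
    plt.axis("off")
    plt.title(f"Eyeglasses = {s_val:.2f}")
```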
This example uses the [Universal Bank](https://www.kaggle.com/sriharipramod/bank-loan-classification) data set and some example code for running classification trees from chapter 9 of [Data Mining for Business Analytics](https://www.dataminingbook.com/book/python-edition).

> The data include customer demographic information (age, income, etc.), the customer's relationship with the bank (mortgage, securities account, etc.), and the customer response to the last personal loan campaign (Personal Loan). Among these 5000 customers, only 480 (= 9.6%) accepted the personal loan that was offered to them in the earlier campaign. [Source](https://www.kaggle.com/itsmesunil/campaign-for-selling-personal-loans)

1. Train a decision tree classifier, print the tree and evaluate its accuracy.
2. Prune the tree by changing its hyperparameters, and evaluate the accuracy of the new tree.
3. Using [grid search](https://scikit-learn.org/stable/modules/grid_search.html), perform a systematic tuning of the decision tree hyperparameters. | data = pd.read_csv('data/UniversalBank.csv')
data.head() | _____no_output_____ | MIT | 43-workout-solution_decision_trees.ipynb | hanisaf/advanced-data-management-and-analytics |
Courtesy - Statistics.com. Data Description:
* ID: Customer ID
* Age: Customer's age in completed years
* Experience: years of professional experience
* Income: Annual income of the customer ($000)
* ZIPCode: Home Address ZIP code
* Family: Family size of the customer
* CCAvg: Avg. spending on credit cards per month ($000)
* Education: Education Level. 1: Undergrad; 2: Graduate; 3: Advanced/Professional
* Mortgage: Value of house mortgage if any ($000)
* Personal Loan: Did this customer accept the personal loan offered in the last campaign?
* Securities Account: Does the customer have a securities account with the bank?
* CD Account: Does the customer have a certificate of deposit (CD) account with the bank?
* Online: Does the customer use internet banking facilities?
* CreditCard: Does the customer use a credit card issued by UniversalBank? | bank_df = data.drop(columns=['ID', 'ZIP Code'])
X = bank_df.drop(columns=['Personal Loan'])
y = bank_df['Personal Loan']
train_X, valid_X, train_y, valid_y = train_test_split(X, y, test_size=0.4, random_state=1)
dtree = DecisionTreeClassifier()
dtree.fit(train_X, train_y)
print(export_text(dtree, feature_names=list(X.columns)))
print(confusion_matrix(train_y, dtree.predict(train_X)))
print(confusion_matrix(valid_y, dtree.predict(valid_X)))
accuracy_score(train_y, dtree.predict(train_X)), accuracy_score(valid_y, dtree.predict(valid_X))
dtree = DecisionTreeClassifier(max_depth=30, min_samples_split=20, min_impurity_decrease=0.01)
dtree.fit(train_X, train_y)
print(export_text(dtree, feature_names=list(X.columns)))
print(confusion_matrix(train_y, dtree.predict(train_X)))
print(confusion_matrix(valid_y, dtree.predict(valid_X)))
accuracy_score(train_y, dtree.predict(train_X)), accuracy_score(valid_y, dtree.predict(valid_X))
# Start with an initial guess for parameters
param_grid = {
'max_depth': [10, 20, 30, 40],
'min_samples_split': [20, 40, 60, 80, 100],
'min_impurity_decrease': [0, 0.0005, 0.001, 0.005, 0.01],
}
gridSearch = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5, n_jobs=-1)
gridSearch.fit(train_X, train_y)
print('Score: ', gridSearch.best_score_)
print('Parameters: ', gridSearch.best_params_)
dtree = gridSearch.best_estimator_
print(confusion_matrix(train_y, dtree.predict(train_X)))
print(confusion_matrix(valid_y, dtree.predict(valid_X)))
accuracy_score(train_y, dtree.predict(train_X)), accuracy_score(valid_y, dtree.predict(valid_X))
print(export_text(dtree, feature_names=list(X.columns))) | |--- Income <= 110.50
| |--- CCAvg <= 2.95
| | |--- class: 0
| |--- CCAvg > 2.95
| | |--- CD Account <= 0.50
| | | |--- Income <= 92.50
| | | | |--- class: 0
| | | |--- Income > 92.50
| | | | |--- Education <= 1.50
| | | | | |--- class: 0
| | | | |--- Education > 1.50
| | | | | |--- class: 1
| | |--- CD Account > 0.50
| | | |--- class: 1
|--- Income > 110.50
| |--- Education <= 1.50
| | |--- Family <= 2.50
| | | |--- class: 0
| | |--- Family > 2.50
| | | |--- class: 1
| |--- Education > 1.50
| | |--- Income <= 116.50
| | | |--- CCAvg <= 3.50
| | | | |--- class: 0
| | | |--- CCAvg > 3.50
| | | | |--- class: 1
| | |--- Income > 116.50
| | | |--- class: 1
| MIT | 43-workout-solution_decision_trees.ipynb | hanisaf/advanced-data-management-and-analytics |
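Besides constraining `max_depth`, `min_samples_split`, and `min_impurity_decrease`, scikit-learn (>= 0.22) also supports cost-complexity pruning via `ccp_alpha`. The sketch below is an optional, assumption-based illustration; the chosen alpha is arbitrary rather than tuned.

```python
# Cost-complexity pruning sketch (assumes scikit-learn >= 0.22); alpha picked mid-path for illustration
path = DecisionTreeClassifier(random_state=1).cost_complexity_pruning_path(train_X, train_y)
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]
pruned_tree = DecisionTreeClassifier(random_state=1, ccp_alpha=alpha)
pruned_tree.fit(train_X, train_y)
print(accuracy_score(train_y, pruned_tree.predict(train_X)),
      accuracy_score(valid_y, pruned_tree.predict(valid_X)))
```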
Data Set-up and Cleaning | # Standard Library Imports
import pandas as pd
import numpy as np | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
For this section, I will be concatenating all the data sets into one large dataset. Load the datasets | inpatient = pd.read_csv('./data/Train_Inpatientdata-1542865627584.csv')
outpatient = pd.read_csv('./data/Train_Outpatientdata-1542865627584.csv')
beneficiary = pd.read_csv('./data/Train_Beneficiarydata-1542865627584.csv')
fraud = pd.read_csv('./data/Train-1542865627584.csv')
# Increase the max display options of the columns and rows
pd.set_option('display.max_columns', 100) | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
Inspect the first 5 rows of the datasets | # Inspect the first 5 rows of the inpatient claims
inpatient.head()
# Inspect the first 5 rows of the outpatient claims
outpatient.head()
# Inspect the first 5 rows of the beneficiary dataset
beneficiary.head()
# Inspect the first 5 rows of the fraud column
fraud.head() | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
Check the number of rows and columns for each dataset | inpatient.shape
outpatient.shape
beneficiary.shape
fraud.shape | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
Some columns in the inpatient dataset are not in the outpatient dataset or in the fraud (target) dataset and vice versa. In order to make sense of the data I would have to merge them together. Combine the Inpatient, Outpatient, beneficiary and fraud datasets | # Map the inpatient and outpatient columns, 1 for outpatient, 0 for inpatient
inpatient["IsOutpatient"] = 0
outpatient["IsOutpatient"] = 1
# Merging the datasets together
patient_df = pd.concat([inpatient, outpatient],axis = 0)
patient_df = patient_df.merge(beneficiary, how = 'left', on = 'BeneID').merge(fraud, how = 'left', on = 'Provider')
print("The shape of the dataset after merging is:", patient_df.shape)
# Inspect the final dataset after merging
patient_df.head() | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
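A quick sanity check on the merge can catch key problems early. The assertion below is a sketch and assumes `BeneID` is unique in the beneficiary table and `Provider` is unique in the fraud table, so the left merges should preserve the row count of the concatenation.

```python
# Sanity check sketch: left merges should not add or drop claim rows
assert patient_df.shape[0] == inpatient.shape[0] + outpatient.shape[0], \
    "Row count changed during merging - check for duplicate keys"
```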
After merging the dataset, we now have a dataframe with the fraud target column. | patient_df.describe()
patient_df.dtypes
# Convert columns with Date attributes to Datetime datatype : "ClaimStartDt", "ClaimEndDt", "AdmissionDt", "DischargeDt", "DOB", "DOD"
patient_df[["ClaimStartDt", "ClaimEndDt", "AdmissionDt", "DischargeDt", "DOB", "DOD"]] = patient_df[["ClaimStartDt", "ClaimEndDt", "AdmissionDt", "DischargeDt", "DOB", "DOD"]].apply(pd.to_datetime, format = '%Y-%m-%d', errors = 'coerce')
# Convert the Claims Procedure Code columns to object just as the Claims diagnoses code
patient_df.loc[:, patient_df.columns.str.contains('ClmProcedureCode')] = patient_df.loc[:, patient_df.columns.str.contains('ClmProcedureCode')].astype(object)
# Convert Race, County and State to objects
patient_df[['Race', 'State', 'County' ]] = patient_df[['Race', 'State', 'County']].astype(object)
# Investigate the RenalDiseasIndicator
patient_df['RenalDiseaseIndicator'].value_counts()
# Replace 'Y' with 1 in RenalDiseaseIndicator
patient_df['RenalDiseaseIndicator'] = patient_df['RenalDiseaseIndicator'].replace({'Y': 1})
# Check to see if replacement worked
patient_df['RenalDiseaseIndicator'].value_counts() | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
Change other binary variables to 0 and 1 | # Change the Gender column and any column having 'ChronicCond' to binary variables to 0 and 1
chronic = patient_df.columns[patient_df.columns.str.contains("ChronicCond")].tolist()
patient_df[chronic] = patient_df[chronic].apply(lambda x: np.where(x == 2,0,1))
patient_df['Gender'] = patient_df['Gender'].apply(lambda x: np.where(x == 2,0,1))
# Check to see if it changed
patient_df['Gender'].value_counts()
# Checking the change
patient_df['ChronicCond_Alzheimer'].value_counts()
# Check the data types again
patient_df.dtypes
# Save the data as 'patients'
patient_df.to_csv('./data/patients.csv', index=False)
patient_df.to_pickle('./data/patients.pkl') | _____no_output_____ | CC0-1.0 | 1_Data_Cleaning.ipynb | oaagboro/Healthcare_Insurance_Fraud |
Understanding the data. In this first part, we load the data and perform some initial exploration on it. The main goal of this step is to acquire some basic knowledge about the data: how the various features are distributed, whether there are missing values in it, and so on. | ### imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# load hourly data
hourly_data = pd.read_csv('../data/hour.csv') | _____no_output_____ | MIT | Chapter01/Exercise1.03/Exercise1.03.ipynb | fenago/Applied_Data_Analytics |
Check data format, number of missing values in the data and general statistics: | # print some generic statistics about the data
print(f"Shape of data: {hourly_data.shape}")
print(f"Number of missing values in the data: {hourly_data.isnull().sum().sum()}")
# get statistics on the numerical columns
hourly_data.describe().T
# create a copy of the original data
preprocessed_data = hourly_data.copy()
# transform seasons
seasons_mapping = {1: 'winter', 2: 'spring', 3: 'summer', 4: 'fall'}
preprocessed_data['season'] = preprocessed_data['season'].apply(lambda x: seasons_mapping[x])
# transform yr
yr_mapping = {0: 2011, 1: 2012}
preprocessed_data['yr'] = preprocessed_data['yr'].apply(lambda x: yr_mapping[x])
# transform weekday
weekday_mapping = {0: 'Sunday', 1: 'Monday', 2: 'Tuesday', 3: 'Wednesday', 4: 'Thursday', 5: 'Friday', 6: 'Saturday'}
preprocessed_data['weekday'] = preprocessed_data['weekday'].apply(lambda x: weekday_mapping[x])
# transform weathersit
weather_mapping = {1: 'clear', 2: 'cloudy', 3: 'light_rain_snow', 4: 'heavy_rain_snow'}
preprocessed_data['weathersit'] = preprocessed_data['weathersit'].apply(lambda x: weather_mapping[x])
# transform hum and windspeed
preprocessed_data['hum'] = preprocessed_data['hum']*100
preprocessed_data['windspeed'] = preprocessed_data['windspeed']*67
# visualize preprocessed columns
cols = ['season', 'yr', 'weekday', 'weathersit', 'hum', 'windspeed']
preprocessed_data[cols].sample(10, random_state=123) | _____no_output_____ | MIT | Chapter01/Exercise1.03/Exercise1.03.ipynb | fenago/Applied_Data_Analytics |
Registered vs casual use analysis | # assert that total numer of rides is equal to the sum of registered and casual ones
assert (preprocessed_data.casual + preprocessed_data.registered == preprocessed_data.cnt).all(), \
'Sum of casual and registered rides not equal to total number of rides'
# plot distributions of registered vs casual rides
sns.distplot(preprocessed_data['registered'], label='registered')
sns.distplot(preprocessed_data['casual'], label='casual')
plt.legend()
plt.xlabel('rides')
plt.ylabel("frequency")
plt.title("Rides distributions")
plt.savefig('figs/rides_distributions.png', format='png')
# plot evolution of rides over time
plot_data = preprocessed_data[['registered', 'casual', 'dteday']]
ax = plot_data.groupby('dteday').sum().plot(figsize=(10,6))
ax.set_xlabel("time");
ax.set_ylabel("number of rides per day");
plt.savefig('figs/rides_daily.png', format='png')
# create a new dataframe with the columns necessary for plotting, and
# obtain number of rides per day, by grouping over each day
plot_data = preprocessed_data[['registered', 'casual', 'dteday']]
plot_data = plot_data.groupby('dteday').sum()
# define window for computing the rolling mean and standard deviation
window = 7
rolling_means = plot_data.rolling(window).mean()
rolling_deviations = plot_data.rolling(window).std()
# create a plot of the series, where we first plot the series of rolling means,
# then we color the zone between the series of rolling means
# +- 2 rolling standard deviations
ax = rolling_means.plot(figsize=(10,6))
ax.fill_between(rolling_means.index, \
rolling_means['registered'] + 2*rolling_deviations['registered'], \
rolling_means['registered'] - 2*rolling_deviations['registered'], \
alpha = 0.2)
ax.fill_between(rolling_means.index, \
rolling_means['casual'] + 2*rolling_deviations['casual'], \
rolling_means['casual'] - 2*rolling_deviations['casual'], \
alpha = 0.2)
ax.set_xlabel("time");
ax.set_ylabel("number of rides per day");
plt.savefig('figs/rides_aggregated.png', format='png')
# select relevant columns
plot_data = preprocessed_data[['hr', 'weekday', 'registered', 'casual']]
# transform the data into a long format, in which the number of entries is computed as a count
# for each distinct hr, weekday and type (registered or casual)
plot_data = plot_data.melt(id_vars=['hr', 'weekday'], var_name='type', value_name='count')
# create FacetGrid object, in which a grid plot is produced.
# As columns, we have the various days of the week,
# as rows, the different types (registered and casual)
grid = sns.FacetGrid(plot_data, row='weekday', col='type', height=2.5,\
aspect=2.5, row_order=['Monday', 'Tuesday', \
'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
# populate the FacetGrid with the specific plots
grid.map(sns.barplot, 'hr', 'count', alpha=0.5)
grid.savefig('figs/weekday_hour_distributions.png', format='png')
# select subset of the data
plot_data = preprocessed_data[['hr', 'season', 'registered', 'casual']]
# unpivot data from wide to long format
plot_data = plot_data.melt(id_vars=['hr', 'season'], var_name='type', \
value_name='count')
# define FacetGrid
grid = sns.FacetGrid(plot_data, row='season', \
col='type', height=2.5, aspect=2.5, \
row_order=['winter', 'spring', 'summer', 'fall'])
# apply plotting function to each element in the grid
grid.map(sns.barplot, 'hr', 'count', alpha=0.5)
# save figure
grid.savefig('figs/exercise_1_02_a.png', format='png')
plot_data = preprocessed_data[['weekday', 'season', 'registered', 'casual']]
plot_data = plot_data.melt(id_vars=['weekday', 'season'], var_name='type', value_name='count')
grid = sns.FacetGrid(plot_data, row='season', col='type', height=2.5, aspect=2.5,
row_order=['winter', 'spring', 'summer', 'fall'])
grid.map(sns.barplot, 'weekday', 'count', alpha=0.5,
order=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'])
# save figure
grid.savefig('figs/exercise_1_02_b.png', format='png') | _____no_output_____ | MIT | Chapter01/Exercise1.03/Exercise1.03.ipynb | fenago/Applied_Data_Analytics |
Exercise 1.03: Estimating average registered rides | # compute population mean of registered rides
population_mean = preprocessed_data.registered.mean()
# get sample of the data (summer 2011)
sample = preprocessed_data[(preprocessed_data.season == "summer") &\
(preprocessed_data.yr == 2011)].registered
# perform t-test and compute p-value
from scipy.stats import ttest_1samp
test_result = ttest_1samp(sample, population_mean)
print(f"Test statistic: {test_result[0]:.03f}, p-value: {test_result[1]:.03f}")
# get sample as 5% of the full data
import random
random.seed(111)
sample_unbiased = preprocessed_data.registered.sample(frac=0.05)
test_result_unbiased = ttest_1samp(sample_unbiased, population_mean)
print(f"Unbiased test statistic: {test_result_unbiased[0]:.03f}, p-value: {test_result_unbiased[1]:.03f}") | _____no_output_____ | MIT | Chapter01/Exercise1.03/Exercise1.03.ipynb | fenago/Applied_Data_Analytics |
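To complement the p-values, a rough confidence interval around the unbiased sample mean makes the comparison with the population mean concrete; the normal approximation below is a sketch, not part of the original exercise.

```python
# Rough 95% confidence interval (normal approximation) for the unbiased sample mean
se = sample_unbiased.std(ddof=1) / np.sqrt(len(sample_unbiased))
lower, upper = sample_unbiased.mean() - 1.96 * se, sample_unbiased.mean() + 1.96 * se
print(f"95% CI for mean registered rides: ({lower:.1f}, {upper:.1f}); population mean: {population_mean:.1f}")
```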
Finetuning of the pretrained Japanese BERT model. Finetune the pretrained model to solve multi-class classification problems. This notebook requires the following objects:

- trained sentencepiece model (model and vocab files)
- pretrained Japanese BERT model

The dataset is the livedoor news corpus from https://www.rondhuit.com/download.html. We make test:dev:train = 2:2:6 datasets.

Results:

- Full training data
  - BERT with SentencePiece
    ```
                    precision    recall  f1-score   support
    dokujo-tsushin       0.98      0.94      0.96       178
    it-life-hack         0.96      0.97      0.96       172
    kaden-channel        0.99      0.98      0.99       176
    livedoor-homme       0.98      0.88      0.93        95
    movie-enter          0.96      0.99      0.98       158
    peachy               0.94      0.98      0.96       174
    smax                 0.98      0.99      0.99       167
    sports-watch         0.98      1.00      0.99       190
    topic-news           0.99      0.98      0.98       163
    micro avg            0.97      0.97      0.97      1473
    macro avg            0.97      0.97      0.97      1473
    weighted avg         0.97      0.97      0.97      1473
    ```
  - sklearn GradientBoostingClassifier with MeCab
    ```
                    precision    recall  f1-score   support
    dokujo-tsushin       0.89      0.86      0.88       178
    it-life-hack         0.91      0.90      0.91       172
    kaden-channel        0.90      0.94      0.92       176
    livedoor-homme       0.79      0.74      0.76        95
    movie-enter          0.93      0.96      0.95       158
    peachy               0.87      0.92      0.89       174
    smax                 0.99      1.00      1.00       167
    sports-watch         0.93      0.98      0.96       190
    topic-news           0.96      0.86      0.91       163
    micro avg            0.92      0.92      0.92      1473
    macro avg            0.91      0.91      0.91      1473
    weighted avg         0.92      0.92      0.91      1473
    ```
- Small training data (1/5 of full training data)
  - BERT with SentencePiece
    ```
                    precision    recall  f1-score   support
    dokujo-tsushin       0.97      0.87      0.92       178
    it-life-hack         0.86      0.86      0.86       172
    kaden-channel        0.95      0.94      0.95       176
    livedoor-homme       0.82      0.82      0.82        95
    movie-enter          0.97      0.99      0.98       158
    peachy               0.89      0.95      0.92       174
    smax                 0.94      0.96      0.95       167
    sports-watch         0.97      0.97      0.97       190
    topic-news           0.94      0.94      0.94       163
    micro avg            0.93      0.93      0.93      1473
    macro avg            0.92      0.92      0.92      1473
    weighted avg         0.93      0.93      0.93      1473
    ```
  - sklearn GradientBoostingClassifier with MeCab
    ```
                    precision    recall  f1-score   support
    dokujo-tsushin       0.82      0.71      0.76       178
    it-life-hack         0.86      0.88      0.87       172
    kaden-channel        0.91      0.87      0.89       176
    livedoor-homme       0.67      0.63      0.65        95
    movie-enter          0.87      0.95      0.91       158
    peachy               0.70      0.78      0.73       174
    smax                 1.00      1.00      1.00       167
    sports-watch         0.87      0.95      0.91       190
    topic-news           0.92      0.82      0.87       163
    micro avg            0.85      0.85      0.85      1473
    macro avg            0.85      0.84      0.84      1473
    weighted avg         0.86      0.85      0.85      1473
    ```
 | import configparser
import glob
import os
import pandas as pd
import subprocess
import sys
import tarfile
from urllib.request import urlretrieve
CURDIR = os.getcwd()
CONFIGPATH = os.path.join(CURDIR, os.pardir, 'config.ini')
config = configparser.ConfigParser()
config.read(CONFIGPATH) | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
Data preparingYou need execute the following cells just once. | FILEURL = config['FINETUNING-DATA']['FILEURL']
FILEPATH = config['FINETUNING-DATA']['FILEPATH']
EXTRACTDIR = config['FINETUNING-DATA']['TEXTDIR'] | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
Download and unzip data. | %%time
urlretrieve(FILEURL, FILEPATH)
mode = "r:gz"
tar = tarfile.open(FILEPATH, mode)
tar.extractall(EXTRACTDIR)
tar.close() | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
Data preprocessing. | def extract_txt(filename):
with open(filename) as text_file:
# 0: URL, 1: timestamp
text = text_file.readlines()[2:]
text = [sentence.strip() for sentence in text]
text = list(filter(lambda line: line != '', text))
return ''.join(text)
categories = [
name for name
in os.listdir( os.path.join(EXTRACTDIR, "text") )
if os.path.isdir( os.path.join(EXTRACTDIR, "text", name) ) ]
categories = sorted(categories)
categories
table = str.maketrans({
'\n': '',
'\t': 'γ',
'\r': '',
})
%%time
all_text = []
all_label = []
for cat in categories:
files = glob.glob(os.path.join(EXTRACTDIR, "text", cat, "{}*.txt".format(cat)))
files = sorted(files)
body = [ extract_txt(elem).translate(table) for elem in files ]
label = [cat] * len(body)
all_text.extend(body)
all_label.extend(label)
df = pd.DataFrame({'text' : all_text, 'label' : all_label})
df.head()
df = df.sample(frac=1, random_state=23).reset_index(drop=True)
df.head() | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
Save data as tsv files. test:dev:train = 2:2:6. To check the usability of finetuning, we also prepare sampled training data (1/5 of full training data). | df[:len(df) // 5].to_csv( os.path.join(EXTRACTDIR, "test.tsv"), sep='\t', index=False)
df[len(df) // 5:len(df)*2 // 5].to_csv( os.path.join(EXTRACTDIR, "dev.tsv"), sep='\t', index=False)
df[len(df)*2 // 5:].to_csv( os.path.join(EXTRACTDIR, "train.tsv"), sep='\t', index=False)
### 1/5 of full training data.
# df[:len(df) // 5].to_csv( os.path.join(EXTRACTDIR, "test.tsv"), sep='\t', index=False)
# df[len(df) // 5:len(df)*2 // 5].to_csv( os.path.join(EXTRACTDIR, "dev.tsv"), sep='\t', index=False)
# df[len(df)*2 // 5:].sample(frac=0.2, random_state=23).to_csv( os.path.join(EXTRACTDIR, "train.tsv"), sep='\t', index=False) | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
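A small check on the written splits can confirm that each file has the expected size and that all nine labels are represented; the snippet below is a sketch added for illustration.

```python
# Sanity check sketch: sizes and number of distinct labels per split
for split_name in ["train", "dev", "test"]:
    split_df = pd.read_csv(os.path.join(EXTRACTDIR, f"{split_name}.tsv"), sep='\t')
    print(split_name, split_df.shape[0], split_df['label'].nunique())
```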
Finetune the pre-trained model. It will take many hours to execute the following cells on a CPU environment. You can also use Colab to take advantage of a TPU; in that case you need to upload the created data onto your GCS bucket. [Open in Colab](https://colab.research.google.com/drive/1zZH2GWe0U-7GjJ2w2duodFfEUptvHjcx) | PRETRAINED_MODEL_PATH = '../model/model.ckpt-1400000'
FINETUNE_OUTPUT_DIR = '../model/livedoor_output'
%%time
# It will take many hours on CPU environment.
!python3 ../src/run_classifier.py \
--task_name=livedoor \
--do_train=true \
--do_eval=true \
--data_dir=../data/livedoor \
--model_file=../model/wiki-ja.model \
--vocab_file=../model/wiki-ja.vocab \
--init_checkpoint={PRETRAINED_MODEL_PATH} \
--max_seq_length=512 \
--train_batch_size=4 \
--learning_rate=2e-5 \
--num_train_epochs=10 \
--output_dir={FINETUNE_OUTPUT_DIR} | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
Predict using the finetuned model. Let's predict the test data using the finetuned model. | import sys
sys.path.append("../src")
import tokenization_sentencepiece as tokenization
from run_classifier import LivedoorProcessor
from run_classifier import model_fn_builder
from run_classifier import file_based_input_fn_builder
from run_classifier import file_based_convert_examples_to_features
from utils import str_to_value
sys.path.append("../bert")
import modeling
import optimization
import tensorflow as tf
import configparser
import json
import glob
import os
import pandas as pd
import tempfile
bert_config_file = tempfile.NamedTemporaryFile(mode='w+t', encoding='utf-8', suffix='.json')
bert_config_file.write(json.dumps({k:str_to_value(v) for k,v in config['BERT-CONFIG'].items()}))
bert_config_file.seek(0)
bert_config = modeling.BertConfig.from_json_file(bert_config_file.name)
output_ckpts = glob.glob("{}/model.ckpt*data*".format(FINETUNE_OUTPUT_DIR))
latest_ckpt = sorted(output_ckpts)[-1]
FINETUNED_MODEL_PATH = latest_ckpt.split('.data-00000-of-00001')[0]
class FLAGS(object):
'''Parameters.'''
def __init__(self):
self.model_file = "../model/wiki-ja.model"
self.vocab_file = "../model/wiki-ja.vocab"
self.do_lower_case = True
self.use_tpu = False
self.output_dir = "/dummy"
self.data_dir = "../data/livedoor"
self.max_seq_length = 512
self.init_checkpoint = FINETUNED_MODEL_PATH
self.predict_batch_size = 4
# The following parameters are not used in predictions.
# Just use to create RunConfig.
self.master = None
self.save_checkpoints_steps = 1
self.iterations_per_loop = 1
self.num_tpu_cores = 1
self.learning_rate = 0
self.num_warmup_steps = 0
self.num_train_steps = 0
self.train_batch_size = 0
self.eval_batch_size = 0
FLAGS = FLAGS()
processor = LivedoorProcessor()
label_list = processor.get_labels()
tokenizer = tokenization.FullTokenizer(
model_file=FLAGS.model_file, vocab_file=FLAGS.vocab_file,
do_lower_case=FLAGS.do_lower_case)
tpu_cluster_resolver = None
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=FLAGS.master,
model_dir=FLAGS.output_dir,
save_checkpoints_steps=FLAGS.save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=FLAGS.iterations_per_loop,
num_shards=FLAGS.num_tpu_cores,
per_host_input_for_training=is_per_host))
model_fn = model_fn_builder(
bert_config=bert_config,
num_labels=len(label_list),
init_checkpoint=FLAGS.init_checkpoint,
learning_rate=FLAGS.learning_rate,
num_train_steps=FLAGS.num_train_steps,
num_warmup_steps=FLAGS.num_warmup_steps,
use_tpu=FLAGS.use_tpu,
use_one_hot_embeddings=FLAGS.use_tpu)
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=FLAGS.use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=FLAGS.train_batch_size,
eval_batch_size=FLAGS.eval_batch_size,
predict_batch_size=FLAGS.predict_batch_size)
predict_examples = processor.get_test_examples(FLAGS.data_dir)
predict_file = tempfile.NamedTemporaryFile(mode='w+t', encoding='utf-8', suffix='.tf_record')
file_based_convert_examples_to_features(predict_examples, label_list,
FLAGS.max_seq_length, tokenizer,
predict_file.name)
predict_drop_remainder = True if FLAGS.use_tpu else False
predict_input_fn = file_based_input_fn_builder(
input_file=predict_file.name,
seq_length=FLAGS.max_seq_length,
is_training=False,
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
%%time
# It will take a few hours on CPU environment.
result = list(result)
result[:2] | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
Read test data set and add prediction results. | import pandas as pd
test_df = pd.read_csv("../data/livedoor/test.tsv", sep='\t')
test_df['predict'] = [ label_list[elem['probabilities'].argmax()] for elem in result ]
test_df.head()
sum( test_df['label'] == test_df['predict'] ) / len(test_df) | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
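To also inspect the full probability distribution behind each prediction (not just the argmax), a small sketch like the following could be used; it assumes each element of `result` exposes a `probabilities` vector aligned with `label_list`, as in the cell above:

```python
# Hypothetical helper: attach per-class probabilities to the test dataframe.
proba_df = pd.DataFrame([elem['probabilities'] for elem in result], columns=label_list)
test_df = pd.concat([test_df.reset_index(drop=True), proba_df], axis=1)
test_df.head()
```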
A little more detailed check using `sklearn.metrics`. | !pip install scikit-learn
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(test_df['label'], test_df['predict']))
print(confusion_matrix(test_df['label'], test_df['predict'])) | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
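If per-class numbers are wanted beyond the printed report, they can be derived from the confusion matrix directly. A small sketch (not part of the original notebook):

```python
import numpy as np

cm = confusion_matrix(test_df['label'], test_df['predict'])
# Diagonal divided by row sums gives per-class recall (accuracy on each true label).
per_class_recall = cm.diagonal() / cm.sum(axis=1)
print(dict(zip(sorted(test_df['label'].unique()), per_class_recall)))
```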
Simple baseline model. | import pandas as pd
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
train_df = pd.read_csv("../data/livedoor/train.tsv", sep='\t')
dev_df = pd.read_csv("../data/livedoor/dev.tsv", sep='\t')
test_df = pd.read_csv("../data/livedoor/test.tsv", sep='\t')
!apt-get install -q -y mecab libmecab-dev mecab-ipadic mecab-ipadic-utf8
!pip install mecab-python3==0.7
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
import MeCab
m = MeCab.Tagger("-Owakati")
train_dev_df = pd.concat([train_df, dev_df])
train_dev_xs = train_dev_df['text'].apply(lambda x: m.parse(x))
train_dev_ys = train_dev_df['label']
test_xs = test_df['text'].apply(lambda x: m.parse(x))
test_ys = test_df['label']
vectorizer = TfidfVectorizer(max_features=750)
train_dev_xs_ = vectorizer.fit_transform(train_dev_xs)
test_xs_ = vectorizer.transform(test_xs) | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
The following setup is not exactly identical to that of BERT, because the classifier internally uses `train_test_split` with shuffling. In addition, the parameters are not well tuned; however, we think this is enough to check the power of BERT. | %%time
model = GradientBoostingClassifier(n_estimators=200,
validation_fraction=len(train_df)/len(dev_df),
n_iter_no_change=5,
tol=0.01,
random_state=23)
### 1/5 of full training data.
# model = GradientBoostingClassifier(n_estimators=200,
# validation_fraction=len(dev_df)/len(train_df),
# n_iter_no_change=5,
# tol=0.01,
# random_state=23)
model.fit(train_dev_xs_, train_dev_ys)
print(classification_report(test_ys, model.predict(test_xs_)))
print(confusion_matrix(test_ys, model.predict(test_xs_))) | _____no_output_____ | Apache-2.0 | notebook/finetune-to-livedoor-corpus.ipynb | minhpqn/bert-japanese |
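For a direct, single-number comparison with the BERT accuracy computed earlier, the baseline's accuracy can be evaluated the same way; a sketch (not in the original notebook):

```python
from sklearn.metrics import accuracy_score

baseline_acc = accuracy_score(test_ys, model.predict(test_xs_))
print('Baseline accuracy: {:.4f}'.format(baseline_acc))
```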
Copyright 2019 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Mixed precision View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewMixed precision is the use of both 16-bit and 32-bit floating-point types in a model during training to make it run faster and use less memory. By keeping certain parts of the model in the 32-bit types for numeric stability, the model will have a lower step time and train equally as well in terms of the evaluation metrics such as accuracy. This guide describes how to use the experimental Keras mixed precision API to speed up your models. Using this API can improve performance by more than 3 times on modern GPUs and 60% on TPUs. Note: The Keras mixed precision API is currently experimental and may change. Today, most models use the float32 dtype, which takes 32 bits of memory. However, there are two lower-precision dtypes, float16 and bfloat16, each which take 16 bits of memory instead. Modern accelerators can run operations faster in the 16-bit dtypes, as they have specialized hardware to run 16-bit computations and 16-bit dtypes can be read from memory faster.NVIDIA GPUs can run operations in float16 faster than in float32, and TPUs can run operations in bfloat16 faster than float32. Therefore, these lower-precision dtypes should be used whenever possible on those devices. However, variables and a few computations should still be in float32 for numeric reasons so that the model trains to the same quality. The Keras mixed precision API allows you to use a mix of either float16 or bfloat16 with float32, to get the performance benefits from float16/bfloat16 and the numeric stability benefits from float32.Note: In this guide, the term "numeric stability" refers to how a model's quality is affected by the use of a lower-precision dtype instead of a higher precision dtype. We say an operation is "numerically unstable" in float16 or bfloat16 if running it in one of those dtypes causes the model to have worse evaluation accuracy or other metrics compared to running the operation in float32. Setup The Keras mixed precision API is available in TensorFlow 2.1. | import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.mixed_precision import experimental as mixed_precision | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
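As a quick, concrete check of the memory claim above (not part of the original guide), the per-element size of these dtypes can be inspected with NumPy:

```python
import numpy as np

# float16 stores each element in 2 bytes (16 bits); float32 uses 4 bytes (32 bits).
# bfloat16 also uses 16 bits but keeps float32's exponent range.
print(np.dtype('float16').itemsize, np.dtype('float32').itemsize)  # 2 4
```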
Supported hardwareWhile mixed precision will run on most hardware, it will only speed up models on recent NVIDIA GPUs and Cloud TPUs. NVIDIA GPUs support using a mix of float16 and float32, while TPUs support a mix of bfloat16 and float32.Among NVIDIA GPUs, those with compute capability 7.0 or higher will see the greatest performance benefit from mixed precision because they have special hardware units, called Tensor Cores, to accelerate float16 matrix multiplications and convolutions. Older GPUs offer no math performance benefit for using mixed precision, however memory and bandwidth savings can enable some speedups. You can look up the compute capability for your GPU at NVIDIA's [CUDA GPU web page](https://developer.nvidia.com/cuda-gpus). Examples of GPUs that will benefit most from mixed precision include RTX GPUs, the Titan V, and the V100. Note: If running this guide in Google Colab, the GPU runtime typically has a P100 connected. The P100 has compute capability 6.0 and is not expected to show a significant speedup.You can check your GPU type with the following. The command only exists if theNVIDIA drivers are installed, so the following will raise an error otherwise. | !nvidia-smi -L | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
All Cloud TPUs support bfloat16. Even on CPUs and older GPUs, where no speedup is expected, mixed precision APIs can still be used for unit testing, debugging, or just to try out the API. Setting the dtype policy To use mixed precision in Keras, you need to create a `tf.keras.mixed_precision.experimental.Policy`, typically referred to as a *dtype policy*. Dtype policies specify the dtypes layers will run in. In this guide, you will construct a policy from the string `'mixed_float16'` and set it as the global policy. This will cause subsequently created layers to use mixed precision with a mix of float16 and float32. | policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
The policy specifies two important aspects of a layer: the dtype the layer's computations are done in, and the dtype of a layer's variables. Above, you created a `mixed_float16` policy (i.e., a `mixed_precision.Policy` created by passing the string `'mixed_float16'` to its constructor). With this policy, layers use float16 computations and float32 variables. Computations are done in float16 for performance, but variables must be kept in float32 for numeric stability. You can directly query these properties of the policy. | print('Compute dtype: %s' % policy.compute_dtype)
print('Variable dtype: %s' % policy.variable_dtype) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
As mentioned before, the `mixed_float16` policy will most significantly improve performance on NVIDIA GPUs with compute capability of at least 7.0. The policy will run on other GPUs and CPUs but may not improve performance. For TPUs, the `mixed_bfloat16` policy should be used instead. Building the model Next, let's start building a simple model. Very small toy models typically do not benefit from mixed precision, because overhead from the TensorFlow runtime typically dominates the execution time, making any performance improvement on the GPU negligible. Therefore, let's build two large `Dense` layers with 4096 units each if a GPU is used. | inputs = keras.Input(shape=(784,), name='digits')
if tf.config.list_physical_devices('GPU'):
print('The model will run with 4096 units on a GPU')
num_units = 4096
else:
# Use fewer units on CPUs so the model finishes in a reasonable amount of time
print('The model will run with 64 units on a CPU')
num_units = 64
dense1 = layers.Dense(num_units, activation='relu', name='dense_1')
x = dense1(inputs)
dense2 = layers.Dense(num_units, activation='relu', name='dense_2')
x = dense2(x) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Each layer has a policy and uses the global policy by default. Each of the `Dense` layers therefore have the `mixed_float16` policy because you set the global policy to `mixed_float16` previously. This will cause the dense layers to do float16 computations and have float32 variables. They cast their inputs to float16 in order to do float16 computations, which causes their outputs to be float16 as a result. Their variables are float32 and will be cast to float16 when the layers are called to avoid errors from dtype mismatches. | print('x.dtype: %s' % x.dtype.name)
# 'kernel' is dense1's variable
print('dense1.kernel.dtype: %s' % dense1.kernel.dtype.name) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Next, create the output predictions. Normally, you can create the output predictions as follows, but this is not always numerically stable with float16. | # INCORRECT: softmax and model output will be float16, when it should be float32
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
A softmax activation at the end of the model should be float32. Because the dtype policy is `mixed_float16`, the softmax activation would normally have a float16 compute dtype and output a float16 tensors.This can be fixed by separating the Dense and softmax layers, and by passing `dtype='float32'` to the softmax layer | # CORRECT: softmax and model output are float32
x = layers.Dense(10, name='dense_logits')(x)
outputs = layers.Activation('softmax', dtype='float32', name='predictions')(x)
print('Outputs dtype: %s' % outputs.dtype.name) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Passing `dtype='float32'` to the softmax layer constructor overrides the layer's dtype policy to be the `float32` policy, which does computations and keeps variables in float32. Equivalently, we could have instead passed `dtype=mixed_precision.Policy('float32')`; layers always convert the dtype argument to a policy. Because the `Activation` layer has no variables, the policy's variable dtype is ignored, but the policy's compute dtype of float32 causes softmax and the model output to be float32. Adding a float16 softmax in the middle of a model is fine, but a softmax at the end of the model should be in float32. The reason is that if the intermediate tensor flowing from the softmax to the loss is float16 or bfloat16, numeric issues may occur.You can override the dtype of any layer to be float32 by passing `dtype='float32'` if you think it will not be numerically stable with float16 computations. But typically, this is only necessary on the last layer of the model, as most layers have sufficient precision with `mixed_float16` and `mixed_bfloat16`.Even if the model does not end in a softmax, the outputs should still be float32. While unnecessary for this specific model, the model outputs can be cast to float32 with the following: | # The linear activation is an identity function. So this simply casts 'outputs'
# to float32. In this particular case, 'outputs' is already float32 so this is a
# no-op.
outputs = layers.Activation('linear', dtype='float32')(outputs) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Next, finish and compile the model, and generate input data. | model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(),
metrics=['accuracy'])
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255 | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
This example casts the input data from int8 to float32. We don't cast to float16 since the division by 255 is on the CPU, which runs float16 operations slower than float32 operations. In this case, the performance difference is negligible, but in general you should run input processing math in float32 if it runs on the CPU. The first layer of the model will cast the inputs to float16, as each layer casts floating-point inputs to its compute dtype. The initial weights of the model are retrieved. This will allow training from scratch again by loading the weights. | initial_weights = model.get_weights() | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs
Training the model with Model.fitNext, train the model. | history = model.fit(x_train, y_train,
batch_size=8192,
epochs=5,
validation_split=0.2)
test_scores = model.evaluate(x_test, y_test, verbose=2)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
| _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Notice the model prints the time per sample in the logs: for example, "4us/sample". The first epoch may be slower as TensorFlow spends some time optimizing the model, but afterwards the time per sample should stabilize. If you are running this guide in Colab, you can compare the performance of mixed precision with float32. To do so, change the policy from `mixed_float16` to `float32` in the "Setting the dtype policy" section, then rerun all the cells up to this point. On GPUs with at least compute capability 7.0, you should see the time per sample significantly increase, indicating mixed precision sped up the model. For example, with a Titan V GPU, the per-sample time increases from 4us to 12us. Make sure to change the policy back to `mixed_float16` and rerun the cells before continuing with the guide.For many real-world models, mixed precision also allows you to double the batch size without running out of memory, as float16 tensors take half the memory. This does not apply however to this toy model, as you can likely run the model in any dtype where each batch consists of the entire MNIST dataset of 60,000 images.If running mixed precision on a TPU, you will not see as much of a performance gain compared to running mixed precision on GPUs. This is because TPUs already do certain ops in bfloat16 under the hood even with the default dtype policy of `float32`. TPU hardware does not support float32 for certain ops which are numerically stable in bfloat16, such as matmul. For such ops the TPU backend will silently use bfloat16 internally instead. As a consequence, passing `dtype='float32'` to layers which use such ops may have no numerical effect, however it is unlikely running such layers with bfloat16 computations will be harmful. Loss scalingLoss scaling is a technique which `tf.keras.Model.fit` automatically performs with the `mixed_float16` policy to avoid numeric underflow. This section describes loss scaling and how to customize its behavior. Underflow and OverflowThe float16 data type has a narrow dynamic range compared to float32. This means values above $65504$ will overflow to infinity and values below $6.0 \times 10^{-8}$ will underflow to zero. float32 and bfloat16 have a much higher dynamic range so that overflow and underflow are not a problem.For example: | x = tf.constant(256, dtype='float16')
(x ** 2).numpy() # Overflow
x = tf.constant(1e-5, dtype='float16')
(x ** 2).numpy() # Underflow | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
In practice, overflow with float16 rarely occurs. Additionally, underflow also rarely occurs during the forward pass. However, during the backward pass, gradients can underflow to zero. Loss scaling is a technique to prevent this underflow. Loss scaling background The basic concept of loss scaling is simple: simply multiply the loss by some large number, say $1024$. We call this number the *loss scale*. This will cause the gradients to scale by $1024$ as well, greatly reducing the chance of underflow. Once the final gradients are computed, divide them by $1024$ to bring them back to their correct values. The pseudocode for this process is:

```
loss_scale = 1024
loss = model(inputs)
loss *= loss_scale
# We assume `grads` are float32. We do not want to divide float16 gradients
grads = compute_gradient(loss, model.trainable_variables)
grads /= loss_scale
```

Choosing a loss scale can be tricky. If the loss scale is too low, gradients may still underflow to zero. If too high, the opposite problem occurs: the gradients may overflow to infinity. To solve this, TensorFlow dynamically determines the loss scale so you do not have to choose one manually. If you use `tf.keras.Model.fit`, loss scaling is done for you so you do not have to do any extra work. This is explained further in the next section. Choosing the loss scale Each dtype policy optionally has an associated `tf.mixed_precision.experimental.LossScale` object, which represents a fixed or dynamic loss scale. By default, the loss scale for the `mixed_float16` policy is a `tf.mixed_precision.experimental.DynamicLossScale`, which dynamically determines the loss scale value. Other policies do not have a loss scale by default, as it is only necessary when float16 is used. You can query the loss scale of the policy: | loss_scale = policy.loss_scale
print('Loss scale: %s' % loss_scale) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
The loss scale prints a lot of internal state, but you can ignore it. The most important part is the `current_loss_scale` part, which shows the loss scale's current value. You can instead use a static loss scale by passing a number when constructing a dtype policy. | new_policy = mixed_precision.Policy('mixed_float16', loss_scale=1024)
print(new_policy.loss_scale) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
The dtype policy constructor always converts the loss scale to a `LossScale` object. In this case, it's converted to a `tf.mixed_precision.experimental.FixedLossScale`, the only other `LossScale` subclass other than `DynamicLossScale`. Note: *Using anything other than a dynamic loss scale is not recommended*. Choosing a fixed loss scale can be difficult, as making it too low will cause the model to not train as well, and making it too high will cause Infs or NaNs to appear in the gradients. A dynamic loss scale is typically near the optimal loss scale, so you do not have to do any work. Currently, dynamic loss scales are a bit slower than fixed loss scales, but the performance will be improved in the future. Models, like layers, each have a dtype policy. If present, a model uses its policy's loss scale to apply loss scaling in the `tf.keras.Model.fit` method. This means if `Model.fit` is used, you do not have to worry about loss scaling at all: The `mixed_float16` policy will have a dynamic loss scale by default, and `Model.fit` will apply it.With custom training loops, the model will ignore the policy's loss scale, and you will have to apply it manually. This is explained in the next section. Training the model with a custom training loop So far, you trained a Keras model with mixed precision using `tf.keras.Model.fit`. Next, you will use mixed precision with a custom training loop. If you do not already know what a custom training loop is, please read [the Custom training guide](../tutorials/customization/custom_training_walkthrough.ipynb) first. Running a custom training loop with mixed precision requires two changes over running it in float32:1. Build the model with mixed precision (you already did this)2. Explicitly use loss scaling if `mixed_float16` is used. For step (2), you will use the `tf.keras.mixed_precision.experimental.LossScaleOptimizer` class, which wraps an optimizer and applies loss scaling. It takes two arguments: the optimizer and the loss scale. Construct one as follows to use a dynamic loss scale | optimizer = keras.optimizers.RMSprop()
optimizer = mixed_precision.LossScaleOptimizer(optimizer, loss_scale='dynamic') | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Passing `'dynamic'` is equivalent to passing `tf.mixed_precision.experimental.DynamicLossScale()`. Next, define the loss object and the `tf.data.Dataset`s. | loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
train_dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
.shuffle(10000).batch(8192))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(8192) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Next, define the training step function. Two new methods from the loss scale optimizer are used in order to scale the loss and unscale the gradients:

* `get_scaled_loss(loss)`: Multiplies the loss by the loss scale
* `get_unscaled_gradients(gradients)`: Takes in a list of scaled gradients as inputs, and divides each one by the loss scale to unscale them

These functions must be used in order to prevent underflow in the gradients. `LossScaleOptimizer.apply_gradients` will then apply gradients if none of them have Infs or NaNs. It will also update the loss scale, halving it if the gradients had Infs or NaNs and potentially increasing it otherwise. | @tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
predictions = model(x)
loss = loss_object(y, predictions)
scaled_loss = optimizer.get_scaled_loss(loss)
scaled_gradients = tape.gradient(scaled_loss, model.trainable_variables)
gradients = optimizer.get_unscaled_gradients(scaled_gradients)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
return loss | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
The `LossScaleOptimizer` will likely skip the first few steps at the start of training. The loss scale starts out high so that the optimal loss scale can quickly be determined. After a few steps, the loss scale will stabilize and very few steps will be skipped. This process happens automatically and does not affect training quality. Now define the test step. | @tf.function
def test_step(x):
return model(x, training=False) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Load the initial weights of the model, so you can retrain from scratch. | model.set_weights(initial_weights) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Finally, run the custom training loop. | for epoch in range(5):
epoch_loss_avg = tf.keras.metrics.Mean()
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='test_accuracy')
for x, y in train_dataset:
loss = train_step(x, y)
epoch_loss_avg(loss)
for x, y in test_dataset:
predictions = test_step(x)
test_accuracy.update_state(y, predictions)
print('Epoch {}: loss={}, test accuracy={}'.format(epoch, epoch_loss_avg.result(), test_accuracy.result())) | _____no_output_____ | Apache-2.0 | site/en/guide/mixed_precision.ipynb | DorianKodelja/docs |
Tabular Datasets As we have already discovered, Elements are simple wrappers around your data that provide a semantically meaningful representation. HoloViews can work with a wide variety of data types, but many of them can be categorized as either: * **Tabular:** Tables of flat columns, or * **Gridded:** Array-like data on 2-dimensional or N-dimensional grids These two general data types are explained in detail in the [Tabular Data](../user_guide/07-Tabular_Datasets.ipynb) and [Gridded Data](../user_guide/08-Gridded_Datasets.ipynb) user guides, including all the many supported formats (including Python dictionaries of NumPy arrays, pandas ``DataFrames``, dask ``DataFrames``, and xarray ``DataArrays`` and ``Datasets``). In this Getting-Started guide we provide a quick overview and introduction to two of the most flexible and powerful formats: columnar **pandas** DataFrames (in this section), and gridded **xarray** Datasets (in the next section). TabularTabular data (also called columnar data) is one of the most common, general, and versatile data formats, corresponding to how data is laid out in a spreadsheet. There are many different ways to put data into a tabular format, but for interactive analysis having [**tidy data**](http://www.jeannicholashould.com/tidy-data-in-python.html) provides flexibility and simplicity. For tidy data, the **columns** of the table represent **variables** or **dimensions** and the **rows** represent **observations**. The best way to understand this format is to look at such a dataset: | import numpy as np
import pandas as pd
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
diseases = pd.read_csv('../assets/diseases.csv.gz')
diseases.head() | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
This particular dataset was the subject of an excellent piece of visual journalism in the [Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15). The WSJ data details the incidence of various diseases over time, and was downloaded from the [University of Pittsburgh's Project Tycho](http://www.tycho.pitt.edu/). We can see we have 5 data columns, which each correspond either to independent variables that specify a particular measurement ('Year', 'Week', 'State'), or observed/dependent variables reporting what was then actually measured (the 'measles' or 'pertussis' incidence). Knowing the distinction between those two types of variables is crucial for doing visualizations, but unfortunately the tabular format does not declare this information. Plotting 'Week' against 'State' would not be meaningful, whereas 'measles' for each 'State' (averaging or summing across the other dimensions) would be fine, and there's no way to deduce those constraints from the tabular format. Accordingly, we will first make a HoloViews object called a ``Dataset`` that declares the independent variables (called key dimensions or **kdims** in HoloViews) and dependent variables (called value dimensions or **vdims**) that you want to work with: | vdims = [('measles', 'Measles Incidence'), ('pertussis', 'Pertussis Incidence')]
ds = hv.Dataset(diseases, ['Year', 'State'], vdims) | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
Here we've used an optional tuple-based syntax **``(name,label)``** to specify a more meaningful description for the ``vdims``, while using the original short descriptions for the ``kdims``. We haven't yet specified what to do with the ``Week`` dimension, but we are only interested in yearly averages, so let's just tell HoloViews to average over all remaining dimensions: | ds = ds.aggregate(function=np.mean)
ds | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
(We'll cover aggregations like ``np.mean`` in detail later, but here the important bit is simply that the ``Week`` dimension can now be ignored.)The ``repr`` shows us both the ``kdims`` (in square brackets) and the ``vdims`` (in parentheses) of the ``Dataset``. Because it can hold arbitrary combinations of dimensions, a ``Dataset`` is *not* immediately visualizable. There's no single clear mapping from these four dimensions onto a two-dimensional page, hence the textual representation shown above.To make this data visualizable, we'll need to provide a bit more metadata, by selecting one of the large library of Elements that can help answer the questions we want to ask about the data. Perhaps the most obvious representation of this dataset is as a ``Curve`` displaying the incidence for each year, for each state. We could pull out individual columns one by one from the original dataset, but now that we have declared information about the dimensions, the cleanest approach is to map the dimensions of our ``Dataset`` onto the dimensions of an Element using ``.to``: | %%opts Curve [width=600 height=250] {+framewise}
(ds.to(hv.Curve, 'Year', 'measles') + ds.to(hv.Curve, 'Year', 'pertussis')).cols(1) | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
Here we specified two ``Curve`` elements showing measles and pertussis incidence respectively (the vdims), per year (the kdim), and laid them out in a vertical column. You'll notice that even though we specified only the short name for the value dimensions, the plot shows the longer names ("Measles Incidence", "Pertussis Incidence") that we declared on the ``Dataset``.You'll also notice that we automatically received a dropdown menu to select which ``State`` to view. Each ``Curve`` ignores unused value dimensions, because additional measurements don't affect each other, but HoloViews has to do *something* with every key dimension for every such plot. If the ``State`` (or any other key dimension) isn't somehow plotted or aggregated over, then HoloViews has to leave choosing a value for it to the user, hence the selection widget. Other options for what to do with extra dimensions or just extra data ranges are illustrated below. SelectingOne of the most common thing we might want to do is to select only a subset of the data. The ``select`` method makes this extremely easy, letting you select a single value, a list of values supplied as a list, or a range of values supplied as a tuple. Here we will use ``select`` to display the measles incidence in four states over one decade. After applying the selection, we use the ``.to`` method as shown earlier, now displaying the data as ``Bars`` indexed by 'Year' and 'State' key dimensions and displaying the 'Measles Incidence' value dimension: | %%opts Bars [width=800 height=400 tools=['hover'] group_index=1 legend_position='top_left']
states = ['New York', 'New Jersey', 'California', 'Texas']
ds.select(State=states, Year=(1980, 1990)).to(hv.Bars, ['Year', 'State'], 'measles').sort() | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
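For reference, the three forms of selection mentioned above look roughly like this (a sketch, not in the original guide; each call returns a new `Dataset`):

```python
# Select by a single value, by a list of values, or by a (start, stop) range.
texas_only = ds.select(State='Texas')
two_states = ds.select(State=['Texas', 'California'])
one_decade = ds.select(Year=(1980, 1990))
```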
FacetingAbove we already saw what happens to key dimensions that we didn't explicitly assign to the Element using the ``.to`` method: they are grouped over, popping up a set of widgets so the user can select the values to show at any one time. However, using widgets is not always the most effective way to view the data, and a ``Dataset`` lets you specify other alternatives using the ``.overlay``, ``.grid`` and ``.layout`` methods. For instance, we can lay out each state separately using ``.grid``: | %%opts Curve [width=200] (color='indianred')
grouped = ds.select(State=states, Year=(1930, 2005)).to(hv.Curve, 'Year', 'measles')
grouped.grid('State') | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
Or we can take the same grouped object and ``.overlay`` the individual curves instead of laying them out in a grid: | %%opts Curve [width=600] (color=Cycle(values=['indianred', 'slateblue', 'lightseagreen', 'coral']))
grouped.overlay('State') | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
These faceting methods even compose together, meaning that if we had more key dimensions we could ``.overlay`` one dimension, ``.grid`` another and have a widget for any other remaining key dimensions. AggregatingInstead of selecting a subset of the data, another common operation supported by HoloViews is computing aggregates. When we first loaded this dataset, we aggregated over the 'Week' column to compute the mean incidence for every year, thereby reducing our data significantly. The ``aggregate`` method is therefore very useful to compute statistics from our data.A simple example using our dataset is to compute the mean and standard deviation of the Measles Incidence by ``'Year'``. We can express this simply by passing the key dimensions to aggregate over (in this case just the 'Year') along with a function and optional ``spreadfn`` to compute the statistics we want. The ``spread_fn`` will append the name of the function to the dimension name so we can reference the computed value separately. Once we have computed the aggregate, we can simply cast it to a ``Curve`` and ``ErrorBars``: | %%opts Curve [width=600]
agg = ds.aggregate('Year', function=np.mean, spreadfn=np.std)
(hv.Curve(agg) * hv.ErrorBars(agg,vdims=['measles', 'measles_std'])).redim.range(measles=(0, None)) | _____no_output_____ | BSD-3-Clause | examples/getting_started/3-Tabular_Datasets.ipynb | adsbxchange/holoviews |
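Returning to the remark above that the faceting methods compose, here is a rough sketch (not in the original guide). It assumes we first melt the two disease columns into an extra 'Disease' key dimension so that there are two dimensions left to facet over:

```python
# Hypothetical reshaping: 'Disease' and 'Incidence' are names introduced here.
tidy = diseases.melt(id_vars=['Year', 'Week', 'State'],
                     value_vars=['measles', 'pertussis'],
                     var_name='Disease', value_name='Incidence')
ds2 = hv.Dataset(tidy, ['Year', 'State', 'Disease'], 'Incidence').aggregate(function=np.mean)
curves = ds2.select(State=states, Year=(1930, 2005)).to(hv.Curve, 'Year', 'Incidence')
curves.overlay('Disease').grid('State')  # overlay one key dimension, grid another
```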
First steps with xmovie | import warnings
import matplotlib.pyplot as plt
import xarray as xr
from shapely.errors import ShapelyDeprecationWarning
from xmovie import Movie
warnings.filterwarnings(
action='ignore',
category=ShapelyDeprecationWarning, # in cartopy
)
warnings.filterwarnings(
action="ignore",
category=UserWarning,
message=r"No `(vmin|vmax)` provided. Data limits are calculated from input. Depending on the input this can take long. Pass `\1` to avoid this step"
)
%matplotlib inline | _____no_output_____ | MIT | docs/examples/quickstart.ipynb | zmoon/xmovie |
Basics | # Load test dataset
ds = xr.tutorial.open_dataset('air_temperature').isel(time=slice(0, 150))
# Create movie object
mov = Movie(ds.air) | _____no_output_____ | MIT | docs/examples/quickstart.ipynb | zmoon/xmovie |
Preview movie frames | # Preview 10th frame
mov.preview(10)
plt.savefig("movie_preview.png")
! rm -f frame*.png *.mp4 *.gif | rm: cannot remove 'frame*.png': No such file or directory
rm: cannot remove '*.mp4': No such file or directory
rm: cannot remove '*.gif': No such file or directory
| MIT | docs/examples/quickstart.ipynb | zmoon/xmovie |
Create movie files | mov.save('movie.mp4') # Use to save a high quality mp4 movie
mov.save('movie_gif.gif') # Use to save a gif | Movie created at movie.mp4
Movie created at movie_mp4.mp4
GIF created at movie_gif.gif
| MIT | docs/examples/quickstart.ipynb | zmoon/xmovie |
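The `save` call also accepts optional arguments to control output quality and speed. The keyword names below are taken from xmovie's documented options and should be treated as assumptions if your installed version differs:

```python
# A sketch of tweaking the output; argument names (framerate, gif_framerate,
# progress, overwrite_existing) are assumed from xmovie's documentation.
mov.save(
    'movie_fast.gif',
    progress=True,            # show a progress bar while rendering frames
    overwrite_existing=True,  # replace any previous output file
    framerate=20,             # frames per second of the intermediate mp4
    gif_framerate=10,         # frames per second of the final gif
)
```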