Dataset columns: markdown (string, 0–1.02M chars) · code (string, 0–832k chars) · output (string, 0–1.02M chars) · license (string, 3–36 chars) · path (string, 6–265 chars) · repo_name (string, 6–127 chars)
There are many other operations... but you will find more power in *pandas* for this.

Copy and "deep copy"
- when objects are passed between functions, you want to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference)
A = array([[1, 2], [3, 4]])  # assumes numpy names were imported earlier, e.g. "from numpy import *"
A

B = A  # now B is referring to the same array data as A
B
A == B  # check this

# changing B affects A
B[0,0] = 10
B
A
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
If we want to avoid this behavior and get a new, completely independent object `B` copied from `A`, we need to do a so-called "deep copy" using the function `copy`.
B = copy(A)

# now, if we modify B, A is not affected
B[0,0] = -5
B
A
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
Iterating over array elements

> Vectorization describes the absence of any explicit looping, indexing, etc., in the code - these things are taking place, of course, just "behind the scenes" (in optimized, pre-compiled C code). (source: numpy website)

- Generally, we want to avoid iterating over the elements of arrays at all costs
- In an *interpreted language* like Python (or MATLAB, or R), iterations are really slow compared to vectorized operations (see the short comparison below)
- Always use numpy functions, which are optimized; if you try a `for` loop you know what you get

Type casting

- Numpy arrays are *statically typed*: the type of an array does not change once created
- but we can explicitly cast an array of some type to another, using the `astype` function (see also the similar `asarray` function)
- This always creates a new array of the new type
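To make the speed difference concrete, here is a small comparison that is not part of the original notebook: summing squares with an explicit Python loop versus the vectorized NumPy equivalent. The array size and the use of `time.time` are arbitrary choices for illustration.

```python
import time
import numpy as np

v = np.arange(1000000, dtype=np.float64)

# explicit Python loop over the array elements
start = time.time()
total = 0.0
for x in v:
    total += x * x
loop_seconds = time.time() - start

# vectorized equivalent, executed in pre-compiled C code
start = time.time()
total_vec = np.sum(v * v)
vec_seconds = time.time() - start

print(total, total_vec)           # same result
print(loop_seconds, vec_seconds)  # the loop is typically far slower
```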
# M is assumed to be a numpy array defined earlier in the notebook
M.dtype
M

M2 = M.astype(bool)
M2

M3 = M.astype(str)
M3
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
Versions
%reload_ext version_information
%version_information numpy
_____no_output_____
MIT
pydata/03_numerics.ipynb
andreamelloncelli/cineca-lectures
Uploading an image with graphical annotations stored in a CSV file
======================

We'll be using standard python tools to parse CSV and create an XML document describing cell nuclei for BisQue.

Make sure you have the bisque api installed:

> pip install bisque-api
import os
import csv
from datetime import datetime
try:
    from lxml import etree
except ImportError:
    import xml.etree.ElementTree as etree
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Include BisQue API
from bqapi import BQSession
from bqapi.util import save_blob
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
define some paths
path = '.'
path_img = os.path.join(path, 'BisQue_CombinedSubtractions.lsm')
path_csv = os.path.join(path, 'BisQue_CombinedSubtractions.csv')
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Parse CSV file and load nuclei positions
------------------------------------------

We'll create a list of XYZT coordinates with confidence.
# x, y, z, t, confidence
coords = []
with open(path_csv, 'rb') as csvfile:
    reader = csv.reader(csvfile)
    h = reader.next()
    for r in reader:
        c = (r[0], r[1], r[2], r[4])
        print c
        coords.append(c)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Initialize authenticated session
--------------

Initialize a BisQue session using simple user credentials.
root = 'https://bisque.cyverse.org'
user = 'demo'
pswd = 'iplant'

session = BQSession().init_local(user, pswd, bisque_root=root, create_mex=False)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Create XML image descriptor
---------------------------

We'll provide a suggested path in the remote user's directory.
path_on_bisque = 'demo/nuclei_%s/%s'%(datetime.now().strftime('%Y%m%dT%H%M%S'), os.path.basename(path_img))
resource = etree.Element('image', name=path_on_bisque)
print etree.tostring(resource, pretty_print=True)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Upload the image
-----------------
# use import service to /import/transfer activating import service
r = etree.XML(session.postblob(path_img, xml=resource)).find('./')

if r is None or r.get('uri') is None:
    print 'Upload failed'
else:
    print 'Uploaded ID: %s, URL: %s\n'%(r.get('resource_uniq'), r.get('uri'))
    print etree.tostring(r, pretty_print=True)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Add graphical annotations
------------------------------

We'll create point annotations as an XML document attached to the image we just uploaded into BisQue.
g = etree.SubElement(r, 'gobject', type='My nuclei')
for c in coords:
    p = etree.SubElement(g, 'point')
    etree.SubElement(p, 'vertex', x=c[0], y=c[1], z=c[2])
    etree.SubElement(p, 'tag', name='confidence', value=c[3])

print etree.tostring(r, pretty_print=True)
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Save graphical annotations to the system
------------------------------------------

After storing, all annotations become searchable.
url = session.service_url('data_service')
r = session.postxml(url, r)

if r is None or r.get('uri') is None:
    print 'Adding annotations failed'
else:
    print 'Image ID: %s, URL: %s'%(r.get('resource_uniq'), r.get('uri'))
_____no_output_____
BSD-3-Clause
BisQue_Graphical_Annotations.ipynb
benlazarine/bisque-notebooks
Copyright © 2017-2021 ABBYY Production LLC
#@title # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
*k*-means clustering

[Download the tutorial as a Jupyter notebook](https://github.com/neoml-lib/neoml/blob/master/NeoML/docs/en/Python/tutorials/KMeans.ipynb)

In this tutorial, we will use the NeoML implementation of the *k*-means clustering algorithm to clusterize a randomly generated dataset.

The tutorial includes the following steps:

* [Generate the dataset](#Generate-the-dataset)
* [Cluster the data](#Cluster-the-data)
* [Visualize the results](#Visualize-the-results)

Generate the dataset

*Note:* This section doesn't have any NeoML-specific code. It just generates a dataset. If you are not running this notebook, you may [skip](#Cluster-the-data) this section.

Let's generate a dataset of 4 clusters on the plane. Each cluster will be generated from its center plus noise drawn from a normal distribution for each coordinate.
import numpy as np

np.random.seed(451)

n_dots = 128
n_clusters = 4
centers = np.array([(-2., -2.), (-2., 2.), (2., -2.), (2., 2.)])
X = np.zeros(shape=(n_dots, 2), dtype=np.float32)
y = np.zeros(shape=(n_dots,), dtype=np.int32)

for i in range(n_dots):
    # Choose random center
    cluster_id = np.random.randint(0, n_clusters)
    y[i] = cluster_id
    # object = center + some noise
    X[i, 0] = centers[cluster_id][0] + np.random.normal(0, 1)
    X[i, 1] = centers[cluster_id][1] + np.random.normal(0, 1)
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
Cluster the data Now we'll create a `neoml.Clustering.KMeans` class that represents the clustering algorithm, and feed the data into it.
import neoml

kmeans = neoml.Clustering.KMeans(max_iteration_count=1000,
                                 cluster_count=n_clusters,
                                 thread_count=4)
y_pred, centers_pred, vars_pred = kmeans.clusterize(X)
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
Before going further let's take a look at the returned data.
print('y_pred')
print(' ', type(y_pred))
print(' ', y_pred.shape)
print(' ', y_pred.dtype)

print('centers_pred')
print(' ', type(centers_pred))
print(' ', centers_pred.shape)
print(' ', centers_pred.dtype)

print('vars_pred')
print(' ', type(vars_pred))
print(' ', vars_pred.shape)
print(' ', vars_pred.dtype)
y_pred
  <class 'numpy.ndarray'>
  (128,)
  int32
centers_pred
  <class 'numpy.ndarray'>
  (4, 2)
  float64
vars_pred
  <class 'numpy.ndarray'>
  (4, 2)
  float64
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
As you can see, the `y_pred` array contains the cluster index of each object. `centers_pred` and `vars_pred` contain the centers and variances of each cluster.

Visualize the results

In this section we'll draw both clusterizations: ground truth and predicted.
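As a small aside that is not in the original tutorial, the predicted labels can be inspected directly; for example, `np.bincount` gives the number of points assigned to each cluster, and the predicted centers should lie near the true ones (up to a relabeling of the clusters).

```python
import numpy as np

# number of points assigned to each predicted cluster
print(np.bincount(y_pred, minlength=n_clusters))

# predicted cluster centers, to compare against the `centers` used for generation
print(centers_pred)
```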
%matplotlib inline

import matplotlib.pyplot as plt

colors = {0: 'r', 1: 'g', 2: 'b', 3: 'y'}

# Create figure with 2 subplots
fig, axs = plt.subplots(ncols=2)
fig.set_size_inches(10, 5)

# Show ground truth
axs[0].set_title('Ground truth')
axs[0].scatter(X[:, 0], X[:, 1], marker='o', c=list(map(colors.get, y)))
axs[0].scatter(centers[:, 0], centers[:, 1], marker='x', c='black')

# Show NeoML markup
axs[1].set_title('NeoML K-Means')
axs[1].scatter(X[:, 0], X[:, 1], marker='o', c=list(map(colors.get, y_pred)))
axs[1].scatter(centers_pred[:, 0], centers_pred[:, 1], marker='x', c='black')

plt.show()
_____no_output_____
ECL-2.0
NeoML/docs/en/Python/tutorials/KMeans.ipynb
NValerij/neoml
BS: 32 MNIST![](../ckpts/nb/grads_300_bs_16_dataset_mnist.png)![](../ckpts/nb/grads_200_bs_16_dataset_mnist.png)![](../ckpts/nb/grads_100_bs_16_dataset_mnist.png)![](../ckpts/nb/grads_50_bs_16_dataset_mnist.png) BS: 128 MNIST![](../ckpts/nb/grads_300_bs_128_dataset_mnist.png)![](../ckpts/nb/grads_200_bs_128_dataset_mnist.png)![](../ckpts/nb/grads_100_bs_128_dataset_mnist.png)![](../ckpts/nb/grads_50_bs_128_dataset_mnist.png)
if sdirs_algo=='pca': sdirs0, _ = pca_transform(sdirs0, n0) # sdirs1, _ = pca_transform(sdirs1, n1) else: sdirs0, _ = np.linalg.qr(sdirs0) # sdirs1, _ = np.linalg.qr(sdirs1) sdirs0.shape, sdirs1.shape sdirs = [[t.Tensor(sdirs0[:, _].reshape(output_size, input_size)), t.Tensor(sdirs1[:,_].reshape(output_size,))] for _ in range(sdirs0.shape[1])]
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
pretraining
trainloader = get_trainloader(dataset, 256, False) testloader = get_testloader(dataset, 256, False) model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) correcti = 0 x_test = 0 for idx, (data, labels) in enumerate(testloader): x, y = data.to(device), labels.to(device) y_hat = model(x) loss_val = loss(y_hat, y) predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() y_test = correcti/len(testloader.dataset) x_test, y_test
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
w/o gradient approximation
model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xb_train, yb_train = [], [] xb_test, yb_test =[], [] for _ in tqdm(range(1, epochs+1), leave=False): xb_train.append(_) correcti = 0 for idx, (data, labels) in enumerate(trainloader): x, y = data.to(device), labels.to(device) optimizer = t.optim.SGD(model.parameters(), lr=lr) optimizer.zero_grad() y_hat = model(x) loss_val = loss(y_hat, y) loss_val.backward() optimizer.step() predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() yb_train.append(correcti/len(trainloader.dataset)) correcti = 0 for idx, (data, labels) in enumerate(testloader): x, y = data.to(device), labels.to(device) y_hat = model(x) loss_val = loss(y_hat, y) predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() yb_test.append(correcti/len(testloader.dataset)) print('{} \t {:.4f} \t {:.2f} \t {:.2f}'.format( xb_train[-1], loss_val.item(), yb_train[-1], yb_test[-1] ))
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
gradient approximation using all directions
model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xa_train, ya_train = [], [] xa_test, ya_test = [], [] for _ in tqdm(range(1, epochs+1), leave=False): start = time.time() xa_train.append(_) xa_test.append(_) correcti = 0 for idx, (data, labels) in enumerate(trainloader): x, y = data.to(device), labels.to(device) optimizer = t.optim.SGD(model.parameters(), lr=lr) optimizer.zero_grad() y_hat = model(x) loss_val = loss(y_hat, y) loss_val.backward() _, error = gradient_approximation(model, sdirs, device, []) optimizer.step() predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() ya_train.append(correcti/len(trainloader.dataset)) correcti = 0 for idx, (data, labels) in enumerate(testloader): x, y = data.to(device), labels.to(device) y_hat = model(x) loss_val = loss(y_hat, y) predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() ya_test.append(correcti/len(testloader.dataset)) print('{} \t {:.4f} \t {:.2f} \t {:.2f}'.format( xa_train[-1], loss_val.item(), ya_train[-1], ya_test[-1] ))
_____no_output_____
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
gradient approximation using n directions
n = 1 model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xe_train, ye_train = [], [] xe_test, ye_test = [], [] for _ in tqdm(range(1, epochs+1), leave=False): start = time.time() xe_train.append(_) xe_test.append(_) correcti = 0 for idx, (data, labels) in enumerate(trainloader): x, y = data.to(device), labels.to(device) optimizer = t.optim.SGD(model.parameters(), lr=lr) optimizer.zero_grad() y_hat = model(x) loss_val = loss(y_hat, y) loss_val.backward() _, error = gradient_approximation( model, [sdirs[idx]], device, []) optimizer.step() predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() ye_train.append(correcti/len(trainloader.dataset)) correcti = 0 for idx, (data, labels) in enumerate(testloader): x, y = data.to(device), labels.to(device) y_hat = model(x) loss_val = loss(y_hat, y) predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() ye_test.append(correcti/len(testloader.dataset)) print('{} \t {:.4f} \t {:.2f} \t {:.2f}'.format( xe_train[-1], loss_val.item(), ye_train[-1], ye_test[-1] )) n = 10 model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xc_train, yc_train = [], [] xc_test, yc_test = [], [] for _ in tqdm(range(1, epochs+1), leave=False): start = time.time() xc_train.append(_) xc_test.append(_) correcti = 0 for idx, (data, labels) in enumerate(trainloader): x, y = data.to(device), labels.to(device) optimizer = t.optim.SGD(model.parameters(), lr=lr) optimizer.zero_grad() y_hat = model(x) loss_val = loss(y_hat, y) loss_val.backward() _, error = gradient_approximation( model, sdirs[idx: idx+n] , device, []) optimizer.step() predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() yc_train.append(correcti/len(trainloader.dataset)) correcti = 0 for idx, (data, labels) in enumerate(testloader): x, y = data.to(device), labels.to(device) y_hat = model(x) loss_val = loss(y_hat, y) predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() yc_test.append(correcti/len(testloader.dataset)) print('{} \t {:.4f} \t {:.2f} \t {:.2f}'.format( xc_train[-1], loss_val.item(), yc_train[-1], yc_test[-1] )) n = 100 model = resnet.resnet18(num_channels=1, num_classes=output_size).to(device) model.load_state_dict(t.load('../ckpts/init/{}_resnet18.init'.format(dataset))) xd_train, yd_train = [], [] xd_test, yd_test = [], [] for _ in tqdm(range(1, epochs+1), leave=False): start = time.time() xd_train.append(_) xd_test.append(_) correcti = 0 for idx, (data, labels) in enumerate(trainloader): x, y = data.to(device), labels.to(device) optimizer = t.optim.SGD(model.parameters(), lr=lr) optimizer.zero_grad() y_hat = model(x) loss_val = loss(y_hat, y) loss_val.backward() _, error = gradient_approximation( model, sdirs[idx: idx+n], device, []) optimizer.step() predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() yd_train.append(correcti/len(trainloader.dataset)) correcti = 0 for idx, (data, labels) in enumerate(testloader): x, y = data.to(device), labels.to(device) y_hat = model(x) loss_val = loss(y_hat, y) predi = y_hat.argmax(1, keepdim=True) correcti += predi.eq(y.view_as(predi)).sum().item() yd_test.append(correcti/len(testloader.dataset)) print('{} \t {:.4f} \t {:.2f} \t {:.2f}'.format( xd_train[-1], loss_val.item(), yd_train[-1], 
yd_test[-1] )) plt.figure() plt.plot([x_test]+xb_train, [y_test]+yb_test, label='SGD', c='r') # plt.plot([x_test]+xa_train, [y_test]+ya_test, label='SGD {}-approx.'.format(len(sdirs)), c='b') plt.plot([x_test]+xc_train, [y_test]+yc_test, label='SGD 10-approx.', c='g') plt.plot([x_test]+xd_train, [y_test]+yd_test, label='SGD 100-approx.', c='k') plt.plot([x_test]+xe_train, [y_test]+ye_test, label='SGD 1-approx.', c='c') history = { 'test': [x_test, y_test], # 'a': [xa_train, ya_train, xa_test, ya_test], 'b': [xb_train, yb_train, xb_test, yb_test], 'c': [xc_train, yc_train, xc_test, yc_test], 'd': [xd_train, yd_train, xd_test, yd_test], 'e': [xe_train, ye_train, xe_test, ye_test], } name = 'clf_{}_{}_algo_{}_bs_{}_sgd_vs_sgd_approx_random_grad_sampling'.format( 'resnet18', dataset, sdirs_algo, bs) print(name) pkl.dump(history, open('../ckpts/history/{}.pkl'.format(name), 'wb')) plt.xlabel('epochs') plt.ylabel('accuracy') plt.legend() plt.savefig( '../ckpts/plots/{}.png'.format(name), dpi=300, bbox_inches='tight' )
clf_resnet18_mnist_algo_pca_bs_16_sgd_vs_sgd_approx_random_grad_sampling
MIT
LBGM/nb/11. RandomGradientSamplingMNIST.ipynb
sidsrini12/FURL_Sim
Dependencies
import os import sys import cv2 import shutil import random import warnings import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from tensorflow import set_random_seed from sklearn.utils import class_weight from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, cohen_kappa_score from keras import backend as K from keras.models import Model from keras.utils import to_categorical from keras import optimizers, applications from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler from keras.layers import Dense, Dropout, GlobalAveragePooling2D, GlobalMaxPooling2D, Input, Flatten, BatchNormalization, Activation def seed_everything(seed=0): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) set_random_seed(0) seed = 0 seed_everything(seed) %matplotlib inline sns.set(style="whitegrid") warnings.filterwarnings("ignore") sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/')) from efficientnet import *
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
np_resource = np.dtype([("resource", np.ubyte, 1)]) Using TensorFlow backend.
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Load data
hold_out_set = pd.read_csv('../input/aptos-data-split/hold-out.csv') X_train = hold_out_set[hold_out_set['set'] == 'train'] X_val = hold_out_set[hold_out_set['set'] == 'validation'] test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv') print('Number of train samples: ', X_train.shape[0]) print('Number of validation samples: ', X_val.shape[0]) print('Number of test samples: ', test.shape[0]) # Preprocecss data X_train["id_code"] = X_train["id_code"].apply(lambda x: x + ".png") X_val["id_code"] = X_val["id_code"].apply(lambda x: x + ".png") test["id_code"] = test["id_code"].apply(lambda x: x + ".png") display(X_train.head())
Number of train samples: 2929 Number of validation samples: 733 Number of test samples: 1928
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model parameters
# Model parameters FACTOR = 4 BATCH_SIZE = 8 * FACTOR EPOCHS = 20 WARMUP_EPOCHS = 5 LEARNING_RATE = 1e-4 * FACTOR WARMUP_LEARNING_RATE = 1e-3 * FACTOR HEIGHT = 224 WIDTH = 224 CHANNELS = 3 ES_PATIENCE = 5 RLROP_PATIENCE = 3 DECAY_DROP = 0.5 LR_WARMUP_EPOCHS_1st = 2 LR_WARMUP_EPOCHS_2nd = 5 STEP_SIZE = len(X_train) // BATCH_SIZE TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Pre-procecess images
train_base_path = '../input/aptos2019-blindness-detection/train_images/' test_base_path = '../input/aptos2019-blindness-detection/test_images/' train_dest_path = 'base_dir/train_images/' validation_dest_path = 'base_dir/validation_images/' test_dest_path = 'base_dir/test_images/' # Making sure directories don't exist if os.path.exists(train_dest_path): shutil.rmtree(train_dest_path) if os.path.exists(validation_dest_path): shutil.rmtree(validation_dest_path) if os.path.exists(test_dest_path): shutil.rmtree(test_dest_path) # Creating train, validation and test directories os.makedirs(train_dest_path) os.makedirs(validation_dest_path) os.makedirs(test_dest_path) def crop_image(img, tol=7): if img.ndim ==2: mask = img>tol return img[np.ix_(mask.any(1),mask.any(0))] elif img.ndim==3: gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) mask = gray_img>tol check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0] if (check_shape == 0): # image is too dark so that we crop out everything, return img # return original image else: img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))] img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))] img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))] img = np.stack([img1,img2,img3],axis=-1) return img def circle_crop(img): img = crop_image(img) height, width, depth = img.shape largest_side = np.max((height, width)) img = cv2.resize(img, (largest_side, largest_side)) height, width, depth = img.shape x = width//2 y = height//2 r = np.amin((x, y)) circle_img = np.zeros((height, width), np.uint8) cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1) img = cv2.bitwise_and(img, img, mask=circle_img) img = crop_image(img) return img def preprocess_image(base_path, save_path, image_id, HEIGHT, WIDTH, sigmaX=10): image = cv2.imread(base_path + image_id) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) image = circle_crop(image) image = cv2.resize(image, (HEIGHT, WIDTH)) image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128) cv2.imwrite(save_path + image_id, image) # Pre-procecss train set for i, image_id in enumerate(X_train['id_code']): preprocess_image(train_base_path, train_dest_path, image_id, HEIGHT, WIDTH) # Pre-procecss validation set for i, image_id in enumerate(X_val['id_code']): preprocess_image(train_base_path, validation_dest_path, image_id, HEIGHT, WIDTH) # Pre-procecss test set for i, image_id in enumerate(test['id_code']): preprocess_image(test_base_path, test_dest_path, image_id, HEIGHT, WIDTH)
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Data generator
datagen=ImageDataGenerator(rescale=1./255, rotation_range=360, horizontal_flip=True, vertical_flip=True) train_generator=datagen.flow_from_dataframe( dataframe=X_train, directory=train_dest_path, x_col="id_code", y_col="diagnosis", class_mode="raw", batch_size=BATCH_SIZE, target_size=(HEIGHT, WIDTH), seed=seed) valid_generator=datagen.flow_from_dataframe( dataframe=X_val, directory=validation_dest_path, x_col="id_code", y_col="diagnosis", class_mode="raw", batch_size=BATCH_SIZE, target_size=(HEIGHT, WIDTH), seed=seed) test_generator=datagen.flow_from_dataframe( dataframe=test, directory=test_dest_path, x_col="id_code", batch_size=1, class_mode=None, shuffle=False, target_size=(HEIGHT, WIDTH), seed=seed) def cosine_decay_with_warmup(global_step, learning_rate_base, total_steps, warmup_learning_rate=0.0, warmup_steps=0, hold_base_rate_steps=0): """ Cosine decay schedule with warm up period. In this schedule, the learning rate grows linearly from warmup_learning_rate to learning_rate_base for warmup_steps, then transitions to a cosine decay schedule. :param global_step {int}: global step. :param learning_rate_base {float}: base learning rate. :param total_steps {int}: total number of training steps. :param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}). :param warmup_steps {int}: number of warmup steps. (default: {0}). :param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}). :param global_step {int}: global step. :Returns : a float representing learning rate. :Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps. """ if total_steps < warmup_steps: raise ValueError('total_steps must be larger or equal to warmup_steps.') learning_rate = 0.5 * learning_rate_base * (1 + np.cos( np.pi * (global_step - warmup_steps - hold_base_rate_steps ) / float(total_steps - warmup_steps - hold_base_rate_steps))) if hold_base_rate_steps > 0: learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps, learning_rate, learning_rate_base) if warmup_steps > 0: if learning_rate_base < warmup_learning_rate: raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.') slope = (learning_rate_base - warmup_learning_rate) / warmup_steps warmup_rate = slope * global_step + warmup_learning_rate learning_rate = np.where(global_step < warmup_steps, warmup_rate, learning_rate) return np.where(global_step > total_steps, 0.0, learning_rate) class WarmUpCosineDecayScheduler(Callback): """Cosine decay with warmup learning rate scheduler""" def __init__(self, learning_rate_base, total_steps, global_step_init=0, warmup_learning_rate=0.0, warmup_steps=0, hold_base_rate_steps=0, verbose=0): """ Constructor for cosine decay with warmup learning rate scheduler. :param learning_rate_base {float}: base learning rate. :param total_steps {int}: total number of training steps. :param global_step_init {int}: initial global step, e.g. from previous checkpoint. :param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}). :param warmup_steps {int}: number of warmup steps. (default: {0}). :param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}). :param verbose {int}: quiet, 1: update messages. (default: {0}). 
""" super(WarmUpCosineDecayScheduler, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.global_step = global_step_init self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.hold_base_rate_steps = hold_base_rate_steps self.verbose = verbose self.learning_rates = [] def on_batch_end(self, batch, logs=None): self.global_step = self.global_step + 1 lr = K.get_value(self.model.optimizer.lr) self.learning_rates.append(lr) def on_batch_begin(self, batch, logs=None): lr = cosine_decay_with_warmup(global_step=self.global_step, learning_rate_base=self.learning_rate_base, total_steps=self.total_steps, warmup_learning_rate=self.warmup_learning_rate, warmup_steps=self.warmup_steps, hold_base_rate_steps=self.hold_base_rate_steps) K.set_value(self.model.optimizer.lr, lr) if self.verbose > 0: print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model
def create_model(input_shape): input_tensor = Input(shape=input_shape) base_model = EfficientNetB5(weights=None, include_top=False, input_tensor=input_tensor) base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5') x = GlobalMaxPooling2D()(base_model.output) x = Dropout(0.5)(x) x = Dense(1024)(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = Dropout(0.5)(x) final_output = Dense(1, activation='linear', name='final_output')(x) model = Model(input_tensor, final_output) return model
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Train top layers
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS)) for layer in model.layers: layer.trainable = False for i in range(-7, 0): model.layers[i].trainable = True cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE, total_steps=TOTAL_STEPS_1st, warmup_learning_rate=0.0, warmup_steps=WARMUP_STEPS_1st, hold_base_rate_steps=(2 * STEP_SIZE)) metric_list = ["accuracy"] callback_list = [cosine_lr_1st] optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE) model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list) model.summary() STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size history_warmup = model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=valid_generator, validation_steps=STEP_SIZE_VALID, epochs=WARMUP_EPOCHS, callbacks=callback_list, verbose=2).history
Epoch 1/5 - 56s - loss: 3.6864 - acc: 0.2126 - val_loss: 2.2876 - val_acc: 0.2898 Epoch 2/5 - 42s - loss: 2.0355 - acc: 0.2989 - val_loss: 1.7932 - val_acc: 0.2739 Epoch 3/5 - 42s - loss: 1.3178 - acc: 0.3542 - val_loss: 1.9237 - val_acc: 0.2653 Epoch 4/5 - 42s - loss: 1.3226 - acc: 0.3797 - val_loss: 1.8302 - val_acc: 0.2397 Epoch 5/5 - 42s - loss: 1.2150 - acc: 0.3888 - val_loss: 1.5631 - val_acc: 0.2126
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Fine-tune the complete model
for layer in model.layers: layer.trainable = True es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1) cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE, total_steps=TOTAL_STEPS_2nd, warmup_learning_rate=0.0, warmup_steps=WARMUP_STEPS_2nd, hold_base_rate_steps=(3 * STEP_SIZE)) callback_list = [es, cosine_lr_2nd] optimizer = optimizers.Adam(lr=LEARNING_RATE) model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list) model.summary() history = model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN, validation_data=valid_generator, validation_steps=STEP_SIZE_VALID, epochs=EPOCHS, callbacks=callback_list, verbose=2).history fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 6)) ax1.plot(cosine_lr_1st.learning_rates) ax1.set_title('Warm up learning rates') ax2.plot(cosine_lr_2nd.learning_rates) ax2.set_title('Fine-tune learning rates') plt.xlabel('Steps') plt.ylabel('Learning rate') sns.despine() plt.show()
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model loss graph
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14)) ax1.plot(history['loss'], label='Train loss') ax1.plot(history['val_loss'], label='Validation loss') ax1.legend(loc='best') ax1.set_title('Loss') ax2.plot(history['acc'], label='Train accuracy') ax2.plot(history['val_acc'], label='Validation accuracy') ax2.legend(loc='best') ax2.set_title('Accuracy') plt.xlabel('Epochs') sns.despine() plt.show() # Create empty arays to keep the predictions and labels df_preds = pd.DataFrame(columns=['label', 'pred', 'set']) train_generator.reset() valid_generator.reset() # Add train predictions and labels for i in range(STEP_SIZE_TRAIN + 1): im, lbl = next(train_generator) preds = model.predict(im, batch_size=train_generator.batch_size) for index in range(len(preds)): df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train'] # Add validation predictions and labels for i in range(STEP_SIZE_VALID + 1): im, lbl = next(valid_generator) preds = model.predict(im, batch_size=valid_generator.batch_size) for index in range(len(preds)): df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation'] df_preds['label'] = df_preds['label'].astype('int') def classify(x): if x < 0.5: return 0 elif x < 1.5: return 1 elif x < 2.5: return 2 elif x < 3.5: return 3 return 4 # Classify predictions df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x)) train_preds = df_preds[df_preds['set'] == 'train'] validation_preds = df_preds[df_preds['set'] == 'validation']
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Model Evaluation

Confusion Matrix

Original thresholds
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR'] def plot_confusion_matrix(train, validation, labels=labels): train_labels, train_preds = train validation_labels, validation_preds = validation fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7)) train_cnf_matrix = confusion_matrix(train_labels, train_preds) validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds) train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis] validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis] train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels) validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels) sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train') sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation') plt.show() plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Quadratic Weighted Kappa
def evaluate_model(train, validation): train_labels, train_preds = train validation_labels, validation_preds = validation print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic')) print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic')) print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic')) evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
Train Cohen Kappa score: 0.960 Validation Cohen Kappa score: 0.902 Complete set Cohen Kappa score: 0.949
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Apply model to test set and output predictions
def apply_tta(model, generator, steps=10): step_size = generator.n//generator.batch_size preds_tta = [] for i in range(steps): generator.reset() preds = model.predict_generator(generator, steps=step_size) preds_tta.append(preds) return np.mean(preds_tta, axis=0) preds = apply_tta(model, test_generator) predictions = [classify(x) for x in preds] results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions}) results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4]) # Cleaning created directories if os.path.exists(train_dest_path): shutil.rmtree(train_dest_path) if os.path.exists(validation_dest_path): shutil.rmtree(validation_dest_path) if os.path.exists(test_dest_path): shutil.rmtree(test_dest_path)
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
Predictions class distribution
fig = plt.subplots(sharex='col', figsize=(24, 8.7)) sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test') sns.despine() plt.show() results.to_csv('submission.csv', index=False) display(results.head())
_____no_output_____
MIT
Model backlog/EfficientNet/EfficientNetB5/169 - EfficientNetB5 - Reg - Big Classifier Max.ipynb
ThinkBricks/APTOS2019BlindnessDetection
4.5.1 load and save NDarray
import numpy as np

x = tf.ones(3)  # tf (TensorFlow) is assumed to be imported earlier in the chapter
x

np.save('x.npy', x)
x2 = np.load('x.npy')
x2

y = tf.zeros(4)
np.save('xy.npy', [x, y])
x2, y2 = np.load('xy.npy', allow_pickle=True)
(x2, y2)

mydict = {'x': x, 'y': y}
np.save('mydict.npy', mydict)
mydict2 = np.load('mydict.npy', allow_pickle=True)
mydict2
_____no_output_____
Apache-2.0
ch4_DL_computation/4.5 load and save.ipynb
gunpowder78/Dive-into-DL-TensorFlow2.0
4.5.2 load and save model parameters
X = tf.random.normal((2, 20))
X

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()    # Flatten collapses all dimensions except the first (batch_size)
        self.dense1 = tf.keras.layers.Dense(units=256, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):
        x = self.flatten(inputs)
        x = self.dense1(x)
        output = self.dense2(x)
        return output

net = MLP()
Y = net(X)
Y

net.save_weights("4.5saved_model.h5")

net2 = MLP()
net2.load_weights("4.5saved_model.h5")
Y2 = net2(X)
Y2 == Y
_____no_output_____
Apache-2.0
ch4_DL_computation/4.5 load and save.ipynb
gunpowder78/Dive-into-DL-TensorFlow2.0
Exp 43 analysis

See `./informercial/Makefile` for experimental details.
import os
import numpy as np

from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import seaborn as sns
sns.set_style('ticks')

matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)

from infomercial.exp import meta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint

import gym

# ls ../data/exp2*
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Load and process data
data_path = "/Users/qualia/Code/infomercial/data/"
exp_name = "exp43"

best_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_best.pkl"))
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))

best_params
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Performance of best parameters
env_name = 'BanditOneHigh2-v0' num_episodes = 20*100 # Run w/ best params result = meta_bandit( env_name=env_name, num_episodes=num_episodes, lr=best_params["lr"], tie_threshold=best_params["tie_threshold"], seed_value=19, save="exp43_best_model.pkl" ) # Plot run episodes = result["episodes"] actions =result["actions"] scores_R = result["scores_R"] values_R = result["values_R"] scores_E = result["scores_E"] values_E = result["values_E"] # Get some data from the gym... env = gym.make(env_name) best = env.best print(f"Best arm: {best}, last arm: {actions[-1]}") # Init plot fig = plt.figure(figsize=(6, 14)) grid = plt.GridSpec(5, 1, wspace=0.3, hspace=0.8) # Do plots: # Arm plt.subplot(grid[0, 0]) plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit") plt.plot(episodes, np.repeat(best, np.max(episodes)+1), color="red", alpha=0.8, ls='--', linewidth=2) plt.ylim(-.1, np.max(actions)+1.1) plt.ylabel("Arm choice") plt.xlabel("Episode") # score plt.subplot(grid[1, 0]) plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=10, label="R") plt.scatter(episodes, scores_E, color="purple", alpha=0.9, s=10, label="E") plt.ylabel("log score") plt.xlabel("Episode") plt.semilogy() plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) _ = sns.despine() # Q plt.subplot(grid[2, 0]) plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=10, label="R") plt.scatter(episodes, values_E, color="purple", alpha=0.4, s=10, label="E") plt.ylabel("log Q(s,a)") plt.xlabel("Episode") plt.semilogy() plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) _ = sns.despine() # - plt.savefig("figures/epsilon_bandit.pdf", bbox_inches='tight') plt.savefig("figures/epsilon_bandit.eps", bbox_inches='tight')
Best arm: 0, last arm: 0
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Sensitivity to parameter choices
total_Rs = [] ties = [] lrs = [] trials = list(sorted_params.keys()) for t in trials: total_Rs.append(sorted_params[t]['total_E']) ties.append(sorted_params[t]['tie_threshold']) lrs.append(sorted_params[t]['lr']) # Init plot fig = plt.figure(figsize=(10, 18)) grid = plt.GridSpec(4, 1, wspace=0.3, hspace=0.8) # Do plots: # Arm plt.subplot(grid[0, 0]) plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R") plt.xlabel("Sorted params") plt.ylabel("total R") _ = sns.despine() plt.subplot(grid[1, 0]) plt.scatter(trials, ties, color="black", alpha=.3, s=6, label="total R") plt.xlabel("Sorted params") plt.ylabel("Tie threshold") _ = sns.despine() plt.subplot(grid[2, 0]) plt.scatter(trials, lrs, color="black", alpha=.5, s=6, label="total R") plt.xlabel("Sorted params") plt.ylabel("lr") _ = sns.despine()
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Distributions of parameters
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(2, 1, wspace=0.3, hspace=0.8)

plt.subplot(grid[0, 0])
plt.hist(ties, color="black")
plt.xlabel("tie threshold")
plt.ylabel("Count")
_ = sns.despine()

plt.subplot(grid[1, 0])
plt.hist(lrs, color="black")
plt.xlabel("lr")
plt.ylabel("Count")
_ = sns.despine()
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Distribution of total reward
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)

plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
plt.xlim(0, 10)
_ = sns.despine()
_____no_output_____
MIT
notebooks/exp43_analysis.ipynb
CoAxLab/infomercial
Lambda School Data Science

*Unit 2, Sprint 2, Module 1*

---

Decision Trees

Assignment

- [ ] [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. Go to our Kaggle InClass competition website. You will be given the URL in Slack. Go to the Rules page. Accept the rules of the competition. Notice that the Rules page also has instructions for the Submission process. The Data page has feature definitions.
- [ ] Do train/validate/test split with the Tanzania Waterpumps data.
- [ ] Begin with baselines for classification.
- [ ] Select features. Use a scikit-learn pipeline to encode categoricals, impute missing values, and fit a decision tree classifier.
- [ ] Get your validation accuracy score.
- [ ] Get and plot your feature importances.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.

Stretch Goals

Reading

- A Visual Introduction to Machine Learning
  - [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/)
  - [Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)
- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)
- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)
- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)
- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) — _Don’t worry about understanding the code, just get introduced to the concepts. This 10 minute video has excellent diagrams and explanations._
- [Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)

Doing

- [ ] Add your own stretch goal(s)!
- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features. (For example, [what columns have zeros and shouldn't?](https://github.com/Quartz/bad-data-guide#zeros-replace-missing-values) What columns are duplicates, or nearly duplicates? Can you extract the year from date_recorded? Can you engineer new features, such as the number of years from waterpump construction to waterpump inspection?)
- [ ] Try other [scikit-learn imputers](https://scikit-learn.org/stable/modules/impute.html).
- [ ] Make exploratory visualizations and share on Slack.

Exploratory visualizations

Visualize the relationships between feature(s) and target. I recommend you do this with your training set, after splitting your data.

For this problem, you may want to create a new column to represent the target as a number, 0 or 1. For example:

```python
train['functional'] = (train['status_group']=='functional').astype(int)
```

You can try [Seaborn "Categorical estimate" plots](https://seaborn.pydata.org/tutorial/categorical.html) for features with reasonably few unique values. (With too many unique values, the plot is unreadable.)

- Categorical features. (If there are too many unique values, you can replace less frequent values with "OTHER.")
- Numeric features. (If there are too many unique values, you can [bin with pandas cut / qcut functions](https://pandas.pydata.org/pandas-docs/stable/getting_started/basics.html?highlight=qcut#discretization-and-quantiling).)

You can try [Seaborn linear model plots](https://seaborn.pydata.org/tutorial/regression.html) with numeric features. For this classification problem, you may want to use the parameter `logistic=True`, but it can be slow.

You do _not_ need to use Seaborn, but it's nice because it includes confidence intervals to visualize uncertainty.

High-cardinality categoricals

This code from a previous assignment demonstrates how to replace less frequent values with 'OTHER':

```python
# Reduce cardinality for NEIGHBORHOOD feature ...

# Get a list of the top 10 neighborhoods
top10 = train['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
train.loc[~train['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
test.loc[~test['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'
```
import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* !pip install pandas-profiling==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape # Check Pandas Profiling version import pandas_profiling pandas_profiling.__version__ # Old code for Pandas Profiling version 2.3 # It can be very slow with medium & large datasets. # These parameters will make it faster. # profile = train.profile_report( # check_correlation_pearson=False, # correlations={ # 'pearson': False, # 'spearman': False, # 'kendall': False, # 'phi_k': False, # 'cramers': False, # 'recoded': False, # }, # plot={'histogram': {'bayesian_blocks_bins': False}}, # ) # # New code for Pandas Profiling version 2.4 from pandas_profiling import ProfileReport profile = ProfileReport(train, minimal=True).to_notebook_iframe() profile
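The cell below is a hedged sketch, not the official solution, of the split-and-pipeline steps the checklist asks for: a train/validation split, ordinal encoding of categoricals, median imputation, and a decision tree. It assumes the `train` frame loaded above and the `status_group` target named in the assignment text; the feature selection and all hyperparameters are illustrative choices to adjust against the competition's Data page.

```python
# Sketch only: feature choices and hyperparameters are assumptions, not the official solution.
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Train/validation split (the provided test set has no labels)
train_split, val_split = train_test_split(
    train, test_size=0.2, stratify=train['status_group'], random_state=42)

target = 'status_group'
features = train_split.columns.drop(target)  # consider also dropping ids / high-cardinality columns

X_train, y_train = train_split[features], train_split[target]
X_val, y_val = val_split[features], val_split[target]

pipeline = make_pipeline(
    ce.OrdinalEncoder(),               # encode categoricals as integers
    SimpleImputer(strategy='median'),  # fill missing values
    DecisionTreeClassifier(max_depth=20, random_state=42)
)

pipeline.fit(X_train, y_train)
print('Validation accuracy:', pipeline.score(X_val, y_val))
```

Feature importances can then be read from `pipeline.named_steps['decisiontreeclassifier'].feature_importances_` and plotted against `features`.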
_____no_output_____
MIT
module1-decision-trees/LS_DS_221_assignment.ipynb
SwetaSengupta/DS-Unit-2-Kaggle-Challenge
# Importing some python libraries.
import numpy as np
from numpy.random import randn, rand

import matplotlib.pyplot as pl
from matplotlib.pyplot import plot

import seaborn as sns
%matplotlib inline

# Fixing figure sizes
from pylab import rcParams
rcParams['figure.figsize'] = 10, 5

sns.set_palette('Reds_r')
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
Reaction Network Homework

In this homework, we will study a very simple set of reactions by modelling it in three different ways. First, we shall employ an ODE model called the **Reaction Rate Equation**. Then, we will solve the **Chemical Langevin Equation** and, finally, we will simulate the exact model by "solving" the **Chemical Master Equation**.

The reaction network of choice shall be a simple birth-death process, described by the relations:

$$\begin{align}
\emptyset \stackrel{a}{\to} X,\\
X \stackrel{\mu X}{\to} \emptyset.
\end{align}$$

$X$ here is the population number. Throughout, we shall use $a=10$ and $\mu=1.0$.

Reaction Rate Equation

The reaction rate equation corresponding to the system is

$$\begin{align}
\dot{x}=a-\mu\cdot x,\\
x(0)=x_0.
\end{align}$$

As this is a linear equation, we can solve it exactly, with solution

$$x(t) = a/\mu+(x_0-a/\mu)\, e^{-\mu t}.$$
# Solution of the RRE
def x(t, x0=3, a=10.0, mu=1.0):
    return (x0-a/mu)*np.exp(-t*mu)+a/mu
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
We note that there is a stationary solution, $x(t)=a/\mu$. From the exponential in the solution, we can see that this is an attracting fixed point.
t = np.linspace(0, 3)
x0list = np.array([0.5, 1, 15])

sns.set_palette("Reds", n_colors=3)

for x0 in x0list:
    pl.plot(t, x(t, x0), linewidth=4)

pl.title('Population numbers for different initial conditions.', fontsize=20)
pl.xlabel('Time', fontsize=20)
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
Chemical Langevin Equation

Next, we will model the system by using the CLE. For our particular birth/death process, this will be

$$dX_t=(a-\mu\cdot X_t)dt+(\sqrt{a}-\sqrt{\mu\cdot X_t})dW.$$

To solve this, we shall use the Euler-Maruyama scheme from the previous homework. We fix a positive $\Delta t$. Then, the scheme shall be:

$$X_{n+1}=X_n+(a-\mu\cdot X_n)\Delta t+(\sqrt{a}-\sqrt{\mu\cdot X_n})\cdot \sqrt{\Delta t}\cdot z,\ z\sim N(0,1).$$
def EM(xinit, T, Dt=0.1, a=1, mu=2):
    '''
    Returns the solution of the CLE with parameters a, mu

    Arguments
    =========
    xinit : real, initial condition.
    Dt : real, stepsize of the Euler-Maruyama.
    T : real, final time to reach.
    a : real, parameter of the RHS.
    mu : real, parameter of the RHS.
    '''

    n = int(T/Dt)  # number of steps to reach T
    X = np.zeros(n)
    z = randn(n)

    X[0] = xinit  # Initial condition

    # EM method
    for i in xrange(1, n):
        X[i] = X[i-1] + Dt*(a-mu*X[i-1]) + (np.sqrt(a)-np.sqrt(mu*X[i-1]))*np.sqrt(Dt)*z[i]

    return X
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
Similarly to the previous case, here is a run with multiple initial conditions.
T = 10     # final time to reach
Dt = 0.01  # time-step for EM

# Set the palette to reds with ten colors
sns.set_palette('Reds', 10)

def plotPaths(T, Dt):
    n = int(T/Dt)
    t = np.linspace(0, T, n)
    xinitlist = np.linspace(10, 15, 10)

    for x0 in xinitlist:
        path = EM(xinit=x0, T=T, Dt=Dt, a=10.0, mu=1.0)
        pl.plot(t, path, linewidth=5)

    pl.xlabel('time', fontsize=20)
    pl.title('Paths for initial conditions between 1 and 10.', fontsize=20)

    return path

path = plotPaths(T, Dt)

print 'Paths decay towards', path[np.size(path)-1]
print 'The stationary point is', 1.0
Paths decay towards 10.0004633499 The stationary point is 1.0
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
We notice that the asymptotic behavior of the CLE is the same as that of the RRE. The only notable difference is the initial random kicks in the paths, all because of the stochasticity.

Chemical Master Equation

Finally, we shall simulate the system exactly by using the Stochastic Simulation Algorithm (SSA).
def SSA(xinit, nsteps, a=10.0, mu=1.0):
    '''
    Using SSA to exactly simulate the death/birth process
    starting from xinit and for nsteps. a and mu are
    parameters of the propensities.

    Returns
    =======
    path : array-like, the path generated.
    tpath: stochastic time steps
    '''

    path = np.zeros(nsteps)
    tpath = np.zeros(nsteps)

    path[0] = xinit      # initial population
    u = rand(2, nsteps)  # pre-pick all the uniform variates we need

    for i in xrange(1, nsteps):
        # The propensities will be normalized
        tot_prop = path[i-1]*mu+a
        prob = path[i-1]*mu/tot_prop  # probability of death

        if(u[0, i] < prob):
            # Death
            path[i] = path[i-1]-1
        else:
            # Birth
            path[i] = path[i-1]+1

        # Time stayed at current state
        tpath[i] = -np.log(u[1, i])*1/tot_prop

    tpath = np.cumsum(tpath)

    return path, tpath
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
Now that we have the SSA set up, we can run multiple paths and compare the results to the previous cases.
# Since the paths below are not really related
# let's use a more interesting palette
# for the plot.
sns.set_palette('hls',1)

for _ in xrange(1):
    path, tpath = SSA(xinit=1,nsteps=100)

    # Since this is the path of a jump process
    # I'm switching from "plot" to "step"
    # to get the figure right. :)
    pl.step(tpath,path,linewidth=5,alpha=0.9)

pl.title('One path simulated with SSA, $a>\mu$. ', fontsize=20);
pl.xlabel('Time', fontsize=20)

# Since the paths below are not really related
# let's use a more interesting palette
# for the plot.
sns.set_palette('hls',3)

for _ in xrange(3):
    path, tpath = SSA(xinit=1,nsteps=100)

    # Since this is the path of a jump process
    # I'm switching from "plot" to "step"
    # to get the figure right. :)
    pl.step(tpath,path,linewidth=5,alpha=0.9)

pl.title('Three paths simulated with SSA, $a>\mu$. ', fontsize=20);
pl.xlabel('Time', fontsize=20)
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
We can see three chains above, all starting from $X_0=1$ and simulated with the SSA. As a final check, we estimate the long-run time average of a single long chain, which should approach the stationary mean.
npaths = 1
nsteps = 30000

path = np.zeros([npaths,nsteps])

for i in xrange(npaths):
    path[i,:], tpath = SSA(xinit=1,nsteps=nsteps)

# Estimate the stationary mean with a time-weighted average after a burn-in:
# each state path[j] is weighted by the time the chain actually spent in it.
skip = 20000
dt = np.diff(tpath)   # dt[j] is the holding time of state path[j]
sum(path[0,skip:nsteps-1]*dt[skip:nsteps-1])/sum(dt[skip:nsteps-1])
_____no_output_____
MIT
ipython_notebooks/reactions.ipynb
kgourgou/stochastic-simulations-class
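For this birth-death (immigration-death) process the stationary distribution is known to be Poisson with mean $a/\mu$, so both the long-run mean and the long-run variance should be close to 10. The sketch below is an added check along those lines; it assumes the `SSA` function and `numpy` from above are in scope.

```python
# Compare the time-averaged mean and variance of a long SSA path with the
# Poisson(a/mu) stationary law (mean = variance = 10). Assumes SSA() above.
import numpy as np

path, tpath = SSA(xinit=1, nsteps=30000, a=10.0, mu=1.0)

skip = 20000                 # burn-in
dt = np.diff(tpath)          # dt[j] is the time spent in state path[j]
w = dt[skip:]
s = path[skip:-1]

mean_est = np.sum(s * w) / np.sum(w)
var_est = np.sum((s - mean_est) ** 2 * w) / np.sum(w)

print('time-averaged mean     : %.3f  (Poisson mean a/mu = 10)' % mean_est)
print('time-averaged variance : %.3f  (Poisson variance a/mu = 10)' % var_est)
```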
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE.
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Overfit and underfit

Note: This document was translated by the TensorFlow community. Community translations are best-effort, so despite efforts to keep them accurate and up to date, they may not match the [official English documentation](https://www.tensorflow.org/?hl=en). If you would like to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To get involved in translating or reviewing documents, please email [[email protected]](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ko).

As always, the code in this example uses the `tf.keras` API; you can learn more about it in the TensorFlow [Keras guide](https://www.tensorflow.org/r1/guide/keras).

In both of the previous examples, classifying movie reviews and predicting housing prices, we saw that the performance of the model on the validation set peaks after training for a number of epochs and then starts to decrease. In other words, the model *overfits* the training set. Learning how to deal with overfitting is essential: although it is often possible to reach high performance on the *training set*, what we really want is a model that generalizes well to a *test set* (or to data it has never seen before).

The opposite of overfitting is *underfitting*. Underfitting occurs when there is still room for improvement on the test set. It can happen for several reasons: the model is too simple, it is over-regularized, or it simply has not been trained long enough. In short, the network has not learned the relevant patterns in the training set.

If you train the model for too long, it starts to overfit and learns patterns from the training set that do not generalize to the test set. We need to strike a balance between overfitting and underfitting, and to do that we will learn how to train the model for an appropriate number of epochs.

The best way to prevent overfitting is to use more training data; a model trained on more data naturally generalizes better. When more data is not available, the next best option is to use techniques such as regularization, which constrain the quantity and type of information the model can store. If a network can only memorize a small number of patterns, the optimization process will force it to focus on the most important patterns, which are more likely to generalize well.

In this notebook we will look at two widely used regularization techniques, weight regularization and dropout, and use them to improve our IMDB movie review classification model.
import tensorflow.compat.v1 as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print(tf.__version__)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Download the IMDB dataset

Rather than using an embedding as in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit the training set; it was chosen to demonstrate when overfitting happens and how to fight it.

Multi-hot encoding turns integer sequences into vectors of 0s and 1s. Concretely, this means that the sequence `[3, 5]` is converted into a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones.
NUM_WORDS = 10000 (train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS) def multi_hot_sequences(sequences, dimension): # 0으로 채워진 (len(sequences), dimension) 크기의 행렬을 만듭니다 results = np.zeros((len(sequences), dimension)) for i, word_indices in enumerate(sequences): results[i, word_indices] = 1.0 # results[i]의 특정 인덱스만 1로 설정합니다 return results train_data = multi_hot_sequences(train_data, dimension=NUM_WORDS) test_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so, as the plot shows, there are more 1-values near index 0:
plt.plot(train_data[0])
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Demonstrate overfitting

The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can learn a perfect dictionary-like mapping between training samples and their targets, a mapping with no generalization power, which is useless for making predictions on data it has never seen before.

Always keep this in mind: deep learning models tend to be good at fitting the training data, but the real challenge is generalization, not fitting.

On the other hand, if the network has limited memorization capacity, it will not be able to learn such a mapping easily. To minimize its loss it will have to learn compressed representations with more predictive power. At the same time, if the model is too small, it will have difficulty fitting the training data. There is a balance to strike between "too much capacity" and "not enough capacity".

Unfortunately, there is no magic formula to determine the right size or architecture of a model (the number of layers, or the number of units per layer). You will have to experiment with a range of different architectures.

To find an appropriate model size, it is best to start with relatively few layers and parameters, then increase the size of the layers or add new layers until you see diminishing returns on the validation loss. Let's try this with our movie review classification network.

We will build a simple baseline model using only ```Dense``` layers, then build smaller and bigger versions and compare them.

Create a baseline model
baseline_model = keras.Sequential([ # `.summary` 메서드 때문에 `input_shape`가 필요합니다 keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(16, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) baseline_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', 'binary_crossentropy']) baseline_model.summary() baseline_history = baseline_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Create a smaller model

Let's create a model with fewer hidden units to compare against the baseline model we just made:
smaller_model = keras.Sequential([ keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(4, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) smaller_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', 'binary_crossentropy']) smaller_model.summary()
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Train this model using the same data:
smaller_history = smaller_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Create a bigger model

By creating a very large model, we can see how quickly overfitting begins. Let's add a network with far more capacity than this problem requires and compare it:
bigger_model = keras.models.Sequential([ keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(512, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) bigger_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy','binary_crossentropy']) bigger_model.summary()
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Again, train the model using the same data:
bigger_history = bigger_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
Plot the training and validation loss

The solid lines show the training loss and the dashed lines show the validation loss (a lower validation loss means a better model). Here the smaller network starts overfitting later than the baseline model (after epoch 6 rather than epoch 4), and once overfitting starts its performance degrades much more slowly.
def plot_history(histories, key='binary_crossentropy'): plt.figure(figsize=(16,10)) for name, history in histories: val = plt.plot(history.epoch, history.history['val_'+key], '--', label=name.title()+' Val') plt.plot(history.epoch, history.history[key], color=val[0].get_color(), label=name.title()+' Train') plt.xlabel('Epochs') plt.ylabel(key.replace('_',' ').title()) plt.legend() plt.xlim([0,max(history.epoch)]) plot_history([('baseline', baseline_history), ('smaller', smaller_history), ('bigger', bigger_history)])
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
The bigger network starts overfitting almost immediately, after the very first epoch, and overfits much more severely. The more capacity a network has, the faster it can model the training set (leading to a low training loss), but the more easily it overfits (leading to a large gap between the training and validation loss).

Strategies

Add weight regularization

You may have heard of the principle of Occam's Razor: given two explanations for something, the most likely correct explanation is the "simplest" one, the one that makes the fewest assumptions. This also applies to models learned by neural networks: given some training data and a network architecture, there are many combinations of weights (that is, many possible models) that can explain the data, and simpler models are less likely to overfit than complex ones.

Here a "simple model" is one whose distribution of parameter values has low entropy (or, as we saw in the previous section, one with fewer parameters). A common way to mitigate overfitting is therefore to constrain the complexity of the network by forcing its weights to take only small values, which makes the distribution of weight values more uniform. This is called "weight regularization", and it is done by adding a cost for large weights to the network's loss function. The cost comes in two forms:

* [L1 regularization](https://developers.google.com/machine-learning/glossary/L1_regularization) adds a cost proportional to the absolute value of the weights (the "L1 norm" of the weights).

* [L2 regularization](https://developers.google.com/machine-learning/glossary/L2_regularization) adds a cost proportional to the square of the weights (the squared "L2 norm" of the weights). In neural networks, L2 regularization is also called weight decay. Don't let the different name confuse you: weight decay is mathematically identical to L2 regularization.

In `tf.keras`, you add weight regularization by passing a weight regularizer instance to a layer as a keyword argument. Let's add L2 weight regularization.
l2_model = keras.models.Sequential([ keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001), activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001), activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) l2_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', 'binary_crossentropy']) l2_model_history = l2_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2)
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
```l2(0.001)``` means that every value in the layer's weight matrix adds ```0.001 * weight_coefficient_value**2``` to the total loss of the network. This penalty is only added during training, so the network's loss will be much higher during training than during testing. Let's check the effect of L2 regularization:
plot_history([('baseline', baseline_history), ('l2', l2_model_history)])
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
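The discussion above introduced both the L1 and the L2 penalty, but the example only used L2. As a hedged aside (not part of the original tutorial), the sketch below shows how the same network could be trained with an L1 penalty instead, using `keras.regularizers.l1`; `keras.regularizers.l1_l2` would combine both penalties.

```python
# Hedged sketch: the same architecture with an L1 penalty (keras.regularizers.l1).
l1_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l1(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

l1_model.compile(optimizer='adam',
                 loss='binary_crossentropy',
                 metrics=['accuracy', 'binary_crossentropy'])

l1_model_history = l1_model.fit(train_data, train_labels,
                                epochs=20,
                                batch_size=512,
                                validation_data=(test_data, test_labels),
                                verbose=2)

plot_history([('baseline', baseline_history),
              ('l1', l1_model_history)])
```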
As the results show, the L2-regularized model withstands overfitting much better than the baseline model, even though both have the same number of parameters.

Add dropout

Dropout is one of the most effective and widely used regularization techniques for neural networks, developed by Hinton and his students at the University of Toronto. When dropout is applied to a layer, it randomly turns off (sets to zero) some of the layer's output features during training. Suppose a layer outputs the vector [0.2, 0.5, 1.3, 0.8, 1.1] for some input sample during training; with dropout applied, a few randomly chosen entries of this vector become zero, for example [0, 0.5, 1.3, 0, 1.1]. The "dropout rate" is the fraction of features that are zeroed out, usually between 0.2 and 0.5. At test time no units are dropped out; instead, the layer's output values are scaled down by the dropout rate to compensate for the fact that more units are active than during training.

In `tf.keras` you can add dropout to a network with the `Dropout` layer, which applies dropout to the output of the layer right before it.

Let's add two `Dropout` layers to the IMDB network and see how much they reduce overfitting:
dpt_model = keras.models.Sequential([ keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dropout(0.5), keras.layers.Dense(16, activation=tf.nn.relu), keras.layers.Dropout(0.5), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) dpt_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy','binary_crossentropy']) dpt_model_history = dpt_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2) plot_history([('baseline', baseline_history), ('dropout', dpt_model_history)])
_____no_output_____
Apache-2.0
site/ko/r1/tutorials/keras/overfit_and_underfit.ipynb
justaverygoodboy/docs-l10n
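As a closing sketch (an addition that only reuses objects already defined in this notebook), the two techniques can also be combined: an L2 weight penalty on the `Dense` layers together with `Dropout` between them.

```python
# Hedged sketch: L2 weight regularization and dropout combined in one model.
combined_model = keras.models.Sequential([
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu, input_shape=(NUM_WORDS,)),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),
                       activation=tf.nn.relu),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)
])

combined_model.compile(optimizer='adam',
                       loss='binary_crossentropy',
                       metrics=['accuracy', 'binary_crossentropy'])

combined_history = combined_model.fit(train_data, train_labels,
                                      epochs=20,
                                      batch_size=512,
                                      validation_data=(test_data, test_labels),
                                      verbose=2)

plot_history([('baseline', baseline_history),
              ('l2 + dropout', combined_history)])
```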
Gambling 101

You are participating in a lottery game. A deck of cards numbered from 1-50 is shuffled and 5 cards are drawn out and laid out. You are given a coin. For each card, you toss the coin and pick the card up if it shows heads, otherwise you don't pick it up. The sum of the cards you pick up is what you win.

The lottery ticket costs c rupees. If the expected value of the sum of cards you pick up is less than the ticket price, then you don't buy another ticket; otherwise, you do.

**Input Format:**
The first 5 lines of the input will contain 5 numbers between 1 and 50. The next line will contain c, the cost of the lottery ticket.

**Output Format:**
Print "Don't buy another" if the expected value is less than the ticket price and print "Buy another one" if the expected value is more than the ticket price.

**Sample Input:**
1
4
6
17
3
23

**Sample Output:**
Don't buy another

**Note:** You have to take input using the input() function. For your practice with taking inputs, the stub will be empty.
# Read the five card values and the ticket cost using input()
card1 = int(input("Enter the 1st card here: "))
card2 = int(input("Enter the 2nd card here: "))
card3 = int(input("Enter the 3rd card here: "))
card4 = int(input("Enter the 4th card here: "))
card5 = int(input("Enter the 5th card here: "))
c = int(input("cost of lottery ticket here: "))

# Each card is picked up with probability 1/2 (a fair coin toss), so by
# linearity of expectation the expected winnings are half the sum of the cards.
expected_value = 0.5 * (card1 + card2 + card3 + card4 + card5)

if expected_value <= c:
    print("Don't buy another")
else:
    print("Buy another one")
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
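By linearity of expectation, each card contributes half of its value to the expected winnings, so the expected sum is simply half the total. The sketch below is an added Monte Carlo sanity check of that fact on the sample input (it assumes `numpy` is available).

```python
# Monte Carlo check: expected winnings equal half the sum of the cards.
import numpy as np

cards = np.array([1, 4, 6, 17, 3])                 # the sample input above
keep = np.random.rand(100000, cards.size) < 0.5    # one coin toss per card
simulated = (keep * cards).sum(axis=1).mean()

print(simulated)           # close to 15.5
print(0.5 * cards.sum())   # exactly 15.5
```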
Generating normal distribution

Generate an array of real numbers representing a normal distribution. You will be given the mean and standard deviation as input. You have to generate 10 such numbers.

**Hint:** You can use numpy's np.random module here. To keep the output consistent, you have to set the seed to a specific number, which will be given to you as input. Setting a seed means that every time you generate random numbers, they will be the same for the same seed. You can read more about it here: https://pynative.com/python-random-seed/

**Input Format:**
The input will contain 3 lines which have the seed, mean and standard deviation of the distribution, in that order. The output will be a numpy array of the generated normal distribution.

**Sample Input:**
1
0
0.1
import numpy as np seed=int(input()) mean=float(input()) std_dev=float(input()) np.random.seed(seed) s = np.random.normal(mean, std_dev, 10) print(s)
1 0 0.1 [ 0.16243454 -0.06117564 -0.05281718 -0.10729686 0.08654076 -0.23015387 0.17448118 -0.07612069 0.03190391 -0.02493704]
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Confidence IntervalsFor a given column in a dataframe, you have to calculate the 90 percent confidence interval for its mean value. (You can find Z* value for 90 percent confidence from previous segments)The input will have the column name. The output should have the confidence interval printed as a tuple.**Note:** Do not use the inbuilt function via statmodels.api or any other libraries. You should write the code on your own to get accurate answers.The confidence interval values have to be approximated up to two decimal places.**Sample Input:**GRE Score
import pandas as pd import numpy as np df=pd.read_csv("https://media-doselect.s3.amazonaws.com/generic/N9LKLvBAx1y14PLoBdL0yRn3/Admission_Predict.csv") col=input() mean = df[col].mean() sd = df[col].std() n = len(df) Zstar=1.65 se = sd/np.sqrt(n) lcb = mean - Zstar * se ucb = mean + Zstar * se print((round(lcb,2),round(ucb,2))) #via statmodels.api you can do this as follows: #import statsmodels.api as sm #sm.stats.DescrStatsW(df[col]).zconfint_mean()
GRE Score (315.86, 317.75)
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
College admissions

The probability that a college will accept a student's application is x. Consider that m students have applied to the college. You have to find the probability that at most n students are accepted by the college.

The input will contain three lines with x, m and n respectively. The output should be rounded off to four decimal places.

**Sample Input:**
0.3
5
2

**Sample Output:**
0.8369
#probability of accepting an application x=float(input()) #number of applicants m=int(input()) #find the probability that at most n applications are accepted n=int(input()) #write your code here import scipy.stats as ss dist=ss.binom(m,x) sum=0.0 for i in range(0,n+1): sum=sum+dist.pmf(i) print(round(sum,4))
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Tossing a coin

Given that you are tossing a coin n times, you have to find the probability of getting heads at most m times.

The input will have two lines containing n and m respectively.

**Sample Input:**
10
2

**Sample Output:**
0.0547
import scipy.stats as ss #number of trials n=int(input()) # find the probability of getting at most m heads m=int(input()) dist=ss.binom(n,0.5) sum=0.0 for i in range(0,m+1): sum=sum+dist.pmf(i) print(round(sum,4)) #you can also use the following #round(dist.cdf(m),2)
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Combination Theory

You are given a list of n natural numbers. You select m numbers from the list at random. Find the probability that at least one of the selected numbers is "x", where x is a number given to you as input.

The first line of input will contain a list of numbers. The second line will contain m and the third line will contain x. The output should be printed out as a float.

**Sample Input:**
[1,2,3,4,5,6,6,6,6,7,7,7]
3
6

**Sample Output:**
0.7454545454545455
import ast, sys
input_str = sys.stdin.read()
input_list = ast.literal_eval(input_str)
nums = input_list[0]
# m numbers are chosen
m = int(input_list[1])
# find probability of getting at least one x
x = int(input_list[2])

from itertools import combinations

num = 0
den = 0
for c in combinations(nums, m):
    den = den + 1
    if x in c:
        num = num + 1

# probability = favourable selections / total selections
print(num / den)
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
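The enumeration above works, but the same probability also follows from the complement rule: $P(\text{at least one } x) = 1 - \binom{n-k}{m}/\binom{n}{m}$, where $k$ is the number of times $x$ appears in the list. The sketch below is an added cross-check using `scipy.special.comb`.

```python
# Cross-check via the complement rule with binomial coefficients.
from scipy.special import comb

nums = [1, 2, 3, 4, 5, 6, 6, 6, 6, 7, 7, 7]
m, x = 3, 6
k = nums.count(x)   # how many times x appears

p = 1 - comb(len(nums) - k, m) / comb(len(nums), m)
print(p)   # about 0.745454..., matching the enumeration above
```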
Rolling the dice

A die is rolled n times. You have to find the probability that a number i is rolled at least j times (up to four decimal places).

The input will contain the integers n, i and j in three lines respectively. You can assume that j<n and 0<i<7. The output should be rounded off to four decimal places.

**Sample Input:**
4
1
2

**Sample Output:**
0.1319
import scipy.stats as ss n=int(input()) i=int(input()) j=int(input()) dist=ss.binom(n,1/6) print(round(1-dist.cdf(j-1),4))
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
Lego Stack

You are given a row of Lego blocks consisting of n blocks. All the blocks have a square base whose side length is known. You need to stack the blocks over each other and create a vertical tower. Block-1 can go over Block-2 only if sideLength(Block-2)>sideLength(Block-1). From the row of Lego blocks, you can only pick up either the leftmost or the rightmost block. Print "Possible" if it is possible to stack all n blocks this way, or else print "Impossible".

**Input Format:**
The input will contain a list of n integers representing the side length of each block's base in the row, starting from the leftmost.

**Sample Input:**
[5 ,4, 2, 1, 4 ,5]

**Sample Output:**
Possible
import ast,sys input_str = sys.stdin.read() sides = ast.literal_eval(input_str)#list of side lengths l=len(sides) diff = [(sides[i]-sides[i+1]) for i in range(l-1)] i = 0 while (i<l-1 and diff[i]>=0) : i += 1 while (i<l-1 and diff[i]<=0) : i += 1 if (i==l-1) : print("Possible") else : print("Impossible") #to understand the code, try printing out all intermediate variables.
_____no_output_____
MIT
Inferential Coding Practice.ipynb
anushka-DS/Inferential-Statistics
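As an added illustration, the same check can be wrapped in a small helper (the function name `is_stackable` is hypothetical, not part of the original solution) and run on the sample input plus a row that cannot be stacked.

```python
# The helper name is_stackable is hypothetical; the logic mirrors the cell above.
def is_stackable(sides):
    l = len(sides)
    diff = [sides[i] - sides[i + 1] for i in range(l - 1)]
    i = 0
    while i < l - 1 and diff[i] >= 0:   # walk down the non-increasing prefix
        i += 1
    while i < l - 1 and diff[i] <= 0:   # then up the non-decreasing suffix
        i += 1
    return "Possible" if i == l - 1 else "Impossible"

print(is_stackable([5, 4, 2, 1, 4, 5]))   # Possible  (the sample input)
print(is_stackable([1, 5, 2]))            # Impossible (the 5 gets trapped)
```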
Ensemble Learning Initial Imports
import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from pathlib import Path from collections import Counter from sklearn.metrics import balanced_accuracy_score from sklearn.metrics import confusion_matrix from imblearn.metrics import classification_report_imbalanced
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Read the CSV and Perform Basic Data Cleaning
# Load the data file_path = Path('Resources/LoanStats_2019Q1.csv') df = pd.read_csv(file_path) # Preview the data df.head() df.shape df.info() pd.set_option('display.max_rows', None) # or 1000 df.nunique(axis=0) df['recoveries'].value_counts() df['pymnt_plan'].value_counts() # Drop all unnecessary columns with only a single value. pd.set_option('display.max_columns', None) # or 1000 df.drop(columns=['pymnt_plan','recoveries','collection_recovery_fee','policy_code','acc_now_delinq','num_tl_120dpd_2m','num_tl_30dpd','tax_liens','hardship_flag','debt_settlement_flag'], inplace=True) df.head() # Update the DataFrame to numerical values: df_encoded = pd.get_dummies(df, columns=['home_ownership','verification_status','issue_d','initial_list_status','next_pymnt_d','application_type',], drop_first=True) df_encoded.head()
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Split the Data into Training and Testing
# Create our features X = df_encoded.drop(columns=['loan_status']) # Create our target y = df_encoded['loan_status'].to_frame('loan_status') y.head() X.describe() # Check the balance of our target values y['loan_status'].value_counts() # Split the X and y into X_train, X_test, y_train, y_test from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Data Pre-Processing

Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
X_train.columns X_train.head() X_train.shape
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
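As described above, the straightforward approach is to fit a `StandardScaler` on the training features only and then apply the fitted scaler to both sets. The sketch below is an added illustration of that pattern, shown before the `ColumnTransformer` variant used in the next cell.

```python
# Fit the scaler on the training features only, then reuse it for the test set.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)   # learn means/stds from X_train
X_test_std = scaler.transform(X_test)         # apply the same statistics to X_test
```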
NEW SOLUTION: USE SCIKIT-LEARN'S ColumnTransformer():

OneHotEncoder versus get_dummies: https://www.quora.com/When-would-you-choose-to-use-pandas-get_dummies-vs-sklearn-OneHotEncoder

Both options are equally handy, but the major difference is that OneHotEncoder is a transformer class, so it can be fitted to data. Once fitted, it is able to transform validation data based on the categories it learned. That is, if previously unseen data contains new categories and is being transformed, the encoder will ignore them or raise an error (depending on the handle_unknown parameter). Also, OneHotEncoder matches scikit-learn's transformer API, so one can use it in pipelines for convenience. Basically, get_dummies is used in exploratory analysis, whereas OneHotEncoder is used in computation and estimation. See the documentation for more details.
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer

ohe = OneHotEncoder()
sc = StandardScaler()

ct = make_column_transformer(
    (sc, ['loan_amnt', 'int_rate', 'installment', 'annual_inc', 'dti', 'delinq_2yrs',
          'inq_last_6mths', 'open_acc', 'pub_rec', 'revol_bal', 'total_acc', 'out_prncp',
          'out_prncp_inv', 'total_pymnt', 'total_pymnt_inv', 'total_rec_prncp',
          'total_rec_int', 'total_rec_late_fee', 'last_pymnt_amnt',
          'collections_12_mths_ex_med', 'tot_coll_amt', 'tot_cur_bal', 'open_acc_6m',
          'open_act_il', 'open_il_12m', 'open_il_24m', 'mths_since_rcnt_il',
          'total_bal_il', 'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc',
          'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl', 'inq_last_12m',
          'acc_open_past_24mths', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util',
          'chargeoff_within_12_mths', 'delinq_amnt', 'mo_sin_old_il_acct',
          'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op', 'mo_sin_rcnt_tl', 'mort_acc',
          'mths_since_recent_bc', 'mths_since_recent_inq', 'num_accts_ever_120_pd',
          'num_actv_bc_tl', 'num_actv_rev_tl', 'num_bc_sats', 'num_bc_tl', 'num_il_tl',
          'num_op_rev_tl', 'num_rev_accts', 'num_rev_tl_bal_gt_0', 'num_sats',
          'num_tl_90g_dpd_24m', 'num_tl_op_past_12m', 'pct_tl_nvr_dlq',
          'percent_bc_gt_75', 'pub_rec_bankruptcies', 'tot_hi_cred_lim',
          'total_bal_ex_mort', 'total_bc_limit', 'total_il_high_credit_limit']),
    (ohe, ['home_ownership_MORTGAGE', 'home_ownership_OWN', 'home_ownership_RENT',
           'verification_status_Source Verified', 'verification_status_Verified',
           'issue_d_Jan-2019', 'issue_d_Mar-2019', 'initial_list_status_w',
           'next_pymnt_d_May-2019', 'application_type_Joint App'])
)

X_train_scaled = ct.fit_transform(X_train)
print(type(X_train_scaled))

pd.DataFrame(X_train_scaled).head()

print(type(X_train_scaled))

# Transform X_test with the transformer already fitted on the training data
# (no refitting, so the test set reuses the training statistics and categories).
X_test_scaled = ct.transform(X_test)
print(type(X_test_scaled))
<class 'numpy.ndarray'>
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Ensemble LearnersIn this section, you will compare two ensemble algorithms to determine which algorithm results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble classifier . For each algorithm, be sure to complete the folliowing steps:1. Train the model using the training data. 2. Calculate the balanced accuracy score from sklearn.metrics.3. Display the confusion matrix from sklearn.metrics.4. Generate a classication report using the `imbalanced_classification_report` from imbalanced-learn.5. For the Balanced Random Forest Classifier only, print the feature importance sorted in descending order (most important feature to least important) along with the feature scoreNote: Use a random state of 1 for each algorithm to ensure consistency between tests Balanced Random Forest Classifier
# Resample the training data with the BalancedRandomForestClassifier from imblearn.ensemble import BalancedRandomForestClassifier brf = BalancedRandomForestClassifier(n_estimators=1000, random_state=1) brf.fit(X_train_scaled, y_train) # Predict y_pred_rf = brf.predict(X_test_scaled) # Calculated the balanced accuracy score from sklearn.metrics import balanced_accuracy_score balanced_accuracy_score(y_test, y_pred_rf) # Display the confusion matrix confusion_matrix(y_test, y_pred_rf) # Print the imbalanced classification report print(classification_report_imbalanced(y_test, y_pred_rf)) # Calculate the feature importance importance = brf.feature_importances_ # List the features sorted in descending order by feature importance sorted(zip(brf.feature_importances_, X.columns), reverse=True)
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Visualizing the Features by Importance to the model: (Top 20)
importance_df = pd.DataFrame(sorted(zip(brf.feature_importances_, X.columns), reverse=True))
importance_df.set_index(importance_df[1], inplace=True)
importance_df.drop(columns=1, inplace=True)
importance_df.rename(columns={0:'Feature Importances'}, inplace=True)

# importance_df is already sorted from most to least important, so take the top 20
# and then sort ascending so the most important feature ends up at the top of the barh plot.
importance_sorted = importance_df.head(20).sort_values(by='Feature Importances')
importance_sorted.plot(kind='barh', color='blue', title='Top 20 Features Importances', legend=False)
_____no_output_____
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Easy Ensemble Classifierhttps://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.EasyEnsembleClassifier.html
# Create an instance of an Easy Ensemble Classifier: from imblearn.ensemble import EasyEnsembleClassifier eec = EasyEnsembleClassifier(n_estimators=1000, random_state=1) # Train the Classifier eec.fit(X_train_scaled, y_train) # Predict y_pred_eec = eec.predict(X_test_scaled) # Calculated the balanced accuracy score balanced_accuracy_score(y_test, y_pred_eec) # Display the confusion matrix cm_eec = confusion_matrix(y_test, y_pred_eec) cm_eec_df = pd.DataFrame( cm_eec, index=['Actual=No: 0', 'Actual=Yes: 1'], columns=['Predicted=No: 0', 'Predicted=Yes: 1'] ) print('Confusion Matrix:') display(cm_eec_df) # Print the imbalanced classification report print(classification_report_imbalanced(y_test, y_pred_eec))
pre rec spe f1 geo iba sup high_risk 0.09 0.88 0.95 0.17 0.91 0.82 104 low_risk 1.00 0.95 0.88 0.97 0.91 0.83 17101 avg / total 0.99 0.95 0.88 0.97 0.91 0.83 17205
ADSL
Starter_Code/credit_risk_ensemble.ipynb
eriklarson33/HW_11-Risky_Business
Codenation - Data Science

Author: Leonardo Simões

Challenge 7 - Find the best mathematics scores of ENEM 2016

You must create a model to predict the mathematics exam score of the participants of ENEM 2016. For this, you will use Python, Pandas, Sklearn and regression.

Details

The context of the challenge revolves around the results of ENEM 2016 (available in the train.csv file). This file, and only this file, should be used for all the challenges. For any question about the columns, consult the [ENEM 2016 Microdata Dictionary](https://s3-us-west-1.amazonaws.com/acceleration-assets-highway/data-science/dicionario-de-dados.zip).

Using the test.csv file, create a model to predict the mathematics exam score (column `NU_NOTA_MT`) of the ENEM 2016 participants. Save your answer in a file called answer.csv with two columns: `NU_INSCRICAO` and `NU_NOTA_MT`.

Topics

In this challenge you will learn:
- Python
- Pandas
- Sklearn
- Regression

General setup
import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_squared_error, mean_absolute_error
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Data analysis. Reading the training (train) and test (test) files.
df_train = pd.read_csv('train.csv') df_train.head() df_test = pd.read_csv('test.csv') df_test.head()
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Before manipulating the dataframes, we should set aside the mathematics score column from the training set and the registration number column from the test set.
train_y = df_train['NU_NOTA_MT'].fillna(0) n_insc = df_test['NU_INSCRICAO'].values
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Ideally, the test and training files would have the same columns, except for the one to be predicted. So we first check the number of columns in each, and then drop the columns that do not belong to both.
len(df_test.columns) len(df_train.columns) colunas_intersecao = np.intersect1d(df_train.columns.values, df_test.columns.values) colunas_intersecao df_train = df_train[colunas_intersecao] df_train.head() df_test = df_test[colunas_intersecao] df_test.head() df_train.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 13730 entries, 0 to 13729 Data columns (total 47 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 CO_PROVA_CH 13730 non-null object 1 CO_PROVA_CN 13730 non-null object 2 CO_PROVA_LC 13730 non-null object 3 CO_PROVA_MT 13730 non-null object 4 CO_UF_RESIDENCIA 13730 non-null int64 5 IN_BAIXA_VISAO 13730 non-null int64 6 IN_CEGUEIRA 13730 non-null int64 7 IN_DISCALCULIA 13730 non-null int64 8 IN_DISLEXIA 13730 non-null int64 9 IN_GESTANTE 13730 non-null int64 10 IN_IDOSO 13730 non-null int64 11 IN_SABATISTA 13730 non-null int64 12 IN_SURDEZ 13730 non-null int64 13 IN_TREINEIRO 13730 non-null int64 14 NU_IDADE 13730 non-null int64 15 NU_INSCRICAO 13730 non-null object 16 NU_NOTA_CH 10341 non-null float64 17 NU_NOTA_CN 10341 non-null float64 18 NU_NOTA_COMP1 10133 non-null float64 19 NU_NOTA_COMP2 10133 non-null float64 20 NU_NOTA_COMP3 10133 non-null float64 21 NU_NOTA_COMP4 10133 non-null float64 22 NU_NOTA_COMP5 10133 non-null float64 23 NU_NOTA_LC 10133 non-null float64 24 NU_NOTA_REDACAO 10133 non-null float64 25 Q001 13730 non-null object 26 Q002 13730 non-null object 27 Q006 13730 non-null object 28 Q024 13730 non-null object 29 Q025 13730 non-null object 30 Q026 13730 non-null object 31 Q027 6357 non-null object 32 Q047 13730 non-null object 33 SG_UF_RESIDENCIA 13730 non-null object 34 TP_ANO_CONCLUIU 13730 non-null int64 35 TP_COR_RACA 13730 non-null int64 36 TP_DEPENDENCIA_ADM_ESC 4282 non-null float64 37 TP_ENSINO 4282 non-null float64 38 TP_ESCOLA 13730 non-null int64 39 TP_LINGUA 13730 non-null int64 40 TP_NACIONALIDADE 13730 non-null int64 41 TP_PRESENCA_CH 13730 non-null int64 42 TP_PRESENCA_CN 13730 non-null int64 43 TP_PRESENCA_LC 13730 non-null int64 44 TP_SEXO 13730 non-null object 45 TP_STATUS_REDACAO 10133 non-null float64 46 TP_ST_CONCLUSAO 13730 non-null int64 dtypes: float64(12), int64(20), object(15) memory usage: 4.9+ MB
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
At an earlier point I used all the numeric columns for the prediction, but that turned out to be less effective than using only the score columns.
colunas_numericas = df_train.select_dtypes(include=['float64', 'int64']).columns colunas_numericas colunas_notas = ['NU_NOTA_CH', 'NU_NOTA_CN', 'NU_NOTA_COMP1', 'NU_NOTA_COMP2','NU_NOTA_COMP3', 'NU_NOTA_COMP4','NU_NOTA_COMP5','NU_NOTA_LC', 'NU_NOTA_REDACAO'] df_train = df_train[colunas_notas].fillna(0) df_test = df_test[colunas_notas].fillna(0) sc = StandardScaler() x_train = sc.fit_transform(df_train) x_test = sc.transform(df_test) lm = LinearRegression() lm.fit(x_train, train_y) y_teste = lm.predict(x_test) y_teste = [round(y,1) for y in y_teste]
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
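Before writing out the predictions, it can be useful to estimate how well the linear model does on data it has not seen. The sketch below is an added illustration that only reuses objects and imports already defined in this notebook: it holds out 20% of the training rows and reports MAE and RMSE on them.

```python
# Hold out 20% of the training rows to estimate the model's error (uses the imports above).
x_tr, x_val, y_tr, y_val = train_test_split(x_train, train_y, test_size=0.2, random_state=42)

lm_val = LinearRegression()
lm_val.fit(x_tr, y_tr)
y_val_pred = lm_val.predict(x_val)

print('MAE :', mean_absolute_error(y_val, y_val_pred))
print('RMSE:', np.sqrt(mean_squared_error(y_val, y_val_pred)))
```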
After predicting the scores for the test file, they were saved to a CSV file.
answer = pd.DataFrame() answer['NU_INSCRICAO'] = n_insc answer['NU_NOTA_MT'] = y_teste answer.head() answer.to_csv('answer.csv', index=False)
_____no_output_____
MIT
Desafio 7 (enem-2)/main.ipynb
leosimoes/Codenation-AceleraDev-DataScience
Introduction

If you've had any experience with the python scientific stack, you've probably come into contact with, or at least heard of, the [pandas][1] data analysis library. Before the introduction of pandas, if you were to ask anyone what language to learn as a budding data scientist, most would've likely said the [R statistical programming language][2]. With its [data frame][3] data structure, it was the obvious winner when it came to filtering, slicing, aggregating, or analyzing your data. However, with the introduction of pandas to python's growing set of data analysis libraries, the gap between the two languages has effectively closed, and as a result, pandas has become a vital tool for data scientists using python.

While we won't be covering the pandas library itself, since that's a topic fit for a course of its own, in this lesson we will be discussing the simple interface pandas provides for interacting with the matplotlib library. In addition, we'll also take a look at the recent changes the matplotlib team has made to make it possible for the two libraries to work together more harmoniously.

That said, let's get set up and see what pandas has to offer.

[1]: http://pandas.pydata.org/
[2]: https://www.r-project.org/
[3]: https://cran.r-project.org/doc/manuals/r-release/R-intro.html

Data-frames
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.display import set_matplotlib_formats set_matplotlib_formats('retina')
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
What is pandas?Pandas is a library created by [Wes McKinney][1] that provides several data structures that make working with data fast, efficient, and easy. Chief among them is the `DataFrame`, which takes on R's `data.frame` data type, and in many scenarios, bests it. It also provides a simple wrapper around the `pyplot` interface, allowing you to plot the data in your `DataFrame` objects without any context switching in many cases. But, enough talk, let's see it in action.[1]: https://twitter.com/wesmckinn Import the LibraryThe following bit of code imports the pandas library using the widely accepted `pd` naming convention. You'll likely see pandas imported like this just about everywhere it's used, and it is recommended that you always use the same naming convention in your code as well.
import pandas as pd
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Load in Some DataIn the next cell, we'll use the `read_csv` function to load in the [Census Income][1] dataset from the [UCI Machine Learning Repository][2]. Incidentally, this is the exact same dataset that we used in our Exploratory Data Analysis (EDA) example in chapter 2, so we'll get to see some examples of how we could perform some of the same steps using the plotting commands on our `DataFrame` object. [1]: http://archive.ics.uci.edu/ml/datasets/Adult[2]: http://archive.ics.uci.edu/ml/index.html
import pandas as pd # Download and read in the data from the UCI Machine Learning Repository df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', header=None, names=('age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'sex', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'target'))
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Plotting With pandasJust like we did in our EDA example from chapter 2, we can once again create a simple histogram from our data. This time though, notice that we simply call the `hist` command on the column that contains the education level to plot our data.
df.education_num.hist(bins=16);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
And, remember, pandas isn't doing anything magical here, it's just providing a very simple wrapper around the `pyplot` module. At the end of the day, the code above is simply calling the `pyplot.hist` function to create the histogram. So, we can interact with the plot that it produces the same way we would any other plot. As an example, let's create our histogram again, but this time let's get rid of that empty bar to the left by setting the plot's x-axis limits using the `pyplot.xlim` function.
df.education_num.hist(bins=16) # Remove the empty bar from the histogram that's below the # education_num's minimum value. plt.xlim(df.education_num.min(), df.education_num.max());
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Well, that looks better, but we're still stuck with many of the same problems that we had in the original EDA lesson. You'll notice that most of the x-ticks don't actually line up with their bars, and there's a good reason for that. Remember, in that lesson, we discussed how a histogram was meant to be used with continuous data, and in our case we're dealing with discrete values. So, a bar chart is actually what we want to use here.Luckily, pandas makes the task of creating the bar chart even easier. In our EDA lesson, we had to do the frequency count ourselves, and take care of lining the x-axis labels up properly, and several other small issues. With pandas, it's just a single line of code. First, we call the `value_counts` function on the `education` column to get a set of frequency counts, ordered largest to smallest, for each education level. Then, we call the `plot` function on the `Series` object returned from `value_counts`, and pass in the type of plot with the `kind` parameter, and while we're at it, we'll set our width to 1, like we did in the chapter 2 example, to make it look more histogram-ish.
df.education.value_counts().plot(kind='bar', width=1);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Now, rather than passing in the plot type with the `kind` parameter, we could've also just called the `bar` function from the `plot` object, like we do in the next cell.
df.education.value_counts().plot.bar(width=1);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Ok, so that's a pretty good introduction to the simple interface that pandas provides to the matplotlib library, but it doesn't stop there. Pandas also provides a handful of more complex plotting functions in the `pandas.tools.plotting` module. So, let's import another dataset and take a look at an example of what's available. In the cell below, we pull in the Iris dataset that we used in our scatterplot matrix example from chapter 3. Incidentally, if you don't want to mess with network connections, or if you happen to be in a situation where network access just isn't an option, I've copied the data file to the local data folder. The file can be found at `./data/iris_data.csv`
df = pd.read_csv('https://raw.githubusercontent.com/pydata/pandas/master/pandas/tests/data/iris.csv')
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
We'll need a color map, essentially just a dictionary mapping each species to a unique color, so we'll put one together in the next cell. Fortunately, pandas makes it easy to get the species names by simply calling the `unique` function on the `Name` column.
names = df.Name.unique() colors = ['red', 'green', 'blue'] cmap = dict(zip(names, colors))
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Now, before we take a look at one of the functions from the `plotting` module, let's quickly take a look at one of the [changes that was made to matplotlib in version 1.5][1] to accommodate labeled data, like a pandas `DataFrame` for example. The code in the next cell, creates a scatter plot using the `pyplot.scatter` function, like we've done in the past, but notice how we specify the columns that contain our `x` and `y` values. In our example below, we are simply passing in the names of the columns alongside the `DataFrame` object itself. Now, it's arguable just how much more readable this light layer of abstraction is over just passing in the data directly, but it's nice to have the option, nonetheless.[1]: http://matplotlib.org/users/whats_new.htmlworking-with-labeled-data-like-pandas-dataframes
plt.scatter(x='PetalLength', y='PetalWidth', data=df, c=df.Name.apply(lambda name: cmap[name]));
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
Now, we're ready to look at one of the functions that pandas provides us, and for comparison's sake, let's revisit our old friend, the scatterplot matrix. In the next cell, we'll import the `scatter_matrix` function from the `pandas.tools.plotting` module and run it on the Iris dataset.
from pandas.tools.plotting import scatter_matrix scatter_matrix(df, figsize=(10,8), c=df.Name.apply(lambda name: cmap[name]), s=40);
_____no_output_____
MIT
08 - The matplotlib Ecosystem/0802 - Pandas.ipynb
croach/mastering-matplotlib
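As a further illustration of what that module offers, the sketch below draws a parallel coordinates plot of the same Iris data. This is an addition to the lesson; note that in newer pandas versions these helpers live in `pandas.plotting` rather than `pandas.tools.plotting`.

```python
# Another helper from the same module: one line per Iris sample across the four axes.
from pandas.tools.plotting import parallel_coordinates

plt.figure(figsize=(10, 6))
parallel_coordinates(df, 'Name', color=colors);
```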
Interpreting Tree Models You'll need to install the `treeinterpreter` library.
# !pip install treeinterpreter import sklearn import tensorflow as tf import numpy as np import pandas as pd from sklearn.datasets import fetch_california_housing from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor, export_graphviz from sklearn.ensemble import RandomForestRegressor from treeinterpreter import treeinterpreter as ti from IPython.display import Image print('The scikit-learn version is {}.'.format(sklearn.__version__))
The scikit-learn version is 1.0.1.
Apache-2.0
03-tabular/treeinterpreters.ipynb
munnm/XAI-for-practitioners
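To give a feel for how the pieces imported above fit together, here is a hedged sketch (not the chapter's own example): it fits a small random forest on the California housing data and uses `treeinterpreter`'s `ti.predict` to decompose a single prediction into a bias term plus per-feature contributions, so that prediction = bias + sum(contributions).

```python
# Hedged sketch of the treeinterpreter workflow: prediction = bias + sum(contributions).
housing = fetch_california_housing()
X_tr, X_te, y_tr, y_te = train_test_split(housing.data, housing.target,
                                          test_size=0.2, random_state=42)

rf = RandomForestRegressor(n_estimators=50, random_state=42)
rf.fit(X_tr, y_tr)

# Decompose the prediction for a single test instance
prediction, bias, contributions = ti.predict(rf, X_te[:1])

print('prediction :', np.ravel(prediction)[0])
print('bias       :', np.ravel(bias)[0])
for name, contrib in zip(housing.feature_names, np.ravel(contributions[0])):
    print('%-12s %+.4f' % (name, contrib))
```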