path | concatenated_notebook
---|---|
.ipynb_checkpoints/mobile_activity-checkpoint.ipynb | ###Markdown
Import Stuff
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn import svm
from time import time
###Output
_____no_output_____
###Markdown
Convert Train and Test to PD DATAFRAME
###Code
df_train = pd.read_csv('train.csv')
df_test = pd.read_csv('test.csv')
print(df_train.shape)
print(df_test.shape)
x_train=df_train.drop('activity',axis=1)
y_train=df_train['activity']
###Output
(3609, 563)
(1541, 562)
###Markdown
Split train test data
###Code
x_train,x_test, y_train, y_test=train_test_split(x_train,y_train,test_size=.2)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
###Output
(2887, 562)
(2887,)
(722, 562)
(722,)
###Markdown
Train
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
clf=svm.SVC(C=100,kernel='rbf')
st=time()
clf.fit(x_train,y_train)
print("time: ",end="" )
print(str(time()-st)+" sec")
pred=clf.predict(x_test)
print("score: "+str(accuracy_score(y_test, pred)))
clf=MLPClassifier()
st=time()
clf.fit(x_train,y_train)
print("time: ",end="" )
print(str(time()-st)+" sec")
pred=clf.predict(x_test)
print("score: "+str(accuracy_score(y_test, pred)))
from sklearn import naive_bayes
clf=naive_bayes.GaussianNB()
st=time()
clf.fit(x_train,y_train)
print("time: ",end="" )
print(str(time()-st)+" sec")
pred=clf.predict(x_test)
print("score: "+str(accuracy_score(y_test, pred)))
clf=DecisionTreeClassifier()
st=time()
clf.fit(x_train,y_train)
print("time: ",end="" )
print(str(time()-st)+" sec")
pred=clf.predict(x_test)
print("score: "+str(accuracy_score(y_test, pred)))
clf=RandomForestClassifier(criterion='entropy')
st=time()
clf.fit(x_train,y_train)
print("time: ",end="" )
print(str(time()-st)+" sec")
pred=clf.predict(x_test)
print("score: "+str(accuracy_score(y_test, pred)))
y_train.replace({'LAYING':1, 'STANDING':2, 'SITTING':3, 'WALKING':4, 'WALKING_UPSTAIRS':5, 'WALKING_DOWNSTAIRS':6},inplace=True)
y_test.replace({'LAYING':1, 'STANDING':2, 'SITTING':3, 'WALKING':4, 'WALKING_UPSTAIRS':5, 'WALKING_DOWNSTAIRS':6},inplace=True)
y_train.value_counts()
x_train=x_train.astype("float64")
y_train=y_train.astype("float64")
x_test=x_test.astype("float64")
y_test=y_test.astype("float64")
y_train.value_counts()
# labels were mapped to 1..6 above; shift them to 0..5 so that to_categorical(num_classes=6) works
y_train = tf.keras.utils.to_categorical(y=y_train - 1, num_classes=6)
y_train
model =tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1024 ,activation=tf.nn.relu, input_shape=(562,)))
model.add(tf.keras.layers.Dense(512 ,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(256 ,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128 ,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(64 ,activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(6 ,activation=tf.nn.softmax))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
history = model.fit(x_train,y_train,epochs=10)
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'])
plt.plot(history.history['loss'])
res = model.predict(x_test)
print(res)
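# Added sketch: turn the softmax scores back into class labels and score the network
# on the same hold-out split used for the classical models above
# (y_test still holds the numeric labels 1..6 assigned earlier).
nn_pred = np.argmax(res, axis=1) + 1  # +1 undoes the shift applied before to_categorical
print("NN score: " + str(accuracy_score(y_test.astype(int), nn_pred)))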
# np.savetxt("foo.csv", res, delimiter=",")
# from google.colab import files
# files.download('foo.csv')
res.shape
###Output
_____no_output_____ |
dAlembert_with_CAS.ipynb | ###Markdown
Solving a problem in mechanics using the d'Alembert principle with CAS **d'Alembert's** principle states that the sum of the differences between the forces acting on a system of mass particles and the time derivatives of the momenta of the system itself, projected onto any virtual displacement consistent with the constraints of the system, is zero. It can be written as follows,$$\sum_{i} ( \mathbf {F}_{i} - m_i \mathbf{\ddot x}_i )\cdot \delta \mathbf r_i = 0,\label{eq:dalem}\tag{1}$$ where: - $i$ enumerates particles, - $\mathbf{F_i}$, $\mathbf{\ddot x_i}$ are the forces and accelerations of the $i$-th particle, - $ \delta \mathbf r_i $ is the virtual displacement of the $i$-th particle. We consider $N$ particles in $3$-dimensional physical space, subjected to $p$ holonomic constraints of the form:$$ f_k(x, t) = 0\quad k=1,2,...,p. $$ The virtual displacements of the coordinates, $\delta x_j$, can be arbitrary numbers fulfilling: $$\sum_{j=1}^{3N} \frac{\partial f_k}{\partial x_j} \delta x_j=0,\quad k=1,2,...,p. \label{eq:constr}\tag{2}$$ This is a homogeneous system of $p$ linear equations for the $3N$ $\delta x_j$, so $p$ displacements can be expressed by the remaining $3N-p$, which are arbitrary. We can substitute this solution into the original d'Alembert equation $\eqref{eq:dalem}$ and obtain $3N-p$ second-order differential equations. Together with the $p$ constraints $\eqref{eq:constr}$ they allow us to determine the evolution of all variables. Let us note that this is a system of differential-algebraic equations. It can be solved, for example, by differentiating the algebraic equations and solving the resulting system of ODEs in the classical manner. A better possibility, used in most textbook problems, is to find the equations of motion in $3N-p$ independent generalized coordinates which are compliant with the constraints. Then we need to transform the d'Alembert principle $\eqref{eq:dalem}$ into those coordinates, which leads to a system of $3N-p$ ODEs. How to use CAS with the d'Alembert principle. One of the problems which prohibits direct use of a CAS is the need to treat symbols as independent variables or as functions of time, depending on the context. One possible solution is to define, for each symbolic variable, the corresponding Sage symbolic function and the variables representing its first and second derivatives: - coordinate - small letter: `a` - its time derivatives as independent symbols: $\dot a$ and $\ddot a$ - `ad` and `add` - explicit function of time $a(t)$: `A` - virtual displacement $\delta a$ - `da` Example - step by step. Let $a$ denote some generalized coordinate in our dynamical system:
###Code
var('t')
var('a')
###Output
_____no_output_____
###Markdown
We add symbols representing its derivatives with nice $\LaTeX$ representation:
###Code
var('ad',latex_name=r'\dot a')
var('add',latex_name=r'\ddot a')
show([a,ad,add])
###Output
_____no_output_____
###Markdown
We define the explicit function of time with the capital letter `A`.
###Code
A = function('A')(t)
show(A)
###Output
_____no_output_____
###Markdown
Now, we can do the following:
###Code
show(1+A.diff())
show ( (1+A.diff()).subs({A.diff():ad}) )
###Output
_____no_output_____
###Markdown
Let us calculate the second time derivative of $(1+a)^3$:
###Code
expr = (1+a)^3
###Output
_____no_output_____
###Markdown
we change variables to explicit function of time:
###Code
expr = expr.subs({a:A})
show(expr)
###Output
_____no_output_____
###Markdown
and calculate derivative:
###Code
expr = expr.diff(t,2)
show(expr)
###Output
_____no_output_____
###Markdown
we can now convert to the form containing symbols: `ad` and `add`
###Code
expr = expr.subs({A:a,A.diff():ad,A.diff(2):add})
show(expr)
###Output
_____no_output_____
###Markdown
And calculate the derivative with respect to $\dot a$:
###Code
expr = expr.diff(ad)
show(expr)
###Output
_____no_output_____
###Markdown
Automatic definitions. We can now easily, for each variable, construct two symbols representing its time derivatives and the explicit time function, as well as dictionaries for converting from one form to the other. Let us define the variables and their $\LaTeX$ representations in a list of pairs: `xy_wsp`. Then we can write:
###Code
var('t')
xy_wsp = [('x','x'),('y','y')]
for v,lv in xy_wsp:
var("%s"%v,latex_name=r'%s'%lv)
vars()[v.capitalize()] = function(v.capitalize())(t)
var("%sdd"%v,latex_name=r'\ddot %s'%lv)
var("%sd"%v,latex_name=r'\dot %s'%lv)
var("d%s"%v,latex_name=r'\delta %s'%lv)
to_fun=dict()
for v,lv in xy_wsp:
to_fun[vars()[v]]=vars()[v.capitalize()]
to_fun[vars()[v+"d"]]=vars()[v.capitalize()].diff()
to_fun[vars()[v+"dd"]]=vars()[v.capitalize()].diff(2)
to_var = dict((v,k) for k,v in to_fun.items())
show(to_var)
show(to_fun)
###Output
_____no_output_____
###Markdown
Let's experiment with examples:
###Code
show( (1+x^2*y) )
show( (1+x^2*y).subs(to_fun))
show( (1+x^2*y).subs(to_fun).diff(t,2) )
show( (1+x^2*y).subs(to_fun).diff(t,2).subs(to_var) )
show( (1+x^2*y).subs(to_fun).diff(t,2).subs(to_var).diff(xd).diff(x) )
x.subs(to_fun).diff().subs(to_var).subs(to_fun)
###Output
_____no_output_____
###Markdown
Example: mathematical pendulum in Cartesian coordinates in 2d. We consider, in 2d, a point of mass $m$ in the Earth's gravity, subjected to the constraint $x^2+y^2-l^2=0$, where $l$ is the length of the pendulum. The position of the mass is $(x,y)$, thus:
###Code
var('t')
var('l g')
xy_wsp = [('x','x'),('y','y')]
for v,lv in xy_wsp:
var("%s"%v,latex_name=r'%s'%lv)
vars()[v.capitalize()] = function(v.capitalize())(t)
var("%sdd"%v,latex_name=r'\ddot %s'%lv)
var("%sd"%v,latex_name=r'\dot %s'%lv)
var("d%s"%v,latex_name=r'\delta %s'%lv)
xy = [vars()[v] for v,lv in xy_wsp]
dxy = [vars()['d'+repr(zm)] for zm in xy]
to_fun=dict()
for v,lv in xy_wsp:
to_fun[vars()[v]]=vars()[v.capitalize()]
to_fun[vars()[v+"d"]]=vars()[v.capitalize()].diff()
to_fun[vars()[v+"dd"]]=vars()[v.capitalize()].diff(2)
to_var = dict((v,k) for k,v in to_fun.items())
show(xy),show(dxy),
###Output
_____no_output_____
###Markdown
Having the constraint, one can obtain its differential form:$$\frac{\partial f}{\partial x} \delta x + \frac{\partial f}{\partial y} \delta y = 0 $$
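For the pendulum constraint $f = x^2+y^2-l^2$ this reads $2x\,\delta x + 2y\,\delta y = 0$, i.e. $\delta x = -\frac{y}{x}\,\delta y$, which is exactly the relation that will be used below to eliminate $\delta x$.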
###Code
f = x^2+y^2-l^2
constr =sum([dz*f.diff(z) for z,dz in zip(xy,dxy)])
show( constr)
###Output
_____no_output_____
###Markdown
d'Alembert principle reads:
###Code
dAlemb = (X.diff(t,2))*dx + (Y.diff(t,2)+g)*dy
show(dAlemb.subs(to_var))
###Output
_____no_output_____
###Markdown
We obtain the first equation by substituting $\delta x$ from the differential constraint equation into the d'Alembert principle:
###Code
eq1=(dAlemb.subs(constr.solve(dx)[0])).expand().coefficient(dy).subs(to_var)
show(eq1)
###Output
_____no_output_____
###Markdown
The second equation can be obtained by differentiating the constraint twice with respect to time:
###Code
eq2 = f.subs(to_fun).diff(t,2).subs(to_var)
show(eq2)
###Output
_____no_output_____
###Markdown
We have to solve for $\ddot x$ and $\ddot y$, and we get the equations of motion:
###Code
sol = solve( [eq1,eq2],[xdd,ydd])
show( sol[0] )
###Output
_____no_output_____
###Markdown
We can easily solve it numerically with `desolve_odeint`. Interestingly, the length of the pendulum must be taken into account through the initial conditions, as $l$ was removed from the above system by differentiation. Having access to the right-hand sides:
###Code
sol[0][0].rhs()
sol[0][1].rhs()
###Output
_____no_output_____
###Markdown
We solve the system of four first order ODEs (we treat $x$ and velocity: $\dot x$ as independent variables):$$\begin{eqnarray}\frac{dx}{dt} &=& \dot x \\\frac{dy}{dt} &=& \dot y \\\frac{d \dot x}{dt} &=& \frac{g {x} {y} - {\left({\dot x}^{2} + {\dot y}^{2}\right)} {x}}{{x}^{2} + {y}^{2}} \\\frac{d \dot y}{dt} &=& -\frac{g {x}^{2} + {\left({\dot x}^{2} + {\dot y}^{2}\right)} {y}}{{x}^{2} + {y}^{2}}\end{eqnarray}$$
###Code
ode=[xd,yd,sol[0][0].rhs().subs({g:1}),sol[0][1].rhs().subs({g:1})]
times = srange(0,14,0.01)
numsol=desolve_odeint(ode,[0,-1,1.2,0],times,[x,y,xd,yd])
p=line(zip(numsol[:,0],numsol[:,1]),figsize=5,aspect_ratio=1)
p.show()
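# Added sanity check (not in the original notebook): for g = 1 the total energy per unit
# mass, E = (xd^2 + yd^2)/2 + y, should stay (numerically) constant along the trajectory.
energy = 0.5*(numsol[:,2]**2 + numsol[:,3]**2) + numsol[:,1]
line(zip(times, energy), figsize=(7,2)).show()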
###Output
_____no_output_____
###Markdown
We can compare this numerical solution with the small-amplitude approximation. Suppose that the pendulum starts at its lowest position, $\phi=\arctan(y/x)=-\pi/2$, with linear velocity $\dot x(0) = 0.2$. The analytical solution in that case reads: $$\phi = -\pi/2 + 0.2 \sin(\omega_0 t),$$where $\omega_0=\sqrt{g/l}=1$.
###Code
times = srange(0,14,0.01)
numsol = desolve_odeint(ode,[0,-1,.2,0],times,[x,y,xd,yd])
import numpy as np
line(zip( times,np.arctan2(numsol[:,1],numsol[:,0]) ),figsize=(7,2))+\
plot(0.2*sin(t)-pi/2,(t,0,10),color='red')
###Output
_____no_output_____
###Markdown
We can also check whether the constraint, i.e. the length of the pendulum, is preserved during the simulation:
###Code
print "initial l:",numsol[0,0]**2+numsol[0,1]**2," final l:",numsol[-1,0]**2+numsol[-1,1]**2
###Output
initial l: 1.0 final l: 0.999999990079
###Markdown
Solution in coordinates compliant with the constraints. Clearly, the derived system of DAEs is not the best approach to describe the mathematical pendulum. A better idea is to use coordinates which automatically fulfill the constraint. In the case of the mathematical pendulum one can use the angle $\phi$. We will need two sets of coordinates: $(x,y)$ and $\phi$:
###Code
var('x y t')
var('l g')
xy_wsp = [('x','x'),('y','y')]
uv_wsp = [('phi','\phi')]
for v,lv in uv_wsp+xy_wsp:
var("%s"%v,latex_name=r'%s'%lv)
vars()[v.capitalize()] = function(v.capitalize())(t)
var("%sdd"%v,latex_name=r'\ddot %s'%lv)
var("%sd"%v,latex_name=r'\dot %s'%lv)
var("d%s"%v,latex_name=r'\delta %s'%lv)
uv = [vars()[v] for v,lv in uv_wsp]
xy = [vars()[v] for v,lv in xy_wsp]
to_fun=dict()
for v,lv in uv_wsp:
to_fun[vars()[v]]=vars()[v.capitalize()]
to_fun[vars()[v+"d"]]=vars()[v.capitalize()].diff()
to_fun[vars()[v+"dd"]]=vars()[v.capitalize()].diff(2)
to_var = dict((v,k) for k,v in to_fun.items())
x2u = {x:l*cos(phi),y:l*sin(phi)}
###Output
_____no_output_____
###Markdown
We have to express the virtual displacements in the new coordinates:$$\delta x = \frac{\partial x(l,\phi)}{\partial \phi}\delta \phi $$$$\delta y = \frac{\partial y(l,\phi)}{\partial \phi}\delta \phi $$Despite the fact that we have only one element in `uv`, i.e. one new coordinate, we will use the general formula below:
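For the map $x = l\cos\phi$, $y = l\sin\phi$ defined above this yields $\delta x = -l\sin(\phi)\,\delta \phi$ and $\delta y = l\cos(\phi)\,\delta \phi$.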
###Code
for w in xy:
vars()['d'+repr(w)+'_polar']=sum([w.subs(x2u).diff(w2)*vars()['d'+repr(w2)] for w2 in uv])
show([dx_polar,dy_polar])
###Output
_____no_output_____
###Markdown
d'Alembert principle in new coordinates reads:
###Code
dAlemb = (x.subs(x2u).subs(to_fun).diff(t,2))*dx_polar + \
(y.subs(x2u).subs(to_fun).diff(t,2)+g)*dy_polar
dAlemb = dAlemb.subs(to_var)
show(dAlemb)
###Output
_____no_output_____
###Markdown
The above expression is zero when the coefficient of $\delta \phi$ is zero:
###Code
for v in uv:
show(dAlemb.expand().coefficient(vars()['d'+repr(v)]).trig_simplify())
###Output
_____no_output_____
###Markdown
We finally arrive at the known and expected equation:
###Code
show( dAlemb.expand().coefficient(dphi).trig_simplify().solve(phidd) )
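# Added sketch (assumptions in comments): the equation obtained above is equivalent to
# phidd == -g/l*cos(phi) for x = l*cos(phi), y = l*sin(phi). Taking g = l = 1 and starting
# at the lowest point with phi'(0) = 0.2 (matching the Cartesian run above), it can be
# integrated directly:
ode_phi = [phid, (-g/l*cos(phi)).subs({g:1, l:1})]
times_phi = srange(0, 14, 0.01)
numsol_phi = desolve_odeint(ode_phi, [float(-pi/2), 0.2], times_phi, [phi, phid])
line(zip(times_phi, numsol_phi[:,0]), figsize=(7,2)).show()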
###Output
_____no_output_____
###Markdown
The stable point is $\phi=-\frac{\pi}{2}$; we can expand the right-hand side at this point and obtain a harmonic oscillator in $\phi$:
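Indeed, near $\phi=-\frac{\pi}{2}$ we have $\cos\phi \approx \phi + \frac{\pi}{2}$, so to first order $\ddot \phi \approx -\frac{g}{l}\left(\phi+\frac{\pi}{2}\right)$: small oscillations with $\omega_0=\sqrt{g/l}$, consistent with the comparison made earlier.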
###Code
taylor(-g/l*cos(phi),phi,-pi/2,1).show()
###Output
_____no_output_____
###Markdown
One can redefine $\phi$ so that it is zero at the lowest point, and we recognize the classical formula:
###Code
taylor(-g/l*cos(phi),phi,-pi/2,1).subs({phi:phi-pi/2}).expand().show()
###Output
_____no_output_____ |
02_Classification/-- TensorBoard.ipynb | ###Markdown
Start TensorBoard Process
###Code
from google.datalab.ml import TensorBoard
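# NOTE (added): `model_dir` is assumed to have been defined in an earlier cell and to point
# at a directory containing TensorFlow summaries, e.g.
# model_dir = 'gs://my-bucket/model/train'  # hypothetical path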
TensorBoard().start(model_dir)
TensorBoard().list()
###Output
_____no_output_____
###Markdown
Kill TensorBoard Process
###Code
# to stop TensorBoard, pass the pid of the running instance (see TensorBoard().list())
TensorBoard().stop(23002)
print('stopped TensorBoard')
TensorBoard().list()
###Output
_____no_output_____ |
examples/vision/ipynb/image_classification_efficientnet_fine_tuning.ipynb | ###Markdown
Image classification using EfficientNet and fine-tuning**Author:** Yixing Fu**Date created:** 2020/06/30**Last modified:** 2020/07/06**Description:** Use EfficientNet with weights pre-trained on imagenet for CIFAR-100 classification. What is EfficientNetEfficientNet, first introduced in https://arxiv.org/abs/1905.11946 is among the mostefficient models (i.e. requiring least FLOPS for inference) that reaches SOTA in bothimagenet and common image classification transfer learning tasks.The smallest base model is similar to MnasNet (https://arxiv.org/abs/1807.11626), whichreached near-SOTA with a significantly smaller model. By introducing a heuristic way toscale the model, EfficientNet provides a family of models (B0 to B7) that represents agood combination of efficiency and accuracy on a variety of scales. Such a scalingheuristics (compound-scaling, details see https://arxiv.org/abs/1905.11946) allows theefficiency-oriented base model (B0) to surpass models at every scale, while avoidingextensive grid-search of hyperparameters.A summary of the latest updates on the model is available athttps://github.com/tensorflow/tpu/tree/master/models/official/efficientnet, where variousaugmentation schemes and semi-supervised learning approaches are applied to furtherimprove the imagenet performance of the models. These extensions of the model can be usedby updating weights without changing model architecture. Compound scalingThe EfficientNet models are approximately created using compound scaling. Starting fromthe base model B0, as model size scales from B0 to B7, the extra computational resourceis proportioned into width, depth and resolution of the model by requiring each of thethree dimensions to grow at the same power of a set of fixed ratios.However, it must be noted that the ratios are not taken accurately. A few points need tobe taken into account:Resolution. Resolutions not divisible by 8, 16, etc. cause zero-padding near boundariesof some layers which wastes computational resources. This especially applies to smallervariants of the model, hence the input resolution for B0 and B1 are chosen as 224 and240.Depth and width. Channel size is always rounded to 8/16/32 because of the architecture.Resource limit. Perfect compound scaling would assume spatial (memory) and time allowancefor the computation to grow simultaneously, but OOM may further bottleneck the scaling ofresolution.As a result, compound scaling factor is significantly off fromhttps://arxiv.org/abs/1905.11946. Hence it is important to understand the compoundscaling as a rule of thumb that leads to this family of base models, rather than an exactoptimization scheme. This also justifies that in the keras implementation (detailedbelow), only these 8 models, B0 to B7, are exposed to the user and arbitrary width /depth / resolution is not allowed. Keras implementation of EfficientNetAn implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF2.3. Touse EfficientNetB0 for classifying 1000 classes of images from imagenet, run```from tensorflow.keras.applications import EfficientNetB0model = EfficientNetB0(weights='imagenet')```This model takes input images of shape (224, 224, 3), and the input data should range[0,255]. Resizing and normalization are included as part of the model.Because training EfficientNet on imagenet takes a tremendous amount of resources andseveral techniques that are not a part of the model architecture itself. 
Hence the Kerasimplementation by default loads pre-trained weights with AutoAugment(https://arxiv.org/abs/1805.09501).For B0 to B7 base models, the input shapes are different. Here is a list of input shapeexpected for each model:| Base model | resolution||----------------|-----|| EfficientNetB0 | 224 || EfficientNetB1 | 240 || EfficientNetB2 | 260 || EfficientNetB3 | 300 || EfficientNetB4 | 380 || EfficientNetB5 | 456 || EfficientNetB6 | 528 || EfficientNetB7 | 600 |When the use of the model is intended for transfer learning, the Keras implementationprovides a option to remove the top layers:```model = EfficientNetB0(include_top=False, weights='imagenet')```This option excludes the final Dense layer that turns 1280 features on the penultimatelayer into prediction of the 1000 classes in imagenet. Replacing the top with customlayers allows using EfficientNet as a feature extractor and transfers the pretrainedweights to other tasks.Another keyword in the model builder worth noticing is `drop_connect_rate` which controlsthe dropout rate responsible for stochastic depth (https://arxiv.org/abs/1603.09382).This parameter serves as a toggle for extra regularization in finetuning, but does notalter loaded weights. Example: EfficientNetB0 for CIFAR-100.As an architecture, EfficientNet is capable of a wide range of image classificationtasks. For example, we will show using pre-trained EfficientNetB0 on CIFAR-100. ForEfficientNetB0, image size is 224.
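Before moving on to the example, here is a quick, hedged sketch (not part of the original notebook) of the two options discussed above: the headless feature extractor and the extra-regularized variant obtained via `drop_connect_rate`:
```python
from tensorflow.keras.applications import EfficientNetB0

# Headless feature extractor with ImageNet weights; for 224x224 inputs the
# final feature map has shape (None, 7, 7, 1280).
base = EfficientNetB0(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
print(base.output_shape)

# Same weights, but stronger stochastic-depth regularization for fine-tuning.
base_reg = EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), drop_connect_rate=0.4
)
```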
###Code
# IMG_SIZE is determined by EfficientNet model choice
IMG_SIZE = 224
###Output
_____no_output_____
###Markdown
prepare
###Code
!!pip install --quiet tensorflow==2.3.0rc0
!!pip install --quiet cloud-tpu-client
import tensorflow as tf
try:
from cloud_tpu_client import Client
c = Client()
c.configure_tpu_version(tf.__version__, restart_type="always")
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print("Running on TPU ", tpu.cluster_spec().as_dict()["worker"])
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError:
print("Not connected to a TPU runtime. Using CPU/GPU strategy")
strategy = tf.distribute.MirroredStrategy()
###Output
_____no_output_____
###Markdown
Below is example code for loading data.To see sensible result, you need to load entire dataset and adjust epochs fortraining; but you may truncate data for a quick verification of the workflow.Expect the notebook to run at least an hour for GPU, while much faster on TPU ifusing hosted Colab session.
###Code
from tensorflow import keras
from tensorflow.keras.datasets import cifar100
from tensorflow.keras.utils import to_categorical
batch_size = 64
(x_train, y_train), (x_test, y_test) = cifar100.load_data()
NUM_CLASSES = 100
x_train = tf.cast(x_train, tf.int32)
x_test = tf.cast(x_test, tf.int32)
truncate_data = False # @param {type: "boolean"}
if truncate_data:
x_train = x_train[0:5000]
y_train = y_train[0:5000]
x_test = x_test[0:1000]
y_test = y_test[0:1000]
# one-hot / categorical
y_train = to_categorical(y_train, NUM_CLASSES)
y_test = to_categorical(y_test, NUM_CLASSES)
ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
ds_train = ds_train.cache()
ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = tf.data.Dataset.from_tensor_slices((x_test, y_test))
ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True)
###Output
_____no_output_____
###Markdown
Training from scratch. To build a model that uses EfficientNetB0 with 100 classes, initialized from scratch: Note: to better see validation accuracy peeling off from training accuracy, run ~20 epochs.
###Code
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.layers.experimental.preprocessing import (
Resizing,
RandomFlip,
RandomContrast,
# RandomHeight,
)
from tensorflow.keras.optimizers import SGD
with strategy.scope():
inputs = keras.layers.Input(shape=(32, 32, 3))
x = inputs
x = RandomFlip()(x)
x = RandomContrast(0.1)(x)
# x = RandomHeight(0.1)(x)
x = Resizing(IMG_SIZE, IMG_SIZE, interpolation="bilinear")(x)
x = EfficientNetB0(include_top=True, weights=None, classes=100)(x)
model = keras.Model(inputs, x)
sgd = SGD(learning_rate=0.2, momentum=0.1, nesterov=True)
model.compile(optimizer=sgd, loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=5, min_lr=0.005, verbose=2
)
epochs = 20 # @param {type: "slider", min:5, max:50}
hist = model.fit(
ds_train, epochs=epochs, validation_data=ds_test, callbacks=[reduce_lr], verbose=2
)
###Output
_____no_output_____
###Markdown
Training the model is relatively fast (it takes only 20 seconds per epoch on the TPUv2 that is available on Colab). This might make it sound easy to simply train EfficientNet from scratch on any dataset. However, training EfficientNet on smaller datasets, especially those with lower resolution like CIFAR-100, faces the significant challenge of overfitting or getting trapped in local extrema. Hence training from scratch requires a very careful choice of hyperparameters, and it is difficult to find suitable regularization. Plotting the training and validation accuracy makes it clear that validation accuracy stagnates at a very low value.
###Code
import matplotlib.pyplot as plt
def plot_hist(hist):
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left")
plt.show()
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Transfer learning from pretrained weights. Using pre-trained ImageNet weights and only transfer-learning (fine-tuning) the model makes it much easier to utilize the power of EfficientNet. To use pretrained weights, the model can be initiated through
###Code
from tensorflow import keras
from tensorflow.keras.layers.experimental.preprocessing import (
Resizing,
RandomContrast,
)
def build_model(n_classes):
inputs = keras.layers.Input(shape=(32, 32, 3))
x = inputs
x = RandomFlip()(x)
x = RandomContrast(0.1)(x)
x = Resizing(IMG_SIZE, IMG_SIZE, interpolation="bilinear")(x)
# other preprocessing layers can be used similar to Resizing and RandomRotation
model = EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet")
# freeze the pretrained weights
for l in model.layers:
l.trainable = False
# rebuild top
x = keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = keras.layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = keras.layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    x = keras.layers.Dense(n_classes, activation="softmax", name="pred")(x)
# compile
model = keras.Model(inputs, x, name="EfficientNet")
sgd = SGD(learning_rate=0.2, momentum=0.1, nesterov=True)
# sgd = tfa.optimizers.MovingAverage(sgd)
model.compile(optimizer=sgd, loss="categorical_crossentropy", metrics=["accuracy"])
return model
###Output
_____no_output_____
###Markdown
Note that it is also possible to freeze the pre-trained part entirely by```model.trainable = False```instead of setting each layer separately. The first step of transfer learning is to freeze all layers and train only the top layers. For this step a relatively large learning rate (~0.1) can be used to start with, while applying some learning rate decay (either ExponentialDecay or the ReduceLROnPlateau callback). On CIFAR-100 with EfficientNetB0, this step will take validation accuracy to ~70% with suitable (but not absolutely optimal) image augmentation. For this stage, using EfficientNetB0, validation accuracy and loss will be consistently better than training accuracy and loss. This is because the regularization is strong, which only suppresses train-time metrics. Note that the convergence may take up to 50 epochs. If no data augmentation layer is applied, expect the validation accuracy to reach only ~60% even for many epochs.
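As an aside, here is a minimal sketch (not used in this notebook) of how the ExponentialDecay alternative mentioned above could be attached to the optimizer; the `decay_steps` and `decay_rate` values are illustrative only:
```python
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.optimizers.schedules import ExponentialDecay

lr_schedule = ExponentialDecay(
    initial_learning_rate=0.1,  # illustrative, not tuned
    decay_steps=1000,
    decay_rate=0.9,
)
sgd = SGD(learning_rate=lr_schedule, momentum=0.1, nesterov=True)
```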
###Code
from tensorflow.keras.callbacks import ReduceLROnPlateau
with strategy.scope():
model = build_model(n_classes=NUM_CLASSES)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=5, min_lr=0.0001, verbose=2
)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(
ds_train, epochs=epochs, validation_data=ds_test, callbacks=[reduce_lr], verbose=2,
)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
The second step is to unfreeze a number of layers. Unfreezing layers and fine-tuning is usually thought to only provide incremental improvements in validation accuracy, but for the case of EfficientNetB0 it boosts validation accuracy by about 10% to pass 80% (reaching ~87% as in the original paper requires including AutoAugmentation or Random Augmentation). Note that the convergence may take more than 50 epochs. If no data augmentation layer is applied, expect the validation accuracy to reach only ~70% even for many epochs.
###Code
def unfreeze_model(model):
for l in model.layers:
if "bn" in l.name:
print(f"{l.name} is staying untrainable")
else:
l.trainable = True
sgd = SGD(learning_rate=0.005)
model.compile(optimizer=sgd, loss="categorical_crossentropy", metrics=["accuracy"])
return model
model = unfreeze_model(model)
reduce_lr = ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=5, min_lr=0.00001, verbose=2
)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(
ds_train, epochs=epochs, validation_data=ds_test, callbacks=[reduce_lr], verbose=2,
)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Image classification via fine-tuning with EfficientNet**Author:** [Yixing Fu](https://github.com/yixingfu)**Date created:** 2020/06/30**Last modified:** 2020/07/16**Description:** Use EfficientNet with weights pre-trained on imagenet for Stanford Dogs classification. Introduction: what is EfficientNetEfficientNet, first introduced in [Tan and Le, 2019](https://arxiv.org/abs/1905.11946)is among the most efficient models (i.e. requiring least FLOPS for inference)that reaches State-of-the-Art accuracy on bothimagenet and common image classification transfer learning tasks.The smallest base model is similar to [MnasNet](https://arxiv.org/abs/1807.11626), whichreached near-SOTA with a significantly smaller model. By introducing a heuristic way toscale the model, EfficientNet provides a family of models (B0 to B7) that represents agood combination of efficiency and accuracy on a variety of scales. Such a scalingheuristics (compound-scaling, details see[Tan and Le, 2019](https://arxiv.org/abs/1905.11946)) allows theefficiency-oriented base model (B0) to surpass models at every scale, while avoidingextensive grid-search of hyperparameters.A summary of the latest updates on the model is available at[here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet), where variousaugmentation schemes and semi-supervised learning approaches are applied to furtherimprove the imagenet performance of the models. These extensions of the model can be usedby updating weights without changing model architecture. B0 to B7 variants of EfficientNet*(This section provides some details on "compound scaling", and can be skippedif you're only interested in using the models)*Based on the [original paper](https://arxiv.org/abs/1905.11946) people may have theimpression that EfficientNet is a continuous family of models created by arbitrarilychoosing scaling factor in as Eq.(3) of the paper. However, choice of resolution,depth and width are also restricted by many factors:- Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near boundariesof some layers which wastes computational resources. This especially applies to smallervariants of the model, hence the input resolution for B0 and B1 are chosen as 224 and240.- Depth and width: The building blocks of EfficientNet demands channel size to bemultiples of 8.- Resource limit: Memory limitation may bottleneck resolution when depthand width can still increase. In such a situation, increasing depth and/orwidth but keep resolution can still improve performance.As a result, the depth, width and resolution of each variant of the EfficientNet modelsare hand-picked and proven to produce good results, though they may be significantlyoff from the compound scaling formula.Therefore, the keras implementation (detailed below) only provide these 8 models, B0 to B7,instead of allowing arbitray choice of width / depth / resolution parameters. Keras implementation of EfficientNetAn implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF2.3. Touse EfficientNetB0 for classifying 1000 classes of images from imagenet, run:```pythonfrom tensorflow.keras.applications import EfficientNetB0model = EfficientNetB0(weights='imagenet')```This model takes input images of shape (224, 224, 3), and the input data should range[0, 255]. Normalization is included as part of the model.Because training EfficientNet on ImageNet takes a tremendous amount of resources andseveral techniques that are not a part of the model architecture itself. 
Hence the Kerasimplementation by default loads pre-trained weights obtained via training with[AutoAugment](https://arxiv.org/abs/1805.09501).For B0 to B7 base models, the input shapes are different. Here is a list of input shapeexpected for each model:| Base model | resolution||----------------|-----|| EfficientNetB0 | 224 || EfficientNetB1 | 240 || EfficientNetB2 | 260 || EfficientNetB3 | 300 || EfficientNetB4 | 380 || EfficientNetB5 | 456 || EfficientNetB6 | 528 || EfficientNetB7 | 600 |When the model is intended for transfer learning, the Keras implementationprovides a option to remove the top layers:```model = EfficientNetB0(include_top=False, weights='imagenet')```This option excludes the final `Dense` layer that turns 1280 features on the penultimatelayer into prediction of the 1000 ImageNet classes. Replacing the top layer with customlayers allows using EfficientNet as a feature extractor in a transfer learning workflow.Another argument in the model constructor worth noticing is `drop_connect_rate` which controlsthe dropout rate responsible for [stochastic depth](https://arxiv.org/abs/1603.09382).This parameter serves as a toggle for extra regularization in finetuning, but does notaffect loaded weights. For example, when stronger regularization is desired, try:```pythonmodel = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4)```The default value is 0.2. Example: EfficientNetB0 for Stanford Dogs.EfficientNet is capable of a wide range of image classification tasks.This makes it a good model for transfer learning.As an end-to-end example, we will show using pre-trained EfficientNetB0 on[Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/main.html) dataset.
###Code
# IMG_SIZE is determined by EfficientNet model choice
IMG_SIZE = 224
###Output
_____no_output_____
###Markdown
Setup and data loadingThis example requires TensorFlow 2.3 or above.To use TPU, the TPU runtime must match current running TensorFlowversion. If there is a mismatch, try:```pythonfrom cloud_tpu_client import Clientc = Client()c.configure_tpu_version(tf.__version__, restart_type="always")```
###Code
import tensorflow as tf
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
print("Device:", tpu.master())
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:
print("Not connected to a TPU runtime. Using CPU/GPU strategy")
strategy = tf.distribute.MirroredStrategy()
###Output
_____no_output_____
###Markdown
Loading dataHere we load data from [tensorflow_datasets](https://www.tensorflow.org/datasets)(hereafter TFDS).Stanford Dogs dataset is provided inTFDS as [stanford_dogs](https://www.tensorflow.org/datasets/catalog/stanford_dogs).It features 20,580 images that belong to 120 classes of dog breeds(12,000 for training and 8,580 for testing).By simply changing `dataset_name` below, you may also try this notebook forother datasets in TFDS such as[cifar10](https://www.tensorflow.org/datasets/catalog/cifar10),[cifar100](https://www.tensorflow.org/datasets/catalog/cifar100),[food101](https://www.tensorflow.org/datasets/catalog/food101),etc. When the images are much smaller than the size of EfficientNet input,we can simply upsample the input images. It has been shown in[Tan and Le, 2019](https://arxiv.org/abs/1905.11946) that transfer learningresult is better for increased resolution even if input images remain small.For TPU: if using TFDS datasets,a [GCS bucket](https://cloud.google.com/storage/docs/key-termsbuckets)location is required to save the datasets. For example:```pythontfds.load(dataset_name, data_dir="gs://example-bucket/datapath")```Also, both the current environment and the TPU service account haveproper [access](https://cloud.google.com/tpu/docs/storage-bucketsauthorize_the_service_account)to the bucket. Alternatively, for small datasets you may try loading datainto the memory and use `tf.data.Dataset.from_tensor_slices()`.
###Code
import tensorflow_datasets as tfds
batch_size = 64
dataset_name = "stanford_dogs"
(ds_train, ds_test), ds_info = tfds.load(
dataset_name, split=["train", "test"], with_info=True, as_supervised=True
)
NUM_CLASSES = ds_info.features["label"].num_classes
###Output
_____no_output_____
###Markdown
When the dataset includes images of various sizes, we need to resize them into a shared size. The Stanford Dogs dataset includes only images at least 200x200 pixels in size. Here we resize the images to the input size needed for EfficientNet.
###Code
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
###Output
_____no_output_____
###Markdown
Visualizing the dataThe following code shows the first 9 images with their labels.
###Code
import matplotlib.pyplot as plt
def format_label(label):
string_label = label_info.int2str(label)
return string_label.split("-")[1]
label_info = ds_info.features["label"]
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Data augmentationWe can use the preprocessing layers APIs for image augmentation.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
img_augmentation = Sequential(
[
layers.RandomRotation(factor=0.15),
layers.RandomTranslation(height_factor=0.1, width_factor=0.1),
layers.RandomFlip(),
layers.RandomContrast(factor=0.1),
],
name="img_augmentation",
)
###Output
_____no_output_____
###Markdown
This `Sequential` model object can be used both as a part ofthe model we later build, and as a function to preprocessdata before feeding into the model. Using them as function makesit easy to visualize the augmented images. Here we plot 9 examplesof augmentation result of a given figure.
###Code
for image, label in ds_train.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
aug_img = img_augmentation(tf.expand_dims(image, axis=0))
plt.imshow(aug_img[0].numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Prepare inputsOnce we verify the input data and augmentation are working correctly,we prepare dataset for training. The input data are resized to uniform`IMG_SIZE`. The labels are put into one-hot(a.k.a. categorical) encoding. The dataset is batched.Note: `prefetch` and `AUTOTUNE` may in some situation improveperformance, but depends on environment and the specific dataset used.See this [guide](https://www.tensorflow.org/guide/data_performance)for more information on data pipeline performance.
###Code
# One-hot / categorical encoding
def input_preprocess(image, label):
label = tf.one_hot(label, NUM_CLASSES)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True)
###Output
_____no_output_____
###Markdown
Training a model from scratchWe build an EfficientNetB0 with 120 output classes, that is initialized from scratch:Note: the accuracy will increase very slowly and may overfit.
###Code
from tensorflow.keras.applications import EfficientNetB0
with strategy.scope():
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
outputs = EfficientNetB0(include_top=True, weights=None, classes=NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.summary()
epochs = 40 # @param {type: "slider", min:10, max:100}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
###Output
_____no_output_____
###Markdown
Training the model is relatively fast (it takes only 20 seconds per epoch on the TPUv2 that is available on Colab). This might make it sound easy to simply train EfficientNet from scratch on any dataset. However, training EfficientNet on smaller datasets, especially those with lower resolution like CIFAR-100, faces the significant challenge of overfitting. Hence training from scratch requires a very careful choice of hyperparameters, and it is difficult to find suitable regularization. It would also be much more demanding in resources. Plotting the training and validation accuracy makes it clear that validation accuracy stagnates at a low value.
###Code
import matplotlib.pyplot as plt
def plot_hist(hist):
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left")
plt.show()
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Transfer learning from pre-trained weightsHere we initialize the model with pre-trained ImageNet weights,and we fine-tune it on our own dataset.
###Code
def build_model(num_classes):
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
model = EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    outputs = layers.Dense(num_classes, activation="softmax", name="pred")(x)
# Compile
model = tf.keras.Model(inputs, outputs, name="EfficientNet")
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
###Output
_____no_output_____
###Markdown
The first step to transfer learning is to freeze all layers and train only the toplayers. For this step, a relatively large learning rate (1e-2) can be used.Note that validation accuracy and loss will usually be better than trainingaccuracy and loss. This is because the regularization is strong, which onlysuppresses training-time metrics.Note that the convergence may take up to 50 epochs depending on choice of learning rate.If image augmentation layers were notapplied, the validation accuracy may only reach ~60%.
###Code
with strategy.scope():
model = build_model(num_classes=NUM_CLASSES)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
The second step is to unfreeze a number of layers and fit the model using a smaller learning rate. In this example we show unfreezing all layers, but depending on the specific dataset it may be desirable to only unfreeze a fraction of all layers. When the feature extraction with the pretrained model works well enough, this step would give a very limited gain in validation accuracy. In our case we only see a small improvement, as ImageNet pretraining already exposed the model to a good amount of dogs. On the other hand, when we use pretrained weights on a dataset that is more different from ImageNet, this fine-tuning step can be crucial, as the feature extractor also needs to be adjusted by a considerable amount. Such a situation can be demonstrated by choosing the CIFAR-100 dataset instead, where fine-tuning boosts validation accuracy by about 10% to pass 80% on `EfficientNetB0`. In such a case the convergence may take more than 50 epochs. A side note on freezing/unfreezing models: setting `trainable` of a `Model` will simultaneously set all layers belonging to the `Model` to the same `trainable` attribute. Each layer is trainable only if both the layer itself and the model containing it are trainable. Hence when we need to partially freeze/unfreeze a model, we need to make sure the `trainable` attribute of the model is set to `True`.
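To make the `trainable` interplay concrete, here is a tiny self-contained illustration (added; not part of the original example):
```python
import tensorflow as tf

toy = tf.keras.Sequential(
    [tf.keras.layers.Dense(4, input_shape=(2,)), tf.keras.layers.Dense(1)]
)
toy.trainable = False                 # freezes every layer the model contains
print(len(toy.trainable_weights))     # 0
toy.trainable = True                  # must be True again before selective unfreezing
toy.layers[0].trainable = False       # now only the second Dense layer will be updated
print([w.name for w in toy.trainable_weights])
```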
###Code
def unfreeze_model(model):
# We unfreeze the top 20 layers while leaving BatchNorm layers frozen
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
unfreeze_model(model)
epochs = 10 # @param {type: "slider", min:8, max:50}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Image classification via fine-tuning with EfficientNet**Author:** [Yixing Fu](https://github.com/yixingfu)**Date created:** 2020/06/30**Last modified:** 2020/07/16**Description:** Use EfficientNet with weights pre-trained on imagenet for Stanford Dogs classification. Introduction: what is EfficientNetEfficientNet, first introduced in [Tan and Le, 2019](https://arxiv.org/abs/1905.11946)is among the most efficient models (i.e. requiring least FLOPS for inference)that reaches State-of-the-Art accuracy on bothimagenet and common image classification transfer learning tasks.The smallest base model is similar to [MnasNet](https://arxiv.org/abs/1807.11626), whichreached near-SOTA with a significantly smaller model. By introducing a heuristic way toscale the model, EfficientNet provides a family of models (B0 to B7) that represents agood combination of efficiency and accuracy on a variety of scales. Such a scalingheuristics (compound-scaling, details see[Tan and Le, 2019](https://arxiv.org/abs/1905.11946)) allows theefficiency-oriented base model (B0) to surpass models at every scale, while avoidingextensive grid-search of hyperparameters.A summary of the latest updates on the model is available at[here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet), where variousaugmentation schemes and semi-supervised learning approaches are applied to furtherimprove the imagenet performance of the models. These extensions of the model can be usedby updating weights without changing model architecture. B0 to B7 variants of EfficientNet*(This section provides some details on "compound scaling", and can be skippedif you're only interested in using the models)*Based on the [original paper](https://arxiv.org/abs/1905.11946) people may have theimpression that EfficientNet is a continuous family of models created by arbitrarilychoosing scaling factor in as Eq.(3) of the paper. However, choice of resolution,depth and width are also restricted by many factors:- Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near boundariesof some layers which wastes computational resources. This especially applies to smallervariants of the model, hence the input resolution for B0 and B1 are chosen as 224 and240.- Depth and width: The building blocks of EfficientNet demands channel size to bemultiples of 8.- Resource limit: Memory limitation may bottleneck resolution when depthand width can still increase. In such a situation, increasing depth and/orwidth but keep resolution can still improve performance.As a result, the depth, width and resolution of each variant of the EfficientNet modelsare hand-picked and proven to produce good results, though they may be significantlyoff from the compound scaling formula.Therefore, the keras implementation (detailed below) only provide these 8 models, B0 to B7,instead of allowing arbitray choice of width / depth / resolution parameters. Keras implementation of EfficientNetAn implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF2.3. Touse EfficientNetB0 for classifying 1000 classes of images from imagenet, run:```pythonfrom tensorflow.keras.applications import EfficientNetB0model = EfficientNetB0(weights='imagenet')```This model takes input images of shape (224, 224, 3), and the input data should range[0, 255]. Normalization is included as part of the model.Because training EfficientNet on ImageNet takes a tremendous amount of resources andseveral techniques that are not a part of the model architecture itself. 
Hence the Kerasimplementation by default loads pre-trained weights obtained via training with[AutoAugment](https://arxiv.org/abs/1805.09501).For B0 to B7 base models, the input shapes are different. Here is a list of input shapeexpected for each model:| Base model | resolution||----------------|-----|| EfficientNetB0 | 224 || EfficientNetB1 | 240 || EfficientNetB2 | 260 || EfficientNetB3 | 300 || EfficientNetB4 | 380 || EfficientNetB5 | 456 || EfficientNetB6 | 528 || EfficientNetB7 | 600 |When the model is intended for transfer learning, the Keras implementationprovides a option to remove the top layers:```model = EfficientNetB0(include_top=False, weights='imagenet')```This option excludes the final `Dense` layer that turns 1280 features on the penultimatelayer into prediction of the 1000 ImageNet classes. Replacing the top layer with customlayers allows using EfficientNet as a feature extractor in a transfer learning workflow.Another argument in the model constructor worth noticing is `drop_connect_rate` which controlsthe dropout rate responsible for [stochastic depth](https://arxiv.org/abs/1603.09382).This parameter serves as a toggle for extra regularization in finetuning, but does notaffect loaded weights. For example, when stronger regularization is desired, try:```pythonmodel = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4)```The default value is 0.2. Example: EfficientNetB0 for Stanford Dogs.EfficientNet is capable of a wide range of image classification tasks.This makes it a good model for transfer learning.As an end-to-end example, we will show using pre-trained EfficientNetB0 on[Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/main.html) dataset.
###Code
# IMG_SIZE is determined by EfficientNet model choice
IMG_SIZE = 224
###Output
_____no_output_____
###Markdown
Setup and data loadingThis example requires TensorFlow 2.3 or above.To use TPU, the TPU runtime must match current running TensorFlowversion. If there is a mismatch, try:```pythonfrom cloud_tpu_client import Clientc = Client()c.configure_tpu_version(tf.__version__, restart_type="always")```
###Code
import tensorflow as tf
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print("Running on TPU ", tpu.cluster_spec().as_dict()["worker"])
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:
print("Not connected to a TPU runtime. Using CPU/GPU strategy")
strategy = tf.distribute.MirroredStrategy()
###Output
_____no_output_____
###Markdown
Loading dataHere we load data from [tensorflow_datasets](https://www.tensorflow.org/datasets)(hereafter TFDS).Stanford Dogs dataset is provided inTFDS as [stanford_dogs](https://www.tensorflow.org/datasets/catalog/stanford_dogs).It features 20,580 images that belong to 120 classes of dog breeds(12,000 for training and 8,580 for testing).By simply changing `dataset_name` below, you may also try this notebook forother datasets in TFDS such as[cifar10](https://www.tensorflow.org/datasets/catalog/cifar10),[cifar100](https://www.tensorflow.org/datasets/catalog/cifar100),[food101](https://www.tensorflow.org/datasets/catalog/food101),etc. When the images are much smaller than the size of EfficientNet input,we can simply upsample the input images. It has been shown in[Tan and Le, 2019](https://arxiv.org/abs/1905.11946) that transfer learningresult is better for increased resolution even if input images remain small.For TPU: if using TFDS datasets,a [GCS bucket](https://cloud.google.com/storage/docs/key-termsbuckets)location is required to save the datasets. For example:```pythontfds.load(dataset_name, data_dir="gs://example-bucket/datapath")```Also, both the current environment and the TPU service account haveproper [access](https://cloud.google.com/tpu/docs/storage-bucketsauthorize_the_service_account)to the bucket. Alternatively, for small datasets you may try loading datainto the memory and use `tf.data.Dataset.from_tensor_slices()`.
###Code
import tensorflow_datasets as tfds
batch_size = 64
dataset_name = "stanford_dogs"
(ds_train, ds_test), ds_info = tfds.load(
dataset_name, split=["train", "test"], with_info=True, as_supervised=True
)
NUM_CLASSES = ds_info.features["label"].num_classes
###Output
_____no_output_____
###Markdown
When the dataset include images with various size, we need to resize them into ashared size. The Stanford Dogs dataset includes only images at least 200x200pixels in size. Here we resize the images to the input size needed for EfficientNet.
###Code
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
###Output
_____no_output_____
###Markdown
Visualizing the dataThe following code shows the first 9 images with their labels.
###Code
import matplotlib.pyplot as plt
def format_label(label):
string_label = label_info.int2str(label)
return string_label.split("-")[1]
label_info = ds_info.features["label"]
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Data augmentationWe can use the preprocessing layers APIs for image augmentation.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
img_augmentation = Sequential(
[
layers.RandomRotation(factor=0.15),
layers.RandomTranslation(height_factor=0.1, width_factor=0.1),
layers.RandomFlip(),
layers.RandomContrast(factor=0.1),
],
name="img_augmentation",
)
###Output
_____no_output_____
###Markdown
This `Sequential` model object can be used both as a part ofthe model we later build, and as a function to preprocessdata before feeding into the model. Using them as function makesit easy to visualize the augmented images. Here we plot 9 examplesof augmentation result of a given figure.
###Code
for image, label in ds_train.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
aug_img = img_augmentation(tf.expand_dims(image, axis=0))
plt.imshow(aug_img[0].numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Prepare inputsOnce we verify the input data and augmentation are working correctly,we prepare dataset for training. The input data are resized to uniform`IMG_SIZE`. The labels are put into one-hot(a.k.a. categorical) encoding. The dataset is batched.Note: `prefetch` and `AUTOTUNE` may in some situation improveperformance, but depends on environment and the specific dataset used.See this [guide](https://www.tensorflow.org/guide/data_performance)for more information on data pipeline performance.
###Code
# One-hot / categorical encoding
def input_preprocess(image, label):
label = tf.one_hot(label, NUM_CLASSES)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True)
###Output
_____no_output_____
###Markdown
Training a model from scratchWe build an EfficientNetB0 with 120 output classes, that is initialized from scratch:Note: the accuracy will increase very slowly and may overfit.
###Code
from tensorflow.keras.applications import EfficientNetB0
with strategy.scope():
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
outputs = EfficientNetB0(include_top=True, weights=None, classes=NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.summary()
epochs = 40 # @param {type: "slider", min:10, max:100}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
###Output
_____no_output_____
###Markdown
Training the model is relatively fast (takes only 20 seconds per epoch on TPUv2 that isavailable on Colab). This might make it sounds easy to simply train EfficientNet on anydataset wanted from scratch. However, training EfficientNet on smaller datasets,especially those with lower resolution like CIFAR-100, faces the significant challenge ofoverfitting.Hence training from scratch requires very careful choice of hyperparameters and isdifficult to find suitable regularization. It would also be much more demanding in resources.Plotting the training and validation accuracymakes it clear that validation accuracy stagnates at a low value.
###Code
import matplotlib.pyplot as plt
def plot_hist(hist):
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left")
plt.show()
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Transfer learning from pre-trained weights
Here we initialize the model with pre-trained ImageNet weights, and we fine-tune it on our own dataset.
###Code
def build_model(num_classes):
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
model = EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="pred")(x)
# Compile
model = tf.keras.Model(inputs, outputs, name="EfficientNet")
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
###Output
_____no_output_____
###Markdown
The first step in transfer learning is to freeze all layers and train only the top layers. For this step, a relatively large learning rate (1e-2) can be used. Note that validation accuracy and loss will usually be better than training accuracy and loss; this is because the regularization is strong, and it only suppresses training-time metrics.

Note that convergence may take up to 50 epochs depending on the choice of learning rate. If the image augmentation layers were not applied, the validation accuracy may only reach ~60%.
###Code
with strategy.scope():
model = build_model(num_classes=NUM_CLASSES)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
The second step is to unfreeze a number of layers and fit the model using a smaller learning rate. In this example we unfreeze the top 20 layers, but depending on the specific dataset it may be desirable to only unfreeze a fraction of all layers.

When the feature extraction with the pretrained model works well enough, this step gives a very limited gain in validation accuracy. In our case we only see a small improvement, as ImageNet pretraining already exposed the model to a good amount of dogs. On the other hand, when we use pretrained weights on a dataset that is more different from ImageNet, this fine-tuning step can be crucial, as the feature extractor also needs to be adjusted by a considerable amount. Such a situation can be demonstrated by choosing the CIFAR-100 dataset instead, where fine-tuning boosts validation accuracy by about 10% to pass 80% on `EfficientNetB0`. In such a case the convergence may take more than 50 epochs.

A side note on freezing/unfreezing models: setting `trainable` of a `Model` will simultaneously set all layers belonging to the `Model` to the same `trainable` attribute. Each layer is trainable only if both the layer itself and the model containing it are trainable. Hence, when we need to partially freeze/unfreeze a model, we need to make sure the `trainable` attribute of the model is set to `True`.
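To make this side note concrete, here is a minimal standalone sketch (a small throwaway model, unrelated to the EfficientNet model in this example) showing how the `trainable` flags interact:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A tiny throwaway model, only to illustrate the trainable-flag semantics.
inner = tf.keras.Sequential([layers.Dense(4), layers.Dense(2)])
inner(tf.zeros((1, 8)))  # build the weights

inner.trainable = False            # freezes the model and every layer inside it
inner.layers[0].trainable = True   # no effect while the containing model is frozen
print(len(inner.trainable_weights))   # -> 0

inner.trainable = True             # the model itself must be trainable ...
inner.layers[1].trainable = False  # ... for per-layer flags to take effect
print(len(inner.trainable_weights))   # -> 2 (kernel and bias of the first Dense layer)
```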
###Code
def unfreeze_model(model):
# We unfreeze the top 20 layers while leaving BatchNorm layers frozen
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
unfreeze_model(model)
epochs = 10 # @param {type: "slider", min:8, max:50}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Image classification via fine-tuning with EfficientNet
**Author:** [Yixing Fu](https://github.com/yixingfu)
**Date created:** 2020/06/30
**Last modified:** 2020/07/16
**Description:** Use EfficientNet with weights pre-trained on ImageNet for Stanford Dogs classification.

Introduction: what is EfficientNet
EfficientNet, first introduced in [Tan and Le, 2019](https://arxiv.org/abs/1905.11946), is among the most efficient models (i.e. requiring the least FLOPS for inference) that reach state-of-the-art accuracy on both ImageNet and common image classification transfer learning tasks.

The smallest base model is similar to [MnasNet](https://arxiv.org/abs/1807.11626), which reached near-SOTA with a significantly smaller model. By introducing a heuristic way to scale the model, EfficientNet provides a family of models (B0 to B7) that represents a good combination of efficiency and accuracy on a variety of scales. Such a scaling heuristic (compound scaling; for details, see [Tan and Le, 2019](https://arxiv.org/abs/1905.11946)) allows the efficiency-oriented base model (B0) to surpass models at every scale, while avoiding an extensive grid search of hyperparameters.

A summary of the latest updates on the model is available [here](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet), where various augmentation schemes and semi-supervised learning approaches are applied to further improve the ImageNet performance of the models. These extensions of the model can be used by updating weights without changing the model architecture.

B0 to B7 variants of EfficientNet
*(This section provides some details on "compound scaling", and can be skipped if you're only interested in using the models)*

Based on the [original paper](https://arxiv.org/abs/1905.11946) people may have the impression that EfficientNet is a continuous family of models created by arbitrarily choosing a scaling factor as in Eq.(3) of the paper. However, the choice of resolution, depth and width is also restricted by many factors:

- Resolution: Resolutions not divisible by 8, 16, etc. cause zero-padding near the boundaries of some layers, which wastes computational resources. This especially applies to smaller variants of the model, hence the input resolutions for B0 and B1 are chosen as 224 and 240.
- Depth and width: The building blocks of EfficientNet demand channel sizes to be multiples of 8.
- Resource limit: Memory limitations may bottleneck resolution when depth and width can still increase. In such a situation, increasing depth and/or width while keeping resolution fixed can still improve performance.

As a result, the depth, width and resolution of each variant of the EfficientNet models are hand-picked and proven to produce good results, though they may be significantly off from the compound scaling formula. Therefore, the Keras implementation (detailed below) only provides these 8 models, B0 to B7, instead of allowing an arbitrary choice of width / depth / resolution parameters.

Keras implementation of EfficientNet
An implementation of EfficientNet B0 to B7 has been shipped with tf.keras since TF 2.3. To use EfficientNetB0 for classifying 1000 classes of images from ImageNet, run:

```python
from tensorflow.keras.applications import EfficientNetB0
model = EfficientNetB0(weights='imagenet')
```

This model takes input images of shape (224, 224, 3), and the input data should be in the range [0, 255]. Normalization is included as part of the model.

Training EfficientNet on ImageNet requires a tremendous amount of resources and several techniques that are not part of the model architecture itself. Hence the Keras implementation by default loads pre-trained weights obtained via training with [AutoAugment](https://arxiv.org/abs/1805.09501).

For the B0 to B7 base models, the input shapes are different. Here is a list of the input shape expected for each model:

| Base model | resolution |
|----------------|-----|
| EfficientNetB0 | 224 |
| EfficientNetB1 | 240 |
| EfficientNetB2 | 260 |
| EfficientNetB3 | 300 |
| EfficientNetB4 | 380 |
| EfficientNetB5 | 456 |
| EfficientNetB6 | 528 |
| EfficientNetB7 | 600 |

When the model is intended for transfer learning, the Keras implementation provides an option to remove the top layers:

```python
model = EfficientNetB0(include_top=False, weights='imagenet')
```

This option excludes the final `Dense` layer that turns the 1280 features of the penultimate layer into predictions for the 1000 ImageNet classes. Replacing the top layer with custom layers allows using EfficientNet as a feature extractor in a transfer learning workflow.

Another argument in the model constructor worth noticing is `drop_connect_rate`, which controls the dropout rate responsible for [stochastic depth](https://arxiv.org/abs/1603.09382). This parameter serves as a toggle for extra regularization in fine-tuning, but does not affect loaded weights. For example, when stronger regularization is desired, try:

```python
model = EfficientNetB0(weights='imagenet', drop_connect_rate=0.4)
```

The default value is 0.2.

Example: EfficientNetB0 for Stanford Dogs
EfficientNet is capable of a wide range of image classification tasks. This makes it a good model for transfer learning. As an end-to-end example, we will show using pre-trained EfficientNetB0 on the [Stanford Dogs](http://vision.stanford.edu/aditya86/ImageNetDogs/main.html) dataset.
###Code
# IMG_SIZE is determined by EfficientNet model choice
IMG_SIZE = 224
###Output
_____no_output_____
###Markdown
Setup and data loading
This example requires TensorFlow 2.3 or above. To use TPU, the TPU runtime must match the currently running TensorFlow version. If there is a mismatch, try:

```python
from cloud_tpu_client import Client
c = Client()
c.configure_tpu_version(tf.__version__, restart_type="always")
```
###Code
import tensorflow as tf
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
print("Running on TPU ", tpu.cluster_spec().as_dict()["worker"])
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError:
print("Not connected to a TPU runtime. Using CPU/GPU strategy")
strategy = tf.distribute.MirroredStrategy()
###Output
_____no_output_____
###Markdown
Loading data
Here we load data from [tensorflow_datasets](https://www.tensorflow.org/datasets) (hereafter TFDS). The Stanford Dogs dataset is provided in TFDS as [stanford_dogs](https://www.tensorflow.org/datasets/catalog/stanford_dogs). It features 20,580 images that belong to 120 classes of dog breeds (12,000 for training and 8,580 for testing).

By simply changing `dataset_name` below, you may also try this notebook for other datasets in TFDS such as [cifar10](https://www.tensorflow.org/datasets/catalog/cifar10), [cifar100](https://www.tensorflow.org/datasets/catalog/cifar100), [food101](https://www.tensorflow.org/datasets/catalog/food101), etc. When the images are much smaller than the size of the EfficientNet input, we can simply upsample the input images. It has been shown in [Tan and Le, 2019](https://arxiv.org/abs/1905.11946) that the transfer learning result is better for increased resolution even if input images remain small.

For TPU: if using TFDS datasets, a [GCS bucket](https://cloud.google.com/storage/docs/key-terms#buckets) location is required to save the datasets. For example:

```python
tfds.load(dataset_name, data_dir="gs://example-bucket/datapath")
```

Also, both the current environment and the TPU service account must have proper [access](https://cloud.google.com/tpu/docs/storage-buckets#authorize_the_service_account) to the bucket. Alternatively, for small datasets you may try loading the data into memory and using `tf.data.Dataset.from_tensor_slices()`.
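As a rough sketch of the in-memory alternative mentioned above (the `images` and `labels` arrays here are hypothetical placeholders for a small dataset you have already loaded yourself):

```python
import numpy as np
import tensorflow as tf

# Hypothetical in-memory arrays standing in for a small dataset.
images = np.zeros((8, IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8)
labels = np.zeros((8,), dtype=np.int64)

# Build a tf.data pipeline directly from in-memory tensors.
small_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)
```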
###Code
import tensorflow_datasets as tfds
batch_size = 64
dataset_name = "stanford_dogs"
(ds_train, ds_test), ds_info = tfds.load(
dataset_name, split=["train", "test"], with_info=True, as_supervised=True
)
NUM_CLASSES = ds_info.features["label"].num_classes
###Output
_____no_output_____
###Markdown
When the dataset includes images of various sizes, we need to resize them to a shared size. The Stanford Dogs dataset includes only images at least 200x200 pixels in size. Here we resize the images to the input size needed for EfficientNet.
###Code
size = (IMG_SIZE, IMG_SIZE)
ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label))
ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label))
###Output
_____no_output_____
###Markdown
Visualizing the data
The following code shows the first 9 images with their labels.
###Code
import matplotlib.pyplot as plt
def format_label(label):
string_label = label_info.int2str(label)
return string_label.split("-")[1]
label_info = ds_info.features["label"]
for i, (image, label) in enumerate(ds_train.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Data augmentation
We can use the preprocessing layers APIs for image augmentation.
###Code
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
img_augmentation = Sequential(
[
preprocessing.RandomRotation(factor=0.15),
preprocessing.RandomTranslation(height_factor=0.1, width_factor=0.1),
preprocessing.RandomFlip(),
preprocessing.RandomContrast(factor=0.1),
],
name="img_augmentation",
)
###Output
_____no_output_____
###Markdown
This `Sequential` model object can be used both as a part of the model we later build, and as a function to preprocess data before feeding it into the model. Using it as a function makes it easy to visualize the augmented images. Here we plot 9 examples of the augmentation results for a given image.
###Code
for image, label in ds_train.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
aug_img = img_augmentation(tf.expand_dims(image, axis=0))
plt.imshow(aug_img[0].numpy().astype("uint8"))
plt.title("{}".format(format_label(label)))
plt.axis("off")
###Output
_____no_output_____
###Markdown
Prepare inputs
Once we verify that the input data and augmentation are working correctly, we prepare the dataset for training. The input data are resized to a uniform `IMG_SIZE`. The labels are put into one-hot (a.k.a. categorical) encoding. The dataset is batched.

Note: `prefetch` and `AUTOTUNE` may improve performance in some situations, but this depends on the environment and the specific dataset used. See this [guide](https://www.tensorflow.org/guide/data_performance) for more information on data pipeline performance.
###Code
# One-hot / categorical encoding
def input_preprocess(image, label):
label = tf.one_hot(label, NUM_CLASSES)
return image, label
ds_train = ds_train.map(
input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds_train = ds_train.batch(batch_size=batch_size, drop_remainder=True)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(input_preprocess)
ds_test = ds_test.batch(batch_size=batch_size, drop_remainder=True)
###Output
_____no_output_____
###Markdown
Training a model from scratch
We build an EfficientNetB0 with 120 output classes, initialized from scratch.

Note: the accuracy will increase very slowly and the model may overfit.
###Code
from tensorflow.keras.applications import EfficientNetB0
with strategy.scope():
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
outputs = EfficientNetB0(include_top=True, weights=None, classes=NUM_CLASSES)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.summary()
epochs = 40 # @param {type: "slider", min:10, max:100}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
###Output
_____no_output_____
###Markdown
Training the model is relatively fast (only around 20 seconds per epoch on the TPUv2 that is available on Colab). This might make it sound easy to simply train EfficientNet from scratch on any dataset you want. However, training EfficientNet on smaller datasets, especially those with lower resolution like CIFAR-100, faces the significant challenge of overfitting.

Hence, training from scratch requires a very careful choice of hyperparameters, and it is difficult to find suitable regularization. It would also be much more demanding in resources. Plotting the training and validation accuracy makes it clear that validation accuracy stagnates at a low value.
###Code
import matplotlib.pyplot as plt
def plot_hist(hist):
plt.plot(hist.history["accuracy"])
plt.plot(hist.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left")
plt.show()
plot_hist(hist)
###Output
_____no_output_____
###Markdown
Transfer learning from pre-trained weights
Here we initialize the model with pre-trained ImageNet weights, and we fine-tune it on our own dataset.
###Code
from tensorflow.keras.layers.experimental import preprocessing
def build_model(num_classes):
inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
x = img_augmentation(inputs)
model = EfficientNetB0(include_top=False, input_tensor=x, weights="imagenet")
# Freeze the pretrained weights
model.trainable = False
# Rebuild top
x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
x = layers.BatchNormalization()(x)
top_dropout_rate = 0.2
x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax", name="pred")(x)
# Compile
model = tf.keras.Model(inputs, outputs, name="EfficientNet")
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
###Output
_____no_output_____
###Markdown
The first step in transfer learning is to freeze all layers and train only the top layers. For this step, a relatively large learning rate (1e-2) can be used. Note that validation accuracy and loss will usually be better than training accuracy and loss; this is because the regularization is strong, and it only suppresses training-time metrics.

Note that convergence may take up to 50 epochs depending on the choice of learning rate. If the image augmentation layers were not applied, the validation accuracy may only reach ~60%.
###Code
with strategy.scope():
model = build_model(num_classes=NUM_CLASSES)
epochs = 25 # @param {type: "slider", min:8, max:80}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
###Output
_____no_output_____
###Markdown
The second step is to unfreeze a number of layers and fit the model using a smaller learning rate. In this example we unfreeze the top 20 layers, but depending on the specific dataset it may be desirable to only unfreeze a fraction of all layers.

When the feature extraction with the pretrained model works well enough, this step gives a very limited gain in validation accuracy. In our case we only see a small improvement, as ImageNet pretraining already exposed the model to a good amount of dogs. On the other hand, when we use pretrained weights on a dataset that is more different from ImageNet, this fine-tuning step can be crucial, as the feature extractor also needs to be adjusted by a considerable amount. Such a situation can be demonstrated by choosing the CIFAR-100 dataset instead, where fine-tuning boosts validation accuracy by about 10% to pass 80% on `EfficientNetB0`. In such a case the convergence may take more than 50 epochs.

A side note on freezing/unfreezing models: setting `trainable` of a `Model` will simultaneously set all layers belonging to the `Model` to the same `trainable` attribute. Each layer is trainable only if both the layer itself and the model containing it are trainable. Hence, when we need to partially freeze/unfreeze a model, we need to make sure the `trainable` attribute of the model is set to `True`.
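One way to sanity-check the result of the unfreezing step below (a sketch; `model` refers to the fine-tuned model built in this section, after `unfreeze_model(model)` has been run):

```python
import tensorflow as tf

# Count trainable vs. frozen parameters to confirm the current freeze/unfreeze state.
trainable_params = sum(tf.keras.backend.count_params(w) for w in model.trainable_weights)
frozen_params = sum(tf.keras.backend.count_params(w) for w in model.non_trainable_weights)
print(f"Trainable parameters: {trainable_params:,}")
print(f"Frozen parameters:    {frozen_params:,}")
```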
###Code
def unfreeze_model(model):
# We unfreeze the top 20 layers while leaving BatchNorm layers frozen
for layer in model.layers[-20:]:
if not isinstance(layer, layers.BatchNormalization):
layer.trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
unfreeze_model(model)
epochs = 10 # @param {type: "slider", min:8, max:50}
hist = model.fit(ds_train, epochs=epochs, validation_data=ds_test, verbose=2)
plot_hist(hist)
###Output
_____no_output_____
notebooks/ch-labs/Lab02_QuantumMeasurement.ipynb | ###Markdown
Lab 2 Quantum Measurements

Prerequisite
- [Ch.1.4 Single Qubit Gates](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)
- [Ch.2.2 Multiple Qubits and Entangled States](https://qiskit.org/textbook/ch-gates/multiple-qubits-entangled-states.html)
- [Mitigating Noise on Real Quantum Computers](https://www.youtube.com/watch?v=yuDxHJOKsVA&list=PLOFEBzvs-Vvp2xg9-POLJhQwtVktlYGbY&index=8)

Other relevant materials
- [Feynman Lectures Ch. III - 12](https://www.feynmanlectures.caltech.edu/III_12.html)
- [Quantum Operation](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html)
- [Interactive Bloch Sphere](https://nonhermitian.org/kaleido/stubs/kaleidoscope.interactive.bloch_sphere.html#kaleidoscope.interactive.bloch_sphere)
- [Ch.5.2 Measurement Error Mitigation](https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html)
###Code
from qiskit import *
import numpy as np
from numpy import linalg as la
from qiskit.tools.monitor import job_monitor
import qiskit.tools.jupyter
###Output
_____no_output_____
###Markdown
Part 1: Measuring the state of a qubit

<div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;">
<p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white;">Goal</p>
<p style="padding: 0px 0px 10px 10px; font-size:16px;">Determine the Bloch components of a qubit.</p>
</div>

Fundamental to the operation of a quantum computer is the ability to compute the Bloch components of a qubit or qubits. These components correspond to the expectation values of the Pauli operators $X, Y, Z$, and are important quantities for applications such as quantum chemistry and optimization. Unfortunately, it is impossible to simultaneously compute these values, thus requiring many executions of the same circuit. In addition, measurements are restricted to the computational basis (Z-basis), so that each Pauli needs to be rotated to the standard basis to access the x and y components. Here we verify the methods by considering the case of a random vector on the Bloch sphere.

&#128211; 1. Express the expectation values of the Pauli operators for an arbitrary qubit state $|q\rangle$ in the computational basis.

The case for the expectation value of the Pauli $Z$ gate is given as an example. Using the diagonal representation, also known as the spectral form or orthonormal decomposition, of the Pauli $Z$ gate and the relations among the Pauli gates (see [here](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)), the expectation values of the $X, Y, Z$ gates can be written as

$$
\begin{aligned}
\langle Z \rangle &= \langle q | Z | q\rangle = \langle q|0\rangle\langle 0|q\rangle - \langle q|1\rangle\langle 1|q\rangle = |\langle 0 |q\rangle|^2 - |\langle 1 | q\rangle|^2 \\
\\
\langle X \rangle &= \\
\\
\langle Y \rangle &=
\end{aligned}
$$

respectively. Therefore, the expectation values of the Paulis for a qubit state $|q\rangle$ can be obtained by making a measurement in the standard basis after rotating the standard basis frame to lie along the corresponding axis. The probabilities of obtaining the two possible outcomes 0 and 1 are used to evaluate the desired expectation value, as the above equations show.

2. Measure the Bloch sphere coordinates of a qubit using the qasm simulator and plot the vector on the Bloch sphere.

&#128211; Step A. Create a qubit state using the circuit method `initialize` with two random complex numbers as the parameter.

To learn how to use the function `initialize`, check [here](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html) (go to the `arbitrary initialization` section).
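As a sketch of the standard basis-change recipe described above (one common convention, covered in the linked single-qubit-gates chapter): an X measurement is a Hadamard followed by a Z measurement, and a Y measurement applies $S^\dagger$ and then a Hadamard before the Z measurement.

```python
from qiskit import QuantumCircuit

# X-basis measurement: rotate the X axis onto Z with H, then measure.
x_meas_example = QuantumCircuit(1, 1)
x_meas_example.h(0)
x_meas_example.measure(0, 0)

# Y-basis measurement: rotate the Y axis onto Z with Sdg followed by H, then measure.
y_meas_example = QuantumCircuit(1, 1)
y_meas_example.sdg(0)
y_meas_example.h(0)
y_meas_example.measure(0, 0)
```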
###Code
qc = QuantumCircuit(1)
#### your code goes here
###Output
_____no_output_____
###Markdown
&#128211; Step B. Build the circuits to measure the expectation values of the $X, Y, Z$ gates based on your answers to question 1. Run the cell below to estimate the Bloch sphere coordinates of the qubit from Step A using the qasm simulator.

The circuit for the $Z$ gate measurement is given as an example.
###Code
# z measurement of qubit 0
measure_z = QuantumCircuit(1,1)
measure_z.measure(0,0)
# x measurement of qubit 0
measure_x = QuantumCircuit(1,1)
# your code goes here
# y measurement of qubit 0
measure_y = QuantumCircuit(1,1)
# your code goes here
shots = 2**14 # number of samples used for statistics
sim = Aer.get_backend('qasm_simulator')
bloch_vector_measure = []
for measure_circuit in [measure_x, measure_y, measure_z]:
    # run the circuit with the selected measurement and get the number of samples that output each bit value
counts = execute(qc+measure_circuit, sim, shots=shots).result().get_counts()
# calculate the probabilities for each bit value
probs = {}
for output in ['0','1']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
bloch_vector_measure.append( probs['0'] - probs['1'] )
# normalizing the bloch sphere vector
bloch_vector = bloch_vector_measure/la.norm(bloch_vector_measure)
print('The bloch sphere coordinates are [{0:4.3f}, {1:4.3f}, {2:4.3f}]'
.format(*bloch_vector))
###Output
_____no_output_____
###Markdown
Step C. Plot the vector on the Bloch sphere.

Note that the following cell for the interactive Bloch sphere will not run properly unless you work in [IQX](https://quantum-computing.ibm.com/login). You can either use `plot_bloch_vector` for the non-interactive version or install `kaleidoscope` by running

```
pip install kaleidoscope
```

in a terminal. You also need to restart your kernel after the installation. To learn more about how to use the interactive Bloch sphere, go [here](https://nonhermitian.org/kaleido/stubs/kaleidoscope.interactive.bloch_sphere.html#kaleidoscope.interactive.bloch_sphere).
###Code
from kaleidoscope.interactive import bloch_sphere
bloch_sphere(bloch_vector, vectors_annotation=True)
from qiskit.visualization import plot_bloch_vector
plot_bloch_vector( bloch_vector )
###Output
_____no_output_____
###Markdown
Part 2: Measuring Energy

<div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;">
<p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white;">Goal</p>
<p style="padding: 0px 0px 10px 10px; font-size:16px;">Evaluate the energy levels of the hydrogen ground state using the qasm simulator.</p>
</div>

The energy of a quantum system can be estimated by measuring the expectation value of its Hamiltonian, which is a Hermitian operator, through the procedure we mastered in Part 1.

The ground state of hydrogen is not defined as a single unique state but actually contains four different states due to the spins of the electron and proton. In Part 2 of this lab, we evaluate the energy difference among these four states, which comes from the `hyperfine splitting`, by computing the energy expectation value for the system of two spins with the Hamiltonian expressed in Pauli operators. For more information about the `hyperfine structure`, see [here](https://www.feynmanlectures.caltech.edu/III_12.html).

Consider the system with the two-qubit interaction Hamiltonian $H = A(XX+YY+ZZ)$ where $A = 1.47 \times 10^{-6} \, \mathrm{eV}$ and $X, Y, Z$ are Pauli gates. Then the energy expectation value of the system can be evaluated by combining the expectation value of each term in the Hamiltonian. In this case, $E = \langle H\rangle = A( \langle XX\rangle + \langle YY\rangle + \langle ZZ\rangle )$.

&#128211; 1. Express the expectation value of each term in the Hamiltonian for an arbitrary two-qubit state $|\psi \rangle$ in the computational basis.

The case for the term $\langle ZZ\rangle$ is given as an example.

$$
\begin{aligned}
\langle ZZ\rangle &= \langle \psi | ZZ | \psi\rangle = \langle \psi|(|0\rangle\langle 0| - |1\rangle\langle 1|)\otimes(|0\rangle\langle 0| - |1\rangle\langle 1|) |\psi\rangle = |\langle 00|\psi\rangle|^2 - |\langle 01 | \psi\rangle|^2 - |\langle 10 | \psi\rangle|^2 + |\langle 11|\psi\rangle|^2 \\
\\
\langle XX\rangle &= \\
\\
\langle YY\rangle &=
\end{aligned}
$$

2. Measure the expected energy of the system using the qasm simulator when the two qubits are entangled. Consider the Bell basis, the four different entangled states.

&#128211; Step A. Construct the circuits to prepare four different Bell states. Let's label each Bell state as

$$
\begin{aligned}
Tri1 &= \frac{1}{\sqrt2} (|00\rangle + |11\rangle)\\
Tri2 &= \frac{1}{\sqrt2} (|00\rangle - |11\rangle)\\
Tri3 &= \frac{1}{\sqrt2} (|01\rangle + |10\rangle)\\
Sing &= \frac{1}{\sqrt2} (|10\rangle - |01\rangle)
\end{aligned}
$$
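As a sketch of the standard construction (one possible way; see the entangled-states chapter linked above), the first of these states can be prepared with a Hadamard followed by a CNOT; additional X and Z gates map it to the other three.

```python
from qiskit import QuantumCircuit

# One possible preparation of Tri1 = (|00> + |11>)/sqrt(2):
# put qubit 0 into superposition, then entangle it with qubit 1.
tri1_example = QuantumCircuit(2)
tri1_example.h(0)
tri1_example.cx(0, 1)
```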
###Code
# circuit for the state Tri1
Tri1 = QuantumCircuit(2)
# your code goes here
# circuit for the state Tri2
Tri2 = QuantumCircuit(2)
# your code goes here
# circuit for the state Tri3
Tri3 = QuantumCircuit(2)
# your code goes here
# circuit for the state Sing
Sing = QuantumCircuit(2)
# your code goes here
###Output
_____no_output_____
###Markdown
&#128211; Step B. Create the circuits to measure the expectation value of each term in the Hamiltonian based on your answer to question 1.
###Code
# <ZZ>
measure_ZZ = QuantumCircuit(2)
measure_ZZ.measure_all()
# <XX>
measure_XX = QuantumCircuit(2)
# your code goes here
# <YY>
measure_YY = QuantumCircuit(2)
# your code goes here
###Output
_____no_output_____
###Markdown
Step C. Execute the circuits on the qasm simulator by running the cell below and evaluate the energy expectation value for each state.
###Code
shots = 2**14 # number of samples used for statistics
A = 1.47e-6 #unit of A is eV
E_sim = []
for state_init in [Tri1,Tri2,Tri3,Sing]:
Energy_meas = []
for measure_circuit in [measure_XX, measure_YY, measure_ZZ]:
        # run the circuit with the selected measurement and get the number of samples that output each bit value
qc = state_init+measure_circuit
counts = execute(qc, sim, shots=shots).result().get_counts()
# calculate the probabilities for each computational basis
probs = {}
for output in ['00','01', '10', '11']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
Energy_meas.append( probs['00'] - probs['01'] - probs['10'] + probs['11'] )
E_sim.append(A * np.sum(np.array(Energy_meas)))
# Run this cell to print out your results
print('Energy expectation value of the state Tri1 : {:.3e} eV'.format(E_sim[0]))
print('Energy expectation value of the state Tri2 : {:.3e} eV'.format(E_sim[1]))
print('Energy expectation value of the state Tri3 : {:.3e} eV'.format(E_sim[2]))
print('Energy expectation value of the state Sing : {:.3e} eV'.format(E_sim[3]))
###Output
_____no_output_____
###Markdown
Step D. Understanding the result.

If you evaluated the energy expectation values successfully, you should have obtained exactly the same value, $A$ ($= 1.47 \times 10^{-6} \, \mathrm{eV}$), for the triplet states, $|Tri1\rangle, |Tri2\rangle, |Tri3\rangle$, and one lower energy level, $-3A$ ($= -4.41 \times 10^{-6} \, \mathrm{eV}$), for the singlet state $|Sing\rangle$.

What we have done here is measure the energies of the four different spin states corresponding to the ground state of hydrogen and observe the `hyperfine structure` in the energy levels caused by spin-spin coupling. This tiny energy difference between the singlet and triplet states is the reason for the famous 21-cm wavelength radiation used to map the structure of the galaxy.

In the cell below, we verify the wavelength of the emission from the transition between the triplet states and the singlet state.
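As a quick back-of-the-envelope check of the numbers involved, using $\Delta E = E_{Tri} - E_{Sing} = A - (-3A) = 4A$ and the constants used in the cell below:

$$
\lambda = \frac{c}{f} = \frac{hc}{\Delta E} \approx \frac{(4.136\times10^{-15}\ \mathrm{eV\,s})\,(3\times10^{10}\ \mathrm{cm/s})}{5.88\times10^{-6}\ \mathrm{eV}} \approx 21\ \mathrm{cm}
$$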
###Code
# Planck constant h in eV*s (the variable is named hbar but holds h) and the speed of light in cm/s (cgs units)
hbar, c = 4.1357e-15, 3e10
# energy difference between the triplets and singlet
E_del = abs(E_sim[0] - E_sim[3])
# frequency associated with the energy difference
f = E_del/hbar
# convert frequency to wavelength in (cm)
wavelength = c/f
print('The wavelength of the radiation from the transition\
in the hyperfine structure is : {:.1f} cm'.format(wavelength))
###Output
_____no_output_____
###Markdown
Part 3: Execute the circuits on a quantum computer

<div style="background: #E8E7EB; border-radius: 5px; -moz-border-radius: 5px;">
<p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white;">Goal</p>
<p style="padding: 0px 0px 10px 10px; font-size:16px;">Re-run the circuits on an IBM quantum system. Perform measurement error mitigation on the results to improve the accuracy of the energy estimation.</p>
</div>

Step A. Run the following cells to load your account and select the backend.
###Code
provider = IBMQ.load_account()
backend = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step B. Execute the circuits on the quantum system.

In Lab 1, when we executed multiple circuits on a real quantum system, we submitted each circuit as a separate job, which produced multiple job ids. This time, we put all the circuits in a list and execute that list as one job. In this way, all the circuit executions can happen at once, which may decrease your wait time in the queue.

In addition, `transpile` is not used here, as all the circuits that we run consist of one- or two-qubit gates. We can still specify the `initial_layout` and `optimization_level` through the `execute` function. Without using `transpile`, the transpiled circuits are not accessible, which is not a concern in this case.

&#128211; Check the backend configuration information and error map through the widget to determine your `initial_layout`.
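For example, a hypothetical choice of layout (yours should be guided by the error map of the backend you selected):

```python
# Map the two circuit qubits onto physical qubits 0 and 1 of the device
# (a hypothetical choice; inspect the error map before deciding).
initial_layout = [0, 1]
```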
###Code
# run this cell to get the backend information through the widget
backend
# assign your choice for the initial layout to the list variable `initial_layout`.
initial_layout =
###Output
_____no_output_____
###Markdown
Run the following cell to execute the circuits with the initial_layout on the backend.
###Code
qc_all = [state_init+measure_circuit for state_init in [Tri1,Tri2,Tri3,Sing]
for measure_circuit in [measure_XX, measure_YY, measure_ZZ] ]
shots = 8192
job = execute(qc_all, backend, initial_layout=initial_layout, optimization_level=3, shots=shots)
print(job.job_id())
job_monitor(job)
# getting the results of your job
results = job.result()
## To access the results of the completed job
#results = backend.retrieve_job('job_id').result()
###Output
_____no_output_____
###Markdown
Step C. Estimate the ground state energy levels from the results of the previous step by executing the cells below.
###Code
def Energy(results, shots):
"""Compute the energy levels of the hydrogen ground state.
Parameters:
        results (obj): results from executing the circuits for measuring the Hamiltonian.
        shots (int): number of shots used for the circuit execution.
Returns:
Energy (list): energy values of the four different hydrogen ground states
"""
E = []
A = 1.47e-6
for ind_state in range(4):
Energy_meas = []
for ind_comp in range(3):
counts = results.get_counts(ind_state*3+ind_comp)
# calculate the probabilities for each computational basis
probs = {}
for output in ['00','01', '10', '11']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
Energy_meas.append( probs['00'] - probs['01'] - probs['10'] + probs['11'] )
E.append(A * np.sum(np.array(Energy_meas)))
return E
E = Energy(results, shots)
print('Energy expectation value of the state Tri1 : {:.3e} eV'.format(E[0]))
print('Energy expectation value of the state Tri2 : {:.3e} eV'.format(E[1]))
print('Energy expectation value of the state Tri3 : {:.3e} eV'.format(E[2]))
print('Energy expectation value of the state Sing : {:.3e} eV'.format(E[3]))
###Output
_____no_output_____
###Markdown
Step D. Measurement error mitigation.

The results you obtained from running the circuits on the quantum system are not exact due to noise from various sources such as energy relaxation, dephasing, crosstalk between qubits, etc. In this step, we will alleviate the effects of the noise through measurement error mitigation. Before we start, watch this [video](https://www.youtube.com/watch?v=yuDxHJOKsVA&list=PLOFEBzvs-Vvp2xg9-POLJhQwtVktlYGbY&index=8).
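As a rough sketch of the ingredients involved (using the `qiskit.ignis` calibration utilities imported below; the register size, variable names, and `circlabel` are illustrative choices, and the lab itself asks you to assign the filter to `meas_filter`):

```python
from qiskit import QuantumRegister
from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter

# Calibration circuits that prepare and measure every 2-qubit basis state.
qr = QuantumRegister(2)
cal_circuits, cal_labels = complete_meas_cal(qr=qr, circlabel='mcal')

# After executing cal_circuits on the backend (giving `cal_results`),
# a fitter builds the calibration matrix and exposes a filter:
# fitter = CompleteMeasFitter(cal_results, cal_labels, circlabel='mcal')
# mitigated_results = fitter.filter.apply(noisy_results)
```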
###Code
from qiskit.ignis.mitigation.measurement import *
###Output
_____no_output_____
###Markdown
&#128211; Construct the circuits to profile the measurement errors of all basis states using the function `complete_meas_cal`. Obtain the measurement filter object, `meas_filter`, which will be applied to the noisy results to mitigate the readout (measurement) error. For further helpful information to complete this task, check [here](https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html).
###Code
# your code to create the circuits, meas_calibs, goes here
meas_calibs, state_labels =
# execute meas_calibs on your choice of the backend
job = execute(meas_calibs, backend, shots = shots)
print(job.job_id())
job_monitor(job)
cal_results = job.result()
## To access the results of the completed job
#cal_results = backend.retrieve_job('job_id').result()
# your code to obtain the measurement filter object, 'meas_filter', goes here
results_new = meas_filter.apply(results)
E_new = Energy(results_new, shots)
print('Energy expectation value of the state Tri1 : {:.3e} eV'.format(E_new[0]))
print('Energy expectation value of the state Tri2 : {:.3e} eV'.format(E_new[1]))
print('Energy expectation value of the state Tri3 : {:.3e} eV'.format(E_new[2]))
print('Energy expectation value of the state Sing : {:.3e} eV'.format(E_new[3]))
###Output
_____no_output_____
###Markdown
Step E. Interpret the result.

&#128211; Compute the relative errors (or fractional errors) of the energy values for all four states, with and without measurement error mitigation.
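Here, relative error has its usual meaning (stated as a reminder), comparing each measured value against the exact value from the simulation:

$$
\mathrm{Err}_{rel} = \frac{|E_{\mathrm{exp}} - E_{\mathrm{exact}}|}{|E_{\mathrm{exact}}|}
$$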
###Code
# results for the energy estimation from the simulation,
# execution on a quantum system without error mitigation and
# with error mitigation in numpy array format
Energy_exact, Energy_exp_orig, Energy_exp_new = np.array(E_sim), np.array(E), np.array(E_new)
# Calculate the relative errors of the energy values without error mitigation
# and assign to the numpy array variable `Err_rel_orig` of size 4
Err_rel_orig =
# Calculate the relative errors of the energy values with error mitigation
# and assign to the numpy array variable `Err_rel_new` of size 4
Err_rel_new =
np.set_printoptions(precision=3)
print('The relative errors of the energy values for the four Bell basis states\
 without measurement error mitigation : {}'.format(Err_rel_orig))
np.set_printoptions(precision=3)
print('The relative errors of the energy values for the four Bell basis states\
 with measurement error mitigation : {}'.format(Err_rel_new))
###Output
_____no_output_____
###Markdown
Lab 2 Quantum Measurements Prerequisite- [Ch.1.4 Single Qubit Gates](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)- [Ch.2.2 Multiple Qubits and Entangled States](https://qiskit.org/textbook/ch-gates/multiple-qubits-entangled-states.html)- [Mitigating Noise on Real Quantum Computers](https://www.youtube.com/watch?v=yuDxHJOKsVA&list=PLOFEBzvs-Vvp2xg9-POLJhQwtVktlYGbY&index=8)Other relevant materials- [Feynman Lectures Ch. III - 12](https://www.feynmanlectures.caltech.edu/III_12.html)- [Quantum Operation](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html)- [Interactive Bloch Sphere](https://nonhermitian.org/kaleido/stubs/kaleidoscope.interactive.bloch_sphere.htmlkaleidoscope.interactive.bloch_sphere)- [Ch.5.2 Measurement Error Mitigation](https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html)
###Code
from qiskit import *
import numpy as np
from numpy import linalg as la
from qiskit.tools.monitor import job_monitor
import qiskit.tools.jupyter
###Output
_____no_output_____
###Markdown
Part 1: Measuring the state of a qubit<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Determine the Bloch components of a qubit.Fundamental to the operation of a quantum computer is the ability to compute the Bloch components of a qubit or qubits. These components correspond to the expectation values of the Pauli operators $X, Y, Z$, and are important quantities for applications such as quantum chemistry and optimization. Unfortunately, it is impossible to simultaneously compute these values, thus requiring many executions of the same circuit. In addition, measurements are restricted to the computational basis (Z-basis) so that each Pauli needs to be rotated to the standard basis to access the x and y components. Here we verify the methods by considering the case of a random vector on the Bloch sphere. &128211; 1. Express the expectation values of the Pauli operators for an arbitrary qubit state $|q\rangle$ in the computational basis. The case for the expection value of Pauli Z gate is given as an example. Using the diagonal representation, also known as spectral form or orthonormal decomposition, of Pauli $Z$ gate and the relations among the Pauli gates (see [here](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)), expectation values of $ X, Y, Z $ gates can be written as $$\begin{align}\langle Z \rangle &=\langle q | Z | q\rangle =\langle q|0\rangle\langle 0|q\rangle - \langle q|1\rangle\langle 1|q\rangle=|\langle 0 |q\rangle|^2 - |\langle 1 | q\rangle|^2\\\\\langle X \rangle &= \\\\\langle Y \rangle &=\end{align}\\$$, respectively.Therefore, the expectation values of the Paulis for a qubit state $|q\rangle$ can be obtained by making a measurement in the standard basis after rotating the standard basis frame to lie along the corresponding axis. The probabilities of obtaining the two possible outcomes 0 and 1 are used to evaluate the desired expectation value as the above equations show. 2. Measure the Bloch sphere coordinates of a qubit using the qasm simulator and plot the vector on the bloch sphere. &128211;Step A. Create a qubit state using the circuit method, initialize with two random complex numbers as the parameter.To learn how to use the function `initialize`, check [here](https://qiskit.org/documentation/tutorials/circuits/3_summary_of_quantum_operations.html). ( go to the `arbitrary initialization` section. )
###Code
qc = QuantumCircuit(1)
#### your code goes here
###Output
_____no_output_____
###Markdown
&#128211; Step B. Build the circuits to measure the expectation values of the $X, Y, Z$ gates based on your answers to question 1. Run the cell below to estimate the Bloch sphere coordinates of the qubit from Step A using the qasm simulator. The circuit for the $Z$ gate measurement is given as an example.
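For reference, once a measurement circuit has been run, the expectation value is estimated from the outcome probabilities, e.g. $\langle Z \rangle \approx (N_0 - N_1)/\mathrm{shots}$, where $N_0, N_1$ are the observed counts. A standard basis-change recipe for the two missing circuits (a sketch, not the only valid construction) is shown below.

```python
# sketch: rotate the X (or Y) eigenbasis onto the Z basis before measuring
# measure_x.h(0); measure_x.measure(0, 0)
# measure_y.sdg(0); measure_y.h(0); measure_y.measure(0, 0)
```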
###Code
# z measurement of qubit 0
measure_z = QuantumCircuit(1,1)
measure_z.measure(0,0)
# x measurement of qubit 0
measure_x = QuantumCircuit(1,1)
# your code goes here
# y measurement of qubit 0
measure_y = QuantumCircuit(1,1)
# your code goes here
shots = 2**14 # number of samples used for statistics
sim = Aer.get_backend('qasm_simulator')
bloch_vector_measure = []
for measure_circuit in [measure_x, measure_y, measure_z]:
# run the circuit with the selected measurement and get the number of samples that output each bit value
counts = execute(qc+measure_circuit, sim, shots=shots).result().get_counts()
# calculate the probabilities for each bit value
probs = {}
for output in ['0','1']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
bloch_vector_measure.append( probs['0'] - probs['1'] )
# normalizing the bloch sphere vector
bloch_vector = bloch_vector_measure/la.norm(bloch_vector_measure)
print('The bloch sphere coordinates are [{0:4.3f}, {1:4.3f}, {2:4.3f}]'
.format(*bloch_vector))
###Output
_____no_output_____
###Markdown
Step C. Plot the vector on the Bloch sphere. Note that the following cell for the interactive bloch_sphere will not run properly unless you work in [IQX](https://quantum-computing.ibm.com/login). You can either use `plot_bloch_vector` for the non-interactive version or install `kaleidoscope` by running ```pip install kaleidoscope``` in a terminal. You also need to restart your kernel after the installation. To learn more about how to use the interactive Bloch sphere, go [here](https://nonhermitian.org/kaleido/stubs/kaleidoscope.interactive.bloch_sphere.htmlkaleidoscope.interactive.bloch_sphere).
###Code
from kaleidoscope.interactive import bloch_sphere
bloch_sphere(bloch_vector, vectors_annotation=True)
from qiskit.visualization import plot_bloch_vector
plot_bloch_vector( bloch_vector )
###Output
_____no_output_____
###Markdown
Part 2: Measuring Energy

<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Evaluate the energy levels of the hydrogen ground state using the qasm simulator.

The energy of a quantum system can be estimated by measuring the expectation value of its Hamiltonian, which is a Hermitian operator, through the procedure we mastered in part 1. The ground state of hydrogen is not defined as a single unique state but actually contains four different states due to the spins of the electron and proton. In part 2 of this lab, we evaluate the energy difference among these four states, which comes from the `hyperfine splitting`, by computing the energy expectation value for the system of two spins with the Hamiltonian expressed in Pauli operators. For more information about `hyperfine structure`, see [here](https://www.feynmanlectures.caltech.edu/III_12.html).

Consider the system with the two-qubit interaction Hamiltonian $H = A(XX+YY+ZZ)$ where $A = 1.47e^{-6} eV$ and $X, Y, Z$ are Pauli gates. Then the energy expectation value of the system can be evaluated by combining the expectation value of each term in the Hamiltonian. In this case, $E = \langle H\rangle = A( \langle XX\rangle + \langle YY\rangle + \langle ZZ\rangle )$.

&#128211; 1. Express the expectation value of each term in the Hamiltonian for an arbitrary two-qubit state $|\psi \rangle$ in the computational basis. The case for the term $\langle ZZ\rangle$ is given as an example.

$$\begin{align}\langle ZZ\rangle &=\langle \psi | ZZ | \psi\rangle =\langle \psi|(|0\rangle\langle 0| - |1\rangle\langle 1|)\otimes(|0\rangle\langle 0| - |1\rangle\langle 1|) |\psi\rangle=|\langle 00|\psi\rangle|^2 - |\langle 01 | \psi\rangle|^2 - |\langle 10 | \psi\rangle|^2 + |\langle 11|\psi\rangle|^2\\\\\langle XX\rangle &= \\\\\langle YY\rangle &=\end{align}$$

2. Measure the expected energy of the system using the qasm simulator when the two qubits are entangled. Consider the Bell basis, i.e., the four different entangled states.

&#128211; Step A. Construct the circuits to prepare the four different Bell states. Let's label each Bell state as

$$\begin{align}Tri1 &= \frac{1}{\sqrt2} (|00\rangle + |11\rangle)\\Tri2 &= \frac{1}{\sqrt2} (|00\rangle - |11\rangle)\\Tri3 &= \frac{1}{\sqrt2} (|01\rangle + |10\rangle)\\Sing &= \frac{1}{\sqrt2} (|10\rangle - |01\rangle)\end{align}$$
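As a quick sanity check of the $\langle ZZ\rangle$ expression above (the $\langle XX\rangle$ and $\langle YY\rangle$ terms are left as the exercise): for $|Tri1\rangle$ we have $|\langle 00|Tri1\rangle|^2 = |\langle 11|Tri1\rangle|^2 = \tfrac{1}{2}$ and $|\langle 01|Tri1\rangle|^2 = |\langle 10|Tri1\rangle|^2 = 0$, so $\langle ZZ\rangle = \tfrac{1}{2} - 0 - 0 + \tfrac{1}{2} = 1$.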
###Code
# circuit for the state Tri1
Tri1 = QuantumCircuit(2)
# your code goes here
# circuit for the state Tri2
Tri2 = QuantumCircuit(2)
# your code goes here
# circuit for the state Tri3
Tri3 = QuantumCircuit(2)
# your code goes here
# circuit for the state Sing
Sing = QuantumCircuit(2)
# your code goes here
###Output
_____no_output_____
###Markdown
&#128211; Step B. Create the circuits to measure the expectation value of each term in the Hamiltonian based on your answer to question 1.
###Code
# <ZZ>
measure_ZZ = QuantumCircuit(2)
measure_ZZ.measure_all()
# <XX>
measure_XX = QuantumCircuit(2)
# your code goes here
# <YY>
measure_YY = QuantumCircuit(2)
# your code goes here
###Output
_____no_output_____
###Markdown
Step C. Execute the circuits on the qasm simulator by running the cell below and evaluate the energy expectation value for each state.
###Code
shots = 2**14 # number of samples used for statistics
A = 1.47e-6 #unit of A is eV
E_sim = []
for state_init in [Tri1,Tri2,Tri3,Sing]:
Energy_meas = []
for measure_circuit in [measure_XX, measure_YY, measure_ZZ]:
# run the circuit with the selected measurement and get the number of samples that output each bit value
qc = state_init+measure_circuit
counts = execute(qc, sim, shots=shots).result().get_counts()
# calculate the probabilities for each computational basis
probs = {}
for output in ['00','01', '10', '11']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
Energy_meas.append( probs['00'] - probs['01'] - probs['10'] + probs['11'] )
E_sim.append(A * np.sum(np.array(Energy_meas)))
# Run this cell to print out your results
print('Energy expectation value of the state Tri1 : {:.3e} eV'.format(E_sim[0]))
print('Energy expectation value of the state Tri2 : {:.3e} eV'.format(E_sim[1]))
print('Energy expectation value of the state Tri3 : {:.3e} eV'.format(E_sim[2]))
print('Energy expectation value of the state Sing : {:.3e} eV'.format(E_sim[3]))
###Output
_____no_output_____
###Markdown
Step D. Understanding the result. If you found the energy expectation values successfully, you would have obtained exactly the same value, $A (= 1.47e^{-6} eV)$, for the triplet states, $|Tri1\rangle, |Tri2\rangle, |Tri3\rangle$, and one lower energy level, $-3A (= -4.41e^{-6} eV)$, for the singlet state $|Sing\rangle$. What we have done here is measure the energies of the four different spin states corresponding to the ground state of hydrogen and observe the `hyperfine structure` in the energy levels caused by spin-spin coupling. This tiny energy difference between the singlet and triplet states is the reason for the famous 21-cm wavelength radiation used to map the structure of the galaxy. In the cell below, we verify the wavelength of the emission from the transition between the triplet states and the singlet state.
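As a rough back-of-the-envelope check of the cell below (using $h \approx 4.14\times10^{-15}\ \mathrm{eV\cdot s}$ and $c \approx 3\times10^{10}\ \mathrm{cm/s}$): the gap between the triplet and singlet levels is $\Delta E = A - (-3A) = 4A \approx 5.9\times10^{-6}\ \mathrm{eV}$, so $f = \Delta E/h \approx 1.4\times10^{9}\ \mathrm{Hz}$ and $\lambda = c/f \approx 21\ \mathrm{cm}$.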
###Code
# Planck constant in eV*s and the speed of light in cm/s (cgs units)
h, c = 4.1357e-15, 3e10
# energy difference between the triplets and singlet
E_del = abs(E_sim[0] - E_sim[3])
# frequency associated with the energy difference
f = E_del/h
# convert frequency to wavelength in (cm)
wavelength = c/f
print('The wavelength of the radiation from the transition\
in the hyperfine structure is : {:.1f} cm'.format(wavelength))
###Output
_____no_output_____
###Markdown
Part 3: Execute the circuits on a Quantum Computer

<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;"> Re-run the circuits on an IBM quantum system. Perform measurement error mitigation on the results to improve the accuracy of the energy estimation.

Step A. Run the following cells to load your account and select the backend.
###Code
provider = IBMQ.load_account()
backend = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step B. Execute the circuits on the quantum system. In Lab 1, when we executed multiple circuits on a real quantum system, we submitted each circuit as a separate job, which produced multiple job ids. This time, we put all the circuits in a list and execute that list as one job. In this way, all the circuit executions happen at once, which should decrease your wait time in the queue.

In addition, `transpile` is not used here, as all the circuits that we run consist of one- or two-qubit gates. We can still specify the initial_layout and optimization_level through the `execute` function. Without using `transpile`, the transpiled circuits are not accessible, which is not a concern in this case.

&#128211; Check the backend configuration information and error map through the widget to determine your initial_layout.
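For reference, the layout is simply a list of physical qubit indices, one per circuit qubit. A purely illustrative choice (an assumption; base your own choice on the error map) could look like the following.

```python
# hypothetical example only: pick a connected pair of qubits with low error rates
initial_layout = [0, 1]
```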
###Code
# run this cell to get the backend information through the widget
backend
# assign your choice for the initial layout to the list variable `initial_layout`.
initial_layout =
###Output
_____no_output_____
###Markdown
Run the following cell to execute the circuits with the initial_layout on the backend.
###Code
qc_all = [state_init+measure_circuit for state_init in [Tri1,Tri2,Tri3,Sing]
for measure_circuit in [measure_XX, measure_YY, measure_ZZ] ]
shots = 8192
job = execute(qc_all, backend, initial_layout=initial_layout, optimization_level=3, shots=shots)
print(job.job_id())
job_monitor(job)
# getting the results of your job
results = job.result()
## To access the results of the completed job
#results = backend.retrieve_job('job_id').result()
###Output
_____no_output_____
###Markdown
Step C. Estimate the ground state energy levels from the results of the previous step by executing the cells below.
###Code
def Energy(results, shots):
"""Compute the energy levels of the hydrogen ground state.
Parameters:
results (obj): results, results from executing the circuits for measuring a hamiltonian.
shots (int): shots, number of shots used for the circuit execution.
Returns:
Energy (list): energy values of the four different hydrogen ground states
"""
E = []
A = 1.47e-6
for ind_state in range(4):
Energy_meas = []
for ind_comp in range(3):
counts = results.get_counts(ind_state*3+ind_comp)
# calculate the probabilities for each computational basis
probs = {}
for output in ['00','01', '10', '11']:
if output in counts:
probs[output] = counts[output]/shots
else:
probs[output] = 0
Energy_meas.append( probs['00'] - probs['01'] - probs['10'] + probs['11'] )
E.append(A * np.sum(np.array(Energy_meas)))
return E
E = Energy(results, shots)
print('Energy expectation value of the state Tri1 : {:.3e} eV'.format(E[0]))
print('Energy expectation value of the state Tri2 : {:.3e} eV'.format(E[1]))
print('Energy expectation value of the state Tri3 : {:.3e} eV'.format(E[2]))
print('Energy expectation value of the state Sing : {:.3e} eV'.format(E[3]))
###Output
_____no_output_____
###Markdown
Step D. Measurement error mitigation. The results you obtained from running the circuits on the quantum system are not exact due to noise from various sources such as energy relaxation, dephasing, crosstalk between qubits, etc. In this step, we will alleviate the effects of the noise through measurement error mitigation. Before we start, watch this [video](https://www.youtube.com/watch?v=yuDxHJOKsVA&list=PLOFEBzvs-Vvp2xg9-POLJhQwtVktlYGbY&index=8).
###Code
from qiskit.ignis.mitigation.measurement import *
###Output
_____no_output_____
###Markdown
&#128211; Construct the circuits to profile the measurement errors of all basis states using the function 'complete_meas_cal'. Obtain the measurement filter object, 'meas_filter', which will be applied to the noisy results to mitigate the readout (measurement) error. For further help completing this task, check [here](https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html).
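For reference, a typical qiskit-ignis pattern looks like the sketch below (following the linked textbook chapter; the `circlabel` string is an arbitrary choice, and `cal_results` refers to the calibration job results obtained in the next cell).

```python
# sketch: calibration circuits for the two measured qubits
meas_calibs, state_labels = complete_meas_cal(qubit_list=[0, 1], circlabel='mcal')
# ... run meas_calibs on the backend to get cal_results ...
meas_fitter = CompleteMeasFitter(cal_results, state_labels, circlabel='mcal')
meas_filter = meas_fitter.filter
```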
###Code
# your code to create the circuits, meas_calibs, goes here
meas_calibs, state_labels =
# execute meas_calibs on your choice of the backend
job = execute(meas_calibs, backend, shots = shots)
print(job.job_id())
job_monitor(job)
cal_results = job.result()
## To access the results of the completed job
#cal_results = backend.retrieve_job('job_id').result()
# your code to obtain the measurement filter object, 'meas_filter', goes here
results_new = meas_filter.apply(results)
E_new = Energy(results_new, shots)
print('Energy expectation value of the state Tri1 : {:.3e} eV'.format(E_new[0]))
print('Energy expectation value of the state Tri2 : {:.3e} eV'.format(E_new[1]))
print('Energy expectation value of the state Tri3 : {:.3e} eV'.format(E_new[2]))
print('Energy expectation value of the state Sing : {:.3e} eV'.format(E_new[3]))
###Output
_____no_output_____
###Markdown
Step E. Interpret the result. &#128211; Compute the relative errors (or fractional errors) of the energy values for all four states with and without measurement error mitigation.
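One common definition (a reminder, using the simulator values as the reference): $\mathrm{Err}_{rel} = \left| E_{exp} - E_{exact} \right| / \left| E_{exact} \right|$, evaluated element-wise for the four states.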
###Code
# results for the energy estimation from the simulation,
# execution on a quantum system without error mitigation and
# with error mitigation in numpy array format
Energy_exact, Energy_exp_orig, Energy_exp_new = np.array(E_sim), np.array(E), np.array(E_new)
# Calculate the relative errors of the energy values without error mitigation
# and assign to the numpy array variable `Err_rel_orig` of size 4
Err_rel_orig =
# Calculate the relative errors of the energy values with error mitigation
# and assign to the numpy array variable `Err_rel_new` of size 4
Err_rel_new =
np.set_printoptions(precision=3)
print('The relative errors of the energy values for four bell basis\
without measurement error mitigation : {}'.format(Err_rel_orig))
np.set_printoptions(precision=3)
print('The relative errors of the energy values for four bell basis\
with measurement error mitigation : {}'.format(Err_rel_new))
###Output
_____no_output_____ |
Anime-Data-Analysis.ipynb | ###Markdown
[Chosen Dataset: myanimelist-dataset-animes-profiles-reviews](https://www.kaggle.com/marlesson/myanimelist-dataset-animes-profiles-reviews)
- project data_animes.csv
- project_data_profiles.csv
- project_data_reviews.csv
###Code
!pip install fuzzywuzzy
# !pip install plotly
# !pip install wordcloud
# Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
import json
from scipy.sparse import csr_matrix
from fuzzywuzzy import process
%matplotlib inline
df_profile = pd.read_csv('data/project_data_profiles.csv', sep=',')
df_profile.head()
df_anime = pd.read_csv('data/project data_animes.csv', sep=',')
# rename column "uid" to "anime_uid"
df_anime = df_anime.rename(columns={'uid': 'anime_uid'})
df_anime.head()
df_review= pd.read_csv('data/project_data_reviews.csv', sep=',')
df_review.head()
###Output
_____no_output_____
###Markdown
--- Join/merge datasets ---
###Code
anime_review_data=pd.merge(df_anime,df_review,on='anime_uid',suffixes= ['', '_review'])
anime_full_data = pd.merge(anime_review_data, df_profile, on='profile', suffixes=['','_profile'])
anime_full_data.head()
###Output
_____no_output_____
###Markdown
--- Creating a dataframe that shows top 10 anime based on score counts---
###Code
def TopTenBasedOnScore():
combine_anime_rating = anime_full_data.dropna(axis = 0, subset = ['title'])
anime_ratingCount = (combine_anime_rating.
groupby(by=['title'])['score_review'].count().
reset_index().rename(columns = {'score':'totalScoreCount'})
[['title','score_review']])
#Plotting the bar plot for top 10 anime as per rating
top10_animerating=anime_ratingCount[['title', 'score_review']].sort_values(by = 'score_review',ascending = False).head(10)
ax=sns.barplot(x="title", y="score_review", data=top10_animerating, palette="Dark2")
ax.set_xticklabels(ax.get_xticklabels(), fontsize=11, rotation=40, ha="right")
ax.set_title('Top 10 Anime based on score/rating counts',fontsize = 22)
ax.set_xlabel('Anime',fontsize = 20)
ax.set_ylabel('User Rating count', fontsize = 20)
TopTenBasedOnScore()
###Output
_____no_output_____
###Markdown
--- Creating a dataframe that shows top 10 anime based on Community size---
###Code
def TopTenBasedOnCommunitySize():
duplicate_anime=anime_full_data.copy()
duplicate_anime.drop_duplicates(subset ="title",
keep = 'first', inplace = True)
#Plotting bar plot
top10_animemembers=duplicate_anime[['title', 'members']].sort_values(by = 'members',ascending = False).head(10)
ax=sns.barplot(x="title", y="members", data=top10_animemembers, palette="gnuplot2")
ax.set_xticklabels(ax.get_xticklabels(), fontsize=11, rotation=40, ha="right")
ax.set_title('Top 10 Anime based on members',fontsize = 22)
ax.set_xlabel('Anime',fontsize = 20)
ax.set_ylabel('Community Size', fontsize = 20)
TopTenBasedOnCommunitySize()
def RatingGraph():
#Distribution of ratings
plt.figure(figsize = (15, 7))
plt.subplot(1,2,1)
anime_full_data['score'].hist(bins=70)
plt.title("Rating of websites")
plt.subplot(1,2,2)
anime_full_data['score_review'].hist(bins=70)
plt.title("Rating of users")
RatingGraph()
# anime_full_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Genre Word Cloud
###Code
nonull_anime=anime_full_data.copy()
nonull_anime.dropna(inplace=True)
from collections import defaultdict
all_genres = defaultdict(int)
for genres in nonull_anime['genre']:
for genre in genres.split(','):
all_genres[genre.strip()] += 1
from wordcloud import WordCloud
genres_cloud = WordCloud(width=800, height=400, background_color='white', colormap='gnuplot').generate_from_frequencies(all_genres)
plt.imshow(genres_cloud, interpolation='bilinear')
plt.axis('off')
#Replacing -1 with NaN in user_rating column
anime_feature=anime_full_data.copy()
anime_feature["score_review"].replace({-1: np.nan}, inplace=True)
anime_feature.head()
#dropping all the null values as it aids nothing
anime_feature = anime_feature.dropna(axis = 0, how ='any')
anime_feature.isnull().sum()
counts = anime_feature['uid'].value_counts()
counts
###Output
_____no_output_____
###Markdown
There are users who have rated only once; even if they rated it 5, that can't be considered a valuable record for recommendation, so let's consider a minimum of 10 ratings by a user as the threshold value.
###Code
anime_feature = anime_feature[anime_feature['uid'].isin(counts[counts >= 10].index)]
anime_feature['uid'].value_counts()
###Output
_____no_output_____
###Markdown
Pivot table

A pivot table helps create the sparse matrix, which we can then use for cosine similarity.
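As a tiny illustration of what the pivot produces (hypothetical data, just to show the shape):

```python
toy = pd.DataFrame({'title': ['A', 'A', 'B'],
                    'uid': [10, 20, 10],
                    'score_review': [8, 9, 7]})
# rows = titles, columns = user ids, 0 where a user never reviewed that title
toy.pivot_table(index='title', columns='uid', values='score_review').fillna(0)
```

Each row is then one anime's user-rating vector, which is what the cosine-similarity search below compares.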
###Code
anime_pivot = anime_full_data.pivot_table(index = 'title', columns = 'uid', values='score_review').fillna(0)
anime_pivot.head()
###Output
_____no_output_____
###Markdown
--- Another approach to get the anime_pivot if ``pivot_table`` uses too much memory
###Code
# testing = anime_full_data.groupby(['title','uid'])['score_review'].max().unstack().fillna(0)
# testing.head()
# print(anime_pivot.shape)
# print(testing.shape)
###Output
_____no_output_____
###Markdown
--- Recommendation based on Collaborative Filtering
###Code
# creating a sparse matrix
anime_matrix = csr_matrix(anime_pivot.values)
# fitting the model. Cosine Similarity using KNN.
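# note: with metric='cosine', NearestNeighbors returns distance = 1 - cosine similarity,
# so the smallest distances correspond to the most similar user-rating vectors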
from sklearn.neighbors import NearestNeighbors
knnmodel = NearestNeighbors(metric = 'cosine', algorithm = 'brute', n_neighbors=20)
knnmodel.fit(anime_matrix)
###Output
_____no_output_____
###Markdown
Testing collaborative filtering recommendation function
###Code
def recommend_collaborative_filtering(anime_name, data, model, n_recommendations):
idx = process.extractOne(anime_name, df_anime['title'] )[2] # using fuzzywuzzy to find the corresponding anime title
print('Anime Selected: ',df_anime['title'][idx],'Index: ',idx )
print('Searching for recommendations......')
distances, indices = model.kneighbors(data[idx], n_neighbors = n_recommendations)
for i in indices:
print(df_anime['title'][i].where(i!=idx))
recommend_collaborative_filtering('death notes', anime_matrix, knnmodel, 20)
print()
recommend_collaborative_filtering('Steins;Gate', anime_matrix, knnmodel, 20)
###Output
Anime Selected: Death Note Index: 740
Searching for recommendations......
740 NaN
5405 Kara no Kyoukai 4: Garan no Dou
5392 Aa! Megami-sama!: Tatakau Tsubasa
5393 Ashita no Nadja
5419 Zero no Tsukaima: Princesses no Rondo Picture ...
5406 Kaleido Star: Legend of Phoenix - Layla Hamilt...
5407 Soukyuu no Fafner: Dead Aggressor - Exodus 2nd...
5408 Isekai no Seikishi Monogatari
5409 Shokugeki no Souma: Ni no Sara OVA
5410 Coquelicot-zaka kara
5411 Xiao Lu He Xiao Lan
5412 Kuroko no Basket: Tip Off
5413 Hidamari Sketch x ☆☆☆
5414 Sky Girls
5415 Smile Precure!
5416 Bonobono (TV)
5417 Brotherhood: Final Fantasy XV
5404 Macross F
5394 Detective Conan Movie 01: The Timed Skyscraper
5396 Utawarerumono Specials
Name: title, dtype: object
Anime Selected: Steins;Gate Index: 773
Searching for recommendations......
773 NaN
5404 Macross F
5392 Aa! Megami-sama!: Tatakau Tsubasa
5403 Mahou Shoujo Lyrical Nanoha: The Movie 1st
5418 Osake wa Fuufu ni Natte kara: Yuzu Atsukan
5405 Kara no Kyoukai 4: Garan no Dou
5406 Kaleido Star: Legend of Phoenix - Layla Hamilt...
5407 Soukyuu no Fafner: Dead Aggressor - Exodus 2nd...
5408 Isekai no Seikishi Monogatari
5409 Shokugeki no Souma: Ni no Sara OVA
5410 Coquelicot-zaka kara
5411 Xiao Lu He Xiao Lan
5412 Kuroko no Basket: Tip Off
5413 Hidamari Sketch x ☆☆☆
5414 Sky Girls
5415 Smile Precure!
5416 Bonobono (TV)
5402 Special A
5390 Bishoujo Senshi Sailor Moon SuperS
5394 Detective Conan Movie 01: The Timed Skyscraper
Name: title, dtype: object
###Markdown
Old Approach collaborative filtering recommendation
###Code
# # get a random anime title and find recommendation for it.
# query = np.random.choice(anime_pivot.shape[0])
# def print_rec(query):
# distances, indices = knnmodel.kneighbors(anime_pivot.iloc[query,:].values.reshape(1, -1), n_neighbors=5)
# # ^ returning the distances and indices of 6 neighbours through KNN from the randomly chosen index(anime_title)
# # print(distances, indices)
# for i in range(0, len(distances.flatten())):
# if i==0:
# print('Recommendations for {0}:'.format(anime_pivot.index[query]))
# else:
# print('\t{0}: {1}, with distance of {2}:'.format(i, anime_pivot.index[indices.flatten()[i]], distances.flatten()[i]))
# print_rec(query)
###Output
_____no_output_____
###Markdown
Makes an array of anime titles (strings), not anime ids
###Code
# arr = []
# for i in range(len(anime_pivot)):
# arr.append(anime_pivot.index[int(i)])
# # print(anime_pivot.index[int(i)])
###Output
_____no_output_____
###Markdown
User is allowed to enter input (the index) instead of it being randomly chosen.
###Code
# inp = input ("enter a num ")
# # anime_pivot.index[int(inp)]
# query = int(inp)
# print_rec(query)
## User gets recommendations based on the anime title.
# query = arr.index("A.I.C.O.: Incarnation")
# print_rec(query)
###Output
_____no_output_____
###Markdown
Recommendation based on Content-Based Filtering (only using ``project data_animes.csv`` data )
###Code
# cleaning anime_title
import re
def text_cleaning(text):
    # strip the HTML entities left over in the scraped titles
    text = re.sub(r'&quot;', '', text)
    text = re.sub(r'.hack//', '', text)
    text = re.sub(r'&#039;', '', text)
    text = re.sub(r'A&#039;s', '', text)
    text = re.sub(r'I&#039;', 'I\'', text)
    text = re.sub(r'&amp;', 'and', text)
return text
df_anime['title'] = df_anime['title'].apply(text_cleaning)
df_anime.head()
# Term Frequency (TF) and Inverse Document Frequency (IDF)
from sklearn.feature_extraction.text import TfidfVectorizer
#getting tfidf
tfv = TfidfVectorizer(min_df=3, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 3),
stop_words = 'english')
# Filling NaNs with empty string
df_anime['genre'] = df_anime['genre'].fillna('')
genres_str = df_anime['genre'].str.split(',').astype(str)
tfv_matrix = tfv.fit_transform(genres_str)
df_anime.shape
from sklearn.metrics.pairwise import sigmoid_kernel
# Compute the sigmoid kernel
sig = sigmoid_kernel(tfv_matrix, tfv_matrix)
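# sig[i, j] = tanh(gamma * dot(tfidf_i, tfidf_j) + coef0) with sklearn defaults (gamma=1/n_features, coef0=1);
# larger values indicate more similar genre vectors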
#getting the indices of anime title
indices = pd.Series(df_anime.index, index=df_anime['title']).drop_duplicates()
indices.head()
###Output
_____no_output_____
###Markdown
Random anime chosen + recommendation function
###Code
# get a random anime title and find recommendation for it.
query = np.random.choice(anime_pivot.shape[0])
def print_rec(query):
distances, indices = knnmodel.kneighbors(anime_pivot.iloc[query,:].values.reshape(1, -1), n_neighbors=5)
# ^ returning the distances and indices of 6 neighbours through KNN from the randomly chosen index(anime_title)
# print(distances, indices)
for i in range(0, len(distances.flatten())):
if i==0:
print('Recommendations for {0}:'.format(anime_pivot.index[query]))
else:
print('\t{0}: {1}, with distance of {2}:'.format(i, anime_pivot.index[indices.flatten()[i]], distances.flatten()[i]))
print_rec(query)
###Output
Recommendations for Shuumatsu Nani Shitemasu ka? Isogashii Desu ka? Sukutte Moratte Ii Desu ka?:
1: Ore no Imouto ga Konnani Kawaii Wake ga Nai., with distance of 1.0:
2: Ore no Imouto ga Konnani Kawaii Wake ga Nai. Specials, with distance of 1.0:
3: Ore no Imouto ga Konnani Kawaii Wake ga Nai Specials, with distance of 1.0:
4: Ore no Imouto ga Konnani Kawaii Wake ga Nai, with distance of 1.0:
###Markdown
makes an array of anime titles (strings), not anime ids
###Code
arr = []
for i in range(len(anime_pivot)):
arr.append(anime_pivot.index[int(i)])
# print(anime_pivot.index[int(i)])
###Output
_____no_output_____
###Markdown
User is allowed to enter input (the index) instead of it being randomly chosen.
###Code
def recommend(title, sig=sig):
# Get the index corresponding to original_title
idx = indices[title]
idx = idx[0]
    # Get the pairwise similarity scores
sig_scores = list(enumerate(sig[idx]))
# Sort the movies
sig_scores = sorted(sig_scores, key=lambda x: x[1], reverse=True)
# Scores of the 10 most similar movies
sig_scores = sig_scores[1:11]
# Movie indices
anime_indices = [i[0] for i in sig_scores]
# Top 10 most similar movies
return pd.DataFrame({'Anime title': df_anime['title'].iloc[anime_indices].values,
'Rating': df_anime['score'].iloc[anime_indices].values})
# Testing it with different anime titles
print(recommend('Death Note'))
print()
print(recommend('Dragon Ball Z'))
print()
print(recommend('Steins;Gate'))
print()
###Output
Anime title Rating
0 Death Note 8.65
1 Death Note: Rewrite 7.78
2 Death Note: Rewrite 7.78
3 B: The Beginning 7.51
4 B: The Beginning 2 NaN
5 Imawa no Kuni no Alice (OVA) 7.58
6 Mo Ri Shu Guang 6.52
7 Higurashi no Naku Koro ni Kai 8.29
8 Higurashi no Naku Koro ni Kai 8.29
9 Bloody Night 3.91
Anime title Rating
0 Dragon Ball Kai 7.85
1 Dragon Ball Kai (2014) 7.83
2 Dragon Ball Z Movie 11: Super Senshi Gekiha!! ... 6.08
3 Dragon Ball Z 8.27
4 Dragon Ball Z Movie 15: Fukkatsu no "F" 7.27
5 Dragon Ball Z Movie 11: Super Senshi Gekiha!! ... 6.08
6 Dragon Ball Kai 7.85
7 Dragon Ball Kai (2014) 7.83
8 Dragon Ball 8.12
9 Dragon Ball 8.12
Anime title Rating
0 Steins;Gate 9.11
1 Uchiko no Mama to Okaasan 5.32
2 Hello World 7.90
3 Mou Hitotsu no Mirai wo. 6.10
4 Hoshi no Ko Poron 6.07
5 Amanatsu 6.07
6 Adachi-ga Hara 6.08
7 Seikaisuru Kado: Ekwari 6.12
8 Mirai Arise NaN
9 100-man-nen Chikyuu no Tabi: Bander Book 6.02
###Markdown
[Chosen Dataset: myanimelist-dataset-animes-profiles-reviews](https://www.kaggle.com/marlesson/myanimelist-dataset-animes-profiles-reviews)
- project data_animes.csv
- project_data_profiles.csv
- project_data_reviews.csv
###Code
!pip install fuzzywuzzy
# !pip install plotly
# !pip install wordcloud
# Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
import json
from scipy.sparse import csr_matrix
import pickle
%matplotlib inline
df_profile = pd.read_csv('data/project_data_profiles.csv', sep=',')
df_profile.head()
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
df_anime = pd.read_csv('data/project_data_animes.csv', sep=',')
# rename column "uid" to "anime_uid"
df_anime = df_anime.rename(columns={'uid': 'anime_uid'})
# pickle.dump(df_anime, open('models/df_anime.pkl', 'wb') )
df_review= pd.read_csv('data/project_data_reviews.csv', sep=',')
df_review.head()
###Output
_____no_output_____
###Markdown
--- Join/merge datasets ---
###Code
anime_review_data=pd.merge(df_anime,df_review,on='anime_uid',suffixes= ['', '_review'])
anime_full_data = pd.merge(anime_review_data, df_profile, on='profile', suffixes=['','_profile'])
import pickle
pickle.dump(anime_full_data, open('models/anime_full_data.pkl', 'wb') )
anime_review_data.head()
selections = ['anime_uid','title','synopsis','score','img_url']
anime_data = anime_review_data[selections]
pickle.dump(anime_data, open('models/anime_data.pkl', 'wb') )
###Output
_____no_output_____
###Markdown
--- Creating a dataframe that shows top 10 anime based on score counts---
###Code
def TopTenBasedOnScore():
combine_anime_rating = anime_full_data.dropna(axis = 0, subset = ['title'])
anime_ratingCount = (combine_anime_rating.
groupby(by=['anime_uid','title'])['score_review'].count().
reset_index().rename(columns = {'score':'totalScoreCount'})
[['anime_uid','title','score_review']])
pickle.dump(anime_ratingCount, open('models/anime_ratingCount.pkl', 'wb') )
#Plotting the bar plot for top 10 anime as per rating
top10_animerating=anime_ratingCount[['anime_uid','title', 'score_review']].sort_values(by = 'score_review',ascending = False).head(10)
ax=sns.barplot(x="title", y="score_review", data=top10_animerating, palette="Dark2")
ax.set_xticklabels(ax.get_xticklabels(), fontsize=11, rotation=40, ha="right")
ax.set_title('Top 10 Anime based on score/rating counts',fontsize = 22)
ax.set_xlabel('Anime',fontsize = 20)
ax.set_ylabel('User Rating count', fontsize = 20)
TopTenBasedOnScore()
combine_anime_rating = anime_full_data.dropna(axis = 0, subset = ['title'])
anime_ratingCount = (combine_anime_rating.
groupby(by=['anime_uid','title','img_url','synopsis'])['score_review'].count().
reset_index().rename(columns = {'score':'totalScoreCount'})
[['anime_uid','title','score_review','img_url','synopsis']])
pickle.dump(anime_ratingCount, open('models/anime_ratingCount.pkl', 'wb') )
top10_animerating=anime_ratingCount[['anime_uid','title', 'score_review','img_url','synopsis']].sort_values(by = 'score_review',ascending = False).head(5)
# for idd in top10_animerating['anime_uid']:
# print(df_anime[df_anime['anime_uid']==idd]['title'].values[0])
# print(anime_full_data[anime_full_data['anime_uid']==idd]['synopsis'].values[0])
# print(df_anime[df_anime['anime_uid']==idd]['score'].values[0])
###Output
_____no_output_____
###Markdown
--- Creating a dataframe that shows top 10 anime based on Community size---
###Code
def TopTenBasedOnCommunitySize():
duplicate_anime=anime_full_data.copy()
duplicate_anime.drop_duplicates(subset ="title",
keep = 'first', inplace = True)
#Plotting bar plot
top10_animemembers=duplicate_anime[['title', 'members']].sort_values(by = 'members',ascending = False).head(10)
ax=sns.barplot(x="title", y="members", data=top10_animemembers, palette="gnuplot2")
ax.set_xticklabels(ax.get_xticklabels(), fontsize=11, rotation=40, ha="right")
ax.set_title('Top 10 Anime based on members',fontsize = 22)
ax.set_xlabel('Anime',fontsize = 20)
ax.set_ylabel('Community Size', fontsize = 20)
TopTenBasedOnCommunitySize()
def RatingGraph():
#Distribution of ratings
plt.figure(figsize = (15, 7))
plt.subplot(1,2,1)
anime_full_data['score'].hist(bins=70)
plt.title("Rating of websites")
plt.subplot(1,2,2)
anime_full_data['score_review'].hist(bins=70)
plt.title("Rating of users")
RatingGraph()
# anime_full_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Genre Word Cloud
###Code
nonull_anime=anime_full_data.copy()
nonull_anime.dropna(inplace=True)
from collections import defaultdict
all_genres = defaultdict(int)
for genres in nonull_anime['genre']:
for genre in genres.split(','):
all_genres[genre.strip()] += 1
from wordcloud import WordCloud
genres_cloud = WordCloud(width=800, height=400, background_color='white', colormap='gnuplot').generate_from_frequencies(all_genres)
plt.imshow(genres_cloud, interpolation='bilinear')
plt.axis('off')
#Replacing -1 with NaN in user_rating column
anime_feature=anime_full_data.copy()
anime_feature["score_review"].replace({-1: np.nan}, inplace=True)
anime_feature.head()
#dropping all the null values as it aids nothing
anime_feature = anime_feature.dropna(axis = 0, how ='any')
anime_feature.isnull().sum()
counts = anime_feature['uid'].value_counts()
###Output
_____no_output_____
###Markdown
There are users who have rated only once; even if they rated it 5, that can't be considered a valuable record for recommendation, so let's consider a minimum of 10 ratings by a user as the threshold value.
###Code
anime_feature = anime_feature[anime_feature['uid'].isin(counts[counts >= 10].index)]
# anime_feature['uid'].value_counts()
###Output
_____no_output_____
###Markdown
Pivot table

A pivot table helps create the sparse matrix, which we can then use for cosine similarity.
###Code
# anime_full_data = anime_full_data.drop_duplicates(['uid','title'])
# anime_pivot = anime_full_data.pivot_table(index = 'title', columns = 'uid', values='score_review').fillna(0)
###Output
_____no_output_____
###Markdown
--- Another approach to get the anime_pivot if ``pivot_table`` uses too much memory
###Code
# testing = anime_full_data.groupby(['title','uid'])['score_review'].max().unstack().fillna(0)
# testing.head()
# print(anime_pivot.shape)
# print(testing.shape)
print(anime_full_data.shape)
anime_full_data = anime_full_data.drop_duplicates(['uid','title'])
print(anime_full_data.shape)
###Output
(811636, 22)
(130519, 22)
###Markdown
--- Recommendation based on Collaborative Filtering
###Code
anime_full_data = anime_full_data.drop_duplicates(['title'])
anime_pivot = anime_full_data.pivot_table(index = 'title', columns = 'uid', values='score_review').fillna(0)
# creating a sparse matrix
anime_matrix = csr_matrix(anime_pivot.values)
# fitting the model. Cosine Similarity using KNN.
from sklearn.neighbors import NearestNeighbors
knnmodel = NearestNeighbors(metric = 'cosine', algorithm = 'brute', n_neighbors=20)
knnmodel.fit(anime_matrix)
# anime_pivot.head()
###Output
_____no_output_____
###Markdown
Testing collaborative filtering recommendation function
###Code
knnmodel
arr = []
for i in range(len(anime_pivot)):
arr.append(df_anime['title'][int(i)])
def recommend_collaborative_filtering(anime_name, model, n_recommendations):
idx = arr.index(anime_name)
# idx = process.extractOne(anime_name, df_anime['title'] )[2] # using fuzzywuzzy to find the corresponding anime title
print('Anime Selected: ',df_anime['title'][idx],'Index: ',idx )
print('Searching for recommendations......')
distances, indices = model.kneighbors(anime_matrix[idx], n_neighbors = n_recommendations)
for i in indices:
print(df_anime['title'][i])
recommend_collaborative_filtering('Death Note', knnmodel, 6)
print()
recommend_collaborative_filtering('Steins;Gate', knnmodel, 6)
def testing(anime_name, model, n_recommendations):
res = []
idx = arr.index(anime_name)
# idx = process.extractOne(anime_name, df_anime['title'] )[2] # using fuzzywuzzy to find the corresponding anime title
# print('Anime Selected: ',df_anime['title'][idx],'Index: ',idx )
# print('Searching for recommendations......')
distances, indices = model.kneighbors(anime_matrix[idx], n_neighbors = n_recommendations)
for i in indices:
res.append(df_anime['title'][i].values)
return res
collaborative_filtering_rec = {}
for anime_title in df_anime.title.unique():
try:
title_list =testing(anime_title,knnmodel, 11 )
collaborative_filtering_rec[anime_title]=title_list
except:
pass
# collaborative_filtering_rec
pickle.dump(collaborative_filtering_rec, open('models/collaborative_filtering_rec.pkl', 'wb') )
collaborative_filtering_rec['Xiao Lu He Xiao Lan']
results =[]
user_input_text='Death Note'
collaborative_img_url=[]
for index in range(len(collaborative_filtering_rec[user_input_text][0])):
if index <6:
anime_title = collaborative_filtering_rec[user_input_text][0][index]
results.append(anime_title)
img_url = (df_anime[df_anime['title']==anime_title]['img_url'].values[0])
collaborative_img_url.append(img_url)
print((collaborative_img_url))
print((results))
###Output
['https://cdn.myanimelist.net/images/anime/9/9453.jpg', 'https://cdn.myanimelist.net/images/anime/1670/93590.jpg', 'https://cdn.myanimelist.net/images/anime/10/76803.jpg', 'https://cdn.myanimelist.net/images/anime/7/76538.jpg', 'https://cdn.myanimelist.net/images/anime/2/20345.jpg', 'https://cdn.myanimelist.net/images/anime/1032/96640.jpg']
['Death Note', 'Xiao Lu He Xiao Lan', 'Kuroko no Basket: Tip Off', 'Soukyuu no Fafner: Dead Aggressor - Exodus 2nd Season', 'Isekai no Seikishi Monogatari', 'Shokugeki no Souma: Ni no Sara OVA']
###Markdown
Old Approach collaborative filtering recommendation
###Code
# # get a random anime title and find recommendation for it.
# query = np.random.choice(anime_pivot.shape[0])
# def print_rec(query):
# distances, indices = knnmodel.kneighbors(anime_pivot.iloc[query,:].values.reshape(1, -1), n_neighbors=5)
# # ^ returning the distances and indices of 6 neighbours through KNN from the randomly chosen index(anime_title)
# # print(distances, indices)
# for i in range(0, len(distances.flatten())):
# if i==0:
# print('Recommendations for {0}:'.format(anime_pivot.index[query]))
# else:
# print('\t{0}: {1}, with distance of {2}:'.format(i, anime_pivot.index[indices.flatten()[i]], distances.flatten()[i]))
# print_rec(query)
###Output
_____no_output_____
###Markdown
Makes an array of anime titles (strings), not anime ids
###Code
# arr = []
# for i in range(len(anime_pivot)):
# arr.append(anime_pivot.index[int(i)])
# # print(anime_pivot.index[int(i)])
###Output
_____no_output_____
###Markdown
User is allowed to enter input (the index) instead of it being randomly chosen.
###Code
# inp = input ("enter a num ")
# # anime_pivot.index[int(inp)]
# query = int(inp)
# print_rec(query)
## User gets recommendations based on the anime title.
# query = arr.index("A.I.C.O.: Incarnation")
# print_rec(query)
###Output
_____no_output_____
###Markdown
Recommendation based on Content-Based Filtering (only using ``project data_animes.csv`` data )
###Code
# cleaning anime_title
import re
def text_cleaning(text):
    # strip the HTML entities left over in the scraped titles
    text = re.sub(r'&quot;', '', text)
    text = re.sub(r'.hack//', '', text)
    text = re.sub(r'&#039;', '', text)
    text = re.sub(r'A&#039;s', '', text)
    text = re.sub(r'I&#039;', 'I\'', text)
    text = re.sub(r'&amp;', 'and', text)
return text
df_anime['title'] = df_anime['title'].apply(text_cleaning)
print(df_anime.shape)
df_anime = df_anime.drop_duplicates('title')
print(df_anime.shape)
# Term Frequency (TF) and Inverse Document Frequency (IDF)
from sklearn.feature_extraction.text import TfidfVectorizer
#getting tfidf
tfv = TfidfVectorizer(min_df=3, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 3),
stop_words = 'english')
# Filling NaNs with empty string
df_anime['genre'] = df_anime['genre'].fillna('')
genres_str = df_anime['genre'].str.split(',').astype(str)
tfv_matrix = tfv.fit_transform(genres_str)
df_anime.head()
from sklearn.metrics.pairwise import sigmoid_kernel
# Compute the sigmoid kernel
sig = sigmoid_kernel(tfv_matrix, tfv_matrix)
#getting the indices of anime title
indices = pd.Series(df_anime.index, index=df_anime['title']).drop_duplicates()
indices['Death Note']
sig
###Output
_____no_output_____
###Markdown
Random anime chosen + recommendation function
###Code
# # get a random anime title and find recommendation for it.
# query = np.random.choice(anime_pivot.shape[0])
# def print_rec(query):
# distances, indices = knnmodel.kneighbors(anime_pivot.iloc[query,:].values.reshape(1, -1), n_neighbors=5)
# # ^ returning the distances and indices of 6 neighbours through KNN from the randomly chosen index(anime_title)
# # print(distances, indices)
# for i in range(0, len(distances.flatten())):
# if i==0:
# print('Recommendations for {0}:'.format(anime_pivot.index[query]))
# else:
# print('\t{0}: {1}, with distance of {2}:'.format(i, anime_pivot.index[indices.flatten()[i]], distances.flatten()[i]))
# print_rec(query)
###Output
_____no_output_____
###Markdown
makes an array of anime titles (strings), not anime ids
###Code
arr = []
for i in range(len(anime_pivot)):
arr.append(anime_pivot.index[int(i)])
# print(anime_pivot.index[int(i)])
###Output
_____no_output_____
###Markdown
User is allowed to enter input (the index) instead of it being randomly chosen.
###Code
def recommend(title, sig=sig):
# Get the index corresponding to original_title
idx = indices[title]
res = []
    # Get the pairwise similarity scores
sig_scores = list(enumerate(sig[idx]))
# Sort the movies
sig_scores = sorted(sig_scores, key=lambda x: x[1], reverse=True)
# Scores of the 10 most similar movies
sig_scores = sig_scores[1:11]
# Anime indices
anime_indices = [i[0] for i in sig_scores]
for anime_title in df_anime['title'].iloc[anime_indices].values:
res.append(anime_title)
    return res
# Top 10 most similar movies
# return pd.DataFrame({'Anime title': df_anime['title'].iloc[anime_indices].values,
# 'Rating': df_anime['score'].iloc[anime_indices].values})
# Testing it with different anime titles
print(recommend('Death Note'))
print()
print(recommend('Dragon Ball Z'))
print()
print(recommend('Steins;Gate'))
print()
content_based_rec = {}
for anime_title in df_anime.title.unique():
try:
title_list =recommend(anime_title )
content_based_rec[anime_title]=title_list
except:
pass
content_based_rec
pickle.dump(content_based_rec, open('models/content_based_rec.pkl', 'wb') )
content_based_rec['Death Note']
content_recommended_anime_title =[]
content_img_url=[]
for index in range(len(content_based_rec[user_input_text])):
if index <6:
anime_title = content_based_rec[user_input_text][index]
content_recommended_anime_title.append(anime_title)
img_url = (df_anime[df_anime['title']==anime_title]['img_url'].values[0])
content_img_url.append(img_url)
print(content_recommended_anime_title)
print(content_img_url)
###Output
['Death Note: Rewrite', 'B: The Beginning', 'B: The Beginning 2', 'Imawa no Kuni no Alice (OVA)', 'Mo Ri Shu Guang', 'Higurashi no Naku Koro ni Kai']
['https://cdn.myanimelist.net/images/anime/13/8518.jpg', 'https://cdn.myanimelist.net/images/anime/1564/90469.jpg', 'https://cdn.myanimelist.net/images/anime/1594/93194.jpg', 'https://cdn.myanimelist.net/images/anime/1004/96669.jpg', 'https://cdn.myanimelist.net/images/anime/1939/90507.jpg', 'https://cdn.myanimelist.net/images/anime/12/14114.jpg']
|
miscellaneous/coco_prepare.ipynb | ###Markdown
single class
###Code
import pandas as pd
import numpy as np
import json
from tqdm import tqdm
tqdm.pandas()
class NumpyEncoder(json.JSONEncoder):
"""
https://stackoverflow.com/questions/26646362/numpy-array-is-not-json-serializable
Special json encoder for numpy types
"""
def default(self, obj):
if isinstance(obj, np.integer):
return int(obj)
elif isinstance(obj, np.floating):
return float(obj)
elif isinstance(obj, np.ndarray):
return obj.tolist()
return json.JSONEncoder.default(self, obj)
class COCOConverter:
"""Class to convert competition csv to coco format."""
def __init__(
self,
df: pd.DataFrame,
image_height: int = 720,
image_width: int = 1280,
type_agnostic: bool = False):
self.image_height = image_height
self.image_width = image_width
self.type_agnostic = type_agnostic
if self.type_agnostic:
self.categories = [{"id": 1, "name": "Helmet"}]
else:
self.categories = [
{"id": 1, "name": "impact_None",},
{"id": 2, "name": "impact_Helmet"},
{"id": 3, "name": "impact_Shoulder",},
{"id": 4, "name": "impact_Body"},
{"id": 5, "name": "impact_Ground",},
{"id": 6, "name": "impact_Hand"},
]
self.df = self._initialize(df)
def _get_file_name(self, row: pd.Series):
base_name = row.video[:-4]
file_name = f'{base_name}_frame{row.frame:04}.jpg'
return file_name
def _get_bbox(self, row: pd.Series):
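        # COCO bbox convention: [x_min, y_min, width, height]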
return [row.left, row.top, row.width, row.height]
def _initialize(self, df: pd.DataFrame):
# set category id
if self.type_agnostic:
df['impactType'] = 'Helmet'
df['category_id'] = 1
else:
df['category_id'] = df['impactType'].map(
{
'None': 1,
'Helmet': 2,
'Shoulder': 3,
'Body': 4,
'Ground': 5,
'Hand': 6
}
)
# some preprocesses
df['file_name'] = df[['video', 'frame']].progress_apply(self._get_file_name, axis=1)
df['area'] = df['width'] * df['height']
df['bbox'] = df[['left', 'top', 'height', 'width']].progress_apply(self._get_bbox, axis=1)
df['iscrowd'] = 0
return df
def save(self, save_path):
"""
Save as coco json format.
But also has many supplemental items like gameKey or view.
"""
df = self.df.copy()
image_df = df[['gameKey', 'playID', 'view', 'video', 'frame', 'file_name']].drop_duplicates()
image_df['height'] = self.image_height
image_df['width'] = self.image_width
# add image id to images. Note that it's called just "id".
image_df['id'] = range(1, len(image_df) + 1)
# add image id to annotations.
df['image_id'] = df[['file_name']].merge(image_df[['file_name', 'id']])['id'].values
df['id'] = range(1, len(df) + 1)
print('start dumping...')
coco_annotations = dict()
coco_annotations['categories'] = self.categories
coco_annotations['images'] = [dict(row) for _, row in image_df.iterrows()]
coco_annotations['annotations'] = [dict(row) for _, row in df.iterrows()]
json.dump(coco_annotations, open(save_path, 'w'), indent=4, cls=NumpyEncoder)
df = pd.read_csv('work/NFL/train_labels.csv')
play_ids = df['playID'].unique()
num_train = int(len(play_ids) * 0.8)
train_play_ids = df['playID'].unique()[:num_train]
valid_play_ids = df['playID'].unique()[num_train:]
print('number of train videos:', len(train_play_ids))
print('number of valid videos:', len(valid_play_ids))
train_df = df.query('playID in @train_play_ids').reset_index(drop=True).copy()
valid_df = df.query('playID in @valid_play_ids').reset_index(drop=True).copy()
print('number of train annotations:', len(train_df))
print('number of valid annotations:', len(valid_df))
train_coco = COCOConverter(train_df, type_agnostic=True)
train_coco.save('train_single.json')
valid_coco = COCOConverter(valid_df, type_agnostic=True)
valid_coco.save('valid_single.json')
###Output
100%|██████████| 753170/753170 [00:15<00:00, 49854.29it/s]
100%|██████████| 753170/753170 [00:28<00:00, 26696.18it/s]
###Markdown
multiclass
###Code
import random
import numpy as np
from pathlib import Path
import datetime
import pandas as pd
from tqdm.notebook import tqdm
from sklearn.model_selection import train_test_split
import cv2
import os
import json
import matplotlib.pyplot as plt
from IPython.core.display import Video, display
import subprocess
import gc
import shutil
import pandas as pd
# Load image level csv file
extra_df = pd.read_csv('./work/NFL/image_labels.csv')
print('Number of ground truth bounding boxes: ', len(extra_df))
# Number of unique labels
label_to_id = {label: i for i, label in enumerate(extra_df.label.unique())}
print('Unique labels: ', label_to_id)
def create_ann_file(df, category_id):
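    # Builds a COCO-style dict with 'images', 'annotations' (bbox = [x_min, y_min, width, height])
    # and 'categories' entries for the five helmet label types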
now = datetime.datetime.now()
data = dict(
info=dict(
description='NFL-Helmet-Assignment',
url=None,
version=None,
year=now.year,
contributor=None,
date_created=now.strftime('%Y-%m-%d %H:%M:%S.%f'),
),
licenses=[dict(
url=None,
id=0,
name=None,
)],
images=[
# license, url, file_name, height, width, date_captured, id
],
type='instances',
annotations=[
# segmentation, area, iscrowd, image_id, bbox, category_id, id
],
categories=[
# supercategory, id, name
],
)
class_name_to_id = {}
labels = ["__ignore__",
'Helmet',
'Helmet-Blurred',
'Helmet-Difficult',
'Helmet-Sideline',
'Helmet-Partial']
for i, each_label in enumerate(labels):
class_id = i - 1 # starts with -1
class_name = each_label
if class_id == -1:
assert class_name == '__ignore__'
continue
class_name_to_id[class_name] = class_id
data['categories'].append(dict(
supercategory=None,
id=class_id,
name=class_name,
))
box_id = 0
for i, image in enumerate(os.listdir(TRAIN_PATH)):
img = cv2.imread(TRAIN_PATH+'/'+image)
height, width, _ = img.shape
data['images'].append({
'license':0,
'url': None,
'file_name': image,
'height': height,
'width': width,
            'date_captured': None,
'id': i
})
df_temp = df[df.image == image]
for index, row in df_temp.iterrows():
area = round(row.width*row.height, 1)
bbox =[row.left, row.top, row.width, row.height]
data['annotations'].append({
'id': box_id,
'image_id': i,
'category_id': category_id[row.label],
'area': area,
'bbox':bbox,
'iscrowd':0
})
box_id+=1
return data
from sklearn.model_selection import train_test_split
TRAIN_PATH = './work/NFL/images'
extra_df = pd.read_csv('./work/NFL/image_labels.csv')
category_id = {'Helmet':0, 'Helmet-Blurred':1,
'Helmet-Difficult':2, 'Helmet-Sideline':3,
'Helmet-Partial':4}
df_train, df_val = train_test_split(extra_df, test_size=0.2, random_state=66)
print('train:',df_train.shape,'val:',df_val.shape)
image_bbox_label = {}
for image, df in extra_df.groupby('image'):
image_bbox_label[image] = df.reset_index(drop=True)
train_names, valid_names = train_test_split(list(image_bbox_label), test_size=0.2, random_state=42)
print(f'Size of dataset: {len(image_bbox_label)},\
training images: {len(train_names)},\
validation images: {len(valid_names)}')
frames=[]
for i in train_names:
frames.append(extra_df[extra_df['image']==i])
df_train = pd.concat(frames)
frames=[]
for i in valid_names:
frames.append(extra_df[extra_df['image']==i])
df_val = pd.concat(frames)
print(df_val.head())
print(extra_df.head())
ann_file_train = create_ann_file(df_train.reset_index(), category_id)
ann_file_val = create_ann_file(df_val.reset_index(), category_id)
print('train:',df_train.shape,'val:',df_val.shape)
#save as json to gdrive
with open('work/NFL/ann_file_train.json', 'w') as f:
json.dump(ann_file_train, f, indent=4)
with open('work/NFL/ann_file_val.json', 'w') as f:
json.dump(ann_file_val, f, indent=4)
import base64
import IPython
import json
import numpy as np
import os
import random
import requests
from io import BytesIO
from math import trunc
from PIL import Image as PILImage
from PIL import ImageDraw as PILImageDraw
class CocoDataset():
def __init__(self, annotation_path, image_dir):
self.annotation_path = annotation_path
self.image_dir = image_dir
self.colors = ['blue', 'purple', 'red', 'green', 'orange', 'salmon', 'pink', 'gold',
'orchid', 'slateblue', 'limegreen', 'seagreen', 'darkgreen', 'olive',
'teal', 'aquamarine', 'steelblue', 'powderblue', 'dodgerblue', 'navy',
'magenta', 'sienna', 'maroon']
json_file = open(self.annotation_path)
self.coco = json.load(json_file)
json_file.close()
#self.process_info()
#self.process_licenses()
self.process_categories()
self.process_images()
self.process_segmentations()
def display_info(self):
print('Dataset Info:')
print('=============')
for key, item in self.info.items():
print(' {}: {}'.format(key, item))
requirements = [['description', str],
['url', str],
['version', str],
['year', int],
['contributor', str],
['date_created', str]]
for req, req_type in requirements:
if req not in self.info:
print('ERROR: {} is missing'.format(req))
elif type(self.info[req]) != req_type:
print('ERROR: {} should be type {}'.format(req, str(req_type)))
print('')
def display_licenses(self):
print('Licenses:')
print('=========')
requirements = [['id', int],
['url', str],
['name', str]]
for license in self.licenses:
for key, item in license.items():
print(' {}: {}'.format(key, item))
for req, req_type in requirements:
if req not in license:
print('ERROR: {} is missing'.format(req))
elif type(license[req]) != req_type:
print('ERROR: {} should be type {}'.format(req, str(req_type)))
print('')
print('')
def display_categories(self):
print('Categories:')
print('=========')
for sc_key, sc_val in self.super_categories.items():
print(' super_category: {}'.format(sc_key))
for cat_id in sc_val:
print(' id {}: {}'.format(cat_id, self.categories[cat_id]['name']))
print('')
def display_image(self, image_id, show_polys=False, show_bbox=True, show_labels=True, show_crowds=False, use_url=False):
print('Image:')
print('======')
if image_id == 'random':
image_id = random.choice(list(self.images.keys()))
# Print the image info
image = self.images[image_id]
for key, val in image.items():
print(' {}: {}'.format(key, val))
# Open the image
if use_url:
image_path = image['coco_url']
response = requests.get(image_path)
image = PILImage.open(BytesIO(response.content))
else:
image_path = os.path.join(self.image_dir, image['file_name'])
image = PILImage.open(image_path)
buffered = BytesIO()
image.save(buffered, format="PNG")
img_str = "data:image/png;base64, " + base64.b64encode(buffered.getvalue()).decode()
# Calculate the size and adjusted display size
max_width = 900
image_width, image_height = image.size
adjusted_width = min(image_width, max_width)
adjusted_ratio = adjusted_width / image_width
adjusted_height = adjusted_ratio * image_height
# Create list of polygons to be drawn
polygons = {}
bbox_polygons = {}
rle_regions = {}
poly_colors = {}
labels = {}
print(' segmentations ({}):'.format(len(self.segmentations[image_id])))
for i, segm in enumerate(self.segmentations[image_id]):
polygons_list = []
if segm['iscrowd'] != 0:
# Gotta decode the RLE
px = 0
x, y = 0, 0
rle_list = []
for j, counts in enumerate(segm['segmentation']['counts']):
if j % 2 == 0:
# Empty pixels
px += counts
else:
# Need to draw on these pixels, since we are drawing in vector form,
# we need to draw horizontal lines on the image
x_start = trunc(trunc(px / image_height) * adjusted_ratio)
y_start = trunc(px % image_height * adjusted_ratio)
px += counts
x_end = trunc(trunc(px / image_height) * adjusted_ratio)
y_end = trunc(px % image_height * adjusted_ratio)
if x_end == x_start:
# This is only on one line
rle_list.append({'x': x_start, 'y': y_start, 'width': 1 , 'height': (y_end - y_start)})
if x_end > x_start:
# This spans more than one line
# Insert top line first
rle_list.append({'x': x_start, 'y': y_start, 'width': 1, 'height': (image_height - y_start)})
# Insert middle lines if needed
lines_spanned = x_end - x_start + 1 # total number of lines spanned
full_lines_to_insert = lines_spanned - 2
if full_lines_to_insert > 0:
full_lines_to_insert = trunc(full_lines_to_insert * adjusted_ratio)
rle_list.append({'x': (x_start + 1), 'y': 0, 'width': full_lines_to_insert, 'height': image_height})
# Insert bottom line
rle_list.append({'x': x_end, 'y': 0, 'width': 1, 'height': y_end})
if len(rle_list) > 0:
rle_regions[segm['id']] = rle_list
# else:
# # Add the polygon segmentation
# for segmentation_points in segm['segmentation']:
# segmentation_points = np.multiply(segmentation_points, adjusted_ratio).astype(int)
# polygons_list.append(str(segmentation_points).lstrip('[').rstrip(']'))
polygons[segm['id']] = polygons_list
if i < len(self.colors):
poly_colors[segm['id']] = self.colors[i]
else:
poly_colors[segm['id']] = 'white'
bbox = segm['bbox']
bbox_points = [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1],
bbox[0] + bbox[2], bbox[1] + bbox[3], bbox[0], bbox[1] + bbox[3],
bbox[0], bbox[1]]
bbox_points = np.multiply(bbox_points, adjusted_ratio).astype(int)
bbox_polygons[segm['id']] = str(bbox_points).lstrip('[').rstrip(']')
labels[segm['id']] = (self.categories[segm['category_id']]['name'], (bbox_points[0], bbox_points[1] - 4))
# Print details
print(' {}:{}:{}'.format(segm['id'], poly_colors[segm['id']], self.categories[segm['category_id']]))
# Draw segmentation polygons on image
html = '<div class="container" style="position:relative;">'
html += '<img src="{}" style="position:relative;top:0px;left:0px;width:{}px;">'.format(img_str, adjusted_width)
html += '<div class="svgclass"><svg width="{}" height="{}">'.format(adjusted_width, adjusted_height)
if show_polys:
for seg_id, points_list in polygons.items():
fill_color = poly_colors[seg_id]
stroke_color = poly_colors[seg_id]
for points in points_list:
html += '<polygon points="{}" style="fill:{}; stroke:{}; stroke-width:1; fill-opacity:0.5" />'.format(points, fill_color, stroke_color)
if show_crowds:
for seg_id, rect_list in rle_regions.items():
fill_color = poly_colors[seg_id]
stroke_color = poly_colors[seg_id]
for rect_def in rect_list:
x, y = rect_def['x'], rect_def['y']
w, h = rect_def['width'], rect_def['height']
html += '<rect x="{}" y="{}" width="{}" height="{}" style="fill:{}; stroke:{}; stroke-width:1; fill-opacity:0.5; stroke-opacity:0.5" />'.format(x, y, w, h, fill_color, stroke_color)
if show_bbox:
for seg_id, points in bbox_polygons.items():
fill_color = poly_colors[seg_id]
stroke_color = poly_colors[seg_id]
html += '<polygon points="{}" style="fill:{}; stroke:{}; stroke-width:1; fill-opacity:0" />'.format(points, fill_color, stroke_color)
if show_labels:
for seg_id, label in labels.items():
color = poly_colors[seg_id]
html += '<text x="{}" y="{}" style="fill:{}; font-size: 12pt;">{}</text>'.format(label[1][0], label[1][1], color, label[0])
html += '</svg></div>'
html += '</div>'
html += '<style>'
html += '.svgclass { position:absolute; top:0px; left:0px;}'
html += '</style>'
return html
def process_info(self):
self.info = self.coco['info']
def process_licenses(self):
self.licenses = self.coco['licenses']
def process_categories(self):
self.categories = {}
self.super_categories = {}
for category in self.coco['categories']:
cat_id = category['id']
super_category = category['supercategory']
# Add category to the categories dict
if cat_id not in self.categories:
self.categories[cat_id] = category
else:
print("ERROR: Skipping duplicate category id: {}".format(category))
# Add category to super_categories dict
if super_category not in self.super_categories:
self.super_categories[super_category] = {cat_id} # Create a new set with the category id
else:
self.super_categories[super_category] |= {cat_id} # Add category id to the set
def process_images(self):
self.images = {}
for image in self.coco['images']:
image_id = image['id']
if image_id in self.images:
print("ERROR: Skipping duplicate image id: {}".format(image))
else:
self.images[image_id] = image
def process_segmentations(self):
self.segmentations = {}
for segmentation in self.coco['annotations']:
image_id = segmentation['image_id']
if image_id not in self.segmentations:
self.segmentations[image_id] = []
self.segmentations[image_id].append(segmentation)
annotation_path = r'work/NFL/ann_file_train.json'
image_dir = r'work/NFL/images'
coco_dataset = CocoDataset(annotation_path, image_dir)
# coco_dataset.display_info()
# coco_dataset.display_licenses()
coco_dataset.display_categories()
html = coco_dataset.display_image('random', use_url=False)
IPython.display.HTML(html)
###Output
Image:
======
license: 0
url: None
file_name: 57873_000399_Endzone_frame1010.jpg
height: 720
width: 1280
date_camputured: None
id: 2066
|
Machine_Learning_Basics.ipynb | ###Markdown
 Introduction to Machine Learning with PythonIn our [Data Science](https://brockdsl.github.io/Python_2.0_Workshop/) workshop we introduced some concepts by looking at some fictional data about people that got sick with a mysterious illness. In this session we are going to see if we can build a machine learning model to see if we can predict who has the illness based on the answers to some questions. As a further exercise we'll setup two examples that try to guess the quality of wine. I encourage you to try out these examples after class is done. First, a brief recap on Python codeThe following code should look familiar to you
###Code
import pandas as pd
#Load the file into a dataframe using the pandas read_csv function
data = pd.read_csv("https://brockdsl.github.io/Python_2.0_Workshop/canadian_toy_dataset.csv")
#Tell it what our columns are by passing along a list of that information
data.columns = ["city","gender","age","income","ill"]
data.head()
###Output
_____no_output_____
###Markdown
Machine Learning BasicsDon't let the impressive name fool you. Machine learning is more or less the following steps1. Getting your data and cleaning it up1. Identify what parts of your data are **features**1. Identify what is your **target variable** that you'll guess based on your features1. Split your data in **training and testing sets**1. **Train** your model against the training set1. **Validate** your model against the testing set1. ????1. ProfitWe are going to use the Python library [scikit-learn](https://scikit-learn.org/stable/) and we are going to be doing a [classification](https://en.wikipedia.org/wiki/Statistical_classification) problem. Decision TreeThis is one of the most basic machine learning models you can use. It is considered a [supervised learning](https://en.wikipedia.org/wiki/Supervised_learning) method. You create the best [decision tree](https://en.wikipedia.org/wiki/Decision_tree_learning) that you can based on your training data. Here's an example tree that shows your chance of surviving the Titanic disaster. What we are creating is a series of questions that, when answered, will put observations into a _bucket_, or in other terms one of the classification options. We also devise a probability associated with an observation falling into that _bucket_. The features are described by the labels, however ``sibsp`` is the number of spouses or siblings on board. So in this tree the most important question to ask first is the gender of the person you are considering, the next most important question is whether the age is above 9 and a half, followed lastly by whether this person has less than three spouses or siblings on board. Let's start by loading the Libraries we need
###Code
#This should look familar
import pandas as pd
#import numpy as np
#We'll draw a graph later on
import matplotlib.pyplot as plt
#Our 'Machine Learning pieces'
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import export_text
from sklearn import metrics
from sklearn import tree
print("Ready to proceed!")
###Output
_____no_output_____
###Markdown
Getting the data readyNow, let's load our data. Our decision tree can only work with numerical values, so we'll have to modify the columns of data that are text-based. As stated, preparing the data is usually the most difficult part of the process.
###Code
data = pd.read_csv("https://brockdsl.github.io/Python_2.0_Workshop/canadian_toy_dataset.csv")
data.columns = ["city","gender","age","income","ill"]
data.head()
###Output
_____no_output_____
###Markdown
This dataset is fairly clean; we just need to represent it all as numbers instead of text labels. So that means we need to change the columns:- `ill` - instead of a No / Yes label we'll use 0 and 2- `city` - this will break out the column into 8 different columns- `gender` - this will break out the column into 2 different columns
###Code
#Instead of yes/no we'll use a 0 or 2
#We use the value '2' to make our analysis later on less ambiguous
data["ill"].replace({"No":0, "Yes":2},inplace=True)
#We change categorical values into numeric ones using `dummies`
data = pd.get_dummies(data, columns=['city','gender'])
data.head(5)
###Output
_____no_output_____
###Markdown
The example above shows 5 entries that come from Montreal.
###Code
data.tail(5)
###Output
_____no_output_____
###Markdown
This example shows the last 5 entries in the dataframe that come from Edmonton. Now we are done with the most difficult part of the process, understanding the data and getting it ready.---- Building and Running the Model We now have our data cleaned up, and represented in a way that Scikit will be able to analyze. To be honest the most difficult part of the process is done.We now need to split our columns into two types:- **features** represent the data we use to build our guess- **target variable** the thing our model hopes to guess
###Code
#all of the following columns are features, we'll make a list of their names
features = ["age",\
"income",\
"city_Edmonton",\
"city_Halifax",\
"city_Montreal",
"city_Ottawa",\
"city_Regina",\
"city_Toronto",
"city_Vancouver",\
"city_Waterloo",\
"gender_Female",\
"gender_Male"]
X = data[features]
#We want to target the ill column
y = data.ill
###Output
_____no_output_____
###Markdown
Training and testingNow that we have built our model we need to get the data ready for it. We do this by breaking it into two different pieces. The diagram shows a conceptualization of how this is proportioned.- **Training set** - This is what is used to build the model. If we set this value too large the ML Model just _memorizes_ the data so we need to be careful when setting this value. This is called _overfitting_ the data.- **Testing set** - This is used to see if our guesses are correctBefore we were looking at the **columns** of the data, this investigation of training/testing looks at the **rows** of data.
###Code
#Training and test together make up 100% of the data!
#We start with a baseline of 30% of our data as testing
test_percent = 30
train_percent = 100 - test_percent
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_percent/100.0,
random_state=10)
###Output
_____no_output_____
###Markdown
Now the interesting part, we build our model, **train** it against the **training set** and see how it **predicts** against the **testing set**
###Code
# Create Decision Tree classifer object
treeClass = DecisionTreeClassifier()
# Train
treeClass = treeClass.fit(X_train,y_train)
#Predict
y_pred = treeClass.predict(X_test)
###Output
_____no_output_____
###Markdown
Accuracy of the ModelTo see how good our machine learning model is we need to see how accurate our predictions are. `Scikit` has built in functions and [metrics](https://scikit-learn.org/stable/modules/model_evaluation.html) to do this for us.
###Code
print("Accuracy: ")
print(metrics.accuracy_score(y_test,y_pred))
###Output
_____no_output_____
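###Markdown
Accuracy on its own can hide how the errors are split between the two classes. As an optional extra check (a small sketch reusing the `y_test` and `y_pred` computed above), scikit-learn's confusion matrix breaks the predictions down by true and predicted class.
###Code
#Rows are the true classes (0 = not ill, 2 = ill), columns are the predicted classes
print(metrics.confusion_matrix(y_test, y_pred))
###Output
_____no_output_____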
###Markdown
Making PredictionsNot bad. We can use our model to predict **ill** if we pass along all of the other parameters. Our model only tells us whether someone is ill or not. This is directly asking our classification model to give us a prediction based on a pretend record. Since this classifier tells us whether someone is ill or not ill, it has two outputs.
###Code
data.ill.unique()
# I randomly picked a record in the dataset to test if the prediction is correct.
# This is from line: 149120 of the datafile
person_x_yes = [
32, #age
82311, #income
1, #city_Edmonton
0, #city_Halifax
0, #city_Montreal
0, #city_Ottawa
0, #city_Regina
0, #city_Toronto
0, #city_Vancouver
0, #city_Waterloo
0, #gender_Female
1, #gender_Male
]
person_x_yes = pd.DataFrame([person_x_yes],columns=X_test.columns)
print("Someone who is ill")
print("Class predicted by model: ",treeClass.predict(person_x_yes))
print("Probablity associated with the guess: ",treeClass.predict_proba(person_x_yes))
# I randomly picked a record in the dataset to test if the prediction is correct.
# This is from line: 149121 of the datafile
person_x_no = [
40, #age
89780, #income
1, #city_Edmonton
0, #city_Halifax
0, #city_Montreal
0, #city_Ottawa
0, #city_Regina
0, #city_Toronto
0, #city_Vancouver
0, #city_Waterloo
1, #gender_Female
0, #gender_Male
]
#Use the dataframe of our fictional person in our model and get our prediction
person_x_no = pd.DataFrame([person_x_no],columns=X_test.columns)
print("\nSomeone who was not ill")
print("Class predicted by model: ", treeClass.predict(person_x_no))
print("Probablity associated with the guess: ", treeClass.predict_proba(person_x_no))
###Output
_____no_output_____
###Markdown
Our model is very confident in its ability to make predictions!With this model constructed we can ask it questions, so to speak. We can provide it with details about a pretend person and see in which classification the model will place this person. Making a prediction with our modelTry to set some parameters in the `pretend_person` variable below to make the prediction determine that the person is **ill**. If you can find one please copy and paste it into the chat box for others to try. You can do this by:- changing the values on **line 2 & line 3** for age and income- picking one line from **line 4 to line 11** and changing a single row to a value of 1- picking one line from **line 12 to line 13** and changing a single row to a value of 1 Question 1- Try to come up with some set of values that creates an `ill` person. Share your choices in the chat box- Try to come up with some set of values that doesn't create an `ill` person. Share your choices in the chat box
###Code
pretend_person = pd.DataFrame([
30, #age - FILL IN
5000, #income - FILL IN
1, #city_Edmonton - ONLY 1 city at a time
0, #city_Halifax
0, #city_Montreal
0, #city_Ottawa
0, #city_Regina
0, #city_Toronto
0, #city_Vancouver
0, #city_Waterloo
1, #gender_Female - ONLY 1 gender at a time
0, #gender_Male
])
#turn our pretend person into a dataframe that is the correct dimensions
pretend_person = pretend_person.T
pretend_person.columns = X_test.columns
print("\Pretend person details")
print(pretend_person.head())
print("Pretend person Class predicted")
print(treeClass.predict(pretend_person))
print("Pretend person probablity of guess")
print(treeClass.predict_proba(pretend_person))
###Output
_____no_output_____
###Markdown
Visualizing our Decision TreeWe can 'visualize' the decision tree to trace through the decisions it makes. In this case we can tell that **income level** is the most important factor that we consider since we ask so many questions about that before looking at any of the other features. Question 2What is the first question the tree is asking you to make? Share your thoughts in the chat box.
###Code
printed_tree = export_text(treeClass,feature_names=features)
print(printed_tree)
###Output
_____no_output_____
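###Markdown
If the text dump above is hard to read, newer versions of scikit-learn can also draw the same tree graphically with `tree.plot_tree` (a sketch reusing the fitted `treeClass`; it is truncated to a depth of 3 here only so the figure stays legible).
###Code
plt.figure(figsize=(16,8))
tree.plot_tree(treeClass, feature_names=features, max_depth=3, filled=True)
plt.show()
###Output
_____no_output_____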
###Markdown
Tuning parameters - Testing Set SizesTo make our models run better we can tweak _many, many, many_ different parameters. For example, we can vary the testing data size percentage. We'll try some different values and plot the accuracy of our predictions.
###Code
testing_percents = [1,5,10,20,30,99]
accuracy = []
training_percents = []
for test_ratio in sorted(testing_percents):
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_ratio/100.0,
random_state=10)
treeClassTest = DecisionTreeClassifier()
treeClassTest = treeClassTest.fit(X_train,y_train)
y_pred = treeClassTest.predict(X_test)
score = metrics.accuracy_score(y_test,y_pred)
accuracy.append(score)
training_percents.append(100 - test_ratio)
plt.plot(training_percents,accuracy)
plt.ylabel("Accuracy in %")
plt.xlabel("Training Size %")
plt.show()
###Output
_____no_output_____
###Markdown
(Your graph might look different, this is a statistical operation and will probably vary across different machines) Tuning Parameters - Maximum depth of the treeWe can restrict how deep our tree will be by setting `max_depth` in our `DecisionTreeClassifier` variable. Below is another example of trying different values in our ML model for this parameter and plotting out the accuracy of our model.
###Code
test_percent = 70
max_options = [5,10,15,20,25,30]
accuracy = []
tree_max = []
for max_d in sorted(max_options):
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_percent/100.0,
random_state=10,
)
#We set maximum depth in the DecisionTreeClassifer when we first create the variable
treeClassTest = DecisionTreeClassifier(max_depth=max_d)
treeClassTest = treeClassTest.fit(X_train,y_train)
y_pred = treeClassTest.predict(X_test)
score = metrics.accuracy_score(y_test,y_pred)
accuracy.append(score)
tree_max.append(max_d)
plt.plot(max_options,accuracy)
plt.ylabel("Accuracy")
plt.xlabel("Maximum Depth of Tree")
plt.show()
###Output
_____no_output_____
###Markdown
Example 2We are going to look at a [cancer survivor data set](http://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival) from the UCI machine learning archive.
###Code
cancer_data = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data",header=None)
cancer_data.columns = ["age","operation_year","positive_nodes","survival_status"]
cancer_data.describe()
###Output
_____no_output_____
###Markdown
Setting up our model Question 2- What should our features list look like?- What should our target column be?You need to do two things:1. uncomment one line between **line 3 & line 7**2. uncomment one line between **line 13 & line 16**
###Code
#What features list should we use, uncomment the correct answer
#cancer_features = ["age","operation_year","positive_nodes","survival_status"]
#cancer_features = ["age","operation_year","positive_nodes"]
#cancer_features = ["age","operation_year"]
#cancer_features = ["age"]
#cancer_features = ["operation_year","positive_nodes","survival_status"]
#What target should we use, uncomment the correct answer
#cancer_target = cancer_data.age
#cancer_target = cancer_data.operation_year
#cancer_target = cancer_data.positive_nodes
#cancer_target = cancer_data.survival_status
X = cancer_data[cancer_features]
y = cancer_target
###Output
_____no_output_____
###Markdown
Question 2 ContinuedNow that you have your features and target set, run the next cell to build and test your model.Once you are done type "finished" in the chatbox.
###Code
#We'll start with 40 just for fun
test_percent = 40
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_percent/100.0,
random_state=10)
# Create Decision Tree classifer object
treeClass = DecisionTreeClassifier()
# Train
treeClass = treeClass.fit(X_train,y_train)
#Predict
y_pred = treeClass.predict(X_test)
#Accuracy
print("Accuracy:")
print(metrics.accuracy_score(y_test,y_pred))
###Output
_____no_output_____
###Markdown
Visualizing the treeNow that we have a tree built for this scenario let's display it to the screen.
###Code
printed_tree = export_text(treeClass,feature_names=cancer_features)
print(printed_tree)
###Output
_____no_output_____
###Markdown
Tuning the testing set size Question 3Experiment with adding some values in the `testing_percents_cancer` list on **line 3**. In the chat box put in the testing set size that produced the best results.When you are done experimenting type "Done!" in the chat box.
###Code
# put in some values between 0 - 99
# add at least 4 values in between the commas
testing_percents_cancer = [,,,]
#Let's make sure the numbers are all increasing so our graph looks good
testing_percents_cancer = sorted(testing_percents_cancer)
accuracy = []
training_percents = []
for test_ratio in testing_percents_cancer:
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_ratio/100.0,
random_state=10)
treeClassTest = DecisionTreeClassifier()
treeClassTest = treeClassTest.fit(X_train,y_train)
y_pred = treeClassTest.predict(X_test)
score = metrics.accuracy_score(y_test,y_pred)
accuracy.append(score)
training_percents.append(100 - test_ratio)
plt.plot(training_percents,accuracy)
plt.ylabel("Accuracy in %")
plt.xlabel("Training Size %")
plt.show()
###Output
_____no_output_____
###Markdown
Tuning Parameters, Maximum Tree Depth Question 4Experiment with adding some values in the `max_options_cancer` list on **line 5**. In the chat box put in the maximum depth that produced the best results.When you are done experimenting type "Done!" in the chat box.
###Code
test_percent = 30
#Put in some options between 1 and 40
#add at least 4 values between the commas
max_options_cancer = [,,,]
#Let's make sure the numbers are all increasing so our graph looks good
max_options_cancer = sorted(max_options_cancer)
accuracy = []
tree_max = []
for max_d in max_options_cancer:
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_percent/100.0,
random_state=10,
)
#We set maximum depth in the DecisionTreeClassifer when we first create the variable
treeClassTest = DecisionTreeClassifier(max_depth=max_d)
treeClassTest = treeClassTest.fit(X_train,y_train)
y_pred = treeClassTest.predict(X_test)
score = metrics.accuracy_score(y_test,y_pred)
accuracy.append(score)
tree_max.append(max_d)
plt.plot(max_options_cancer,accuracy)
plt.ylabel("Accuracy")
plt.xlabel("Maximum Depth of Tree")
plt.show()
###Output
_____no_output_____
###Markdown
Maximizing Accuracy Question 5- What is the best combination of `max_depth` and `testing size` that produced the *highest* accuracy?- What is the worst combination of `max_depth` and `testing size` that produced the *lowest* accuracy? You just need to add some values to **line 3** & **line 4**.Share your answers in the chat.You can experiment using the following cell:
###Code
#Change the following values
test_percent =
max_tree_depth =
#HINT: You can use the previous graphs to help you pick your values
X_train, X_test, y_train, y_test = train_test_split(X, \
y, \
test_size=test_percent/100.0,
random_state=10)
# Create Decision Tree classifer object
treeClass = DecisionTreeClassifier(max_depth=max_tree_depth)
# Train
treeClass = treeClass.fit(X_train,y_train)
#Predict
y_pred = treeClass.predict(X_test)
#Accuracy?
print("\nAccuracy of our tree: ")
print(metrics.accuracy_score(y_test,y_pred))
#Display our final tree
print("\nBest tree found:\n")
printed_tree = export_text(treeClass,feature_names=cancer_features)
print(printed_tree)
###Output
_____no_output_____ |
notebooks/09-Naive_Bayes.ipynb | ###Markdown
08 - Naive Bayesby [Alejandro Correa Bahnsen](albahnsen.com/)version 0.1, Mar 2016 Part of the class [Practical Machine Learning](https://github.com/albahnsen/PracticalMachineLearningClass)This notebook is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US). Special thanks goes to [Kevin Markham](https://github.com/justmarkham), [Sebastian Raschka](http://sebastianraschka.com/) & [Scikit-learn docs](http://scikit-learn.org/) Naive BayesNaive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features. Given a class variable $y$ and a dependent feature vector $x_1$ through $x_n$, Bayes' theorem states the following relationship:$$ P(y \mid x_1, \dots, x_n) = \frac{P(y) P(x_1, \dots x_n \mid y)} {P(x_1, \dots, x_n)}$$Using the naive independence assumption that$$ P(x_i | y, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = P(x_i | y),$$for all $i$, this relationship is simplified to$$ P(y \mid x_1, \dots, x_n) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)} {P(x_1, \dots, x_n)}$$Since $P(x_1, \dots, x_n)$ is constant given the input, we can use the following classification rule:$$ P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y) $$$$ \Downarrow$$$$ \hat{y} = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i \mid y),$$and we can use Maximum A Posteriori (MAP) estimation to estimate $P(y)$ and $P(x_i \mid y)$; the former is then the relative frequency of class $y$ in the training set.The different naive Bayes classifiers differ mainly by the assumptions they make regarding the distribution of $P(x_i \mid y)$.In spite of their apparently over-simplified assumptions, naive Bayes classifiers have worked quite well in many real-world situations, famously document classification and spam filtering. They require a small amount of training data to estimate the necessary parameters. (For theoretical reasons why naive Bayes works well, and on which types of data it does, see the references below.)Naive Bayes learners and classifiers can be extremely fast compared to more sophisticated methods. The decoupling of the class conditional feature distributions means that each distribution can be independently estimated as a one dimensional distribution. This in turn helps to alleviate problems stemming from the curse of dimensionality.On the flip side, although naive Bayes is known as a decent classifier, it is known to be a bad estimator, so the probability outputs from ``predict_proba`` are not to be taken too seriously. Gaussian Naive Bayes`GaussianNB` implements the Gaussian Naive Bayes algorithm for classification. The likelihood of the features is assumed to be Gaussian:$$ P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma^2_y}} \exp\left(-\frac{(x_i - \mu_y)^2}{2\sigma^2_y}\right) $$The parameters $\sigma_y$ and $\mu_y$ are estimated using maximum likelihood. Applying Bayes' theorem to iris classification Preparing the dataWe'll read the iris data into a DataFrame, and **round up** all of the measurements to the next integer:
###Code
import pandas as pd
import numpy as np
# read the iris data into a DataFrame
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
iris = pd.read_csv(url, header=None, names=col_names)
iris.head()
# apply the ceiling function to the numeric columns
iris.loc[:, 'sepal_length':'petal_width'] = iris.loc[:, 'sepal_length':'petal_width'].apply(np.ceil)
iris.head()
###Output
_____no_output_____
###Markdown
Deciding how to make a predictionLet's say that I have an **out-of-sample iris** with the following measurements: **7, 3, 5, 2**. How might I predict the species?
###Code
# show all observations with features: 7, 3, 5, 2
iris[(iris.sepal_length==7) & (iris.sepal_width==3) & (iris.petal_length==5) & (iris.petal_width==2)]
# count the species for these observations
iris[(iris.sepal_length==7) & (iris.sepal_width==3) & (iris.petal_length==5) & (iris.petal_width==2)].species.value_counts()
# count the species for all observations
iris.species.value_counts()
###Output
_____no_output_____
###Markdown
Let's frame this as a **conditional probability problem**: What is the probability of some particular species, given the measurements 7, 3, 5, and 2?$$P(species \ | \ 7352)$$We could calculate the conditional probability for **each of the three species**, and then predict the species with the **highest probability**:$$P(setosa \ | \ 7352)$$$$P(versicolor \ | \ 7352)$$$$P(virginica \ | \ 7352)$$ Calculating the probability of each species**Bayes' theorem** gives us a way to calculate these conditional probabilities.Let's start with **versicolor**:$$P(versicolor \ | \ 7352) = \frac {P(7352 \ | \ versicolor) \times P(versicolor)} {P(7352)}$$We can calculate each of the terms on the right side of the equation:$$P(7352 \ | \ versicolor) = \frac {13} {50} = 0.26$$$$P(versicolor) = \frac {50} {150} = 0.33$$$$P(7352) = \frac {17} {150} = 0.11$$Therefore, Bayes' theorem says the **probability of versicolor given these measurements** is:$$P(versicolor \ | \ 7352) = \frac {0.26 \times 0.33} {0.11} = 0.76$$Let's repeat this process for **virginica** and **setosa**:$$P(virginica \ | \ 7352) = \frac {0.08 \times 0.33} {0.11} = 0.24$$$$P(setosa \ | \ 7352) = \frac {0 \times 0.33} {0.11} = 0$$We predict that the iris is a versicolor, since that species had the **highest conditional probability**. Using sklearn
###Code
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
X = iris[['sepal_length', 'sepal_width', 'petal_length', 'petal_width']]
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(iris['species'])
gnb.fit(X, y)
y_pred = gnb.predict(X)
print("Number of mislabeled points out of a total %d points : %d"
% (iris.shape[0],(y != y_pred).sum()))
###Output
Number of mislabeled points out of a total 150 points : 10
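###Markdown
As an optional sanity check on the hand calculation above, the same conditional probability can be recovered with a few pandas counts (a small sketch; the label 'Iris-versicolor' assumes the naming used in the UCI data file).
###Code
# P(versicolor | 7352) = P(7352 | versicolor) * P(versicolor) / P(7352), on the rounded-up data
versicolor = iris[iris.species == 'Iris-versicolor']
p_x_given_versicolor = ((versicolor.sepal_length==7) & (versicolor.sepal_width==3) & (versicolor.petal_length==5) & (versicolor.petal_width==2)).mean()
p_versicolor = (iris.species == 'Iris-versicolor').mean()
p_x = ((iris.sepal_length==7) & (iris.sepal_width==3) & (iris.petal_length==5) & (iris.petal_width==2)).mean()
print(p_x_given_versicolor * p_versicolor / p_x)
###Output
_____no_output_____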
###Markdown
The intuition behind Bayes' theoremLet's make some hypothetical adjustments to the data, to demonstrate how Bayes' theorem makes intuitive sense:Pretend that **more of the existing versicolors had measurements of 7352:**- $P(7352 \ | \ versicolor)$ would increase, thus increasing the numerator.- It would make sense that given an iris with measurements of 7352, the probability of it being a versicolor would also increase.Pretend that **most of the existing irises were versicolor:**- $P(versicolor)$ would increase, thus increasing the numerator.- It would make sense that the probability of any iris being a versicolor (regardless of measurements) would also increase.Pretend that **17 of the setosas had measurements of 7352:**- $P(7352)$ would double, thus doubling the denominator.- It would make sense that given an iris with measurements of 7352, the probability of it being a versicolor would be cut in half. Why is the Naive Bayes Classifier naive?Let's start by taking a quick look at the Bayes' Theorem:In context of pattern classification, we can express it asIf we use the Bayes Theorem in classification, our goal (or objective function) is to maximize the posterior probabilityNow, let's talk a bit more about the individual components. The priors are representing our expert (or any other prior) knowledge; in practice, the priors are often estimated via MLE (computed as class frequencies). The evidence term cancels because it is constant for all classes.Moving on to the "naive" part in the Naive Bayes Classifier: What makes it "naive" is that we compute the conditional probability (sometimes also called likelihoods) as the product of the individual probabilities for each feature:Since this assumption (the absolute independence of features) is probably never met in practice, it's the truly "naive" part in naive Bayes. Working with Text Data and Naive Bayes in scikit-learn Representing text as dataFrom the [scikit-learn documentation](http://scikit-learn.org/stable/modules/feature_extraction.htmltext-feature-extraction):> Text Analysis is a major application field for machine learning algorithms. However the raw data, a sequence of symbols cannot be fed directly to the algorithms themselves as most of them expect **numerical feature vectors with a fixed size** rather than the **raw text documents with variable length**.We will use [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to "convert text into a matrix of token counts":
###Code
from sklearn.feature_extraction.text import CountVectorizer
# start with a simple example
simple_train = ['call you tonight', 'Call me a cab', 'please call me... PLEASE!']
# learn the 'vocabulary' of the training data
vect = CountVectorizer()
vect.fit(simple_train)
vect.get_feature_names()
# transform training data into a 'document-term matrix'
simple_train_dtm = vect.transform(simple_train)
simple_train_dtm
# print the sparse matrix
print(simple_train_dtm)
# convert sparse matrix to a dense matrix
simple_train_dtm.toarray()
# examine the vocabulary and document-term matrix together
pd.DataFrame(simple_train_dtm.toarray(), columns=vect.get_feature_names())
###Output
_____no_output_____
###Markdown
From the [scikit-learn documentation](http://scikit-learn.org/stable/modules/feature_extraction.htmltext-feature-extraction):> In this scheme, features and samples are defined as follows:> - Each individual token occurrence frequency (normalized or not) is treated as a **feature**.> - The vector of all the token frequencies for a given document is considered a multivariate **sample**.> A **corpus of documents** can thus be represented by a matrix with **one row per document** and **one column per token** (e.g. word) occurring in the corpus.> We call **vectorization** the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the **Bag of Words** or "Bag of n-grams" representation. Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.
###Code
# transform testing data into a document-term matrix (using existing vocabulary)
simple_test = ["please don't call me"]
simple_test_dtm = vect.transform(simple_test)
simple_test_dtm.toarray()
# examine the vocabulary and document-term matrix together
pd.DataFrame(simple_test_dtm.toarray(), columns=vect.get_feature_names())
###Output
_____no_output_____
###Markdown
**Summary:**- `vect.fit(train)` learns the vocabulary of the training data- `vect.transform(train)` uses the fitted vocabulary to build a document-term matrix from the training data- `vect.transform(test)` uses the fitted vocabulary to build a document-term matrix from the testing data (and ignores tokens it hasn't seen before) Reading SMS data
###Code
# read tab-separated file
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/sms.tsv'
col_names = ['label', 'message']
sms = pd.read_table(url, sep='\t', header=None, names=col_names)
print(sms.shape)
sms.head(20)
sms.label.value_counts()
# convert label to a numeric variable
sms['label'] = sms.label.map({'ham':0, 'spam':1})
# define X and y
X = sms.message
y = sms.label
# split into training and testing sets
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
print(X_train.shape)
print(X_test.shape)
###Output
(4179,)
(1393,)
###Markdown
Vectorizing SMS data
###Code
# instantiate the vectorizer
vect = CountVectorizer()
# learn training data vocabulary, then create document-term matrix
vect.fit(X_train)
X_train_dtm = vect.transform(X_train)
X_train_dtm
# alternative: combine fit and transform into a single step
X_train_dtm = vect.fit_transform(X_train)
X_train_dtm
# transform testing data (using fitted vocabulary) into a document-term matrix
X_test_dtm = vect.transform(X_test)
X_test_dtm
###Output
_____no_output_____
###Markdown
Examining the tokens and their counts
###Code
# store token names
X_train_tokens = vect.get_feature_names()
# first 50 tokens
print(X_train_tokens[:50])
# last 50 tokens
print(X_train_tokens[-50:])
# view X_train_dtm as a dense matrix
X_train_dtm.toarray()
# count how many times EACH token appears across ALL messages in X_train_dtm
import numpy as np
X_train_counts = np.sum(X_train_dtm.toarray(), axis=0)
X_train_counts
X_train_counts.shape
# create a DataFrame of tokens with their counts
pd.DataFrame({'token':X_train_tokens, 'count':X_train_counts}).sort_values('count')
###Output
_____no_output_____
###Markdown
Building a Multinomial Naive Bayes model`MultinomialNB` implements the naive Bayes algorithm for multinomially distributed data, and is one of the two classic naive Bayes variants used in text classification (where the data are typically represented as word vector counts, although tf-idf vectors are also known to work well in practice). The distribution is parametrized by vectors $\theta_y = (\theta_{y1},\ldots,\theta_{yn})$ for each class $y$, where $n$ is the number of features (in text classification, the size of the vocabulary) and $\theta_{yi}$ is the probability $P(x_i \mid y)$ of feature $i$ appearing in a sample belonging to class $y$. The parameters $\theta_y$ are estimated by a smoothed version of maximum likelihood, i.e. relative frequency counting:$$ \hat{\theta}_{yi} = \frac{ N_{yi} + \alpha}{N_y + \alpha n}$$where $N_{yi} = \sum_{x \in T} x_i$ is the number of times feature $i$ appears in a sample of class $y$ in the training set $T$, and $N_{y} = \sum_{i=1}^{|T|} N_{yi}$ is the total count of all features for class $y$. The smoothing prior $\alpha \ge 0$ accounts for features not present in the learning samples and prevents zero probabilities in further computations. Setting $\alpha = 1$ is called Laplace smoothing, while $\alpha < 1$ is called Lidstone smoothing.
###Code
# train a Naive Bayes model using X_train_dtm
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
# make class predictions for X_test_dtm
y_pred_class = nb.predict(X_test_dtm)
# calculate accuracy of class predictions
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
# confusion matrix
print(metrics.confusion_matrix(y_test, y_pred_class))
# predict (poorly calibrated) probabilities
y_pred_prob = nb.predict_proba(X_test_dtm)[:, 1]
y_pred_prob
# calculate AUC
print(metrics.roc_auc_score(y_test, y_pred_prob))
# print message text for the false positives
X_test[y_test < y_pred_class]
# print message text for the false negatives
X_test[y_test > y_pred_class]
# what do you notice about the false negatives?
X_test[3132]
###Output
_____no_output_____
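###Markdown
The smoothing prior $\alpha$ described earlier is exposed as the `alpha` argument of `MultinomialNB`. As a quick optional sketch (reusing the document-term matrices already built; the particular values below are arbitrary), a few settings can be compared by test accuracy.
###Code
print(0.1, MultinomialNB(alpha=0.1).fit(X_train_dtm, y_train).score(X_test_dtm, y_test))
print(1.0, MultinomialNB(alpha=1.0).fit(X_train_dtm, y_train).score(X_test_dtm, y_test))
print(10.0, MultinomialNB(alpha=10.0).fit(X_train_dtm, y_train).score(X_test_dtm, y_test))
###Output
_____no_output_____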
###Markdown
Comparing Multinomial and Gaussian Naive Bayesscikit-learn documentation: [MultinomialNB](http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html) and [GaussianNB](http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html)Dataset: [Pima Indians Diabetes](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes) from the UCI Machine Learning Repository
###Code
# read the data
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data'
col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
pima = pd.read_csv(url, header=None, names=col_names)
# notice that all features are continuous
pima.head()
# create X and y
X = pima.drop('label', axis=1)
y = pima.label
# split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# import both Multinomial and Gaussian Naive Bayes
from sklearn.naive_bayes import MultinomialNB, GaussianNB
from sklearn import metrics
# testing accuracy of Multinomial Naive Bayes
mnb = MultinomialNB()
mnb.fit(X_train, y_train)
y_pred_class = mnb.predict(X_test)
metrics.accuracy_score(y_test, y_pred_class)
# testing accuracy of Gaussian Naive Bayes
gnb = GaussianNB()
gnb.fit(X_train, y_train)
y_pred_class = gnb.predict(X_test)
metrics.accuracy_score(y_test, y_pred_class)
###Output
_____no_output_____ |
Forecasting_Jupyter/HW4/HW4.ipynb | ###Markdown
Key Resources:* [Statsmodels ARMA Model](http://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARMA.htmlstatsmodels.tsa.arima_model.ARMA)* [Statsmodels ARMA Fit](http://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARMA.fit.html)* [Statsmodels ARMA Results](http://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARMAResults.htmlstatsmodels.tsa.arima_model.ARMAResults)
###Code
import statsmodels.api as sm
from statsmodels import tsa
from statsmodels import graphics as smg
import numpy as np
from scipy import stats as SPstats
from time import strptime
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
# import pygal as pg
import math
%matplotlib inline
# import Housing Starts
HousingStarts_initial = pd.read_csv('HousingStarts.csv')
HousingStarts_initial.set_index('Year', inplace=True)
HousingStarts_initial['LogHStrt'] = HousingStarts_initial['HousingStarts'].apply(math.log)
HousingStarts_initial
# Import GDP
GDPdat_initial = pd.read_csv('GDP.csv')
GDPdat_initial['LogGDP'] = GDPdat_initial['GDP'].apply(math.log)
GDPdat_initial['DATE'] = pd.to_datetime(GDPdat_initial['DATE'])
GDPdat_initial.set_index('DATE', inplace=True)
# insert time predictor variable, intergers 1 to end
GDPdat_initial.insert(2,'TIME',list(range(1,len(GDPdat_initial['LogGDP'])+1)))
GDPdat_initial['Diff1LogGDP'] = GDPdat_initial['LogGDP'] - GDPdat_initial['LogGDP'].shift()
GDPdat_initial.tail(10)
# Import CPI
CPIdat_initial = pd.read_csv('CPI.csv')
CPIdat_initial.set_index('DATE', inplace=True)
CPIdat_initial['LogCPI'] = CPIdat_initial['CPI'].apply(math.log)
CPIdat_initial['Diff1LogCPI'] = CPIdat_initial['LogCPI'] - CPIdat_initial['LogCPI'].shift()
CPIdat_initial['Diff2LogCPI'] = CPIdat_initial['Diff1LogCPI'] - CPIdat_initial['Diff1LogCPI'].shift()
CPIdat_initial
# Housing Start ACF, PACF
smg.tsaplots.plot_acf(HousingStarts_initial['LogHStrt'])
plt.title('Housing Start ACF')
smg.tsaplots.plot_pacf(HousingStarts_initial['LogHStrt'])
plt.title('Housing Start PACF')
#LogGDP & Diff1_LogGDP ACF, PACF
smg.tsaplots.plot_acf(GDPdat_initial['LogGDP'],lags=60)
plt.title('LogGDP ACF')
smg.tsaplots.plot_pacf(GDPdat_initial['LogGDP'],lags=60)
plt.title('LogGDP PACF')
smg.tsaplots.plot_acf(GDPdat_initial.iloc[1:,3],lags=100)
plt.title('Diff1LogGDP ACF')
smg.tsaplots.plot_pacf(GDPdat_initial.iloc[1:,3],lags=100)
plt.title('Diff1LogGDP PACF')
# below returns PACF correlation values in an array
Diff1LogGDP_PACFvalues = tsa.stattools.pacf(GDPdat_initial.iloc[1:,3])
Diff1LogGDP_ACFvalues = tsa.stattools.acf(GDPdat_initial.iloc[1:,3])
# calculate yule walker for question 3.
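# (the quadratic below solves r1 = b/(1+b^2) for the MA(1) coefficient b;
# the two roots are reciprocals of each other, and the invertible choice is the one with |b| < 1)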
bPlus_calc = (1+ np.sqrt(1-4*Diff1LogGDP_PACFvalues[1]**2))/(2*Diff1LogGDP_PACFvalues[1])
bMinus_calc = (1- np.sqrt(1-4*Diff1LogGDP_PACFvalues[1]**2))/(2*Diff1LogGDP_PACFvalues[1])
print('p=', Diff1LogGDP_PACFvalues[1], '\nb+:' , bPlus_calc, '\nb-:' , bMinus_calc)
# question 4
Diff1LogGDP_PACFvalues[1]
# Question 5
r1 = Diff1LogGDP_ACFvalues[1]
r2 = Diff1LogGDP_ACFvalues[2]
Ahat1 = ((r2-(r1/r2))/(r1-1))
Ahat2 = r2 - (Ahat1 * r1)
print('a1: ', Ahat1, 'a2: ', Ahat2)
# Diff1_LogCPI ACF and PACF
CPIACF = smg.tsaplots.plot_acf(CPIdat_initial.iloc[1:,2],lags=70)
plt.title('Diff1 LogCPI ACF')
smg.tsaplots.plot_pacf(CPIdat_initial.iloc[1:,2],lags=70)
plt.title('Diff1 LogCPI PACF')
# Diff1_LogCPI ACF and PACF
CPIACF = smg.tsaplots.plot_acf(CPIdat_initial.iloc[2:,3],lags=70)
plt.title('Diff2 LogCPI ACF')
smg.tsaplots.plot_pacf(CPIdat_initial.iloc[2:,3],lags=70)
plt.title('Diff2 LogCPI PACF')
# fit an ARMA(0,1) model. ?Same as MA(1) model?
D1LogGDP_MA1 = tsa.arima_model.ARMA(GDPdat_initial.iloc[1:,3],order=(0,1))
# trend='nc' removes constant
D1LogGDP_MA1 = D1LogGDP_MA1.fit(trend='nc')
D1LogGDP_MA1.summary()
# fit an ARMA(1,0) model. ?Same as AR(1) model?
D1LogGDP_ar1 = tsa.arima_model.ARMA(GDPdat_initial.iloc[1:,3],order=(1,0))
# trend='nc' removes constant
D1LogGDP_ar1 = D1LogGDP_ar1.fit(trend='nc')
D1LogGDP_ar1.summary()
# fit an ARMA(2,0) model.
D1LogGDP_ar2 = tsa.arima_model.ARMA(GDPdat_initial.iloc[1:,3],order=(2,0))
# trend='nc' removes constant
D1LogGDP_ar2 = D1LogGDP_ar2.fit(trend='nc')
D1LogGDP_ar2.summary()
# fit an ARIMA(2, 1, 0) model.
D1LogGDP_arima210 = tsa.arima_model.ARIMA(GDPdat_initial.iloc[1:,3],order=(2,1,0))
# trend='nc' removes constant
D1LogGDP_arima210 = D1LogGDP_arima210.fit(trend='nc')
D1LogGDP_arima210.summary()
D1LogGDP_arima210.predict(start=282,end=284)
GDPdat_initial.shape
###Output
_____no_output_____ |
notebooks/04_summary_statistics.ipynb | ###Markdown
eICU Collaborative Research Database Notebook 4: Summary statisticsThis notebook shows how summary statistics can be computed for a patient cohort using the `tableone` package. Usage instructions for tableone are at: https://pypi.org/project/tableone/ Load libraries and connect to the database
###Code
# Import libraries
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path
# Make pandas dataframes prettier
from IPython.display import display, HTML
# Access data using Google BigQuery.
from google.colab import auth
from google.cloud import bigquery
# authenticate
auth.authenticate_user()
# Set up environment variables
project_id='mlhc-workshop'
os.environ["GOOGLE_CLOUD_PROJECT"]=project_id
# Helper function to read data from BigQuery into a DataFrame.
def run_query(query):
return pd.io.gbq.read_gbq(query, project_id=project_id, dialect="standard")
###Output
_____no_output_____
###Markdown
Install and load the `tableone` packageThe tableone package can be used to compute summary statistics for a patient cohort. Unlike the previous packages, it is not installed by default in Colab, so we will need to install it first.
###Code
!pip install tableone
# Import the tableone class
from tableone import TableOne
###Output
_____no_output_____
###Markdown
Load the patient cohortIn this example, we will load all columns from the patient table, and link them to the APACHE data to provide richer summary information.
###Code
# Link the patient and apachepatientresult tables on patientunitstayid
# using an inner join.
query = """
SELECT p.unitadmitsource, p.gender, p.age, p.ethnicity, p.admissionweight,
p.unittype, p.unitstaytype, a.acutephysiologyscore,
a.apachescore, a.actualiculos, a.actualhospitalmortality,
a.unabridgedunitlos, a.unabridgedhosplos
FROM `physionet-data.eicu_crd_demo.patient` p
INNER JOIN `physionet-data.eicu_crd_demo.apachepatientresult` a
ON p.patientunitstayid = a.patientunitstayid
WHERE apacheversion LIKE 'IVa'
"""
cohort = run_query(query)
cohort.head()
###Output
_____no_output_____
###Markdown
Calculate summary statisticsBefore summarizing the data, we will need to convert the ages to numerical values.
###Code
cohort['agenum'] = pd.to_numeric(cohort['age'], errors='coerce')
columns = ['unitadmitsource', 'gender', 'agenum', 'ethnicity',
'admissionweight','unittype','unitstaytype',
'acutephysiologyscore','apachescore','actualiculos',
'unabridgedunitlos','unabridgedhosplos']
TableOne(cohort, columns=columns, labels={'agenum': 'age'},
groupby='actualhospitalmortality',
label_suffix=True, limit=4)
###Output
_____no_output_____
###Markdown
Questions- Are the severity of illness measures higher in the survival or non-survival group?- What issues suggest that some of the summary statistics might be misleading?- How might you address these issues? Visualizing the dataPlotting the distribution of each variable by group level via histograms, kernel density estimates and boxplots is a crucial component of data analysis pipelines. Visualization is often the only way to detect problematic variables in many real-life scenarios. We'll review a couple of the variables.
###Code
# Plot distributions to review possible multimodality
cohort[['acutephysiologyscore','agenum']].dropna().plot.kde(figsize=[12,8])
plt.legend(['APS Score', 'Age (years)'])
plt.xlim([-30,250])
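# Optional sketch: a boxplot of the APS score split by hospital mortality,
# using the same 'cohort' dataframe; unlike the smoothed KDE above it keeps the outliers visible
cohort.boxplot(column='acutephysiologyscore', by='actualhospitalmortality', figsize=[10,6])
plt.show()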
###Output
_____no_output_____ |
launcher.ipynb | ###Markdown
EMDL Launcher
###Code
import os, sys, shutil, traceback
line1 = "Type a name for your new project : "
line2 = \
"Select \n\
r + [enter] : RESOLVE\n"
project_name = input(line1)
xpsystem = input(line2)
project_dir = "project/" + project_name
if os.path.exists(project_dir):
try:
raise FileExistsError(project_dir)
except FileExistsError:
print("Project '{}' already exists. Rerun this cell and type a new project name.".format(project_name))
sys.exit()
else:
if xpsystem == 'r':
shutil.copytree("core/resolve_unit", project_dir)
else:
print("Invalid key command.\nRerun this cell and enter a appropriate key command for your exploration system.")
sys.exit
print("Please close this notebook and move to the notebook in your new project")
###Output
Please close this notebook and move to the notebook in your new project
###Markdown
Lab CudaVision (MA-INF 4308) Submission by: Tridivraj Bhattacharyya (3035538) Dependencies:1. Python 3.62. Pytorch 0.4.13. Torchvision4. Visdom5. Numpy6. ImageIo7. OpenCV 3.x8. Pandas9. PIL Usable Parameters to initialize the code:*select model type - 1 (SweatyNet1) | 2 (SweatyNet2) | 3 (SweatyNet3) (Default: 3)* ``model = 2`` *training set annotations and file names* ``traincsv = './data/imageset_train_annotations.csv'`` *path to training set files* ``trainset = './data/train'`` *size of training batch* ``train_batch_size = 4`` *test set annotations and file names* ``testcsv = './data/imageset_test_annotations.csv'`` *path to test set files* ``testset = './data/test'`` *size of test batch* ``test_batch_size = 10`` *checkpoint file to load* ``checkpoint = None`` *no of workers to load dataset* ``workers = 2`` *no of iterations* ``niter = 25`` *learning rate* ``lr = 0.001`` *momentum for adam optimizer* ``beta1 = 0.9`` *path to output checkpoint and data* ``outpath = './output'`` *manual seed for randomizer* ``manual_seed = 42`` *the height of the input image to network* ``image_height = 512`` *the width of the input image to network* ``image_width = 640`` *Input video file to process. Training will be turned off.* ``input_vid = None`` *To use the following flags, just pass them as arguments while calling main.py, e.g. to turn off training use ``python main.py --train_off``* *Freeze weights of SweatyNet (Default: False)* ``freeze_wts = False`` *Add ConvLSTM to model (Default: False)* ``use_ConvLSTM = False`` *Turn off training (Default: False)* ``train_off = False`` **Instructions: The main.py initializes the models and starts the training. Available parameters along with their descriptions and default values are provided above. To freeze weights include the --freeze_wts flag. Similarly, to include the Convolutional LSTM in the model, add the --use_ConvLSTM flag.****Please start the visdom server separately using ``python -m visdom.server``. Starting it in jupyter might slow down the notebook.**
###Code
# Example Usage:
# Train without ConvLSTM
# python main.py --model 1 --niter 25
# Transfer learning and train with ConvLSTM
# python main.py --model 1 --niter 25 --checkpoint <checkpoint_file> --use_ConvLSTM --freeze_wts
# To only test the model
# python main.py --model 1 --checkpoint <checkpoint_file> --use_ConvLSTM --freeze_wts --train_off
# To process video file
# python main.py --model 1 --checkpoint <checkpoint_file> --use_ConvLSTM --freeze_wts --input_vid <video file>
!python main.py --model 1 --niter 1
###Output
Namespace(beta1=0.9, checkpoint=None, freeze_wts=False, image_height=512, image_width=640, input_vid=None, lr=0.001, manual_seed=42, model=1, niter=1, outpath='./output', test_batch_size=10, testcsv='./data/imageset_test_annotations.csv', testset='./data/test', train_batch_size=4, train_off=False, traincsv='./data/imageset_train_annotations.csv', trainset='./data/train', use_ConvLSTM=False, workers=2)
=============================================================
./output/checkpoint already exists.
=============================================================
./output/media already exists.
=============================================================
Model: SweatyNet1 selected.
=============================================================
Model Initialized. Start Epoch: 0, Min Validation Loss: inf
=============================================================
Training in progress...
-------------------------------------------------------------
Train Epoch: 1 [0/2900 (0%)] Loss: 4924.100586
Train Epoch: 1 [900/2900 (31%)] Loss: 19.820124
Train Epoch: 1 [1800/2900 (62%)] Loss: 2.825253
Train Epoch: 1 [2700/2900 (93%)] Loss: 8.813757
Epoch: 1, Train Loss: 42.865596771240234, Validation Loss: 6.435216346289963e-05, Validation Recall: 96.48%, Validation FDR: 9.73%, Epoch Run Time: 2 minute(s) and 29 second(s)
=============================================================
Processing test data...
-------------------------------------------------------------
Test Loss: 7.68202735343948e-05, Testset Recall: 95.93%, Testset FDR: 2.68%
=============================================================
Total time taken: 0 hour(s), 3 minute(s) and 3 second(s)
###Markdown
Image splitting
###Code
# Give each image name an index so we can tell which split image (tile) it came from
# After duplicating the same model, send the results together with the tile index from inside the model
import pandas as pd
from PIL import Image
import os
from IPython.display import display
n_tile = 2 # partitioned width or height
path = 'data/val2017'
save_dir = 'data/cropped/val2017'
file_list = os.listdir(path)
for img_name in file_list:
img = Image.open(path+'/'+img_name)
#display(img)
width, height = img.size
sub_width = int(4*width/(3*n_tile+1))
sub_height = int(4*height/(3*n_tile+1))
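# note: with n_tile = 2 each tile covers 4/7 of the width/height, so adjacent tiles
# overlap by roughly 25% of a tile (assumed intent of the 4/(3*n_tile+1) factor above)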
#print('original : ',width,height)
#print('sub : ',sub_width,sub_height)
ind = 0
for h in range(n_tile):
for w in range(n_tile):
#print(w*(width-sub_width),h*(height-sub_height)) # start point (top-left)
#print(sub_width+w*(width-sub_width),sub_height+h*(height-sub_height)) # end point (bottom-right)
area = (w*(width-sub_width),h*(height-sub_height),sub_width+w*(width-sub_width),sub_height+h*(height-sub_height))
sub_img = img.crop(area)
#display(sub_img)
save_file = save_dir+'/'+img_name[:-4]+"_"+str(ind)+img_name[-4:]
ind+=1
sub_img.save(save_file)
###Output
_____no_output_____ |
03CNN/05Pretrained_Model.ipynb | ###Markdown
Image classification with a pretrained model. Reference) https://pytorch.org/vision/stable/models.html A pretrained model is a neural network trained on a large-scale image dataset such as ImageNet. Using TorchVision, we can load a model pre-trained on ImageNet.
###Code
from torchvision import models
import torch
#torchvision 모듈 에서 사용 가능한 다른 모델 확인
dir(models)
###Output
_____no_output_____
###Markdown
Comparing the models. Step 1: Load the pretrained model. In general, PyTorch model files have the extension .pt or .pth. Once the weights are downloaded, you can also inspect the details of the network architecture as shown below. The model is downloaded to the directory specified by 'TORCH_HOME'.
###Code
import os
# Specify the directory where the model will be downloaded
os.environ['TORCH_HOME'] = '../models'
m_v2 = models.mobilenet_v2(pretrained=True)
print(m_v2)
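# Quick extra check (sketch): count the trainable parameters of the loaded model
print(sum(p.numel() for p in m_v2.parameters() if p.requires_grad))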
###Output
MobileNetV2(
(features): Sequential(
(0): ConvBNActivation(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(6): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(7): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(8): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(9): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(10): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(11): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
(1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(12): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(13): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(14): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
(1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(15): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(16): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(17): InvertedResidual(
(conv): Sequential(
(0): ConvBNActivation(
(0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): ConvBNActivation(
(0): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
(1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(2): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(18): ConvBNActivation(
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
)
(classifier): Sequential(
(0): Dropout(p=0.2, inplace=False)
(1): Linear(in_features=1280, out_features=1000, bias=True)
)
)
###Markdown
Step 2: Specify the image transforms. Once we have the model, the next step is to transform the input image so that it has the right shape and other characteristics such as mean and standard deviation.
###Code
from torchvision import transforms
transform = transforms.Compose([ #[1] combination of all image transforms applied to the input image
 transforms.Resize(256), #[2] resize the image so that its shorter side is 256 pixels
 transforms.CenterCrop(224), #[3] crop the image to 224 x 224 pixels about the center
 transforms.ToTensor(), #[4] convert the image to a PyTorch Tensor data type
 transforms.Normalize( #[5] normalize the image with the specified mean and standard deviation
 mean=[0.485, 0.456, 0.406], #[6] precomputed means
 std=[0.229, 0.224, 0.225] #[7] precomputed standard deviations
)])
###Output
_____no_output_____
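###Markdown
A quick sanity check (our addition, not in the original): whatever the size of the input RGB image, this pipeline should produce a 3 x 224 x 224 tensor, because Resize(256) and CenterCrop(224) fix the spatial size. The synthetic image below is only an illustration.
###Code
from PIL import Image
# synthetic RGB image used only to check the output shape of the transform pipeline
dummy = Image.new("RGB", (640, 480))
print(transform(dummy).shape) # expected: torch.Size([3, 224, 224])
###Output
_____no_output_____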
###Markdown
Step 3: Load and preprocess the input image. Load the input image and apply the image transforms specified above.
###Code
# Import Pillow
from PIL import Image
import matplotlib.pyplot as plt
%matplotlib inline
img = Image.open("images/Labrador.jpg")
plt.imshow(img)
# transform the image into the form expected by the model's input
img_t = transform(img)
batch_t = torch.unsqueeze(img_t, 0)
###Output
_____no_output_____
###Markdown
Step 4: Model inference. Finally, use the pretrained model to see how it classifies the image. First, the model must be set to evaluation mode.
###Code
m_v2.eval()
out = m_v2(batch_t)
print(out.shape)
###Output
torch.Size([1, 1000])
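###Markdown
As an aside (a small sketch, not part of the original notebook): inference is usually wrapped in torch.no_grad() so that no gradients are tracked, which saves memory. This reuses m_v2 and batch_t from the cells above.
###Code
import torch

with torch.no_grad():
    out = m_v2(batch_t)
print(out.shape)
###Output
_____no_output_____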
###Markdown
**Class (or label) of the image** We do not yet have the class (or label) of the image. To get it, we first read and store the labels from a text file that contains all 1,000 labels.
###Code
with open('imagenet_classes.txt') as f:
labels = [line.strip() for line in f.readlines()]
###Output
_____no_output_____
###Markdown
Print the prediction with the highest score.
###Code
_, index = torch.max(out, 1)
percentage = torch.nn.functional.softmax(out, dim=1)[0] * 100
print(labels[index[0]], percentage[index[0]].item())
###Output
208, Labrador_retriever 60.26508712768555
###Markdown
Print the top 5 predictions with the highest scores.
###Code
_, indices = torch.sort(out, descending=True)
[(labels[idx], percentage[idx].item()) for idx in indices[0][:5]]
###Output
_____no_output_____ |
notebooks/Spark Streaming.ipynb | ###Markdown
Spark Streaming prerequisites You must first run Netcat (a small utility found on most Unix-like systems) as a data server using $ nc -lk 9999. After running all the cells, paste any text you like into the Netcat terminal; Spark will read the data and display the result. First, we import the Spark Streaming class names and some implicit conversions from StreamingContext into our environment in order to add useful methods to the other classes we need (such as DStream). StreamingContext is the main entry point for all streaming functionality. We create a local StreamingContext with two execution threads and a batch interval (set to 10 seconds in the code below): a StreamingContext that looks for new data once per batch interval.
###Code
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._ // not necessary since Spark 1.3
// Create a local StreamingContext with two working thread and batch interval of 1 second.
// The master requires 2 cores to prevent a starvation scenario.
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
val ssc = new StreamingContext(conf, Seconds(10))
###Output
_____no_output_____
###Markdown
Using this context, we can create a DStream that represents the streaming data from a TCP source, specified as a hostname (e.g. localhost) and a port (e.g. 9999).
###Code
// Create a DStream that will connect to hostname:port, like localhost:9999
val lines = ssc.socketTextStream("localhost", 9999)
###Output
_____no_output_____
###Markdown
This lines DStream represents the stream of data that will be received from the data server. Each record in this DStream is a line of text. Next, we want to split the lines on space characters.
###Code
// Split each line into words
val words = lines.flatMap(_.split(" "))
###Output
_____no_output_____
###Markdown
flatMap is a one-to-many DStream operation that creates a new DStream by generating multiple new records from each record in the source DStream. In this case, each line is split into multiple words and the stream of words is represented by the words DStream. Next, we want to count these words.
###Code
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
// Print the first ten elements of each RDD generated in this DStream to the console
wordCounts.print()
###Output
_____no_output_____
###Markdown
The words DStream is further mapped (a one-to-one transformation) to a DStream of (word, 1) pairs, which is then reduced to get the frequency of words in each batch of data. Finally, wordCounts.print() will print a few of the counts generated in each batch. Note that when these lines are executed, Spark Streaming only sets up the computation it will perform once started; no real processing has begun yet. To start the processing after all the transformations have been set up, we finally call
###Code
ssc.start() // Start the computation
ssc.awaitTermination() // Wait for the computation to terminate
###Output
_____no_output_____ |
examples/gkt/run_generalized_kt_experiment.ipynb | ###Markdown
A notebook for running experiments with generalized kernel thinning of Dwivedi and Mackey 2021 https://arxiv.org/pdf/2110.01593.pdf and standard thinning
###Code
import numpy as np
import numpy.random as npr
import numpy.linalg as npl
from scipy.spatial.distance import pdist
from argparse import ArgumentParser
import pickle as pkl
import pathlib
import os
import os.path
# import kernel thinning
from goodpoints import kt # kt.thin is the main thinning function; kt.split and kt.swap are other important functions
from goodpoints.util import isnotebook # Check whether this file is being executed as a script or as a notebook
from goodpoints.util import fprint # for printing while flushing buffer
from goodpoints.tictoc import tic, toc # for timing blocks of code
# utils for generating samples, evaluating kernels, and mmds
from util_sample import sample, compute_params_p, sample_string
from util_k_mmd import kernel_eval, compute_params_k, compute_power_kernel_params_k
from util_k_mmd import p_kernel, ppn_kernel, pp_kernel, pnpn_kernel, squared_mmd, get_combined_results_filename
from util_parse import init_parser, convert_arg_flags
# for partial functions, to use kernel_eval for kernel
from functools import partial
# experiment functions
from util_experiments import run_kernel_thinning_experiment #, kt_split_best, kt_split_rand
from util_experiments import run_standard_thinning_experiment, run_iid_thinning_experiment
# set things a bit when running the notebook
if isnotebook():
# Autoreload packages that are modified
%load_ext autoreload
%autoreload 2
%matplotlib inline
%load_ext line_profiler
# https://jakevdp.github.io/PythonDataScienceHandbook/01.07-timing-and-profiling.html
# for relevant parameters check util_parse file
parser = init_parser()
args, opt = parser.parse_known_args()
args = convert_arg_flags(args)
print(args, opt, parser)
###Output
_____no_output_____
###Markdown
Set parameters for thinning experiments- Helpful to first check the init_parser function in util_parse.py to become familiar with the list of arguments- Code allows args.P supported by compute_params_p in util_sample, currently {mog, gauss, mcmc}- Code allows args.kernel supported by compute_params_k in the kernel_eval function, currently {gauss, bspline}- - For args.P="gauss", the only degree of freedom is in setting args.d (arbitrary d allowed), for "mog", it is only args.M (supports only M = 4, 6, 8), and for "mcmc" it is args.filename- - For args.kernel, var_k is computed automatically based on P, set to 2d for gauss/mog, and median BW^2 for mcmc; it is equal to sigma^2 for the Gauss and Laplace kernels in the notation of https://arxiv.org/pdf/2110.01593.pdf, gamma^2 for IMQ/Matern kernels, 1/theta^2 for sinc, and it scales the distance in the bspline kernel. args.nu is another parameter used only for IMQ/Matern/Bspline kernels. It denotes the nu parameter for the IMQ/Matern kernel in the notation of https://arxiv.org/pdf/2110.01593.pdf, and the beta parameter for the bspline kernel.- - For args.power, there is a check in the code to ensure the power kernel is valid based on the theory related to Table 3 of the paper- One can allow for general P and K by making changes in the functions listed above, also checking feasibility of MMD computations, and making changes to get_combined_results_filename
###Code
#
# Choose sample and kernel parameters
#
if isnotebook():
args.d = 2
args.M = 4
args.P = "mog"
args.kernel = "bspline"
args.nu = 2
args.computepower = True
args.power = 2/3.
# for bspline nu is same as beta, and with even nu, power should be (nu+2)/(2*nu+2)
args.rep0 = 0 # starting rep index
args.repn = 2 # number of reps
args.m = 5 # size of input is n = 4^m, and output size is sqrt(n) = 2^m
# args.filename = 'Hinch_P_seed_1_temp_1_scaled'
# collection of all MCMC filenames
# ['Goodwin_RW', 'Goodwin_ADA-RW', 'Goodwin_MALA', 'Goodwin_PRECOND-MALA',
# 'Lotka_RW', 'Lotka_ADA-RW', 'Lotka_MALA', 'Lotka_PRECOND-MALA',
# 'Hinch_P_seed_1_temp_1', 'Hinch_P_seed_2_temp_1',
# 'Hinch_TP_seed_1_temp_8', 'Hinch_TP_seed_2_temp_8',
# 'Hinch_P_seed_1_temp_1_scaled', 'Hinch_P_seed_2_temp_1_scaled',
# 'Hinch_TP_seed_1_temp_8_scaled', 'Hinch_TP_seed_2_temp_8_scaled']
d, params_p, var_k = compute_params_p(args)
# d can change from args when mcmc filename is specified
args.d = d
params_k, params_k_power = compute_params_k(args, var_k, power_kernel=args.computepower,power=args.power)
if args.ktplus: # if running KT+, need to define the KT+ kernel called as params_k_combo
assert(args.power is not None)
params_k_combo = dict()
params_k_combo["name"] = "combo_" + params_k["name"] + f"_{args.power}"
params_k_combo["k"] = params_k.copy()
params_k_combo["kpower"] = params_k_power.copy()
params_k_combo["var"] = params_k["var"]
params_k_combo["d"] = args.d
# if isnotebook():
print("p", params_p)
print("k", params_k)
print("kpower", params_k_power)
if args.ktplus:
print("combo", params_k_combo)
#
# Choose experiment parameters
#
# List of replicate ID numbers
rep_ids = np.arange(args.rep0, args.rep0+args.repn)
# List of halving round numbers m to evaluate
ms = range(args.m)
# Failure probability
delta = .5
if isnotebook():
args.rerun = False
rep_ids = range(10)
# initialize result matrices
# by default our code returns mmd(P, Pout), mmd(Pin, Pout), Pf-Poutf, Pinf-Poutf for f = k(0, .)
# if k is not Gauss, Pf-Poutf is set equal to Pinf-Pout f in the run_X_experiments function
if args.stdthin: #
mmds_st = np.zeros((max(ms)+1, len(rep_ids))) # mmds from P
mmds_st_sin = np.zeros((max(ms)+1, len(rep_ids))) # mmds from Sin
fun_diff_st = np.zeros((max(ms)+1, len(rep_ids))) # fun diff from P
fun_diff_st_sin = np.zeros((max(ms)+1, len(rep_ids))) # fun diff from Sin
if args.targetkt:
mmds_kt = np.zeros((max(ms)+1, len(rep_ids))) # mmds from P
mmds_kt_sin = np.zeros((max(ms)+1, len(rep_ids))) # mmds from Sin
fun_diff_kt = np.zeros((max(ms)+1, len(rep_ids)))# fun diff from P
fun_diff_kt_sin = np.zeros((max(ms)+1, len(rep_ids))) # fun diff from Sin
if args.powerkt:
mmds_kt_krt = np.zeros((max(ms)+1, len(rep_ids))) # mmds from P
mmds_kt_krt_sin = np.zeros((max(ms)+1, len(rep_ids))) # mmds from Sin
fun_diff_kt_krt = np.zeros((max(ms)+1, len(rep_ids)))# fun diff from P
fun_diff_kt_krt_sin = np.zeros((max(ms)+1, len(rep_ids)))# fun diff from Sin
if args.ktplus:
mmds_ktplus = np.zeros((max(ms)+1, len(rep_ids))) # mmds from P
mmds_ktplus_sin = np.zeros((max(ms)+1, len(rep_ids))) # mmds from Sin
fun_diff_ktplus = np.zeros((max(ms)+1, len(rep_ids)))# fun diff from P
fun_diff_ktplus_sin = np.zeros((max(ms)+1, len(rep_ids))) # fun diff from Sin
###Output
_____no_output_____
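###Markdown
A small sanity check on the power chosen above (our addition). It only restates the relation quoted in the comment in the previous cell: for the bspline kernel with even nu, the recommended power is (nu + 2) / (2 * nu + 2), which gives 2/3 for nu = 2.
###Code
def bspline_power(nu):
    # relation quoted in the comment above, valid for even nu
    return (nu + 2) / (2 * nu + 2)

print(bspline_power(args.nu), args.power)  # both should equal 2/3 for the settings above
###Output
_____no_output_____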
###Markdown
Deploy thinning experiments
###Code
print(f"Exp setting: k = {params_k}, P = {params_p}, m = {ms}")
tic()
# print(args.rerun, args)
for m in ms:
#
# Run experiments and store quality of the 2^m thinned coreset
#
if args.stdthin:
mmd_st, mmd_st_sin, fd_st, fd_st_sin = run_standard_thinning_experiment(m, params_p=params_p, rerun=args.rerun,
params_k_mmd=params_k, rep_ids=rep_ids,
compute_mmds=args.computemmd)
mmds_st[m, :] = mmd_st[m, :]
mmds_st_sin[m, :] = mmd_st_sin[m, :]
fun_diff_st[m, :] = fd_st[m, :]
fun_diff_st_sin[m, :] = fd_st_sin[m, :]
if args.targetkt:
mmd_kt, mmd_kt_sin, fd_kt, fd_kt_sin = run_kernel_thinning_experiment(m, thin_fun=kt.thin, thin_str="", params_p=params_p, rerun=args.rerun,
params_k_split=params_k, params_k_swap=params_k, rep_ids=rep_ids,
delta=delta, store_K=args.store_K,
compute_mmds=args.computemmd
)
mmds_kt[m, :] = mmd_kt[m, :]
mmds_kt_sin[m, :] = mmd_kt_sin[m, :]
fun_diff_kt[m, :] = fd_kt[m, :]
fun_diff_kt_sin[m, :] = fd_kt_sin[m, :]
if args.powerkt:
mmd_kt_krt, mmd_kt_krt_sin, fd_kt_krt, fd_kt_krt_sin = run_kernel_thinning_experiment(m, thin_fun=kt.thin, thin_str="", params_p=params_p, rerun=args.rerun,
params_k_split=params_k_power, params_k_swap=params_k, rep_ids=rep_ids,
delta=delta, store_K=args.store_K,
compute_mmds=args.computemmd)
mmds_kt_krt[m, :] = mmd_kt_krt[m, :]
mmds_kt_krt_sin[m, :] = mmd_kt_krt_sin[m, :]
fun_diff_kt_krt[m, :] = fd_kt_krt[m, :]
fun_diff_kt_krt_sin[m, :] = fd_kt_krt_sin[m, :]
if args.ktplus:
mmd_ktplus, mmd_ktplus_sin, fd_ktplus, fd_ktplus_sin = run_kernel_thinning_experiment(m, thin_fun=kt.thin, thin_str="-plus", params_p=params_p, rerun=args.rerun,
params_k_split=params_k_combo, params_k_swap=params_k, rep_ids=rep_ids,
delta=delta, store_K=args.store_K,
compute_mmds=args.computemmd
)
mmds_ktplus[m, :] = mmd_ktplus[m, :]
mmds_ktplus_sin[m, :] = mmd_ktplus_sin[m, :]
fun_diff_ktplus[m, :] = fd_ktplus[m, :]
fun_diff_ktplus_sin[m, :] = fd_ktplus_sin[m, :]
if args.targetkt:
print('mmd target_kt', mmds_kt)
toc()
if isnotebook():
print(mmds_kt.mean(1), mmds_kt_krt.mean(1), mmds_ktplus.mean(1))
###Output
_____no_output_____
###Markdown
Save MMD and fun diff results
###Code
#
# Save all combined results
#
if isnotebook():
# change this code to save results manually when running notebook
save_combined_results = True #True if args is None else args.save_combined_results
else:
save_combined_results = False if args is None else args.save_combined_results
generic_prefixes = ["-combinedmmd-", "-sin-combinedmmd-", "-combinedfundiff-", "-sin-combinedfundiff-"]
if save_combined_results:
if args.stdthin:
prefixes = ["mc" + prefix for prefix in generic_prefixes]
data_arrays = [mmds_st, mmds_st_sin, fun_diff_st, fun_diff_st_sin]
for prefix, data_array in zip(prefixes, data_arrays):
filename = get_combined_results_filename(prefix, ms, params_p, params_k, params_k, rep_ids, delta)
with open(filename, 'wb') as file:
print(f"Saving {prefix} to {filename}")
pkl.dump(data_array, file, protocol=pkl.HIGHEST_PROTOCOL)
if args.targetkt:
prefixes = ["kt" + prefix for prefix in generic_prefixes]
data_arrays = [mmds_kt, mmds_kt_sin, fun_diff_kt, fun_diff_kt_sin]
for prefix, data_array in zip(prefixes, data_arrays):
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k, params_k_swap=params_k, rep_ids=rep_ids, delta=delta)
with open(filename, 'wb') as file:
print(f"Saving {prefix} to {filename}")
pkl.dump(data_array, file, protocol=pkl.HIGHEST_PROTOCOL)
if args.powerkt:
temp = "kt_krt" if args.power == 0.5 else f"kt_power{args.power}"
prefixes = [temp + prefix for prefix in generic_prefixes]
data_arrays = [mmds_kt_krt, mmds_kt_krt_sin, fun_diff_kt_krt, fun_diff_kt_krt_sin]
for prefix, data_array in zip(prefixes, data_arrays):
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_power, params_k_swap=params_k, rep_ids=rep_ids, delta=delta)
with open(filename, 'wb') as file:
print(f"Saving {prefix} to {filename}")
pkl.dump(data_array, file, protocol=pkl.HIGHEST_PROTOCOL)
if args.ktplus:
prefixes = [f"kt-plus{args.power}" + prefix for prefix in generic_prefixes]
data_arrays = [mmds_ktplus, mmds_ktplus_sin, fun_diff_ktplus, fun_diff_ktplus_sin]
for prefix, data_array in zip(prefixes, data_arrays):
filename = get_combined_results_filename(prefix, ms, params_p, params_k_split=params_k_combo, params_k_swap=params_k, rep_ids=rep_ids, delta=delta)
with open(filename, 'wb') as file:
print(f"Saving {prefix} to {filename}")
pkl.dump(data_array, file, protocol=pkl.HIGHEST_PROTOCOL)
###Output
_____no_output_____ |
docsrc/source/auto_gallery/plot_omni.ipynb | ###Markdown
Omni-directional antenna
###Code
import pyant
import numpy as np
class Omni(pyant.Beam):
def gain(self,k):
if len(k.shape) == 1:
return 1.0
else:
return np.ones((k.shape[1],), dtype=k.dtype)
ant = Omni(
azimuth=0.0,
elevation=90.0,
frequency=47e6,
)
print(ant.gain(np.array([0,0,1])))
pyant.plotting.gain_heatmap(ant)
pyant.plotting.show()
###Output
_____no_output_____ |
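###Markdown
As a quick extra illustration (not in the original), the gain method above also accepts a (3, N) array of wave vectors and returns one value per column:
###Code
# three wave vectors stacked as columns, shape (3, 3)
k_multi = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 0.0, 0.0]]).T
print(ant.gain(k_multi))  # expected: [1. 1. 1.]
###Output
_____no_output_____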
Module_1_Simple_Classification.ipynb | ###Markdown
Applied Machine Learning, Module 1: A simple Classification Task Import required modules and data file
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
fruits = pd.read_table('fruit_data_with_colors.txt')
fruits.head() #.head() displays first 5 entries
###Output
_____no_output_____
###Markdown
color_score is a single number which roughly captures the color of the fruit
###Code
fruits.shape
# Create mapping from fruit label value to fruit name to make results easier to interpret
lookup_fruit_name = dict(zip(fruits.fruit_label.unique(),fruits.fruit_name.unique()))
lookup_fruit_name
###Output
_____no_output_____
###Markdown
Examining the Data Creating training and test data sets from the original data set by partitioning it (75% train, 25% test)
###Code
X = fruits[['height','width','mass','color_score']]
y = fruits['fruit_label']
X
y
###Output
_____no_output_____
###Markdown
Creating Train and test data sets
###Code
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=0) #does a 75-25 partition (75% to train and 25% to test)
X_train
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
It is always good to analyze and visualize the raw data before training on it. The data may be inconsistent or contain missing values, and visualization helps identify what kind of cleaning and pre-processing needs to be done.
###Code
# Plotting a scatter matrix
from matplotlib import cm
from pandas.plotting import scatter_matrix
fig = plt.figure()
cmap = cm.get_cmap('gnuplot')
scatter = scatter_matrix(X_train, c=y_train, marker='o', s=40, hist_kwds={'bins':15}, figsize=(9,9), cmap=cmap)
plt.show()
# Plotting a 3D scatter plot
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(X_train['width'], X_train['height'], X_train['color_score'], c = y_train, marker = 'o', s=100)
ax.set_xlabel('width')
ax.set_ylabel('height')
ax.set_zlabel('color_score')
plt.show()
###Output
_____no_output_____
###Markdown
Create a Classifier Object - k-NN (k-Nearest-Neighbour). k-NN can be used for both classification and regression; k is the number of nearest neighbours; it is good to use an odd number for k; a majority vote of the neighbours' class labels is used for classification.
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 5)
###Output
_____no_output_____
###Markdown
Train the Classifier using Training Data
###Code
knn.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Estimate the accuracy of the classifier on future data, using the test data
###Code
knn.score(X_test,y_test)
###Output
_____no_output_____
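###Markdown
Beyond a single accuracy number, a confusion matrix shows which fruit classes get confused with each other. This is a small addition that reuses the fitted knn classifier and the test split from above.
###Code
from sklearn.metrics import confusion_matrix

y_pred = knn.predict(X_test)
print(confusion_matrix(y_test, y_pred))
###Output
_____no_output_____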
###Markdown
Use the trained k-NN classifier model to classify new data
###Code
# Lets create a hypothetical fruit with mass, width and height attributes.
#Feed it to k-NN classifier and see what fruit it turns out to be
fruit_prediction = knn.predict([[5.5,4.3,20,0.5]]) #[height,width,mass,color_score]
lookup_fruit_name[fruit_prediction[0]]
###Output
_____no_output_____
###Markdown
Plot decision boundaries of the k-NN classifier
###Code
from adspy_shared_utilities import plot_fruit_knn
plot_fruit_knn(X_train, y_train, 5, 'uniform') #k=5 i.e. 5 nearest neighbors
###Output
_____no_output_____
###Markdown
How sensitive is k-NN classification accuracy to choice of 'k' parameter
###Code
k_range = range(1,20)
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.figure()
plt.xlabel('k')
plt.ylabel('accuracy')
plt.scatter(k_range, scores)
plt.xticks([0,5,10,15,20]);
plt.show()
###Output
_____no_output_____
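###Markdown
The curve above is based on a single train/test split, so it can be noisy. As an optional sketch (our addition), k-fold cross-validation averages the accuracy over several splits of the full X, y defined earlier.
###Code
from sklearn.model_selection import cross_val_score

for k in [1, 5, 11]:
    cv_scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    print(f"k={k}: mean accuracy {cv_scores.mean():.3f}")
###Output
_____no_output_____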
###Markdown
How sensitive is k-NN classification accuracy to train/test split proportion
###Code
t = [0.8,0.7,0.6,0.5,0.4,0.3,0.2]
knn = KNeighborsClassifier(n_neighbors = 5)
plt.figure()
for s in t:
scores=[]
for i in range(1,1000):
X_train,X_test,y_train,y_test = train_test_split(X,y, test_size = 1-s) #test_size: proportion of data to be included in TEST set
knn.fit(X_train,y_train)
scores.append(knn.score(X_test,y_test))
plt.plot(s,np.mean(scores), 'bo')
plt.xlabel('Training set proportion (%)')
plt.ylabel('accuracy')
plt.show()
help(train_test_split)
scores
###Output
_____no_output_____ |
Week1Ch1.ipynb | ###Markdown
###Code
import nltk # Python library for NLP
from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK
import matplotlib.pyplot as plt # library for visualization
import random # pseudo-random number generator
#downloads sample twitter dataset
nltk.download('twitter_samples')
#We can load the text fields of the positive and negative tweets by using the module's strings() method
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
print('Number of positive tweets: ', len(all_positive_tweets))
print('Number of negative tweets: ', len(all_negative_tweets))
#It is also essential to know the data structure of the datasets
print('\nThe type of all_positive_tweets is: ', type(all_positive_tweets))
print('The type of a tweet entry is: ', type(all_negative_tweets[0]))
# Declare a figure with a custom size
fig = plt.figure(figsize=(5, 5))
# labels for the two classes
labels = 'Positive', 'Negative'
# Sizes for each slide
sizes = [len(all_positive_tweets), len(all_negative_tweets)]
# Declare pie chart, where the slices will be ordered and plotted counter-clockwise:
plt.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle.
plt.axis('equal')
# Display the chart
plt.show()
# print positive in greeen
print('\033[92m' + all_positive_tweets[random.randint(0,5000)])
# print negative in red
print('\033[91m' + all_negative_tweets[random.randint(0,5000)])
#One observation you may have is the presence of emoticons and URLs in many of the tweets.
#Preprocessing raw data for sentiment analysis
# Tokenizing the string
# Lowercasing
# Removing stop words and punctuation
# Stemming
# Our selected sample. Complex enough to exemplify each step
tweet = all_positive_tweets[2277]
print(tweet)
# download the stopwords from NLTK
nltk.download('stopwords')
import re # library for regular expression operations
import string # for string operations
from nltk.corpus import stopwords # module for stop words that come with NLTK
from nltk.stem import PorterStemmer # module for stemming
from nltk.tokenize import TweetTokenizer # module for tokenizing strings
#Remove hyperlinks, Twitter marks and styles
print('\033[92m' + tweet)
print('\033[94m')
# remove old style retweet text "RT"
tweet2 = re.sub(r'^RT[\s]+', '', tweet)
# remove hyperlinks
tweet2 = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet2)
# remove hashtags
# only removing the hash # sign from the word
tweet2 = re.sub(r'#', '', tweet2)
print(tweet2)
print()
print('\033[92m' + tweet2)
print('\033[94m')
#initialize a tokenizer class
tokenizer = TweetTokenizer(preserve_case= False, reduce_len= True, strip_handles = False)
tokens = tokenizer.tokenize(tweet2)
print("tokenized strings are :")
print(tokens)
#Remove stop words and punctuations
#Stop words are words that don't add significant meaning to the text. You'll see the list provided by NLTK when you run the cells below.
stopwords_of_english = stopwords.words("english")
print("the stopwords of english are:")
print(stopwords_of_english)
print("punctuations")
print(string.punctuation)
print(tokens)
cleaned_tokens = []
for token in tokens:
if (token not in stopwords_of_english and token not in string.punctuation):
cleaned_tokens.append(token)
print(cleaned_tokens)
#Stemming
#Stemming is the process of converting a word to its most general form, or stem. This helps in reducing the size of our vocabulary.
print(cleaned_tokens)
stemmer = PorterStemmer()
after_stemming = []
for token in cleaned_tokens:
stemmed_words = stemmer.stem(token)
after_stemming.append(stemmed_words)
print(after_stemming)
#That's it! Now we have a set of words we can feed into to the next stage of our machine learning project.
!pip install utils
#use the function process_tweet(tweet) available in utils.py
#To obtain the same result as in the previous code cells, you will only need to call the function process_tweet(). Let's do that.
# from utils import process_tweet # Import the process_tweet function
# # choose the same tweet
# tweet = all_positive_tweets[2277]
# print()
# print('\033[92m')
# print(tweet)
# print('\033[94m')
# # call the imported function
# tweets_stem = process_tweet(tweet); # Preprocess a given tweet
# print('preprocessed tweet:')
# print(tweets_stem) # Print the result
###Output
_____no_output_____ |
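###Markdown
To wrap up, here is a minimal sketch (our addition, not the process_tweet function from utils.py) that chains the steps above: cleaning, tokenizing, stop word and punctuation removal, and stemming.
###Code
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer

def preprocess_tweet(tweet):
    # remove old style retweet text "RT", hyperlinks and the '#' sign (same patterns as above)
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    # tokenize the cleaned tweet
    tokenizer = TweetTokenizer(preserve_case=False, reduce_len=True, strip_handles=False)
    tokens = tokenizer.tokenize(tweet)
    # remove stop words and punctuation, then stem what is left
    stopwords_of_english = stopwords.words('english')
    stemmer = PorterStemmer()
    return [stemmer.stem(token) for token in tokens
            if token not in stopwords_of_english and token not in string.punctuation]

print(preprocess_tweet(all_positive_tweets[2277]))
###Output
_____no_output_____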
notebooks_completos/034-SciPy-EcuacionesNoLineales.ipynb | ###Markdown
Finding roots of nonlinear equations with SciPy _Remember all those numerical schemes for integrating ordinary differential equations? It is good to know they exist and what the quirks of each one are, but in this course we do not want to implement those schemes: we want to solve the equations. Evolution problems are everywhere in engineering and are among the most fun to program._
###Code
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Having seen how to solve systems of linear equations, it may be even more appealing to solve nonlinear equations. To do so, we import SciPy's `optimize` package:
###Code
from scipy import optimize
###Output
_____no_output_____
###Markdown
The help for this package is quite long (you can also consult it at http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html). The `optimize` package includes a multitude of methods for **optimization**, **curve fitting** and **root finding**. We will now focus on finding roots of scalar functions. For more information you can read http://pybonacci.org/2012/10/25/como-resolver-ecuaciones-algebraicas-en-python-con-scipy/ **Note**: the `root` function is used to find solutions of *systems* of nonlinear equations, so obviously it also works for scalar equations. However, we will use the `brentq` and `newton` functions so that the method being used is clearer. There are basically two types of algorithms for finding roots of nonlinear equations:* those that operate on an interval $[a, b]$ such that $f(a) \cdot f(b) < 0$: slower, with guaranteed convergence;* those that operate from an initial guess $x_0$ more or less close to the solution: faster, with conditional convergence. From the first group we will use the `brentq` function (although we could use `bisect`), and from the second we will use `newton` (which actually covers both Newton's method and the secant method). **Example**: $\ln{x} = \sin{x} \Rightarrow F(x) \equiv \ln{x} - \sin{x} = 0$ The first thing to do is define the equation, which mathematically is a function $F(x)$ that we want to set equal to zero.
###Code
def F(x):
return np.log(x) - np.sin(x)
###Output
_____no_output_____
###Markdown
To get an idea of the possible solutions, we can always plot the function:
###Code
x = np.linspace(0, 10, num=100)
plt.plot(x, F(x), 'k', lw=2, label="$F(x)$")
plt.plot(x, np.log(x), label="$\log{x}$")
plt.plot(x, np.sin(x), label="$\sin{x}$")
plt.plot(x, np.zeros_like(x), 'k--')
plt.legend(loc=4)
###Output
/home/juanlu/.miniconda3/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in log
from ipykernel import kernelapp as app
/home/juanlu/.miniconda3/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:3: RuntimeWarning: divide by zero encountered in log
app.launch_new_instance()
###Markdown
And using, for example, Brent's method on the interval $[0, 3]$:
###Code
optimize.brentq(F, 0, 3)
###Output
/home/juanlu/.miniconda3/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:2: RuntimeWarning: divide by zero encountered in log
from ipykernel import kernelapp as app
###Markdown
Didn't we say that in Python you cannot divide by zero? Look at this:
###Code
1 / 0
1 / np.array([0])
###Output
/home/juanlu/.miniconda3/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in true_divide
if __name__ == '__main__':
###Markdown
When working with NumPy arrays, the operations follow the rules given by the floating-point standard (IEEE 754): divisions by zero result in infinity, 0 / 0 is NaN, and so on. We can control whether we want warnings or errors with the `np.seterr` function. Exercise: obtain, with both methods (`newton` and `brentq`), a solution of the equation $\tan{x} = x$ other than $x = 0$. Visualize the result. Extra arguments: our functions must always take the unknown, the value that makes them zero, as their first argument. If we want to include more, we have to use the `args` argument of the root-finding functions. This pattern is also used in other parts of SciPy, as we will see. Let's now solve an equation that depends on a parameter:$$\sqrt{x} + \log{x} = C$$
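###Markdown
Before that, here is one possible sketch for the exercise above (a nonzero solution of $\tan{x} = x$). The bracket and the starting point are our own choices, picked around the first positive root near $x \approx 4.49$.
###Code
def G_tan(x):
    return np.tan(x) - x

# bracketing method: G_tan is continuous and changes sign on (pi/2, 3*pi/2)
sol_brentq = optimize.brentq(G_tan, np.pi / 2 + 0.1, 4.6)
# initial-guess method: start close to the same root
sol_newton = optimize.newton(G_tan, 4.5)
print(sol_brentq, sol_newton)
###Output
_____no_output_____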
###Code
def G(x, C):
return C - np.sqrt(x) - np.log(x)
###Output
_____no_output_____
###Markdown
**Our unknown is still $x$**, so it must come first. The remaining parameters come afterwards, and their values are specified when solving the equation using `args`:
###Code
optimize.newton(G, 2.0, args=(2,))
###Output
_____no_output_____
###Markdown
Compressible flow This is the isentropic relation between the Mach number $M(x)$ and the area $A(x)$ of a duct: $$ \frac{A(x)}{A^*} = \frac{1}{M(x)} \left( \frac{2}{1 + \gamma} \left( 1 + \frac{\gamma - 1}{2} M(x)^2 \right) \right)^{\frac{\gamma + 1}{2 (\gamma - 1)}}$$ For a convergent duct:$$ \frac{A(x)}{A^*} = 3 - 2 x \quad x \in [0, 1]$$ Find the Mach number at the section $x = 0.9$.
###Code
def A(x):
return 3 - 2 * x
x = np.linspace(0, 1)
area = A(x)
r = np.sqrt(area / np.pi)
plt.fill_between(x, r, -r, color="#ffcc00")
###Output
_____no_output_____
###Markdown
What is the function $F$ now? There are two options: define a function $F_{0.9}(M)$ that gives the Mach number at the section $0.9$, or a function $F(M; x)$ with which the Mach number can be found at any section. *Bonus points* if you go for the second option :) To solve the equation, use Brent's method (bracketing). In what interval will the solution be found? If you have no idea, it is as easy as plotting the function $F$!
###Code
def F(M, x, g):
return A(x) - (1 / M) * ((2 / (1 + g)) * (1 + (g - 1) / 2 * M ** 2)) ** ((g + 1) / (2 * (g - 1)))
optimize.brentq(F, 0.01, 1, args=(0.9, 1.4))
###Output
_____no_output_____
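###Markdown
A quick check (our addition): plugging the Mach number found above back into the right-hand side of the isentropic relation should recover $A(0.9) = 1.2$.
###Code
g = 1.4
M_sol = optimize.brentq(F, 0.01, 1, args=(0.9, g))
rhs = (1 / M_sol) * ((2 / (1 + g)) * (1 + (g - 1) / 2 * M_sol ** 2)) ** ((g + 1) / (2 * (g - 1)))
print(A(0.9), rhs)  # both should be 1.2 up to solver tolerance
###Output
_____no_output_____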
###Markdown
Kepler's equation Plot Kepler's equation$$M = E - e \sin E$$which relates two geometric parameters of elliptical orbits, the mean anomaly $M$ and the eccentric anomaly $E$, for the following eccentricity values:* Earth: $0.0167$* Pluto: $0.249$* Comet Holmes: $0.432$* 28P/Neujmin: $0.775$* Halley's Comet: $0.967$ To reproduce this figure:
###Code
from IPython.display import HTML
HTML('<iframe src="http://en.m.wikipedia.org/wiki/Kepler%27s_equation" width="800" height="400"></iframe>')
###Output
_____no_output_____
###Markdown
For this we will use Newton's method (secant). 1- Define the function corresponding to Kepler's equation, which is not only an implicit equation but also depends on a parameter. What is the unknown?
###Code
def F(E, e, M):
return M - E + e * np.sin(E)
###Output
_____no_output_____
###Markdown
2- As a first step, solve it for the Earth's eccentricity and a mean anomaly $M = 0.3$. What value would you choose as the initial guess?
###Code
optimize.newton(F, 0.3, args=(0.0167, 0.3))
###Output
_____no_output_____
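###Markdown
A quick consistency check (our addition): substituting the computed eccentric anomaly back into Kepler's equation should recover $M = 0.3$.
###Code
E_sol = optimize.newton(F, 0.3, args=(0.0167, 0.3))
print(E_sol - 0.0167 * np.sin(E_sol))  # should be approximately 0.3
###Output
_____no_output_____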
###Markdown
3- As a next step, create a domain (`linspace`) of mean anomalies between $0$ and $2 \pi$ and solve Kepler's equation with the Earth's eccentricity for all those values. Note that you will need an array in which to store the solutions. Plot the resulting curve.
###Code
N = 500
M = np.linspace(0, 2 * np.pi, N)
sol = np.zeros_like(M)
for ii in range(N):
sol[ii] = optimize.newton(F, sol[ii - 1], args=(0.249, M[ii]))
plt.plot(M, sol)
###Output
_____no_output_____
###Markdown
4- As a last step, you just have to put part of the code you have already written inside a loop that changes the eccentricity value 5 times. It is advisable to keep all that code in a single cell (the one right below). We will introduce a very useful Python trick here:
###Code
M = np.linspace(0, 2 * np.pi, N)
sol = np.zeros_like(M)
plt.figure(figsize=(6, 6))
for ee in 0.0167, 0.249, 0.432, 0.775, 0.967:
# Para cada valor de excentricidad sobreescribimos el array sol
for ii in range(N):
sol[ii] = optimize.newton(F, sol[ii - 1], args=(ee, M[ii]))
plt.plot(M, sol)
plt.xlim(0, 2 * np.pi)
plt.ylim(0, 2 * np.pi)
plt.xlabel("$M$", fontsize=15)
plt.ylabel("$E$", fontsize=15)
plt.gca().set_aspect(1)
plt.grid(True)
plt.legend(["Earth", "Pluto", "Comet Holmes", "28P/Neujmin", "Halley's Comet"], loc=2)
plt.title("Kepler's equation solutions")
###Output
_____no_output_____
###Markdown
--- Follow us on Twitter! Follow @AeroPython This notebook was created by: Juan Luis Cano and Álex Sáez. The AeroPython course by Juan Luis Cano Rodriguez and Alejandro Sáez Mollejo is distributed under a Creative Commons Attribution 4.0 International License. ---_The following cells contain the notebook configuration__To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_ File > Trusted Notebook
###Code
# Esta celda da el estilo al notebook
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
###Output
_____no_output_____
WeatherPy/WeatherPy_PPhilip.ipynb | ###Markdown
WeatherPy---- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import scipy.stats as sts
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
#Import pprint
from pprint import pprint
# Output File (CSV)
output_data_file = "../output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
###Output
_____no_output_____
###Markdown
Generate Cities List
###Code
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
###Output
_____no_output_____
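###Markdown
As a quick illustration of the citipy lookup used above (the coordinate below is an arbitrary example chosen by us):
###Code
# look up the nearest city for a single hand-picked coordinate pair
example_city = citipy.nearest_city(40.7128, -74.0060)
print(example_city.city_name, example_city.country_code)
###Output
_____no_output_____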
###Markdown
Perform API Calls* Perform a weather check on each city using a series of successive API calls.* Include a print log of each city as it's being processed (with the city number and city name).
###Code
print('Beginning Data Retrieval')
print('------------------------')
#Initializes lists for holding City Names, City ID, Country, Cloudiness,
#Date, Latitude, Longitude, Humidity, Maximum Temperature, and Wind Speed.
#These lists will later become the columns of the random city data frame.
city_names = []
city_id = []
country = []
cloud = []
dt = []
lats = []
lngs = []
hums = []
max_temps = []
ws = []
record = 0
#For loop for Weather API calls
for city in cities:
#URL for API Call
url = f'http://api.openweathermap.org/data/2.5/weather?q={city}&units=imperial&appid={weather_api_key}'
try:
#Creates request for API Call using above URL and converts to JSON format
response = requests.get(url).json()
#Pulls information for filling lists initialized outside the for loop
city_names.append(response['name'])
city_id.append(response['id'])
country.append(response['sys']['country'])
cloud.append(response['clouds']['all'])
dt.append(response['dt'])
lats.append(response['coord']['lat'])
lngs.append(response['coord']['lon'])
hums.append(response['main']['humidity'])
max_temps.append(response['main']['temp_max'])
ws.append(response['wind']['speed'])
#Prints the city name and respective record number
print(f'City Name: {city}, City Number: {record}')
#Increases record number for iterration purposes
record += 1
except:
#If city is not found during API call, prints 'City not found'
print('City not found')
###Output
Beginning Data Retrieval
------------------------
City Name: vaini, City Number: 0
City Name: ushuaia, City Number: 1
City Name: lipnita, City Number: 2
City Name: baykit, City Number: 3
City Name: lavrentiya, City Number: 4
City not found
City Name: oskarshamn, City Number: 5
City Name: atkinson, City Number: 6
City Name: mount gambier, City Number: 7
City Name: bredasdorp, City Number: 8
City Name: dingle, City Number: 9
City Name: barrow, City Number: 10
City Name: rikitea, City Number: 11
City Name: punta arenas, City Number: 12
City Name: luderitz, City Number: 13
City Name: buala, City Number: 14
City Name: kaitangata, City Number: 15
City Name: port alfred, City Number: 16
City Name: kununurra, City Number: 17
City Name: ginda, City Number: 18
City Name: saskylakh, City Number: 19
City Name: georgetown, City Number: 20
City not found
City Name: beringovskiy, City Number: 21
City Name: inhambane, City Number: 22
City Name: isangel, City Number: 23
City Name: ayagoz, City Number: 24
City Name: lebu, City Number: 25
City Name: champerico, City Number: 26
City Name: ancud, City Number: 27
City Name: asilah, City Number: 28
City Name: bardiyah, City Number: 29
City Name: scarborough, City Number: 30
City Name: busselton, City Number: 31
City Name: saldanha, City Number: 32
City Name: vanimo, City Number: 33
City Name: jamestown, City Number: 34
City Name: hilo, City Number: 35
City Name: lompoc, City Number: 36
City Name: north branch, City Number: 37
City Name: mataura, City Number: 38
City Name: bluff, City Number: 39
City Name: portland, City Number: 40
City not found
City Name: bethel, City Number: 41
City Name: meulaboh, City Number: 42
City Name: springbok, City Number: 43
City Name: dzerzhinsk, City Number: 44
City Name: gariaband, City Number: 45
City Name: souillac, City Number: 46
City Name: ribeira grande, City Number: 47
City Name: arman, City Number: 48
City Name: nagua, City Number: 49
City Name: sorland, City Number: 50
City Name: yasenskaya, City Number: 51
City Name: ambilobe, City Number: 52
City Name: khatanga, City Number: 53
City Name: bulawayo, City Number: 54
City not found
City Name: alofi, City Number: 55
City Name: kavaratti, City Number: 56
City Name: telciu, City Number: 57
City Name: monte patria, City Number: 58
City Name: tasiilaq, City Number: 59
City Name: caucaia, City Number: 60
City Name: arraial do cabo, City Number: 61
City Name: fairbanks, City Number: 62
City Name: mafra, City Number: 63
City not found
City Name: adrar, City Number: 64
City Name: longyearbyen, City Number: 65
City Name: nichinan, City Number: 66
City Name: bambous virieux, City Number: 67
City Name: hobart, City Number: 68
City Name: anaconda, City Number: 69
City Name: sirjan, City Number: 70
City Name: yellowknife, City Number: 71
City Name: chuy, City Number: 72
City Name: thompson, City Number: 73
City Name: maningrida, City Number: 74
City Name: leningradskiy, City Number: 75
City Name: butaritari, City Number: 76
City Name: puerto ayora, City Number: 77
City Name: jimeta, City Number: 78
City Name: cap-aux-meules, City Number: 79
City Name: hofn, City Number: 80
City Name: upernavik, City Number: 81
City Name: namibe, City Number: 82
City Name: sayyan, City Number: 83
City Name: kamenka, City Number: 84
City Name: quartucciu, City Number: 85
City Name: buenavista, City Number: 86
City Name: pevek, City Number: 87
City Name: ilulissat, City Number: 88
City Name: san juan, City Number: 89
City Name: hithadhoo, City Number: 90
City Name: la peca, City Number: 91
City Name: geraldton, City Number: 92
City Name: iqaluit, City Number: 93
City Name: hermanus, City Number: 94
City Name: burnie, City Number: 95
City Name: airai, City Number: 96
City Name: quatre cocos, City Number: 97
City Name: avarua, City Number: 98
City Name: kapaa, City Number: 99
City Name: hasaki, City Number: 100
City Name: agirish, City Number: 101
City Name: itaituba, City Number: 102
City Name: porto novo, City Number: 103
City Name: carballo, City Number: 104
City Name: nanakuli, City Number: 105
City Name: husavik, City Number: 106
City not found
City Name: biltine, City Number: 107
City Name: dikson, City Number: 108
City Name: sinnamary, City Number: 109
City Name: atuona, City Number: 110
City Name: amahai, City Number: 111
City Name: carnarvon, City Number: 112
City not found
City Name: luanda, City Number: 113
City Name: ossora, City Number: 114
City not found
City Name: soe, City Number: 115
City Name: camacha, City Number: 116
City Name: vila franca do campo, City Number: 117
City Name: la ronge, City Number: 118
City Name: albany, City Number: 119
City Name: north bend, City Number: 120
City Name: severo-kurilsk, City Number: 121
City Name: san patricio, City Number: 122
City Name: takoradi, City Number: 123
City not found
City Name: pisco, City Number: 124
City Name: san cristobal, City Number: 125
City Name: faanui, City Number: 126
City Name: new norfolk, City Number: 127
City not found
City Name: katsuura, City Number: 128
City Name: saint-philippe, City Number: 129
City Name: vestmannaeyjar, City Number: 130
City Name: flinders, City Number: 131
City Name: mahebourg, City Number: 132
City Name: esperance, City Number: 133
City not found
City Name: port elizabeth, City Number: 134
City Name: brae, City Number: 135
City Name: korcula, City Number: 136
City Name: puerto gaitan, City Number: 137
City not found
City Name: victoria, City Number: 138
City Name: padang, City Number: 139
City Name: tuatapere, City Number: 140
City Name: nikolskoye, City Number: 141
City Name: zhangjiakou, City Number: 142
City Name: paso de carrasco, City Number: 143
City Name: george, City Number: 144
City Name: chokurdakh, City Number: 145
City Name: kaeo, City Number: 146
City Name: kodiak, City Number: 147
City Name: lucapa, City Number: 148
City Name: east london, City Number: 149
City Name: bengkulu, City Number: 150
City not found
City Name: maniitsoq, City Number: 151
City Name: cape town, City Number: 152
City Name: qaanaaq, City Number: 153
City Name: arandis, City Number: 154
City Name: vila, City Number: 155
City Name: ponta do sol, City Number: 156
City Name: kapit, City Number: 157
City Name: nome, City Number: 158
City Name: kresttsy, City Number: 159
City not found
City Name: fortuna, City Number: 160
City Name: wukari, City Number: 161
City Name: dothan, City Number: 162
City Name: labuhan, City Number: 163
City Name: berlevag, City Number: 164
City Name: bundaberg, City Number: 165
City Name: acari, City Number: 166
City Name: shimoda, City Number: 167
City Name: bandarbeyla, City Number: 168
City Name: yar-sale, City Number: 169
City Name: conde, City Number: 170
City Name: saint anthony, City Number: 171
City Name: puerto escondido, City Number: 172
City Name: halifax, City Number: 173
City Name: ulaanbaatar, City Number: 174
City Name: grindavik, City Number: 175
City Name: lasa, City Number: 176
City Name: cidreira, City Number: 177
City Name: omboue, City Number: 178
City Name: henties bay, City Number: 179
City Name: paamiut, City Number: 180
City Name: muisne, City Number: 181
City Name: lamar, City Number: 182
City Name: slave lake, City Number: 183
City Name: ratingen, City Number: 184
City Name: ilo, City Number: 185
City Name: trelew, City Number: 186
City Name: sinzig, City Number: 187
City not found
City Name: kedougou, City Number: 188
City Name: bagacay, City Number: 189
City Name: avera, City Number: 190
City not found
City not found
City Name: tuktoyaktuk, City Number: 191
City not found
City Name: kushmurun, City Number: 192
City Name: faya, City Number: 193
City Name: klaksvik, City Number: 194
City Name: beyla, City Number: 195
City Name: mwense, City Number: 196
City Name: auki, City Number: 197
City Name: solnechnyy, City Number: 198
City Name: tiksi, City Number: 199
City Name: zunyi, City Number: 200
City Name: gaoyou, City Number: 201
City Name: tayu, City Number: 202
City not found
City Name: agadez, City Number: 203
City Name: bagratashen, City Number: 204
City Name: grand gaube, City Number: 205
City Name: kalga, City Number: 206
City Name: pathein, City Number: 207
City Name: saint-felicien, City Number: 208
City Name: vao, City Number: 209
City Name: maceio, City Number: 210
City not found
City Name: kharp, City Number: 211
City Name: torbay, City Number: 212
City Name: visakhapatnam, City Number: 213
City Name: poum, City Number: 214
City Name: agapovka, City Number: 215
City Name: tilichiki, City Number: 216
City Name: kashirskoye, City Number: 217
City Name: chingirlau, City Number: 218
City Name: nanortalik, City Number: 219
City not found
City Name: mayor pablo lagerenza, City Number: 220
City Name: laguna, City Number: 221
City Name: dunedin, City Number: 222
City Name: katherine, City Number: 223
City Name: dwarka, City Number: 224
City Name: baoqing, City Number: 225
City Name: pontypool, City Number: 226
City Name: yulara, City Number: 227
City Name: puerto leguizamo, City Number: 228
City Name: sorong, City Number: 229
City Name: sarkand, City Number: 230
City Name: kaligutan, City Number: 231
City Name: egvekinot, City Number: 232
City not found
City Name: plouzane, City Number: 233
City Name: kenai, City Number: 234
City Name: umm lajj, City Number: 235
City Name: erzincan, City Number: 236
City Name: ramotswa, City Number: 237
City Name: eston, City Number: 238
City Name: touros, City Number: 239
City Name: chiang klang, City Number: 240
City Name: khani, City Number: 241
City Name: cuamba, City Number: 242
City Name: llorente, City Number: 243
City not found
City Name: cabras, City Number: 244
City Name: ketchikan, City Number: 245
City Name: beidao, City Number: 246
City Name: syracuse, City Number: 247
City Name: balkanabat, City Number: 248
City Name: ingham, City Number: 249
City Name: lagoa, City Number: 250
City Name: ucluelet, City Number: 251
City Name: najran, City Number: 252
City Name: barra, City Number: 253
City Name: rebrikha, City Number: 254
City Name: kasongo-lunda, City Number: 255
City Name: porto santo, City Number: 256
City Name: taos, City Number: 257
City Name: baoning, City Number: 258
City Name: bronnoysund, City Number: 259
City Name: klatovy, City Number: 260
City Name: vila do maio, City Number: 261
City Name: mafinga, City Number: 262
City Name: margate, City Number: 263
City Name: morant bay, City Number: 264
City Name: tupelo, City Number: 265
City Name: dustlik, City Number: 266
City Name: narsaq, City Number: 267
City Name: gloversville, City Number: 268
City Name: itoman, City Number: 269
City Name: calama, City Number: 270
City Name: iquique, City Number: 271
City Name: udachnyy, City Number: 272
City Name: thurso, City Number: 273
City Name: masvingo, City Number: 274
City Name: belmonte, City Number: 275
City Name: saint-francois, City Number: 276
City not found
City Name: cayenne, City Number: 277
City Name: atar, City Number: 278
City Name: nacala, City Number: 279
City not found
City Name: ferme-neuve, City Number: 280
City Name: dezful, City Number: 281
City Name: te anau, City Number: 282
City Name: clyde river, City Number: 283
City Name: port macquarie, City Number: 284
City Name: alice springs, City Number: 285
City Name: saint george, City Number: 286
City Name: safaga, City Number: 287
City Name: pimentel, City Number: 288
City Name: saratov, City Number: 289
City Name: maragogi, City Number: 290
City Name: los llanos de aridane, City Number: 291
City Name: batticaloa, City Number: 292
City not found
City Name: muli, City Number: 293
City Name: cairo, City Number: 294
City Name: zhangye, City Number: 295
City Name: itarema, City Number: 296
City Name: cap malheureux, City Number: 297
City Name: nantucket, City Number: 298
City Name: juneau, City Number: 299
City Name: havre-saint-pierre, City Number: 300
City Name: kutum, City Number: 301
City Name: norman wells, City Number: 302
City Name: santa isabel, City Number: 303
City Name: kenora, City Number: 304
City Name: farim, City Number: 305
City Name: lima duarte, City Number: 306
City not found
City Name: caravelas, City Number: 307
City Name: gogrial, City Number: 308
City Name: seoul, City Number: 309
City not found
City Name: mehamn, City Number: 310
City Name: palmer, City Number: 311
City Name: marawi, City Number: 312
City Name: kruisfontein, City Number: 313
City Name: shingu, City Number: 314
City Name: ekhabi, City Number: 315
City Name: karasjok, City Number: 316
City Name: sioux lookout, City Number: 317
City Name: atherton, City Number: 318
City Name: corinto, City Number: 319
City Name: cairns, City Number: 320
City Name: pucallpa, City Number: 321
City Name: vallenar, City Number: 322
City Name: amethi, City Number: 323
City Name: cherskiy, City Number: 324
City Name: bonfim, City Number: 325
City Name: aras, City Number: 326
City Name: buldana, City Number: 327
City not found
City Name: castro, City Number: 328
City not found
City not found
City Name: panama city, City Number: 329
City Name: hambantota, City Number: 330
City Name: gizo, City Number: 331
City Name: ostrovnoy, City Number: 332
City Name: broken hill, City Number: 333
City Name: shestakovo, City Number: 334
City Name: bougouni, City Number: 335
City Name: mar del plata, City Number: 336
City Name: fare, City Number: 337
City Name: okhotsk, City Number: 338
City Name: bismarck, City Number: 339
City not found
City Name: baglung, City Number: 340
City Name: brewster, City Number: 341
City Name: shelburne, City Number: 342
City Name: trinidad, City Number: 343
City Name: barahona, City Number: 344
City Name: cheney, City Number: 345
City not found
City Name: mount isa, City Number: 346
City Name: ahipara, City Number: 347
City not found
City Name: hamilton, City Number: 348
City Name: hovd, City Number: 349
City Name: tura, City Number: 350
City Name: nova odesa, City Number: 351
City Name: kyabram, City Number: 352
City not found
City Name: plettenberg bay, City Number: 353
City Name: catia la mar, City Number: 354
City not found
City Name: lerwick, City Number: 355
City Name: vardo, City Number: 356
City Name: glen carbon, City Number: 357
City Name: kirakira, City Number: 358
City Name: anqiu, City Number: 359
City Name: zhigansk, City Number: 360
City Name: tuni, City Number: 361
City Name: cabo san lucas, City Number: 362
City Name: dawei, City Number: 363
City Name: barranca, City Number: 364
City Name: vicuna, City Number: 365
City Name: pandhana, City Number: 366
City Name: alta gracia, City Number: 367
City Name: cockburn town, City Number: 368
City Name: vestmanna, City Number: 369
City Name: helena, City Number: 370
City Name: goundam, City Number: 371
City Name: turbat, City Number: 372
City Name: novosergiyevka, City Number: 373
City Name: anzio, City Number: 374
City Name: coquimbo, City Number: 375
City Name: stonewall, City Number: 376
City Name: saint-augustin, City Number: 377
City Name: lata, City Number: 378
City not found
City not found
City Name: juegang, City Number: 379
City Name: deputatskiy, City Number: 380
City Name: tecoanapa, City Number: 381
City Name: bela, City Number: 382
City Name: urumqi, City Number: 383
City Name: constantine, City Number: 384
City Name: half moon bay, City Number: 385
City not found
City Name: peniche, City Number: 386
City Name: gobabis, City Number: 387
City Name: honningsvag, City Number: 388
City Name: boguchany, City Number: 389
City Name: oriximina, City Number: 390
City Name: hobyo, City Number: 391
City Name: nioro, City Number: 392
City Name: iberia, City Number: 393
City Name: marsa matruh, City Number: 394
City Name: ituiutaba, City Number: 395
City Name: dharchula, City Number: 396
City Name: birao, City Number: 397
City Name: kinshasa, City Number: 398
City Name: grants pass, City Number: 399
City Name: ginir, City Number: 400
City Name: manadhoo, City Number: 401
City Name: westport, City Number: 402
City Name: hami, City Number: 403
City Name: the valley, City Number: 404
City Name: slobodskoy, City Number: 405
City Name: rawson, City Number: 406
City Name: ajdabiya, City Number: 407
City Name: yarke pole, City Number: 408
City Name: homestead, City Number: 409
City Name: krasnovishersk, City Number: 410
City Name: kavieng, City Number: 411
City Name: san quintin, City Number: 412
City Name: carlos chagas, City Number: 413
City Name: shagonar, City Number: 414
City Name: berbera, City Number: 415
City not found
City Name: saint albans, City Number: 416
City Name: yumen, City Number: 417
City not found
City Name: provideniya, City Number: 418
City Name: lanzhou, City Number: 419
City Name: central point, City Number: 420
City Name: polovinnoye, City Number: 421
City Name: shahpur, City Number: 422
City Name: saint-pierre, City Number: 423
City Name: mana, City Number: 424
City Name: mayumba, City Number: 425
City Name: mursalimkino, City Number: 426
City Name: haines junction, City Number: 427
City Name: siguiri, City Number: 428
City Name: kurayoshi, City Number: 429
City Name: ashland, City Number: 430
City Name: manzhouli, City Number: 431
City Name: saint-leu, City Number: 432
City Name: nurobod, City Number: 433
City Name: talnakh, City Number: 434
City Name: tamandare, City Number: 435
City Name: oranjemund, City Number: 436
City Name: omsukchan, City Number: 437
City Name: huarmey, City Number: 438
City Name: mayo, City Number: 439
City Name: antalaha, City Number: 440
City Name: san isidro, City Number: 441
City Name: barkhan, City Number: 442
City not found
City Name: raudeberg, City Number: 443
City Name: hervey bay, City Number: 444
City not found
City Name: marsh harbour, City Number: 445
City Name: paragominas, City Number: 446
City Name: leiyang, City Number: 447
City Name: phuket, City Number: 448
City Name: parintins, City Number: 449
City Name: valdivia, City Number: 450
City Name: baracoa, City Number: 451
City not found
City Name: lorengau, City Number: 452
City Name: mandalgovi, City Number: 453
City Name: khorixas, City Number: 454
City Name: comodoro rivadavia, City Number: 455
City Name: xingcheng, City Number: 456
City Name: bambanglipuro, City Number: 457
City Name: ipixuna, City Number: 458
City Name: kroya, City Number: 459
City Name: batemans bay, City Number: 460
City Name: kahului, City Number: 461
City Name: ode, City Number: 462
City Name: xuddur, City Number: 463
City Name: fuzhou, City Number: 464
City Name: port blair, City Number: 465
City Name: sidi ali, City Number: 466
City Name: skala, City Number: 467
City Name: tazovskiy, City Number: 468
City Name: caninde de sao francisco, City Number: 469
City Name: puerto colombia, City Number: 470
City Name: lazaro cardenas, City Number: 471
City Name: tautira, City Number: 472
City Name: kupang, City Number: 473
City Name: la paz, City Number: 474
City not found
City Name: bereda, City Number: 475
City Name: luau, City Number: 476
City Name: turukhansk, City Number: 477
City Name: sisimiut, City Number: 478
City Name: gisborne, City Number: 479
City not found
City Name: sola, City Number: 480
City Name: stephenville, City Number: 481
City Name: oktyabrskoye, City Number: 482
City Name: miles city, City Number: 483
City Name: iquitos, City Number: 484
City Name: murray bridge, City Number: 485
City Name: esna, City Number: 486
City Name: lubao, City Number: 487
City Name: port hardy, City Number: 488
City Name: altamira, City Number: 489
City Name: kaya, City Number: 490
City Name: iskateley, City Number: 491
City Name: petatlan, City Number: 492
City Name: ibb, City Number: 493
City Name: bukachacha, City Number: 494
City Name: senanga, City Number: 495
City Name: arlit, City Number: 496
City Name: manggar, City Number: 497
City Name: qaracala, City Number: 498
City Name: azul, City Number: 499
City Name: umm kaddadah, City Number: 500
City Name: prince rupert, City Number: 501
City Name: marzuq, City Number: 502
City Name: kimbe, City Number: 503
City Name: paka, City Number: 504
City Name: melbu, City Number: 505
City Name: orangeburg, City Number: 506
City Name: basco, City Number: 507
City Name: mulege, City Number: 508
City Name: upata, City Number: 509
City Name: mitu, City Number: 510
City not found
City Name: raga, City Number: 511
City Name: tarko-sale, City Number: 512
City not found
City Name: coihaique, City Number: 513
City Name: kosmynino, City Number: 514
City Name: codrington, City Number: 515
City Name: boden, City Number: 516
City Name: uthal, City Number: 517
City Name: taoudenni, City Number: 518
City Name: flin flon, City Number: 519
City Name: alihe, City Number: 520
City Name: togur, City Number: 521
City Name: bulgan, City Number: 522
City Name: pangnirtung, City Number: 523
City Name: farmington, City Number: 524
City Name: aksha, City Number: 525
City Name: narrabri, City Number: 526
City Name: sao filipe, City Number: 527
City Name: nhulunbuy, City Number: 528
City Name: yaan, City Number: 529
City Name: kurilsk, City Number: 530
City Name: madimba, City Number: 531
City not found
City Name: richards bay, City Number: 532
City Name: japura, City Number: 533
City Name: santa isabel do rio negro, City Number: 534
City Name: seymchan, City Number: 535
City not found
City Name: lukiv, City Number: 536
City Name: cetraro, City Number: 537
City Name: forecariah, City Number: 538
City Name: visnes, City Number: 539
City Name: ust-ordynskiy, City Number: 540
City Name: jalingo, City Number: 541
City Name: phan thiet, City Number: 542
City Name: san andres, City Number: 543
City Name: baruun-urt, City Number: 544
City Name: rocha, City Number: 545
City Name: marfino, City Number: 546
City Name: kidal, City Number: 547
City Name: komsomolskiy, City Number: 548
City Name: dondo, City Number: 549
City not found
City Name: mosquera, City Number: 550
City Name: moerai, City Number: 551
City Name: kendari, City Number: 552
City Name: port-gentil, City Number: 553
City Name: casper, City Number: 554
City Name: nizhniy kuranakh, City Number: 555
City not found
City Name: seydi, City Number: 556
City Name: matara, City Number: 557
City Name: walvis bay, City Number: 558
City Name: tokmak, City Number: 559
City Name: yavaros, City Number: 560
City Name: gbarnga, City Number: 561
City Name: sussex, City Number: 562
City Name: almenara, City Number: 563
City not found
City Name: tuy hoa, City Number: 564
City Name: jawhar, City Number: 565
City not found
###Markdown
Convert Raw Data to DataFrame* Export the city data into a .csv.* Display the DataFrame
###Code
#Dictionary containing data pulled from OpenWeather API
city_dict = {'City_ID':city_id,'City':city_names,'Cloudiness':cloud,
'Country':country,'Date':dt,'Humidity':hums,'Lat':lats,
'Lng':lngs,'Max Temp':max_temps,'Wind Speed':ws}
#Creates data frame from dictionary and saves the data to a csv file
city_df = pd.DataFrame(city_dict)
city_csv = city_df.to_csv('random_cities.csv',index=True)
city_df
###Output
_____no_output_____
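###Markdown
As an optional sanity check (not part of the original notebook), the exported CSV can be read back in; this assumes 'random_cities.csv' was written by the cell above and that its first column holds the saved index. The variable name reloaded_city_df is illustrative.
###Code
#Reads the saved city data back in, using the saved index column (illustrative only)
reloaded_city_df = pd.read_csv('random_cities.csv', index_col=0)
reloaded_city_df.head()
###Output
_____no_output_____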
###Markdown
Inspect the data and remove the cities where the humidity > 100%.----Skip this step if there are no cities that have humidity > 100%.
###Code
# Get the indices of cities that have humidity over 100%.
hum_100 = city_df.loc[city_df['Humidity'] > 100].index
print(hum_100)
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
#city_df_2 = city_df.drop(hum_100[0], inplace=False)
#city_df_2
#No cities with humidity over 100%
###Output
_____no_output_____
###Markdown
Plotting the Data* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.* Save the plotted figures as .pngs. Latitude vs. Temperature Plot
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Latitude and Max Temp data from random cities data frame
lat = city_df['Lat']
temp = city_df['Max Temp']
#Creates a scatter plot relating latitude and temperature
#Includes title and axes labels
plt.grid(True,linewidth=0.5)
plt.scatter(lat,temp,edgecolors='black')
plt.title('Latitude vs Temperature')
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
#Saves plot to a PNG file
plt.savefig('Lat_vs_Temp.png',bbox_inches='tight')
#This plot tracks the variation in temperature across the randomized city data frame
#based on latitude. This accounts for both the Northern and Southern Hemisphere.
###Output
_____no_output_____
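###Markdown
The instructions above call for the date of analysis in the plot titles, which the cell above does not include. Below is a minimal optional sketch of one way to do that; the variable name analysis_date is illustrative, and the same pattern could be applied before savefig in each plotting cell.
###Code
#Builds a plot title that includes the date the analysis was run
from datetime import date
analysis_date = date.today().isoformat()
plt.title(f'Latitude vs Temperature ({analysis_date})')
###Output
_____no_output_____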
###Markdown
Latitude vs. Humidity Plot
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Humidity data from random cities data frame
hum = city_df['Humidity']
#Creates scatter plot relating latitude and humidity
plt.grid(True,linewidth=0.5)
plt.scatter(lat,hum,edgecolors='black')
plt.title('Latitude vs Humidity')
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
#plt.show()
#Saves plot to PNG file
plt.savefig('Lat_vs_Hum.png',bbox_inches='tight')
#This plot tracks the variation in humidity across the randomized city data frame
#based on latitude. This accounts for both the Northern and Southern Hemisphere.
###Output
_____no_output_____
###Markdown
Latitude vs. Cloudiness Plot
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Cloudiness data from random cities data frame
cloud = city_df['Cloudiness']
#Creates scatter plot relating latitude and cloudiness
plt.grid(True,linewidth=0.5)
plt.scatter(lat,cloud,edgecolors='black')
plt.title('Latitude vs Cloudiness')
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
#plt.show()
#Saves plot to PNG file
plt.savefig('Lat_vs_Cloud.png',bbox_inches='tight')
#This plot tracks the variation in cloudiness across the randomized city data frame
#based on latitude. This accounts for both the Northern and Southern Hemisphere.
###Output
_____no_output_____
###Markdown
Latitude vs. Wind Speed Plot
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Wind Speed data from random cities data frame
ws = city_df['Wind Speed']
#Creates scatter plot relating latitude and cloudiness
plt.grid(True,linewidth=0.5)
plt.scatter(lat,ws,edgecolors='black')
plt.title('Latitude vs Wind Speed')
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
#plt.show()
#Saves plot to PNG file
plt.savefig('Lat_vs_WS.png',bbox_inches='tight')
#This plot tracks the variation in wind speed across the randomized city data frame
#based on latitude. This accounts for both the Northern and Southern Hemisphere.
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
#Separates all cities in Northern and South Hemispheres
north = city_df.loc[city_df['Lat'] >= 0]
south = city_df.loc[city_df['Lat'] < 0]
###Output
_____no_output_____
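###Markdown
The eight regression cells below repeat the same scatter, regression, annotation, and save steps. As an optional refactoring sketch (the helper name plot_linregress and its arguments are not part of the original notebook), the shared logic could be collected into one function.
###Code
#Optional helper mirroring the repeated steps in the regression cells below:
#scatter plot, fitted line, annotated equation, printed R value, and saved PNG
def plot_linregress(x, y, title, ylabel, annot_xy, filename):
    fig = plt.figure()
    fig.patch.set_facecolor('white')
    plt.scatter(x, y)
    plt.title(title)
    plt.xlabel('Latitude')
    plt.ylabel(ylabel)
    (slope, intercept, rvalue, pvalue, stderr) = sts.linregress(x, y)
    regress_line = x * slope + intercept
    equation = 'y=' + str(round(slope, 3)) + 'x + ' + str(round(intercept, 3))
    plt.plot(x, regress_line, color='red')
    plt.annotate(equation, annot_xy, fontsize=20, color='red')
    print('R value = ' + str(rvalue))
    plt.savefig(filename, bbox_inches='tight')
#Example call, equivalent to the Northern Hemisphere temperature cell below:
#plot_linregress(north['Lat'], north['Max Temp'], 'Northern Hemisphere Latitude vs Temperature',
#                'Temperature (F)', (0, 0), 'Lin_Reg_Lat_v_Temp_N.png')
###Output
_____no_output_____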
###Markdown
Northern Hemisphere - Max Temp vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Max Temp and Latitude data from Northern Hemisphere dataframe
north_temp = north['Max Temp']
north_lat = north['Lat']
#Creates scatter plot relating latitude and temperature
#for Northern Hemisphere cities
plt.scatter(north_lat,north_temp)
plt.title('Northern Hemisphere Latitude vs Temperature')
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(north_lat,north_temp)
regress_line = north_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(north_lat,regress_line,color='red')
plt.annotate(equation,(0,0),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_Temp_N.png',bbox_inches='tight')
#It can be seen from the linear regression line that the data has a negative correlation
#associated with it, with the temperature tending to decrease as latitude increases
#away from the equator toward the pole.
###Output
R value = -0.8777522133935689
###Markdown
Southern Hemisphere - Max Temp vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Max Temp and Latitude data from Southern Hemisphere dataframe
south_temp = south['Max Temp']
south_lat = south['Lat']
#Creates scatter plot relating latitude and temperature
#for Southern Hemisphere cities
plt.scatter(south_lat,south_temp)
plt.title('Southern Hemisphere Latitude vs Temperature')
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(south_lat,south_temp)
regress_line = south_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(south_lat,regress_line,color='red')
plt.annotate(equation,(-55,80),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_Temp_S.png',bbox_inches='tight')
#It can be seen from the linear regression line that the data has a
#moderate positive correlation associated with it, with the temperature
#tending to increase as latitude approaches the equator.
###Output
R value = 0.47109508666515654
###Markdown
Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Humidity and Latitude data from Northern Hemisphere dataframe
north_hum = north['Humidity']
north_lat = north['Lat']
#Creates scatter plot relating latitude and humidity
#for Northern Hemisphere cities
plt.scatter(north_lat,north_hum)
plt.title('Northern Hemisphere Latitude vs Humidity')
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(north_lat,north_hum)
regress_line = north_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(north_lat,regress_line,color='red')
plt.annotate(equation,(30,16),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_Hum_N.png',bbox_inches='tight')
#For cities with humidities between approximately 55% and 90%, the regression line
#shows a clear positive trend, with humidity increasing as latitude increases.
#The points between 20% and 50% humidity also trend upward, but with a much steeper
#slope that is not entirely consistent with the rest of the data shown.
###Output
R value = 0.4116467597720929
###Markdown
Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Humidity and Latitude data from Southern Hemisphere dataframe
south_hum = south['Humidity']
south_lat = south['Lat']
#Creates scatter plot relating latitude and humidity
#for Southern Hemisphere cities
plt.scatter(south_lat,south_hum)
plt.title('Southern Hemisphere Latitude vs Humidity')
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(south_lat,south_hum)
regress_line = south_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(south_lat,regress_line,color='red')
plt.annotate(equation,(-40,32),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_Hum_S.png',bbox_inches='tight')
#While the linear regression line has a positive slope, the correlation is weak,
#so the data does not show a meaningful positive or negative trend with latitude.
#However, it can be stated that a large number of cities from the data set
#tend to have humidities above 60% regardless of latitude.
###Output
R value = 0.39196688894787257
###Markdown
Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Cloudiness and Latitude data from Northern Hemisphere dataframe
north_cloud = north['Cloudiness']
north_lat = north['Lat']
#Creates scatter plot relating latitude and cloudiness
#for Northern Hemisphere cities
plt.scatter(north_lat,north_cloud)
plt.title('Northern Hemisphere Latitude vs Cloudiness')
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(north_lat,north_cloud)
regress_line = north_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(north_lat,regress_line,color='red')
plt.annotate(equation,(25,30),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_Cloud_N.png',bbox_inches='tight')
#The linear regression line indicates a positive correlation, but that trend is driven
#by only around 12 data points in total. For the rest of the plot,
#there seems to be no true correlation. The remaining data not captured by the linear
#regression line is split between two regions of cloudiness: 60-100% and 0-20%.
###Output
R value = 0.34557253149726586
###Markdown
Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Cloudiness and Latitude data from Southern Hemisphere dataframe
south_cloud = south['Cloudiness']
south_lat = south['Lat']
#Creates scatter plot relating latitude and cloudiness
#for Southern Hemisphere cities
plt.scatter(south_lat,south_cloud)
plt.title('Southern Hemisphere Latitude vs Cloudiness')
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(south_lat,south_cloud)
regress_line = south_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(south_lat,regress_line,color='red')
plt.annotate(equation,(-35,25),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_Cloud_S.png',bbox_inches='tight')
#The linear regression line seems to suggest a positive trend in the data,
#but given the limited number of data points and how spread out they are,
#the correlation is weak and no clear trend can be established.
#The data is mostly split between two regions of cloudiness: 80-100% and 0-20%.
###Output
R value = 0.21390572870274271
###Markdown
Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Wind Speed and Latitude data from Northern Hemisphere dataframe
north_ws = north['Wind Speed']
north_lat = north['Lat']
#Creates scatter plot relating latitude and wind speed
#for Northern Hemisphere cities
plt.scatter(north_lat,north_ws)
plt.title('Northern Hemisphere Latitude vs Wind Speed')
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(north_lat,north_ws)
regress_line = north_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(north_lat,regress_line,color='red')
plt.annotate(equation,(20,23),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_WS_N.png',bbox_inches='tight')
#The small slope of the linear regression model suggests that wind speed is roughly
#constant across latitudes. This fits what the plot suggests, as most of the data
#is situated between 0 mph and 15 mph; the few points above 30 mph could reasonably
#be treated as outliers and excluded from this linear regression model.
###Output
R value = 0.1807017111926461
###Markdown
Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
###Code
fig = plt.figure()
fig.patch.set_facecolor('white')
#Pulls Wind Speed and Latitude data from Southern Hemisphere dataframe
south_ws = south['Wind Speed']
south_lat = south['Lat']
#Creates scatter plot relating latitude and wind speed
#for Southern Hemisphere cities
plt.scatter(south_lat,south_ws)
plt.title('Southern Hemisphere Latitude vs Wind Speed')
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
#Creates linear regression line to show trend in data
(slope,intercept,rvalue,pvalue,stderr) = sts.linregress(south_lat,south_ws)
regress_line = south_lat*slope + intercept
#Creates equation for linear regression line, plots it against
#respective scatter plot data, and puts annotation of equation on
#figure. Also prints the R value
equation = 'y=' + str(round(slope,3)) + 'x + ' + str(round(intercept,3))
plt.plot(south_lat,regress_line,color='red')
plt.annotate(equation,(-35,20),fontsize=20,color='red')
print('R value = ' + str(rvalue))
#Saves plot to PNG file
plt.savefig('Lin_Reg_Lat_v_WS_S.png',bbox_inches='tight')
#Like in the previous plot, the linear regression model included here suggests
#little effective change in wind speed as latitude increases. This lines up with
#most of the data in the plot, which is situated between 0 and 10 mph. Knowing this,
#the data points above 15 mph could be argued to be outliers.
###Output
R value = -0.34650816628122094
|
notebook/audit/TextFeatureClassification.ipynb | ###Markdown
Text Feature Classification===Text feature classification of reverts. Messing around with large linear models of text features.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import os
from tqdm import tqdm
import bz2
import sqlite3
import difflib
import gzip
import json
import base64
import pickle
import re
import hashlib
from datetime import datetime
from datetime import timezone
import nltk
import scipy.stats
import para
from itertools import groupby
from collections import Counter, defaultdict
import multiprocessing as mp
import deltas
from deltas.tokenizers import wikitext_split
from deltas import segment_matcher
import sklearn
import sklearn.ensemble
import sklearn.metrics
import sklearn.calibration
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
import scipy.sparse
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
git_root_dir
raw_data_dir = "/export/scratch2/wiki_data"
derived_data_dir = os.path.join(git_root_dir, "data", "derived")
raw_data_dir, derived_data_dir
stub_history_dir = os.path.join(derived_data_dir, 'stub-history-all-revisions')
stub_history_dir
revision_sample_dir = os.path.join(derived_data_dir, 'revision_sample')
working_dir = os.path.join(derived_data_dir, 'audit')
working_dir
###Output
_____no_output_____
###Markdown
Read sample data
###Code
# read in the sample dataframe
s = datetime.now()
revision_sample_dir = os.path.join(derived_data_dir, 'revision_sample')
sample3_filepath = os.path.join(revision_sample_dir, 'sample3_all.pkl')
rev_df = pd.read_pickle(sample3_filepath)
print(f"Sample 3 data loaded in {datetime.now() - s}.")
len(rev_df)
rev_df.head()
###Output
_____no_output_____
###Markdown
Load texts into memory
###Code
audit_dir = os.path.join(derived_data_dir, 'audit')
text_db_filepath = os.path.join(audit_dir, 'text_2020-07-23T13:08:38Z.sqlite')
def get_db(db_filename):
db = sqlite3.connect(
db_filename,
detect_types=sqlite3.PARSE_DECLTYPES
)
db.row_factory = sqlite3.Row
return db
def get_existing_rev_ids(db_filepath):
rev_ids = set()
try:
db = get_db(db_filepath)
cursor = db.execute("SELECT rev_id FROM revisionText")
for result in cursor:
rev_id = result['rev_id']
rev_ids.add(rev_id)
finally:
db.close()
return rev_ids
#text_dict_list = []
rev_id_content_dict = {}
rev_id_comment_dict = {}
try:
db = get_db(text_db_filepath)
cursor = db.execute("SELECT rev_id, content, comment FROM revisionText")
for result in tqdm(cursor, total=1106018):
rev_id = result['rev_id']
rev_id_content_dict[rev_id] = result['content']
rev_id_comment_dict[rev_id] = result['comment']
#comment = result['comment']
#content = result['content']
#text_dict_list.append({
# 'rev_id': rev_id,
# 'content': content,
# 'comment': comment
#})
finally:
db.close()
len(rev_id_content_dict)
rev_ids_with_text = get_existing_rev_ids(text_db_filepath)
len(rev_ids_with_text)
#text_df = pd.DataFrame(text_dict_list)
#print(len(text_df))
#text_df.head()
###Output
_____no_output_____
###Markdown
Add text availability to sample3 revision dataEither join in a dataframe with the text data or just record which entries have text available.
###Code
#df = pd.merge(rev_df, text_df, how='left', on='rev_id')
df = rev_df
#df['has_text'] = ~df.content.isna()
df['has_text'] = df.rev_id.map(lambda rev_id: rev_id in rev_id_content_dict)
np.sum(df.has_text), np.sum(df.has_text) / len(df)
rev_ids_with_text = set(df[df.has_text].rev_id)
df['prev_rev_has_text'] = df.prev_rev_id.map(lambda rev_id: rev_id in rev_ids_with_text)
np.sum(df.prev_rev_has_text), np.sum(df.prev_rev_has_text) / len(df)
np.sum((df.prev_rev_has_text)&(df.has_text))
###Output
_____no_output_____
###Markdown
Mess around with creating some features
###Code
sdf = df[(df.prev_rev_has_text)&(df.has_text)]
len(sdf)
prev_rev_id = sdf.iloc[0].prev_rev_id
curr_rev_id = sdf.iloc[0].rev_id
prev_content = rev_id_content_dict[prev_rev_id]
curr_content = rev_id_content_dict[curr_rev_id]
len(prev_content), len(curr_content)
prev_tokens = wikitext_split.tokenize(prev_content)
curr_tokens = wikitext_split.tokenize(curr_content)
print(list(segment_matcher.diff(prev_tokens, curr_tokens)))
all_removed_tokens = []
all_inserted_tokens = []
for segment in segment_matcher.diff(prev_tokens, curr_tokens):
if segment.name == 'equal':
continue
elif segment.name == 'delete':
removed_tokens = prev_tokens[segment.a1:segment.a2]
#print(' '.join(removed_tokens))
all_removed_tokens.extend(removed_tokens)
elif segment.name == 'insert':
inserted_tokens = curr_tokens[segment.b1:segment.b2]
#print(' '.join(inserted_tokens))
all_inserted_tokens.extend(inserted_tokens)
else:
        raise ValueError('I do not think substitutions are implemented...')
diff = Counter(curr_tokens)
curr_counter = Counter(curr_tokens)
prev_counter = Counter(prev_tokens)
diff.subtract(prev_counter)
len(diff), len(curr_counter), len(prev_counter)
for token, count in diff.items():
if count != 0:
print(f"{repr(token):>40}\t{count}")
rev_id_tokens_dict = {}
c = 0
MAX_TEXTS = 10000
for row in tqdm(sdf.itertuples(), total=len(sdf)):
prev_rev_id = row.prev_rev_id
curr_rev_id = row.rev_id
if prev_rev_id not in rev_id_tokens_dict:
prev_content = rev_id_content_dict[prev_rev_id]
rev_id_tokens_dict[prev_rev_id] = wikitext_split.tokenize(prev_content)
c += 1
if curr_rev_id not in rev_id_tokens_dict:
curr_content = rev_id_content_dict[curr_rev_id]
rev_id_tokens_dict[curr_rev_id] = wikitext_split.tokenize(curr_content)
c += 1
if c >= MAX_TEXTS:
break
len(rev_id_tokens_dict)
word_counts = Counter()
for rev_id, tokens in tqdm(rev_id_tokens_dict.items(), total=len(rev_id_tokens_dict)):
word_counts.update(tokens)
len(word_counts)
word_counts.most_common(20)
len([1 for v in word_counts.values() if v >= 100])
labeled_rev_ids = set()
for row in tqdm(sdf.itertuples(), total=len(sdf)):
prev_rev_id = row.prev_rev_id
curr_rev_id = row.rev_id
if prev_rev_id in rev_id_tokens_dict and curr_rev_id in rev_id_tokens_dict:
labeled_rev_ids.add(curr_rev_id)
len(labeled_rev_ids)
prev_rev_id_dict = {row.rev_id: row.prev_rev_id for row in sdf.itertuples()}
n_features = len([1 for v in word_counts.values() if v >= 100])
n_features
token_index_dict = {tup[0]: i for i, tup in enumerate(word_counts.most_common(n_features))}
len(token_index_dict)
X = np.zeros((len(labeled_rev_ids),n_features))
for row, curr_rev_id in tqdm(enumerate(labeled_rev_ids), total=len(labeled_rev_ids)):
    prev_rev_id = prev_rev_id_dict[curr_rev_id]  # parent revision id of the current revision
prev_tokens = rev_id_tokens_dict[prev_rev_id]
curr_tokens = rev_id_tokens_dict[curr_rev_id]
diff = Counter(curr_tokens)
prev_counter = Counter(prev_tokens)
diff.subtract(prev_counter)
for token, count in diff.items():
if count != 0 and word_counts[token] >= 100:
X[row,token_index_dict[token]] = count
X[row,:] /= max(len(curr_tokens), len(prev_tokens))
X.shape
is_reverted_dict = {row.rev_id: row.is_reverted == 1 for row in sdf.itertuples()}
y = np.array([is_reverted_dict[rev_id] for rev_id in labeled_rev_ids])
y.shape
np.sum(y), np.sum(y) / len(y)
np.sum(X == 0)
# 91% of entries are 0
382788226 / (8263 * 50895)
clf = sklearn.linear_model.LogisticRegression(
penalty='l2',
C=1.0,
solver='lbfgs'
)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.20, random_state=500)
s = datetime.now()
print(clf)
# train the model
md = clf.fit(X_train, y_train)
print(f"{datetime.now() - s}")
# predict with the model
y_pred_test = md.predict(X_test)
y_pred_test_proba = md.predict_proba(X_test)[:,1]
np.sum(y_test == y_pred_test) / len(y_test)
roc_auc = sklearn.metrics.roc_auc_score(y_test, y_pred_test_proba)
roc_auc
# construct the vocabulary on all of the text documents
# this should only include TRAINING documents, not TESTING documents
s = datetime.now()
def dummy(doc):
return doc
count_vectorizer = CountVectorizer(
tokenizer=dummy,
preprocessor=dummy,
max_features=40000
)
count_vectorizer.fit(rev_id_tokens_dict.values())
print(f"{datetime.now() - s}")
# this is the size of the vocabulary
len(count_vectorizer.vocabulary_)
X_docs = []
for curr_rev_id in tqdm(labeled_rev_ids):
X_docs.append(rev_id_tokens_dict[curr_rev_id])
X = count_vectorizer.transform(X_docs)
s = datetime.now()
tfidf = TfidfTransformer()
tfidf.fit(X)
print(f"{datetime.now() - s}")
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.20, random_state=500)
s = datetime.now()
clf = sklearn.linear_model.LogisticRegression(
penalty='l2',
C=1.0,
solver='lbfgs',
max_iter=1000
)
# train the model
md = clf.fit(X_train, y_train)
print(f"{datetime.now() - s}")
y_pred_test = md.predict(X_test)
y_pred_test_proba = md.predict_proba(X_test)[:,1]
pct_predicted_reverted = np.sum(y_pred_test) / len(y_pred_test)
test_acc = np.sum(y_test == y_pred_test) / len(y_test)
roc_auc = sklearn.metrics.roc_auc_score(y_test, y_pred_test_proba)
pct_predicted_reverted, test_acc, roc_auc
X[0,:]
###Output
_____no_output_____
###Markdown
Experimenting with diff features
###Code
diff_list = []
diff_json_filepath = os.path.join(audit_dir, 'diff_2020-07-23T13:08:38Z.ldjson')
skip_count = 0
with open(diff_json_filepath, 'r') as infile:
for line in tqdm(infile, total=len(rev_ids_with_text)):
if np.random.random() >= 0.55:
skip_count += 1
continue
diff = json.loads(line)
diff_list.append(diff)
if len(diff_list) >= 50000: # optional early-stopping condition to reduce concurrently loaded data size
break
len(diff_list), skip_count
rev_id_is_reverted_dict = {row.rev_id: row.is_reverted for row in tqdm(rev_df[rev_df.rev_id.isin(rev_ids_with_text)].itertuples())}
# add reverting information to the diff list
# optionally, can also bring in the content text from the text database
should_add_content_text = False
for diff in tqdm(diff_list):
rev_id = diff['rev_id']
diff['is_reverted'] = rev_id_is_reverted_dict[rev_id]
if should_add_content_text:
try:
db = get_db(text_db_filepath)
cursor = db.execute("SELECT rev_id, content FROM revisionText WHERE rev_id = ?", (rev_id,))
result = cursor.fetchall()
if len(result) > 1:
raise ValueError("WARNING: Duplicated rev_id in database, check integrity.")
if len(result) == 0:
raise ValueError(f"Failed to find rev_id {rev_id} in database.")
result = result[0]
curr_content = result['content']
finally:
db.close()
diff['content'] = curr_content
###Output
100%|██████████| 50000/50000 [00:00<00:00, 154547.81it/s]
###Markdown
Compute odds ratios to identify representative words
###Code
rev_id_is_reverted_dict = {row.rev_id: row.is_reverted for row in tqdm(rev_df[rev_df.rev_id.isin(rev_ids_with_text)].itertuples())}
# compute counts for the reverted reverts only, in order to compute Odds Ratios
# oc = occurrence count (document frequency)
content_oc = Counter()
removed_oc = Counter()
inserted_oc = Counter()
reverted_content_oc = Counter()
reverted_removed_oc = Counter()
reverted_inserted_oc = Counter()
diff_json_filepath = os.path.join(audit_dir, 'diff_2020-07-23T13:08:38Z.ldjson')
with open(diff_json_filepath, 'r') as infile:
for line in tqdm(infile, total=len(rev_ids_with_text)):
diff = json.loads(line)
content_set = set(diff['content_tokens'])
removed_set = set(diff['removed_tokens'])
inserted_set = set(diff['inserted_tokens'])
content_oc.update(content_set)
removed_oc.update(removed_set)
inserted_oc.update(inserted_set)
if rev_id_is_reverted_dict[diff['rev_id']] == 1:
reverted_content_oc.update(content_set)
reverted_removed_oc.update(removed_set)
reverted_inserted_oc.update(inserted_set)
print(f"Content tokens: {len(content_oc)} (reverted {len(reverted_content_oc)})")
print(f"Removed tokens: {len(removed_oc)} (reverted {len(reverted_removed_oc)})")
print(f"Inserted tokens: {len(inserted_oc)} (reverted {len(reverted_inserted_oc)})")
print(f"Content tokens: {len(content_oc)} (reverted {len(reverted_content_oc)})")
print(f"Removed tokens: {len(removed_oc)} (reverted {len(reverted_removed_oc)})")
print(f"Inserted tokens: {len(inserted_oc)} (reverted {len(reverted_inserted_oc)})")
# print some summary statistics
print("Token document frequency in reverted revisions")
for counter_name, counter in zip(['Article Content', 'Removals', 'Insertions'], [reverted_content_oc, reverted_removed_oc, reverted_inserted_oc]):
print(counter_name)
print('='*41)
for token, count in counter.most_common(14):
if token == '\n':
token = 'NEWLINE'
elif token == ' ':
token = 'WHITESPACE'
print(f"{token:>30} {count:>10}")
print()
def compute_token_odds_ratios(total_oc, reverted_oc, n=10000, min_freq=5):
token_odds_ratio_list = []
total_all_tokens_count = sum(total_oc.values())
reverted_all_tokens_count = sum(reverted_oc.values())
considered_tokens_count = 0
for token, total_count in tqdm(total_oc.most_common(n)):
if total_count < min_freq:
break
considered_tokens_count += 1
reverted_count = reverted_oc[token] if token in reverted_oc else 0
nonreverted_count = total_count - reverted_count
otherToken_nonreverted_count = (total_all_tokens_count - reverted_all_tokens_count) - nonreverted_count
otherToken_reverted_count = reverted_all_tokens_count - reverted_count
if nonreverted_count == 0:
odds_ratio = 999
else:
odds_ratio = (reverted_count * otherToken_nonreverted_count) / (otherToken_reverted_count * nonreverted_count)
token_odds_ratio_list.append((token, odds_ratio, reverted_count, total_count))
if considered_tokens_count != n:
print(f"Due to minimum frequency threshold, considered only {considered_tokens_count} / {n} top tokens (total unique: {len(total_oc)}).")
token_odds_ratio_list.sort(key=lambda tup: tup[1], reverse=True)
return token_odds_ratio_list
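# Worked example of the odds-ratio arithmetic above, with illustrative numbers (not from the data):
# if a token appears in 20 reverted and 80 non-reverted documents (total_count = 100), while across
# all tokens there are 2,000 reverted and 8,000 non-reverted occurrences, then
# otherToken_reverted_count = 2000 - 20 = 1980 and otherToken_nonreverted_count = 8000 - 80 = 7920,
# giving an odds ratio of (20 * 7920) / (1980 * 80) = 1.0: the token is exactly as common in reverted
# revisions as the average token, while values above 1 indicate over-representation among reverts.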
for counter_name, total_oc, reverted_oc in zip(['Article Content', 'Removals', 'Insertions'], [content_oc, removed_oc, inserted_oc], [reverted_content_oc, reverted_removed_oc, reverted_inserted_oc]):
token_odds_ratio_list = compute_token_odds_ratios(total_oc, reverted_oc, n=50000, min_freq=500)
print(counter_name)
print('='*41)
for tup in token_odds_ratio_list[:40]:
token, odds_ratio, reverted_count, total_count = tup
if token == '\n':
token = 'NEWLINE'
elif token == ' ':
token = 'WHITESPACE'
elif token.isspace():
token = 'WHITESPACE+'
print(f"{token:>30} {odds_ratio:>10.3f} ({reverted_count} / {total_count} = {reverted_count / total_count*100:.2f}%)")
###Output
100%|██████████| 50000/50000 [00:00<00:00, 285876.00it/s]
###Markdown
Compute the vocabularies
###Code
content_counter = Counter()
removed_counter = Counter()
inserted_counter = Counter()
include_bigrams = False
def get_bigrams(token_list):
ts = token_list
return [ts[i] + "_" + ts[i+1] for i in range(len(ts) - 1)]
for diff in tqdm(diff_list, desc='Generating word counts'):
content_counter.update(diff['content_tokens'])
removed_counter.update(diff['removed_tokens'])
inserted_counter.update(diff['inserted_tokens'])
if include_bigrams:
content_counter.update(get_bigrams(diff['content_tokens']))
removed_counter.update(get_bigrams(diff['removed_tokens']))
inserted_counter.update(get_bigrams(diff['inserted_tokens']))
len(content_counter), len(removed_counter), len(inserted_counter)
# print some summary statistics
for counter_name, counter in zip(['Article Content', 'Removals', 'Insertions'], [content_counter, removed_counter, inserted_counter]):
print(counter_name)
print('='*41)
for token, count in counter.most_common(20):
if token == '\n':
token = 'NEWLINE'
elif token == ' ':
token = 'WHITESPACE'
print(f"{token:>30} {count:>10}")
print()
content_vocabulary = [token for token, count in content_counter.most_common(20000)]
removed_vocabulary = [token for token, count in removed_counter.most_common(10000)]
inserted_vocabulary = [token for token, count in inserted_counter.most_common(10000)]
len(content_vocabulary), len(removed_vocabulary), len(inserted_vocabulary)
# construct the vocabulary on all of the text documents
# this should only include TRAINING documents, not TESTING documents
# for now, it seems mostly innocent to compute the vocab from all documents
def dummy(doc):
return doc
def get_count_vectorizer(vocabulary):
vectorizer = CountVectorizer(
tokenizer=dummy,
preprocessor=dummy,
vocabulary=vocabulary
)
return vectorizer
def stream_dict_key(diffs, key):
for diff in diffs:
yield diff[key]
s = datetime.now()
content_vectorizer = get_count_vectorizer(content_vocabulary)
X_content = content_vectorizer.fit_transform(stream_dict_key(diff_list, 'content_tokens'))
print(f"Built CountVectorizer for full-page tokens in {datetime.now() - s}")
s = datetime.now()
removed_vectorizer = get_count_vectorizer(removed_vocabulary)
X_removed = removed_vectorizer.fit_transform(stream_dict_key(diff_list, 'removed_tokens'))
print(f"Built CountVectorizer for removed tokens in {datetime.now() - s}")
s = datetime.now()
inserted_vectorizer = get_count_vectorizer(inserted_vocabulary)
X_inserted = inserted_vectorizer.fit_transform(stream_dict_key(diff_list, 'inserted_tokens'))
print(f"Built CountVectorizer for inserted tokens in {datetime.now() - s}")
X_content.shape, X_removed.shape, X_inserted.shape
X = scipy.sparse.hstack((X_content, X_removed, X_inserted))
X.shape
y = np.array([diff['is_reverted'] for diff in diff_list])
y.shape
# percentage reverted in this sample
np.sum(y) / len(y)
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.20, random_state=500)
s = datetime.now()
clf = sklearn.linear_model.LogisticRegression(
penalty='l2',
C=1.0,
solver='lbfgs',
max_iter=1000
)
#clf = sklearn.svm.LinearSVC(
# C=0.1,
# dual=False,
#)
clf = sklearn.linear_model.SGDClassifier(
loss='log',
penalty='l2',
early_stopping=False,
validation_fraction=0.05,
verbose=1,
)
# scaling can help some of the solvers converge more rapidly...
X_train = sklearn.preprocessing.scale(X_train, with_mean=False)
X_test = sklearn.preprocessing.scale(X_test, with_mean=False)
# train the model
md = clf.fit(X_train, y_train)
print(f"{datetime.now() - s}")
y_pred_test = md.predict(X_test)
y_pred_test_proba = md.predict_proba(X_test)[:,1]
#y_pred_test_proba = 1 / (1 + np.exp(-md.decision_function(X_test))) # can use as lazy eval for models without a probability output
pct_predicted_reverted = np.sum(y_pred_test) / len(y_pred_test)
test_acc = np.sum(y_test == y_pred_test) / len(y_test)
roc_auc = sklearn.metrics.roc_auc_score(y_test, y_pred_test_proba)
pct_predicted_reverted, test_acc, roc_auc
clf.coef_.shape
content_token_weights = list(zip(content_vocabulary, clf.coef_[0,:20000]))
removed_token_weights = list(zip(removed_vocabulary, clf.coef_[0,20000:30000]))
inserted_token_weights = list(zip(inserted_vocabulary, clf.coef_[0,30000:]))
content_token_weights.sort(key=lambda tup: abs(tup[1]), reverse=True)
removed_token_weights.sort(key=lambda tup: abs(tup[1]), reverse=True)
inserted_token_weights.sort(key=lambda tup: abs(tup[1]), reverse=True)
for token, weight in content_token_weights[:30]:
print(f"{token:>20} {weight:.3f}")
for token, weight in removed_token_weights[:50]:
print(f"{token:>20} {weight:.3f}")
for token, weight in inserted_token_weights[:50]:
print(f"{token:>20} {weight:.3f}")
###Output
platforms -9.751
my 6.570
Meanwhile -4.333
big 4.331
decade -3.890
Jew 3.760
978 -3.692
209 3.676
Jeff 3.673
suspected -3.648
elite 3.531
website -3.498
very 3.484
Is 3.415
little 3.289
Use -3.261
practiced 3.200
124 3.195
IT 3.166
DF -3.090
duration 3.086
rifle -3.078
controversial 3.072
jstor -3.031
September -2.940
linear 2.928
Portuguese -2.923
killed 2.922
you 2.921
present -2.911
because 2.890
affected 2.878
archive -2.874
Opening -2.862
gallery -2.843
2.827
Ulster -2.820
df -2.816
introduced -2.809
Chiang -2.798
Hispanic 2.791
hunting -2.749
fertility -2.743
alt -2.736
alcohol -2.704
dead -2.694
save -2.680
immigrants 2.675
gay 2.663
disambiguation -2.655
|
notebooks/etag_and_last_modified.ipynb | ###Markdown
ETag
###Code
import feedparser
url = "https://python-podcast.de/show/feed/podcast/mp3/rss.xml"
url = "https://freakshow.fm/feed/m4a"
d = feedparser.parse(url)
d.etag
d2 = feedparser.parse(url, etag=d.etag)
d2.status
d2.headers
d3 = feedparser.parse(url, etag='"620601c4-41c53c"')
d3.status
d3.etag
d3.debug_message
d3.headers
###Output
_____no_output_____
###Markdown
Last Modified
###Code
url = "https://python-podcast.de/show/feed/podcast/mp3/rss.xml"
d = feedparser.parse(url)
d.headers
url = "https://freakshow.fm/feed/m4a"
d = feedparser.parse(url)
d.headers
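# Optional sketch, not in the original notebook: feedparser can also send a conditional request
# using the Last-Modified value from a previous fetch, mirroring the ETag example above.
# This assumes the server returned a Last-Modified header, in which case d.modified is set.
d4 = feedparser.parse(url, modified=d.modified)
d4.status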
###Output
_____no_output_____ |
prsm_demo_2.ipynb | ###Markdown
PRSMPCA with Random Matrix Theoretic Spectral MeasuresPRSM is a python package applying Random Matrix Theory (RMT) to high-dimensional PCA. PRSM fits densities to the empirical eigenvalue distribution with the goal of estimating various quantities associated with outlying eigenvalues. This includes diagnostic quantities which may be used to test whether or not a candidate eigenvalue is an outlier, or whether neighboring outlying eigenvalues are too close to trust estimates of the overlap between sample and population eigenvectors. Brief random matrix theory overviewThe main model of random matrix theory applications to high-dimensional data is as follows. We consider an $N \times M$ data matrix $X$ of $N$ independent samples of $M$-dimensional data. If the spectral measure of the population covariance matrix $\Sigma := N^{-1} \mathbb{E} X X^T$ converges to a measure $H$, then the spectral measure of the sample covariance matrix converges to a deterministic measure $\rho(x)$ which is a function of $H$ defined below. The Stieltjes transform of $\rho$ is defined by,$$m (z) = \int \frac{ \rho (x) }{x -z} d x.$$The matrices $ N^{-1} X X^T$ and $N^{-1} X^T X$ have the same eigenvalues up to $|M-N|$ zeros and so the empirical spectral measure of the latter matrix also converges to a deterministic measure which we denote by $\tilde{\rho}$ with Stieltjes transform $\tilde{m} (z)$ related to $m(z)$ by$$\gamma z m(z) = (1- \gamma) + z \tilde{m} (z)$$where $\gamma$ is the limit of the ratio $M/N$. The function $\tilde{m}(z)$ satisfies the functional equation,$$\tilde{m} (z) = - \left( z - \gamma \int \frac{ x d H (x) }{ 1 + x \tilde{m} (z) } \right)^{-1}$$This may also be used to define $\tilde{m}(z)$ as the holomorphic solution of the above equation satisfying $\tilde{m}(z) \sim z^{-1}$ as $|z| \to \infty$, which then in turn defines $m(z)$ and the corresponding measures through the Stieltjes inversion formula. Theoretical behavior of outliersLet $\psi ( \alpha )$ be the function,$$\psi ( \alpha) := \alpha + \gamma \alpha \int \frac{ x d H (x ) }{ \alpha - x }.$$The functional relation $$\psi ( -1 / \tilde{m} (z) ) = z$$holds. Denote by $\mathfrak{p}$ the point,$$\mathfrak{p} := \inf_p \{ p' : \psi' (p' ) > 0 \mbox{ } \forall \mbox{ }p' > p \}.$$Any population eigenvalue of $\Sigma$ such that $p > \mathfrak{p}$ gives rise to an outlying eigenvalue $s$ of the sample covariance matrix. The locations of $s$ and $p$ are related asymptotically by,$$p \approx - \frac{1}{ \tilde{m} (s) }.$$Moreover, if the population eigenvalue $p$ is simple, the squared inner product between sample and population eigenvectors converges to the deterministic quantity$$- \frac{ s \tilde{m}(s) }{ \tilde{m}' (s) }.$$Both the sample eigenvalue and the squared overlap of the sample and population eigenvectors exhibit fluctuations. When the population eigenvalue is simple, the sample eigenvalue has Gaussian fluctuations. If the population eigenvector is localized then the variance depends on the fourth cumulant of the matrix entries. In the event that this cumulant vanishes (e.g., the Gaussian distribution) the variance is known to be,$$\mathrm{Var} ( s) \approx \frac{2}{N \tilde{m}''(s)}.$$If the population eigenvector is delocalized, then due to universality this expression is expected to hold asymptotically. 
Under similar conditions, the variance of the squared overlap between sample and population eigenvectors is,$$\mathrm{Var} ( (v_s \cdot v_p)^2 ) \approx \frac{1}{3N} \frac{ \tilde{m}'''(s) ( \tilde{m} (s) )^4}{s^2 ( \tilde{m}' (s) )^4} \approx ( v_s \cdot v_p )^2 \frac{\tilde{m}'''(s) \tilde{m}(s)^2 }{ 3N \tilde{m}'(s)^2}$$ Practical considerationsIn practice, it can be difficult to decide which eigenvalues are truly outliers and which belong to the spectral bulk. In the case that $H$ is trivial, Johnstone proposed a hypothesis testing framework based on the fact that under the absence of outliers, the limiting distribution of the largest eigenvalue is Tracy-Widom. The p-value is then $\mathbb{P}_{TW} ( \lambda > s)$.A goal of PRSM is to build on this approach by reporting further diagnostic quantities and additionally treating the case in which the typical square-root behavior and Tracy-Widom fluctuations are absent. PRSM aims to estimate the various quantities listed above, including the variances of the sample eigenvalue and the squared overlap. As seen from the above formulas, all quantities may be related to the limiting density of states $\rho (x)$.Due to the functional relation between $\psi$ and $\tilde{m}$, one can instead try to estimate $H$. This is the approach proposed by Dey-Lee and El Karoui. Our approach is different and based on the observation that it is not necessary to estimate $H$; in fact, in the theoretical set-up above there is no reason why one cannot simply estimate the density $\rho$ by the empirical measure$$\rho(x) \approx \frac{1}{M} \sum_{i=1}^M \delta_{ \lambda_i} (x).$$Indeed, the limit $\rho$ is somewhat of a theoretical abstraction. Nonetheless, this approximation has some limitations. The main limitation is the fact that the approximation$$m(s) \approx \frac{1}{M} \sum_i \frac{1}{ \lambda_i - s}$$breaks down near the spectral edge. Indeed, the limit of the RHS is $\infty$ as $s$ approaches the edge of the spectrum, while often the LHS has a finite limit as $s$ approaches the edge of the support of $\rho(x)$. PRSM functionalityPRSM seeks to fit a spectral measure to the empirical eigenvalue measure by fitting a continuous density to a fraction of the eigenvalues near the spectral edge, and just using the empirical measure for the remaining eigenvalues. A common observation in RMT is that the limiting density of states has a square root behavior, $$\rho(x) \approx \sqrt{E-x}.$$For a given exponent $\alpha >0$ and cutoff $n$, PRSM approximates the empirical eigenvalue measure as,$$\frac{1}{M} \sum_i \delta_{ \lambda_i } (x) \approx c (E-x)^\alpha \mathbb{1}_{\{ \lambda_n < x < E \}} + \frac{1}{M} \sum_{i > n} \delta_{ \lambda_i} (x).$$The exponent $\alpha >0$ may be chosen by the user (the generic choice in RMT is $\alpha = 0.5$). PRSM also provides a tool to find $\alpha$; however, this appears to be somewhat sensitive and requires large datasets. PRSM also allows for higher-order corrections to the continuous density.After finding the approximate spectral measure, PRSM calculates the quantities listed above as well as the distance between the sample eigenvalue and the edge $E$ of the fitted spectral measure. In the case that $\alpha = 0.5$, PRSM moreover finds the correct scaling constants for the TW-distribution and reports the mean and variance of a Tracy-Widom random variable under the null hypothesis that $\Sigma$ contains no outliers.
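As a rough, self-contained illustration of the plug-in estimates above (this is not the PRSM API; `plug_in_outlier_estimates`, `evals` and `s` are hypothetical names, and we gloss over the distinction between $m$ and $\tilde{m}$ by treating `evals` as whichever eigenvalue set is used to approximate $\tilde{\rho}$):

```python
import numpy as np

def plug_in_outlier_estimates(evals, s):
    """Naive point-mass estimates of the outlier formulas.

    evals : bulk eigenvalues approximating the spectral measure
    s     : an outlying sample eigenvalue to the right of the bulk
    """
    d = evals - s
    m = np.mean(1.0 / d)       # empirical Stieltjes transform at s
    m1 = np.mean(1.0 / d**2)   # its first derivative
    p_hat = -1.0 / m           # population eigenvalue estimate, p ~ -1/m(s)
    overlap = -s * m / m1      # squared overlap estimate, -s m(s) / m'(s)
    return p_hat, overlap
```

As emphasized above, this naive point-mass approximation degrades when $s$ is close to the spectral edge, which is exactly the regime the fitted edge density is designed to handle.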
Our view is that the distance between the outlier and the spectral edge, when normalized by the standard deviation of the sample eigenvalue or of the TW distribution, can serve as a meaningful diagnostic to alert the user that an eigenvalue may be too close to the spectrum and that estimates of the population eigenvalue and of the squared overlap may not be reliable. In fact it is known (Bloemendal et al.) that when $s-E$ is on the same scale as the $N^{-2/3}$ Tracy-Widom fluctuations, the sample eigenvector no longer has any correlation with the population eigenvector - the goal of PRSM is essentially to fit the data to find what this length scale is.An additional diagnostic is the RHS of,$$\frac{ \mathrm{Var} ( (v_s \cdot v_p )^2 )}{ (v_s \cdot v_p)^2 } \approx \frac{\tilde{m}'''(s) \tilde{m}(s)^2}{3N \tilde{m}'(s)^2}.$$As $s$ approaches $E$ this behaves, in the square-root setting, as $(N (s-E)^{3/2} )^{-1}$. This quantity is large only when $s$ is on the Tracy-Widom scale, and so can serve as another diagnostic. In any case it is an estimate of the relative error of the squared overlap, and if it is large, the estimate of the squared overlap may not be reliable. This observation does not depend on the square root behavior. Demonstration
###Code
import numpy as np
import matplotlib.pyplot as plt
from prsm import simulation_tools as st
from prsm import spectrum
#simulate some data to run the methods on
N, M = 1250, 500
sigmas = np.concatenate((np.array([10, 8, 5]), np.ones(M-3)))
#sample a covariance matrix with population covariance with eigenvalues = sigmas
U, S, V = st.samples(N, M, sigmas)
#returns left/right singular vectors of data matrix and eigenvalues of N^{-1}X^T X
spec = spectrum.spectrum(S, N, M, nout =3) #choose 3 outliers.
#now we fit a density to the edge. Empirical CDF is calculated and raised to the 2/3-power.
#Fits the empirical CDF at nbins = 500 points, using largest n=100 eigenvalues and a k=3 degree polynomial
spec.fit(nbins = 500,n = 100, k=3)
#plot a histogram of the first n=100 bulk eigenvalues S and an overlay of the fitted density:
spec.plot_density()
#plot all of the bulk eigenvalues and the portion fitted to a continuous density:
spec.plot_sm()
#calculate and report outlier quantities
spec.calc_outlier_quants()
_ = spec.report()
#report outlier diagnostic quantities:
_ = spec.outlier_diagnostics()
#investigate whether a hypothetical eigenvalue placed just outside the fitted spectral edge would register as an outlier, and how close it sits to the edge:
from prsm.outlier import outlier
from prsm.methods import calc_outlier, print_outlier_edge
fourth_ev = outlier(N, M, sample=spec.appr_esd.dens.r+0.1)
calc_outlier(fourth_ev, spec.appr_esd)
_ = print_outlier_edge(fourth_ev, spec)
#we can try again with automatic outlier finding.
sp_auto = spectrum.spectrum(S, N, M)
sp_auto.auto_sq_fit(nbins=500, n=100, k=1)
###Output
Number of outliers found: 3
Index of eigenvalue that failed test: 3
Reason: Eigenvalue within threshold sample stds of spectral edge
|
Assignment-.ipynb | ###Markdown
Exploring and Analyzing the IBM HR Employee Attrition Dataset
###Code
#packages to load
import numpy as np
#for data processing, i.e. reading the csv file
import pandas as pd
#for visualization
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
###Output
_____no_output_____
###Markdown
Reading the Data Set
###Code
#loading the dataframe declared as df
df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")
df.head()
###Output
_____no_output_____
###Markdown
Displaying all Rows and Columns
###Code
#using the set option
pd.set_option('display.max_columns', 1500)
df.head()
#rows and columns that my df has
df.shape
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
#datatypes of all the columns
df.info()
df.columns
###Output
_____no_output_____
###Markdown
Summary of the whole DataFrame
###Code
#summary of the whole data set
#measures of central tendency - description of each column
df.describe()
###Output
_____no_output_____
###Markdown
Some of the columns are not normally distributed, since their mean != median; this can be checked as shown below
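A small added sketch (not part of the original notebook) that makes this concrete by comparing mean, median and skewness across the numeric columns:

```python
# Columns with a large gap between mean and median / high skew are candidates
# for transformation or at least for careful interpretation.
numeric = df.select_dtypes(include="number")
skew_summary = pd.DataFrame({
    "mean": numeric.mean(),
    "median": numeric.median(),
    "skew": numeric.skew(),
}).sort_values("skew", ascending=False)
skew_summary.head(10)
```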
###Code
#Attrition and JobRole are categorical columns
cat_df = df[['Attrition', 'JobRole']]
cat_df
# a column that is continous
con_df = df[['DistanceFromHome']]
con_df
###Output
_____no_output_____
###Markdown
Checking for null values
###Code
# handling missing values
df.isnull().sum()
###Output
_____no_output_____
###Markdown
As shown, the dataset contains no null values. Next, look at columns that hold only a single unique value, since they carry no information and can be dropped
###Code
df["EmployeeCount"].unique().sum()
df["StandardHours"].unique().sum()
df.EmployeeCount.shape
df = df.drop(["EmployeeCount", "StandardHours"], axis = 1)
df.head()
###Output
_____no_output_____
###Markdown
A small data frame that contains the counts of No and Yes in the attrition
###Code
attrition_count = pd.DataFrame(df["Attrition"].value_counts())
attrition_count
# pie chart for the data attrition
#can use , explode = (0.2,0)
plt.pie(attrition_count["Attrition"], labels = ["No", "Yes"])
###Output
_____no_output_____
###Markdown
OBSERVATION: The majority of employees in the data did not attrite
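A quick added check (a hypothetical cell, not in the original notebook) that expresses this class imbalance as percentages:

```python
# Roughly what fraction of employees attrited vs. stayed.
attrition_rate = df["Attrition"].value_counts(normalize=True) * 100
attrition_rate.round(1)
```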
###Code
attrition_count = sns.countplot(df["Attrition"])
df.head(5)
df.DistanceFromHome.plot.box()
df["JobRole"]
df[["Attrition", "DistanceFromHome"]].groupby("Attrition").mean()
#df[["JobRole", "DistanceFromHome"]].groupby("JobRole").mean()
breakDown = df.groupby(["JobRole", "Attrition"])["DistanceFromHome"].mean()
breakDown
df.JobRole.value_counts()
fig, ax = plt.subplots(figsize=(15,7))
# use unstack()
df.groupby(['JobRole','Attrition']).count()['DistanceFromHome'].unstack().plot(ax=ax)
###Output
_____no_output_____
###Markdown
Monthly Income Comparison by Education and Attrition
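Before the original cells, here is one added way to make this comparison directly (a sketch, not the notebook's original code): the mean monthly income per Education level, split by Attrition.

```python
# Mean MonthlyIncome for each (Education, Attrition) combination.
income_by_edu = (
    df.groupby(["Education", "Attrition"])["MonthlyIncome"]
      .mean()
      .unstack()
      .round(0)
)
income_by_edu
```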
###Code
df.MonthlyIncome
df['average_monthly_income'] = df["MonthlyIncome"].mean()
df.average_monthly_income
df[["Education", "MonthlyIncome"]].groupby("MonthlyIncome").mean()
#comp = df.groupby(["Education", "Attrition"])["average_monthly_income"]
fig, ax = plt.subplots(figsize=(15,7))
# use unstack()
df.groupby(['Education','Attrition']).count()['average_monthly_income'].unstack().plot(ax=ax)
###Output
_____no_output_____ |
week6/Optimization methods.ipynb | ###Markdown
Optimization MethodsUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: **Figure 1** : **Minimizing the cost is like finding the lowest point in a hilly landscape** At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. **Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.To get started, run the following code to import the libraries you will need.
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
1 - Gradient DescentA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 = [[ 1.74604067]
[-0.75184921]]
W2 = [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 = [[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
###Markdown
**Expected Output**: **W1** [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]] **b1** [[ 1.74604067] [-0.75184921]] **W2** [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]] **b2** [[-0.88020257] [ 0.02561572] [ 0.57539477]] A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. - **(Batch) Gradient Descent**:``` pythonX = data_inputY = labelsparameters = initialize_parameters(layers_dims)for i in range(0, num_iterations): Forward propagation a, caches = forward_propagation(X, parameters) Compute cost. cost = compute_cost(a, Y) Backward propagation. grads = backward_propagation(a, caches, parameters) Update parameters. parameters = update_parameters(parameters, grads) ```- **Stochastic Gradient Descent**:```pythonX = data_inputY = labelsparameters = initialize_parameters(layers_dims)for i in range(0, num_iterations): for j in range(0, m): Forward propagation a, caches = forward_propagation(X[:,j], parameters) Compute cost cost = compute_cost(a, Y[:,j]) Backward propagation grads = backward_propagation(a, caches, parameters) Update parameters. parameters = update_parameters(parameters, grads)``` In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this: **Figure 1** : **SGD vs GD** "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). **Note** also that implementing SGD requires 3 for-loops in total:1. Over the number of iterations2. Over the $m$ training examples3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples. **Figure 2** : **SGD vs Mini-Batch GD** "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. **What you should remember**:- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.- You have to tune a learning rate hyperparameter $\alpha$.- With a well-turned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large). 2 - Mini-Batch Gradient descentLet's learn how to build mini-batches from the training set (X, Y).There are two steps:- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. 
Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. - **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini-batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this: **Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:```pythonfirst_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]...```Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$).
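As a quick sanity check of this formula (added for illustration; $m = 148$ matches the test case whose output appears further below):

```python
import math

m, mini_batch_size = 148, 64
num_full = math.floor(m / mini_batch_size)      # 2 full mini-batches of 64
last_size = m - mini_batch_size * num_full      # 148 - 128 = 20 examples
print(num_full, last_size)                      # -> 2 20
```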
###Code
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
## X -> dim (size of input * num of inputs) -> ie each column is a test Case & row represents 1 dimensin
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m)) # Generates permutation of given (0,Range)
shuffled_X = X[:, permutation]
## X [: [2,4,1]] -> this is np.array type: Select All (:) in first Dim,
# Select 2, 4, 1 from the second dimension, the Second Arg is a list that acts as selector in that dimension
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size)
# Floor of the number of full mini batches, Will handle last ( non full ) batch seperately
for k in range(0, num_complete_minibatches):
mini_batch_X = shuffled_X [:, k * mini_batch_size : (k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y [:, k * mini_batch_size : (k + 1) * mini_batch_size]
## ARRAY [ x : y Slice in dim1 , a : b Slice in dim2, ----]
## We want to Slice on Columns as Each Column is a Training data and keep Every Row correspondingly
## Thus we do [ : , k* miniBatchSize : (k+1) * miniBatchSize]
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size :]
mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size :]
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
###Output
shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
###Markdown
**Expected Output**: **shape of the 1st mini_batch_X** (12288, 64) **shape of the 2nd mini_batch_X** (12288, 64) **shape of the 3rd mini_batch_X** (12288, 20) **shape of the 1st mini_batch_Y** (1, 64) **shape of the 2nd mini_batch_Y** (1, 64) **shape of the 3rd mini_batch_Y** (1, 20) **mini batch sanity check** [ 0.90085595 -0.7612069 0.2344157 ] **What you should remember**:- Shuffling and Partitioning are the two steps required to build mini-batches- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128. 3 - MomentumBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations. Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. **Figure 3**: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$. **Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:for $l =1,...,L$:```pythonv["dW" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["W" + str(l+1)])v["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])```**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
###Code
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
this is the Exponentially Weighted Averages of gradients Calcluated as v = Beta * v + (1 - Beta) * dW
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
###Output
v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**: **v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]] **v["db1"]** [[ 0.] [ 0.]] **v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] **v["db2"]** [[ 0.] [ 0.] [ 0.]] **Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: $$ \begin{cases}v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}\end{cases}\tag{3}$$$$\begin{cases}v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v ["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
###Output
W1 = [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 = [[ 1.74493465]
[-0.76027113]]
W2 = [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 = [[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
###Markdown
**Expected Output**: **W1** [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]] **b1** [[ 1.74493465] [-0.76027113]] **W2** [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]] **b2** [[-0.87809283] [ 0.04055394] [ 0.58207317]] **v["dW1"]** [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] **v["db1"]** [[-0.01228902] [-0.09357694]] **v["dW2"]** [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] **v["db2"]** [[ 0.02344157] [ 0.16598022] [ 0.07420442]] **Note** that:- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.- If $\beta = 0$, then this just becomes standard gradient descent without momentum. **How do you choose $\beta$?**- The larger the momentum $\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much. - Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default. - Tuning the optimal $\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$. **What you should remember**:- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$. 4 - AdamAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. **How does Adam work?**1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). 2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). 3. It updates parameters in a direction based on combining information from "1" and "2".The update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_1)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}\end{cases}$$where:- t counts the number of steps taken of Adam - L is the number of layers- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages. - $\alpha$ is the learning rate- $\varepsilon$ is a very small number to avoid dividing by zeroAs usual, we will store all parameters in the `parameters` dictionary **Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:for $l = 1, ..., L$:```pythonv["dW" + str(l+1)] = ... 
(numpy array of zeros with the same shape as parameters["W" + str(l+1)])v["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])s["dW" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["W" + str(l+1)])s["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])```
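Before implementing it, a tiny scalar sketch (added for illustration, not part of the graded exercise) of why the bias-corrected quantities matter: with zero initialization, the raw moving average is biased toward 0 during the first steps, and dividing by $1 - (\beta_1)^t$ removes that bias.

```python
beta1 = 0.9
g = 1.0      # pretend the gradient is constantly 1
v = 0.0
for t in range(1, 6):
    v = beta1 * v + (1 - beta1) * g
    v_corrected = v / (1 - beta1 ** t)
    print(t, round(v, 4), round(v_corrected, 4))
# v crawls from 0.1 toward 1, while v_corrected equals 1 from the first step.
```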
###Code
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
## ADAM = RMS_PROP + Momentum
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
v["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l+1)])
v["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l+1)])
s["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l + 1)])
s["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l + 1)])
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
###Output
v["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] = [[ 0.]
[ 0.]]
v["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] = [[ 0.]
[ 0.]
[ 0.]]
s["dW1"] = [[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] = [[ 0.]
[ 0.]]
s["dW2"] = [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] = [[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**: **v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]] **v["db1"]** [[ 0.] [ 0.]] **v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] **v["db2"]** [[ 0.] [ 0.] [ 0.]] **s["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]] **s["db1"]** [[ 0.] [ 0.]] **s["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] **s["db2"]** [[ 0.] [ 0.] [ 0.]] **Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}\end{cases}$$**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l + 1)] = beta1 * v["dW" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)]
v["db" + str(l + 1)] = beta1 * v["db" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l + 1)] = v["dW" + str(l + 1)] / (1 - np.power(beta1, t))
v_corrected["db" + str(l + 1)] = v["db" + str(l + 1)] / (1 - np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l + 1)] = beta2 * s["dW" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2)
s["db" + str(l + 1)] = beta2 * s["db" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l + 1)] = s["dW" + str(l + 1)] / (1 - np.power(beta2, t))
s_corrected["db" + str(l + 1)] = s["db" + str(l + 1)] / (1 - np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v_corrected["dW" + str(l + 1)] / np.sqrt(s["dW" + str(l + 1)] + epsilon)
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v_corrected["db" + str(l + 1)] / np.sqrt(s["db" + str(l + 1)] + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
###Output
W1 = [[ 1.79078034 -0.77819144 -0.69460639]
[-1.23940099 0.69897299 -2.13510481]]
b1 = [[ 1.91119235]
[-0.59477218]]
W2 = [[ 0.48546317 -0.41580308 1.62854186]
[-1.89371033 -0.1559833 -0.21761985]
[ 1.30020326 -0.93841334 -0.00599321]]
b2 = [[-1.04427894]
[-0.12422162]
[ 0.41638106]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
s["dW1"] = [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]]
s["db1"] = [[ 1.51020075e-05]
[ 8.75664434e-04]]
s["dW2"] = [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] = [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]]
###Markdown
**Expected Output**: **W1** [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]] **b1** [[ 1.75225313] [-0.75376553]] **W2** [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.14121081 -1.09245036 -0.16498684]] **b2** [[-0.88529978] [ 0.03477238] [ 0.57537385]] **v["dW1"]** [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] **v["db1"]** [[-0.01228902] [-0.09357694]] **v["dW2"]** [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] **v["db2"]** [[ 0.02344157] [ 0.16598022] [ 0.07420442]] **s["dW1"]** [[ 0.00121136 0.00131039 0.00081287] [ 0.0002525 0.00081154 0.00046748]] **s["db1"]** [[ 1.51020075e-05] [ 8.75664434e-04]] **s["dW2"]** [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04] [ 1.57413361e-04 4.72206320e-04 7.14372576e-04] [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] **s["db2"]** [[ 5.49507194e-05] [ 2.75494327e-03] [ 5.50629536e-04]] You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference. 5 - Model with different optimization algorithmsLets use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
###Code
train_X, train_Y = load_dataset()
###Output
_____no_output_____
###Markdown
We have already implemented a 3-layer neural network. You will train it with: - Mini-batch **Gradient Descent**: it will call your function: - `update_parameters_with_gd()`- Mini-batch **Momentum**: it will call your functions: - `initialize_velocity()` and `update_parameters_with_momentum()`- Mini-batch **Adam**: it will call your functions: - `initialize_adam()` and `update_parameters_with_adam()`
###Code
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
# For Each Epoch you train the minibatch and update parameters
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
#(this is the Train_X , Train_Y)
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
You will now run this 3 layer neural network with each of the 3 optimization methods. 5.1 - Mini-batch Gradient descentRun the following code to see how the model does with mini-batch gradient descent.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.690736
Cost after epoch 1000: 0.685273
Cost after epoch 2000: 0.647072
Cost after epoch 3000: 0.619525
Cost after epoch 4000: 0.576584
Cost after epoch 5000: 0.607243
Cost after epoch 6000: 0.529403
Cost after epoch 7000: 0.460768
Cost after epoch 8000: 0.465586
Cost after epoch 9000: 0.464518
###Markdown
5.2 - Mini-batch gradient descent with momentumRun the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.690741
Cost after epoch 1000: 0.685341
Cost after epoch 2000: 0.647145
Cost after epoch 3000: 0.619594
Cost after epoch 4000: 0.576665
Cost after epoch 5000: 0.607324
Cost after epoch 6000: 0.529476
Cost after epoch 7000: 0.460936
Cost after epoch 8000: 0.465780
Cost after epoch 9000: 0.464740
###Markdown
5.3 - Mini-batch with Adam modeRun the following code to see how the model does with Adam.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.687550
Cost after epoch 1000: 0.173593
Cost after epoch 2000: 0.150145
Cost after epoch 3000: 0.072939
Cost after epoch 4000: 0.125896
Cost after epoch 5000: 0.104185
Cost after epoch 6000: 0.116069
Cost after epoch 7000: 0.031774
Cost after epoch 8000: 0.112908
Cost after epoch 9000: 0.197732
|
notebook/weed_training_01.ipynb | ###Markdown
Building a classifier without distinguishing growth stage (10 weed species) We build a classifier that does not distinguish the weeds' growth stage (seedling vs. grown). The 10 cultivated weed species are hakidamegiku, hosoaogeitou, ichibi, inubie, kosendangusa, mameasagao, mehishiba, ohishiba, oinutade, and shiroza. ■ Downloading the data ・Download cluster.zip
###Code
# How to download a file from Google Drive
# Works for single files only
import requests
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
# File ID taken from the Google Drive share link
file_id = '1R9a1hhjnjX72Ov3dRZcdo8T7qE8da5L0'
# Local file name to save the download as
destination = 'cluster.zip'
download_file_from_google_drive(file_id, destination)
###Output
_____no_output_____
###Markdown
・Unzip cluster.zip
###Code
!unzip cluster.zip
print("clusterファイルの解凍が完了しました。")
###Output
_____no_output_____
###Markdown
■ Creating the dataset ・Display sample images from the dataset (seedling and grown)
###Code
import os
import glob
import matplotlib.pyplot as plt
from PIL import Image
def show_weed():
    # Define the weed names and growth stages as lists
    weed_names = ["hakidamegiku","hosoaogeitou","ichibi","inubie","kosendangusa","mameasagao","mehishiba","ohishiba","oinutade","shiroza"]
    weed_type = ["sprout", "grown"]
    # Specify the folder that stores the data
    input_dir = "./cluster"
    # Number of images to display (species x growth stages)
    hs = len(weed_names)*len(weed_type)
    # Display settings
    col=len(weed_names)
    row=hs/col
    cols=col*4
    rows=row*4
    dpis = 100
    # Figure size and resolution
    fig = plt.figure(figsize=(cols,rows),dpi=dpis)
    # Subplot index counter
    pi=1
    # Display the images
for weed_name in weed_names:
        # First row: seedling images
img_path = os.path.join(input_dir, weed_name, weed_type[0])
img_list = os.listdir(img_path)
plot_num = pi
ax=fig.add_subplot(row, col, plot_num)
ax.set_title(weed_name, fontsize=20)
if plot_num == 1:
plt.ylabel(weed_type[0], fontsize=20) # y軸ラベル
img = Image.open(os.path.join(img_path, img_list[6])) # indexを変更して別の画像を表示!!
plt.xticks(color="None")
plt.yticks(color="None")
plt.tick_params(length=0)
plt.imshow(img, cmap='gray')
        # Second row: grown images
img_path = os.path.join(input_dir, weed_name, weed_type[1])
img_list = os.listdir(img_path)
plot_num2 = pi+10
bx=fig.add_subplot(row, col, plot_num2)
bx.set_title(weed_name, fontsize=20, pad=0)
if plot_num2 == 11:
plt.ylabel(weed_type[1], fontsize=20) # y軸ラベル
img2 = Image.open(os.path.join(img_path, img_list[2]))
plt.xticks(color="None")
plt.yticks(color="None")
plt.tick_params(length=0)
plt.imshow(img2, cmap='gray')
pi = pi+1
fig.align_labels()
show_weed()
###Output
_____no_output_____
###Markdown
・Create directories for train, validation and prediction, and add a directory for each class
###Code
import os, shutil
# Create the CLS directory
base_dir = "./CLS"
if "CLS" not in os.listdir("./"):
os.mkdir(base_dir)
else:
print(base_dir, "は既に存在します")
# Create the train directory
train_index = "train"
train_dir = os.path.join(base_dir, train_index)
if train_index not in os.listdir(base_dir):
os.mkdir(train_dir)
else:
print(train_dir + "は既に存在します")
# Create the validation directory
validation_index = "validation"
validation_dir = os.path.join(base_dir, validation_index)
if validation_index not in os.listdir(base_dir):
os.mkdir(validation_dir)
else:
print(validation_dir + "は既に存在します")
# Create the prediction directory
prediction_index = "prediction"
prediction_dir = os.path.join(base_dir, prediction_index)
if prediction_index not in os.listdir(base_dir):
os.mkdir(prediction_dir)
else:
print(prediction_dir + "は既に存在します")
#Create a directory for each class
classes=["hakidamegiku","hosoaogeitou","ichibi",
"inubie","kosendangusa","mameasagao",
"mehishiba","ohishiba","oinutade","shiroza",]
dirs = os.listdir(base_dir)
for dir in dirs:
for cls in classes:
# Directory with our training pictures
class_dir = os.path.join(base_dir, dir, cls)
if cls not in os.listdir(base_dir + "/" + dir):
os.mkdir(class_dir)
else:
print(class_dir, "は既に存在します")
print("作成完了!")
###Output
_____no_output_____
###Markdown
・Distribute the images into the directories
###Code
from os.path import join
import random
clsdir = "./cluster"
base_dir = "./CLS"
dirs = os.listdir(base_dir)
weed_types = ["sprout","grown"]
weed_names = os.listdir(clsdir)
# Read the files (organized per class) one by one
# and create the train / validation data
for weed_name in weed_names:
for weed_type in weed_types:
print(weed_name, "/",weed_type)
        # Randomly sample images
file_names = os.listdir(os.path.join(clsdir, weed_name, weed_type))
files100 = random.sample(file_names, int(50))
num1 = 0
num2 = 0
for file_name in file_names:
if file_name in files100:
if num1 >= 45:
continue
# 移動元のファイル
path1 = os.path.join(clsdir, weed_name, weed_type, file_name)
# 移動先のファイル
path2= os.path.join(base_dir, "train", weed_name, file_name)
# ファイルを移動
new_path = shutil.move(path1, path2)
# ファイルの存在確認
print(os.path.exists(path2))
num1 = num1 + 1
else:
if num2 >= 45:
continue
# 移動元のファイル
path1 = os.path.join(clsdir, weed_name, weed_type, file_name)
# 移動先のファイル
path2= os.path.join(base_dir, "validation", weed_name, file_name)
# ファイルを移動
new_path = shutil.move(path1, path2)
# ファイルの存在確認
print(os.path.exists(path2))
num2 = num2 + 1
# Create the prediction data
for weed_name in weed_names:
for weed_type in weed_types:
print(weed_name, "/",weed_type)
file_names = os.listdir(os.path.join(clsdir, weed_name, weed_type))
for file_name in file_names:
# 移動元のファイル
path1 = os.path.join(clsdir, weed_name, weed_type, file_name)
# 移動先のファイル
path2= os.path.join(base_dir, "prediction", weed_name, file_name)
# ファイルを移動
new_path = shutil.move(path1, path2)
# ファイルの存在確認
print(os.path.exists(path2))
print("振り分け完了!")
###Output
_____no_output_____
###Markdown
・Create the data generators for the train and validation data
###Code
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
import keras.preprocessing.image as Image
input_size = 224
train_dir = "./CLS/train"
validation_dir = "./CLS/validation"
train_datagen = Image.ImageDataGenerator(
featurewise_center = False,
samplewise_center = False,
featurewise_std_normalization = False,
samplewise_std_normalization = False,
zca_whitening = False,
rotation_range = 90,
width_shift_range = 0.3,
height_shift_range = 0.3,
horizontal_flip = True,
vertical_flip = False,
rescale=1./255
)
val_datagen = Image.ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(input_size,input_size),
batch_size=10,
class_mode='categorical'
)
validation_generator = val_datagen.flow_from_directory(
validation_dir,
target_size=(input_size,input_size),
batch_size=10,
class_mode='categorical'
)
print("データセット作成完了!")
###Output
_____no_output_____
###Markdown
■ Running the training ・Define the model's layer configuration
###Code
from tensorflow.keras.utils import to_categorical
#Fine-tuning + VGG + data augmentation. OK to run from here (downloads VGG16)
from keras import models
from keras import layers
from keras.optimizers import Adam
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.applications.vgg16 import VGG16
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import History, Callback
from keras.objectives import categorical_crossentropy
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score
# from keras.utils import to_categorical
from scipy.stats import mode
import os, pickle
from sklearn.metrics import classification_report, confusion_matrix
def create_cnn():
input_size=224
    #input_size can be up to 224 x 224.
vgg_conv = VGG16(weights='imagenet', include_top=False, input_shape=(input_size,input_size, 3))
last = vgg_conv.output
vgg_conv.trainable = True
set_trainable = False
for layer in vgg_conv.layers:
if layer.name == 'block5_conv1':
set_trainable = True
if set_trainable:
layer.trainable = True
else:
layer.trainable = False
mod = Flatten()(last)
mod = Dense(256, activation='relu')(mod)
#mod = Dropout(0.5)(mod)
preds = Dense(10, activation='softmax')(mod)
model = models.Model(vgg_conv.input, preds)
return model
print("レイヤー構成を定義しました!")
###Output
_____no_output_____
###Markdown
・Define a checkpoint (save the weight file whenever val loss reaches a new minimum)
###Code
class Checkpoint(Callback):
def __init__(self, model, filepath):
self.model = model
self.filepath = filepath
self.best_val_acc = 0.0
self.best_val_loss = 0.7
def on_epoch_end(self, epoch, logs):
        # Save the weights when val_loss reaches a new minimum
if self.best_val_loss > logs["val_loss"]:
self.model.save_weights(self.filepath)
self.best_val_loss = logs["val_loss"]
print("Weights saved.", self.best_val_loss)
print("チェックポイントを定義しました!")
###Output
_____no_output_____
###Markdown
・Define the training procedure
###Code
def train():
print("学習を開始します")
hist = History()
train_model = create_cnn()
train_model.compile(optimizer=Adam(learning_rate=1e-5),loss='categorical_crossentropy',metrics=['accuracy'])
cp = Checkpoint(train_model, f"weights.hdf5")
train_model.fit_generator(train_generator,epochs=10,validation_data=validation_generator,callbacks=[hist, cp])
print("学習が完了しました")
return hist.history
print("実行手順を定義しました!")
###Output
_____no_output_____
###Markdown
・Start training
###Code
K.clear_session()
hist = train()
###Output
_____no_output_____
###Markdown
■ Plotting accuracy and loss ・Plot the training accuracy and loss together with the validation accuracy and loss
###Code
import matplotlib.pyplot as plt
history = hist
acc=history['accuracy']
val_acc=history['val_accuracy']
loss=history['loss']
val_loss=history['val_loss']
epochs=range(1,len(acc)+1)
# Plot accuracy
plt.plot(epochs,acc,'bo',label='Training acc')
plt.plot(epochs,val_acc,'b',label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
# Plot loss
plt.plot(epochs,loss,'bo',label='Training loss')
plt.plot(epochs,val_loss,'b',label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
■ Testing ・Create the dataset for prediction
###Code
from PIL import Image
import os, glob
import numpy as np
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES =True
prediction_dir = "./CLS/prediction"
prediction_classes = ["hakidamegiku","hosoaogeitou","ichibi","inubie","kosendangusa","mameasagao","mehishiba","ohishiba","oinutade","shiroza"]
image_size = 224
print(prediction_classes)
X_test = []
y_test = []
for index, classlabel in enumerate(prediction_classes):
photos_dir = os.path.join(prediction_dir, classlabel)
files = glob.glob(photos_dir + "/*.JPG")
print(files)
for i, file in enumerate(files):
image = Image.open(file)
image = image.convert("RGB")
image = image.resize((image_size, image_size))
data = np.asarray(image)
if i == 0:
print(data.shape)
X_test.append(data)
y_test.append(index)
X_test1 = np.array(X_test)
y_test1 = np.array(y_test)
print("predictionデータ作成完了!")
###Output
_____no_output_____
###Markdown
・Define a function to display the confusion matrix
###Code
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
# print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=-90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
print("混同行列表示用の関数を定義しました!")
###Output
_____no_output_____
###Markdown
・Load the saved weight file and run the test
###Code
def sin_predict():
X_test, y_test = X_test1, y_test1
X_test = X_test / 255.0
y_test_label = np.ravel(y_test)
y_test = to_categorical(y_test)
train_model = create_cnn()
train_model.compile(optimizer=Adam(learning_rate=1e-5),loss='categorical_crossentropy',metrics=['accuracy'])
# Load the best model weights
train_model.load_weights(f"weights.hdf5")
for layer in train_model.layers:
layer.trainable = False
# Predict on the test set
single_pred = np.argmax(train_model.predict(X_test), axis=-1)
# Test score
test_acc = accuracy_score(y_test, to_categorical(single_pred))
print("テストの結果は", test_acc, "です")
target_names = ["hakidamegiku","hosoaogeitou","ichibi","inubie","kosendangusa","mameasagao","mehishiba","ohishiba","oinutade","shiroza"]
cm = confusion_matrix(y_test_label, single_pred)
plot_confusion_matrix(cm, classes = target_names)
# print('Classification Report')
# print(classification_report(y_test_label, single_preds, target_names=target_names))
print("テスト用の関数を定義しました!")
###Output
_____no_output_____
###Markdown
・Run the test
###Code
# Run the test
sin_predict()
###Output
_____no_output_____
###Markdown
■ Test an arbitrary photo ・Define a function for testing a single image
###Code
def result_predict(path):
prediction_classes = ["hakidamegiku","hosoaogeitou","ichibi","inubie","kosendangusa","mameasagao","mehishiba","ohishiba","oinutade","shiroza"]
train_model = create_cnn()
train_model.compile(optimizer=Adam(learning_rate=1e-5),loss='categorical_crossentropy',metrics=['accuracy'])
# Load the best model weights
train_model.load_weights(f"weights.hdf5")
for layer in train_model.layers:
layer.trainable = False
X_test = []
image_size = 224
image = Image.open(path)
image = image.convert("RGB")
image = image.resize((image_size, image_size))
data = np.asarray(image)
X_test.append(data)
X_test = np.array(X_test) / 255.0
result = np.argmax(train_model.predict(X_test), axis=-1)
print("雑草の種類は", prediction_classes[result[0]], "です")
print("画像テスト用の関数を定義しました!")
###Output
_____no_output_____
###Markdown
・Select an arbitrary photo from the prediction directory and set its path
###Code
image_dir = "/content/CLS/prediction/ichibi/ichibi_IMG_4164_12.jpg"
result_predict(image_dir)
###Output
_____no_output_____ |
classification/learnai_classification2/Exercises/.ipynb_checkpoints/Ex_EDA.start-checkpoint.ipynb | ###Markdown
Classification 2 Exercise 1: Exploratory Data Analysis OverviewThe objective of this course is to build models to predict customer churn for a fictitious telco company. Before we start creating models, let's begin by having a closer look at our data and doing some basic data wrangling.Go through this notebook and modify the code accordingly (i.e. TASK) based on the text and/or the comments. DataDownload data from here:https://public.dhe.ibm.com/software/data/sw-library/cognos/mobile/C11/data/Telco_customer_churn.xlsxDescription of data (for a newer version)https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113 Importing Libraries
###Code
import numpy as np
import pandas as pd
# TASK: Import visualization libraries, matplotlib and seaborn using standard aliases plt and sns respectively
###Output
_____no_output_____
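For reference, one possible way to complete the import TASK above — a minimal sketch using the standard aliases the comment asks for:

```python
# Standard visualization imports with the usual aliases
import matplotlib.pyplot as plt
import seaborn as sns
```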
###Markdown
Reading in the Data
###Code
# TASK: Read in the Excel file. Use the parameter na_values=" " to convert any empty cells to a NA value.
# You may also need to use parameter engine='openpyxl') in newer versions of pandas if you encounter an XLRD error.
data = # TASK: Use pandas to read in an Excel file.
# Define columns to keep and filter the original dataset
cols_to_keep = ['CustomerID', 'Gender', 'Senior Citizen', 'Partner', 'Dependents', 'Tenure Months', 'Phone Service', 'Multiple Lines', 'Internet Service', 'Online Security', 'Online Backup', 'Device Protection', 'Tech Support', 'Streaming TV', 'Streaming Movies', 'Contract', 'Paperless Billing', 'Payment Method', 'Monthly Charges', 'Total Charges', 'Churn Label']
data = data[cols_to_keep]
# TASK: Rename the multi-worded columns to remove the space
# HINT: You can either manually remove the spaces in the column name list or use a loop to remove the space
data.columns =
###Output
_____no_output_____
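A hedged sketch of how the two TASKs in the cell above could be completed — the exact file name, the use of `openpyxl`, and the list-comprehension renaming approach are assumptions, not the official exercise solution:

```python
import pandas as pd

# Read the Excel file, treating blank cells as NA; engine='openpyxl' may be needed with newer pandas
data = pd.read_excel("Telco_customer_churn.xlsx", na_values=" ", engine="openpyxl")

# After filtering to cols_to_keep as shown above, remove the spaces from the multi-word column names
data.columns = [col.replace(" ", "") for col in data.columns]
```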
###Markdown
Basic Information
###Code
# TASK: Display the number of rows and columns for the dataset
print("Rows & Columns: {}".format())
# TASK: Display the datatypes for the columns in the dataframe i.e. use the dtypes variable
# How many columns are numerical and how many are non-numerical
data.dtypes
# TASK: use count() on the dataframe to count the number of entries for each of the column. Are there any columns with missing values?
# TASK: Use nunique() on the dataframe to count the number of unique values for each of the columns
# TASK: Display first few values of the dataframe
# Based on this and the previous display, how would you describe the columns with a small number (less than 10) of unique values?
# TASK: Let's analyze the values for the categorical features (columns with less than 10 unique values)
for id, row in data.nunique().iteritems(): # this counts the number of unique values for each feature and returns the result as a dictionary
if(row < 10):
# TASK: Print out the unique values for the feature
# For columns with 3 or 4 unique values, display them to see if they make sense
for col in ['MultipleLines', 'InternetService', 'OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies', 'Contract', "PaymentMethod"]:
print("{} : {}".format(col, np.unique(data[col].values)))
###Output
_____no_output_____
###Markdown
**Observations**- The value 'No phone service' found in MultipleLines is already captured by the PhoneService feature ('No' value)- The value 'No internet service' found in the several features is already captured by InternetService feature ('No' value)- Values that are longer or more complex may need to be simplified.Conclusion: These values can be considered duplicated information as they are found in the PhoneService and InternetService features. There are several options to consider here:- Retain all features and values as is- Convert the 'No Internet Service'/'No phone service' to 'No' in the features as PhoneService and InternetService features has already captured this information- Remove the PhoneService feature as MultipleLines feature has this information. To remove the InternetService feature, we would have to 'fold in' the values in the other features e.g. the values for OnlineSecurity could be changed to ['DSL_No','DSL_Yes','FiberOptic_No','FiberOptic_Yes','No internet service']For this course, we will be using the second option (without justification). You are encouraged to test the others options during modelling to see if there are any impact. Data WranglingBased on the discoveries made above, we will be modifying our data before continuing the exploration.
###Code
# Replace 'No phone service'
data['MultipleLines'] = data['MultipleLines'].replace({'No phone service':'No'})
# TASK: Replace 'No internet service'
for col in ['OnlineSecurity', 'OnlineBackup', 'DeviceProtection', 'TechSupport', 'StreamingTV', 'StreamingMovies']:
data[col] = # similar to the operation for 'No phone service' above
# Simplify the values made up of phrases
data['PaymentMethod'] = data['PaymentMethod'].replace({
'Bank transfer (automatic)':'transfer',
'Credit card (automatic)':'creditcard',
'Electronic check':'echeck',
'Mailed check':'mcheck'
})
data['InternetService'] = data['InternetService'].replace({
'Fiber optic':'FiberOptic'
})
data['Contract'] = data['Contract'].replace({
'Month-to-month':'M2M',
'One year':'OneYear',
'Two year':'TwoYear'
})
# Remove the rows with empty TotalCharges value
data = data[data["TotalCharges"].notnull()]
# After data wrangling, repeat prints
print("Rows & Columns: {}".format(data.shape))
print("################################################")
# Number of unique values for each of the columns
print(data.nunique())
print("################################################")
# Check the data types
print(data.dtypes)
print("################################################")
# Display first few values
print(data.head())
# Randomly display 1 row from the dataframe
print(data.sample(n=1).iloc[0])
# TASK: Save the data as a CSV file
data.to_csv("telco_churn.csv", index=False)
###Output
_____no_output_____
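For illustration only, a hedged sketch of the third option discussed above (folding the InternetService type into each dependent feature instead of collapsing 'No internet service' to 'No'). It assumes the raw values before the replacement performed in the cell above and is not used in the rest of this exercise:

```python
import numpy as np

internet_cols = ['OnlineSecurity', 'OnlineBackup', 'DeviceProtection',
                 'TechSupport', 'StreamingTV', 'StreamingMovies']
folded = data.copy()
for col in internet_cols:
    # e.g. 'DSL_Yes' or 'Fiber optic_No'; rows without internet keep 'No internet service'
    folded[col] = np.where(folded[col] == 'No internet service',
                           folded[col],
                           folded['InternetService'] + '_' + folded[col])
# InternetService is now redundant and could be dropped
folded = folded.drop(columns=['InternetService'])
```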
###Markdown
Additional Exploration**TASK:** This is the open-ended section of the exercise. Use any exploration techniques that you know to further explore and understand your data. We expect a number of visualizations that can show the relationships between features as well as between features and the outcome variable 'ChurnLabel'. Some of the questions in the quiz may require you to perform additional analyses.
###Code
# Example: Look at Churn vs MonthCharges
plt.clf()
for label in ['Yes','No']:
subset = data[data.ChurnLabel==label]
# Draw the density plot
sns.distplot(subset['MonthlyCharges'], hist = False, kde = True,
kde_kws = {'linewidth': 3, 'shade':True},
label = label)
# Plot formatting
plt.legend(prop={'size': 16}, title = 'ChurnLabel')
plt.title('Density Plot with ChurnLabel')
plt.xlabel('') # Monthly Charges
plt.ylabel('Density')
plt.show()
# Additional Exploration
###Output
_____no_output_____ |
Proyecto_Final.ipynb | ###Markdown
Initial processing of the dataset. The data will be read from a ".npy" file, which contains numpy-compatible data. The data will also be split into 2 subsets, distributed as follows:| Subset | Proportion of the original set || -------------------- | -------------------------------- || Training | 80% || Validation and testing | 20% | Description of the dataset| No | Column Name | Description | Units || -- | -------------- | ----------- | ----------- || 0 | SalePrice | House price. Prediction target, **Y**.| $ || 1 | OverallQual | Quality of the construction materials and finishes. Scale of 1 - 10, where 10 is best. | -- || 2 | 1stFlrSF | Built area of the first floor. | mts2 || 3 | TotRmsAbvGrd | Rooms above grade. Excludes bathrooms. | -- || 4 | YearBuilt | Year of construction | -- || 5 | LotFrontAge | Length of the lot connected to the street. | ft |
###Code
# Column names
data_columns = {0: "SalePrice",
1: "OverallQual",
2: "1stFlrSF",
3: "TotRmsAbvGrd",
4: "YearBuilt",
5: "LotFrontAge"}
# Display the dimensions of a dataset
def print_dimensions(data_title, data):
print(f"{data_title}".center(30, "-"))
print(f"Cantidad de filas: {data.shape[0]}")
print(f"Cantidad de columnas: {data.shape[1]}")
print("".center(30,"-"))
# Function that returns the training and validation datasets
# (note: despite its name, the test_size argument is used here as the fraction of rows kept for training)
def data_train_test_split(data, test_size=0.8):
subset_index = int(data.shape[0] * test_size)
return data[:subset_index,], data[subset_index:,]
# Load the original data and apply the function to split it into training and validation sets.
original_data = np.load('data/proyecto_training_data.npy')
# Replace NaN with 0.0
original_data = np.nan_to_num(original_data)
train_data, valid_data = data_train_test_split(original_data, test_size=0.8)
print_dimensions("Original", original_data)
print_dimensions("Train data", train_data)
print_dimensions("Valid data", valid_data)
# Function to describe the data in each column
def data_describe(data, columns_names):
'''
This function describes the data of a multidimensional numpy array.
Inputs:
data -> numpy matrix of shape NxM
columns_names -> dictionary whose keys are integers corresponding to the column numbers of the matrix.
Outputs:
The following summary statistics are printed for each column:
- mean
- maximum value
- minimum value
- range (peak to peak)
- standard deviation
'''
for ind, column_name in columns_names.items():
print(f"{column_name}".center(50, "-"))
print(f" Media: {np.mean(data[:,ind])}")
print(f" Valor máximo: {np.amax(data[:,ind])}")
print(f" Valor mínimo: {np.amin(data[:,ind])}")
print(f"Desviación estandar: {np.std(data[:,ind])}")
print(f" Rango(PeakToPeak): {np.ptp(data[:,ind])}")
# Description of the original dataset
data_describe(original_data, data_columns)
###Output
--------------------SalePrice---------------------
Media: 180921.19589041095
Valor máximo: 755000.0
Valor mínimo: 34900.0
Desviación estandar: 79415.29188606751
Rango(PeakToPeak): 720100.0
-------------------OverallQual--------------------
Media: 6.0993150684931505
Valor máximo: 10.0
Valor mínimo: 1.0
Desviación estandar: 1.3825228366585953
Rango(PeakToPeak): 9.0
---------------------1stFlrSF---------------------
Media: 1162.626712328767
Valor máximo: 4692.0
Valor mínimo: 334.0
Desviación estandar: 386.45532230228963
Rango(PeakToPeak): 4358.0
-------------------TotRmsAbvGrd-------------------
Media: 6.517808219178082
Valor máximo: 14.0
Valor mínimo: 2.0
Desviación estandar: 1.624836553698191
Rango(PeakToPeak): 12.0
--------------------YearBuilt---------------------
Media: 1971.267808219178
Valor máximo: 2010.0
Valor mínimo: 1872.0
Desviación estandar: 30.192558810489448
Rango(PeakToPeak): 138.0
-------------------LotFrontAge--------------------
Media: 57.62328767123287
Valor máximo: 313.0
Valor mínimo: 0.0
Desviación estandar: 34.65243086038386
Rango(PeakToPeak): 313.0
###Markdown
Histogram of each variable
###Code
# Use the pandas DataFrame data structure to plot with Seaborn more easily
df = pd.DataFrame(original_data, columns=[i for _, i in data_columns.items()])
# Plot distributions
def plotting_dist(data, rows, cols, names, graph_type="dist", coefs=None, y=0,size=(15,5)):
fig, ax = plt.subplots(rows, cols, figsize=size)
#sns.despine()
i = 0
for row in range(rows):
for col in range(cols):
if graph_type=="dist":
sns.distplot(data[names[i]],ax=ax[row, col])
elif graph_type=="scatter":
if i == 5:
pass
else:
sns.scatterplot(x=data[names[i+1]], y=data['SalePrice'],
ax=ax[row, col]).set_title("r={0:.3f}".format(coefs[i]))
i += 1
plt.tight_layout()
plotting_dist(df, 2, 3, names=data_columns, size=(15,7))
###Output
_____no_output_____
###Markdown
Computing the correlations
###Code
def correlacion(data, interest_column=0):
columns = data.shape[1]
coefs = []
for i in range(1,columns):
# np.corrcoef returns a 2x2 matrix [[1, r], [r, 1]]; taking its minimum extracts r (valid while r < 1)
coefs.append(np.amin(np.corrcoef(data[:,0], data[:,i])))
return coefs
coefs = correlacion(original_data, interest_column=0)
plotting_dist(df, 2, 3, names=data_columns, graph_type="scatter", coefs=coefs, y=0, size=(15,7))
###Output
_____no_output_____
###Markdown
The columns **OverallQual (1)** and **1stFlrSF (2)** are selected.
###Code
# Create our own linear regression function (batch gradient descent)
def oLinearRegression(x, y, epochs=25, print_error_step=5, lr=0.01):
# Inicialización de las variables
params = np.array([1., 1.])
errors = []
for e in range(epochs):
y_hat = predecir(params, x)
error = funcion_costo(y_hat, y)
errors.append(error)
gradients = np.array( [np.mean((y_hat - y)*x), np.mean(y_hat-y)])
params = params - lr*gradients
if e % print_error_step == 0:
print(f"Epoch {e}".center(30, "-"))
print(f"Error {error}")
return params, errors
def predecir(params, x):
# Function used to predict values with the model
return np.matmul(params, np.vstack((x, np.ones_like(x))))
def funcion_costo(y_hat, y):
# Function used to compute the value of the cost function
return 0.5 * np.mean((y - y_hat)**2)
X1 = train_data[:,1]
Y = train_data[:,0]
print("Entrenando el modelo con X1".center(100, "*"))
model1, error_history1 = oLinearRegression(X1,Y,2000,100,0.025)
print(f"Modelo 1")
print(f"m = {model1[0]}")
print(f"b = {model1[1]}")
print(f"Y_hat_m1 = {model1[0]}x + {model1[1]}")
X2 = train_data[:,2]
print("Entrenando el modelo con X2".center(100, "*"))
model2, error_history2 = oLinearRegression(X2,Y,20,1,0.0000007)
print(f"Modelo 2")
print(f"m = {model2[0]}")
print(f"b = {model2[1]}")
print(f"Y_hat_m2 = {model2[0]}*x + {model2[1]}")
def plot_error(y, name):
sns.lineplot(x=[i for i in range(len(y))], y=y).set_title(name)
plt.show()
plot_error(error_history1, "Gráfica de error con X1(OverallQual)")
plot_error(error_history2, "Gráfica de error con X2(1stFlrSF)")
###Output
_____no_output_____
###Markdown
Training the model with scikit-learn
###Code
# Using X1
model1_skl = LinearRegression()
model1_skl.fit(X1.reshape(-1,1), Y)
model1_skl = np.array((model1_skl.coef_[0], model1_skl.intercept_))
# Using X2
model2_skl = LinearRegression()
model2_skl.fit(X2.reshape(-1,1), Y)
model2_skl = np.array((model2_skl.coef_[0], model2_skl.intercept_))
print(f"Modelo X1 Scikit-learn \n {model1_skl}")
print(f"Modelo X2 Scikit-learn \n {model2_skl}")
###Output
Modelo X1 Scikit-learn
[ 45411.99877916 -96469.57131874]
Modelo X2 Scikit-learn
[ 129.95124229 30324.58517496]
###Markdown
Predictions using both models
###Code
def predicciones(modelo, modeloSkl, x):
y_hat_own = predecir(modelo, x)
y_hat_skl = predecir(modeloSkl, x)
y_hat_prm = (y_hat_own + y_hat_skl) / 2
return y_hat_own, y_hat_skl, y_hat_prm
Y_test = valid_data[:,0]
# Cost for the X1 model
y_own1, y_skl1, y_prm1 = predicciones(model1, model1_skl, valid_data[:, 1])
costoModels1 = [funcion_costo(y_own1, Y_test), funcion_costo(y_skl1, Y_test), funcion_costo(y_prm1,Y_test)]
# Cost for the X2 model
y_own2, y_skl2, y_prm2 = predicciones(model2, model2_skl, valid_data[:, 2])
costoModels2 = [funcion_costo(y_own2, Y_test), funcion_costo(y_skl2, Y_test), funcion_costo(y_prm2,Y_test)]
# Plot the evaluated cost for each model
n_groups = 3
# create plot
fig = plt.figure(figsize=(12,6))
index = np.arange(n_groups)
bar_width = 0.35
opacity = 0.8
rects1 = plt.bar(index, costoModels1, bar_width,
alpha=opacity,
color='b',
label='Modelo para X1(OverallQual)')
rects2 = plt.bar(index + bar_width, costoModels2, bar_width,
alpha=opacity,
color='g',
label='Modelo para X2(1stFlrSF)')
plt.xlabel('Modelos')
plt.ylabel('Costo')
plt.title('Comparativa del costo de cada modelo')
plt.xticks(index + bar_width, ('Modelo propio', 'Modelo scikit-learn', 'Promedio'))
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
AutoencoderDenseConvolutional .ipynb | ###Markdown
Autoencoder
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorboard as tb
%load_ext tensorboard
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input, Conv2D, MaxPool2D, Flatten
from tensorflow.keras.layers import Conv2DTranspose, UpSampling2D, MaxPooling2D, Reshape
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.optimizers import SGD
from sklearn.model_selection import train_test_split
mnist = pd.read_csv("./mnist_train_small.csv", header=None, sep=';').values
X, Y = mnist[:, 1:], mnist[:, 0:1]
n, p = X.shape
size = int(np.sqrt(p))
Xt = X / 255.0
Yt = to_categorical(Y, 10)
X_train, X_test, Y_train, Y_test = train_test_split(Xt, Yt, train_size=0.7)
z = 2
inpE = Input(shape=(size, size, 1)) # BSx28x28x1
x = Flatten()(inpE) # BSx784
x = Dense(256, activation='relu')(x) # BSx256
x = Dense(128, activation='relu')(x) # BSx128
x = Dense(64, activation='relu')(x) # BSx64
c = Dense(z)(x) # BSx2
inpD = Input(shape=c.shape[1:]) # BSx2
x = Dense(64, activation='relu')(inpD) # BSx64
x = Dense(128, activation='relu')(x) # BSx128
x = Dense(256, activation='relu')(x) # BSx256
x = Dense(784, activation='sigmoid')(x) # BSx784
x = Reshape(inpE.shape[1:])(x) # BSx28x28x1
encoder = Model(inputs=inpE, outputs=c)
decoder = Model(inputs=inpD, outputs=x)
inpA = Input(shape=(size, size, 1))
autoencoder = Model(inputs=inpA, outputs=decoder(encoder(inpA)))
autoencoder.compile(optimizer=SGD(5), loss='mse', metrics=['accuracy'])
logdir="logs/MINIST-Autoencoder"
callbackMetrics = tf.keras.callbacks.TensorBoard(log_dir="logs/MINIST-Autoencoder-metrics")
# tf.profiler.experimental.start(logdir=logdir)
autoencoder.fit(X_train.reshape(-1, size, size, 1),
X_train.reshape(-1, size, size, 1),
validation_data=(X_test.reshape(-1, size, size, 1), X_test.reshape(-1, size, size, 1)),
epochs=100,
callbacks=[callbackMetrics],
batch_size=128)
# tf.profiler.experimental.stop()
e = Xt[1000,:].reshape((28, 28))
plt.imshow(e)
plt.show()
e = e.reshape(-1,size,size,1)
e = autoencoder.predict(e)
e = e.reshape(28,28)
plt.imshow(e)
plt.show()
e = encoder.predict(Xt.reshape(-1, size, size, 1))
print (e.shape)
print (Y.shape)
for i in range(10):
plt.scatter(e[Y.ravel()==i, 0] , e[Y.ravel()==i, 1])
###Output
(20001, 2)
(20001, 1)
###Markdown
Convolutional Autoencoder---
###Code
z = 10
inpE = Input(shape=(size, size, 1)) # BSx28x28x1
x = Conv2D(32,kernel_size=(2,2),strides=2, activation='relu')(inpE) # BSx256
x = Conv2D(64,kernel_size=(2,2),strides=2, activation='relu')(x) # BSx128
x = Conv2D(128,kernel_size=(2,2),strides=2, activation='relu')(x) # BSx64
x = Flatten()(x)
c = Dense(z)(x)
inpD = Input(shape=c.shape[1:])
x = Dense(1152)(inpD)
x = Reshape((3,3,128))(x)
x = Conv2DTranspose(64,kernel_size=(3,3), strides=2, activation='relu')(x) # BSx128
x = Conv2DTranspose(32,kernel_size=(2,2), strides=2, activation='relu')(x) # BSx256
x = Conv2DTranspose(1,kernel_size=(2,2), strides=2, activation='sigmoid')(x) # BSx64
encoder = Model(inputs=inpE, outputs=c)
decoder = Model(inputs=inpD, outputs=x)
inpA = Input(shape=(size, size, 1))
autoencoder = Model(inputs=inpA, outputs=decoder(encoder(inpA)))
autoencoder.compile(optimizer=SGD(5), loss='mse', metrics=['accuracy'])
logdir="logs/MINIS-convolutional-profile"
tf.profiler.experimental.start(logdir=logdir)
# metricsCallback = tf.keras.callbacks.TensorBoard(log_dir="logs/MINIS-convolutional-metrics")
autoencoder.fit(X_train.reshape(-1, size, size, 1),
X_train.reshape(-1, size, size, 1),
validation_data=(X_test.reshape(-1, size, size, 1), X_test.reshape(-1, size, size, 1)),
epochs=100,
# callbacks=[metricsCallback],
batch_size=128)
tf.profiler.experimental.stop()
def noise_imgs(data, noise):
sol = np.copy(data.reshape((28,28)))
for i in range(28):
for j in range(28):
sol[i][j] = sol[i][j] + (np.random.random_sample() % noise)
return sol.reshape((784))
noise = 0.5
plt.imshow(Xt[38,:].reshape((28,28)))
plt.show()
plt.imshow(noise_imgs(Xt[38, :], noise).reshape(28, 28))
plt.show()
e = autoencoder.predict(noise_imgs(Xt[38, :], noise).reshape(-1,size, size, 1))
plt.imshow(e.reshape((28,28)))
plt.show()
plt.imshow(Xt[31,:].reshape((28,28)))
plt.show()
a = encoder.predict(Xt[30,:].reshape(-1, size, size, 1))
b = encoder.predict(Xt[31,:].reshape(-1, size, size, 1))
for i in range(10):
c = (a * (i / 10)) + (b * (1 - (i / 10)))  # linear interpolation between the two latent codes
plt.imshow(decoder.predict(c).reshape((28, 28)))
plt.show()
plt.imshow(Xt[30,:].reshape((28,28)))
plt.show()
###Output
_____no_output_____ |
nlu/colab/Training/binary_text_classification/NLU_training_sentiment_classifier_demo_finanical_news.ipynb | ###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/Training/binary_text_classification/NLU_training_sentiment_classifier_demo_finanical_news.ipynb) Training a Sentiment Analysis Classifier with NLU 2 class Finance News sentiment classifier trainingWith the [SentimentDL model](https://nlp.johnsnowlabs.com/docs/en/annotatorssentimentdl-multi-class-sentiment-analysis-annotator) from Spark NLP you can achieve State Of the Art results on any multi class text classification problem This notebook showcases the following features : - How to train the deep learning classifier- How to store a pipeline to disk- How to load the pipeline from disk (Enables NLU offline mode)You can achieve these results or even better on this dataset with training data:You can achieve these results or even better on this dataset with test data: 1. Install Java 8 and NLU
###Code
!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
import nlu
###Output
--2021-05-05 05:09:06-- https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/scripts/colab_setup.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1671 (1.6K) [text/plain]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s Installing NLU 3.0.0 with PySpark 3.0.2 and Spark NLP 3.0.1 for Google Colab ...
- 100%[===================>] 1.63K --.-KB/s in 0.001s
2021-05-05 05:09:06 (1.82 MB/s) - written to stdout [1671/1671]
[K |████████████████████████████████| 204.8MB 64kB/s
[K |████████████████████████████████| 153kB 46.5MB/s
[K |████████████████████████████████| 204kB 21.5MB/s
[K |████████████████████████████████| 204kB 50.1MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
###Markdown
2. Download Financial News Sentiment dataset https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news This dataset contains the sentiments for financial news headlines from the perspective of a retail investor. Further details about the dataset can be found in: Malo, P., Sinha, A., Takala, P., Korhonen, P. and Wallenius, J. (2014): “Good debt or bad debt: Detecting semantic orientations in economic texts.” Journal of the American Society for Information Science and Technology.
###Code
! wget http://ckl-it.de/wp-content/uploads/2021/01/all-data.csv
import pandas as pd
train_path = '/content/all-data.csv'
train_df = pd.read_csv(train_path)
# the text data to use for classification should be in a column named 'text'
# the label column must have name 'y' name be of type str
columns=['text','y']
train_df = train_df[columns]
train_df = train_df[~train_df["y"].isin(["neutral"])]
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(train_df, test_size=0.2)
train_df
###Output
_____no_output_____
###Markdown
3. Train Deep Learning Classifier using nlu.load('train.sentiment') Your dataset label column should be named 'y' and the feature column with text data should be named 'text'
###Code
import nlu
from sklearn.metrics import classification_report
# load a trainable pipeline by specifying the train. prefix and fit it on a datset with label and text columns
# by default the Universal Sentence Encoder (USE) Sentence embeddings are used for generation
trainable_pipe = nlu.load('train.sentiment')
fitted_pipe = trainable_pipe.fit(train_df.iloc[:50])
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:50],output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_sentiment']))
preds
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[OK!]
precision recall f1-score support
negative 0.00 0.00 0.00 14
positive 0.72 1.00 0.84 36
accuracy 0.72 50
macro avg 0.36 0.50 0.42 50
weighted avg 0.52 0.72 0.60 50
###Markdown
4. Test the fitted pipe on a new example
###Code
fitted_pipe.predict('According to the most recent update there has been a major decrese in the rate of oil')
###Output
_____no_output_____
###Markdown
5. Configure pipe training parameters
###Code
trainable_pipe.print_info()
###Output
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :
>>> pipe['sentiment_dl'] has settable params:
pipe['sentiment_dl'].setMaxEpochs(1) | Info: Maximum number of epochs to train | Currently set to : 1
pipe['sentiment_dl'].setLr(0.005) | Info: Learning Rate | Currently set to : 0.005
pipe['sentiment_dl'].setBatchSize(64) | Info: Batch size | Currently set to : 64
pipe['sentiment_dl'].setDropout(0.5) | Info: Dropout coefficient | Currently set to : 0.5
pipe['sentiment_dl'].setEnableOutputLogs(True) | Info: Whether to use stdout in addition to Spark logs. | Currently set to : True
pipe['sentiment_dl'].setThreshold(0.6) | Info: The minimum threshold for the final result otheriwse it will be neutral | Currently set to : 0.6
pipe['sentiment_dl'].setThresholdLabel('neutral') | Info: In case the score is less than threshold, what should be the label. Default is neutral. | Currently set to : neutral
>>> pipe['use@tfhub_use'] has settable params:
pipe['use@tfhub_use'].setDimension(512) | Info: Number of embedding dimensions | Currently set to : 512
pipe['use@tfhub_use'].setLoadSP(False) | Info: Whether to load SentencePiece ops file which is required only by multi-lingual models. This is not changeable after it's set with a pretrained model nor it is compatible with Windows. | Currently set to : False
pipe['use@tfhub_use'].setStorageRef('tfhub_use') | Info: unique reference name for identification | Currently set to : tfhub_use
>>> pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'] has settable params:
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setExplodeSentences(False) | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setStorageRef('SentenceDetectorDLModel_c83c27f46b97') | Info: storage unique identifier | Currently set to : SentenceDetectorDLModel_c83c27f46b97
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setEncoder(com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@3933547a) | Info: Data encoder | Currently set to : com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@3933547a
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setImpossiblePenultimates(['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']) | Info: Impossible penultimates | Currently set to : ['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setModelArchitecture('cnn') | Info: Model architecture (CNN) | Currently set to : cnn
>>> pipe['document_assembler'] has settable params:
pipe['document_assembler'].setCleanupMode('shrink') | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink
###Markdown
6. Retrain with new parameters
###Code
# Train longer!
trainable_pipe['sentiment_dl'].setMaxEpochs(5)
fitted_pipe = trainable_pipe.fit(train_df.iloc[:100])
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:100],output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_sentiment']))
preds
###Output
precision recall f1-score support
negative 0.00 0.00 0.00 31
neutral 0.00 0.00 0.00 0
positive 0.86 0.99 0.92 69
accuracy 0.68 100
macro avg 0.29 0.33 0.31 100
weighted avg 0.59 0.68 0.63 100
###Markdown
7. Try training with different Embeddings
###Code
# We can use nlu.print_components(action='embed_sentence') to see every possible sentence embedding we could use. Let's use BERT!
nlu.print_components(action='embed_sentence')
trainable_pipe = nlu.load('en.embed_sentence.small_bert_L12_768 train.sentiment')
# We usually need to train longer and use a smaller LR for non-USE-based sentence embeddings
# We could tune the hyperparameters further with hyperparameter tuning methods like gridsearch
# Also longer training gives more accuracy
trainable_pipe['sentiment_dl'].setMaxEpochs(70)
trainable_pipe['sentiment_dl'].setLr(0.0005)
fitted_pipe = trainable_pipe.fit(train_df)
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df,output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_sentiment']))
#preds
###Output
sent_small_bert_L12_768 download started this may take some time.
Approximate size to download 392.9 MB
[OK!]
sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[OK!]
precision recall f1-score support
negative 0.87 0.84 0.86 488
neutral 0.00 0.00 0.00 0
positive 0.96 0.92 0.94 1085
accuracy 0.90 1573
macro avg 0.61 0.59 0.60 1573
weighted avg 0.94 0.90 0.92 1573
###Markdown
7.1 evaluate on Test Data
###Code
preds = fitted_pipe.predict(test_df,output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['trained_sentiment']))
###Output
precision recall f1-score support
negative 0.84 0.76 0.80 116
neutral 0.00 0.00 0.00 0
positive 0.95 0.90 0.93 278
accuracy 0.86 394
macro avg 0.60 0.55 0.57 394
weighted avg 0.92 0.86 0.89 394
###Markdown
8. Let's save the model
###Code
stored_model_path = './models/classifier_dl_trained'
fitted_pipe.save(stored_model_path)
###Output
Stored model in ./models/classifier_dl_trained
###Markdown
9. Let's load the model from HDD. This makes offline NLU usage possible! You need to call nlu.load(path=path_to_the_pipe) to load a model/pipeline from disk.
###Code
hdd_pipe = nlu.load(path=stored_model_path)
preds = hdd_pipe.predict('According to the most recent update there has been a major decrese in the rate of oil')
preds
hdd_pipe.print_info()
###Output
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :
>>> pipe['document_assembler'] has settable params:
pipe['document_assembler'].setCleanupMode('shrink') | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink
>>> pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'] has settable params:
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setExplodeSentences(False) | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setStorageRef('SentenceDetectorDLModel_c83c27f46b97') | Info: storage unique identifier | Currently set to : SentenceDetectorDLModel_c83c27f46b97
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setEncoder(com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@dcd6682) | Info: Data encoder | Currently set to : com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@dcd6682
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setImpossiblePenultimates(['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']) | Info: Impossible penultimates | Currently set to : ['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setModelArchitecture('cnn') | Info: Model architecture (CNN) | Currently set to : cnn
>>> pipe['bert_sentence@sent_small_bert_L12_768'] has settable params:
pipe['bert_sentence@sent_small_bert_L12_768'].setBatchSize(8) | Info: Size of every batch | Currently set to : 8
pipe['bert_sentence@sent_small_bert_L12_768'].setCaseSensitive(False) | Info: whether to ignore case in tokens for embeddings matching | Currently set to : False
pipe['bert_sentence@sent_small_bert_L12_768'].setDimension(768) | Info: Number of embedding dimensions | Currently set to : 768
pipe['bert_sentence@sent_small_bert_L12_768'].setMaxSentenceLength(128) | Info: Max sentence length to process | Currently set to : 128
pipe['bert_sentence@sent_small_bert_L12_768'].setIsLong(False) | Info: Use Long type instead of Int type for inputs buffer - Some Bert models require Long instead of Int. | Currently set to : False
pipe['bert_sentence@sent_small_bert_L12_768'].setStorageRef('sent_small_bert_L12_768') | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768
>>> pipe['sentiment_dl@sent_small_bert_L12_768'] has settable params:
pipe['sentiment_dl@sent_small_bert_L12_768'].setThreshold(0.6) | Info: The minimum threshold for the final result otheriwse it will be neutral | Currently set to : 0.6
pipe['sentiment_dl@sent_small_bert_L12_768'].setThresholdLabel('neutral') | Info: In case the score is less than threshold, what should be the label. Default is neutral. | Currently set to : neutral
pipe['sentiment_dl@sent_small_bert_L12_768'].setClasses(['positive', 'negative']) | Info: get the tags used to trained this SentimentDLModel | Currently set to : ['positive', 'negative']
pipe['sentiment_dl@sent_small_bert_L12_768'].setStorageRef('sent_small_bert_L12_768') | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/Training/binary_text_classification/NLU_training_sentiment_classifier_demo_finanical_news.ipynb) Training a Sentiment Analysis Classifier with NLU 2 class Finance News sentiment classifier trainingWith the [SentimentDL model](https://nlp.johnsnowlabs.com/docs/en/annotatorssentimentdl-multi-class-sentiment-analysis-annotator) from Spark NLP you can achieve State Of the Art results on any multi class text classification problem This notebook showcases the following features : - How to train the deep learning classifier- How to store a pipeline to disk- How to load the pipeline from disk (Enables NLU offline mode)You can achieve these results or even better on this dataset with training data:You can achieve these results or even better on this dataset with test data: 1. Install Java 8 and NLU
###Code
!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash
import nlu
###Output
--2021-05-05 05:09:06-- https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/scripts/colab_setup.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1671 (1.6K) [text/plain]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s Installing NLU 3.0.0 with PySpark 3.0.2 and Spark NLP 3.0.1 for Google Colab ...
- 100%[===================>] 1.63K --.-KB/s in 0.001s
2021-05-05 05:09:06 (1.82 MB/s) - written to stdout [1671/1671]
[K |████████████████████████████████| 204.8MB 64kB/s
[K |████████████████████████████████| 153kB 46.5MB/s
[K |████████████████████████████████| 204kB 21.5MB/s
[K |████████████████████████████████| 204kB 50.1MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
###Markdown
2. Download Financial News Sentiment dataset https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news This dataset contains the sentiments for financial news headlines from the perspective of a retail investor. Further details about the dataset can be found in: Malo, P., Sinha, A., Takala, P., Korhonen, P. and Wallenius, J. (2014): “Good debt or bad debt: Detecting semantic orientations in economic texts.” Journal of the American Society for Information Science and Technology.
###Code
! wget http://ckl-it.de/wp-content/uploads/2021/01/all-data.csv
import pandas as pd
train_path = '/content/all-data.csv'
train_df = pd.read_csv(train_path)
# the text data to use for classification should be in a column named 'text'
# the label column must have name 'y' name be of type str
columns=['text','y']
train_df = train_df[columns]
train_df = train_df[~train_df["y"].isin(["neutral"])]
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(train_df, test_size=0.2)
train_df
###Output
_____no_output_____
###Markdown
3. Train Deep Learning Classifier using nlu.load('train.sentiment') Your dataset label column should be named 'y' and the feature column with text data should be named 'text'
###Code
import nlu
from sklearn.metrics import classification_report
# load a trainable pipeline by specifying the train. prefix and fit it on a datset with label and text columns
# by default the Universal Sentence Encoder (USE) Sentence embeddings are used for generation
trainable_pipe = nlu.load('train.sentiment')
fitted_pipe = trainable_pipe.fit(train_df.iloc[:50])
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:50],output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
preds
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[OK!]
precision recall f1-score support
negative 0.00 0.00 0.00 14
positive 0.72 1.00 0.84 36
accuracy 0.72 50
macro avg 0.36 0.50 0.42 50
weighted avg 0.52 0.72 0.60 50
###Markdown
4. Test the fitted pipe on a new example
###Code
fitted_pipe.predict('According to the most recent update there has been a major decrese in the rate of oil')
###Output
_____no_output_____
###Markdown
5. Configure pipe training parameters
###Code
trainable_pipe.print_info()
###Output
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :
>>> pipe['sentiment_dl'] has settable params:
pipe['sentiment_dl'].setMaxEpochs(1) | Info: Maximum number of epochs to train | Currently set to : 1
pipe['sentiment_dl'].setLr(0.005) | Info: Learning Rate | Currently set to : 0.005
pipe['sentiment_dl'].setBatchSize(64) | Info: Batch size | Currently set to : 64
pipe['sentiment_dl'].setDropout(0.5) | Info: Dropout coefficient | Currently set to : 0.5
pipe['sentiment_dl'].setEnableOutputLogs(True) | Info: Whether to use stdout in addition to Spark logs. | Currently set to : True
pipe['sentiment_dl'].setThreshold(0.6) | Info: The minimum threshold for the final result otheriwse it will be neutral | Currently set to : 0.6
pipe['sentiment_dl'].setThresholdLabel('neutral') | Info: In case the score is less than threshold, what should be the label. Default is neutral. | Currently set to : neutral
>>> pipe['use@tfhub_use'] has settable params:
pipe['use@tfhub_use'].setDimension(512) | Info: Number of embedding dimensions | Currently set to : 512
pipe['use@tfhub_use'].setLoadSP(False) | Info: Whether to load SentencePiece ops file which is required only by multi-lingual models. This is not changeable after it's set with a pretrained model nor it is compatible with Windows. | Currently set to : False
pipe['use@tfhub_use'].setStorageRef('tfhub_use') | Info: unique reference name for identification | Currently set to : tfhub_use
>>> pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'] has settable params:
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setExplodeSentences(False) | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setStorageRef('SentenceDetectorDLModel_c83c27f46b97') | Info: storage unique identifier | Currently set to : SentenceDetectorDLModel_c83c27f46b97
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setEncoder(com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@3933547a) | Info: Data encoder | Currently set to : com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@3933547a
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setImpossiblePenultimates(['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']) | Info: Impossible penultimates | Currently set to : ['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']
pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setModelArchitecture('cnn') | Info: Model architecture (CNN) | Currently set to : cnn
>>> pipe['document_assembler'] has settable params:
pipe['document_assembler'].setCleanupMode('shrink') | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink
###Markdown
6. Retrain with new parameters
###Code
# Train longer!
trainable_pipe = nlu.load('train.sentiment')
trainable_pipe['trainable_sentiment_dl'].setMaxEpochs(5)
fitted_pipe = trainable_pipe.fit(train_df.iloc[:100])
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df.iloc[:100],output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
preds
###Output
precision recall f1-score support
negative 0.00 0.00 0.00 31
neutral 0.00 0.00 0.00 0
positive 0.86 0.99 0.92 69
accuracy 0.68 100
macro avg 0.29 0.33 0.31 100
weighted avg 0.59 0.68 0.63 100
###Markdown
7. Try training with different Embeddings
###Code
# We can use nlu.print_components(action='embed_sentence') to see every possible sentence embedding we could use. Let's use BERT!
nlu.print_components(action='embed_sentence')
trainable_pipe = nlu.load('en.embed_sentence.small_bert_L12_768 train.sentiment')
# We usually need to train longer and use a smaller LR for non-USE-based sentence embeddings
# We could tune the hyperparameters further with hyperparameter tuning methods like gridsearch
# Also longer training gives more accuracy
trainable_pipe['trainable_sentiment_dl'].setMaxEpochs(70)
trainable_pipe['trainable_sentiment_dl'].setLr(0.0005)
fitted_pipe = trainable_pipe.fit(train_df)
# predict with the trainable pipeline on dataset and get predictions
preds = fitted_pipe.predict(train_df,output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
#preds
###Output
sent_small_bert_L12_768 download started this may take some time.
Approximate size to download 392.9 MB
[OK!]
sentence_detector_dl download started this may take some time.
Approximate size to download 354.6 KB
[OK!]
precision recall f1-score support
negative 0.87 0.84 0.86 488
neutral 0.00 0.00 0.00 0
positive 0.96 0.92 0.94 1085
accuracy 0.90 1573
macro avg 0.61 0.59 0.60 1573
weighted avg 0.94 0.90 0.92 1573
###Markdown
7.1 evaluate on Test Data
###Code
preds = fitted_pipe.predict(test_df,output_level='document')
# the sentence detector that is part of the pipe generates some NaNs; let's drop them first
preds.dropna(inplace=True)
print(classification_report(preds['y'], preds['sentiment']))
###Output
precision recall f1-score support
negative 0.84 0.76 0.80 116
neutral 0.00 0.00 0.00 0
positive 0.95 0.90 0.93 278
accuracy 0.86 394
macro avg 0.60 0.55 0.57 394
weighted avg 0.92 0.86 0.89 394
###Markdown
8. Let's save the model
###Code
stored_model_path = './models/classifier_dl_trained'
fitted_pipe.save(stored_model_path)
###Output
Stored model in ./models/classifier_dl_trained
###Markdown
9. Lets load the model from HDD.This makes Offlien NLU usage possible! You need to call nlu.load(path=path_to_the_pipe) to load a model/pipeline from disk.
###Code
hdd_pipe = nlu.load(path=stored_model_path)
preds = hdd_pipe.predict('According to the most recent update there has been a major decrese in the rate of oil')
preds
hdd_pipe.print_info()
###Output
The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :
>>> pipe['document_assembler'] has settable params:
pipe['document_assembler'].setCleanupMode('shrink') | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink
>>> pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'] has settable params:
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setExplodeSentences(False) | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setStorageRef('SentenceDetectorDLModel_c83c27f46b97') | Info: storage unique identifier | Currently set to : SentenceDetectorDLModel_c83c27f46b97
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setEncoder(com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@dcd6682) | Info: Data encoder | Currently set to : com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@dcd6682
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setImpossiblePenultimates(['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']) | Info: Impossible penultimates | Currently set to : ['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']
pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setModelArchitecture('cnn') | Info: Model architecture (CNN) | Currently set to : cnn
>>> pipe['bert_sentence@sent_small_bert_L12_768'] has settable params:
pipe['bert_sentence@sent_small_bert_L12_768'].setBatchSize(8) | Info: Size of every batch | Currently set to : 8
pipe['bert_sentence@sent_small_bert_L12_768'].setCaseSensitive(False) | Info: whether to ignore case in tokens for embeddings matching | Currently set to : False
pipe['bert_sentence@sent_small_bert_L12_768'].setDimension(768) | Info: Number of embedding dimensions | Currently set to : 768
pipe['bert_sentence@sent_small_bert_L12_768'].setMaxSentenceLength(128) | Info: Max sentence length to process | Currently set to : 128
pipe['bert_sentence@sent_small_bert_L12_768'].setIsLong(False) | Info: Use Long type instead of Int type for inputs buffer - Some Bert models require Long instead of Int. | Currently set to : False
pipe['bert_sentence@sent_small_bert_L12_768'].setStorageRef('sent_small_bert_L12_768') | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768
>>> pipe['sentiment_dl@sent_small_bert_L12_768'] has settable params:
pipe['sentiment_dl@sent_small_bert_L12_768'].setThreshold(0.6) | Info: The minimum threshold for the final result otheriwse it will be neutral | Currently set to : 0.6
pipe['sentiment_dl@sent_small_bert_L12_768'].setThresholdLabel('neutral') | Info: In case the score is less than threshold, what should be the label. Default is neutral. | Currently set to : neutral
pipe['sentiment_dl@sent_small_bert_L12_768'].setClasses(['positive', 'negative']) | Info: get the tags used to trained this SentimentDLModel | Currently set to : ['positive', 'negative']
pipe['sentiment_dl@sent_small_bert_L12_768'].setStorageRef('sent_small_bert_L12_768') | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768
|
01_RampUp/week2/practices/01-snail-and-well/caracol.ipynb | ###Markdown
01 - Snail and the well. A snail falls to the bottom of a 125 cm well. Each day the snail climbs 30 cm, but at night, while it sleeps, it slides back 20 cm because the walls are damp. How many days does it take to escape from the well? TIP: https://www.vix.com/es/btg/curiosidades/59215/acertijos-matematicos-el-caracol-en-el-pozo-facil TIP: http://puzzles.nigelcoldwell.co.uk/sixtytwo.htm Solution
###Code
# Assign the problem data to variables with descriptive names
# well height, daily advance, nightly slide, accumulated distance
# Assign 0 to the variable that represents the solution
# Write the code that solves the problem
# Print the result with print('Dias =', dias)
###Output
_____no_output_____
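One possible solution sketch (shown only as a reference — the variable names and loop structure are just one of several valid approaches):

```python
# well height, daily advance, nightly slide, accumulated distance
altura_pozo = 125
avance_diario = 30
retroceso_nocturno = 20
distancia_acumulada = 0
dias = 0

while True:
    dias += 1
    distancia_acumulada += avance_diario      # the snail climbs during the day
    if distancia_acumulada >= altura_pozo:    # it escapes before sliding back
        break
    distancia_acumulada -= retroceso_nocturno # it slides back while sleeping

print('Dias =', dias)
```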
###Markdown
Objectives 1. Working with variables 2. Using a **while** loop 3. Using **if-else** conditionals 4. Printing to the console. Bonus: The distance traveled by the snail is now defined by a list: ```avance_cm = [30, 21, 33, 77, 44, 45, 23, 45, 12, 34, 55]``` How long does it take to climb out of the well? What is its maximum displacement in one day? And its minimum? What is its average daily advance? What is the standard deviation of its daily displacement?
###Code
# Assign the problem data to variables with descriptive names
# well height, daily advance, nightly setback, accumulated distance
# Assign 0 to the variable that represents the solution
# Write the code that solves the problem
# Print the result with print('Dias =', dias)
# What is its maximum displacement in one day? And its minimum?
# What is its average advance per day?
# What is the standard deviation of its daily displacement?
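# --- Reference sketch for the bonus (added for illustration, not part of the original
# --- exercise); it assumes the same well height and nightly setback as above and
# --- computes the statistics over the days actually traveled, one possible reading.
import statistics
altura_pozo = 125
retroceso_nocturno = 20
avance_cm = [30, 21, 33, 77, 44, 45, 23, 45, 12, 34, 55]
distancia_acumulada = 0
dias = 0
for avance in avance_cm:
    dias += 1
    distancia_acumulada += avance
    if distancia_acumulada >= altura_pozo:
        break
    distancia_acumulada -= retroceso_nocturno
recorrido = avance_cm[:dias]  # daily advances until the escape
print('Dias =', dias)
print('Max daily advance =', max(recorrido))
print('Min daily advance =', min(recorrido))
print('Mean daily advance =', statistics.mean(recorrido))
print('Std dev of daily advance =', statistics.pstdev(recorrido))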
###Output
_____no_output_____ |
171IT102_Aastha_CNN_Object_Detection_and_Tracking.ipynb | ###Markdown
Loading the YOLO pretrained model
###Code
#various labels that YOLO detects
labels = open('coco.names').read().strip().split("\n")
print(labels)
print("Number of classes: ",len(labels))
colors = np.random.randint(0, 255, size=(len(labels), 3), dtype="uint8")
# for creating the network
model = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
###Output
_____no_output_____
###Markdown
Using YOLO pretrained model for Object Detection and Tracking
###Code
cap = cv2.VideoCapture('video.mp4')
# out = cv2.VideoWriter('yolo_output.avi', cv2.VideoWriter_fourcc(*'MJPG'), 10, (1920, 1080))
frame_no = 0
while(True):
    ret, frame = cap.read()
    if not ret:
        # stop when the video ends or a frame cannot be read
        break
    frame_no += 1
    (H, W) = frame.shape[:2]
# getting the output layers
ln = model.getLayerNames()
ln = [ln[i[0] - 1] for i in model.getUnconnectedOutLayers()]
# preparing the input image to feed into DNN
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
model.setInput(blob)
layerOutputs = model.forward(ln)
boxes = []
confidences = []
classIDs = []
for output in layerOutputs:
for detection in output:
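            # Each YOLO detection vector is [center_x, center_y, width, height, objectness,
            # per-class scores...], so the class scores start at index 5 and the box is at 0:4.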
scores = detection[5:]
classID = np.argmax(scores)
confidence = scores[classID]
if confidence > 0.5:
box = detection[0:4] * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
boxes.append([x, y, int(width), int(height)])
confidences.append(float(confidence))
classIDs.append(classID)
    # Performing non-max suppression to keep only the best bounding box for each object
idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.5)
obj_detected = []
if len(idxs) > 0:
for i in idxs.flatten():
x = boxes[i][0]
y = boxes[i][1]
w = boxes[i][2]
h = boxes[i][3]
color = [int(c) for c in colors[classIDs[i]]]
cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
text = "{}: {:.4f}".format(labels[classIDs[i]], confidences[i])
cv2.putText(frame, text, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
obj_detected.append(labels[classIDs[i]])
count = {}
for label in obj_detected:
if label in count :
count[label] += 1
else:
count[label] = 1
st = ''
for key, value in count.items():
st += str(value) + " " + str(key) + ", "
st = st[:-2]
print("Objects in frame ",frame_no," are: ",st)
cv2.imshow("Result", frame)
# out.write(frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cv2.destroyAllWindows()
cap.release()
# out.release()
###Output
Objects in frame 1 are: 12 person, 3 pottedplant, 3 car, 1 traffic light
Objects in frame 2 are: 9 person, 2 pottedplant, 3 car
Objects in frame 3 are: 12 person, 2 pottedplant, 4 car, 1 traffic light
Objects in frame 4 are: 12 person, 2 pottedplant, 3 car, 1 traffic light
Objects in frame 5 are: 9 person, 2 pottedplant, 4 car, 1 traffic light
Objects in frame 6 are: 13 person, 2 pottedplant, 3 car, 1 traffic light
Objects in frame 7 are: 11 person, 2 pottedplant, 2 car, 1 traffic light
Objects in frame 8 are: 14 person, 2 pottedplant, 1 traffic light, 2 car
Objects in frame 9 are: 13 person, 2 pottedplant, 1 car
Objects in frame 10 are: 14 person, 2 pottedplant, 1 car
Objects in frame 11 are: 13 person, 2 pottedplant, 1 car, 1 traffic light
Objects in frame 12 are: 13 person, 2 pottedplant, 2 car
Objects in frame 13 are: 14 person, 2 pottedplant, 1 car
Objects in frame 14 are: 14 person, 3 pottedplant, 1 car
Objects in frame 15 are: 12 person, 3 pottedplant, 2 car, 1 traffic light
Objects in frame 16 are: 11 person, 3 pottedplant, 1 traffic light, 1 car
Objects in frame 17 are: 12 person, 3 pottedplant, 1 traffic light
Objects in frame 18 are: 13 person, 3 pottedplant, 1 traffic light
Objects in frame 19 are: 14 person, 3 pottedplant
Objects in frame 20 are: 15 person, 3 pottedplant, 1 backpack, 1 truck
Objects in frame 21 are: 13 person, 4 pottedplant, 1 truck, 1 backpack, 2 traffic light
Objects in frame 22 are: 11 person, 4 pottedplant, 1 truck
Objects in frame 23 are: 12 person, 5 pottedplant, 1 truck, 1 backpack, 2 car
Objects in frame 24 are: 11 person, 4 pottedplant, 2 car, 1 truck, 1 traffic light, 1 backpack
Objects in frame 25 are: 11 person, 4 pottedplant, 3 car, 1 truck, 1 traffic light, 1 backpack
Objects in frame 26 are: 12 person, 3 pottedplant, 2 car, 1 truck, 1 traffic light, 1 backpack
Objects in frame 27 are: 11 person, 4 pottedplant, 2 car, 1 truck, 1 backpack
Objects in frame 28 are: 12 person, 2 car, 2 pottedplant, 1 truck, 2 traffic light, 1 backpack
Objects in frame 29 are: 11 person, 3 car, 2 pottedplant, 1 backpack, 1 truck, 1 traffic light
Objects in frame 30 are: 12 person, 2 car, 3 pottedplant, 1 truck, 1 backpack
Objects in frame 31 are: 10 person, 2 car, 2 pottedplant, 1 truck, 1 backpack, 1 traffic light
Objects in frame 32 are: 11 person, 3 car, 1 truck, 2 pottedplant, 1 traffic light, 1 backpack
Objects in frame 33 are: 12 person, 2 car, 2 pottedplant, 1 truck, 1 backpack
Objects in frame 34 are: 13 person, 2 car, 2 pottedplant, 1 truck, 1 traffic light, 1 backpack
Objects in frame 35 are: 11 person, 1 truck, 2 car, 2 pottedplant, 1 traffic light
Objects in frame 36 are: 12 person, 2 car, 1 truck, 2 pottedplant, 1 backpack, 1 traffic light
Objects in frame 37 are: 13 person, 3 car, 2 pottedplant, 1 truck, 1 traffic light, 1 backpack
Objects in frame 38 are: 11 person, 2 car, 1 truck, 2 pottedplant, 1 backpack, 1 traffic light, 1 handbag
Objects in frame 39 are: 13 person, 2 car, 1 truck, 2 pottedplant, 1 backpack, 1 traffic light
Objects in frame 40 are: 13 person, 1 car, 1 truck, 2 pottedplant, 1 traffic light
Objects in frame 41 are: 13 person, 1 car, 1 truck, 2 pottedplant, 1 backpack
Objects in frame 42 are: 13 person, 3 car, 2 pottedplant, 1 backpack, 1 truck
Objects in frame 43 are: 12 person, 2 car, 1 pottedplant, 1 truck, 1 backpack, 1 traffic light
Objects in frame 44 are: 12 person, 1 car, 1 truck, 3 pottedplant, 2 traffic light, 1 handbag
Objects in frame 45 are: 15 person, 2 car, 1 truck, 1 pottedplant, 1 backpack
Objects in frame 46 are: 14 person, 1 car, 1 truck, 1 pottedplant, 1 backpack
Objects in frame 47 are: 13 person, 1 car, 1 pottedplant, 1 truck, 2 traffic light, 1 backpack
Objects in frame 48 are: 12 person, 1 truck, 1 car, 1 pottedplant, 1 backpack
Objects in frame 49 are: 13 person, 1 truck, 1 car, 1 pottedplant, 1 backpack, 1 traffic light
Objects in frame 50 are: 15 person, 1 truck, 1 pottedplant, 1 car, 1 traffic light
Objects in frame 51 are: 12 person, 1 truck, 3 traffic light, 1 car, 1 backpack, 1 pottedplant
Objects in frame 52 are: 14 person, 1 truck, 2 car, 1 pottedplant, 1 backpack
Objects in frame 53 are: 15 person, 3 car, 1 backpack, 1 traffic light
Objects in frame 54 are: 14 person, 3 car, 1 pottedplant
Objects in frame 55 are: 14 person, 1 car, 1 traffic light, 1 pottedplant
Objects in frame 56 are: 16 person, 1 car, 1 traffic light, 1 pottedplant
Objects in frame 57 are: 17 person, 3 car, 1 handbag
Objects in frame 58 are: 16 person, 1 car, 1 traffic light
Objects in frame 59 are: 16 person, 1 car
Objects in frame 60 are: 15 person, 2 car, 1 traffic light, 1 skateboard
Objects in frame 61 are: 16 person, 2 car, 1 traffic light
Objects in frame 62 are: 14 person, 2 traffic light
Objects in frame 63 are: 14 person, 2 traffic light, 2 car
Objects in frame 64 are: 17 person, 2 traffic light, 1 car
Objects in frame 65 are: 17 person, 1 traffic light
Objects in frame 66 are: 16 person, 2 traffic light, 1 car
Objects in frame 67 are: 14 person, 2 traffic light, 1 handbag, 1 pottedplant
Objects in frame 68 are: 16 person, 1 traffic light, 1 car
Objects in frame 69 are: 17 person, 2 traffic light
Objects in frame 70 are: 16 person, 1 car, 1 handbag, 1 traffic light
Objects in frame 71 are: 17 person, 2 car, 3 traffic light, 1 handbag
Objects in frame 72 are: 15 person, 1 traffic light
Objects in frame 73 are: 14 person, 3 traffic light, 1 car
Objects in frame 74 are: 15 person, 2 traffic light
Objects in frame 75 are: 15 person, 2 traffic light, 1 handbag
Objects in frame 76 are: 14 person, 1 traffic light, 1 car
Objects in frame 77 are: 15 person, 2 traffic light, 1 clock, 1 car
Objects in frame 78 are: 16 person, 1 car, 2 traffic light, 1 clock
Objects in frame 79 are: 14 person, 2 car, 1 traffic light
Objects in frame 80 are: 13 person, 1 car, 2 traffic light
Objects in frame 81 are: 14 person, 1 car, 1 backpack, 1 handbag
Objects in frame 82 are: 13 person, 1 backpack
Objects in frame 83 are: 13 person, 1 traffic light, 1 backpack
Objects in frame 84 are: 15 person, 2 traffic light
Objects in frame 85 are: 14 person, 1 traffic light, 1 handbag
Objects in frame 86 are: 15 person
Objects in frame 87 are: 13 person, 2 traffic light
Objects in frame 88 are: 12 person, 1 traffic light, 1 suitcase
Objects in frame 89 are: 12 person, 2 traffic light
Objects in frame 90 are: 13 person, 1 car, 1 traffic light
Objects in frame 91 are: 14 person, 1 car, 1 clock
Objects in frame 92 are: 13 person, 1 car, 1 backpack
Objects in frame 93 are: 15 person, 1 car, 1 traffic light
Objects in frame 94 are: 14 person, 1 car, 1 backpack, 1 handbag
Objects in frame 95 are: 14 person, 1 backpack, 1 car
Objects in frame 96 are: 14 person, 1 car
Objects in frame 97 are: 13 person, 2 car, 1 backpack
Objects in frame 98 are: 13 person, 2 car, 1 backpack
Objects in frame 99 are: 13 person, 2 car, 1 backpack
Objects in frame 100 are: 12 person, 2 car, 1 backpack, 1 handbag
Objects in frame 101 are: 12 person, 2 car, 1 backpack
Objects in frame 102 are: 12 person, 2 car
Objects in frame 103 are: 12 person, 2 car, 1 pottedplant
Objects in frame 104 are: 12 person, 2 car, 1 pottedplant
Objects in frame 105 are: 13 person, 2 car
Objects in frame 106 are: 10 person, 2 car, 1 pottedplant
Objects in frame 107 are: 14 person, 2 car, 1 pottedplant
Objects in frame 108 are: 12 person, 2 car, 1 pottedplant
Objects in frame 109 are: 11 person, 2 car, 1 pottedplant
Objects in frame 110 are: 13 person, 2 car, 1 pottedplant
Objects in frame 111 are: 12 person, 2 car, 1 pottedplant, 1 handbag
Objects in frame 112 are: 14 person, 2 car
Objects in frame 113 are: 11 person, 2 car, 1 backpack
Objects in frame 114 are: 15 person, 2 car, 1 backpack, 1 pottedplant
Objects in frame 115 are: 12 person, 2 car, 1 pottedplant, 1 backpack
Objects in frame 116 are: 13 person, 2 car
Objects in frame 117 are: 14 person, 3 car
###Markdown
Sample images from the video
###Code
img = mpimg.imread('output1.png')
imgplot = plt.imshow(img)
plt.show()
img = mpimg.imread('output2.png')
imgplot = plt.imshow(img)
plt.show()
###Output
_____no_output_____ |
[14]fashion_mnist_keras.ipynb | ###Markdown
Modern deep learning: designing a simple CNN with Keras to classify clothing - Fashion MNIST. Why use the Fashion-MNIST dataset? In the past, deep learning tutorials often used MNIST handwritten digit recognition as the introductory "hello world" example, but: * MNIST is too easy * MNIST has been overused (for more than 10 years) * MNIST does not reflect modern computer-vision problems. For more on Fashion-MNIST see [here](https://arxiv.org/abs/1708.07747) (**Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms**). Data overview: [fashion_mnist](https://github.com/zalandoresearch/fashion-mnist) contains 60,000 training and 10,000 test examples in 10 classes; every image is a 28x28 grayscale image. **Class** / **Description**: 0 T-shirt/top, 1 Trouser, 2 Pullover, 3 Dress, 4 Coat, 5 Sandal, 6 Shirt, 7 Sneaker, 8 Bag, 9 Ankle boot. Download the data: first we install TensorFlow 1.8.0 or newer, then download fashion-mnist through the Keras datasets API.
###Code
!pip install -q -U "tensorflow>=1.8.0"
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import IPython.display
# Load the fashion-mnist pre-shuffled train data and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
###Markdown
We can see that the training set has shape (60000, 28, 28) and that there are 60,000 labels. EDA: visualize the data. After organizing the data, plot one of the images.
###Code
# Print training set shape - note there are 60,000 training images of size 28x28 and 60,000 train labels
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
# Print the number of training and test datasets
print(x_train.shape[0], 'train set')
print(x_test.shape[0], 'test set')
# Define the text labels
fashion_mnist_labels = ["T-shirt/top", # index 0
"Trouser", # index 1
"Pullover", # index 2
"Dress", # index 3
"Coat", # index 4
"Sandal", # index 5
"Shirt", # index 6
"Sneaker", # index 7
"Bag", # index 8
"Ankle boot"] # index 9
# Image index, you can pick any number between 0 and 59,999
img_index = 5
# y_train contains the labels, ranging from 0 to 9
label_index = y_train[img_index]
# Print the label, for example 2 Pullover
print ("y = " + str(label_index) + " " +(fashion_mnist_labels[label_index]))
# Show one of the images from the training dataset
plt.imshow(x_train[img_index])
###Output
x_train shape: (60000, 28, 28) y_train shape: (60000,)
60000 train set
10000 test set
y = 2 Pullover
###Markdown
Data normalization. Normalizing the data is a very important preprocessing step when training computer-vision models. Normalizing along each data axis gives all axes a similar value range.
###Code
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
print("Number of train data - " + str(len(x_train)))
print("Number of test data - " + str(len(x_test)))
###Output
Number of train data - 60000
Number of test data - 10000
###Markdown
Modelling: split the data into train/validation/test sets. * Training data - used to train the CNN * Validation data - used to tune hyperparameters that the model does not learn directly (e.g. the learning rate) * Test data - used to verify how well the learned model performs.
###Code
# Further break training data into train / validation sets (put 5,000 into the validation set and keep the remaining 55,000 for training)
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# Reshape input data from (28, 28) to (28, 28, 1)
w, h = 28, 28
x_train = x_train.reshape(x_train.shape[0], w, h, 1)
x_valid = x_valid.reshape(x_valid.shape[0], w, h, 1)
x_test = x_test.reshape(x_test.shape[0], w, h, 1)
# One-hot encode the labels
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_valid = tf.keras.utils.to_categorical(y_valid, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
# Print training set shape
print("x_train shape:", x_train.shape, "y_train shape:", y_train.shape)
# Print the number of training, validation, and test datasets
print(x_train.shape[0], 'train set')
print(x_valid.shape[0], 'validation set')
print(x_test.shape[0], 'test set')
###Output
x_train shape: (55000, 28, 28, 1) y_train shape: (55000, 10)
55000 train set
5000 validation set
10000 test set
###Markdown
Designing the deep network architecture. In Keras there are two ways to build a model: 1. [Sequential model API](https://keras.io/models/sequential/) 2. [Functional API](https://keras.io/models/model/). This course uses the Sequential model API. If you are interested in the Functional API, see [Predicting the price of wine with the Keras Functional API and TensorFlow](https://medium.com/tensorflow/predicting-the-price-of-wine-with-the-keras-functional-api-and-tensorflow-a95d1c2c1b03). To build the network we use the following basic components: * Conv2D() [link text](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D/) - adds a convolutional layer * Pooling() [link text](https://keras.io/layers/pooling/) - adds a pooling layer * Dropout() [link text](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) - adds dropout. Finally we use the model.summary() function to display information about the constructed model.
###Code
model = tf.keras.Sequential()
# Must define the input shape in the first layer of the neural network
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Take a look at the model summary
model.summary()
model_large = tf.keras.models.Sequential()
model_large.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model_large.add(tf.keras.layers.Conv2D(64, (5, 5), padding='same', activation='elu'))
model_large.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model_large.add(tf.keras.layers.Dropout(0.25))
model_large.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model_large.add(tf.keras.layers.Conv2D(128, (5, 5), padding='same', activation='elu'))
model_large.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model_large.add(tf.keras.layers.Dropout(0.25))
model_large.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model_large.add(tf.keras.layers.Conv2D(256, (5, 5), padding='same', activation='elu'))
model_large.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model_large.add(tf.keras.layers.Dropout(0.25))
model_large.add(tf.keras.layers.Flatten())
model_large.add(tf.keras.layers.Dense(256))
model_large.add(tf.keras.layers.Activation('elu'))
model_large.add(tf.keras.layers.Dropout(0.5))
model_large.add(tf.keras.layers.Dense(10))
model_large.add(tf.keras.layers.Activation('softmax'))
model_large.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_7 (Conv2D) (None, 28, 28, 64) 320
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 14, 14, 64) 0
_________________________________________________________________
dropout_10 (Dropout) (None, 14, 14, 64) 0
_________________________________________________________________
conv2d_8 (Conv2D) (None, 14, 14, 32) 8224
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 7, 7, 32) 0
_________________________________________________________________
dropout_11 (Dropout) (None, 7, 7, 32) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 1568) 0
_________________________________________________________________
dense_6 (Dense) (None, 256) 401664
_________________________________________________________________
dropout_12 (Dropout) (None, 256) 0
_________________________________________________________________
dense_7 (Dense) (None, 10) 2570
=================================================================
Total params: 412,778
Trainable params: 412,778
Non-trainable params: 0
_________________________________________________________________
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
batch_normalization_3 (Batch (None, 28, 28, 1) 4
_________________________________________________________________
conv2d_9 (Conv2D) (None, 28, 28, 64) 1664
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 14, 14, 64) 0
_________________________________________________________________
dropout_13 (Dropout) (None, 14, 14, 64) 0
_________________________________________________________________
batch_normalization_4 (Batch (None, 14, 14, 64) 256
_________________________________________________________________
conv2d_10 (Conv2D) (None, 14, 14, 128) 204928
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 7, 7, 128) 0
_________________________________________________________________
dropout_14 (Dropout) (None, 7, 7, 128) 0
_________________________________________________________________
batch_normalization_5 (Batch (None, 7, 7, 128) 512
_________________________________________________________________
conv2d_11 (Conv2D) (None, 7, 7, 256) 819456
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 3, 3, 256) 0
_________________________________________________________________
dropout_15 (Dropout) (None, 3, 3, 256) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 2304) 0
_________________________________________________________________
dense_8 (Dense) (None, 256) 590080
_________________________________________________________________
activation_2 (Activation) (None, 256) 0
_________________________________________________________________
dropout_16 (Dropout) (None, 256) 0
_________________________________________________________________
dense_9 (Dense) (None, 10) 2570
_________________________________________________________________
activation_3 (Activation) (None, 10) 0
=================================================================
Total params: 1,619,470
Trainable params: 1,619,084
Non-trainable params: 386
_________________________________________________________________
###Markdown
Compiling the model. After the model is defined it needs to be compiled; compiling here means initializing the parameter values. Before compiling, the following three arguments must be set: * An optimizer - a gradient-descent method such as SGD or Adam * A loss function - the summed distance between the data and the generated predictions * A list of metrics - which evaluation metrics to use
###Code
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model_large.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training the model. Once compiled, the model can be fitted to the data using fit(). We plan to train for 10 epochs and checkpoint the intermediate model from each epoch. With [ModelCheckpoint](https://keras.io/callbacks/modelcheckpoint) the saved model is only updated when the validation score is the best so far.
###Code
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose = 1, save_best_only=True)
checkpointer_large = ModelCheckpoint(filepath='model_large.weights.best.hdf5', verbose = 1, save_best_only=True)
model.fit(x_train,
y_train,
batch_size=64,
epochs=10,
validation_data=(x_valid, y_valid),
callbacks=[checkpointer])
model_large.fit(x_train,
y_train,
batch_size=64,
epochs=10,
validation_data=(x_valid, y_valid),
callbacks=[checkpointer_large])
###Output
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
54912/55000 [============================>.] - ETA: 0s - loss: 0.6046 - acc: 0.7769
Epoch 00001: val_loss improved from inf to 0.38444, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 11s 194us/step - loss: 0.6041 - acc: 0.7771 - val_loss: 0.3844 - val_acc: 0.8632
Epoch 2/10
54720/55000 [============================>.] - ETA: 0s - loss: 0.4165 - acc: 0.8488
Epoch 00002: val_loss improved from 0.38444 to 0.32371, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 9s 172us/step - loss: 0.4166 - acc: 0.8488 - val_loss: 0.3237 - val_acc: 0.8826
Epoch 3/10
54720/55000 [============================>.] - ETA: 0s - loss: 0.3731 - acc: 0.8645
Epoch 00003: val_loss improved from 0.32371 to 0.29404, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 9s 172us/step - loss: 0.3732 - acc: 0.8644 - val_loss: 0.2940 - val_acc: 0.8916
Epoch 4/10
54784/55000 [============================>.] - ETA: 0s - loss: 0.3468 - acc: 0.8725
Epoch 00004: val_loss improved from 0.29404 to 0.27929, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 175us/step - loss: 0.3465 - acc: 0.8725 - val_loss: 0.2793 - val_acc: 0.8996
Epoch 5/10
54848/55000 [============================>.] - ETA: 0s - loss: 0.3264 - acc: 0.8797
Epoch 00005: val_loss improved from 0.27929 to 0.27472, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 174us/step - loss: 0.3264 - acc: 0.8798 - val_loss: 0.2747 - val_acc: 0.9030
Epoch 6/10
54912/55000 [============================>.] - ETA: 0s - loss: 0.3135 - acc: 0.8831
Epoch 00006: val_loss improved from 0.27472 to 0.25534, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 173us/step - loss: 0.3135 - acc: 0.8831 - val_loss: 0.2553 - val_acc: 0.9050
Epoch 7/10
54912/55000 [============================>.] - ETA: 0s - loss: 0.3016 - acc: 0.8881
Epoch 00007: val_loss improved from 0.25534 to 0.25027, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 174us/step - loss: 0.3016 - acc: 0.8881 - val_loss: 0.2503 - val_acc: 0.9078
Epoch 8/10
54720/55000 [============================>.] - ETA: 0s - loss: 0.2892 - acc: 0.8934
Epoch 00008: val_loss improved from 0.25027 to 0.24657, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 173us/step - loss: 0.2891 - acc: 0.8934 - val_loss: 0.2466 - val_acc: 0.9048
Epoch 9/10
54784/55000 [============================>.] - ETA: 0s - loss: 0.2837 - acc: 0.8952
Epoch 00009: val_loss improved from 0.24657 to 0.23321, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 173us/step - loss: 0.2837 - acc: 0.8952 - val_loss: 0.2332 - val_acc: 0.9146
Epoch 10/10
54912/55000 [============================>.] - ETA: 0s - loss: 0.2721 - acc: 0.8985
Epoch 00010: val_loss improved from 0.23321 to 0.22609, saving model to model.weights.best.hdf5
55000/55000 [==============================] - 10s 174us/step - loss: 0.2722 - acc: 0.8984 - val_loss: 0.2261 - val_acc: 0.9198
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.6776 - acc: 0.7829
Epoch 00001: val_loss improved from inf to 0.36301, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 29s 527us/step - loss: 0.6774 - acc: 0.7830 - val_loss: 0.3630 - val_acc: 0.8760
Epoch 2/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.3973 - acc: 0.8586
Epoch 00002: val_loss improved from 0.36301 to 0.28662, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 27s 488us/step - loss: 0.3973 - acc: 0.8586 - val_loss: 0.2866 - val_acc: 0.8948
Epoch 3/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.3334 - acc: 0.8811
Epoch 00003: val_loss improved from 0.28662 to 0.28298, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 27s 488us/step - loss: 0.3334 - acc: 0.8811 - val_loss: 0.2830 - val_acc: 0.8956
Epoch 4/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.3092 - acc: 0.8882
Epoch 00004: val_loss improved from 0.28298 to 0.25280, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 27s 488us/step - loss: 0.3092 - acc: 0.8882 - val_loss: 0.2528 - val_acc: 0.9054
Epoch 5/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.2826 - acc: 0.8978
Epoch 00005: val_loss improved from 0.25280 to 0.25083, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 27s 490us/step - loss: 0.2826 - acc: 0.8978 - val_loss: 0.2508 - val_acc: 0.9102
Epoch 6/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.2680 - acc: 0.9030
Epoch 00006: val_loss improved from 0.25083 to 0.24714, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 27s 488us/step - loss: 0.2680 - acc: 0.9029 - val_loss: 0.2471 - val_acc: 0.9122
Epoch 7/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.2595 - acc: 0.9065
Epoch 00007: val_loss did not improve from 0.24714
55000/55000 [==============================] - 27s 486us/step - loss: 0.2595 - acc: 0.9064 - val_loss: 0.2475 - val_acc: 0.9154
Epoch 8/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.2420 - acc: 0.9130
Epoch 00008: val_loss improved from 0.24714 to 0.21690, saving model to model_large.weights.best.hdf5
55000/55000 [==============================] - 27s 488us/step - loss: 0.2419 - acc: 0.9130 - val_loss: 0.2169 - val_acc: 0.9194
Epoch 9/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.2291 - acc: 0.9168
Epoch 00009: val_loss did not improve from 0.21690
55000/55000 [==============================] - 27s 488us/step - loss: 0.2291 - acc: 0.9168 - val_loss: 0.2727 - val_acc: 0.9106
Epoch 10/10
54976/55000 [============================>.] - ETA: 0s - loss: 0.2199 - acc: 0.9189
Epoch 00010: val_loss did not improve from 0.21690
55000/55000 [==============================] - 27s 487us/step - loss: 0.2199 - acc: 0.9189 - val_loss: 0.2236 - val_acc: 0.9192
###Markdown
Load the model weights that achieved the best validation accuracy.
###Code
# Load the weights with the best validation accuracy
# model.load_weights('model.weights.best.hdf5')
model_large.load_weights('model_large.weights.best.hdf5')
model.load_weights('model.weights.best.hdf5')
###Output
_____no_output_____
###Markdown
Use the held-out test set to measure test accuracy.
###Code
# Evaluate the model on test set
score = model.evaluate(x_test, y_test, verbose=0)
score_large = model_large.evaluate(x_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
print('\n', 'Test accuracy of large model:', score_large[1])
###Output
Test accuracy: 0.9098
Test accuracy of large model: 0.9115
###Markdown
Visualizing the predictions. Let's make predictions with the model we just trained and visualize them. First we convert the predicted values into class labels. Then we plot 15 test images annotated with the predicted result: red means the prediction is wrong, green means it is correct.
###Code
y_hat = model_large.predict(x_test)
# Plot a random sample of 15 test images, their predicted labels and ground truth
figure = plt.figure(figsize=(20, 8))
for i, index in enumerate(np.random.choice(x_test.shape[0], size=15, replace=False)):
ax = figure.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
# Display each image
ax.imshow(np.squeeze(x_test[index]))
predict_index = np.argmax(y_hat[index])
true_index = np.argmax(y_test[index])
# Set the title for each image
ax.set_title("{} ({})".format(fashion_mnist_labels[predict_index],
fashion_mnist_labels[true_index]),
color=("green" if predict_index == true_index else "red"))
###Output
_____no_output_____ |
research/deeplab/notebook/customized_deeplab_demo.ipynb | ###Markdown
Customized DeepLab Demo
###Code
#@title Import
import os
import sys
import tensorflow as tf
import numpy as np
from urllib.request import urlopen
from matplotlib import gridspec
from matplotlib import pyplot as plt
from PIL import Image
from io import BytesIO
sys.path.append("/home/jhpark/dohai90/workspaces/models/research")
#@title Helper method
class DeepLabModel(object):
"""Class to load deeplab model and run inference"""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, graph_path):
"""Load pretrained deeplab model"""
self.graph = tf.Graph()
self.graph_def = None
with tf.gfile.Open(graph_path, 'rb') as f:
self.graph_def = tf.GraphDef.FromString(f.read())
if self.graph_def is None:
raise RuntimeError('Cannot read inference graph')
with self.graph.as_default():
tf.import_graph_def(self.graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image
Args:
image: A PIL.Image object, raw input image
Returns:
            resized_image: RGB image resized from original input image
seg_map: Segmentation map of 'resized_image'
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
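        # The frozen graph expects a batch of images, so the single resized image is wrapped
        # in a one-element list; the returned seg_map batch therefore has shape (1, H, W).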
batch_seg_map = self.sess.run(self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
print(batch_seg_map.shape)
return resized_image, seg_map
def create_pascal_label_colormap():
"""Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns:
A colormap for visualizing segmentation results.
"""
colormap = np.zeros((256, 3), dtype=int)
ind = np.arange(256, dtype=int)
for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
      label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the PASCAL color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_pascal_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation image')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
LABEL_NAMES = np.asarray([
'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
#@title Select model
# base model path
BASE_PATH = '/home/jhpark/dohai90/models'
# @param ['mobilenetv2_coco_voctrainaug', 'xception_coco_voctrainaug']
MODEL_NAME = 'mobilenetv2_coco_voctrainaug'
_MODEL_PATH = {
'mobilenetv2_coco_voctrainaug': os.path.join(BASE_PATH, 'deeplabv3_mnv2_pascal_train_aug/mnv2_frozen_inference_graph.pb'),
'xception_coco_voctrainaug': os.path.join(BASE_PATH, 'deeplabv3_pascal_train_aug/frozen_inference_graph.pb')
}
model_path = _MODEL_PATH[MODEL_NAME]
MODEL = DeepLabModel(model_path)
print('model loaded successfully!')
###Output
model loaded successfully!
###Markdown
Run on sample images
###Code
#@title Run on sample images
SAMPLE_IMAGE = 'image3'
SAMPLE_URL = ('https://github.com/tensorflow/models/blob/master/research/'
'deeplab/g3doc/img/%s.jpg?raw=true')
def run_visualization(url):
"""Inferences DeepLab model and viusalizes result."""
try:
f = urlopen(url)
jpeg_str = f.read()
original_img = Image.open(BytesIO(jpeg_str))
except IOError:
print('Cannot retrieve image. Please check url: '+ url)
return
print('Running deeplab on image %s...' % url)
resized_img, seg_map = MODEL.run(original_img)
print(seg_map.shape)
vis_segmentation(resized_img, seg_map)
img_url = SAMPLE_URL % SAMPLE_IMAGE
run_visualization(img_url)
###Output
Running deeplab on image https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/img/image3.jpg?raw=true...
(1, 487, 513)
(487, 513)
###Markdown
Note
###Code
f = urlopen(img_url)
jpeg_str = f.read()
ori_img = Image.open(BytesIO(jpeg_str))
type(ori_img)
ori_img.size # a PIL ImageFile reports size as (width, height)
np_img = np.asarray(ori_img)
np_img.shape # numpy array has shape (height, width, depth)
wts = [n for n in MODEL.graph_def.node if n.op=='Const'] # filter weights
from tensorflow.python.framework import tensor_util
for n in wts:
print("name %s" %n.name)
print("shape ", tensor_util.MakeNdarray(n.attr['value'].tensor).shape)
###Output
name strided_slice/stack
shape (1,)
name strided_slice/stack_1
shape (1,)
name strided_slice/stack_2
shape (1,)
name strided_slice_1/stack
shape (1,)
name strided_slice_1/stack_1
shape (1,)
name strided_slice_1/stack_2
shape (1,)
name strided_slice_2/stack
shape (1,)
name strided_slice_2/stack_1
shape (1,)
name strided_slice_2/stack_2
shape (1,)
name sub/x
shape ()
name Maximum/y
shape ()
name sub_1/x
shape ()
name Maximum_1/y
shape ()
name Reshape/tensor
shape (3,)
name Reshape/shape
shape (3,)
name Rank
shape ()
name Equal/y
shape ()
name Assert/Assert/data_0
shape ()
name Assert/Assert/data_1
shape ()
name strided_slice_3/stack
shape (1,)
name strided_slice_3/stack_1
shape (1,)
name strided_slice_3/stack_2
shape (1,)
name strided_slice_4/stack
shape (1,)
name strided_slice_4/stack_1
shape (1,)
name strided_slice_4/stack_2
shape (1,)
name Assert_1/Assert/data_0
shape ()
name Assert_2/Assert/data_0
shape ()
name sub_3/y
shape ()
name sub_5/y
shape ()
name GreaterEqual_2/y
shape ()
name GreaterEqual_3/y
shape ()
name Assert_3/Assert/data_0
shape ()
name stack/0
shape ()
name stack_1/0
shape ()
name stack_2
shape (2,)
name strided_slice_5/stack
shape (1,)
name strided_slice_5/stack_1
shape (1,)
name strided_slice_5/stack_2
shape (1,)
name ExpandDims/dim
shape ()
name mul/x
shape ()
name sub_7/y
shape ()
name MobilenetV2/Conv/weights
shape (3, 3, 3, 32)
name MobilenetV2/Conv/BatchNorm/gamma
shape (32,)
name MobilenetV2/Conv/BatchNorm/beta
shape (32,)
name MobilenetV2/Conv/BatchNorm/moving_mean
shape (32,)
name MobilenetV2/Conv/BatchNorm/moving_variance
shape (32,)
name MobilenetV2/expanded_conv/depthwise/depthwise_weights
shape (3, 3, 32, 1)
name MobilenetV2/expanded_conv/depthwise/BatchNorm/gamma
shape (32,)
name MobilenetV2/expanded_conv/depthwise/BatchNorm/beta
shape (32,)
name MobilenetV2/expanded_conv/depthwise/BatchNorm/moving_mean
shape (32,)
name MobilenetV2/expanded_conv/depthwise/BatchNorm/moving_variance
shape (32,)
name MobilenetV2/expanded_conv/project/weights
shape (1, 1, 32, 16)
name MobilenetV2/expanded_conv/project/BatchNorm/gamma
shape (16,)
name MobilenetV2/expanded_conv/project/BatchNorm/beta
shape (16,)
name MobilenetV2/expanded_conv/project/BatchNorm/moving_mean
shape (16,)
name MobilenetV2/expanded_conv/project/BatchNorm/moving_variance
shape (16,)
name MobilenetV2/expanded_conv_1/expand/weights
shape (1, 1, 16, 96)
name MobilenetV2/expanded_conv_1/expand/BatchNorm/gamma
shape (96,)
name MobilenetV2/expanded_conv_1/expand/BatchNorm/beta
shape (96,)
name MobilenetV2/expanded_conv_1/expand/BatchNorm/moving_mean
shape (96,)
name MobilenetV2/expanded_conv_1/expand/BatchNorm/moving_variance
shape (96,)
name MobilenetV2/expanded_conv_1/depthwise/depthwise_weights
shape (3, 3, 96, 1)
name MobilenetV2/expanded_conv_1/depthwise/BatchNorm/gamma
shape (96,)
name MobilenetV2/expanded_conv_1/depthwise/BatchNorm/beta
shape (96,)
name MobilenetV2/expanded_conv_1/depthwise/BatchNorm/moving_mean
shape (96,)
name MobilenetV2/expanded_conv_1/depthwise/BatchNorm/moving_variance
shape (96,)
name MobilenetV2/expanded_conv_1/project/weights
shape (1, 1, 96, 24)
name MobilenetV2/expanded_conv_1/project/BatchNorm/gamma
shape (24,)
name MobilenetV2/expanded_conv_1/project/BatchNorm/beta
shape (24,)
name MobilenetV2/expanded_conv_1/project/BatchNorm/moving_mean
shape (24,)
name MobilenetV2/expanded_conv_1/project/BatchNorm/moving_variance
shape (24,)
name MobilenetV2/expanded_conv_2/expand/weights
shape (1, 1, 24, 144)
name MobilenetV2/expanded_conv_2/expand/BatchNorm/gamma
shape (144,)
name MobilenetV2/expanded_conv_2/expand/BatchNorm/beta
shape (144,)
name MobilenetV2/expanded_conv_2/expand/BatchNorm/moving_mean
shape (144,)
name MobilenetV2/expanded_conv_2/expand/BatchNorm/moving_variance
shape (144,)
name MobilenetV2/expanded_conv_2/depthwise/depthwise_weights
shape (3, 3, 144, 1)
name MobilenetV2/expanded_conv_2/depthwise/BatchNorm/gamma
shape (144,)
name MobilenetV2/expanded_conv_2/depthwise/BatchNorm/beta
shape (144,)
name MobilenetV2/expanded_conv_2/depthwise/BatchNorm/moving_mean
shape (144,)
name MobilenetV2/expanded_conv_2/depthwise/BatchNorm/moving_variance
shape (144,)
name MobilenetV2/expanded_conv_2/project/weights
shape (1, 1, 144, 24)
name MobilenetV2/expanded_conv_2/project/BatchNorm/gamma
shape (24,)
name MobilenetV2/expanded_conv_2/project/BatchNorm/beta
shape (24,)
name MobilenetV2/expanded_conv_2/project/BatchNorm/moving_mean
shape (24,)
name MobilenetV2/expanded_conv_2/project/BatchNorm/moving_variance
shape (24,)
name MobilenetV2/expanded_conv_3/expand/weights
shape (1, 1, 24, 144)
name MobilenetV2/expanded_conv_3/expand/BatchNorm/gamma
shape (144,)
name MobilenetV2/expanded_conv_3/expand/BatchNorm/beta
shape (144,)
name MobilenetV2/expanded_conv_3/expand/BatchNorm/moving_mean
shape (144,)
name MobilenetV2/expanded_conv_3/expand/BatchNorm/moving_variance
shape (144,)
name MobilenetV2/expanded_conv_3/depthwise/depthwise_weights
shape (3, 3, 144, 1)
name MobilenetV2/expanded_conv_3/depthwise/BatchNorm/gamma
shape (144,)
name MobilenetV2/expanded_conv_3/depthwise/BatchNorm/beta
shape (144,)
name MobilenetV2/expanded_conv_3/depthwise/BatchNorm/moving_mean
shape (144,)
name MobilenetV2/expanded_conv_3/depthwise/BatchNorm/moving_variance
shape (144,)
name MobilenetV2/expanded_conv_3/project/weights
shape (1, 1, 144, 32)
name MobilenetV2/expanded_conv_3/project/BatchNorm/gamma
shape (32,)
name MobilenetV2/expanded_conv_3/project/BatchNorm/beta
shape (32,)
name MobilenetV2/expanded_conv_3/project/BatchNorm/moving_mean
shape (32,)
name MobilenetV2/expanded_conv_3/project/BatchNorm/moving_variance
shape (32,)
name MobilenetV2/expanded_conv_4/expand/weights
shape (1, 1, 32, 192)
name MobilenetV2/expanded_conv_4/expand/BatchNorm/gamma
shape (192,)
name MobilenetV2/expanded_conv_4/expand/BatchNorm/beta
shape (192,)
name MobilenetV2/expanded_conv_4/expand/BatchNorm/moving_mean
shape (192,)
name MobilenetV2/expanded_conv_4/expand/BatchNorm/moving_variance
shape (192,)
name MobilenetV2/expanded_conv_4/depthwise/depthwise_weights
shape (3, 3, 192, 1)
name MobilenetV2/expanded_conv_4/depthwise/BatchNorm/gamma
shape (192,)
name MobilenetV2/expanded_conv_4/depthwise/BatchNorm/beta
shape (192,)
name MobilenetV2/expanded_conv_4/depthwise/BatchNorm/moving_mean
shape (192,)
name MobilenetV2/expanded_conv_4/depthwise/BatchNorm/moving_variance
shape (192,)
name MobilenetV2/expanded_conv_4/project/weights
shape (1, 1, 192, 32)
name MobilenetV2/expanded_conv_4/project/BatchNorm/gamma
shape (32,)
name MobilenetV2/expanded_conv_4/project/BatchNorm/beta
shape (32,)
name MobilenetV2/expanded_conv_4/project/BatchNorm/moving_mean
shape (32,)
name MobilenetV2/expanded_conv_4/project/BatchNorm/moving_variance
shape (32,)
name MobilenetV2/expanded_conv_5/expand/weights
shape (1, 1, 32, 192)
name MobilenetV2/expanded_conv_5/expand/BatchNorm/gamma
shape (192,)
name MobilenetV2/expanded_conv_5/expand/BatchNorm/beta
shape (192,)
name MobilenetV2/expanded_conv_5/expand/BatchNorm/moving_mean
shape (192,)
name MobilenetV2/expanded_conv_5/expand/BatchNorm/moving_variance
shape (192,)
name MobilenetV2/expanded_conv_5/depthwise/depthwise_weights
shape (3, 3, 192, 1)
name MobilenetV2/expanded_conv_5/depthwise/BatchNorm/gamma
shape (192,)
name MobilenetV2/expanded_conv_5/depthwise/BatchNorm/beta
shape (192,)
name MobilenetV2/expanded_conv_5/depthwise/BatchNorm/moving_mean
shape (192,)
name MobilenetV2/expanded_conv_5/depthwise/BatchNorm/moving_variance
shape (192,)
name MobilenetV2/expanded_conv_5/project/weights
shape (1, 1, 192, 32)
name MobilenetV2/expanded_conv_5/project/BatchNorm/gamma
shape (32,)
name MobilenetV2/expanded_conv_5/project/BatchNorm/beta
shape (32,)
name MobilenetV2/expanded_conv_5/project/BatchNorm/moving_mean
shape (32,)
name MobilenetV2/expanded_conv_5/project/BatchNorm/moving_variance
shape (32,)
name MobilenetV2/expanded_conv_6/expand/weights
shape (1, 1, 32, 192)
name MobilenetV2/expanded_conv_6/expand/BatchNorm/gamma
shape (192,)
name MobilenetV2/expanded_conv_6/expand/BatchNorm/beta
shape (192,)
name MobilenetV2/expanded_conv_6/expand/BatchNorm/moving_mean
shape (192,)
name MobilenetV2/expanded_conv_6/expand/BatchNorm/moving_variance
shape (192,)
name MobilenetV2/expanded_conv_6/depthwise/depthwise_weights
shape (3, 3, 192, 1)
name MobilenetV2/expanded_conv_6/depthwise/BatchNorm/gamma
shape (192,)
name MobilenetV2/expanded_conv_6/depthwise/BatchNorm/beta
shape (192,)
name MobilenetV2/expanded_conv_6/depthwise/BatchNorm/moving_mean
shape (192,)
name MobilenetV2/expanded_conv_6/depthwise/BatchNorm/moving_variance
shape (192,)
name MobilenetV2/expanded_conv_6/project/weights
shape (1, 1, 192, 64)
name MobilenetV2/expanded_conv_6/project/BatchNorm/gamma
shape (64,)
name MobilenetV2/expanded_conv_6/project/BatchNorm/beta
shape (64,)
name MobilenetV2/expanded_conv_6/project/BatchNorm/moving_mean
shape (64,)
name MobilenetV2/expanded_conv_6/project/BatchNorm/moving_variance
shape (64,)
name MobilenetV2/expanded_conv_7/expand/weights
shape (1, 1, 64, 384)
name MobilenetV2/expanded_conv_7/expand/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_7/expand/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_7/expand/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_7/expand/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_7/depthwise/depthwise_weights
shape (3, 3, 384, 1)
name MobilenetV2/expanded_conv_7/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_7/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_7/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_7/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_7/depthwise/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_7/depthwise/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_7/depthwise/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_7/depthwise/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_7/project/weights
shape (1, 1, 384, 64)
name MobilenetV2/expanded_conv_7/project/BatchNorm/gamma
shape (64,)
name MobilenetV2/expanded_conv_7/project/BatchNorm/beta
shape (64,)
name MobilenetV2/expanded_conv_7/project/BatchNorm/moving_mean
shape (64,)
name MobilenetV2/expanded_conv_7/project/BatchNorm/moving_variance
shape (64,)
name MobilenetV2/expanded_conv_8/expand/weights
shape (1, 1, 64, 384)
name MobilenetV2/expanded_conv_8/expand/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_8/expand/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_8/expand/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_8/expand/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_8/depthwise/depthwise_weights
shape (3, 3, 384, 1)
name MobilenetV2/expanded_conv_8/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_8/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_8/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_8/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_8/depthwise/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_8/depthwise/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_8/depthwise/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_8/depthwise/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_8/project/weights
shape (1, 1, 384, 64)
name MobilenetV2/expanded_conv_8/project/BatchNorm/gamma
shape (64,)
name MobilenetV2/expanded_conv_8/project/BatchNorm/beta
shape (64,)
name MobilenetV2/expanded_conv_8/project/BatchNorm/moving_mean
shape (64,)
name MobilenetV2/expanded_conv_8/project/BatchNorm/moving_variance
shape (64,)
name MobilenetV2/expanded_conv_9/expand/weights
shape (1, 1, 64, 384)
name MobilenetV2/expanded_conv_9/expand/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_9/expand/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_9/expand/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_9/expand/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_9/depthwise/depthwise_weights
shape (3, 3, 384, 1)
name MobilenetV2/expanded_conv_9/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_9/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_9/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_9/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_9/depthwise/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_9/depthwise/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_9/depthwise/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_9/depthwise/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_9/project/weights
shape (1, 1, 384, 64)
name MobilenetV2/expanded_conv_9/project/BatchNorm/gamma
shape (64,)
name MobilenetV2/expanded_conv_9/project/BatchNorm/beta
shape (64,)
name MobilenetV2/expanded_conv_9/project/BatchNorm/moving_mean
shape (64,)
name MobilenetV2/expanded_conv_9/project/BatchNorm/moving_variance
shape (64,)
name MobilenetV2/expanded_conv_10/expand/weights
shape (1, 1, 64, 384)
name MobilenetV2/expanded_conv_10/expand/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_10/expand/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_10/expand/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_10/expand/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_10/depthwise/depthwise_weights
shape (3, 3, 384, 1)
name MobilenetV2/expanded_conv_10/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_10/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_10/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_10/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_10/depthwise/BatchNorm/gamma
shape (384,)
name MobilenetV2/expanded_conv_10/depthwise/BatchNorm/beta
shape (384,)
name MobilenetV2/expanded_conv_10/depthwise/BatchNorm/moving_mean
shape (384,)
name MobilenetV2/expanded_conv_10/depthwise/BatchNorm/moving_variance
shape (384,)
name MobilenetV2/expanded_conv_10/project/weights
shape (1, 1, 384, 96)
name MobilenetV2/expanded_conv_10/project/BatchNorm/gamma
shape (96,)
name MobilenetV2/expanded_conv_10/project/BatchNorm/beta
shape (96,)
name MobilenetV2/expanded_conv_10/project/BatchNorm/moving_mean
shape (96,)
name MobilenetV2/expanded_conv_10/project/BatchNorm/moving_variance
shape (96,)
name MobilenetV2/expanded_conv_11/expand/weights
shape (1, 1, 96, 576)
name MobilenetV2/expanded_conv_11/expand/BatchNorm/gamma
shape (576,)
name MobilenetV2/expanded_conv_11/expand/BatchNorm/beta
shape (576,)
name MobilenetV2/expanded_conv_11/expand/BatchNorm/moving_mean
shape (576,)
name MobilenetV2/expanded_conv_11/expand/BatchNorm/moving_variance
shape (576,)
name MobilenetV2/expanded_conv_11/depthwise/depthwise_weights
shape (3, 3, 576, 1)
name MobilenetV2/expanded_conv_11/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_11/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_11/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_11/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_11/depthwise/BatchNorm/gamma
shape (576,)
name MobilenetV2/expanded_conv_11/depthwise/BatchNorm/beta
shape (576,)
name MobilenetV2/expanded_conv_11/depthwise/BatchNorm/moving_mean
shape (576,)
name MobilenetV2/expanded_conv_11/depthwise/BatchNorm/moving_variance
shape (576,)
name MobilenetV2/expanded_conv_11/project/weights
shape (1, 1, 576, 96)
name MobilenetV2/expanded_conv_11/project/BatchNorm/gamma
shape (96,)
name MobilenetV2/expanded_conv_11/project/BatchNorm/beta
shape (96,)
name MobilenetV2/expanded_conv_11/project/BatchNorm/moving_mean
shape (96,)
name MobilenetV2/expanded_conv_11/project/BatchNorm/moving_variance
shape (96,)
name MobilenetV2/expanded_conv_12/expand/weights
shape (1, 1, 96, 576)
name MobilenetV2/expanded_conv_12/expand/BatchNorm/gamma
shape (576,)
name MobilenetV2/expanded_conv_12/expand/BatchNorm/beta
shape (576,)
name MobilenetV2/expanded_conv_12/expand/BatchNorm/moving_mean
shape (576,)
name MobilenetV2/expanded_conv_12/expand/BatchNorm/moving_variance
shape (576,)
name MobilenetV2/expanded_conv_12/depthwise/depthwise_weights
shape (3, 3, 576, 1)
name MobilenetV2/expanded_conv_12/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_12/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_12/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_12/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_12/depthwise/BatchNorm/gamma
shape (576,)
name MobilenetV2/expanded_conv_12/depthwise/BatchNorm/beta
shape (576,)
name MobilenetV2/expanded_conv_12/depthwise/BatchNorm/moving_mean
shape (576,)
name MobilenetV2/expanded_conv_12/depthwise/BatchNorm/moving_variance
shape (576,)
name MobilenetV2/expanded_conv_12/project/weights
shape (1, 1, 576, 96)
name MobilenetV2/expanded_conv_12/project/BatchNorm/gamma
shape (96,)
name MobilenetV2/expanded_conv_12/project/BatchNorm/beta
shape (96,)
name MobilenetV2/expanded_conv_12/project/BatchNorm/moving_mean
shape (96,)
name MobilenetV2/expanded_conv_12/project/BatchNorm/moving_variance
shape (96,)
name MobilenetV2/expanded_conv_13/expand/weights
shape (1, 1, 96, 576)
name MobilenetV2/expanded_conv_13/expand/BatchNorm/gamma
shape (576,)
name MobilenetV2/expanded_conv_13/expand/BatchNorm/beta
shape (576,)
name MobilenetV2/expanded_conv_13/expand/BatchNorm/moving_mean
shape (576,)
name MobilenetV2/expanded_conv_13/expand/BatchNorm/moving_variance
shape (576,)
name MobilenetV2/expanded_conv_13/depthwise/depthwise_weights
shape (3, 3, 576, 1)
name MobilenetV2/expanded_conv_13/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_13/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_13/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_13/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_13/depthwise/BatchNorm/gamma
shape (576,)
name MobilenetV2/expanded_conv_13/depthwise/BatchNorm/beta
shape (576,)
name MobilenetV2/expanded_conv_13/depthwise/BatchNorm/moving_mean
shape (576,)
name MobilenetV2/expanded_conv_13/depthwise/BatchNorm/moving_variance
shape (576,)
name MobilenetV2/expanded_conv_13/project/weights
shape (1, 1, 576, 160)
name MobilenetV2/expanded_conv_13/project/BatchNorm/gamma
shape (160,)
name MobilenetV2/expanded_conv_13/project/BatchNorm/beta
shape (160,)
name MobilenetV2/expanded_conv_13/project/BatchNorm/moving_mean
shape (160,)
name MobilenetV2/expanded_conv_13/project/BatchNorm/moving_variance
shape (160,)
name MobilenetV2/expanded_conv_14/expand/weights
shape (1, 1, 160, 960)
name MobilenetV2/expanded_conv_14/expand/BatchNorm/gamma
shape (960,)
name MobilenetV2/expanded_conv_14/expand/BatchNorm/beta
shape (960,)
name MobilenetV2/expanded_conv_14/expand/BatchNorm/moving_mean
shape (960,)
name MobilenetV2/expanded_conv_14/expand/BatchNorm/moving_variance
shape (960,)
name MobilenetV2/expanded_conv_14/depthwise/depthwise_weights
shape (3, 3, 960, 1)
name MobilenetV2/expanded_conv_14/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_14/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_14/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_14/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_14/depthwise/BatchNorm/gamma
shape (960,)
name MobilenetV2/expanded_conv_14/depthwise/BatchNorm/beta
shape (960,)
name MobilenetV2/expanded_conv_14/depthwise/BatchNorm/moving_mean
shape (960,)
name MobilenetV2/expanded_conv_14/depthwise/BatchNorm/moving_variance
shape (960,)
name MobilenetV2/expanded_conv_14/project/weights
shape (1, 1, 960, 160)
name MobilenetV2/expanded_conv_14/project/BatchNorm/gamma
shape (160,)
name MobilenetV2/expanded_conv_14/project/BatchNorm/beta
shape (160,)
name MobilenetV2/expanded_conv_14/project/BatchNorm/moving_mean
shape (160,)
name MobilenetV2/expanded_conv_14/project/BatchNorm/moving_variance
shape (160,)
name MobilenetV2/expanded_conv_15/expand/weights
shape (1, 1, 160, 960)
name MobilenetV2/expanded_conv_15/expand/BatchNorm/gamma
shape (960,)
name MobilenetV2/expanded_conv_15/expand/BatchNorm/beta
shape (960,)
name MobilenetV2/expanded_conv_15/expand/BatchNorm/moving_mean
shape (960,)
name MobilenetV2/expanded_conv_15/expand/BatchNorm/moving_variance
shape (960,)
name MobilenetV2/expanded_conv_15/depthwise/depthwise_weights
shape (3, 3, 960, 1)
name MobilenetV2/expanded_conv_15/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_15/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_15/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_15/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_15/depthwise/BatchNorm/gamma
shape (960,)
name MobilenetV2/expanded_conv_15/depthwise/BatchNorm/beta
shape (960,)
name MobilenetV2/expanded_conv_15/depthwise/BatchNorm/moving_mean
shape (960,)
name MobilenetV2/expanded_conv_15/depthwise/BatchNorm/moving_variance
shape (960,)
name MobilenetV2/expanded_conv_15/project/weights
shape (1, 1, 960, 160)
name MobilenetV2/expanded_conv_15/project/BatchNorm/gamma
shape (160,)
name MobilenetV2/expanded_conv_15/project/BatchNorm/beta
shape (160,)
name MobilenetV2/expanded_conv_15/project/BatchNorm/moving_mean
shape (160,)
name MobilenetV2/expanded_conv_15/project/BatchNorm/moving_variance
shape (160,)
name MobilenetV2/expanded_conv_16/expand/weights
shape (1, 1, 160, 960)
name MobilenetV2/expanded_conv_16/expand/BatchNorm/gamma
shape (960,)
name MobilenetV2/expanded_conv_16/expand/BatchNorm/beta
shape (960,)
name MobilenetV2/expanded_conv_16/expand/BatchNorm/moving_mean
shape (960,)
name MobilenetV2/expanded_conv_16/expand/BatchNorm/moving_variance
shape (960,)
name MobilenetV2/expanded_conv_16/depthwise/depthwise_weights
shape (3, 3, 960, 1)
name MobilenetV2/expanded_conv_16/depthwise/depthwise/SpaceToBatchND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_16/depthwise/depthwise/SpaceToBatchND/paddings
shape (2, 2)
name MobilenetV2/expanded_conv_16/depthwise/depthwise/BatchToSpaceND/block_shape
shape (2,)
name MobilenetV2/expanded_conv_16/depthwise/depthwise/BatchToSpaceND/crops
shape (2, 2)
name MobilenetV2/expanded_conv_16/depthwise/BatchNorm/gamma
shape (960,)
name MobilenetV2/expanded_conv_16/depthwise/BatchNorm/beta
shape (960,)
name MobilenetV2/expanded_conv_16/depthwise/BatchNorm/moving_mean
shape (960,)
name MobilenetV2/expanded_conv_16/depthwise/BatchNorm/moving_variance
shape (960,)
name MobilenetV2/expanded_conv_16/project/weights
shape (1, 1, 960, 320)
name MobilenetV2/expanded_conv_16/project/BatchNorm/gamma
shape (320,)
name MobilenetV2/expanded_conv_16/project/BatchNorm/beta
shape (320,)
name MobilenetV2/expanded_conv_16/project/BatchNorm/moving_mean
shape (320,)
name MobilenetV2/expanded_conv_16/project/BatchNorm/moving_variance
shape (320,)
name image_pooling/weights
shape (1, 1, 320, 256)
name image_pooling/BatchNorm/gamma
shape (256,)
name image_pooling/BatchNorm/beta
|
memefly-ml/notebooks/train_3.2.2.ipynb | ###Markdown
Memefly Image Captioning, Word Level. Inspired by [Show, Attend and Tell: Neural Image Caption Generation with Visual Attention](https://arxiv.org/abs/1502.03044), [Dank Learning: Generating Memes Using Deep Neural Networks](https://arxiv.org/abs/1806.04510), and [CS231n Assignment 3](http://cs231n.github.io/assignments2019/assignment3/). The code references [Image captioning with visual attention](https://www.tensorflow.org/tutorials/text/image_captioning) and [Text generation with an RNN](https://www.tensorflow.org/tutorials/text/text_generation).
###Code
import os
import sys
sys.path.append(os.path.abspath('../datasets'))
sys.path.append(os.path.abspath('../weights'))
import pathlib
import time
import pickle
import json
from tqdm import tqdm
import ipykernel
import requests
import shutil
from typing import List, Dict, Tuple, Sequence
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, LSTM, Embedding, Dropout, GRU, Add, add, Attention, RepeatVector, AdditiveAttention, Concatenate  # Concatenate is needed by the model below
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split
import wandb
from wandb.keras import WandbCallback
os.environ['WANDB_NOTEBOOK_NAME'] = '10_meme_word_gen_model_3.2.2'
np.random.seed(45)
###Output
_____no_output_____
###Markdown
1. CONFIG
###Code
class Config:
DATA_VERSION = 'v2'
MODEL_TYPE = 'word'
TIMESTAMP = time.strftime('%Y%m%d%H%M')
TOKENIZER = f'../weights/memefly-{MODEL_TYPE}-data-{DATA_VERSION}-tokenizer.pkl'
IMAGE_MODEL_FILENAME = "../weights/inceptionv3_embeddings.h5"
INPUT_JSON_FILE = '../datasets/combined_data.json'
DESCRIPTION_FILE = f'../datasets/memefly-{DATA_VERSION}-descriptions.txt'
IMG_FEATURES_PKL = f'../datasets/memefly-{DATA_VERSION}-features.pkl'
###Output
_____no_output_____
###Markdown
2. Data Loading and Preprocessing
###Code
class MemeflyDataset:
def __init__(self, *, input_json_file: str, img_model: tf.keras.Model, description_file: str, img_features_pkl: str):
self.json_data = self.__load_json(input_json_file)
self.description_file = description_file
self.img_features_pkl = img_features_pkl
self.img_model = tf.keras.models.load_model(img_model, compile=False)
self.tokenizer = None
self.max_length = None
self.vocab_size = None
self.text_data = None
self.img_data = None
def __load_json(self, path: str):
""" Loads json file """
try:
with open(path) as robj:
data = json.load(robj)
return data
except Exception as e:
raise e
def preprocess_text(self):
"""
Preprocess input_json and save to instance attributes
Generates:
========
description_file: meme_name meme_text file, for sanity checking and debugging
tokenizer: tf.keras tokenizer
vocab_size: size of the tokenizer, int
max_length: maximum length of meme text, int
text_data: list of [meme_name, meme_text] pairs
"""
print("Preprocessing text ...")
corpus = []
meme_data = []
with open(self.description_file, 'w') as outfile:
for row in iter(self.json_data):
meme_name = row["meme_name"]
for meme_text in row["meme_text"]:
meme_text = f"startseq {meme_text} endseq\n"
#text = f"{meme_name} startseq {meme_text} endseq\n"
corpus.append(meme_text.rstrip())#.split(' '))
meme_data.append([meme_name, meme_text.rstrip()])
outfile.write(f"{meme_name} {meme_text}")#text)
tokenizer = Tokenizer(lower=True)
tokenizer.fit_on_texts(corpus)
pickle.dump(tokenizer, open(Config.TOKENIZER, 'wb'))
self.tokenizer = tokenizer
self.vocab_size = len(tokenizer.word_index) + 1
self.text_data = meme_data
self.max_length = len(max([item[1] for item in meme_data], key=len))
pass
def preprocess_img(self):
"""
Preprocess input_json and save to instance attributes
Generates:
========
images files: downloaded image file given the urls
img_features_pkl: pickled dictionary of {meme_name: img_vec file}
img_data: dictionary of {meme_name: img_vec file}
"""
print("Preprocessing images ...")
img_urls = [item['meme_url'] for item in self.json_data]
meme_names = [item['meme_name'] for item in self.json_data]
self.__download_images(img_urls, meme_names)
self.img_data = self.__extract_features(meme_names)
pass
def __download_images(self, url_list: List, meme_name: List):
""" Download meme images from 'meme_url', skip if already exists """
print("Downloading images ...")
count = 0
for i in tqdm(range(len(url_list))):
filename = f"../datasets/images/{meme_name[i]}.jpg"
if not pathlib.Path(filename).exists():
r = requests.get(url_list[i],
stream=True,
#headers={'User-agent': 'Mozilla/5.0'}
)
if r.status_code == 200:
count += 1
with open(filename, 'wb') as f:
r.raw.decode_content = True
shutil.copyfileobj(r.raw, f)
if count == len(url_list):
print("all images in url_list downloaded")
pass
def __extract_features(self, meme_name: list) -> dict:
"""
Takes a preloaded Tensorflow Keras InceptionV3 Model with embeddings and a list of images
and return a dict with keys: image_name w/o the .jpg and the values: image embeddings extracted
using InceptionV3 with global average pooling layer and pretrained imagenet weights.
"""
print("Creating image embedding vectors ...")
features = dict()
for img_file in tqdm(meme_name):
filename = f"../datasets/images/{img_file}.jpg"
img = load_img(filename, target_size=(299, 299))
img = img_to_array(img)
img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
img = preprocess_input(img)
feature = self.img_model.predict(img, verbose=0)
features[img_file] = feature
pickle.dump(features, open(self.img_features_pkl, 'wb'))
return features
dataset = MemeflyDataset(input_json_file=Config.INPUT_JSON_FILE,
img_model=Config.IMAGE_MODEL_FILENAME,
description_file=Config.DESCRIPTION_FILE,
img_features_pkl=Config.IMG_FEATURES_PKL)
dataset.preprocess_text()
dataset.preprocess_img()
meme_dataset = dataset.text_data
MEME_IMG_VEC = dataset.img_data
VOCAB_SIZE = dataset.vocab_size
MAX_LENGTH = dataset.max_length
TOKENIZER = dataset.tokenizer
print(f"Full data: {len(meme_dataset)}\nmemes images: {len(MEME_IMG_VEC)}\nVocab size: {VOCAB_SIZE}\nMax meme length: {MAX_LENGTH}\n")
train_dataset, val_dataset = train_test_split(meme_dataset, test_size=0.05)
print(len(train_dataset), len(val_dataset))
###Output
Preprocessing text ...
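###Markdown
As a quick sanity check of the fitted tokenizer, the short cell below (an illustrative sketch; it assumes `dataset.preprocess_text()` above ran successfully so that `TOKENIZER` is fitted) round-trips a sample caption:
###Code
# words that are not in the vocabulary are silently dropped, since no OOV token was configured
sample = "startseq one does not simply endseq"
sample_seq = TOKENIZER.texts_to_sequences([sample])[0]
print(sample_seq)
print(TOKENIZER.sequences_to_texts([sample_seq]))
###Output
_____no_output_____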
###Markdown
3. Data Generator
###Code
class MemeDataGenerator(tf.keras.utils.Sequence):
"""
An iterable that returns batches of ([image embeddings, unrolled input text sequences], next-word targets).
Instead of batching over images, we choose to batch over [image, description] pairs because, unlike typical
image captioning tasks that have 3-5 texts per image, we have 180-200 texts per image. Batching over images
in our case significantly increased memory cost, and we could only fit 1-2 images per batch on an AWS
p2.xlarge or p3.2xlarge instance.
This class inherits from tf.keras.utils.Sequence to avoid data redundancy and syncing errors.
https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence
https://keras.io/utils/#sequence
dataset: [meme name, meme text] pairs
shuffle: If True, shuffles the samples before every epoch
batch_size: How many images to return in each call
INPUT:
========
- dataset: list of meme_name and meme_text pairs. [[meme_name, meme_text], [...], ...]
- img_embds: a pickled dictionary of {meme_name: image embeddings}
- tokenizer: tf.keras.preprocessing.text.Tokenizer
- batch_size: batch size
- max_length: maximum length of the padded input sequences
- vocab_size: size of the vocabulary
- shuffle: if True, shuffles the dataset between every epoch
OUTPUT:
=======
- each batch is a tuple ([image embeddings, padded input sequences], one-hot encoded next words),
as returned by __getitem__.
"""
def __init__(self, *, dataset, img_embds, tokenizer, batch_size: int, max_length: int, vocab_size: int, shuffle=True):
self.dataset = dataset
self.img_embds = img_embds
self.tokenizer = tokenizer
self.batch_size = batch_size
self.max_length = max_length
self.vocab_size = vocab_size
self.shuffle = shuffle
self.on_epoch_end()
def __len__(self):
""" Number of batches in the Sequence """
return int(np.floor(len(self.dataset) / self.batch_size))
def __getitem__(self, idx):
"""
Generate one batch of data. One element in a batch is a pair of meme_name, meme_text.
Dataset is indexed using 'indexes' and 'indexes' will be shuffled every epoch if shuffle is True.
"""
indexes = self.indexes[idx*self.batch_size:(idx+1)*self.batch_size]
current_data = [self.dataset[i] for i in indexes]
in_img, in_seq, out_word = self.__generate_data(current_data)
return [in_img, in_seq], out_word
def on_epoch_end(self):
""" Method called at between every epoch """
self.indexes = np.arange(len(self.dataset))
if self.shuffle == True:
np.random.shuffle(self.indexes)
pass
def __generate_data(self, data_batch):
"""
Loop through the batch of data list and generate unrolled sequences of each list of data
"""
X1, X2, y = list(), list(), list()
for data in data_batch:
img_embd = self.img_embds[data[0]][0]
X1_tmp, X2_tmp, y_tmp = self.__create_sequence(img_embd, data[1])
# append would create a list of lists; extend doesn't.
X1.extend(X1_tmp)
X2.extend(X2_tmp)
y.extend(y_tmp)
return np.array(X1), np.array(X2), np.array(y)
def __create_sequence(self, image, meme_text):
"""
Create one sequence of images, input sequences and output text for a single meme_text, e.g.,
img_vec      input                          output
=========    ===========================    ========
IMAGE_VEC    startseq                       hi
IMAGE_VEC    startseq hi                    this
IMAGE_VEC    startseq hi this               is
IMAGE_VEC    startseq hi this is            not
IMAGE_VEC    startseq hi this is not        fun
IMAGE_VEC    startseq hi this is not fun    endseq
Tokenized sequences are padded from the front (the Keras default). The output word is
one-hot encoded with keras' to_categorical and, to save memory, cast to float16.
# https://stackoverflow.com/questions/42943291/what-does-keras-io-preprocessing-sequence-pad-sequences-do
INPUT:
========
image: image vectors
meme_text: text to be unrolled into sequences of up to max_length tokens
tokenizer: tokenizer used to convert words to numbers
OUTPUT:
========
X1: image vector, list
X2: tokenized sequences, padded to max length, list
y: next texts, target, list
"""
X1, X2, y = list(), list(), list()
seq = self.tokenizer.texts_to_sequences([meme_text])[0]
for i in range(1, len(seq)):
in_seq, out_seq = seq[:i], seq[i]
in_seq = pad_sequences([in_seq], maxlen=self.max_length)[0]
out_seq = to_categorical([out_seq], num_classes=self.vocab_size, dtype='float16')[0]
X1.append(image)
X2.append(in_seq)
y.append(out_seq)
return X1, X2, y
###Output
_____no_output_____
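###Markdown
To make the unrolling above concrete, here is a minimal, self-contained illustration of the two Keras utilities the generator relies on (a sketch with made-up token ids, independent of the meme data):
###Code
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
demo_seq = [4, 7, 2]
# pad_sequences pads from the front by default, so the most recent tokens stay at the end
print(pad_sequences([demo_seq], maxlen=6))            # [[0 0 0 4 7 2]]
# to_categorical one-hot encodes the target word; float16 halves the memory of float32
print(to_categorical([2], num_classes=8, dtype='float16'))
###Output
_____no_output_____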
###Markdown
4. Model
###Code
def image_captioning_model(*, vocab_size: int, maxlen: int, embedding_dim: int, rnn_units: int, batch_size: int) -> tf.keras.Model:
"""
Injects the image embedding using the par-inject method (3) described in
[Where to put the Image in an Image Caption Generator](https://arxiv.org/abs/1703.09137).
Par-inject was also used in [Show, Attend and Tell: Neural Image Caption Generation with Visual Attention](https://arxiv.org/abs/1502.03044).
"""
img_emb_input = Input(shape=(2048,), name="image_input")
x1 = Dropout(0.5)(img_emb_input)
x1 = Dense(embedding_dim, activation='relu', name='image_dense')(x1)
x1 = RepeatVector(maxlen)(x1)
tokenized_text_input = Input(shape=(maxlen,), name='text_input')
x2 = Embedding(vocab_size, embedding_dim, mask_zero=True, batch_input_shape=[batch_size, None], name='text_embedding')(tokenized_text_input)
decoder = Concatenate(name='image_text_concat')([x1, x2]) #add([x1, x2])
decoder = GRU(rnn_units, name='GRU')(decoder)
decoder = Dense(256, activation='relu', name='last_dense')(decoder)
outputs = Dense(vocab_size, activation='softmax', name='output')(decoder)
# tie it together [image, seq] [word]
model = Model(inputs=[img_emb_input, tokenized_text_input], outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# summarize model
print(model.summary())
plot_model(model, to_file='model.png', show_shapes=True)
return model
###Output
_____no_output_____
###Markdown
Training
###Code
EPOCHS = 100
BATCH_SIZE = 128 # p3.2xlarge
wandb.init(config={"hyper": "parameter"}, project="")
train_datagen = MemeDataGenerator(dataset=train_dataset,
img_embds=MEME_IMG_VEC,
tokenizer=TOKENIZER,
batch_size=BATCH_SIZE,
max_length=MAX_LENGTH,
vocab_size=VOCAB_SIZE)
val_datagen = MemeDataGenerator(dataset=val_dataset,
img_embds=MEME_IMG_VEC,
tokenizer=TOKENIZER,
batch_size=BATCH_SIZE,
max_length=MAX_LENGTH,
vocab_size=VOCAB_SIZE)
model = image_captioning_model(vocab_size=VOCAB_SIZE,
maxlen=MAX_LENGTH,
embedding_dim=256,
rnn_units=256,
batch_size=BATCH_SIZE)
model.summary()
filepath = f"../weights/ckpt/memefly-{Config.MODEL_TYPE}-{MAX_LENGTH}-{Config.TIMESTAMP}"+"-{epoch:02d}-{val_loss:.2f}.h5"
checkpoint = ModelCheckpoint(filepath,
verbose=1,
save_weights_only=False,
save_best_only=False)
model.fit_generator(train_datagen,
epochs=EPOCHS,
verbose=1,
validation_data=val_datagen,
callbacks=[checkpoint])
#callbacks=[WandbCallback(), checkpoint])
###Output
Epoch 1/2
553/554 [============================>.] - ETA: 1s - loss: 6.9877
Epoch 00001: saving model to ../weights/ckpt/memefly-word-150-201912062354-01-6.56.h5
554/554 [==============================] - 693s 1s/step - loss: 6.9874 - val_loss: 6.5648
Epoch 2/2
553/554 [============================>.] - ETA: 1s - loss: 6.1801
Epoch 00002: saving model to ../weights/ckpt/memefly-word-150-201912062354-02-5.94.h5
554/554 [==============================] - 697s 1s/step - loss: 6.1799 - val_loss: 5.9438
|
ClaesPauline_scripts/ClaesPauline_metadata_script.ipynb | ###Markdown
Claes Pauline. Master Digital Text Analysis. Student ID: 20163274. Metadata: this script contains all code used for adding metadata to data frames.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Add word counts to French data. While the English text data already have word-count information from the EEBO and EMMA metadata, this is not available for the French data, so it has to be counted here. For French texts coming from Frantext as well as Google Books, all text data were parsed into a data frame of four columns (Word, Lemma, POS, filename), containing one row per word. It therefore makes sense to group that data frame per filename and count the number of rows each file contains (one word = one row), after filtering out punctuation.
###Code
def read(path):
return pd.read_csv(path)
wlp_frantext_early = read("/Users/paulineclaes/Documents/dta/Thesis/Data/Dataframes/WLP/frantext_WLP_early.csv")
wlp_epub_early = read("/Users/paulineclaes/Documents/dta/Thesis/Data/Dataframes/WLP/epub_WLP_early.csv")
wlp_frantext_later = read("/Users/paulineclaes/Documents/dta/Thesis/Data/Dataframes/WLP/frantext_WLP_later.csv")
wlp_epub_later = read("/Users/paulineclaes/Documents/dta/Thesis/Data/Dataframes/WLP/epub_WLP_later.csv")
def get_separate_df_per_filename(df):
"""Function to get a separate data frame per file name.
Takes as input the WLP data frame, prints the number of words excluding punctuation and spaces."""
for filename in df["file_name"].unique(): # for each unique filename in data frame
new_df = df[df["file_name"] == filename] # construct a new df of only that file name
new_df = new_df.drop(new_df.index[new_df['POS'].isin(["PUNCT", "PONCT", "SPACE"])], axis=0) # drop punctuation and spaces using their POS-tags
print(f"filename: {filename} \tTokens: {len(new_df)}\n") # print the filename and the number of rows in that data frame ( so the number of words )
def get_dict_per_filename(df):
"""Function to get a separate data frame per file name.
Takes as input the WLP data frame, adds the file name and its number of words excluding punctuation and spaces to a dictionary.
Key = filename, value = number of words.
"""
author_dict = {}
for filename in df["file_name"].unique():
new_df = df[df["file_name"] == filename]
new_df = new_df.drop(new_df.index[new_df['POS'].isin(["PUNCT", "PONCT", "SPACE"])], axis=0) # drop punctuation and spaces using their POS-tags
author_dict[filename] = len(new_df)
return author_dict
def merge_dicts(dict1, dict2):
dict1.update(dict2)
return dict1
## EXAMPLE
# building the dictionary to contain the wordcounts
author_dict_frantext = get_dict_per_filename(wlp_frantext_early) # wordcounts for frantext WLP
author_dict_epub = get_dict_per_filename(wlp_epub_early) # wordcounts for EPUB WLP
author_dict = merge_dicts(author_dict_frantext, author_dict_epub) # add them to one dictionary
### EXAMPLE
# read in the data frame that we want to map the word counts to (per file name)
df = pd.read_csv("/Users/paulineclaes/Documents/dta/Thesis/Data/Dataframes/concordance/all_early_concordance.csv")
# insert a column containing the wordcounts based on the file name column
df.insert(5, "all_tokens", df["filename"].map(author_dict))
###Output
_____no_output_____
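###Markdown
The same word counts can also be obtained without an explicit loop. The cell below is an equivalent vectorised sketch (it assumes the same `POS` and `file_name` columns used above):
###Code
def get_wordcounts(df):
    """Return a Series mapping each file_name to its word count, excluding punctuation and spaces."""
    mask = ~df["POS"].isin(["PUNCT", "PONCT", "SPACE"])
    return df.loc[mask].groupby("file_name").size()
# example: builds the same mapping as get_dict_per_filename()
author_dict_alt = get_wordcounts(wlp_frantext_early).to_dict()
###Output
_____no_output_____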
###Markdown
Merge metadata with concordance dataframe 1. Assigning a unique ID to each author
###Code
df = pd.read_excel("/Users/paulineclaes/Documents/dta/thesis/ClaesPauline_thesis_finaleversie/data/final_metadata.xlsx")
# assign unique number to authors, starting from 1
df.insert(3, "author_id", df.groupby(["author"], sort=False).ngroup()+1)
author_id_list = [] # instantiate empty list
for author_id, author_df in df.groupby(["author_id"]): # groupby author and iterate
author_df = author_df.reset_index() # reset the index to the author
author_df.insert(5, "text_id_per_author", author_df.index+1) # number of texts per author (count restarts at 1 for each new author)
author_id_list.append(author_df) # add to list
new_df = pd.concat(author_id_list).reset_index(drop=True) # get it into one dataframe
# merge authorIDs with textIDs, so that each text effectively has a unique ID
new_df.insert(6,
"authorId_textId",
[f"{row['author_id']}_{row['text_id_per_author']}" for index, row in new_df.iloc[:, 4:6].iterrows()])
# insert a new column that indicates whether a text is a translation, reference text or source text
new_df.insert(7, "transl_ref_srcTxt",
["transl" if "T" in value else "srcTxt" if "FS" in value else "ref" for value in new_df["data_identifier"]])
###Output
_____no_output_____
###Markdown
2. Mapping metadata to concordance dataframe (inserting unique identifiers per text, and other information)
###Code
#function to map metadata dictionary with key=filename to the dataframe
def map_filename_dict_to_df(source_df, target_df, target_index, target_colname, col1, col2):
"""
Arguments: source_df, target_df, target_index, target_colname, col1, col2
- source_df: metadata df
- target_df : df you want to insert metadata
- target_index: index you want new column to be
- target_colname: column name you want new column to have
- col1: column name of column you want the metadata to be based on (so a column that is shared across dataframes)
- col2: the column containing information you want to transfer across dataframes.
Actual function:
def map_filename_dict_to_df(source_df, target_df, target_index, target_colname, col1, col2):
filename_dict = {filename:value for filename, value in zip(source_df[col1], source_df[col2])}
target_df.insert(target_index, target_colname, target_df[col1].map(filename_dict))
return target_df
"""
filename_dict = {filename:value for filename, value in zip(source_df[col1], source_df[col2])}
target_df.insert(target_index, target_colname, target_df[col1].map(filename_dict))
return target_df
# EXAMPLE of doing it for 1 column
# add unique author id based on the unique data identifier (`m` is the metadata data frame and `f` the concordance data frame, both assumed to be loaded earlier)
f = map_filename_dict_to_df(source_df = m,
target_df = f,
target_index=3,
target_colname="author_id",
col1 = "data_identifier",
col2 = "author_id"
)
# EXAMPLE of doing it in bulk at once
col_list = ["period",
"data_identifier",
"author_id",
"text_id_per_author",
"authorId_textId",
"title",
"USTC_subject_classification",
"author",
"textDate", "wordcount"]
print(len(col_list))
index_list = [i for i in range(0, len(col_list))]
print(index_list, len(index_list))
for col_name, col_ix in zip(col_list, index_list):
addData = map_filename_dict_to_df(
source_df = m,
target_df = addData,
target_index = col_ix,
target_colname = col_name,
col1 = "filename",
col2 = col_name
)
###Output
10
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] 10
###Markdown
3. Add numeric period category. For the English data, we have a classification into decades: 1580-1589, 1590-1599, ... We now want to turn this into a numeric variable. There are 5 decades in total (1580-1589, 1590-1599, 1600-1609, 1680-1689, 1690-1699). These will be assigned a number chronologically.
###Code
en = pd.read_excel('/Users/paulineclaes/Documents/dta/thesis/finaldata/final_GoToInf.xlsx')
fr = pd.read_excel('/Users/paulineclaes/Documents/dta/thesis/finaldata/final_AllerINF.xlsx')
en.head()
en.insert(1, "period_category", en.groupby(["period"], sort=True).ngroup()+1)
###Output
_____no_output_____
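###Markdown
> To see what `ngroup()` does here, a tiny illustrative example with made-up decade labels: each distinct `period` value is mapped, in sorted order, to a number starting from 1.
###Code
demo = pd.DataFrame({"period": ["1590-1599", "1580-1589", "1690-1699", "1580-1589"]})
demo["period_category"] = demo.groupby(["period"], sort=True).ngroup() + 1
demo
###Output
_____no_output_____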
###Markdown
> However, for the French data this is less straightforward: these texts were not first classified into decades and they have a wider range of text dates, since they precede the corresponding English translations. Therefore, the range of possible text dates for French is divided into 5 equal chunks (an equivalent `pd.cut` version is sketched after the next cell).
###Code
fr.head()
import numpy as np
conditions = [
(fr['fr_source_textDate'] >= 1502) & (fr['fr_source_textDate'] <= 1542), # 1
(fr['fr_source_textDate'] >= 1543) & (fr['fr_source_textDate'] <= 1583), # 2
(fr['fr_source_textDate'] >= 1584) & (fr['fr_source_textDate'] <= 1624), # 3
(fr['fr_source_textDate'] >= 1625) & (fr['fr_source_textDate'] <= 1665), # 4
(fr['fr_source_textDate'] >= 1666) & (fr['fr_source_textDate'] <= 1699) # 5
]
values = ['1', '2', '3', '4', '5']
fr.insert(1, 'period_category', np.select(conditions, values))
fr.head()
###Output
_____no_output_____
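###Markdown
> An equivalent, slightly more compact way to bin the French text dates is `pd.cut`. The sketch below assumes the same five ranges as the conditions above and is shown only for comparison; it does not modify `fr`.
###Code
period_bins = [1501, 1542, 1583, 1624, 1665, 1699]
fr_period_alt = pd.cut(fr['fr_source_textDate'], bins=period_bins, labels=['1', '2', '3', '4', '5'])
fr_period_alt.value_counts().sort_index()
###Output
_____no_output_____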
###Markdown
> Inserting the same period category in the metadata dataframe.
###Code
meta = pd.read_excel('/Users/paulineclaes/Documents/dta/thesis/finaldata/final_metadata.xlsx')
# english subset (which already has a classification per decade)
meta_en_subset = meta[meta['transl_ref_srcTxt'] != 'srcTxt']
# french subset (which does not yet have a classification per decade)
meta_fr_subset = meta[meta['transl_ref_srcTxt'] == 'srcTxt']
# insert period category in english data
meta_en_subset.insert(3, "period_category", meta_en_subset.groupby(["period"], sort=True).ngroup()+1)
# insert period category in French data
meta_fr_subset.insert(3, 'period_category', np.select(conditions, values))
# concatenating two data frames row-wise
meta_new = pd.concat([meta_en_subset, meta_fr_subset])
# write to excel file
#meta_new.to_excel('/Users/paulineclaes/Documents/dta/thesis/finaldata/final_metadata.xlsx',
# index=False,
# na_rep='NA')
###Output
_____no_output_____ |
_notebooks/2021-04-09-mineracao3.ipynb | ###Markdown
Data Mining Project - UFRN Open Data (Projeto Mineração - Dados Abertos UFRN)
###Code
#hide
import pandas as pd
import numpy as np
projetos = pd.read_csv("http://dados.ufrn.br/dataset/e48162fa-0668-4098-869a-8aacfd177f9f/resource/3f12a9a4-7084-43e7-a4ac-091a8ae14020/download/projetos-de-pesquisa.csv", sep=';')
bolsistas = pd.read_csv("http://dados.ufrn.br/dataset/81608a4d-c76b-4758-a8d8-54be32209833/resource/d21c94fe-22ba-4cf3-89db-54d8e739c567/download/bolsas-iniciacao-cientifica.csv", sep=';')
#hide
bolsistas_categoria = bolsistas.groupby('categoria', as_index=False).agg({"discente": "count"})
#print(bolsistas_categoria)
bolsistas_tipo = bolsistas.groupby('tipo_de_bolsa', as_index=False).agg({"discente" : "count"})
#print(bolsistas_tipo)
bolsistas_status = bolsistas.groupby('status', as_index=False).agg({"codigo_projeto": "count"})
#print(bolsistas_status)
bolsistas_ano = bolsistas.groupby('ano', as_index=False).agg({"discente": "count"})
#print(bolsistas_ano)
grupo_projetos = bolsistas.groupby("id_projeto_pesquisa")
#print(grupo_projetos)
projetos_situacao = projetos.groupby('situacao', as_index=False).agg({"codigo_projeto":"count"})
#print(projetos_situacao)
projetos_categoria = projetos.groupby('categoria_projeto', as_index=False).agg({"codigo_projeto":"count"})
#print(projetos_categoria)
projetos_unidades = projetos.groupby('unidade', as_index=False).agg({"codigo_projeto":"count"})
#print(projetos_unidades)
#hide
projetos.drop(columns=['data_inicio', 'data_fim', 'id_coordenador', 'coordenador', 'edital', 'objetivos_desenvolvimento_sustentavel', 'id_grupo_pesquisa', 'grupo_pesquisa'])
bolsistas.drop(columns=['id_grupo_pesquisa', 'grupo_pesquisa', 'fim' ])
#13
#17
#hide
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import sklearn
from sklearn.cluster import KMeans
#from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import scale
import sklearn.metrics as sm
from sklearn.metrics import confusion_matrix, classification_report
#hide
import cufflinks as cf
import plotly.express as px
import plotly.offline as py
import plotly.graph_objs as go
from plotly.subplots import make_subplots
#hide
anos_projetos = projetos['ano'].unique()
qtd_projetos = projetos['ano'].value_counts().to_list()
#hide_input
print("Tabela de Projetos:", projetos.shape)
print("Tabela de Bolsistas:", bolsistas.shape)
#hide
areas = projetos['area_conhecimento_cnpq'].unique()
np.size(areas)
#hide
from google.colab import drive
drive.mount('/content/drive')
#hide
#num
#numeros
#hide
file = open('/content/drive/MyDrive/projetos/areas_cnpq.csv', 'r')
lines = file.readlines()
print(lines)
#hide
projetos_unico = projetos[['id_projeto_pesquisa','titulo','palavras_chave','ano','situacao','unidade','linha_pesquisa','area_conhecimento_cnpq']]
#hide
np.size(projetos_unico['id_projeto_pesquisa'].unique())
#hide
nam_rows = projetos_unico[projetos_unico.isnull().T.any()]
#hide
np.size(nam_rows)
#hide
projetos_unico_limpo = projetos_unico.dropna(thresh=1)
areas_notnull = projetos_unico[projetos_unico.area_conhecimento_cnpq.notnull()]
#hide
np.size(areas_notnull['id_projeto_pesquisa'].unique())
df_areas = pd.read_csv('/content/drive/MyDrive/projetos/areas_corrigido.tsv', names=['codigo','area_conhecimento_cnpq'],delimiter='\t')
#hide
df_areas
#hide
df_areas_corrigido = pd.read_csv('/content/drive/MyDrive/projetos/areas_corrigido.tsv', names=['codigo','area_conhecimento_cnpq'],delimiter='\t')
#hide
df_areas_corrigido['area_conhecimento_cnpq'] = df_areas_corrigido['area_conhecimento_cnpq'].apply(lambda x: x.strip())
#hide
areas_notnull['area_conhecimento_cnpq'] = areas_notnull['area_conhecimento_cnpq'].replace([''])
df_cd = areas_notnull.replace(df_areas.set_index('area_conhecimento_cnpq')['codigo'])
df_cd.head()
#hide
df_corrigido = df_cd.replace(df_areas_corrigido.set_index('area_conhecimento_cnpq')['codigo'])
#hide
df_cd.to_csv('/content/drive/MyDrive/projetos/df_area_m.csv')
#hide
df_area_m = pd.read_csv('/content/drive/MyDrive/projetos/df_area_mixed.csv')
#hide
df_area_m[~df_area_m['area_conhecimento_cnpq'].str.isnumeric()]
#hide
df_corrigido.head(3)
#hide
to_drop_areas = ['Política Energética Regional e Nacional','Mecânica dos Fluídos','Trajetórias e Órbitas','Física dos Fluídos, Física de Plasmas e Descargas Elétricas''Biomedicina','Química Industrial','Cinética e Teoria de Transporte de Fluídos; Propriedades Físicas de Gases','Instalações Elétricas e Industriais','Desenvolvimento e Inovação Tecnológica em Biologia','Estruturas Eletrônicas e Propriedades Elétricas de Superfícies; Interf. e Partículas','Finanças Públicas Internas','Organização Industrial e Estudos Industriais','Linguística','Balanços Globais de Matéria e Energia','Engenharia Textil','Ciências Sociais','Engenharia Mecatrônica','Educação Pré-Escolar','Filosofia da Linguagem','Fontes Alternativas de Energia','Administração Hospitalar','Multidisciplinar','Tratamentos Térmicos, Mecânicos e Químicos','Tecnologia e Inovação','Química, Física, Fisico-Química e Bioquímica dos Alim. e das Mat-Primas Alimentares','Mudança Tecnológica','Energia Eólica','Teoria Eletromagnetica, Microondas, Propagação de Ondas, Antenas','Planejamento em Ciência e Tecnologia', 'Sociolinguística e Dialetologia','Mutagenese','Síntese Orgânica','Fisiologia Endócrina','Anatomia Animal','Robótica, Mecatrônica e Automação','Físico Química Inorgânica', 'Energia de Biomassa', 'Prop. Óticas e Espectrosc. da Mat. Condens; Outras Inter. da Mat. com Rad. e Part.', 'Matemática Discreta e Combinatória']
np.shape(df_corrigido)
#hide
df_corrigido.drop(df_corrigido.loc[df_corrigido['area_conhecimento_cnpq']=="Física dos Fluídos, Física de Plasmas e Descargas Elétricas"].index, inplace=True)
df_corrigido['area_conhecimento_cnpq'] = pd.to_numeric(df_corrigido['area_conhecimento_cnpq'])
###df_corrigido
#hide
df_corrigido.to_csv('/content/drive/MyDrive/projetos/df_corrigido.csv')
#hide
df_final = pd.read_csv('/content/drive/MyDrive/projetos/df_corrigido.csv')
#hide
df_final
df_final.isna().sum()
df_nan = df_final.dropna(subset=['linha_pesquisa'])
#hide
df_nan['linha_pesquisa'].isna().sum()
#hide
df_nan.head()
#hide
df_nan.info()
#hide
from wordcloud import WordCloud
import matplotlib.pyplot as plt
#hide
import nltk
nltk.download('stopwords')
#hide
from nltk.corpus import stopwords
stopwords_pt = stopwords.words("portuguese")
#hide
df_nan['palavras_chave']
#hide
palavras_chave = df_nan['palavras_chave'].str
#hide
palavras_chave = pd.read_csv('/content/drive/MyDrive/projetos/df_corrigido.csv', usecols=['palavras_chave']) #aqui é o df final
###Output
_____no_output_____
###Markdown
Word cloud of the project keywords
###Code
#hide_input
# join the keyword column into one string (rather than the DataFrame repr) before generating the cloud
wordcloud = WordCloud(width=550, height=550, background_color="white", stopwords=stopwords_pt).generate(" ".join(palavras_chave["palavras_chave"].dropna().astype(str)))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
pd.get_dummies(df_nan['linha_pesquisa']).shape
pd.get_dummies(df_nan['palavras_chave']).shape
#hide
pip install feature-engine
#hide
from sklearn.model_selection import train_test_split
from feature_engine.encoding import CountFrequencyEncoder
#hide
df_nan.head()
#hide
df3 = df_nan[['palavras_chave','linha_pesquisa','area_conhecimento_cnpq']]
X_train, X_test, y_train, y_test = train_test_split(df3.drop(['area_conhecimento_cnpq'], axis=1),
df3['area_conhecimento_cnpq'], test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Test Encoding
###Code
encoder = CountFrequencyEncoder(encoding_method='frequency',
variables=['palavras_chave', 'linha_pesquisa'])
X_train.head()
encoder.fit(X_train)
train_t = encoder.transform(X_train)
test_t = encoder.transform(X_test)
#hide
encoder.encoder_dict_
#hide
df_nan['palavras_chave'].value_counts()
#hide
count_linhas = df_nan['linha_pesquisa'].value_counts()
#hide
count_linhas_unidade = df_nan['unidade'].value_counts()
#hide
count_linhas.values #.index
#hide
count_linhas_unidade.values
#hide
idx = count_linhas[count_linhas.gt(10)].index
#hide
np.shape(idx)
#hide
values = count_linhas.where(count_linhas >10).values
#hide
values_nan = [x for x in values if str(x) != 'nan']
#hide
np.shape(values_nan)
#hide
df_nan
#hide
df_nomes = df_final[['unidade','linha_pesquisa','area_conhecimento_cnpq']]
#hide_input
df_nomes
#hide
df_nan['new_linha'] = df_nan.linha_pesquisa.map(df_nan.linha_pesquisa.value_counts().astype(int))
#hide
df_final['new_linha'] = df_final.linha_pesquisa.map(df_final.linha_pesquisa.value_counts())
#hide
df_nan.unidade = df_nan.unidade.map(df_nan.unidade.value_counts())
#hide
df_final
#hide
df_merge = df_final.loc[df_final['new_linha'] > 10]
#hide
df_merge['cluster'] = kmeans.labels_
#hide
df_merge
#hide
plt.scatter(df_merge['cluster'], df_merge['area_conhecimento_cnpq'])
#hide
df_process = df_nan[['unidade','linha_pesquisa','area_conhecimento_cnpq']]
#hide_output
df_filtered = df_process.loc[df_process['linha_pesquisa'] > 10]
#hide
df_filtered.head()
#hide_output
X = np.array(df_filtered.drop(['area_conhecimento_cnpq'], 1))
np.shape(X)
#hide_output
kmeans = KMeans(n_clusters=8).fit(X)
centroides = np.array(kmeans.cluster_centers_)
centroides
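#hide
# Illustrative sketch (assumes X and kmeans as fitted just above): the silhouette score gives a
# rough check of how well separated the 8 clusters are (values closer to 1 are better).
from sklearn.metrics import silhouette_score
print(silhouette_score(X, kmeans.labels_))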
#hide_output
centroides = kmeans.cluster_centers_
#hide_output
y = kmeans.predict(X)
#hide_input
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5);
#hide_input
plt.scatter(X[:,0],X[:,1], c=kmeans.labels_, cmap='rainbow')
#hide_input
plt.scatter(X[:,0], X[:,1], c=kmeans.labels_, cmap='rainbow')
plt.scatter(kmeans.cluster_centers_[:,0] ,kmeans.cluster_centers_[:,1], color='black')
#hide
def doKmeans(X, nclust=8):
model = KMeans(nclust)
model.fit(X)
clust_labels = model.predict(X)
cent = model.cluster_centers_
return (clust_labels, cent)
clust_labels, cent = doKmeans(df_filtered, 8)
kmeans = pd.DataFrame(clust_labels)
df_filtered.insert((df_filtered.shape[1]),'kmeans',kmeans)
#hide
fig = plt.figure()
ax = fig.add_subplot(111)
scatter = ax.scatter(df_filtered['unidade'],df_filtered['linha_pesquisa'],
c=kmeans[0],s=50)
ax.set_title('K-Means Clustering')
ax.set_xlabel('unidade')
ax.set_ylabel('linha de pesquisa')
plt.colorbar(scatter)
#hide
fig = plt.figure()
ax = fig.add_subplot(111)
scatter = ax.scatter(df_filtered['unidade'],df_filtered['area_conhecimento_cnpq'],
c=kmeans[0],s=50)
ax.set_title('K-Means Clustering')
ax.set_xlabel('unidade')
ax.set_ylabel('area de conhecimento')
plt.colorbar(scatter)
###Output
_____no_output_____ |
examples/notebooks/20_timeseries_inspector.ipynb | ###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap
import ee
import geemap
geemap.show_youtube('0CZ7Aj8hCyo')
###Output
_____no_output_____
###Markdown
Update the geemap package. If you run into errors with this notebook, please uncomment the line below to update the [geemap](https://github.com/giswqs/geemapinstallation) package to the latest version from GitHub. Restart the Kernel (Menu -> Kernel -> Restart) for the update to take effect.
###Code
# geemap.update_package()
###Output
_____no_output_____
###Markdown
NAIP: National Agriculture Imagery Program. The National Agriculture Imagery Program (NAIP) acquires aerial imagery during the agricultural growing seasons in the continental U.S. NAIP projects are contracted each year based upon available funding and the FSA imagery acquisition cycle. Beginning in 2003, NAIP was acquired on a 5-year cycle. 2008 was a transition year, and a three-year cycle began in 2009. NAIP imagery is acquired at a **one-meter** ground sample distance (GSD) with a horizontal accuracy that matches within six meters of photo-identifiable ground control points, which are used during image inspection. Older images were collected using 3 bands (Red, Green, and Blue: RGB), but newer imagery is usually collected with an additional near-infrared band (RGBN). More information about NAIP imagery can be found on the [Earth Engine Data Catalog](https://developers.google.com/earth-engine/datasets/catalog/USDA_NAIP_DOQQ). Create annual composite of NAIP imagery. Select 4-band (RGBN) NAIP imagery.
###Code
Map = geemap.Map()
naip_ts = geemap.naip_timeseries(start_year=2009, end_year=2018)
###Output
_____no_output_____
###Markdown
Create a list of layer names to be shown under the dropdown list.
###Code
layer_names = ['NAIP ' + str(year) for year in range(2009, 2019)]
print(layer_names)
###Output
_____no_output_____
###Markdown
Set visualization parameters.
###Code
naip_vis = {'bands': ['N', 'R', 'G']}
###Output
_____no_output_____
###Markdown
Create a split-panel map for visualizing NAIP imagery
###Code
Map = geemap.Map()
Map.ts_inspector(
left_ts=naip_ts,
right_ts=naip_ts,
left_names=layer_names,
right_names=layer_names,
left_vis=naip_vis,
right_vis=naip_vis,
)
Map
###Output
_____no_output_____
###Markdown
Create annual composite of Landsat imagery. Use the drawing tools to create an ROI.
###Code
import ee
import geemap
Map = geemap.Map()
Map
region = Map.draw_last_feature
if region is not None:
roi = region.geometry()
else:
roi = ee.Geometry.Polygon(
[
[
[-115.897448, 35.640766],
[-115.897448, 36.603608],
[-113.784915, 36.603608],
[-113.784915, 35.640766],
[-115.897448, 35.640766],
]
],
None,
False,
)
print(roi.getInfo())
landsat_ts = geemap.landsat_timeseries(
roi=roi, start_year=1984, end_year=2019, start_date='01-01', end_date='12-31'
)
layer_names = ['Landsat ' + str(year) for year in range(1984, 2020)]
print(layer_names)
landsat_vis = {
'min': 0,
'max': 4000,
'gamma': [1, 1, 1],
'bands': ['NIR', 'Red', 'Green'],
}
Map = geemap.Map()
Map.ts_inspector(
left_ts=landsat_ts,
right_ts=landsat_ts,
left_names=layer_names,
right_names=layer_names,
left_vis=landsat_vis,
right_vis=landsat_vis,
)
Map.centerObject(roi, zoom=8)
Map
###Output
_____no_output_____
###Markdown
Compare Landsat imagery and the National Land Cover Database (NLCD). More information about NLCD can be found at the [Earth Engine Data Catalog](https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD).
###Code
import ee
import geemap
Map = geemap.Map()
Map
NLCD = ee.ImageCollection('USGS/NLCD')
NLCD_layers = NLCD.aggregate_array('system:id').getInfo()
print(NLCD_layers)
NLCD2001 = ee.Image('USGS/NLCD/NLCD2001').select('landcover')
NLCD2006 = ee.Image('USGS/NLCD/NLCD2006').select('landcover')
NLCD2011 = ee.Image('USGS/NLCD/NLCD2011').select('landcover')
NLCD2016 = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
NLCD = ee.ImageCollection([NLCD2001, NLCD2006, NLCD2011, NLCD2016])
NLCD_layer_names = ['NLCD ' + str(year) for year in range(2001, 2017, 5)]
print(NLCD_layer_names)
roi = ee.Geometry.Polygon(
[
[
[-115.897448, 35.640766],
[-115.897448, 36.603608],
[-113.784915, 36.603608],
[-113.784915, 35.640766],
[-115.897448, 35.640766],
]
],
None,
False,
)
landsat_ts = geemap.landsat_timeseries(
roi=roi, start_year=2001, end_year=2016, start_date='01-01', end_date='12-31'
)
landsat_layer_names = ['Landsat ' + str(year) for year in range(2001, 2017)]
landsat_vis = {
'min': 0,
'max': 4000,
'gamma': [1, 1, 1],
'bands': ['NIR', 'Red', 'Green'],
}
nlcd_vis = {'bands': ['landcover']}
Map = geemap.Map()
Map.ts_inspector(
left_ts=landsat_ts,
right_ts=NLCD,
left_names=landsat_layer_names,
right_names=NLCD_layer_names,
left_vis=landsat_vis,
right_vis=nlcd_vis,
)
Map.centerObject(roi, zoom=8)
Map
###Output
_____no_output_____ |
lab6/ex1.ipynb | ###Markdown
Lab 6: Qubit Drive: Rabi & Ramsey Experiments. In this lab, you will take what you learned about qubit drive to perform Rabi and Ramsey experiments on a Pulse Simulator. The goal of this lab is to familiarize yourself with the important concepts of manipulating qubit states with microwave pulses. Installing Necessary Packages. Before we begin, you will need to install some prerequisites into your environment. Run the cell below to complete these installations. At the end, the cell outputs will be cleared.
###Code
!pip install -U -r grading_tools/requirements.txt
from IPython.display import clear_output
clear_output()
###Output
_____no_output_____
###Markdown
Simulating the Transmon as a Duffing Oscillator. As you learned in Lecture 6, the transmon can be understood as a Duffing oscillator specified by a frequency $\nu$, anharmonicity $\alpha$, and drive strength $r$, which results in the Hamiltonian $$ \hat{H}_{\rm Duff}/\hbar = 2\pi\nu a^\dagger a + \pi \alpha a^\dagger a(a^\dagger a - 1) + 2 \pi r (a + a^\dagger) \times D(t),$$ where $D(t)$ is the signal on the drive channel for the qubit, and $a^\dagger$ and $a$ are, respectively, the creation and annihilation operators for the qubit. Note that the drive strength $r$ sets the scaling of the control term, with $D(t)$ assumed to be a complex and unitless number satisfying $|D(t)| \leq 1$. Qiskit Pulse Overview. As a brief overview, Qiskit Pulse schedules (experiments) consist of Instructions (e.g., Play) acting on Channels (e.g., the drive channel). Summary tables of the available Instructions and Channels, and of how each channel interacts with the actual quantum hardware, are provided as figures in the original lab notebook. However, we find it is more instructive to begin with guided programming in Pulse. Below you will learn how to create pulses and schedules, and run experiments on a simulator. These lessons can be immediately applied to actual pulse-enabled quantum hardware, in particular [`ibmq_armonk`](https://www.ibm.com/blogs/research/2019/12/qiskit-openpulse/). Let's get started! In most of the cells below, nothing needs to be modified. **However, you will need to execute the cells by pressing `shift+Enter` in each code block**. In order to keep things tidy and focus on the important aspects of Qiskit Pulse, the following cells make use of methods from the `helper` module. For the gory details, please refer back to the [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb). Before coming to Exercise 1a, the following code blocks: - create the backend pulse simulator and instantiate the transmon as a Duffing oscillator of frequency $\sim 5$ GHz; - import libraries for numerics and visualization, and define helpful constants; - create the channels for the pulse schedule and define the measurement schedule (we will only work with the drive channel).
###Code
# our backend is the Pulse Simulator
from resources import helper
from qiskit.providers.aer import PulseSimulator
backend_sim = PulseSimulator()
# sample duration for pulse instructions
dt = 1e-9
# create the model
duffing_model = helper.get_transmon(dt)
# get qubit frequency from Duffing model
qubit_lo_freq = duffing_model.hamiltonian.get_qubit_lo_from_drift()
import numpy as np
# visualization tools
import matplotlib.pyplot as plt
plt.style.use('dark_background')
# unit conversion factors -> all backend properties returned in SI (Hz, sec, etc)
GHz = 1.0e9 # Gigahertz
MHz = 1.0e6 # Megahertz
kHz = 1.0e3 # kilohertz
us = 1.0e-6 # microseconds
ns = 1.0e-9 # nanoseconds
###Output
_____no_output_____
###Markdown
 Instantiate channels and create measurement scheduleWe will use the same measurement schedule throughout, whereas the drive schedules will vary. This must be built for the simulator; for a real backend, we can ask for its default measurement pulse.
###Code
from qiskit import pulse
from qiskit.pulse import Play, Acquire
from qiskit.pulse.pulse_lib import GaussianSquare
# qubit to be used throughout the notebook
qubit = 0
### Collect the necessary channels
drive_chan = pulse.DriveChannel(qubit)
meas_chan = pulse.MeasureChannel(qubit)
acq_chan = pulse.AcquireChannel(qubit)
# Construct a measurement schedule and add it to an InstructionScheduleMap
meas_samples = 1200
meas_pulse = GaussianSquare(duration=meas_samples, amp=0.025, sigma=4, width=1150)
measure_sched = Play(meas_pulse, meas_chan) | Acquire(meas_samples, acq_chan, pulse.MemorySlot(qubit))
inst_map = pulse.InstructionScheduleMap()
inst_map.add('measure', [qubit], measure_sched)
# save the measurement/acquire pulse for later
measure = inst_map.get('measure', qubits=[qubit])
###Output
_____no_output_____
###Markdown
 Graded Exercise 1a: Rabi ScheduleAdd code to the method below in order to build a Rabi pulse schedule. A Rabi experiment consists of a drive pulse at the qubit frequency, followed by a measurement. A list of Rabi schedules will vary the drive amplitude each time. For a review of creating pulse schedules, see [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb).
###Code
from qiskit.pulse import pulse_lib
def build_rabi_pulse_schedule(drive_duration, drive_amp, drive_sigma):
### create a Rabi schedule (already done)
### create a Gaussian Rabi pulse using pulse_lib
### play Rabi pulse on the Rabi schedule and return
rabi_schedule = pulse.Schedule(name='rabi_experiment')
### WRITE YOUR CODE BETWEEN THESE LINES - START
rabi_pulse = pulse_lib.gaussian(duration=drive_duration, amp=drive_amp,
sigma=drive_sigma, name='rabi_pulse')
rabi_schedule += Play(rabi_pulse, drive_chan)
### WRITE YOUR CODE BETWEEN THESE LINES - END
# add measurement to rabi_schedule
    # << time-shifts the measurement so that it starts right after the Rabi drive ends
rabi_schedule += measure << rabi_schedule.duration
return rabi_schedule
###Output
_____no_output_____
###Markdown
From the Rabi schedule of Exercise 1a, create a list of schedules for the experiment
###Code
# Gaussian pulse parameters, with varying amplitude
drive_duration = 128
num_rabi_points = 41
drive_amps = np.linspace(0, 0.9, num_rabi_points)
drive_sigma = 16
# now vary the amplitude for each drive amp
rabi_schedules = []
for drive_amp in drive_amps:
rabi_schedules.append(build_rabi_pulse_schedule(drive_duration, drive_amp, drive_sigma))
rabi_schedules[-1].draw()
# assemble the schedules into a Qobj
from qiskit import assemble
rabi_qobj = assemble(**helper.get_params('rabi', globals()))
answer1a = rabi_qobj
# run the simulation
rabi_result = backend_sim.run(rabi_qobj, duffing_model).result()
# retrieve the data from the experiment
rabi_values = helper.get_values_from_result(rabi_result, qubit)
###Output
_____no_output_____
###Markdown
 Fit Results and Plot Rabi ExperimentOnce the rough frequency of the qubit is known, the Rabi experiment determines the amplitude of a $\pi$-pulse, that is, the strength of a pulse needed to rotate the qubit around the Bloch sphere from the $|0\rangle$ to $|1\rangle$ states (or vice versa). We assume the rotation axis to be the $x$-axis.
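As a reminder of why a sinusoid is the right fit model: for a resonant drive of fixed duration, the rotation angle about the $x$-axis grows linearly with the drive amplitude $A$, so (ignoring decay) the excited-state population follows $$ P_{|1\rangle}(A) = \sin^2\!\big(\tfrac{1}{2}\,\theta(A)\big), \qquad \theta(A) \propto A, $$ which oscillates sinusoidally in $A$. The $\pi$-pulse amplitude is the smallest amplitude for which $\theta = \pi$, i.e. half of the fitted period — exactly what is extracted from `fit_params` below.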
###Code
fit_params, y_fit = helper.fit_sinusoid(drive_amps, rabi_values, [1, 0, 0.5, 0])
plt.scatter(drive_amps, rabi_values, color='white')
plt.plot(drive_amps, y_fit, color='red')
drive_period = fit_params[2] # get period of rabi oscillation
plt.axvline(0, color='red', linestyle='--')
plt.axvline(drive_period/2, color='red', linestyle='--')
plt.xlabel("Drive amp [a.u.]", fontsize=15)
plt.ylabel("Measured signal [a.u.]", fontsize=15)
plt.show()
print("Pi pulse amplitude is %f"%float(drive_period/2))
###Output
_____no_output_____
###Markdown
The $\pi$ pulse amplitude is half the period of the sinusoid (a full period of $360^\circ$ brings it back to zero, but we wish to take the qubit from the zero to one state). For the following experiment, we want a $\pi/2$ pulse: we wish to place the qubit on the equator of the Bloch sphere. The following creates a pulse that rotates the qubit $\pi/2$ ($90^\circ$ degrees) around the Bloch sphere:
###Code
# x_90 is a concise way to say pi_over_2; i.e., an X rotation of 90 degrees
x90_pulse = pulse_lib.gaussian(duration=drive_duration,
amp=drive_period/4,
sigma=drive_sigma,
name='x90_pulse')
###Output
_____no_output_____
###Markdown
Ramsey ExperimentThe Ramsey experiment reveals the time dynamics of driving the qubit off-resonantly. In particular, we vary the delay between two $\pi/2$-pulses.
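Schematically (ignoring decoherence), if the drive frequency is detuned from the qubit by $\Delta f$, the qubit acquires a relative phase $2\pi\,\Delta f\, t$ during the delay $t$ between the two $\pi/2$-pulses, so the measured signal goes as $$ P_{|1\rangle}(t) \approx \tfrac{1}{2}\big[1 + \cos(2\pi\,\Delta f\, t)\big]. $$ The period of the resulting fringes is therefore $1/\Delta f$, which is how the Ramsey fit further down converts the fitted period into an off-resonance frequency (`del_f_MHz = 1/ramsey_period_us`).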
###Code
# Ramsey experiment parameters
time_max_us = 0.4
time_step_us = 0.0035
times_us = np.arange(0.1, time_max_us, time_step_us)
# Convert to units of dt
delay_times_dt = times_us * us / dt
###Output
_____no_output_____
###Markdown
Graded Exercise 1b: Ramsey ScheduleAdd code to the method below in order to build a Ramsey pulse schedule. For a review of creating pulse schedules, see [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb).
###Code
def build_ramsey_pulse_schedule(delay):
### create a Ramsey pulse schedule (already done)
### play an x90 pulse on the drive channel
### play another x90 pulse after delay
### add measurement pulse to schedule
ramsey_schedule = pulse.Schedule(name='ramsey_experiment')
### HINT: include delay by adding it to the duration of the schedule
### round delay to nearest integer with int(delay)
### WRITE YOUR CODE BETWEEN THESE LINES - START
ramsey_schedule |= Play(x90_pulse, drive_chan)
ramsey_schedule |= Play(x90_pulse, drive_chan) << int(ramsey_schedule.duration + delay)
ramsey_schedule |= measure << int(ramsey_schedule.duration)
### WRITE YOUR CODE BETWEEN THESE LINES - END
return ramsey_schedule
# create schedules for Ramsey experiment
ramsey_schedules = []
for delay in delay_times_dt:
ramsey_schedules.append(build_ramsey_pulse_schedule(delay))
ramsey_schedules[-1].draw()
# assemble the schedules into a Qobj
# the helper will drive the pulses off-resonantly by an unknown value
ramsey_qobj = assemble(**helper.get_params('ramsey', globals()))
answer1b = ramsey_qobj
# run the simulation
ramsey_result = backend_sim.run(ramsey_qobj, duffing_model).result()
# retrieve the data from the experiment
ramsey_values = helper.get_values_from_result(ramsey_result, qubit)
# off-resonance component
fit_params, y_fit = helper.fit_sinusoid(times_us, ramsey_values, [1, 0.7, 0.1, 0.25])
_, _, ramsey_period_us, _, = fit_params
del_f_MHz = 1/ramsey_period_us # freq is MHz since times in us
plt.scatter(times_us, np.real(ramsey_values), color='white')
plt.plot(times_us, y_fit, color='red', label=f"df = {del_f_MHz:.6f} MHz")
plt.xlim(np.min(times_us), np.max(times_us))
plt.xlabel('Delay between X90 pulses [$\mu$s]', fontsize=15)
plt.ylabel('Measured Signal [a.u.]', fontsize=15)
plt.title('Ramsey Experiment', fontsize=15)
plt.legend(loc=3)
plt.show()
print("Drive is off-resonant by %f MHz"%float(del_f_MHz))
###Output
_____no_output_____
###Markdown
 Lab 6: Qubit Drive: Rabi & Ramsey Experiments In this lab, you will take what you learned about qubit drive to perform Rabi and Ramsey experiment on a Pulse Simulator. The goal of this lab is to familiarize yourself with the important concepts of manipulating qubit states with microwave pulses. Installing Necessary PackagesBefore we begin, you will need to install some prerequisites into your environment. Run the cell below to complete these installations. At the end, the cell outputs will be cleared.
###Code
!pip install -U -r grading_tools/requirements.txt
from IPython.display import clear_output
clear_output()
###Output
_____no_output_____
###Markdown
Simulating the Transmon as a Duffing Oscillator As you learned in Lecture 6, the transmon can be understood as a Duffing oscillator specified by a frequency $\nu$, anharmonicity $\alpha$, and drive strength $r$, which results in the Hamiltonian$$ \hat{H}_{\rm Duff}/\hbar = 2\pi\nu a^\dagger a + \pi \alpha a^\dagger a(a^\dagger a - 1) + 2 \pi r (a + a^\dagger) \times D(t),$$where $D(t)$ is the signal on the drive channel for the qubit, and $a^\dagger$ and $a$ are, respectively, the creation and annihilation operators for the qubit. Note that the drive strength $r$ sets the scaling of the control term, with $D(t)$ assumed to be a complex and unitless number satisfying $|D(t)| \leq 1$. Qiskit Pulse OverviewAs a brief overview, Qiskit Pulse schedules (experiments) consist of Instructions (i.e., Play) acting on Channels (i.e., the drive channel). Here is a summary table of available Instructions and Channels:For more detail, this table summarizes the interaction of the channels with the actual quantum hardware:However, we find it is more instructive to begin with guided programming in Pulse. Below you will learn how to create pulses, schedules, and run experiments on a simulator. These lessons can be immediately applied to actual pulse-enabled quantum hardware, in particular [`ibmq_armonk`](https://www.ibm.com/blogs/research/2019/12/qiskit-openpulse/). Let's get started! In most of the cells below, nothing needs to be modified. **However, you will need to execute the cells by pressing `shift+Enter` in each code block**. In order to keep things tidy and focus on the important aspects of Qiskit Pulse, the following cells make use of methods from the `helper` module. For the gory details, please refer back to the [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb). Before coming to Exercise 1a, the following code blocks- create backend pulse simulator and instantiate the transmon as a Duffing oscillator of frequency $\sim 5$ GHz- import libraries for numerics and visualization, and define helpful constants- create the channels for the pulse schedule and define measurment schedule (we will only work with the drive channel)
###Code
# our backend is the Pulse Simulator
from resources import helper
from qiskit.providers.aer import PulseSimulator
backend_sim = PulseSimulator()
# sample duration for pulse instructions
dt = 1e-9
# create the model
duffing_model = helper.get_transmon(dt)
# get qubit frequency from Duffing model
qubit_lo_freq = duffing_model.hamiltonian.get_qubit_lo_from_drift()
import numpy as np
# visualization tools
import matplotlib.pyplot as plt
plt.style.use('dark_background')
# unit conversion factors -> all backend properties returned in SI (Hz, sec, etc)
GHz = 1.0e9 # Gigahertz
MHz = 1.0e6 # Megahertz
kHz = 1.0e3 # kilohertz
us = 1.0e-6 # microseconds
ns = 1.0e-9 # nanoseconds
###Output
_____no_output_____
###Markdown
 Instantiate channels and create measurement scheduleWe will use the same measurement schedule throughout, whereas the drive schedules will vary. This must be built for the simulator; for a real backend, we can ask for its default measurement pulse.
###Code
from qiskit import pulse
from qiskit.pulse import Play, Acquire
from qiskit.pulse.pulse_lib import GaussianSquare
# qubit to be used throughout the notebook
qubit = 0
### Collect the necessary channels
drive_chan = pulse.DriveChannel(qubit)
meas_chan = pulse.MeasureChannel(qubit)
acq_chan = pulse.AcquireChannel(qubit)
# Construct a measurement schedule and add it to an InstructionScheduleMap
meas_samples = 1200
meas_pulse = GaussianSquare(duration=meas_samples, amp=0.025, sigma=4, width=1150)
measure_sched = Play(meas_pulse, meas_chan) | Acquire(meas_samples, acq_chan, pulse.MemorySlot(qubit))
inst_map = pulse.InstructionScheduleMap()
inst_map.add('measure', [qubit], measure_sched)
# save the measurement/acquire pulse for later
measure = inst_map.get('measure', qubits=[qubit])
###Output
_____no_output_____
###Markdown
 Graded Exercise 1a: Rabi ScheduleAdd code to the method below in order to build a Rabi pulse schedule. A Rabi experiment consists of a drive pulse at the qubit frequency, followed by a measurement. A list of Rabi schedules will vary the drive amplitude each time. For a review of creating pulse schedules, see [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb).
###Code
from qiskit.pulse import pulse_lib
def build_rabi_pulse_schedule(drive_duration, drive_amp, drive_sigma):
### create a Rabi schedule (already done)
### create a Gaussian Rabi pulse using pulse_lib
### play Rabi pulse on the Rabi schedule and return
rabi_schedule = pulse.Schedule(name='rabi_experiment')
### WRITE YOUR CODE BETWEEN THESE LINES - START
rabi_pulse = pulse_lib.gaussian(duration=drive_duration, amp=drive_amp,
sigma=drive_sigma, name="Rabi pulse")
rabi_schedule += Play(rabi_pulse, drive_chan)
### WRITE YOUR CODE BETWEEN THESE LINES - END
# add measurement to rabi_schedule
    # << time-shifts the measurement so that it starts right after the Rabi drive ends
rabi_schedule += measure << rabi_schedule.duration
return rabi_schedule
###Output
_____no_output_____
###Markdown
From the Rabi schedule of Exercise 1a, create a list of schedules for the experiment
###Code
# Gaussian pulse parameters, with varying amplitude
drive_duration = 128
num_rabi_points = 41
drive_amps = np.linspace(0, 0.9, num_rabi_points)
drive_sigma = 16
# now vary the amplitude for each drive amp
rabi_schedules = []
for drive_amp in drive_amps:
rabi_schedules.append(build_rabi_pulse_schedule(drive_duration, drive_amp, drive_sigma))
rabi_schedules[-1].draw()
# assemble the schedules into a Qobj
from qiskit import assemble
rabi_qobj = assemble(**helper.get_params('rabi', globals()))
answer1a = rabi_qobj
# run the simulation
rabi_result = backend_sim.run(rabi_qobj, duffing_model).result()
# retrieve the data from the experiment
rabi_values = helper.get_values_from_result(rabi_result, qubit)
###Output
_____no_output_____
###Markdown
 Fit Results and Plot Rabi ExperimentOnce the rough frequency of the qubit is known, the Rabi experiment determines the amplitude of a $\pi$-pulse, that is, the strength of a pulse needed to rotate the qubit around the Bloch sphere from the $|0\rangle$ to $|1\rangle$ states (or vice versa). We assume the rotation axis to be the $x$-axis.
###Code
fit_params, y_fit = helper.fit_sinusoid(drive_amps, rabi_values, [1, 0, 0.5, 0])
plt.scatter(drive_amps, rabi_values, color='white')
plt.plot(drive_amps, y_fit, color='red')
drive_period = fit_params[2] # get period of rabi oscillation
plt.axvline(0, color='red', linestyle='--')
plt.axvline(drive_period/2, color='red', linestyle='--')
plt.xlabel("Drive amp [a.u.]", fontsize=15)
plt.ylabel("Measured signal [a.u.]", fontsize=15)
plt.show()
print("Pi pulse amplitude is %f"%float(drive_period/2))
###Output
_____no_output_____
###Markdown
The $\pi$ pulse amplitude is half the period of the sinusoid (a full period of $360^\circ$ brings it back to zero, but we wish to take the qubit from the zero to one state). For the following experiment, we want a $\pi/2$ pulse: we wish to place the qubit on the equator of the Bloch sphere. The following creates a pulse that rotates the qubit $\pi/2$ ($90^\circ$ degrees) around the Bloch sphere:
###Code
# x_90 is a concise way to say pi_over_2; i.e., an X rotation of 90 degrees
x90_pulse = pulse_lib.gaussian(duration=drive_duration,
amp=drive_period/4,
sigma=drive_sigma,
name='x90_pulse')
###Output
_____no_output_____
###Markdown
Ramsey ExperimentThe Ramsey experiment reveals the time dynamics of driving the qubit off-resonantly. In particular, we vary the delay between two $\pi/2$-pulses.
###Code
# Ramsey experiment parameters
time_max_us = 0.4
time_step_us = 0.0035
times_us = np.arange(0.1, time_max_us, time_step_us)
# Convert to units of dt
delay_times_dt = times_us * us / dt
###Output
_____no_output_____
###Markdown
Graded Exercise 1b: Ramsey ScheduleAdd code to the method below in order to build a Ramsey pulse schedule. For a review of creating pulse schedules, see [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb).
###Code
def build_ramsey_pulse_schedule(delay):
### create a Ramsey pulse schedule (already done)
### play an x90 pulse on the drive channel
### play another x90 pulse after delay
### add measurement pulse to schedule
ramsey_schedule = pulse.Schedule(name='ramsey_experiment')
### HINT: include delay by adding it to the duration of the schedule
### round delay to nearest integer with int(delay)
### WRITE YOUR CODE BETWEEN THESE LINES - START
ramsey_schedule += Play(x90_pulse, drive_chan)
ramsey_schedule += Play(x90_pulse, drive_chan) << ramsey_schedule.duration + int(delay)
ramsey_schedule += measure << ramsey_schedule.duration
### WRITE YOUR CODE BETWEEN THESE LINES - END
return ramsey_schedule
# create schedules for Ramsey experiment
ramsey_schedules = []
for delay in delay_times_dt:
ramsey_schedules.append(build_ramsey_pulse_schedule(delay))
ramsey_schedules[-1].draw()
# assemble the schedules into a Qobj
# the helper will drive the pulses off-resonantly by an unknown value
ramsey_qobj = assemble(**helper.get_params('ramsey', globals()))
answer1b = ramsey_qobj
# run the simulation
ramsey_result = backend_sim.run(ramsey_qobj, duffing_model).result()
# retrieve the data from the experiment
ramsey_values = helper.get_values_from_result(ramsey_result, qubit)
# off-resonance component
fit_params, y_fit = helper.fit_sinusoid(times_us, ramsey_values, [1, 0.7, 0.1, 0.25])
_, _, ramsey_period_us, _, = fit_params
del_f_MHz = 1/ramsey_period_us # freq is MHz since times in us
plt.scatter(times_us, np.real(ramsey_values), color='white')
plt.plot(times_us, y_fit, color='red', label=f"df = {del_f_MHz:.6f} MHz")
plt.xlim(np.min(times_us), np.max(times_us))
plt.xlabel('Delay between X90 pulses [$\mu$s]', fontsize=15)
plt.ylabel('Measured Signal [a.u.]', fontsize=15)
plt.title('Ramsey Experiment', fontsize=15)
plt.legend(loc=3)
plt.show()
print("Drive is off-resonant by %f MHz"%float(del_f_MHz))
###Output
_____no_output_____
###Markdown
Now grade your solutions by running the cell below. **Provide always the same name and email, as the one you wrote during the course sign up.**
###Code
name = 'Hernan Amiune'
email = '[email protected]'
from grading_tools import grade
#grade(answer1a, name, email, 'lab6', 'ex1a')
grade(answer1b, name, email, 'lab6', 'ex1b')
###Output
Grading...
lab6/ex1b - 🎉 Correct
🎊 Hurray! You have a new correct answer! Let's submit it.
Submitting the answers for lab6...
📝 Our records, so far, are:
Correct answers: lab1:ex1, lab2:ex1, lab3:ex1, lab4:ex1, lab5:ex1, lab6:ex1a, lab6:ex1b
###Markdown
**Help us improve our educational tools by submitting your code**If you would like to help us learn how to improve our educational materials and offerings, you can opt in to send us a copy of your Jupyter notebook. By executing the cell below, you consent to sending us the code in your Jupyter notebook. All of the personal information will be anonymized.
###Code
from IPython.display import display, Javascript;display(Javascript('IPython.notebook.save_checkpoint();'));
from grading_tools import send_code;send_code('ex1.ipynb')
###Output
_____no_output_____
###Markdown
 Lab 6: Qubit Drive: Rabi & Ramsey Experiments In this lab, you will take what you learned about qubit drive to perform Rabi and Ramsey experiment on a Pulse Simulator. The goal of this lab is to familiarize yourself with the important concepts of manipulating qubit states with microwave pulses. Installing Necessary PackagesBefore we begin, you will need to install some prerequisites into your environment. Run the cell below to complete these installations. At the end, the cell outputs will be cleared.
###Code
!pip install -U -r grading_tools/requirements.txt
from IPython.display import clear_output
clear_output()
###Output
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (2.9)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (2020.6.20)
Requirement already satisfied, skipping upgrade: joblib>=0.11 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from scikit-learn>=0.20.0->qiskit-aqua==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (0.15.1)
Requirement already satisfied, skipping upgrade: threadpoolctl>=2.0.0 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from scikit-learn>=0.20.0->qiskit-aqua==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (2.1.0)
Requirement already satisfied, skipping upgrade: inflection>=0.3.1 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from quandl->qiskit-aqua==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (0.5.0)
Requirement already satisfied, skipping upgrade: more-itertools in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from quandl->qiskit-aqua==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (8.4.0)
Requirement already satisfied, skipping upgrade: pandas>=0.14 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from quandl->qiskit-aqua==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (1.0.5)
Requirement already satisfied, skipping upgrade: zipp>=0.5 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from importlib-metadata; python_version < "3.8"->jsonschema>=2.6->qiskit-terra==0.14.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (3.1.0)
Requirement already satisfied, skipping upgrade: cffi!=1.11.3,>=1.8 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (1.14.0)
Requirement already satisfied, skipping upgrade: pytz>=2017.2 in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from pandas>=0.14->quandl->qiskit-aqua==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (2020.1)
Requirement already satisfied, skipping upgrade: pycparser in c:\users\codie\appdata\local\programs\python\python37\lib\site-packages (from cffi!=1.11.3,>=1.8->cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.0->qiskit==0.19->-r grading_tools/requirements.txt (line 1)) (2.20)
###Markdown
Simulating the Transmon as a Duffing Oscillator As you learned in Lecture 6, the transmon can be understood as a Duffing oscillator specified by a frequency $\nu$, anharmonicity $\alpha$, and drive strength $r$, which results in the Hamiltonian$$ \hat{H}_{\rm Duff}/\hbar = 2\pi\nu a^\dagger a + \pi \alpha a^\dagger a(a^\dagger a - 1) + 2 \pi r (a + a^\dagger) \times D(t),$$where $D(t)$ is the signal on the drive channel for the qubit, and $a^\dagger$ and $a$ are, respectively, the creation and annihilation operators for the qubit. Note that the drive strength $r$ sets the scaling of the control term, with $D(t)$ assumed to be a complex and unitless number satisfying $|D(t)| \leq 1$. Qiskit Pulse OverviewAs a brief overview, Qiskit Pulse schedules (experiments) consist of Instructions (i.e., Play) acting on Channels (i.e., the drive channel). Here is a summary table of available Instructions and Channels:For more detail, this table summarizes the interaction of the channels with the actual quantum hardware:However, we find it is more instructive to begin with guided programming in Pulse. Below you will learn how to create pulses, schedules, and run experiments on a simulator. These lessons can be immediately applied to actual pulse-enabled quantum hardware, in particular [`ibmq_armonk`](https://www.ibm.com/blogs/research/2019/12/qiskit-openpulse/). Let's get started! In most of the cells below, nothing needs to be modified. **However, you will need to execute the cells by pressing `shift+Enter` in each code block**. In order to keep things tidy and focus on the important aspects of Qiskit Pulse, the following cells make use of methods from the `helper` module. For the gory details, please refer back to the [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb). Before coming to Exercise 1a, the following code blocks- create backend pulse simulator and instantiate the transmon as a Duffing oscillator of frequency $\sim 5$ GHz- import libraries for numerics and visualization, and define helpful constants- create the channels for the pulse schedule and define measurment schedule (we will only work with the drive channel)
###Code
# our backend is the Pulse Simulator
from resources import helper
from qiskit.providers.aer import PulseSimulator
backend_sim = PulseSimulator()
# sample duration for pulse instructions
dt = 1e-9
# create the model
duffing_model = helper.get_transmon(dt)
# get qubit frequency from Duffing model
qubit_lo_freq = duffing_model.hamiltonian.get_qubit_lo_from_drift()
import numpy as np
# visualization tools
import matplotlib.pyplot as plt
plt.style.use('dark_background')
# unit conversion factors -> all backend properties returned in SI (Hz, sec, etc)
GHz = 1.0e9 # Gigahertz
MHz = 1.0e6 # Megahertz
kHz = 1.0e3 # kilohertz
us = 1.0e-6 # microseconds
ns = 1.0e-9 # nanoseconds
###Output
_____no_output_____
###Markdown
 Instantiate channels and create measurement scheduleWe will use the same measurement schedule throughout, whereas the drive schedules will vary. This must be built for the simulator; for a real backend, we can ask for its default measurement pulse.
###Code
from qiskit import pulse
from qiskit.pulse import Play, Acquire
from qiskit.pulse.pulse_lib import GaussianSquare
# qubit to be used throughout the notebook
qubit = 0
### Collect the necessary channels
drive_chan = pulse.DriveChannel(qubit)
meas_chan = pulse.MeasureChannel(qubit)
acq_chan = pulse.AcquireChannel(qubit)
# Construct a measurement schedule and add it to an InstructionScheduleMap
meas_samples = 1200
meas_pulse = GaussianSquare(duration=meas_samples, amp=0.025, sigma=4, width=1150)
measure_sched = Play(meas_pulse, meas_chan) | Acquire(meas_samples, acq_chan, pulse.MemorySlot(qubit))
inst_map = pulse.InstructionScheduleMap()
inst_map.add('measure', [qubit], measure_sched)
# save the measurement/acquire pulse for later
measure = inst_map.get('measure', qubits=[qubit])
###Output
_____no_output_____
###Markdown
 Graded Exercise 1a: Rabi ScheduleAdd code to the method below in order to build a Rabi pulse schedule. A Rabi experiment consists of a drive pulse at the qubit frequency, followed by a measurement. A list of Rabi schedules will vary the drive amplitude each time. For a review of creating pulse schedules, see [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb).
###Code
from qiskit.pulse import pulse_lib
def build_rabi_pulse_schedule(drive_duration, drive_amp, drive_sigma):
### create a Rabi schedule (already done)
### create a Gaussian Rabi pulse using pulse_lib
### play Rabi pulse on the Rabi schedule and return
rabi_schedule = pulse.Schedule(name='rabi_experiment')
### WRITE YOUR CODE BETWEEN THESE LINES - START
drive_samples = 8*drive_sigma
rabi_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_amp,
sigma=drive_sigma, name=f"Rabi drive amplitude = {drive_amp}")
rabi_schedule = pulse.Schedule(name=f"Rabi drive amplitude = {drive_amp}")
rabi_schedule += Play(rabi_pulse, drive_chan)
### WRITE YOUR CODE BETWEEN THESE LINES - END
# add measurement to rabi_schedule
    # << time-shifts the measurement so that it starts right after the Rabi drive ends
rabi_schedule += measure << rabi_schedule.duration
return rabi_schedule
###Output
_____no_output_____
###Markdown
From the Rabi schedule of Exercise 1a, create a list of schedules for the experiment
###Code
# Gaussian pulse parameters, with varying amplitude
drive_duration = 128
num_rabi_points = 41
drive_amps = np.linspace(0, 0.9, num_rabi_points)
drive_sigma = 16
# now vary the amplitude for each drive amp
rabi_schedules = []
for drive_amp in drive_amps:
rabi_schedules.append(build_rabi_pulse_schedule(drive_duration, drive_amp, drive_sigma))
rabi_schedules[-1].draw()
# assemble the schedules into a Qobj
from qiskit import assemble
rabi_qobj = assemble(**helper.get_params('rabi', globals()))
answer1a = rabi_qobj
# run the simulation
rabi_result = backend_sim.run(rabi_qobj, duffing_model).result()
# retrieve the data from the experiment
rabi_values = helper.get_values_from_result(rabi_result, qubit)
###Output
_____no_output_____
###Markdown
 Fit Results and Plot Rabi ExperimentOnce the rough frequency of the qubit is known, the Rabi experiment determines the amplitude of a $\pi$-pulse, that is, the strength of a pulse needed to rotate the qubit around the Bloch sphere from the $|0\rangle$ to $|1\rangle$ states (or vice versa). We assume the rotation axis to be the $x$-axis.
###Code
fit_params, y_fit = helper.fit_sinusoid(drive_amps, rabi_values, [1, 0, 0.5, 0])
plt.scatter(drive_amps, rabi_values, color='white')
plt.plot(drive_amps, y_fit, color='red')
drive_period = fit_params[2] # get period of rabi oscillation
plt.axvline(0, color='red', linestyle='--')
plt.axvline(drive_period/2, color='red', linestyle='--')
plt.xlabel("Drive amp [a.u.]", fontsize=15)
plt.ylabel("Measured signal [a.u.]", fontsize=15)
plt.show()
print("Pi pulse amplitude is %f"%float(drive_period/2))
###Output
_____no_output_____
###Markdown
The $\pi$ pulse amplitude is half the period of the sinusoid (a full period of $360^\circ$ brings it back to zero, but we wish to take the qubit from the zero to one state). For the following experiment, we want a $\pi/2$ pulse: we wish to place the qubit on the equator of the Bloch sphere. The following creates a pulse that rotates the qubit $\pi/2$ ($90^\circ$ degrees) around the Bloch sphere:
###Code
# x_90 is a concise way to say pi_over_2; i.e., an X rotation of 90 degrees
x90_pulse = pulse_lib.gaussian(duration=drive_duration,
amp=drive_period/4,
sigma=drive_sigma,
name='x90_pulse')
###Output
_____no_output_____
###Markdown
Ramsey ExperimentThe Ramsey experiment reveals the time dynamics of driving the qubit off-resonantly. In particular, we vary the delay between two $\pi/2$-pulses.
###Code
# Ramsey experiment parameters
time_max_us = 0.4
time_step_us = 0.0035
times_us = np.arange(0.1, time_max_us, time_step_us)
# Convert to units of dt
delay_times_dt = times_us * us / dt
###Output
_____no_output_____
###Markdown
Graded Exercise 1b: Ramsey ScheduleAdd code to the method below in order to build a Ramsey pulse schedule. For a review of creating pulse schedules, see [Lab 6 notebook](lab6-drive-ham-rabi-ramsey.ipynb).
###Code
def build_ramsey_pulse_schedule(delay):
### create a Ramsey pulse schedule (already done)
### play an x90 pulse on the drive channel
### play another x90 pulse after delay
### add measurement pulse to schedule
ramsey_schedule = pulse.Schedule(name='ramsey_experiment')
### HINT: include delay by adding it to the duration of the schedule
### round delay to nearest integer with int(delay)
### WRITE YOUR CODE BETWEEN THESE LINES - START
#this_schedule = pulse.Schedule(name=f"Ramsey delay = {delay * dt / us} us")
ramsey_schedule += Play(x90_pulse, drive_chan)
ramsey_schedule += Play(x90_pulse, drive_chan) << ramsey_schedule.duration + int(delay)
ramsey_schedule += measure << ramsey_schedule.duration
### WRITE YOUR CODE BETWEEN THESE LINES - END
return ramsey_schedule
# create schedules for Ramsey experiment
ramsey_schedules = []
for delay in delay_times_dt:
ramsey_schedules.append(build_ramsey_pulse_schedule(delay))
ramsey_schedules[-1].draw()
# assemble the schedules into a Qobj
# the helper will drive the pulses off-resonantly by an unknown value
ramsey_qobj = assemble(**helper.get_params('ramsey', globals()))
answer1b = ramsey_qobj
# run the simulation
ramsey_result = backend_sim.run(ramsey_qobj, duffing_model).result()
# retrieve the data from the experiment
ramsey_values = helper.get_values_from_result(ramsey_result, qubit)
# off-resonance component
fit_params, y_fit = helper.fit_sinusoid(times_us, ramsey_values, [1, 0.7, 0.1, 0.25])
_, _, ramsey_period_us, _, = fit_params
del_f_MHz = 1/ramsey_period_us # freq is MHz since times in us
plt.scatter(times_us, np.real(ramsey_values), color='white')
plt.plot(times_us, y_fit, color='red', label=f"df = {del_f_MHz:.6f} MHz")
plt.xlim(np.min(times_us), np.max(times_us))
plt.xlabel('Delay between X90 pulses [$\mu$s]', fontsize=15)
plt.ylabel('Measured Signal [a.u.]', fontsize=15)
plt.title('Ramsey Experiment', fontsize=15)
plt.legend(loc=3)
plt.show()
print("Drive is off-resonant by %f MHz"%float(del_f_MHz))
###Output
_____no_output_____
###Markdown
Now grade your solutions by running the cell below. **Provide always the same name and email, as the one you wrote during the course sign up.**
###Code
name = 'Rohit Prasad'
email = '[email protected]'
from grading_tools import grade
grade(answer1a, name, email, 'lab6', 'ex1a')
grade(answer1b, name, email, 'lab6', 'ex1b')
#grade(answer1a, name, email, 'lab6', 'ex1a',force_commit=True)
#grade(answer1b, name, email, 'lab6', 'ex1b',force_commit=True)
#grade(answer1b, name, email, 'lab6', 'ex1b', server = 'https://salvadelapuente.com:8088/%27')
###Output
Grading...
lab6/ex1a - 🎉 Correct
Submitting the answers for lab6...
📝 Our records, so far, are:
Correct answers: lab1:ex1, lab2:ex1, lab3:ex1, lab4:ex1, lab5:ex1, lab6:ex1a, lab6:ex1b
Grading...
lab6/ex1b - 🎉 Correct
Submitting the answers for lab6...
📝 Our records, so far, are:
Correct answers: lab1:ex1, lab2:ex1, lab3:ex1, lab4:ex1, lab5:ex1, lab6:ex1a, lab6:ex1b
###Markdown
**Help us improve our educational tools by submitting your code**If you would like to help us learn how to improve our educational materials and offerings, you can opt in to send us a copy of your Jupyter notebook. By executing the cell below, you consent to sending us the code in your Jupyter notebook. All of the personal information will be anonymized.
###Code
from IPython.display import display, Javascript;display(Javascript('IPython.notebook.save_checkpoint();'));
from grading_tools import send_code;send_code('ex1.ipynb')
###Output
_____no_output_____ |
06_EDA_01_Jupyter_EDA/wandb/run-20211207_210236-rwxnbkxy/tmp/code/EDA.ipynb | ###Markdown
For tracking this notebook:
###Code
run = wandb.init(project = 'exercise_4',
save_code = True)
###Output
[34m[1mwandb[0m: Currently logged in as: [33mjobquiroz[0m (use `wandb login --relogin` to force relogin)
[34m[1mwandb[0m: wandb version 0.12.7 is available! To upgrade, please run:
[34m[1mwandb[0m: $ pip install wandb --upgrade
###Markdown
Fetching the artifact (already in wandb):
###Code
artifact = run.use_artifact("exercise_4/genres_mod.parquet:latest")
df = pd.read_parquet(artifact.file())
df.head() #
###Output
_____no_output_____
###Markdown
Generate a profile and note the warnings:
###Code
profile = ProfileReport(df, title="Pandas Profiling Report", explorative=True)
profile.to_widgets()
###Output
_____no_output_____
###Markdown
Remove duplicates
###Code
df = df.drop_duplicates().reset_index(drop=True)
###Output
_____no_output_____
###Markdown
 Let's perform some minimal feature engineering: we create a new feature by concatenating the `title` and `song_name` columns, after replacing all missing values with the empty string:
###Code
df['title'].fillna(value='', inplace=True)
df['song_name'].fillna(value='', inplace=True)
df['text_feature'] = df['title'] + ' ' + df['song_name']
###Output
_____no_output_____
###Markdown
NOTE: this feature will have to go to the feature store. If you do not have a feature store, then you should not compute it here as part of the preprocessing step. Instead, you should compute it within the inference pipeline.
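A minimal sketch of that alternative (the helper name is hypothetical, not part of this notebook): wrap the same logic in a function that the inference pipeline applies to incoming records, so training and serving compute the feature identically.

```python
# Hypothetical sketch -- not part of this notebook. Same feature logic as above,
# packaged so an inference pipeline can apply it to raw incoming data.
import pandas as pd

def add_text_feature(data: pd.DataFrame) -> pd.DataFrame:
    data = data.copy()
    data['title'] = data['title'].fillna('')
    data['song_name'] = data['song_name'].fillna('')
    data['text_feature'] = data['title'] + ' ' + data['song_name']
    return data

# e.g. inside the inference pipeline, before calling the model:
# features = add_text_feature(incoming_df)
```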
###Code
hist = sns.histplot( df['loudness'].dropna() )
wandb.log({"chart": wandb.Image(hist)})
###Output
_____no_output_____ |
Data Visualization/Plotly/.ipynb_checkpoints/checkpoint.ipynb | ###Markdown
Loading Datasets
###Code
pokemon = pd.read_csv("pokemon_updated.csv")
pokemon.head(10)
stdperf = pd.read_csv("studentp.csv")
stdperf.head(10)
corona = pd.read_csv('C:/Users/DELL/Documents/GitHub/Public/COVID-19/covid/data/countries-aggregated.csv' ,
index_col='Date' , parse_dates=True)
corona.head(10)
spotify = pd.read_csv("spotify.csv" , index_col="Date")
spotify.head(10)
housing = pd.read_csv('C:/Users/DELL/Documents/GitHub/Data-Visualization/housing.csv')
housing.tail()
insurance = pd.read_csv('C:/Users/DELL/Documents/GitHub/Data-Visualization/insurance.csv')
insurance.head(10)
employment = pd.read_excel("unemployment.xlsx")
employment.head(10)
helpdesk = pd.read_csv("helpdesk.csv")
helpdesk.head(10)
fish= pd.read_csv("Fish.csv")
fish.head(10)
exercise = pd.read_csv("exercise.csv")
exercise.head(10)
suicide = pd.read_csv("suicide.csv")
suicide.head(10)
iris = pd.read_csv("iris.csv")
iris.head()
canada = pd.read_csv("canada.csv")
canada.head()
canada.columns
canada.drop(columns=['AREA' , 'DEV', 'DevName' , 'REG', 'Type', 'Coverage' , 'AreaName', 'RegName' ], inplace=True)
canada.head()
canada.rename(columns={'OdName':'Country'} , inplace=True)
canada.set_index(canada.Country,inplace=True)
canada.head()
canada2 = canada.copy()
canada2.head()
canada.index.name=None
canada.head()
del canada['Country']
canada.head()
canada = canada.transpose()
canada.head()
###Output
_____no_output_____ |
Deuteron executed on IBMQ Tokyo 20 Machine.ipynb | ###Markdown
Define your backend
###Code
from qiskit import IBMQ
# insert your token & URL here
IBMQ.enable_account('<your token>', url='<your url>')
# check available backends
print("Available backends:")
IBMQ.backends()
###Output
Remote backend "ibmqx_qasm_simulator" could not be instantiated due to an invalid config: {'conditional': ['Missing data for required field.'], 'basis_gates': ['Missing data for required field.'], 'local': ['Missing data for required field.'], 'memory': ['Missing data for required field.'], 'backend_version': ['Missing data for required field.'], 'max_shots': ['Missing data for required field.'], 'open_pulse': ['Missing data for required field.'], 'n_qubits': ['Missing data for required field.'], 'gates': {0: {'name': ['Missing data for required field.'], 'qasm_def': ['Missing data for required field.'], 'parameters': ['Missing data for required field.']}}, 'backend_name': ['Missing data for required field.']}
###Markdown
Define the layout
###Code
# execute on the IBM Tokyo 20 Qubit Machine
backend = IBMQ.get_backend('ibmq_20_tokyo')
print(backend)
backend.status()
###Output
ibmq_20_tokyo
###Markdown
Define the UCCSD ansatz circuit
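The cell below prepares the variational state with an $X$ on qubit 0, an $R_y(\theta)$ on qubit 1 and a CNOT from qubit 1 onto qubit 0. Working those gates through on $|00\rangle$ gives $$ |\psi(\theta)\rangle = \cos\tfrac{\theta}{2}\,|01\rangle + \sin\tfrac{\theta}{2}\,|10\rangle, $$ written as $|q_1 q_0\rangle$ — a one-parameter family of states carrying exactly one excitation shared between the two qubits.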
###Code
def get_ucc_ansatz(theta):
circuit = QuantumCircuit(2, 2)
circuit.x(0)
circuit.ry(theta, 1)
circuit.cx(1, 0)
return circuit
###Output
_____no_output_____
###Markdown
 Define the naive measurement circuits (more of this could be automated)
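For reference, the helper `measure_hamiltonian` defined in the cell below estimates the energy as a weighted sum of four Pauli expectation values, $$ \langle H \rangle = 5.9 + 0.22\,\langle ZI\rangle - 6.1\,\langle IZ\rangle - 2.14\,\langle XX\rangle - 2.14\,\langle YY\rangle, $$ with the labels matching the function names used in the code. In this "naive" approach each term is estimated from its own circuit, so every value of $\theta$ costs four separate jobs on the device (as the job-monitor output below shows).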
###Code
# define the number of shots
shots = 1000
def measure_zi(theta):
circuit = get_ucc_ansatz(theta)
circuit.measure(0, 0)
circuit.measure(1, 1)
job = qiskit.execute(circuit, backend=backend, shots=shots)
job_monitor(job)
counts = job.result().get_counts(circuit)
return (counts.get('00', 0) + counts.get('10', 0) - counts.get('11', 0) - counts.get('01', 0))/shots
def measure_iz(theta):
circuit = get_ucc_ansatz(theta)
circuit.measure(0, 0)
circuit.measure(1, 1)
job = qiskit.execute(circuit, backend=backend, shots=shots)
job_monitor(job)
counts = job.result().get_counts(circuit)
return (counts.get('00', 0) + counts.get('01', 0) - counts.get('10', 0) - counts.get('11', 0))/shots
def measure_xx(theta):
circuit = get_ucc_ansatz(theta)
circuit.h(0)
circuit.h(1)
circuit.measure(0, 0)
circuit.measure(1, 1)
job = qiskit.execute(circuit, backend=backend, shots=shots)
job_monitor(job)
counts = job.result().get_counts(circuit)
return (counts.get('00', 0) + counts.get('11', 0) - counts.get('01', 0) - counts.get('10', 0))/shots
def measure_yy(theta):
circuit = get_ucc_ansatz(theta)
circuit.h(0)
circuit.sdg(0)
circuit.h(1)
circuit.sdg(1)
circuit.measure(0, 0)
circuit.measure(1, 1)
job = qiskit.execute(circuit, backend=backend, shots=shots)
job_monitor(job)
counts = job.result().get_counts(circuit)
return (counts.get('00', 0) + counts.get('11', 0) - counts.get('01', 0) - counts.get('10', 0))/shots
def measure_hamiltonian(theta):
return 5.9 + .22 * measure_zi(theta) - 6.1 * measure_iz(theta) - 2.14 * measure_xx(theta) - 2.14 * measure_yy(theta)
###Output
_____no_output_____
###Markdown
 Run the experiment with different theta values
###Code
import numpy as np
import qiskit
values = []
for theta in np.arange(-np.pi, np.pi, np.pi / 6):
values.append(measure_hamiltonian(theta))
# print out the values after each runs in order to save progress from program collapse/network issue
print('theta is: ')
print(theta)
print('the current values[] array is: ')
print(values)
###Output
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-3.141592653589793
the current values[] array is:
[11.77248]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-2.6179938779914944
the current values[] array is:
[11.77248, 12.465000000000002]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-2.0943951023931957
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-1.570796326794897
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-1.0471975511965983
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-0.5235987755982996
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-8.881784197001252e-16
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
0.5235987755982978
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995, -1.3829999999999996]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
1.0471975511965965
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995, -1.3829999999999996, -0.22271999999999914]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
1.5707963267948948
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995, -1.3829999999999996, -0.22271999999999914, 2.59764]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
2.094395102393194
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995, -1.3829999999999996, -0.22271999999999914, 2.59764, 6.116479999999999]
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
2.617993877991493
the current values[] array is:
[11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995, -1.3829999999999996, -0.22271999999999914, 2.59764, 6.116479999999999, 9.31788]
###Markdown
 Plot the resultsSuperimpose the results of the simultaneous measurement and those of the naive measurement
###Code
# simulated results copied from the other ipython notebook
values_simulated = [12.318439999999999, 13.70312, 12.840320000000002, 10.260440000000001, 6.42872, 2.7654000000000014, -0.4627999999999991, -1.6761599999999992, -1.1323599999999996, 1.9101599999999999, 5.18572, 9.35652]
# physical results from Tokyo 20 for naive measurement method
values_physical = [11.77248, 12.465000000000002, 11.226320000000001, 8.612680000000001, 5.894, 2.0026800000000016, -0.6282799999999995, -1.3829999999999996, -0.22271999999999914, 2.59764, 6.116479999999999, 9.31788]
# array of theta values
thetas = [-3.141592653589793, -2.6179938779914944, -2.0943951023931957, -1.570796326794897, -1.0471975511965983, -0.5235987755982996, -8.881784197001252e-16, 0.5235987755982978, 1.0471975511965965, 1.5707963267948948, 2.094395102393194, 2.617993877991493]
# physical results from Tokyo 20 for simultaneous measurement method
values2_physical = [11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382, -0.11680000000000001, 2.16244, 5.65808, 9.46308]
# plot them!
import matplotlib.pyplot as plt
plt.plot(thetas, values2_physical, 'bs', label = 'simultaneous measurement')
plt.plot(thetas, values2_physical)
plt.plot(thetas, values_physical, 'g^', label = "naive measurement")
plt.plot(thetas, values_physical)
plt.legend(loc='best')
plt.title('simultaneous vs naive for Tokyo')
plt.xlabel('theta values')
plt.ylabel('energy sums')
plt.show()
###Output
_____no_output_____
###Markdown
Plot the difference of simultaneous vs naive for Tokyo 20
###Code
diff = []
for i in range(len(thetas)):
diff.append(values_physical[i] - values2_physical[i])
plt.plot(thetas, diff, 'g^')
plt.plot(thetas, diff)
plt.title('difference between simultaneous and naive for Tokyo')
plt.xlabel('theta values')
plt.ylabel('energy sums')
plt.show()
###Output
_____no_output_____
###Markdown
 Define the simultaneous measurement circuits (more of this could be automated)
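Why a single circuit can serve both terms: $X\!\otimes\!X$ and $Y\!\otimes\!Y$ commute, so they share an eigenbasis — the Bell basis: $$ XX\,|\Phi^\pm\rangle = \pm\,|\Phi^\pm\rangle,\quad YY\,|\Phi^\pm\rangle = \mp\,|\Phi^\pm\rangle,\qquad XX\,|\Psi^\pm\rangle = \pm\,|\Psi^\pm\rangle,\quad YY\,|\Psi^\pm\rangle = \pm\,|\Psi^\pm\rangle. $$ A change of basis that maps these common eigenstates onto the computational basis therefore lets both $\langle XX\rangle$ and $\langle YY\rangle$ be read from the same set of counts; that is the idea behind the extra gates in `measure_xx_and_yy` below, and it is why each value of $\theta$ now needs only two jobs instead of four.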
###Code
# define the number of shots
shots = 1000
def measure_zi_and_iz(theta):
circuit = get_ucc_ansatz(theta)
circuit.measure(0, 0)
circuit.measure(1, 1)
job = qiskit.execute(circuit, backend=backend, shots=shots)
job_monitor(job)
counts = job.result().get_counts(circuit)
zi = (counts.get('00', 0) + counts.get('10', 0) - counts.get('11', 0) - counts.get('01', 0))/shots
iz = (counts.get('00', 0) + counts.get('01', 0) - counts.get('10', 0) - counts.get('11', 0))/shots
return zi, iz
def measure_xx_and_yy(theta):
circuit = get_ucc_ansatz(theta)
circuit.h(1)
circuit.cx(1, 0)
circuit.cz(0, 1)
circuit.h(0)
circuit.h(1)
circuit.measure(0, 0)
circuit.measure(1, 1)
job = qiskit.execute(circuit, backend=backend, shots=shots)
job_monitor(job)
counts = job.result().get_counts(circuit)
xx = (counts.get('00', 0) + counts.get('10', 0) - counts.get('11', 0) - counts.get('01', 0))/shots
yy = (counts.get('00', 0) + counts.get('01', 0) - counts.get('10', 0) - counts.get('11', 0))/shots
return xx, yy
def measure_simultaneously_hamiltonian(theta):
xx, yy = measure_xx_and_yy(theta)
zi, iz = measure_zi_and_iz(theta)
return 5.9 + .22 * zi - 6.1 * iz - 2.14 * xx - 2.14 * yy
###Output
_____no_output_____
###Markdown
 Run the experiment with different theta values
###Code
import qiskit
import numpy as np
values2 = []
for theta in np.arange(-np.pi, np.pi, np.pi / 6):
values2.append(measure_simultaneously_hamiltonian(theta))
# print out the values after each runs in order to save progress from program collapse/network issue
print('theta is: ')
print(theta)
print('the current values[] array is: ')
print(values2)
###Output
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-3.141592653589793
the current values[] array is:
[11.78468]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-2.6179938779914944
the current values[] array is:
[11.78468, 12.558240000000001]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-2.0943951023931957
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-1.570796326794897
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-1.0471975511965983
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-0.5235987755982996
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
-8.881784197001252e-16
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
0.5235987755982978
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
1.0471975511965965
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382, -0.11680000000000001]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
1.5707963267948948
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382, -0.11680000000000001, 2.16244]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
2.094395102393194
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382, -0.11680000000000001, 2.16244, 5.65808]
Job Status: job has successfully run
Job Status: job has successfully run
theta is:
2.617993877991493
the current values[] array is:
[11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382, -0.11680000000000001, 2.16244, 5.65808, 9.46308]
###Markdown
plot the results for the simultaneous measurement
###Code
values2_physical = [11.78468, 12.558240000000001, 11.13888, 8.84964, 5.1592400000000005, 1.5702000000000012, -0.7365999999999993, -1.1382, -0.11680000000000001, 2.16244, 5.65808, 9.46308]
thetas = []
for theta in np.arange(-np.pi, np.pi, np.pi/6):
# put the theta values to a list for plotting
thetas.append(theta)
import matplotlib.pyplot as plt
plt.plot(thetas, values2_physical, 'g^')
plt.plot(thetas, values2_physical)
plt.xlabel('theta values')
plt.ylabel('energy sums')
plt.show()
###Output
_____no_output_____
###Markdown
Calculate and plot the difference between results obtained from the simulator and from Tokyo 20
###Code
# simulated values copied from the other ipython notebook
values2_simulated = [12.2842, 13.644680000000001, 12.96908, 10.20912, 6.76572, 2.5197200000000013, -0.45851999999999904, -1.7290799999999997, -1.2657199999999995, 1.5503200000000001, 5.36568, 9.352079999999997]
diff_phys_sim = []
for theta in range(len(thetas)):
diff_phys_sim.append(values2_physical[theta] - values2_simulated[theta])
import matplotlib.pyplot as plt
plt.plot(thetas, diff_phys_sim, 'g^')
plt.plot(thetas, diff_phys_sim)
plt.xlabel('theta values')
plt.ylabel('energy sums')
plt.show()
###Output
_____no_output_____ |
Source Codes/GST_fasion_mnist_[Classification].ipynb | ###Markdown
###Code
import keras
import numpy as np
import pandas
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
###Output
Using TensorFlow backend.
###Markdown
 Import the Fashion MNIST datasetThe Fashion MNIST dataset contains 70,000 grayscale images in 10 categories. The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images.shape
train_images[0][5]
"""
Label Class
0 T-shirt/top
1 Trouser
2 Pullover
3 Dress
4 Coat
5 Sandal
6 Shirt
7 Sneaker
8 Bag
9 Ankle boot
"""
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
sns.set(style="white")
def display_image(img):
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap = "gray_r")
plt.colorbar()
plt.show()
display_image(train_images[0])
# scale these values to a range of 0 to 1 before feeding to the neural network model.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Display the first 25 images from the training set and display the class name below each image.
# Verify that the data is in the correct format and we're ready to build and train the network.
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5, 5, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap = plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
plt.show()
###Output
_____no_output_____
###Markdown
Model Building
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=50)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
pred = model.predict(test_images)
pred[0]
results = pred.argmax(axis=1)
results
# We can graph to look at the full set of 10 channels:
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, pred, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, pred, test_labels)
plt.show()
i = 12
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, pred, test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, pred, test_labels)
plt.show()
# Plot the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, pred, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, pred, test_labels)
plt.show()
# Grab an image from the test dataset
img = test_images[0]
print(img.shape)
# Add the image to a batch where it's the only member.
img = (np.expand_dims(img,0))
print(img.shape)
predictions_single = model.predict(img)
print(predictions_single)
plot_value_array(0, predictions_single, test_labels)
plt.xticks(range(10), class_names, rotation=45)
plt.show()
np.argmax(predictions_single[0])
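# A small added convenience (a sketch, not part of the original notebook): map the predicted
# index computed above back to its human-readable class name using the class_names list.
print("Predicted class:", class_names[np.argmax(predictions_single[0])])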
###Output
_____no_output_____ |
Notebooks/Bouysset_mols2grid.ipynb | ###Markdown
Interactive visualization and filtering of small molecule datasets with mols2grid. A (short) tutorial by Cédric Bouysset - RDKit UGM 2021. `mols2grid` is a Python package for 2D molecular visualization, focused on Jupyter notebooks. 💻 **GitHub**: https://github.com/cbouy/mols2grid 👏 **Acknowledgments**: * Contributors: [@fredrikw](https://github.com/fredrikw), [@JustinChavez](https://github.com/JustinChavez) * Conda maintainer: [@hadim](https://github.com/hadim) * Tutorials/code snippets: [@PatWalters](https://practicalcheminformatics.blogspot.com/2021/07/viewing-clustered-chemical-structures.html), [@czodrowskilab](https://github.com/czodrowskilab/5minfame/blob/main/2021_09_02-czodrowski-mols2grid.ipynb), [@dataprofessor](https://www.youtube.com/watch?v=0rqIwSeUImo), [@iwatobipen](https://iwatobipen.wordpress.com/2021/06/13/draw-molecules-on-jupyter-notebook-rdkit-mols2grid/), [@JustinChavez](https://blog.reverielabs.com/building-web-applications-from-python-scripts-with-streamlit/) This tutorial covers the basics of how to use mols2grid and some more advanced use cases. It requires beginner knowledge of pandas and RDKit, and for the (optional) more advanced features some beginner knowledge of JavaScript, HTML and CSS may be necessary.
###Code
# Install requirements for the tutorial
!pip install rdkit-pypi mols2grid ipywidgets py3Dmol
import mols2grid
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors, Draw
from ipywidgets import interact, widgets
import urllib
from IPython.display import display
import py3Dmol
###Output
_____no_output_____
###Markdown
The data. A list of drugs approved by the FDA and others, downloaded from [DrugCentral](https://drugcentral.org/) and prefiltered to only contain the first 200 compounds with a molecular weight below 600 g/mol. You can get the raw dataset [here](https://unmtid-shinyapps.net/download/DrugCentral/20200516/structures.smiles.tsv).
###Code
# read the dataset
df = pd.read_csv("https://raw.githubusercontent.com/cbouy/UGM_2021/main/Notebooks/data/drugcentral_filtered.tsv", sep="\t")
df["mol"] = df["SMILES"].apply(Chem.MolFromSmiles)
# compute some descriptors
df["MolWt"] = df["mol"].apply(Descriptors.ExactMolWt)
df["LogP"] = df["mol"].apply(Descriptors.MolLogP)
df["NumHDonors"] = df["mol"].apply(Descriptors.NumHDonors)
df["NumHAcceptors"] = df["mol"].apply(Descriptors.NumHAcceptors)
# reformat the dataframe
df.drop(columns=["mol"], inplace=True)
df.rename(columns={"INN": "Name", "CAS_RN": "CAS"}, inplace=True)
print(f"{len(df)} molecules read")
df.head()
###Output
_____no_output_____
###Markdown
The basics - The input can be a DataFrame, a list of RDKit molecules, or an SDFile. The other arguments are optional.
###Code
mols2grid.display(
df,
# set the fields displayed on the grid
subset=["ID", "img", "CAS"],
# set the fields displayed on mouse hover
tooltip=["Name", "MolWt"],
)
###Output
_____no_output_____
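###Markdown
The alternative input types mentioned above work the same way. A minimal sketch (added here for illustration; the SDF path in the comment is a hypothetical file, not part of this tutorial's data):
###Code
# A list of RDKit molecules is also a valid input.
mols = [Chem.MolFromSmiles(smi) for smi in df["SMILES"].head(12)]
mols2grid.display(mols)
# An SDFile path would work too, e.g.:
# mols2grid.display("compounds.sdf")  # hypothetical path
###Output
_____no_output_____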
###Markdown
- You can make simple text searches using the text bar on the bottom right: try with `acid` for example- But we can also make substructure queries by clicking on 🔎 > SMARTS and searching for `C(=O)-[OH]`- Next, let's sort our molecules by molecular weight (click again to reverse the order)- Finally, select a couple of molecules (click on the checkbox) and you can then export your selection to a SMILES file (clipboard copy is blocked on Colab unfortunately). The main point of mols2grid is that the widget lets you access your selections from Python afterwards:
###Code
mols2grid.get_selection()
# retrieve the corresponding entries in the dataframe
df.iloc[list(mols2grid.get_selection().keys())]
###Output
_____no_output_____
###Markdown
Interactive filtering. Let's add more options for filtering the grid! We'll use ipywidgets to add sliders for the molecular weight and the other molecular descriptors, and define a function that queries the internal dataframe using the values in the sliders. Every time the sliders are moved, the function is called to filter our grid.
###Code
grid = mols2grid.MolGrid(df, name="filters")
view = grid.display(
n_rows=2,
subset=["ID", "img", "CAS"],
tooltip=["Name", "MolWt", "LogP", "NumHDonors", "NumHAcceptors"],
)
@interact(
MolWt=widgets.IntRangeSlider(value=[0, 600], min=0, max=600, step=10),
LogP=widgets.IntRangeSlider(value=[-10, 10], min=-10, max=10, step=1),
NumHDonors=widgets.IntRangeSlider(value=[0, 20], min=0, max=20, step=1),
NumHAcceptors=widgets.IntRangeSlider(value=[0, 20], min=0, max=20, step=1),
)
def filter_grid(MolWt, LogP, NumHDonors, NumHAcceptors):
results = grid.dataframe.query(
"@MolWt[0] <= MolWt <= @MolWt[1] and "
"@LogP[0] <= LogP <= @LogP[1] and "
"@NumHDonors[0] <= NumHDonors <= @NumHDonors[1] and "
"@NumHAcceptors[0] <= NumHAcceptors <= @NumHAcceptors[1]"
)
return grid.filter_by_index(results.index)
view
###Output
_____no_output_____
###Markdown
Another advantage of using `mols2grid.MolGrid` instead of `mols2grid.display`: you get a shortcut for getting your selection as a DataFrame (equivalent to `df.iloc[list(mols2grid.get_selection().keys())]`)
###Code
grid.get_selection()
###Output
_____no_output_____
###Markdown
Callbacks. Callbacks are **functions that are executed when you click on a molecule's image**. They can be written in *JavaScript* or *Python*. They can be used to display some additional information on the molecule or to run more complex code like database queries, docking or machine-learning predictions. For Python callbacks, you need to declare a function that takes a dictionary as its first argument. This dictionary contains all the data related to the molecule you've just clicked on. For example, the SMILES of the molecule will be available as `data["SMILES"]`. One limitation to keep in mind for Python callbacks is that using print or any other "output" function inside the callback will not display anything by default. You need to use ipywidgets's `Output` widget to capture what the function is trying to display, and then show it.
###Code
output = widgets.Output()
# the Output widget lets us capture the output generated by the callback function
# its presence is mandatory if you want to print/display some info with your callback
@output.capture(clear_output=True, wait=True)
def show_3d(data):
"""Query PubChem to download the SDFile with 3D coordinates and
display the molecule with py3Dmol
"""
url = "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/smiles/{}/SDF?record_type=3d"
smi = urllib.parse.quote(data["SMILES"])
try:
response = urllib.request.urlopen(url.format(smi))
except urllib.error.HTTPError:
print(f"Could not find corresponding match on PubChem")
print(data["SMILES"])
else:
sdf = response.read().decode()
view = py3Dmol.view(height=300, width=800)
view.addModel(sdf, "sdf")
view.setStyle({'stick': {}})
view.zoomTo()
view.show()
## Google Colab requirement
try:
from google import colab
except:
pass
else:
colab.output.register_callback("show_3d", show_3d)
##
g = grid.display(
subset=["ID", "img", "Name"],
tooltip_trigger="hover",
callback=show_3d,
)
display(g)
output
###Output
_____no_output_____
###Markdown
You can also use JavaScript callbacks. JS callbacks don't require declaring a function, and you can directly access and use the `data` object in your callback script, similarly to Python. The callback could then be as simple as `callback="console.log(JSON.stringify(data))"`. To display popup windows on click, a helper function is available: `mols2grid.make_popup_callback`. It requires a `title` as well as some `html` code to format and display the information that you'd like to show. All of the values inside the `data` object can be inserted in the title and html arguments using `${data["field_name"]}`. Additionally, you can execute a prerequisite JavaScript snippet to create variables that are then accessible in the html code. In the following example, we create an RDKit molecule using the SMILES of the molecule (the `SMILES` field is always present in the data object, no matter your input when creating the grid). We then create an SVG image of the molecule and calculate some descriptors. Finally, we inject these variables inside the HTML code. You can also style the popup window through the `style` argument. You can also define your own JS callback from scratch, depending on your needs. It is possible to load additional JS libraries by passing `custom_header=""` to `mols2grid.display`, and they will then be available in the callback.
###Code
callback = mols2grid.make_popup_callback(
title="${data['Name']}",
js="""
var mol = RDKitModule.get_mol(data["SMILES"]);
var svg = mol.get_svg(400, 300);
var desc = JSON.parse(mol.get_descriptors());
mol.delete();
""",
html="""
<div class="row">
<div class="col">${svg}</div>
<div class="col">
<b>Molecular weight</b>: ${desc.amw}<br/>
<b>HBond Acceptors</b>: ${desc.NumHBA}<br/>
<b>HBond Donors</b>: ${desc.NumHBD}<br/>
<b>ClogP</b>: ${desc.CrippenClogP}<br/>
</div>
</div>""",
style="max-width: 80%;",
)
grid.display(
subset=["ID", "img", "Name"],
tooltip_trigger="hover",
callback=callback,
)
print(callback)
###Output
_____no_output_____
###Markdown
Advanced customization. You have full control over how molecules and the grid are rendered:
###Code
# custom drawing options for molecules:
opts = Draw.MolDrawOptions()
# white carbon and hydrogen atoms
opts.updateAtomPalette({x: (1, 1, 1) for x in [1, 6]})
# lighter blue for nitrogen
opts.updateAtomPalette({7: (.4, .4, 1)})
# transparent background
opts.clearBackground = False
# greg's favorite 🤡
opts.comicMode = True
# put the background of each cell in black with white font
custom_css = """
.cell {
background-color: black;
color: white;
}
"""
def lipinsky(item):
"""Colors cells in dark blue if they don't follow Lipinsky's rules"""
if not (
(item["MolWt"] < 500) and
(item["NumHDonors"] <= 5) and
(item["NumHAcceptors"] <= 10) and
(item["LogP"] < 5)
):
return "background-color: navy;"
return ""
mols2grid.display(
df.sample(45),
subset=["ID", "img", "CAS"],
tooltip=["Name", "CAS", "MolWt", "LogP", "NumHDonors", "NumHAcceptors"],
size=(180, 180),
n_columns=4, n_rows=2,
MolDrawOptions=opts,
custom_css=custom_css,
hover_color="#727272",
border="2px solid #333",
# modify the style of some fields (MolWt), or of the entire cell (__all__)
style={
"MolWt": lambda x: "color: red" if x > 500 else "",
"__all__": lipinsky,
},
# modify some fields (less significant digits in this case)
transform={
"MolWt": lambda x: round(x, 1),
"LogP": lambda x: round(x, 1)
},
# hide checkboxes
selection=False,
name="customization",
)
###Output
_____no_output_____ |
phenomenon.ipynb | ###Markdown
Programming for Data Analysis, Semester 2 Project 2018. Student: David O'Brien, Student ID: G00364766. Creating a dataset. 1 - Choosing a Phenomenon. As someone who likes to go for the occasional run and analyse the various statistics my Garmin Forerunner watch provides me with at the end of the run, I have decided to look at the performance of runners throughout a given year. Any person that owns a Garmin watch is able to log their run. The data recorded on this run is then saved to Garmin's database, where it is used to generate insights for individual runners. Below, I will outline the variables recorded and the informative dashboards provided to individual users. 1.1 - Variables recorded by the Garmin watch: The Garmin Forerunner watch records multiple variables throughout your run. Some of these are as follows: Average Pace, Average Speed, Total Distance, Total Time, Average Heart Rate, Training Effect, Elevation and Calories. Another interesting feature of the watch is the Garmin Connect app. This is what provides the insights dashboard into your running performance by comparing the different variables recorded against different groups of runners. For example, you are able to compare your running distance for a particular month against all other Garmin users and see where you stand. This can be broken down further by gender/age group/average pace. In the images below, I have extracted (using the Garmin app) different plots showing the variables pace, distance and duration for both male and female users for the month of July 2018 (the table of Garmin Connect plots for Pace, Distance and Time by gender is not reproduced here). The plots next to the Pace variable show that the average pace for the majority of male runners is 5.5 min/km and for the majority of female runners is 6 min/km, indicating that males run faster than females. The plots next to the Distance variable show that the average weekly distance for the majority of male runners is 2 km and for the majority of female runners is 1 km, indicating that males run more than females. The plots next to the Duration variable show that the average weekly duration for the majority of male runners is 10 minutes and for the majority of female runners is 10 minutes, indicating that males and females, on average, run for the same length of time each week. 1.2 - Identifying the variables: For the purpose of this project, the variables identified, based on the information above, will be Gender, Pace, Distance and Duration. The table below outlines the reasonable average values for each of the variables and how the variables might be related to each other.
Variable | Male (min) | Male (max) | Male (average) | Female (min) | Female (max) | Female (average)
---------|------------|------------|----------------|--------------|--------------|----------------
Pace (min/km) | 14 | 3 | 5.5 | 14 | 3.5 | 6
Distance (km) | 1 | 30 | 2 | 1 | 30 | 1
Duration (hours) | 0.17 | 5 | 0.17 | 0.17 | 5 | 0.17
It should be noted that the proportion of male to female runners is not given in Garmin's data. Therefore, in order to get a realistic value of what this might be, I downloaded the 10k results from the Galway Bay Run for 2018. Of the 1,374 participants that had recorded their gender, 861 (63%) were female and 511 (37%) were male. I found this to be an interesting statistic, so decided to check the same information for the two longer race events for the Galway Bay Run.
Below are the findings from this.
Race | Male | Female
-----|------|--------
10k | 37% | 63%
21k (half marathon) | 56% | 44%
42k (full marathon) | 64% | 36%
It can be seen from the above that as the race distance increases, the proportions shift from a female majority at the shorter distance to a male majority at the longer distance. To use reasonable proportions of male to female runners, I have based the percentages used in the synthesised data on the 10k values, as the plots above show us that the average weekly distance is less than 10k. 2 - Synthesise the Data Set
###Code
# Import modules
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
import numpy as np
# set the seed so that the same set of numbers appear every time
np.random.seed(0)
# create dataframe for variables Gender, Distance, Duration, Pace
# create the 'running' dataframe for 1000 different runs showing the gender for each run
# create the variable 'gender' for the data frame using random.choice
gender = ['Male','Female'] # define the values
running = pd.DataFrame(np.random.choice(gender,1000,p=[0.37,0.63])) # the random.choice function lets us distribute the data to match the real world data from the Galway Bay marathon
running.columns = ['Gender'] # name the column
# check the percentage of male and female runners
(running['Gender'].value_counts()/running['Gender'].count())*100
running[0:5] # show the first five rows of data to confirm it works
# add a column for distance to the running dataframe. adapted from https://jeffdelaney.me/blog/useful-snippets-in-pandas/
distance = []
for row in running.Gender:
if row in ['Male']:
distance.append(np.random.triangular(1,2,30)) # note that the min and max run distances we want to create for both male and female are 1 and 30 respectively, if they were different, this code would create the desired random numbers also
else:
distance.append(np.random.triangular(1,2,30)) # note that the min and max run distances we want to create for both male and female are 1 and 30 respectively, if they were different, this code would create the desired random numbers also
running['distance'] = distance
# show the first five rows data to confirm it works
running[0:5]
# add a column for pace. sourced from https://jeffdelaney.me/blog/useful-snippets-in-pandas/
pace = []
for row in running.Gender:
if row in ['Male']:
pace.append(np.random.triangular(3,5.5,14))
else:
pace.append(np.random.triangular(3.5,6,14))
running['pace'] = pace
# show the first five rows data to confirm it works
running[0:5]
# add a column for time. Sourced from https://jeffdelaney.me/blog/useful-snippets-in-pandas/
running['time mins'] = running['distance'] * running['pace']
# show the first five rows data to confirm it works
running[0:5]
# show summary statistics for the running data set
running[['distance','pace','time mins']].describe()
# show the summary statistics of the variables for the male and female
running.groupby("Gender").describe()
###Output
_____no_output_____
###Markdown
2.1 - Plots The plots below demonstrate that the simulated data closely matches the data in the graphs at the top of the notebook.
###Code
# plot the distribution of males and female in relation to pace
running.groupby("Gender").hist(column=["distance","pace","time mins"], figsize=(12,8))
# Plot the running data set with a pair plot.
sns.pairplot(running, hue="Gender")
###Output
_____no_output_____
###Markdown
2.2 - The simulated data set
###Code
# show the simulated data set
running
###Output
_____no_output_____ |
notebooks/write_MessengerMDISPds3LabelNaifSpiceDriver.ipynb | ###Markdown
Writing out a USGSCSM ISD from a PDS3 Messenger MDIS image
###Code
import ale
from ale.drivers.messenger_drivers import MessengerMdisPds3NaifSpiceDriver
from ale.formatters.usgscsm_formatter import to_usgscsm
import json
import os
###Output
_____no_output_____
###Markdown
Instantiating an ALE driver. ALE drivers are objects that define how to acquire common ISD keys from an input image format; in this case we are reading in a PDS3 image using NAIF SPICE kernels for exterior orientation data. If the driver utilizes NAIF SPICE kernels, it is implemented as a [context manager](https://docs.python.org/3/reference/datamodel.html#context-managers) and will furnish metakernels when entering the context (i.e. when entering the `with` block) and free the metakernels on exit. This maintains the integrity of spicelib's internal data structures. These driver objects are short-lived and are input to a formatter function that consumes the API to create a serializable file format. `ale.formatters` contains the available formatter functions. The default config file is located at `ale/config.yml` and is copied into your home directory at `.ale/config.yml` on first use of the library. The config file can be modified using a text editor. `ale.config` is loaded into memory as a dictionary. It is used to find metakernels for different missions. For example, there is an entry for MDIS that points to `/scratch/jlaura/spice/mess-e_v_h-spice-6-v1.0/messsp_1000/extras/mk` by default. If you want to use your own metakernels, you will need to update this path. For example, if the metakernels are located in `/data/mdis/mk/`, the MDIS entry should be updated with this path. If you are using the default metakernels, then you do not need to update the path. ALE has a two-step process for writing out an ISD: 1. Instantiate your driver (in this case `MessengerMdisPds3NaifSpiceDriver`) within a context and 2. pass the driver object into a formatter (in this case, `to_usgscsm`). Requirements: * A PDS3 Messenger MDIS image * NAIF metakernels installed * Config file path for MDIS (ale.config.mdis) pointing to the MDIS NAIF metakernel directory * A conda environment with ALE installed into it using the `conda install` command or created using the environment.yml file at the base of ALE.
###Code
# printing config displays the yaml formatted string
print(ale.config)
# config object is a dictionary so it has the same access patterns
print('MDIS spice directory:', ale.config['mdis'])
# updating config for new MDIS path in this notebook
# Note: this will not change the path in `.ale/config.yml`. This change only lives in the notebook.
# ale.config['mdis'] = '/data/mdis/mk/'
# change to desired PDS3 image path
fileName = 'EN1072174528M.IMG'
# metakernels are furnished when entering the context (with block) with a driver instance
# most driver constructors simply accept an image path
with MessengerMdisPds3NaifSpiceDriver(fileName) as driver:
# pass driver instance into formatter function
usgscsmString = to_usgscsm(driver)
###Output
_____no_output_____
###Markdown
Write ISD to disk ALE formatter functions generally return bytes or a string that can be written out to disk. ALE's USGSCSM formatter function returns a JSON encoded string that can be written out using any JSON library. USGSCSM requires the ISD to be colocated with the image file with a `.json` extension in place of the image extension.
###Code
# load the json encoded string ISD
usgscsm_dict = json.loads(usgscsmString)
# strip the image file extension and append .json
jsonFile = os.path.splitext(fileName)[0] + '.json'
# write to disk
with open(jsonFile, 'w') as fp:
json.dump(usgscsm_dict, fp)
###Output
_____no_output_____ |
insurance-claims-eda-hypothesis-testing.ipynb | ###Markdown
<h1 style="font-size: 40px; font-family:Garamond; margin-bottom:2px; background-color: steelblue; border-radius: 5px 5px; padding: 5px; color: white; text-align: center;">Business Statistics: EDA & Insurance claims **Objective – Explore the dataset and extract insights from the data. Using statistical evidence to**- Prove (or disprove) that the medical claims made by the people who smoke is greater than those who don't? - Prove (or disprove) with statistical evidence that the BMI of females is different from that of males.- Is the proportion of smokers significantly different across different regions? - Is the mean BMI of women with no children, one child, and two children the same? <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;"> Table of Contents - Context - Data Dictionary - Libraries - Read and Understand Data - Exploratory Data Analysis - Conclusion - Statistical Analysis- Recommendation <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Context Leveraging customer information is of paramount importance for most businesses. In the case of an insurance company, attributes of customers like the age, sex,bmi,smoker,children can be crucial in making business decisions. <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Data Dictionary- Age :- This is an integer indicating the age of the primary beneficiary (excluding those above 64 years, since they are generally covered by the government).- Sex :- This is the policy holder's gender, either male or female.- BMI :- This is the body mass index (BMI), which provides a sense of how over or under-weight a person is relative to their height. BMI is equal to weight (in kilograms) divided by height (in meters) squared. An ideal BMI is within the range of 18.5 to 24.9.- Children :- This is an integer indicating the number of children / dependents covered by the insurance plan.- Smoker :- This is yes or no depending on whether the insured regularly smokes tobacco.- Region :- This is the beneficiary's place of residence in the U.S., divided into four geographic regions - northeast, southeast, southwest, or northwest.- Charges :- Individual medical costs billed to health insurance Question to be answered- Are there more Male beneficary ?- Are there more smoker ?- Which region has maximum , medical cost billed to health insurance.?- What is age of beneficary.?- Do beneficary having more dependents had more medical cost billed.? <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Libraries
###Code
### IMPORT: ------------------------------------
import scipy.stats as stats #It has all the probability distributions available along with many statistical functions.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import warnings
warnings.filterwarnings('ignore') # To supress warnings
sns.set(style="darkgrid") # set the background for the graphs
from scipy.stats import skew
from statsmodels.stats.proportion import proportions_ztest # For proportion Z-test
from statsmodels.formula.api import ols # For n-way ANOVA
from statsmodels.stats.anova import anova_lm # For n-way ANOVA
from scipy.stats import chi2_contingency # For Chi-Sq
###Output
_____no_output_____
###Markdown
<h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Read and Understand Data
###Code
#Reading the csv file insurance.csv
data_path='../input/insurance/insurance.csv'
df=pd.read_csv(data_path)
insured=df.copy()
# inspect data, print top 5
insured.head(5)
# bottom 5 rows:
insured.tail(5)
#get the size of dataframe
print ("Rows : " , insured.shape[0])
print ("Columns : " , insured.shape[1])
print ("\nFeatures : \n", insured.columns.tolist())
print ("\nMissing values : ", insured.isnull().sum().values.sum())
print ("\nUnique values : \n", insured.nunique())
insured.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1338 entries, 0 to 1337
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 1338 non-null int64
1 sex 1338 non-null object
2 bmi 1338 non-null float64
3 children 1338 non-null int64
4 smoker 1338 non-null object
5 region 1338 non-null object
6 charges 1338 non-null float64
dtypes: float64(2), int64(2), object(3)
memory usage: 73.3+ KB
###Markdown
**Types of variables**- Categorical variables - sex, smoker, region, children- Quantitative variables - age, bmi, charges. Here children is a discrete variable whereas age, bmi, and charges are continuous variables.- There are no missing values
###Code
#changing object dtype to category to save memory
insured.sex=insured['sex'].astype("category")
insured.smoker=insured['smoker'].astype("category")
insured.region=insured['region'].astype("category")
insured.info()
insured.describe()
###Output
_____no_output_____
###Markdown
**Observations** - Average age of the primary beneficiary is 39.2 and the maximum age is 64. - Average BMI is 30.66, which is outside the normal BMI range; the maximum BMI is 53.13. - The average medical cost billed to health insurance is 13,270, the median is 9,382 and the maximum is 63,770. - The median is less than the mean for charges, indicating the distribution is positively skewed. - A customer on average has 1 child. - For Age, BMI and children, the mean is almost equal to the median, suggesting the data is normally distributed.
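A quick numerical check of the skew claim (a small added sketch; `skew` was already imported from `scipy.stats` in the Libraries section):
###Code
# A skewness value well above 0 supports the "positively skewed" observation for charges,
# while values near 0 indicate roughly symmetric distributions.
print("Skewness of charges:", skew(insured['charges']))
print("Skewness of bmi    :", skew(insured['bmi']))
print("Skewness of age    :", skew(insured['age']))
###Output
_____no_output_____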
###Code
#Are there more Male beneficary ?
# Are there more smoker ?
# which region has maximum , claims .?
insured.describe(include='category')
# get counts of unique observations for each category variable
list_col= insured.select_dtypes(['category']).columns
for i in range(len(list_col)):
print(insured[list_col[i]].value_counts())
###Output
male 676
female 662
Name: sex, dtype: int64
no 1064
yes 274
Name: smoker, dtype: int64
southeast 364
northwest 325
southwest 325
northeast 324
Name: region, dtype: int64
###Markdown
**Observations** - 676 male and 662 female, indicating the sample has slightly more males than females. - 1064 nonsmokers and 274 smokers, indicating the sample has more nonsmokers. - The number of claims from customers who reside in the southeast region is higher compared to other regions. <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Exploratory Data Analysis Univariate Analysis
###Code
def dist_box(data):
# function plots a combined graph for univariate analysis of continous variable
#to check spread, central tendency , dispersion and outliers
Name=data.name.upper()
fig,(ax_box,ax_dis) =plt.subplots(2,1,gridspec_kw = {"height_ratios": (.25, .75)},figsize=(8, 5))
mean=data.mean()
median=data.median()
mode=data.mode().tolist()[0]
fig.suptitle("SPREAD OF DATA FOR "+ Name , fontsize=18, fontweight='bold')
sns.boxplot(x=data,showmeans=True, orient='h',color="violet",ax=ax_box)
ax_box.set(xlabel='')
sns.distplot(data,kde=False,color='blue',ax=ax_dis)
ax_dis.axvline(mean, color='r', linestyle='--',linewidth=2)
ax_dis.axvline(median, color='g', linestyle='-',linewidth=2)
ax_dis.axvline(mode, color='y', linestyle='-',linewidth=2)
plt.legend({'Mean':mean,'Median':median,'Mode':mode})
#select all quantitative columns for checking the spread
list_col= insured.select_dtypes([np.number]).columns
for i in range(len(list_col)):
dist_box(insured[list_col[i]])
###Output
_____no_output_____
###Markdown
**Observations**- The age of the primary beneficiary lies approximately between 20 and 65. The average age is approx. 40, and the majority of customers are in the 18-20s range.- BMI is normally distributed and the average BMI of a beneficiary is 30. This BMI is outside the normal range. There are a lot of outliers at the upper end.- Most of the beneficiaries have no children.- The charges distribution is unimodal and right skewed. The average cost incurred to the insurance is approx. 13,270 and the highest charge is 63,770. There are a lot of outliers at the upper end.
###Code
# Function to create barplots that indicate percentage for each category.
def bar_perc(plot, feature):
total = len(feature) # length of the column
for p in plot.patches:
percentage = '{:.1f}%'.format(100 * p.get_height()/total) # percentage of each class of the category
x = p.get_x() + p.get_width() / 2 - 0.05 # width of the plot
y = p.get_y() + p.get_height() # hieght of the plot
plot.annotate(percentage, (x, y), size = 12) # annotate the percentage
#get all category datatype
list_col= insured.select_dtypes(['category']).columns
fig1, axes1 =plt.subplots(1,3,figsize=(14, 5))
for i in range(len(list_col)):
order = insured[list_col[i]].value_counts(ascending=False).index # to display bar in ascending order
axis=sns.countplot(x=list_col[i], data=insured , order=order,ax=axes1[i],palette='viridis').set(title=list_col[i].upper())
bar_perc(axes1[i],insured[list_col[i]])
###Output
_____no_output_____
###Markdown
**Observations** - 50.5% of beneficiaries are male and 49.5% are female, i.e. approximately the same number of male and female beneficiaries. - 20.5% of beneficiaries are smokers. - Beneficiaries are fairly evenly distributed across regions, with the South East being the most populous one (~27%) and the rest of the regions each containing around ~24%. - Most of the beneficiaries don't have kids. Bivariate & Multivariate Analysis
###Code
plt.figure(figsize=(15,5))
sns.heatmap(insured.corr(),annot=True ,cmap="YlGn" )
plt.show()
cat_columns=insured.select_dtypes(['category']).columns
cat_columns
###Output
_____no_output_____
###Markdown
**Observation** - There is very little significant correlation between charges & age and between charges & bmi.
###Code
sns.pairplot(data=insured , corner=True)
plt.show()
#Sex vs all numerical variable
fig1, axes1 =plt.subplots(2,2,figsize=(14, 11))
#select all quantitative columns for checking the spread
list_col= insured.select_dtypes([np.number]).columns
for i in range(len(list_col)):
row=i//2
col=i%2
ax=axes1[row,col]
sns.boxplot(y=insured[list_col[i]],x=insured['sex'],ax=ax,palette="PuBu", orient='v').set(title='SEX VS '+ list_col[i].upper())
###Output
_____no_output_____
###Markdown
**Observation** - The average age of female beneficiaries is slightly higher than that of male beneficiaries. - The number of children male and female beneficiaries have is the same. - The BMI of male policy holders has many outliers, and the average BMI of males is slightly higher than that of females. - Male policy holders have incurred more charges to the insurance compared to female policy holders. There are a lot of outliers among female policy holders.
###Code
#smoker vs all numerical variables
fig1, axes1 =plt.subplots(2,2,figsize=(14, 11))
#select all quantitative columns for checking the spread
list_col= insured.select_dtypes([np.number]).columns
for i in range(len(list_col)):
row=i//2
col=i%2
ax=axes1[row,col]
sns.boxplot(y=insured[list_col[i]],x=insured['smoker'],ax=ax,palette="PuBu",orient='v').set(title='SMOKER VS '+ list_col[i].upper() )
###Output
_____no_output_____
###Markdown
**Observation**- Smoker have incured more cost to insurance than nonsmoker. There are outliers in nonsmoker , need to analyze.- BMI of non smoker has lot of outliers.
###Code
#region vs all numerical variable
fig1, axes1 =plt.subplots(2,2,figsize=(14, 11))
#select all quantitative columns for checking the outliers
list_col= insured.select_dtypes([np.number]).columns
for i in range(len(list_col)):
row=i//2
col=i%2
ax=axes1[row,col]
sns.boxplot(y=insured[list_col[i]],x=insured['region'],ax=ax,palette="PuBu",orient='v').set(title='REGION VS '+ list_col[i].upper() )
###Output
_____no_output_____
###Markdown
**Observations** - Age and numnber of children across regions is almost same. - Average Bmi of policy holder from southeast higher compared to other regions - Charges incured because of policy holder from southeast is higher compared to othe regions - There are lot of outliers on upper end in charges
###Code
#smoker vs Sex
plt.figure(figsize=(13,5))
ax=sns.countplot(x='smoker',hue='sex',data=insured,palette='rainbow')
bar_perc(ax,insured['sex'])
ax.set(title="Smoker vs Sex")
#smoker vs charges
sns.barplot(x=insured.smoker,y=insured.charges).set(title="Smoker vs Charges")
#region vs smoker
plt.figure(figsize=(13,5))
ax=sns.countplot(x='region',hue='smoker',data=insured)
bar_perc(ax,insured['smoker'])
ax.set(title="Smoker vs Region")
###Output
_____no_output_____
###Markdown
**Observation**- There are more male smokers than female smokers.- The southeast region has more smokers.- Smokers have costlier claims than nonsmokers.
###Code
plt.figure(figsize=(13,5))
ax=sns.countplot(x='region',hue='sex',data=insured,palette='spring')
bar_perc(ax,insured['sex'])
ax.set(title="Sex vs Region")
###Output
_____no_output_____
###Markdown
**Observations** - There are more smokers in southeast region compared to other regions.
###Code
insured.groupby(insured.sex).charges.mean()
sns.barplot(x=insured.children,y=insured.charges).set(title="Children vs Charges")
sns.barplot(x=insured.sex,y=insured.charges).set(title='Sex Vs Charges')
sns.barplot(x='region',y='charges',data=insured).set(title='Region Vs Charges')
plt.figure(figsize=(15,7))
sns.lineplot(insured["age"],insured["charges"],hue=insured["sex"],ci=0).set(title= 'Cost incured by Age for Female and Males')
plt.legend(bbox_to_anchor=(1.00, 1))
plt.show()
df_smoker_char_sex=pd.crosstab(index=insured.smoker,columns=insured.sex , values=insured.charges,aggfunc='sum')
fig1, axes1=plt.subplots(1,1,figsize=(13, 7))
df_smoker_char_sex.plot(kind='bar',ax=axes1,title="Smoker Vs Charges for Males and Females")
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
**Observations** - Charges incurred for males are higher than charges incurred for females. - With increasing age of the policy holder, the charges incurred go up for both males and females. - There are some spikes for females at approximate ages of 23, 28 and 43. - Most claims are from the southeast region. - Males who smoke have the most claims and the higher bills. - The number of claims made by females who don't smoke is higher compared to females who smoke.
###Code
#creating groups of bmi
category=pd.cut(insured.bmi,bins=[15,25,35,45,55],labels=['15-25','25-35','35-45','45-55'])
insured.insert(5,'BMIGroup',category)
insured.head()
#no of children has no relation with charges
insured.groupby(insured.children).charges.mean()
insured.groupby(insured.BMIGroup).charges.mean()
category1=pd.cut(insured.age,bins=[18,28,38,48,58,68],labels=['18-28','28-38','38-48','48-58','58-68'])
insured.insert(6,'AgeBin',category1)
insured.groupby(insured.AgeBin).charges.mean()
insured.groupby(['region','sex','smoker']).mean()['charges'].unstack()
sns.barplot(x=insured.AgeBin,y=insured.charges).set(title='Age Vs Charges')
sns.barplot(x=insured.BMIGroup,y=insured.charges)
plt.figure(figsize=(15,7))
sns.barplot(x=insured["BMIGroup"],y=insured["age"],hue=insured['sex'],ci=0).set(title= 'Age and Bmi of Males and Females')
plt.legend(bbox_to_anchor=(1.00, 1))
plt.show()
sns.barplot(x='BMIGroup',y='charges',hue='sex',data=insured).set(title="Fig 2:BMI group and Charges " )
###Output
_____no_output_____
###Markdown
**Observations**- Females with the highest BMI have incurred the most charges to the insurance company.- The BMI of males and females is not the same.- Beneficiaries with a higher BMI have incurred more cost to the insurance.
###Code
pd.crosstab(insured['sex'],insured['children'])
plt.figure(figsize=(25,10))
g=sns.FacetGrid(insured,row='smoker',height=4, aspect=2)
g=(g.map(plt.scatter ,'age','charges').add_legend())
sns.relplot(x=insured.BMIGroup, y=insured.charges, hue=insured.smoker, size= insured.AgeBin,
sizes=(40, 400), alpha=.5, palette="spring",
height=6, data=insured).set(title='Charges by Age,BMI,Smoker');
###Output
_____no_output_____
###Markdown
**Observation**- Males who smoke have incurred more cost compared to nonsmokers.- As age increases, claims increase.- Smokers have higher medical claims. <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Conclusion - As expected, as the age of the beneficiary increases, the cost to the insurance increases.- Males who smoke have the most claims and the higher bills.- Females who are nonsmokers also have more claims than nonsmoker males; this may be because of childbirth, and the claim types need to be explored to understand it better.- Beneficiaries with 2 or 3 dependents have billed higher compared to people who have 5. This is unusual and may be because of the uneven number of observations in each group. For example, the no-dependents group has 574 observations whereas the five-dependents group only has 18.- Customers with BMI > 30 are on the higher side of obesity, have more health issues and have higher claims.- Females with a BMI of more than 45 have billed higher to the insurance.- Age, BMI and smoking are important attributes which can cost the insurance company more. <h2 style="background-color: steelblue; color: white; padding: 8px; padding-right: 300px; font-size: 24px; font-family:Garamond max-width: 1500px; margin-top: 50px; margin-bottom:4px;">Statistical Analysis 1.Prove (or disprove) that the medical claims made by the people who smoke is greater than those who don't? Step 1: Define null and alternative hypothesis. $H_0 : \mu_1 \le \mu_2$ The average charges of smokers are less than or equal to those of nonsmokers. $H_a : \mu_1 > \mu_2$ The average charges of smokers are greater than those of nonsmokers. Step 2: Decide the significance level. If the p-value is less than alpha, reject the null hypothesis. α = 0.05 Step 3: Identify the test. The standard deviation of the population is not known, so we will perform a t-test. The > sign in the alternate hypothesis indicates the test is right tailed, that is, all test-statistic values that would cause us to reject the null hypothesis are in just one tail to the right of the sampling distribution curve. Step 4: Calculate the test-statistics and p-value
###Code
smoker=insured.loc[insured.smoker=="yes"]
smoker.head()
smoker.count()
nonsmoker=insured.loc[insured.smoker=='no']
nonsmoker.head()
nonsmoker.count()
# Adjusting the size of the rows to be equal
nonsmoker = nonsmoker[-274:]
charges_yes = smoker.charges
charges_no = nonsmoker.charges
print('Average Cost charged to Insurance for smoker is {} and nonsmoker is {} '.format(charges_yes.mean(),charges_no.mean()))
#smoker vs charges
sns.boxplot(x=insured.charges,y=insured.smoker,data=insured).set(title="Fig:1 Smoker vs Charges")
alpha=0.05
t_statistic_1, p_value_1 = stats.ttest_ind(charges_yes, charges_no)
p_value_onetail=p_value_1/2
print("Test statistic = {} , Pvalue ={} , OnetailPvalue = {}".format(t_statistic_1,p_value_1, p_value_onetail ))
if p_value_1 <alpha :
print("Conclusion:Since P value {} is less than alpha {} ". format (p_value_onetail,alpha) )
print("Reject Null Hypothesis that Average charges for smokers are less than or equal to nonsmoker.")
else:
print("Conclusion:Since P value {} is greater than alpha {} ". format (p_value_onetail,alpha))
print("Failed to Reject Null Hypothesis that Average charges for smokers are less than nonsmoker.")
###Output
Conclusion:Since P value 1.080249501584019e-118 is less than alpha 0.05
Reject Null Hypothesis that Average charges for smokers are less than or equal to nonsmoker.
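###Markdown
An optional cross-check (an added sketch, not part of the original analysis): SciPy 1.6 and later expose one-sided tests directly through the `alternative` argument of `ttest_ind`, which avoids halving the two-sided p-value by hand.
###Code
# One-tailed variant of the same test, guarded in case the installed SciPy predates the keyword.
try:
    t_stat_right, p_value_right = stats.ttest_ind(charges_yes, charges_no, alternative='greater')
    print("One-tailed test: t = {} , p = {}".format(t_stat_right, p_value_right))
except TypeError:
    print("SciPy < 1.6: 'alternative' not available; the halved two-sided p-value above applies.")
###Output
_____no_output_____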
###Markdown
Step 5: Decide whether to reject or fail to reject the null hypothesis. We reject the null hypothesis and can conclude that people who smoke have, on average, a larger medical claim compared to people who don't smoke. A similar result can also be seen in Fig no. 1, Smokers Vs Charges. 2.Prove (or disprove) with statistical evidence that the BMI of females is different from that of males. Let $\mu_1$ and $\mu_2$ be the respective population means for the BMI of males and the BMI of females. Step 1: Define null and alternative hypothesis. $H_0 : \mu_1 - \mu_2 = 0$ There is no difference between the BMI of males and the BMI of females. $H_a : \mu_1 - \mu_2 \neq 0$ There is a difference between the BMI of males and the BMI of females. Step 2: Decide the significance level. α = 0.05 Step 3: Identify the test. The standard deviation of the population is not known, so we will perform a t-test. The not-equal sign in the alternate hypothesis indicates it is a two-tailed test. Step 4: Calculate the test-statistics and p-value
###Code
#get all observation for male.
df_male=insured.loc[insured.sex=="male"]
#get all observation for females
df_female=insured.loc[insured.sex=="female"]
#get bmi of male and female
bmi_female=df_female.bmi
bmi_male=df_male.bmi
sns.distplot(bmi_male,color='green',hist=False)
sns.distplot(bmi_female,color='red',hist=False)
df_female.bmi.mean()
df_male.bmi.mean()
# get statistic and p value
t_statistic_2, p_value_2 = stats.ttest_ind(bmi_male, bmi_female)
print("tstats = ",t_statistic_2, ", pvalue = ", p_value_2)
if p_value_2 <alpha :
print("Conclusion:Since P value {} is less than alpha {} ". format (p_value_2,alpha) )
print("Reject Null Hypothesis that there is no difference in bmi of men and bmi of female.")
else:
print("Conclusion:Since P value {} is greater than alpha {} ". format (p_value_2,alpha))
print("Failed to Reject Null Hypothesis that there is difference in bmi of men and bmi of female .")
###Output
Conclusion:Since P value 0.08997637178984932 is greater than alpha 0.05
Failed to Reject Null Hypothesis that there is difference in bmi of men and bmi of female .
###Markdown
Step 5: Decide whether to reject or fail to reject the null hypothesis. We fail to reject the null hypothesis and conclude that there is no significant difference between the BMI of females and the BMI of males. 3.Is the proportion of smokers significantly different across different regions? Step 1: Define null and alternative hypotheses* H0: The proportion of smokers is not significantly different across different regions* Ha: The proportion of smokers is different across different regions Step 2: Decide the significance level. α = 0.05 Step 3: Identify the test. Here we are comparing two categorical variables, smoker and region, so we perform a Chi-square test of independence. Step 4: Calculate the test-statistics and p-value
###Code
contigency= pd.crosstab(insured.region, insured.smoker)
contigency
contigency.plot(kind='bar')
# Using the chi2_contingency test
chi2, pval, dof, exp_freq = chi2_contingency(contigency, correction = False)
print('chi-square statistic: {} , Pvalue: {} , Degree of freedom: {} ,expected frequencies: {} '.format(chi2, pval, dof, exp_freq))
if (pval < 0.05):
print('Reject Null Hypothesis')
else:
print('Failed to reject Null Hypothesis')
###Output
Failed to reject Null Hypothesis
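###Markdown
For reference, the observed smoker share in each region can be read off the same contingency table (a small added sketch):
###Code
# Row-wise proportions: fraction of non-smokers vs smokers within each region.
print(contigency.div(contigency.sum(axis=1), axis=0))
###Output
_____no_output_____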
###Markdown
Step 5: Decide whether to reject or fail to reject the null hypothesis. We failed to reject the null hypothesis and conclude that the proportion of smokers is not significantly different across different regions. 4.Is the mean BMI of women with no children, one child, and two children the same? Explain your answer with statistical evidence. Step 1: Define null and alternative hypotheses* H0: μ1 = μ2 = μ3, the mean BMI of women with no children, one child, and two children is the same * Ha: At least one of the mean BMIs is not the same Step 2: Decide the significance level. α = 0.05 Step 3: Identify the test. One-way ANOVA - testing equality of the group means by comparing between-group and within-group variability. Step 4: Calculate the test-statistics and p-value
###Code
# Filtering data of only women with 0, 1 and 2 children
df_female_child = df_female.loc[df_female['children']<=2]
df_female_child.head()
#pd.pivot_table(data=df_female_filtered,index=df_female_filtered.children,columns=df_female_filtered.bmi,values=df_female_filtered.bmi,fill_value=0)
df_female_child.groupby([df_female_child.children]).mean().bmi
# Women BMI with children 0, 1, 2;
sns.boxplot(x="children", y="bmi", data=df_female_child)
plt.grid()
plt.show()
# Applying ANOVA and cheking each children count (0,1,2) with the bmi;
formula = 'bmi ~ C(children)'
model = ols(formula, df_female_child).fit()
aov_table = anova_lm(model)
aov_table
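# Optional cross-check (an added sketch): the same comparison with SciPy's one-way ANOVA,
# using the BMI values of the 0-, 1- and 2-children groups directly.
groups = [grp["bmi"].values for _, grp in df_female_child.groupby("children")]
f_stat, p_val_anova = stats.f_oneway(*groups)
print("f_oneway: F = {} , p-value = {}".format(f_stat, p_val_anova))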
###Output
_____no_output_____ |
examples/week3.ipynb | ###Markdown
Classes and Objects. This is our week 3 examples notebook and will be available on Github from the powderflask/cap-comp215 repository. As usual, the first code block just imports the modules we will use.
###Code
import random
import matplotlib.pyplot as plt
from pprint import pprint
# Everything in Python is an object and everything has a "type" which is its class
import math
print("Type of the log function:", type(math.log))
# A function is an object...
def f():
print("Hello")
# ... so we can define additional "attributes" for that object:
f.meaning = "greeting"
print(f.meaning)
###Output
Type of the log function: <class 'builtin_function_or_method'>
greeting
###Markdown
Problem: Collision detection using circles
###Code
class Circle:
def __init__(self, centre, radius):
"""
Initialze a Circle object with given centre and radius
:param centre: 2-tuple with (x, y) coordinate of centre
:param radius: numeric radius of circle
"""
self.centre = centre
self.radius = radius
def __str__(self):
return f'c:{self.centre} r:{self.radius}'
def area(self):
""" Return the area of this circle """
return math.pi * self.radius**2
def move(self, offset):
""" Move this circle by given (x,y) offset """
# self.centre = tuple(offset[i] + self.centre[i] for i in range(2))
self.centre = tuple(w1 + w2 for w1,w2 in zip(self.centre, offset) )
def distance(self, other):
""" Return the distance between this circle and the other one """
# sqrt((x2-x1)**2 + (y2-y1)**2)
return sum((w1-w2)**2 for w1,w2 in zip(self.centre, other.centre)) **0.5
def intersects(self, other):
""" Return True iff this circle intersects the other one """
return self.distance(other) <= self.radius + other.radius
c1 = Circle(centre=(4,4), radius=2)
c2 = Circle(centre=(9, 9), radius=2)
print("Type of c1:", type(c1))
print("Area of c1:", Circle.area(c1), c1.area)
def print_state():
print('State:', str(c1), str(c2), 'Dist:', round(c1.distance(c2), 2), 'Intersects?', c1.intersects(c2))
print_state()
c1.move((2,2))
print_state()
c2.move((-1, 0))
print_state()
import random
list_of_circles = [Circle(centre=(random.randint(0,10), random.randint(0,10)),
radius=random.randint(3,7)) for i in range(10)
]
print(tuple(str(c) for c in list_of_circles[0:10]))
print(list(zip((1,2), (10,11))))
print(sum((1,2,3,4,5,6)))
d = {'a': 0, 'b':2}
print(d.get('c', 0))
###Output
('c:(9, 10) r:6', 'c:(3, 9) r:7', 'c:(4, 2) r:7', 'c:(7, 6) r:6', 'c:(4, 10) r:7', 'c:(4, 1) r:4', 'c:(5, 9) r:6', 'c:(1, 0) r:5', 'c:(8, 0) r:7', 'c:(10, 9) r:7')
[(1, 10), (2, 11)]
21
0
###Markdown
Problem: Counting. When conducting an experiment, it is common to count occurrences. For example* in a physics experiment, we might count the number of atomic collisions in which certain sub-atomic particles were produced* in biology, we might count the number of cells infected by a virus after a fixed time period* in a computational experiment, we might count the frequency with which clusters of a given size form. To visualize such experimental results, we would generally plot a histogram, like this:
###Code
# Experiment: Get the age distribution for Cap students
n_cap_students = 11500
# Simulate getting the age for one Cap student
def get_age(student_id):
return int(random.normalvariate(mu=24, sigma=4)) # Normally distributed age with mean of 24 years
# Run experiment to obtain the age for each student
data = [get_age(id) for id in range(n_cap_students)]
# Set the number of bins to the number of ages we found
n_bins = len(set(data))
fig, ax = plt.subplots()
ax.set_title("Age distribution for Cap Students")
ax.set_xlabel('Age (years)')
# plot a histogram of the data, divided into n "equal width" bins
ax.hist(data, bins=n_bins)
plt.plot()
###Output
_____no_output_____
###Markdown
Custom Histogram Class. Define our own histogram class that serves as a "wrapper" for the clunky pyplot ax.hist.
###Code
class Histogram:
""" A simple histogram with a nice API """
def __init__(self, title, xlabel=None):
fig, ax = plt.subplots()
ax.set_title(title)
if xlabel:
ax.set_xlabel(xlabel)
ax.set_ylabel('Count')
self.ax = ax
self.fig = fig
self.counts = {}
def count(self, category):
self.counts[category] = self.counts.get(category, 0) + 1
def plot(self):
self.ax.bar(self.counts.keys(), self.counts.values())
plt.show()
hist = Histogram(title='Age Distribution for Cap Students', xlabel='Age (years)')
for id in range(n_cap_students):
hist.count(get_age(id))
hist.plot()
###Output
_____no_output_____
###Markdown
Classes and ObjectsThis is our week 3 examples notebook and will be available on Github from the powderflask/cap-comp215 repository.As usual, the first code block just imports the modules we will use.
###Code
import random
import matplotlib.pyplot as plt
from pprint import pprint
# Everything in Python is an object and everything has a "type" which is its class
import math
print("Type of the log function:", type(math.log))
# A function is an object...
def f():
print("Hello")
# ... so we can define additional "attributes" for that object:
f.meaning = "greeting"
print(f.meaning)
###Output
Type of the log function: <class 'builtin_function_or_method'>
greeting
###Markdown
Problem: Collision detection using circles
###Code
class Circle:
def __init__(self, centre, radius):
"""
Initialze a Circle object with given centre and radius
:param centre: 2-tuple with (x, y) coordinate of centre
:param radius: numeric radius of circle
"""
self.centre = centre
self.radius = radius
def __str__(self):
return f'c:{self.centre} r:{self.radius}'
def area(self):
""" Return the area of this circle """
return math.pi * self.radius**2
def move(self, offset):
""" Move this circle by given (x,y) offset """
# self.centre = tuple(offset[i] + self.centre[i] for i in range(2))
self.centre = tuple(w1 + w2 for w1,w2 in zip(self.centre, offset) )
def distance(self, other):
""" Return the distance between this circle and the other one """
# sqrt((x2-x1)**2 + (y2-y1)**2)
return sum((w1-w2)**2 for w1,w2 in zip(self.centre, other.centre)) **0.5
def intersects(self, other):
""" Return True iff this circle intersects the other one """
return self.distance(other) <= self.radius + other.radius
c1 = Circle(centre=(4,4), radius=2)
c2 = Circle(centre=(9, 9), radius=2)
print("Type of c1:", type(c1))
print("Area of c1:", Circle.area(c1), c1.area)
def print_state():
print('State:', str(c1), str(c2), 'Dist:', round(c1.distance(c2), 2), 'Intersects?', c1.intersects(c2))
print_state()
c1.move((2,2))
print_state()
c2.move((-1, 0))
print_state()
import random
list_of_circles = [Circle(centre=(random.randint(0,10), random.randint(0,10)),
radius=random.randint(3,7)) for i in range(10)
]
print(tuple(str(c) for c in list_of_circles[0:10]))
print(list(zip((1,2), (10,11))))
print(sum((1,2,3,4,5,6)))
d = {'a': 0, 'b':2}
print(d.get('c', 0))
###Output
('c:(9, 10) r:6', 'c:(3, 9) r:7', 'c:(4, 2) r:7', 'c:(7, 6) r:6', 'c:(4, 10) r:7', 'c:(4, 1) r:4', 'c:(5, 9) r:6', 'c:(1, 0) r:5', 'c:(8, 0) r:7', 'c:(10, 9) r:7')
[(1, 10), (2, 11)]
21
0
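###Markdown
The dict.get idiom shown above is exactly the counting pattern used by the Histogram class later in this notebook. A small illustrative cell (not part of the original notebook):
###Code
# Count letters using the dict.get(key, default) idiom
counts = {}
for letter in "mississippi":
    counts[letter] = counts.get(letter, 0) + 1
print(counts)  # {'m': 1, 'i': 4, 's': 4, 'p': 2}
###Output
_____no_output_____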
###Markdown
Problem: CountingWhen conducting an experiment, it is common to count occurrences. For example* in a physics experiment, we might count the number of atomic collisions in which certain sub-atomic particles were produced* in biology, we might count the number of cells infected by a virus after a fixed time period* in a computational experiment, we might count the frequency with which clusters of a given size formTo visualize such experimental results, we would generally plot a histogram, like this:
###Code
# Experiment: Get the age distribution for Cap students
n_cap_students = 11500
# Simulate getting the age for one Cap student
def get_age(student_id):
return int(random.normalvariate(mu=24, sigma=4)) # Normally distributed age with mean of 24 years
# Run experiment to obtain the age for each student
data = [get_age(id) for id in range(n_cap_students)]
# Set the number of bins to the number of ages we found
n_bins = len(set(data))
fig, ax = plt.subplots()
ax.set_title("Age distribution for Cap Students")
ax.set_xlabel('Age (years)')
# plot a histogram of the data, divided into n "equal width" bins
ax.hist(data, bins=n_bins)
plt.show()
###Output
_____no_output_____
###Markdown
Custom Histogram ClassDefine our own histogram class that serves as a "wrapper" for the clunky pyplot ax.hist
###Code
class Histogram:
""" A simple histogram with a nice API """
def __init__(self, title, xlabel=None):
fig, ax = plt.subplots()
ax.set_title(title)
if xlabel:
ax.set_xlabel(xlabel)
ax.set_ylabel('Count')
self.ax = ax
self.fig = fig
self.counts = {}
def count(self, category):
self.counts[category] = self.counts.get(category, 0) + 1
def plot(self):
self.ax.bar(self.counts.keys(), self.counts.values())
plt.show()
hist = Histogram(title='Age Distribution for Cap Students', xlabel='Age (years)')
for id in range(n_cap_students):
hist.count(get_age(id))
hist.plot()
###Output
_____no_output_____ |
mecab.ipynb | ###Markdown
###Code
!pip install konlpy
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!pip3 install JPype1-py3
import os
os.chdir('/tmp/')
!curl -LO https://bitbucket.org/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.1.tar.gz
!tar zxfv mecab-0.996-ko-0.9.1.tar.gz
os.chdir('/tmp/mecab-0.996-ko-0.9.1')
!./configure
!make
!make check
!make install
os.chdir('/tmp')
!curl -LO http://ftpmirror.gnu.org/automake/automake-1.11.tar.gz
!tar -zxvf automake-1.11.tar.gz
os.chdir('/tmp/automake-1.11')
!./configure
!make
!make install
import os
os.chdir('/tmp/')
!wget -O m4-1.4.9.tar.gz http://ftp.gnu.org/gnu/m4/m4-1.4.9.tar.gz
!tar -zvxf m4-1.4.9.tar.gz
os.chdir('/tmp/m4-1.4.9')
!./configure
!make
!make install
os.chdir('/tmp')
!curl -OL http://ftpmirror.gnu.org/autoconf/autoconf-2.69.tar.gz
!tar xzf autoconf-2.69.tar.gz
os.chdir('/tmp/autoconf-2.69')
!./configure --prefix=/usr/local
!make
!make install
!export PATH=/usr/local/bin
import os
os.chdir('/tmp')
!curl -LO https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.1.1-20180720.tar.gz
!tar -zxvf mecab-ko-dic-2.1.1-20180720.tar.gz
os.chdir('/tmp/mecab-ko-dic-2.1.1-20180720')
!./autogen.sh
!./configure
!make
# !sh -c 'echo "dicdir=/usr/local/lib/mecab/dic/mecab-ko-dic" > /usr/local/etc/mecabrc'
!make install
os.chdir('/tmp/mecab-ko-dic-2.1.1-20180720')
!ldconfig
!ldconfig -p | grep /usr/local/lib
import os
os.chdir('/content')
!git clone https://bitbucket.org/eunjeon/mecab-python-0.996.git
os.chdir('/content/mecab-python-0.996')
!python3 setup.py build
!python3 setup.py install
from konlpy.tag import Mecab
mecab=Mecab()
text = '자연어는 처리가 필요합니다. 전처리가 중요하지요. 철수가 전처리를 합니다.'
print(mecab.morphs(text))
print(mecab.pos(text))
print(mecab.nouns(text))
import nltk
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')
n=WordNetLemmatizer()
words=['lives', 'started', 'has', 'seen', 'eaten']
[n.lemmatize(w) for w in words]
import nltk
from nltk.stem import WordNetLemmatizer
n=WordNetLemmatizer()
print(n.lemmatize('has', 'v'))      # -> 'have'
print(n.lemmatize('started', 'v'))  # -> 'start'
import nltk
nltk.download('punkt')
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
ps = PorterStemmer()
text="The company has not only benefited from the Peloton effect, but also from a near-immediate interest from celebrities and influencers in its product. Kate Hudson, Alicia Keys, Reese Witherspoon, Jennifer Aniston and Gwyneth Paltrow are among the many celebrities to have publicly boasted about Mirror, undoubtedly boosting sales for the up-and-coming startup."
words=word_tokenize(text)
print(words)
print([ps.stem(w) for w in words])
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
stopwords.words('english')[:50]
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
text = "We have the ability to create personalized premium content across a wide range of verticals, with fitness being our first vertical"
stop_words = set(stopwords.words('english'))
tokens = word_tokenize(text)
result = []
for w in tokens :
if w not in stop_words:
result.append(w)
print(tokens)
print(result)
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
text = "development 중 혹은 배포된 software에서 이미 발견한 버그의 상태 변화 과정을 추적하기 위한 소프트웨어다. 버그가 수정되었는지 여부, 수정 중이라면 현재 진행상황을 알 수 있고, 이미 알려진 버그의 누락을 막을 수 있으며, 버그 수정과 관련된 불필요한 커뮤니케이션 비용을 줄일 수 있다."
stop_words = "혹은 이미 위한 있다 있으며"
stop_words=stop_words.split(' ')
tokens = word_tokenize(text)
result = []
for w in tokens:
if w not in stop_words:
result.append(w)
else:
print(w)
print(tokens)
print(result)
# Stopword lists are highly domain-dependent, so a separate dictionary should be built for each language and domain.
###Output
혹은
이미
위한
이미
있으며
있다
['development', '중', '혹은', '배포된', 'software에서', '이미', '발견한', '버그의', '상태', '변화', '과정을', '추적하기', '위한', '소프트웨어다', '.', '버그가', '수정되었는지', '여부', ',', '수정', '중이라면', '현재', '진행상황을', '알', '수', '있고', ',', '이미', '알려진', '버그의', '누락을', '막을', '수', '있으며', ',', '버그', '수정과', '관련된', '불필요한', '커뮤니케이션', '비용을', '줄일', '수', '있다', '.']
['development', '중', '배포된', 'software에서', '발견한', '버그의', '상태', '변화', '과정을', '추적하기', '소프트웨어다', '.', '버그가', '수정되었는지', '여부', ',', '수정', '중이라면', '현재', '진행상황을', '알', '수', '있고', ',', '알려진', '버그의', '누락을', '막을', '수', ',', '버그', '수정과', '관련된', '불필요한', '커뮤니케이션', '비용을', '줄일', '수', '.']
|
07_parkingtosvgcar.ipynb | ###Markdown
Extract car parking spaces from mongoDB, export as SVG for nesting, stitch nested SVGs 🚗 🚗 🚗 This notebook extracts geometries (areas, like polygons of parking spaces) from a mongoDB, then exports all areas in an svg file for nesting with SVGNest. In the end, the SVG bins are stitched back together.Created on: 2016-10-28 Last update: 2017-03-30 Contact: [email protected], [email protected] (Michael Szell) Preliminaries Parameters
###Code
cityname = "vienna"
mode = "car" # do car here. bike is another file
pathdatain = 'output/'+cityname+'/'+mode+'in/'
pathdataout = 'output/'+cityname+'/'+mode+'out/'
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import unicode_literals
import sys
import csv
import os
import math
from random import shuffle, choice, uniform
import random
import pprint
pp = pprint.PrettyPrinter(indent=4)
from collections import defaultdict
import time
import datetime
import numpy as np
from numpy import *
from scipy import stats
import pyprind
import itertools
import logging
from collections import OrderedDict
import json
from xml.dom import minidom
from shapely.geometry import mapping, shape, LineString, LinearRing, Polygon, MultiPolygon
import shapely
import shapely.ops as ops
from shapely import affinity
from functools import partial
import pyproj
Projection = pyproj.Proj("+proj=merc +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84 +units=m +no_defs")
from scipy.ndimage.interpolation import rotate
from scipy.spatial import ConvexHull
import pymongo
from pymongo import MongoClient
# plotting stuff
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
DB connection
###Code
client = MongoClient()
db = client[cityname+'_derived']
ways = db['ways']
cursor = ways.find({"$and": [{"properties.amenity.amenity": "parking"}, {"geometry.type": "Polygon"}, {"properties_derived.area": { "$gte": 12 }}]}).sort("properties_derived.area",-1)
numparkingareas = cursor.count()
print("There are " + str(numparkingareas) + " " + mode + " parking spaces in " + cityname)
###Output
_____no_output_____
###Markdown
Functions
###Code
def coordinatesToSVGString(coo, xoffset = 0, yoffset = 0, idname = "", classname = "", rot = 0, centroidlatlon = [0,0]):
svgstring = "\n <polygon"
if idname:
svgstring += " id=\""+idname+"\""
if classname:
svgstring += " class=\""+classname+"\""
svgstring += " points=\""
strxylist = [str(coo[i][0]+xoffset)+","+str(coo[i][1]+yoffset) for i in range(coo.shape[0])]
for s in strxylist:
svgstring += s+" "
svgstring += "\""
svgstring += " moovel_rot=\""+str(rot)+"\"" # pseudo-namespace, because svgnest strips namespace info. http://stackoverflow.com/questions/15532371/do-svg-docs-support-custom-data-attributes
centroid = [Polygon(coo).centroid.x, Polygon(coo).centroid.y]
svgstring += " moovel_centroid=\""+str(centroid[0]+xoffset)+","+str(centroid[1]+yoffset)+"\""
svgstring += " moovel_centroidlatlon=\""+str(centroidlatlon[0])+","+str(centroidlatlon[1])+"\""
svgstring += "/>"
return svgstring
def drawPolygon(poly, title=""): # poly is a shapely Polygon
x, y = poly.exterior.xy
fig = plt.figure(figsize=(4,4), dpi=90)
ax = fig.add_subplot(111)
ax.set_title(title)
ax.plot(x, y)
def getLargestSubPolygon(multipoly): # multipoly is a shapely polygon or multipolygon
# if its a polygon, do nothing, else give largest subpolygon
if not (isinstance(multipoly, shapely.geometry.multipolygon.MultiPolygon)):
return multipoly
else:
a = 0
j = 0
for i in range(len(multipoly)):
if multipoly[i].area > a:
j = i
a = multipoly[i].area
return multipoly[j]
def getSmallestSubPolygon(multipoly): # multipoly is a shapely polygon or multipolygon
# if its a polygon, do nothing, else give largest subpolygon
if not (isinstance(multipoly, shapely.geometry.multipolygon.MultiPolygon)):
return multipoly
else:
a = float("inf")
j = 0
for i in range(len(multipoly)):
if multipoly[i].area < a:
j = i
a = multipoly[i].area
return multipoly[j]
def getTwoLargestSubPolygons(multipoly): # multipoly is a shapely polygon or multipolygon
# if its a polygon, do nothing, else give two largest subpolygon
if not (isinstance(multipoly, shapely.geometry.multipolygon.MultiPolygon)):
return multipoly
else:
a = [multipoly[i].area for i in range(len(multipoly))]
sortorder = sorted(range(len(a)), key=lambda k: a[k], reverse=True) # http://stackoverflow.com/questions/7851077/how-to-return-index-of-a-sorted-list
return MultiPolygon([ multipoly[i] for i in sortorder[0:2] ])
def rotationToSmallestWidthRecursive(poly, maxdepth = 3, w = float("inf"), rot = 0, rotdelta = 10, depth = 1): # poly is a shapely polygon
# unit: degrees
# returns the angle the polygon needs to be rotated to be at minimum width
# Note: Is not guaranteed to converge to the global minimum
# Requires import numpy as np, from shapely import affinity
if depth <= maxdepth:
for theta in np.arange(rot-rotdelta*9, rot+rotdelta*9, rotdelta):
temp = affinity.rotate(poly, theta, origin='centroid')
x, y = temp.exterior.coords.xy
temp = np.array([[x[i],y[i]] for i in range(len(x))])
objectwidth = max(temp[:, 0])-min(temp[:, 0])
if objectwidth < w:
w = objectwidth
rot = theta
return rotationToSmallestWidthRecursive(poly, maxdepth, w, rot, rotdelta/10, depth+1)
else:
return rot
def getCoordinatesFromSVG(filepath, reversexdir = False, b = 1): # The SVG needs to have polygons with ids, embedded in gs
doc = minidom.parse(filepath) # parseString also exists
path_strings = [path.getAttribute('points') for path
in doc.getElementsByTagName('polygon')]
id_strings = [path.getAttribute('id') for path
in doc.getElementsByTagName('polygon')]
class_strings = [path.getAttribute('class') for path
in doc.getElementsByTagName('polygon')]
g_strings = [path.getAttribute('transform') for path
in doc.getElementsByTagName('g')]
rot_strings = [path.getAttribute('moovel_rot') for path
in doc.getElementsByTagName('polygon')]
centroidlatlon_strings = [path.getAttribute('moovel_centroidlatlon') for path
in doc.getElementsByTagName('polygon')]
doc.unlink()
data = dict()
numbins = 0
for i,path in enumerate(path_strings):
if class_strings[i] == "bin":
numbins += 1
if numbins == b:
path = path.split()
coo = []
for temp in path:
p = temp.split(",")
try:
trans = g_strings[i] # looks like this: "translate(484.1119359029915 -1573.8819930603422) rotate(0)"
trans = trans.split()
trans = [float(trans[0][10:]), float(trans[1][0:-1])] # gives [484.1119359029915,-1573.8819930603422]
except:
trans = [0,0]
if reversexdir:
coo.append([-(float(p[0])+trans[0]), float(p[1])+trans[1]])
else:
coo.append([float(p[0])+trans[0], float(p[1])+trans[1]])
data[id_strings[i]] = dict()
data[id_strings[i]]["coordinates"] = coo
data[id_strings[i]]["rot"] = rot_strings[i]
data[id_strings[i]]["class"] = class_strings[i]
data[id_strings[i]]["centroidlatlon"] = centroidlatlon_strings[i].split(",")
elif numbins > b:
break
return data
###Output
_____no_output_____
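###Markdown
A quick, illustrative sanity check of the coordinatesToSVGString helper defined above, applied to a simple 10x10 square (this cell is an addition, not part of the original pipeline):
###Code
# Hypothetical example: serialize a 10x10 square as an SVG polygon string
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
print(coordinatesToSVGString(square, idname="demo_square", classname="tile"))
###Output
_____no_output_____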
###Markdown
Get parking spaces for multiple SVG bins Features1. add buffer around big tiles ✔️2. give more bin space for small tiles on top of bins ✔️3. pre-select tiles from DB and exhaust them fully (and uniquely!) ✔️4. do not order big tiles by size, but more randomly ✔️5. add id as id, and more properties ✔️
###Code
parameters = {"car": {"berlin":{"maxbigparts":180, "maxbins":16},
"newyork":{"maxbigparts":29, "maxbins":7},
"stuttgart":{"maxbigparts":30, "maxbins":3},
"amsterdam":{"maxbigparts":20, "maxbins":2},
"portland":{"maxbigparts":70, "maxbins":4},
"vienna":{"maxbigparts":30, "maxbins":5},
"losangeles":{"maxbigparts":50, "maxbins":11},
"sanfrancisco":{"maxbigparts":16, "maxbins":2},
"boston":{"maxbigparts":8, "maxbins":1},
"budapest":{"maxbigparts":26,"maxbins":5},
"hongkong":{"maxbigparts":8,"maxbins":1},
"beijing":{"maxbigparts":3,"maxbins":2},
"helsinki":{"maxbigparts":30,"maxbins":7},
"copenhagen":{"maxbigparts":20,"maxbins":3},
"london":{"maxbigparts":100,"maxbins":20},
"chicago":{"maxbigparts":80, "maxbins":18},
"jakarta":{"maxbigparts":6, "maxbins":1},
"moscow":{"maxbigparts":90, "maxbins":21},
"rome":{"maxbigparts":30, "maxbins":6},
"singapore":{"maxbigparts":14, "maxbins":3},
"tokyo":{"maxbigparts":35, "maxbins":7},
"johannesburg":{"maxbigparts":15, "maxbins":3},
"barcelona":{"maxbigparts":5,"maxbins":1}
}
}
# to find out maxbins, make 10 bins or so. If all have same file size, increase until file size gets small. Select maxbins to cover all big parts.
# to find out maxbigparts, set to around maxbins*6. Possibly decrease by looking at bigparts.svg
maxbins = parameters[mode][cityname]["maxbins"]
# get parking spaces (ALL in a city)
cursor = ways.find({"$and": [{"properties.amenity.amenity": "parking"}, {"geometry.type": "Polygon"}, {"properties_derived.area": { "$gte": 12 }}]}).sort("properties_derived.area",-1)
random.seed(1)
maxbigparts = parameters[mode][cityname]["maxbigparts"]
scale = 0.6
erectbigparts = True
erectnonbigparts = True
randomrotatenonbigparts = False
binareafactor = 0.83 # This factor ensures that there are slightly more parking spaces than could fit into one bin. We later collect all leftover parking spots from those second bins.
smallvsmedium = 11
buffereps = 5 # should be the same number as the distances between parts in SVGNest
height = 1200
width = 600 - 1.5*buffereps
draw = False # for debugging purposes (drawing all big parts and bins) set this to True
eps = 0.000001
bigbin = Polygon([[0,0], [width,0], [width, maxbins*height], [0, maxbins*height]])
bigbin = bigbin.difference(Polygon([[width/2-eps,-1], [width/2-eps,maxbins*height-2], [width/2+eps,maxbins*height-2], [width/2+eps,-1]]))
# pre-select all parts
idsused = set()
idsnotused = set()
indicesused = set()
alltiles = []
alltileskeys = []
alltilesarea = 0
for i,way in enumerate(cursor):
npway = np.asarray(way["geometry"]["coordinates"])
centroidlatlon = [Polygon(npway).centroid.x, Polygon(npway).centroid.y]
npwayxy = [Projection(npway[i][0], npway[i][1]) for i in range(npway.shape[0])]
npwayxy = np.asarray([[npwayxy[i][0],-npwayxy[i][1]] for i in range(npway.shape[0])])
if i < maxbigparts: # big parts
if erectbigparts:
rot = rotationToSmallestWidthRecursive(Polygon(npwayxy))
else:
rot = 0
else: # non-big parts
if erectnonbigparts:
rot = rotationToSmallestWidthRecursive(Polygon(npwayxy))
elif randomrotatenonbigparts:
rot = uniform(10, 350)
else:
rot = 0
if rot:
temp = affinity.rotate(Polygon(npwayxy), rot, origin='centroid', use_radians=False)
x, y = temp.exterior.coords.xy
npwayxy = np.array([[x[i],y[i]] for i in range(len(x))])
objectwidth = max(npwayxy[:, 0])-min(npwayxy[:, 0])
npwayxy[:, 0] -= min(npwayxy[:, 0])
npwayxy[:, 1] -= min(npwayxy[:, 1])
npwayxy *= scale
objectwidth *= scale
if objectwidth < width:
objectheight = max(npwayxy[:, 1])
idsnotused.add(way["_id"])
coo = [[npwayxy[k][0], npwayxy[k][1]] for k in range(npwayxy.shape[0])]
area = Polygon(coo).buffer(buffereps/2).area
alltiles.append( { "_id": way["_id"], "width": objectwidth, "height": objectheight, "area": area, "coordinates": coo , "rot": rot, "centroidlatlon": centroidlatlon})
alltileskeys.append(way["_id"])
alltilesarea += area
else:
print("Object "+str(way["_id"])+" was too wide (" +str(objectwidth)+ " pixel) and was ignored.")
bigpartstodiff = []
partareastaken = []
ypos = -0.5*buffereps
numbigparts = 0
randomizedindices = list(range(maxbigparts))
shuffle(randomizedindices)
# Determine big parts
for i in randomizedindices:
tile = alltiles[i]
if ypos >= height*maxbins-1-tile["height"]: # see if this part still fits
break
tile["coordinates"] = [[-1*tile["coordinates"][k][0]+width/2+tile["width"]/2, tile["coordinates"][k][1]+ypos] for k in range(np.array(tile["coordinates"]).shape[0])]
bigpartstodiff.append( tile )
ypos += tile["height"] + 1*buffereps
indicesused.add(i)
idsused.add(tile["_id"])
idsnotused.remove(tile["_id"])
numbigparts += 1 # increase number of parts in any case
if draw:
drawPolygon(Polygon(bigpartstodiff[-1]["coordinates"]), "Big part "+str(numbigparts))
# Export the big parts
svg = "<svg xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\" width=\""+str(width)+"px\" height=\""+str(height)+"px\">"
for i in range(len(bigpartstodiff)):
svg += coordinatesToSVGString(np.array(bigpartstodiff[i]["coordinates"]), 0, 0, str(bigpartstodiff[i]["_id"]), "tile", bigpartstodiff[i]["rot"], bigpartstodiff[i]["centroidlatlon"])
svg += "\n</svg>"
with open(pathdataout + "bigparts.svg", "w") as f:
f.write(svg)
# Clip bins into batches and diff the big parts
for j in range(numbigparts):
bigbin = getLargestSubPolygon(bigbin.difference(Polygon(bigpartstodiff[j]["coordinates"]).buffer(1.75*buffereps, 1, 2, 2)))
# Cut the big part into sub-bins
scissorv = Polygon([[width/2-eps, -1], [width/2-eps, maxbins*height+1], [width/2+eps, maxbins*height+1], [width/2+eps, -1]])
cutbins = [[], []]
bigbin = getTwoLargestSubPolygons(bigbin.difference(scissorv)) # cut in half vertically
for m in range(len(bigbin)): # cut horizontally
rest = bigbin[m]
for i in range(maxbins):
scissorh = Polygon([[-1, (i+1)*height], [width+1, (i+1)*height], [width+1, (i+1)*height+eps], [-1, (i+1)*height+eps]])
temp = rest
temp = getTwoLargestSubPolygons(temp.difference(scissorh))
cutbins[m].append(getSmallestSubPolygon(temp).buffer(0.75*buffereps, 1, 2, 2))
rest = getLargestSubPolygon(temp)
if draw:
drawPolygon(cutbins[m][-1], "Bin "+str(i)+", m:"+str(m))
# Fill with small parts
remainingtiles = len(idsnotused)
if remainingtiles > 0:
randomizedindices_medium = list(range(round(remainingtiles/smallvsmedium)+numbigparts))
randomizedindices_medium = list(set(randomizedindices_medium) - indicesused)
shuffle(randomizedindices_medium)
else:
randomizedindices_medium = []
randomizedindices_small = set(list(range(remainingtiles+numbigparts))) - set(randomizedindices_medium)
randomizedindices_small = list(randomizedindices_small - indicesused)
shuffle(randomizedindices_small)
imedium = 0
ismall = 0
for numbin in range(maxbins):
for mirrored in [0,1]:
if mirrored:
m = -1
else:
m = 1
binarea = binareafactor*cutbins[mirrored][numbin].area
binbound = np.array(cutbins[mirrored][numbin].exterior.coords)
binbound = np.asarray([[-m*(binbound[i,0]-min(binbound[:,0])/2),binbound[i,1]-min(binbound[:,1])] for i in range(binbound.shape[0])])
xpos = 0
ypos = height+1 # start placing the elements below the bin
yextent = 0
svg = "<svg xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\" width=\""+str(width)+"px\" height=\""+str(height)+"px\">"
svg += coordinatesToSVGString(binbound, 0, 0, str(numbin), "bin")
totalarea = 0
for j in range(len(idsnotused)):
if len(idsnotused) == 0:
break
try:
i = randomizedindices_medium[imedium]
imedium += 1
except: # no more medium tiles left
i = randomizedindices_small[ismall]
ismall += 1
tile = alltiles[i]
if tile["width"] <= width:
if xpos + tile["width"] + 1 <= width: # there is space in this row
xdelta = m*(xpos+1)
ydelta = ypos
else: # new row
xdelta = 0
ypos += yextent
yextent = 0
ydelta = ypos
xpos = 0
svg += coordinatesToSVGString(np.array([[m*tile["coordinates"][k][0], tile["coordinates"][k][1]] for k in range(np.array(tile["coordinates"]).shape[0])]), xdelta, ydelta, str(tile["_id"]), "tile", tile["rot"], tile["centroidlatlon"])
yextent = max([yextent, tile["height"]])
xpos += tile["width"]+1
idsused.add(tile["_id"])
idsnotused.remove(tile["_id"])
totalarea += tile["area"]
if totalarea > binarea:
break
else:
print("Object "+str(way["_id"])+" was too wide (" +str(max(npwayxy[:, 0]))+ " pixel) and could not be placed.")
for kk in range(smallvsmedium):
if len(idsnotused) == 0:
break
try:
i = randomizedindices_small[ismall]
ismall += 1
except:
i = randomizedindices_medium[imedium]
imedium += 1
tile = alltiles[i]
if tile["width"] <= width:
if xpos + tile["width"] + 1 <= width: # there is space in this row
xdelta = m*(xpos+1)
ydelta = ypos
else: # new row
xdelta = 0
ypos += yextent
yextent = 0
ydelta = ypos
xpos = 0
svg += coordinatesToSVGString(np.array([[m*tile["coordinates"][k][0], tile["coordinates"][k][1]] for k in range(np.array(tile["coordinates"]).shape[0])]), xdelta, ydelta, str(tile["_id"]), "tile", tile["rot"], tile["centroidlatlon"])
yextent = max([yextent, tile["height"]])
xpos += tile["width"]+1
idsused.add(tile["_id"])
idsnotused.remove(tile["_id"])
totalarea += tile["area"]
if totalarea > binarea:
break
else:
print("Object "+str(way["_id"])+" was too wide (" +str(max(npwayxy[:, 0]))+ " pixel) and could not be placed.")
if totalarea > binarea:
break
svg += "\n</svg>"
if mirrored:
with open(pathdatain + cityname + mode + "parking"+ str(numbin).zfill(3) +"min.svg", "w") as f:
f.write(svg)
else:
with open(pathdatain + cityname + mode + "parking"+ str(numbin).zfill(3) +"in.svg", "w") as f:
f.write(svg)
print("First export done. " + str(len(idsnotused))+" tiles were not used.")
###Output
_____no_output_____
###Markdown
This has generated bigparts.svg in {{pathdataout}}, and {{maxbins}}*2 files in {{pathdatain}}. Use SVGNest on these latter files. Move the files returned from SVGNest (Download folder) into {{pathdataout}}. After SVGNest was executed a 1st time, collect all leftover parking spots
###Code
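# Added sanity check (not part of the original workflow): list the nested SVG results that were
# moved into pathdataout, so missing SVGNest output files are easy to spot before collecting.
import glob
nested_svgs = sorted(glob.glob(os.path.join(pathdataout, '[0-9][0-9][0-9]*.svg')))
print("Found {} nested SVG files in {}: {}".format(len(nested_svgs), pathdataout, nested_svgs))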
# Collect leftover tiles
idsinsecondbins = set()
for i in range(maxbins):
for mirrored in [0,1]:
if mirrored:
m = -1
tiles = getCoordinatesFromSVG(pathdataout + str(i).zfill(3) + "m.svg", False, 2)
else:
m = 1
tiles = getCoordinatesFromSVG(pathdataout + str(i).zfill(3) + ".svg", False, 2)
for key in tiles:
if tiles[key]["class"] == "tile":
idsinsecondbins.add(int(key))
idsnotusedtotal = idsnotused | idsinsecondbins
print(str(len(idsnotused))+" tiles were not used first.")
print(str(len(idsinsecondbins))+" tiles were in second bins.")
print(str(len(idsnotusedtotal))+" tiles were not used in total. Packing them into an extra bin...")
# Calculate needed area
leftovertilesarea = 0
for j in idsnotusedtotal:
i = int(np.where(np.array(alltileskeys) == j)[0])
tile = alltiles[i]
leftovertilesarea += tile["area"]
numidsnotusedtotal = len(idsnotusedtotal)
# Make an extra bin pair for the leftover tiles
bigbin = Polygon([[0,0], [width,0], [width, height], [0, height]])
bigbinarea = bigbin.area
# change the big bin area according to the leftover tiles area
heightx = round(height * leftovertilesarea/bigbinarea)
bigbin = Polygon([[0,0], [width,0], [width, heightx], [0, heightx]])
scissorv = Polygon([[width/2-eps, -1], [width/2-eps, maxbins*heightx+1], [width/2+eps, maxbins*heightx+1], [width/2+eps, -1]])
bigbin = getTwoLargestSubPolygons(bigbin.difference(scissorv)) # cut in half vertically
numbin = 0
cutbins = [[], []]
randomizedindices = list(idsnotusedtotal)
shuffle(randomizedindices)
for m in range(len(bigbin)): # cut horizontally
rest = bigbin[m]
for i in range(maxbins):
scissorh = Polygon([[-1, (i+1)*heightx], [width+1, (i+1)*heightx], [width+1, (i+1)*heightx+eps], [-1, (i+1)*heightx+eps]])
temp = rest
temp = getTwoLargestSubPolygons(temp.difference(scissorh))
cutbins[m].append(getSmallestSubPolygon(temp).buffer(0.75*buffereps, 1, 2, 2))
rest = getLargestSubPolygon(temp)
for mirrored in [0,1]:
if mirrored:
m = -1
else:
m = 1
binbound = np.array(cutbins[mirrored][numbin].exterior.coords)
binbound = np.asarray([[(binbound[i,0]-min(binbound[:,0])/2),binbound[i,1]-min(binbound[:,1])] for i in range(binbound.shape[0])])
xpos = 0
ypos = heightx+buffereps # start placing the elements below the bin
yextent = 0
svg = "<svg xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\" width=\""+str(width)+"px\" height=\""+str(heightx)+"px\">"
svg += coordinatesToSVGString(binbound, 0, 0, str(maxbins+1), "bin")
while len(randomizedindices):
j = randomizedindices[0]
if len(idsnotusedtotal) <= int(numidsnotusedtotal - (mirrored+1)*numidsnotusedtotal/2): # put half the tiles here, half in the other bin
break
i = int(np.where(np.array(alltileskeys) == j)[0])
tile = alltiles[i]
if tile["width"] <= width:
if xpos + tile["width"] + 1 <= width: # there is space in this row
xdelta = m*(xpos+1)
ydelta = ypos
else: # new row
xdelta = 0
ypos += yextent
yextent = 0
ydelta = ypos
xpos = 0
svg += coordinatesToSVGString(np.array([[m*tile["coordinates"][k][0], tile["coordinates"][k][1]] for k in range(np.array(tile["coordinates"]).shape[0])]), xdelta, ydelta, str(tile["_id"]), "tile", tile["rot"], tile["centroidlatlon"])
yextent = max([yextent, tile["height"]])
xpos += tile["width"]+1
idsnotusedtotal.remove(tile["_id"])
k = int(np.where(np.array(randomizedindices) == j)[0])
del randomizedindices[k]
else:
print("Object "+str(way["_id"])+" was too wide (" +str(max(npwayxy[:, 0]))+ " pixel) and could not be placed.")
svg += "\n</svg>"
# Export
if mirrored:
with open(pathdatain + cityname + mode + "parking"+ "extra" +"min.svg", "w") as f:
f.write(svg)
else:
with open(pathdatain + cityname + mode + "parking"+ "extra" +"in.svg", "w") as f:
f.write(svg)
###Output
_____no_output_____
###Markdown
This has generated 2 files in {{pathdatain}}, called *extra*. Use SVGNest on these files. Move the files returned from SVGNest into {{pathdataout}}, and rename them to have the number {{maxbins}}+1. (so, 000m.svg becomes, for example, 007m.svg, if the last filename was 006m.svg) After SVGNest was executed a 2nd time, stitch back together all the parts
###Code
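# Added sketch (hypothetical paths, adjust to your setup): the extra-bin results downloaded from
# SVGNest are again named 000.svg / 000m.svg, so move them into pathdataout under the next index
# after the last regular bin (e.g. 000m.svg -> 007m.svg when the last regular file was 006m.svg).
import shutil
downloads_dir = os.path.expanduser('~/Downloads')  # assumption: SVGNest results were saved here
for suffix in ['.svg', 'm.svg']:
    src = os.path.join(downloads_dir, '000' + suffix)
    dst = os.path.join(pathdataout, str(maxbins).zfill(3) + suffix)
    if os.path.exists(src):
        shutil.move(src, dst)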
allmirrored = True
# There is a bug: For some cities, all non-big parts are mirrored. Not yet known why, so we are using this hack. Yeah..
# In this case just set allmirrored to True and execute this cell again (no need to run everything again).
swap = False
swapm = True
deltashifts = False
# Another issue, solved by swap and swapm: last and second last bins (before the extra bin) are swapped on one (or both) side. This is most likely due to very large big parts.
# deltashifts: sometimes the big parts are slightly shifted, fixed by deltashifts
# read SVG
numbins = maxbins+1
alltilesfinal = []
# Big parts
bigpartscoo = getCoordinatesFromSVG(pathdataout + "bigparts.svg", not allmirrored, 0)
for k in bigpartscoo:
bigpartscoo[k]["coordinates"] = [[bigpartscoo[k]["coordinates"][i][0]+width-int(allmirrored)*width, bigpartscoo[k]["coordinates"][i][1]] for i in range(len(bigpartscoo[k]["coordinates"]))]
alltilesfinal.append(bigpartscoo)
ypos = 0
xpos = 0
ypos -= 0.75*buffereps
xpos = width/2
# Rest
for j in range(numbins):
for mirrored in [0,1]:
i = j
if mirrored:
m = -1
if allmirrored:
tiles = getCoordinatesFromSVG(pathdataout + str(i).zfill(3) + ".svg", mirrored)
else:
if swapm and numbins >= 3:
if j == numbins-3:
i = numbins-2
if j == numbins-2:
i = numbins-3
tiles = getCoordinatesFromSVG(pathdataout + str(i).zfill(3) + "m.svg", mirrored)
else:
m = 1
if allmirrored:
tiles = getCoordinatesFromSVG(pathdataout + str(i).zfill(3) + "m.svg", mirrored)
else:
if swap and numbins >= 3:
if j == numbins-3:
i = numbins-2
if j == numbins-2:
i = numbins-3
tiles = getCoordinatesFromSVG(pathdataout + str(i).zfill(3) + ".svg", mirrored)
minx = float("inf")
maxx = 0
for key in tiles:
if tiles[key]["class"] == "tile":
npwayxy = np.array(tiles[key]["coordinates"])
minx = min(minx, mirrored*width/2+min([npwayxy[k,0] for k in range(npwayxy.shape[0])]))
maxx = max(maxx, mirrored*width/2+max([npwayxy[k,0] for k in range(npwayxy.shape[0])]))
maxx = width/2-maxx
if mirrored:
if allmirrored:
delta = minx
else:
delta = -minx
else:
if allmirrored:
delta = maxx
else:
delta = -maxx
if not deltashifts:
delta = 0
for key in tiles:
if tiles[key]["class"] == "tile":
npwayxy = np.array(tiles[key]["coordinates"])
if allmirrored:
npwayxy = [[npwayxy[k,0]+xpos-m*width/2-delta, npwayxy[k,1]+ypos] for k in range(npwayxy.shape[0])]
else:
npwayxy = [[npwayxy[k,0]+xpos-delta, npwayxy[k,1]+ypos] for k in range(npwayxy.shape[0])]
alltilesfinal.append({key: {"coordinates": npwayxy, "rot": tiles[key]["rot"], "centroidlatlon": tiles[key]["centroidlatlon"]}})
ypos += height
if allmirrored: # need to mirror all back
for i in range(len(alltilesfinal)):
tile = alltilesfinal[i]
for key in tile:
npwayxy = np.array(tile[key]["coordinates"])
tile[key]["coordinates"] = [[width-npwayxy[k,0], npwayxy[k,1]] for k in range(npwayxy.shape[0])]
alltilesfinal[i] = tile
# Export
svg = "<svg xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\" width=\""+str(width)+"px\" height=\""+str(height*(numbins-1)+heightx)+"px\">"
for j, tile in enumerate(alltilesfinal):
for i in tile:
svg += coordinatesToSVGString(np.array([[tile[i]["coordinates"][k][0], tile[i]["coordinates"][k][1]] for k in range(np.array(tile[i]["coordinates"]).shape[0])]), 0, 0, i, "", tile[i]["rot"], tile[i]["centroidlatlon"])
svg += "\n</svg>"
with open(pathdataout + "all.svg", "w") as f:
f.write(svg)
###Output
_____no_output_____ |
code/notebooks/.ipynb_checkpoints/mnist_ml-checkpoint.ipynb | ###Markdown
MNIST - Model training using Sagemaker https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-dg.pdfex1-preprocess-data-pull-data
###Code
#!pip install tensorflow==1.14
#!pip install matplotlib==2.2.2
#!pip install opencv-python==4.1.1.26
#!pip install --upgrade botocore
#!pip install --upgrade boto3
import os
import boto3
import re
import copy
import time
import io
import struct
from time import gmtime, strftime
from sagemaker import get_execution_role
###Output
_____no_output_____
###Markdown
1. Download dataset
###Code
%%time
import pickle, gzip, urllib.request, json
import numpy as np
# Load the dataset
urllib.request.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz",
"mnist.pkl.gz")
with gzip.open('mnist.pkl.gz', 'rb') as f:
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
print(train_set[0].shape)
###Output
(50000, 784)
CPU times: user 711 ms, sys: 390 ms, total: 1.1 s
Wall time: 3.8 s
###Markdown
**Check out the dataset**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (2,10)
for i in range(0, 10):
img = train_set[0][i]
label = train_set[1][i]
img_reshape = img.reshape((28,28))
imgplot = plt.imshow(img_reshape, cmap='gray')
print('This is a {}'.format(label))
plt.show()
###Output
This is a 5
###Markdown
2. Upload to S3
###Code
role = get_execution_role()
region = boto3.Session().region_name
bucket='imageunderstandingire' # Replace with your s3 bucket name
prefix = 'sagemaker/xgboost-mnist' # Used as part of the path in the bucket where you store data
%%time
def convert_data():
data_partitions = [('train', train_set), ('validation', valid_set), ('test',test_set)]
for data_partition_name, data_partition in data_partitions:
print('{}: {} {}'.format(data_partition_name, data_partition[0].shape,data_partition[1].shape))
labels = [t.tolist() for t in data_partition[1]]
features = [t.tolist() for t in data_partition[0]]
if data_partition_name != 'test':
examples = np.insert(features, 0, labels, axis=1)
else:
examples = features
#print(examples[50000,:])
np.savetxt('data.csv', examples, delimiter=',')
key = "{}/{}/examples".format(prefix,data_partition_name)
url = 's3://{}/{}'.format(bucket, key)
boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_file('data.csv')
print('Done writing to {}'.format(url))
convert_data()
###Output
train: (50000, 784) (50000,)
Done writing to s3://imageunderstandingire/sagemaker/xgboost-mnist/train/examples
validation: (10000, 784) (10000,)
Done writing to s3://imageunderstandingire/sagemaker/xgboost-mnist/validation/examples
test: (10000, 784) (10000,)
Done writing to s3://imageunderstandingire/sagemaker/xgboost-mnist/test/examples
CPU times: user 33.6 s, sys: 7.8 s, total: 41.4 s
Wall time: 51.4 s
###Markdown
3. Train model
###Code
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(boto3.Session().region_name, 'xgboost')
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
There is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:
get_image_uri(region, 'xgboost', '1.0-1').
###Markdown
**Set the S3 locations for the training data, validation data, and model output**
###Code
train_data = 's3://{}/{}/{}'.format(bucket, prefix, 'train')
validation_data = 's3://{}/{}/{}'.format(bucket, prefix, 'validation')
s3_output_location = 's3://{}/{}/{}'.format(bucket, prefix, 'xgboost_model_sdk')
print(train_data)
###Output
s3://imageunderstandingire/sagemaker/xgboost-mnist/train
###Markdown
3.1 Create instance of sagemaker.estimator.Estimator
###Code
xgb_model = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
train_volume_size = 5,
output_path=s3_output_location,
sagemaker_session=sagemaker.Session())
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
**Set hyperparamters**
###Code
xgb_model.set_hyperparameters(max_depth = 5,
eta = .2,
gamma = 4,
min_child_weight = 6,
silent = 0,
objective = "multi:softmax",
num_class = 10,
num_round = 10)
###Output
_____no_output_____
###Markdown
**Create the training input channels**
###Code
train_channel = sagemaker.session.s3_input(train_data, content_type='text/csv')
valid_channel = sagemaker.session.s3_input(validation_data, content_type='text/csv')
data_channels = {'train': train_channel, 'validation': valid_channel}
###Output
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
###Markdown
**Start training model**
###Code
xgb_model.fit(inputs=data_channels, logs=True)
###Output
2020-09-11 21:55:58 Starting - Starting the training job...
2020-09-11 21:56:00 Starting - Launching requested ML instances......
2020-09-11 21:57:06 Starting - Preparing the instances for training...
2020-09-11 21:57:54 Downloading - Downloading input data......
2020-09-11 21:58:58 Training - Training image download completed. Training in progress..[34mArguments: train[0m
[34m[2020-09-11:21:58:58:INFO] Running standalone xgboost training.[0m
[34m[2020-09-11:21:58:58:INFO] File size need to be processed in the node: 1122.95mb. Available memory size in the node: 8493.67mb[0m
[34m[2020-09-11:21:58:58:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:58:58] S3DistributionType set as FullyReplicated[0m
[34m[21:59:04] 50000x784 matrix with 39200000 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[2020-09-11:21:59:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[21:59:04] S3DistributionType set as FullyReplicated[0m
[34m[21:59:06] 10000x784 matrix with 7840000 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34m[21:59:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[0]#011train-merror:0.17074#011validation-merror:0.1664[0m
[34m[21:59:14] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:15] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[21:59:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:16] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 62 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21:59:17] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:18] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[21:59:19] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21:59:20] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[1]#011train-merror:0.12624#011validation-merror:0.1273[0m
[34m[21:59:21] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[21:59:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[21:59:22] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:23] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:24] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21:59:25] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:26] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21:59:27] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[2]#011train-merror:0.11272#011validation-merror:0.1143[0m
[34m[21:59:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:28] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[21:59:29] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:30] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:31] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:32] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[3]#011train-merror:0.10072#011validation-merror:0.1052[0m
[34m[21:59:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[21:59:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[21:59:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:37] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:38] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:39] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:40] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[4]#011train-merror:0.09216#011validation-merror:0.097[0m
[34m[21:59:41] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[21:59:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[21:59:42] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:43] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[21:59:44] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[21:59:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:45] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[21:59:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:46] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[5]#011train-merror:0.08544#011validation-merror:0.0904[0m
[34m[21:59:47] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[21:59:48] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[21:59:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:49] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 0 pruned nodes, max_depth=5[0m
[34m[21:59:50] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:51] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[21:59:52] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:53] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 42 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[6]#011train-merror:0.08064#011validation-merror:0.0864[0m
[34m[21:59:54] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[21:59:55] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[21:59:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[21:59:56] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[21:59:57] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[21:59:58] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 40 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[21:59:59] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22:00:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[7]#011train-merror:0.0769#011validation-merror:0.0821[0m
[34m[22:00:00] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 46 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[22:00:01] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[22:00:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22:00:02] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[22:00:03] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 44 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[22:00:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22:00:04] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 54 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[22:00:05] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22:00:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 58 extra nodes, 4 pruned nodes, max_depth=5[0m
[34m[22:00:06] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[8]#011train-merror:0.0731#011validation-merror:0.0809[0m
[34m[22:00:07] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[22:00:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[22:00:08] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 56 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22:00:09] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[22:00:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[22:00:10] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 48 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[22:00:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 52 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[22:00:11] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 38 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[22:00:12] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 60 extra nodes, 2 pruned nodes, max_depth=5[0m
[34m[22:00:13] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 50 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[9]#011train-merror:0.06942#011validation-merror:0.0773[0m
2020-09-11 22:00:22 Uploading - Uploading generated training model
2020-09-11 22:00:22 Completed - Training job completed
Training seconds: 148
Billable seconds: 148
###Markdown
4. Deploy model
###Code
xgb_predictor = xgb_model.deploy(initial_instance_count=1,
content_type='text/csv',
instance_type='ml.t2.medium'
)
# The two definitions below were missing in the original cell: a low-level SageMaker client and
# the name of the finished training job (attribute path assumes the SageMaker Python SDK v1).
sm = boto3.client('sagemaker')
training_job_name = xgb_model.latest_training_job.name
model_name = training_job_name + '-mod'
info = sm.describe_training_job(TrainingJobName=training_job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = sm.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
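# --- Added sketch (not in the original notebook): invoke the deployed endpoint ---
# Assumes the SageMaker Python SDK v1 CSV serializer; variable names are illustrative.
from sagemaker.predictor import csv_serializer
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
prediction = xgb_predictor.predict(test_set[0][0:1])  # first test image (784 pixel values)
print(prediction)  # returned as bytes, e.g. b'7.0' (the predicted digit)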
###Output
_____no_output_____ |
Tabular_data/Classification/Titanic/Titanic.ipynb | ###Markdown
Before beginningThis notebook was written in Google Colab.To see the interactive plots, please open the Colab link below. Overview Competition description [Titanic - Machine Learning from Disaster](https://www.kaggle.com/c/titanic)- Problem type: (Binary) classification - Predict the survival of the Titanic passengers- Evaluation metric: Accuracy Notebook DescriptionThis notebook provides a '**proper workflow**' for a Kaggle submission.The workflow is divided into three main steps.1. Data preprocessing2. Model selection (hyperparameter tuning, model combination, model comparison)3. Training the final model & prediction on the test setAt each stage, a detailed description of the work and an appropriate procedure are provided.Readers can learn a proper workflow for a Kaggle submission from this notebook and, using it as a basic structure, apply it to other competitions with small adjustments.**Warnings**:- The purpose of this notebook - This notebook focuses on the 'procedure' rather than the 'result', since I personally think that a result only has meaning when it is obtained through an appropriate procedure. - Still, because this is a competition, the score cannot be ignored: following this notebook, you will get a top-10% result (score: 0.77990) in this competition.- The readers this notebook is intended for - Readers who know the basic usage of data processing tools (e.g., numpy, pandas) - Readers who know the basic concepts of machine learning models 0. ConfigurationSet the configurations for this notebook
###Code
config = {
'data_name': 'Titanic',
'random_state': 2022
}
###Output
_____no_output_____
###Markdown
1. Data preprocessingThe data preprocessing works are divided into 8 steps here.Some of these steps are mandatory and some are optional.Optional steps are marked separately.It is important to go through each step in order.Be careful not to reverse the order. 1-1. Load DatasetLoad train-set and test-set on working environment > Download Data from Kaggle by using Kaggle APINavigate to https://www.kaggle.com. Then go to the [Account tab of your user profile](https://www.kaggle.com/me/account) and select Create API Token. This will trigger the download of kaggle.json, a file containing your API credentials.Then run the cell below to upload kaggle.json to your Colab runtime.
###Code
from google.colab import files
# Upload Kaggle API key (kaggle.json)
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Move kaggle.json into the folder where the API expects to find it.
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
%%bash
(mkdir Titanic
cd Titanic
kaggle competitions download -c titanic
)
import numpy as np
import pandas as pd
import os
train = pd.read_csv('/content/{}/train.csv'.format(config['data_name']))
test = pd.read_csv('/content/{}/test.csv'.format(config['data_name']))
###Output
_____no_output_____
###Markdown
> Concatenate the 'train' and 'test' data for preprocessingData preprocessing should be applied identically to the train-set and the test-set.To process both at once, exclude the response variable 'Survived' from 'train' and concatenate the remaining columns with 'test'.
###Code
all_features = pd.concat((train.drop(['Survived'], axis=1), test), axis=0)
###Output
_____no_output_____
###Markdown
1-2. Missing Value TreatmentMissing (NA) values in the data must be treated properly before model training.There are three main treatment methods:1. Remove the variables which have NA values2. Remove the rows (observations) which have NA values3. Impute the NA values with other values(a generic pandas sketch of these three options is shown after the missing-value check below)Which method is chosen is at the analyst's discretion.It is important to choose the method that is appropriate for the situation. > Check missing values in each variableFour variables (Age, Fare, Cabin, Embarked) have NA values
###Code
import missingno as msno
msno.bar(all_features, figsize=(5, 3))
###Output
_____no_output_____
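###Markdown
For reference, the three generic treatment options listed above look like this in pandas (toy example with hypothetical data; it is not applied to the Titanic data):
###Code
# Toy DataFrame with NA values, for illustration only
toy = pd.DataFrame({'a': [1.0, None, 3.0], 'b': ['x', 'y', None]})
print(toy.drop(columns=['b']))               # 1. remove a variable that has NA values
print(toy.dropna(axis=0))                    # 2. remove the rows that have NA values
print(toy.fillna({'a': toy['a'].median()}))  # 3. impute NA values (here: the column median)
###Output
_____no_output_____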
###Markdown
> Embarked- Method: Replace NA in 'Embarked' by mode value of 'Embarked' grouped by 'Pclass' The figure below shows that the distribution of 'Embarked' is different according to the Pclass.
###Code
dfg = all_features.groupby(['Pclass', 'Embarked']).size().reset_index()
dfg = dfg.rename(columns={0: 'Count'})
import plotly.express as px
px.bar(dfg, x='Pclass', y='Count', color='Embarked', width=400, height=400)
# Impute NA values in 'Embarked' by mode value of 'Embarked' grouped by 'Pcalss'
all_features['Embarked'] = all_features['Embarked'].fillna(all_features.groupby(['Pclass'])['Embarked'].transform(lambda x:x.value_counts().index[0]))
###Output
_____no_output_____
###Markdown
> Fare- Method: Replace NA in 'Fare' by **median** value of 'Fare' grouped by 'Pclass' The figure below shows that the distribution of Fare is different depending on the Pclass.(An ANOVA test is used to determine whether the differences between groups are significant.)
###Code
x = 'Pclass'
y = 'Fare'
import plotly.express as px
fig = px.box(all_features, x=x, y=y, color=x, width=400, height=400)
fig.show()
import statsmodels.api as sm
from statsmodels.formula.api import ols
model = ols('{} ~ {}'.format(y, x), data=all_features).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
all_features['Fare'] = all_features['Fare'].fillna(all_features.groupby('Pclass')['Fare'].transform('median'))
###Output
_____no_output_____
###Markdown
> Age- Method: Replace NA in 'Age' by the **median** value of 'Age' grouped by 'Title' - To do so, we first need to create a 'Title' feature extracted from 'Name'
###Code
# Extract Title from Name
all_features['Title'] = all_features['Name'].str.split('.').str[0].str.split(',').str[1].str.strip()
# Merge titles with similar meanings
title_Mr = ['Major', 'Col', 'Sir', 'Don', 'Jonkheer', 'Capt']
title_Mrs = ['Lady', 'the Countess', 'Dona']
title_Miss = ['Mlle', 'Mme', 'Ms']
all_features.replace(title_Mr, 'Mr', inplace=True)
all_features.replace(title_Mrs, 'Mrs', inplace=True)
all_features.replace(title_Miss, 'Miss', inplace=True)
###Output
_____no_output_____
###Markdown
The figure below shows that the distribution of Age is different depending on the Title.(An ANOVA test is used to determine whether the differences between groups are significant.)
###Code
x = 'Title'
y = 'Age'
import plotly.express as px
fig = px.box(all_features, x=x, y=y, color=x, width=400, height=400)
fig.show()
import statsmodels.api as sm
from statsmodels.formula.api import ols
model = ols('{} ~ {}'.format(y, x), data=all_features).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
# Impute NA values in 'Age' by the median of 'Age' grouped by 'Title'
all_features['Age'] = all_features['Age'].fillna(all_features.groupby('Title')['Age'].transform('median'))
###Output
_____no_output_____
###Markdown
> Cabin
- Method: Drop the variable, since there are too many NA values (1014 out of 1309) in 'Cabin'
###Code
all_features.drop('Cabin', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
> Check missing values again
Make sure there are no NA values left in the whole data
###Code
import missingno as msno
msno.bar(all_features, figsize=(5, 3))
assert not all_features.isnull().sum().any()
###Output
_____no_output_____
###Markdown
1-3. Adding New Features (*optional*)
New variables can be created from the given data; these are called 'derived variables'. Creating appropriate derived variables adds new information, which can (but does not always) have a positive effect on model performance.

Insight from the data (Ticket & Fare)
Through the observations below, we can confirm that passengers who share the same ticket value also share the same Fare. Via SibSp and Parch we can also find out whether a passenger has family on board. From the example below, it can be inferred that a family of three sharing the name 'Taussig' boarded the boat together: Mr. Emil would be the husband, Mrs. Emil his spouse, and Miss. Ruth their child. But the information that they form a group is not included in the data! This information might have a positive effect on model performance.
###Code
all_features[all_features['Ticket']=='110413']
###Output
_____no_output_____
###Markdown
Even when 'SibSp' & 'Parch' = 0, we can observe cases where Ticket and Fare match. In such cases, we can think of the passengers as a group of friends or colleagues rather than a family.
###Code
all_features[all_features['Ticket']=='110152']
###Output
_____no_output_____
###Markdown
> Create two derived variables
Based on the above observations, two additional variables are created:
- Group_type: indicates the group type of the passenger
- Group_size: indicates the size of the group
###Code
# 'Ticket_count': the number of passengers sharing the same ticket number
all_features['Ticket_count'] = all_features.groupby('Ticket')['Ticket'].transform('count')
# 'Fare_count': the number of passengers sharing the same Fare value
all_features['Fare_count'] = all_features.groupby('Fare')['Fare'].transform('count')
all_features[['Group_type','Group_size']]=None
# Assign the value of the Group_type variable based on specific conditions.
# 1. If SibSp + Parch > 0, Then Group_type = 'Family'
all_features.loc[all_features['SibSp'] + all_features['Parch'] > 0, 'Group_type'] = 'Family'
# 2. If SibSp + Parch = 0, Then Group_type = 'Single'
all_features.loc[all_features['SibSp'] + all_features['Parch'] == 0, 'Group_type'] = 'Single'
# 3. If (SibSp + Parch = 0) & (Ticket_count > 1) & (Fare_count > 1), Then Group_type = 'Ticket'
all_features.loc[(all_features['SibSp'] + all_features['Parch'] ==0) &
(all_features['Ticket_count']>1) &
(all_features['Fare_count']>1), 'Group_type'] = 'Ticket'
# Assign the value of the Group_size variable based on specific conditions.
# 1. If Group_type=='Family', Then Group_size = 1+ SibSp + Parch
all_features.loc[all_features['Group_type']=='Family','Group_size'] = 1 + all_features.loc[all_features['Group_type']=='Family', ['SibSp', 'Parch']].sum(axis=1)
# 2. If Group_type=='Single', Then Group_size = 1
all_features.loc[all_features['Group_type']=='Single','Group_size'] = 1
# 3. If Group_type=='Ticket', Then Group_size = Ticket_count
all_features.loc[all_features['Group_type']=='Ticket','Group_size'] = all_features.loc[all_features['Group_type']=='Ticket','Ticket_count']
###Output
_____no_output_____
###Markdown
1-4. Drop Variables that will not be used
Drop ['PassengerId', 'Name', 'Ticket', 'Ticket_count', 'Fare_count']
###Code
all_features.head()
all_features.drop(['PassengerId', 'Name', 'Ticket', 'Ticket_count', 'Fare_count'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
1-5. Variable (type) transformation
Specify the data type that matches the characteristics of each variable.
###Code
all_features.dtypes
# int -> object
all_features[['Pclass']] = all_features[['Pclass']].astype('object')
# object -> int
all_features['Group_size'] = all_features['Group_size'].astype('int64')
###Output
_____no_output_____
###Markdown
1-6. Dummify categorical variables
For linear modeling without regularization, the first or last dummy column should be dropped (to prevent linear dependency), but here, for the convenience of the factorization model used later, one-hot encoding is applied without dropping any columns. A toy illustration of the difference is given below.
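
A toy illustration on a hypothetical single-column frame (not the Titanic data) showing what `drop_first` changes:

```python
import pandas as pd

toy = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})

# drop_first=False keeps one indicator column per level (the choice used here)
print(pd.get_dummies(toy, drop_first=False))
#    Embarked_C  Embarked_Q  Embarked_S   -> the columns sum to 1 in every row

# drop_first=True removes one level; the dropped level becomes the implicit
# baseline, which avoids linear dependency in an unregularized linear model
print(pd.get_dummies(toy, drop_first=True))
#    Embarked_Q  Embarked_S
```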
###Code
data_set = pd.get_dummies(all_features, drop_first=False)
###Output
_____no_output_____
###Markdown
1-7. Scaling continuous variables
The float variables 'Age' and 'Fare' are measured in different units. MinMaxScaling maps each variable to the range 0 to 1, so that only relative information, not the absolute magnitude of the values, is considered. In addition, scaling is known to make parameter optimization more stable when training a model.
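
For a single column, the transform is simply $x' = (x - x_{min}) / (x_{max} - x_{min})$. A quick, self-contained check with arbitrary numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[10.0], [20.0], [40.0]])    # an arbitrary 'Fare'-like column
scaled = MinMaxScaler().fit_transform(x)

# Equivalent manual computation
manual = (x - x.min()) / (x.max() - x.min())
assert np.allclose(scaled, manual)        # [[0.], [0.333...], [1.]]
```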
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
data_set = scaler.fit_transform(data_set)
###Output
_____no_output_____
###Markdown
1-8. Split Train & Test set
###Code
n_train = train.shape[0]
X_train = data_set[:n_train].astype(np.float32)
X_test = data_set[n_train:].astype(np.float32)
y_train = train['Survived'].values.astype(np.int64)
print('Shape of X_train: {}'.format(X_train.shape))
print('Shape of X_test: {}'.format(X_test.shape))
###Output
Shape of X_train: (891, 22)
Shape of X_test: (418, 22)
###Markdown
1-9. Outlier Detection on Training data (*optional*)
Detect and remove outlier observations that exist in the train-set.
- Methodology: [Isolation Forest](https://ieeexplore.ieee.org/abstract/document/4781136/?casa_token=V7U3M1UIykoAAAAA:kww9pojtMeJtXaBcNmw0eVlJaXEGGICi1ogmeHUFMpgJ2h_XCbSd2yBU5mRgd7zEJrXZ01z2)
- How it works
  - Isolation Forest applies a decision tree that repeatedly splits the given data on a random criterion until only one observation remains in every terminal node (this is defined as 'isolation').
  - 'Normality' is defined from the number of splits needed for isolation; a smaller value means a higher degree of outlierness.
  - By applying this decision tree several times, the average of the measured 'normality' values is taken as the final 'normality' value.
- Assumptions
  - Outliers require relatively few splits to be isolated.
  - Normal data requires a relatively large number of splits to be isolated.
- Outlier determination
  - Whether an observation is an outlier is decided from its measured 'normality' value.
  - sklearn's IsolationForest uses 0 as the default decision boundary.
  - I, personally, think it is better to set the discriminant criterion by considering the 'distribution' of the normality values.
  - The details of the method are given below; a toy example follows.
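
A toy sketch (synthetic 2-D data, not the Titanic features) showing that an obviously isolated point receives a lower `decision_function` score, i.e. lower 'normality':

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X_cluster = rng.normal(0, 1, size=(200, 2))   # a dense, 'normal' cluster
X_outlier = np.array([[8.0, 8.0]])            # one clearly isolated point
X_toy = np.vstack([X_cluster, X_outlier])

iso = IsolationForest(n_estimators=100, random_state=0).fit(X_toy)
scores = iso.decision_function(X_toy)         # higher = more 'normal'

print(scores[:5].round(3))                    # cluster points: near or above 0
print(scores[-1].round(3))                    # isolated point: clearly negative
```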
###Code
from sklearn.ensemble import IsolationForest
clf = IsolationForest(
n_estimators=100,
max_samples='auto',
n_jobs=-1,
random_state=config['random_state'])
clf.fit(X_train)
normality_df = pd.DataFrame(clf.decision_function(X_train), columns=['normality'])
###Output
_____no_output_____
###Markdown
- The discriminant value
  - The discriminant value (threshold) is defined from the 1st quartile ($q_1$) and 3rd quartile ($q_3$) of the distribution of the measured normality values, with $k=1.5$:
$$threshold = q_1 - k(q_3 - q_1)$$
- Motivation
  - This discriminant method is adapted from Tukey's boxplot idea. For the distribution of any continuous variable, Tukey designates observations smaller than $q_1 - k(q_3 - q_1)$ or larger than $q_3 + k(q_3 - q_1)$ as outliers.
- How we do it
  - Our methodology does not apply this rule to a specific input variable, but to the obtained normality values.
  - That is, it is based on the assumption that an outlier will lie far to the left of the other observations in the measured normality distribution. A small worked example is given below.
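
A small worked example with hypothetical normality scores (illustrative only; the real scores come from the model above):

```python
import numpy as np

normality = np.array([0.02, 0.05, 0.08, 0.10, 0.12, -0.20])   # made-up scores

q1, q3 = np.quantile(normality, [0.25, 0.75])
threshold = q1 - 1.5 * (q3 - q1)   # q1 ~= 0.0275, q3 ~= 0.095 -> threshold ~= -0.074

# Observations below the threshold are flagged as outliers
print(normality[normality < threshold])   # here only the -0.20 point
```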
###Code
def outlier_threshold(normality, k=1.5):
q1 = np.quantile(normality, 0.25)
q3 = np.quantile(normality, 0.75)
threshold = q1 - k*(q3-q1)
return threshold
threshold = outlier_threshold(normality_df['normality'].values, k=1.5)
fig = px.histogram(normality_df, x='normality', width=400, height=400)
fig.add_vline(x=threshold, line_width=3, line_dash="dash", line_color="red")
fig.show()
import plotly.express as px
px.box(normality_df, x='normality', orientation='h', width=400, height=400)
###Output
_____no_output_____
###Markdown
Only observations whose normality value is greater than the threshold are left in the train-set.
###Code
X_train = X_train[normality_df['normality'].values>=threshold]
y_train = y_train[normality_df['normality'].values>=threshold]
print('{} observations are removed from train_set'.format(train.shape[0] - X_train.shape[0]))
###Output
0 observations are removed from train_set
###Markdown
2. Model Selection
Our goal is to build a model that predicts the probability of survival (y=1) given information about a passenger (x): $Pr(Y=1|X=x)$. This is a typical binary classification problem, and various machine learning models can be used. This notebook uses the following models:
- Logistic regression
- Support vector machine
- Random forest
- XGBoost
- Multi-layer perceptron
- Factorization machine

However, we have to choose one final methodology to make predictions on the test set. To do this, a fair evaluation of the models is essential. A fair evaluation must satisfy two conditions:
1. Optimal hyperparameters are selected for each model - otherwise, differences in model performance may simply reflect poorly chosen hyperparameter values.
2. The same evaluation method is applied to every model - otherwise, comparison between models is impossible.
Only when models are compared through an evaluation method satisfying both conditions can the final model be selected.

> Install Packages
###Code
! pip install tune_sklearn ray[tune] skorch
###Output
Successfully installed deprecated-1.2.13 ray-1.10.0 redis-4.1.4 skorch-0.11.0 tensorboardX-2.4.1 tune-sklearn-0.4.1
###Markdown
2-1. Hyper parameter tuning using Tune_SKlearn (Ray Tune)
- Package: tune_sklearn
  - This package makes it easy to apply [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) to sklearn models.
  - Ray Tune is a Python package that provides various hyperparameter tuning algorithms (HyperOpt, BayesianOptimization, ...).
- Tuning procedure
  - Define an appropriate search space for each model's hyperparameters.
  - 5-fold CV (Cross Validation) is performed for each hyperparameter combination proposed by the tuning algorithm (HyperOpt).
    - Training: train the model using the Scikit-Learn and Skorch packages.
    - Validation: evaluate the model using an appropriate evaluation metric.
  - The hyperparameter combination with the highest average CV score is designated as the optimal hyperparameter of the model.
  - Save this CV result and use it for model comparison.

> Make a dataframe to contain the CV results
###Code
model_list = []
for name in ['linear', 'svm', 'rf', 'xgb', 'mlp', 'fm']:
model_list.append(np.full(5, name))
best_cv_df = pd.DataFrame({'model': np.hstack((model_list)), 'log_loss':None, 'accuracy':None, 'best_hyper_param':None})
###Output
_____no_output_____
###Markdown
> Logistic regression
###Code
from tune_sklearn import TuneGridSearchCV
from sklearn.linear_model import SGDClassifier
# Define a search space
parameters = {
'max_iter': [1000],
'loss': ['log'],
'penalty': ['l2'],
'random_state': [config['random_state']],
'alpha': [1e-9, 1e-8, 1e-7, 1e-6],
}
# Specify the hyper parameter tuning algorithm
tune_search = TuneGridSearchCV(
SGDClassifier(),
parameters,
n_jobs=-1,
scoring=['neg_log_loss', 'accuracy'],
cv=5,
refit='accuracy', # target metric of competition
verbose=1
)
# Run hyper parameter tuning
X = X_train
y = y_train
tune_search.fit(X, y)
# Save the tuning results
model_name = 'linear'
## Save the optimal hyper parameter values
best_cv_df.loc[best_cv_df['model']==model_name, 'best_hyper_param'] = str(tune_search.best_params_)
## Save the CV results
cv_df = pd.DataFrame(tune_search.cv_results_)
cv_values = cv_df.loc[tune_search.best_index_, cv_df.columns.str.startswith('split')].values
best_cv_df.loc[best_cv_df['model']==model_name, 'log_loss'] = cv_values[:5]
best_cv_df.loc[best_cv_df['model']==model_name, 'accuracy'] = cv_values[5:10]
# Visualize the tuning results with parallel coordinate plot
tune_result_df = pd.concat([pd.DataFrame(tune_search.cv_results_['params']), cv_df.loc[:,cv_df.columns.str.startswith('mean')] ], axis=1)
import plotly.express as px
fig = px.parallel_coordinates(tune_result_df, color='mean_test_accuracy')
fig.show()
###Output
_____no_output_____
###Markdown
> Support vector machine
###Code
from tune_sklearn import TuneGridSearchCV
from sklearn.linear_model import SGDClassifier
# Define a search space
parameters = {
'max_iter': [1000],
'loss': ['hinge'],
'penalty': ['l2'],
'random_state': [config['random_state']],
'alpha': [1e-9, 1e-8, 1e-7],
'epsilon': [1e-9, 1e-8],
}
# Specify the hyper parameter tuning algorithm
tune_search = TuneGridSearchCV(
SGDClassifier(),
parameters,
n_jobs=-1,
scoring=['neg_log_loss', 'accuracy'],
cv=5,
refit='accuracy', # target metric of competition
verbose=1
)
# Run hyper parameter tuning
X = X_train
y = y_train
tune_search.fit(X, y)
# Save the tuning results
model_name = 'svm'
## Save the best hyper parameter values
best_cv_df.loc[best_cv_df['model']==model_name, 'best_hyper_param'] = str(tune_search.best_params_)
## Save the CV results
cv_df = pd.DataFrame(tune_search.cv_results_)
cv_values = cv_df.loc[tune_search.best_index_, cv_df.columns.str.startswith('split')].values
best_cv_df.loc[best_cv_df['model']==model_name, 'log_loss'] = cv_values[:5]
best_cv_df.loc[best_cv_df['model']==model_name, 'accuracy'] = cv_values[5:10]
# Visualize the tuning results with parallel coordinate plot
tune_result_df = pd.concat([pd.DataFrame(tune_search.cv_results_['params']), cv_df.loc[:,cv_df.columns.str.startswith('mean')] ], axis=1)
import plotly.express as px
fig = px.parallel_coordinates(tune_result_df, color='mean_test_accuracy')
fig.show()
###Output
_____no_output_____
###Markdown
> Random forest
###Code
from tune_sklearn import TuneSearchCV
from sklearn.ensemble import RandomForestClassifier
# Define a search space
parameters = {
'max_features': ['auto'],
'random_state': [config['random_state']],
'n_estimators': [100, 500, 1000],
'criterion': ['gini', 'entropy'],
'max_depth': [5, 10, 15],
}
# Specify the hyper parameter tuning algorithm
tune_search = TuneSearchCV(
RandomForestClassifier(),
parameters,
search_optimization='hyperopt',
n_trials=10,
n_jobs=-1,
scoring=['neg_log_loss', 'accuracy'],
cv=5,
refit='accuracy',
verbose=1,
random_state=config['random_state']
)
# Run hyper parameter tuning
X = X_train
y = y_train
tune_search.fit(X, y)
# Save the tuning results
model_name = 'rf'
# Save the tuning results
best_cv_df.loc[best_cv_df['model']==model_name, 'best_hyper_param'] = str(tune_search.best_params_)
## Save the CV results
cv_df = pd.DataFrame(tune_search.cv_results_)
cv_values = cv_df.loc[tune_search.best_index_, cv_df.columns.str.startswith('split')].values
best_cv_df.loc[best_cv_df['model']==model_name, 'log_loss'] = cv_values[:5]
best_cv_df.loc[best_cv_df['model']==model_name, 'accuracy'] = cv_values[5:10]
# Visualize the tuning results with parallel coordinate plot
tune_result_df = pd.concat([pd.DataFrame(tune_search.cv_results_['params']), cv_df.loc[:,cv_df.columns.str.startswith('mean')] ], axis=1)
import plotly.express as px
fig = px.parallel_coordinates(tune_result_df, color='mean_test_accuracy')
fig.show()
###Output
_____no_output_____
###Markdown
> XGBoost
###Code
from tune_sklearn import TuneSearchCV
from xgboost import XGBClassifier
# Define a search space
parameters = {
'n_estimators': [500, 1000],
'learning_rate': [0.001, 0.01, 0.1],
'min_child_weight': [1, 5, 10],
'gamma': [0.5, 2],
'subsample': [0.6, 1.0],
'colsample_bytree': [0.6, 1.0],
'max_depth': [10, 15, 20],
'objective': ['binary:logistic'],
'random_state': [config['random_state']]
}
# Specify the hyper parameter tuning algorithm
tune_search = TuneSearchCV(
XGBClassifier(),
parameters,
search_optimization='hyperopt',
n_trials=10,
n_jobs=-1,
scoring=['neg_log_loss', 'accuracy'],
cv=5,
refit='accuracy',
verbose=1,
random_state=config['random_state']
)
# Run hyper parameter tuning
X = X_train
y = y_train
tune_search.fit(X, y)
# Save the tuning results
model_name = 'xgb'
## Save the optimal hyper parameter values
best_cv_df.loc[best_cv_df['model']==model_name, 'best_hyper_param'] = str(tune_search.best_params_)
## Save the CV results
cv_df = pd.DataFrame(tune_search.cv_results_)
cv_values = cv_df.loc[tune_search.best_index_, cv_df.columns.str.startswith('split')].values
best_cv_df.loc[best_cv_df['model']==model_name, 'log_loss'] = cv_values[:5]
best_cv_df.loc[best_cv_df['model']==model_name, 'accuracy'] = cv_values[5:10]
# Visualize the tuning results with parallel coordinate plot
tune_result_df = pd.concat([pd.DataFrame(tune_search.cv_results_['params']), cv_df.loc[:,cv_df.columns.str.startswith('mean')] ], axis=1)
import plotly.express as px
fig = px.parallel_coordinates(tune_result_df, color='mean_test_accuracy')
fig.show()
###Output
_____no_output_____
###Markdown
> Multi-layer perceptron
###Code
import torch
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import EarlyStopping
from skorch.callbacks import Checkpoint
from tune_sklearn import TuneSearchCV
# Define a model structure
class MLP(nn.Module):
def __init__(self, num_inputs=X_train.shape[1], num_outputs=len(np.unique(y_train)), layer1=512, layer2=256, dropout1=0, dropout2=0):
super(MLP, self).__init__()
self.linear_relu_stack = nn.Sequential(
nn.Linear(num_inputs, layer1),
nn.LeakyReLU(),
nn.Dropout(dropout1),
nn.Linear(layer1, layer2),
nn.LeakyReLU(),
nn.Dropout(dropout2),
nn.Linear(layer2, num_outputs)
)
def forward(self, x):
x = self.linear_relu_stack(x)
return x
def try_gpu(i=0):
return f'cuda:{i}' if torch.cuda.device_count() >= i + 1 else 'cpu'
# Set model configurations
mlp = NeuralNetClassifier(
MLP(num_inputs=X_train.shape[1], num_outputs=len(np.unique(y_train))),
optimizer=torch.optim.Adam,
criterion=nn.CrossEntropyLoss,
iterator_train__shuffle=True,
device=try_gpu(),
verbose=0,
callbacks=[EarlyStopping(monitor='valid_loss', patience=5,
threshold=1e-4, lower_is_better=True),
Checkpoint(monitor='valid_loss_best')]
)
# Define a search space
parameters = {
'lr': list(np.geomspace(1e-4, 1e-1, 4)),
'module__layer1': [128, 256, 512],
'module__layer2': [128, 256, 512],
'module__dropout1': [0, 0.1],
'module__dropout2': [0, 0.1],
'optimizer__weight_decay': list(np.geomspace(1e-5, 1e-1, 5)),
'max_epochs': [1000],
'batch_size': [32, 64, 128]
}
def use_gpu(device):
return True if not device == 'cpu' else False
# Specify the hyper parameter tuning algorithm
tune_search = TuneSearchCV(
mlp,
parameters,
search_optimization='hyperopt',
n_trials=15,
n_jobs=-1,
scoring=['neg_log_loss', 'accuracy'],
cv=5,
refit='accuracy',
mode='max',
use_gpu = use_gpu(try_gpu()),
random_state=config['random_state'],
verbose=1,
)
# Run hyper parameter tuning
X = X_train
y = y_train
tune_search.fit(X, y)
# Save the tuning results
model_name = 'mlp'
## Save the best hyper parameter values
best_cv_df.loc[best_cv_df['model']==model_name, 'best_hyper_param'] = str(tune_search.best_params_)
## Save the CV results
cv_df = pd.DataFrame(tune_search.cv_results_)
cv_values = cv_df.loc[tune_search.best_index_, cv_df.columns.str.startswith('split')].values
best_cv_df.loc[best_cv_df['model']==model_name, 'log_loss'] = cv_values[:5]
best_cv_df.loc[best_cv_df['model']==model_name, 'accuracy'] = cv_values[5:10]
# Visualize the tuning results with parallel coordinate plot
tune_result_df = pd.concat([pd.DataFrame(tune_search.cv_results_['params']), cv_df.loc[:,cv_df.columns.str.startswith('mean')] ], axis=1)
tune_result_df.rename({
'callbacks__EarlyStopping__threshold':'Earlystoping_threshold',
'optimizer__weight_decay': 'weight_decay'
}, axis=1, inplace=True)
import plotly.express as px
fig = px.parallel_coordinates(tune_result_df, color='mean_test_accuracy')
fig.show()
###Output
_____no_output_____
###Markdown
> Factorization Machine

>> Preprocessing Data for implementing the Factorization Machine
Since the factorization machine uses an embedding layer, it requires the data type of all input variables to be 'int'. To take this into account, 'float' type variables are divided into several bins according to their values, and each value is replaced by the integer index of the bin it falls into (a small sketch is given below).
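
A minimal sketch of the binning step on a made-up float column (the real preprocessing is done by `prepro_for_fm` below):

```python
import numpy as np
import pandas as pd

values = np.array([0.05, 0.10, 0.30, 0.55, 0.90, 0.95])   # made-up float feature

# Choose the number of bins with the same 'sturges' rule used below
bin_size = len(np.histogram(values, bins='sturges')[0])

# pd.cut with labels=False returns the integer index of each value's bin,
# which can then be one-hot encoded and fed to the embedding layer
codes = pd.cut(values, bins=bin_size, labels=False)
print(bin_size, codes)
```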
###Code
def prepro_for_fm(X_train, X_test, bin_method='sturges'):
n_train = X_train.shape[0]
all = np.vstack((X_train, X_test))
col_num_uniq = np.apply_along_axis(lambda x: len(np.unique(x)), 0, all)
remain_iidx = (col_num_uniq<=2)
to_bin_iidx = (col_num_uniq>2)
all_remain = all[:,remain_iidx]
all_to_bin = all[:,to_bin_iidx]
for iter in range(all_to_bin.shape[1]):
bin_size = len(np.histogram(all_to_bin[:,iter], bins=bin_method)[0])
all_to_bin[:,iter] = pd.cut(all_to_bin[:,iter], bins=bin_size, labels=False)
all_to_bin_df = pd.DataFrame(all_to_bin).astype('object')
all_to_bin_array = pd.get_dummies(all_to_bin_df, drop_first=False).to_numpy()
all_array = np.hstack((all_to_bin_array, all_remain)).astype(np.int64)
field_dims = all_array.shape[1]
all_fm = np.vstack((np.apply_along_axis(lambda x: np.where(x==1), 1, all_array)))
return all_fm[:n_train], all_fm[n_train:], field_dims
X_train_fm, X_test_fm, field_dims = prepro_for_fm(X_train, X_test, bin_method='sturges')
import torch
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import EarlyStopping
from skorch.callbacks import Checkpoint
from tune_sklearn import TuneSearchCV
# Define a model structure
class FM(nn.Module):
def __init__(self, num_inputs=100, num_factors=20, output_dim=1):
super(FM, self).__init__()
self.embedding = nn.Embedding(num_inputs, num_factors)
self.fc = nn.Embedding(num_inputs, 1)
self.bias = nn.Parameter(torch.zeros((output_dim,)))
def forward(self, x):
square_of_sum = torch.sum(self.embedding(x), dim=1)**2
sum_of_square = torch.sum(self.embedding(x)**2, dim=1)
x = self.bias + self.fc(x).sum(1) + 0.5 * (square_of_sum - sum_of_square).sum(dim=1, keepdim=True)
return x
def try_gpu(i=0):
return f'cuda:{i}' if torch.cuda.device_count() >= i + 1 else 'cpu'
# Set model configurations
fm = NeuralNetClassifier(
FM(num_inputs=field_dims, num_factors=20, output_dim=1),
optimizer=torch.optim.Adam,
criterion=nn.BCEWithLogitsLoss,
iterator_train__shuffle=True,
device=try_gpu(),
verbose=0,
callbacks=[EarlyStopping(monitor='valid_loss', patience=5,
threshold=1e-4, lower_is_better=True),
Checkpoint(monitor='valid_loss_best')]
)
# Define a search space
parameters = {
'lr': list(np.geomspace(1e-3, 1, 4)),
'module__num_factors': [20, 50, 100, 150],
'optimizer__weight_decay': [0.05, 0.1, 0.5, 1],
'max_epochs': [1000],
'batch_size': [32, 64, 128]
}
def use_gpu(device):
return True if not device == 'cpu' else False
# Specify the hyper parameter tuning algorithm
tune_search = TuneSearchCV(
fm,
parameters,
search_optimization='hyperopt',
n_trials=14,
n_jobs=-1,
scoring=['neg_log_loss', 'accuracy'],
cv=5,
refit='accuracy',
mode='max',
use_gpu = use_gpu(try_gpu()),
random_state=config['random_state'],
verbose=1,
)
# Run hyper parameter tuning
X = X_train_fm
y = y_train.reshape(-1,1).astype('float32')
tune_search.fit(X, y)  # fit on the FM-preprocessed data defined above
# Save the tuning results
model_name = 'fm'
## Save the optimal hyper parameter values
best_cv_df.loc[best_cv_df['model']==model_name, 'best_hyper_param'] = str(tune_search.best_params_)
## Save the CV results
cv_df = pd.DataFrame(tune_search.cv_results_)
cv_values = cv_df.loc[tune_search.best_index_, cv_df.columns.str.startswith('split')].values
best_cv_df.loc[best_cv_df['model']==model_name, 'log_loss'] = cv_values[:5]
best_cv_df.loc[best_cv_df['model']==model_name, 'accuracy'] = cv_values[5:10]
# Visualize the tuning results with parallel coordinate plot
tune_result_df = pd.concat([pd.DataFrame(tune_search.cv_results_['params']), cv_df.loc[:,cv_df.columns.str.startswith('mean')] ], axis=1)
tune_result_df.rename({
'callbacks__EarlyStopping__threshold':'Earlystoping_threshold',
'optimizer__weight_decay': 'weight_decay'
}, axis=1, inplace=True)
import plotly.express as px
fig = px.parallel_coordinates(tune_result_df, color='mean_test_accuracy')
fig.show()
###Output
_____no_output_____
###Markdown
> Save CV results
###Code
import os
save_path = '/content/{}/Result'.format(config['data_name'])
if not os.path.exists(save_path):
os.makedirs(save_path)
file_path = os.path.join(save_path, 'best_cv_results.csv')
best_cv_df.to_csv(file_path, index=False)
###Output
_____no_output_____
###Markdown
2-2. Model Comparison based on CV results
Compare the CV results (measured using the best hyper parameter values). The figure below shows that: xgb > rf >= mlp >= fm >> linear > svm
###Code
fig = px.box(best_cv_df, x='model', y='accuracy', color='model', width=600)
fig.show()
###Output
_____no_output_____
###Markdown
2-3. Model Combination
Although it is possible to select a final model based on the above results, it has been observed that in many cases combining the predicted values of multiple models leads to improved prediction performance. ([Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts?](https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/qj.210?casa_token=OwyF2RbEywAAAAAA:gahpwGRdOWzLXyafYQQt_voHOF8MedTBLd1SBv4vkdT3ZTLVoKZQj3zl-KbrhSkX5x8CndeCxwBoL_-S))
For classification problems, the final probabilities are derived by combining the predicted 'probabilities' for each class in a proper way. This notebook uses the following two model combination methods:
1. Simple Average
2. Stacked Generalization (Stacking)
The combinations need to be compared against the single models (e.g., rf, xgb, ...), so their performance is measured by applying the same CV method as above.

> Simple Average
The simple average method derives the final probability value by averaging the predicted probability values for each class over multiple models. The top 4 models (rf, xgb, mlp, fm) of the above CV results are selected as base estimators for the combination. For example,
- Base Estimations
  - $P_{rf}(Y=1|X=x)$ = 0.75
  - $P_{xgb}(Y=1|X=x)$ = 0.80
  - $P_{mlp}(Y=1|X=x)$ = 0.85
  - $P_{fm}(Y=1|X=x)$ = 0.80
- Final Estimation
  - $P_{average}(Y=1|X=x)$ = 0.8 (= (0.75 + 0.80 + 0.85 + 0.80) / 4)
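
The combination itself is just an element-wise mean of the per-model probability matrices; a small sketch with the made-up numbers from the example above:

```python
import numpy as np

# Predicted [P(Y=0), P(Y=1)] for one passenger from four hypothetical base models
p_rf  = np.array([[0.25, 0.75]])
p_xgb = np.array([[0.20, 0.80]])
p_mlp = np.array([[0.15, 0.85]])
p_fm  = np.array([[0.20, 0.80]])

p_avg = np.stack([p_rf, p_xgb, p_mlp, p_fm]).mean(axis=0)
print(p_avg)             # [[0.2 0.8]] -> P(Y=1|x) = 0.8, as in the example above
print(p_avg.argmax(1))   # predicted class
```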
###Code
from sklearn.model_selection import KFold
from tqdm import notebook
from sklearn.metrics import accuracy_score
from sklearn.metrics import log_loss
from sklearn.metrics import roc_auc_score
def CV_ensemble(ensemble_name, ensemble_func, estimators, X_train, y_train, n_folds=5, shuffle=True, random_state=2022):
    kf = KFold(n_splits=n_folds, random_state=random_state, shuffle=shuffle)
res_list = []
for train_idx, valid_idx in notebook.tqdm(kf.split(X_train), total=kf.get_n_splits(), desc='Eval_CV'):
X_train_train, X_valid = X_train[train_idx], X_train[valid_idx]
y_train_train, y_valid = y_train[train_idx], y_train[valid_idx]
ensemble_pred_proba = ensemble_func(estimators, X_train_train, y_train_train, X_valid)
neg_log_loss = np.negative(log_loss(y_valid, ensemble_pred_proba))
accuracy = accuracy_score(y_valid, ensemble_pred_proba.argmax(axis=1))
res_list.append([ensemble_name, neg_log_loss, accuracy])
res_df = pd.DataFrame(np.vstack((res_list)))
res_df.columns = ['model', 'log_loss', 'accuracy']
return res_df.reset_index(drop=True)
def ensemble_average(estimators, X_train, y_train, X_test):
preds = []
num_estimators = len(estimators)
num_class = len(np.unique(y_train))
for iter in range(num_estimators):
try:
estimators[iter].module__num_factors
except: # for other models
estimators[iter].fit(X_train, y_train)
preds.append(estimators[iter].predict_proba(X_test))
else: # for factorization machine
X_train_fm, X_test_fm, _ = prepro_for_fm(X_train, X_test)
estimators[iter].fit(X_train_fm, np.reshape(y_train, (-1,1)).astype(np.float32))
preds.append(estimators[iter].predict_proba(X_test_fm).reshape(-1, num_class))
preds_stack = np.hstack((preds))
preds_mean = []
for iter in range(num_class):
col_idx = np.arange(iter, num_estimators * num_class, num_class)
preds_mean.append(np.mean(preds_stack[:,col_idx], axis=1))
avg_pred = np.vstack((preds_mean)).transpose()
return avg_pred
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
linear = SGDClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='linear', 'best_hyper_param'].values[0]))
svm = SGDClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='svm', 'best_hyper_param'].values[0]))
rf = RandomForestClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='rf', 'best_hyper_param'].values[0]))
xgb = XGBClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='xgb', 'best_hyper_param'].values[0]))
mlp = mlp.set_params(**eval(best_cv_df.loc[best_cv_df['model']=='mlp', 'best_hyper_param'].values[0]))
fm = fm.set_params(**eval(best_cv_df.loc[best_cv_df['model']=='fm', 'best_hyper_param'].values[0]))
estimators = [rf, xgb, mlp, fm]
estimators_name = 'rf_xgb_mlp_fm'
ensemble_name = 'average' + '_by_' + estimators_name
X = X_train
y = y_train
res_df = CV_ensemble(ensemble_name, ensemble_average, estimators, X, y, n_folds=5, shuffle=True, random_state=config['random_state'])
best_cv_df = best_cv_df.append(res_df).reset_index(drop=True)
fig = px.box(best_cv_df, x='model', y='accuracy', color='model', width=600)
fig.show()
###Output
_____no_output_____
###Markdown
> Stacked generalization (Stacking)
In [Stacked generalization](https://www.jair.org/index.php/jair/article/view/10228), the predicted class probabilities of the base estimators are treated as the input data, and a 'Meta Learner' fitted on them (with the y of each row as the response variable) produces the final probability.
- Any binary classification model can serve as the 'Meta Learner'. This notebook uses a ridge model (logistic regression with a ridge penalty) to prevent overfitting.
- The input data for the 'Meta Learner' is the base estimators' predicted probabilities on the validation folds of the CV.
- The trained meta-learner then predicts the final probabilities for the test-set, using the base estimators' predicted probabilities for the test-set as input data.

The total process, in order, is as follows:
1. (Base estimators) Run CV on the Train-set
2. (Meta Learner) Train on the CV predictions (predicted probabilities on the validation folds) with the corresponding y values
3. (Base estimators) Train on the Train-set
4. (Base estimators) Predict on the Test-set
5. (Meta Learner) Predict on the base estimators' Test-set predictions

For example,
- Base Estimations
  - $P_{rf}(Y=1|X=x)$ = 0.75
  - $P_{xgb}(Y=1|X=x)$ = 0.80
  - $P_{mlp}(Y=1|X=x)$ = 0.85
  - $P_{fm}(Y=1|X=x)$ = 0.80
- Meta Learner (logistic regression)
  - Parameters
    - intercept = -0.1
    - coefficients = [0.2, 0.9, 0.8, 0.3]
  - $P_{stack}(Y=1|X=x) = 0.8442 = sigmoid(-0.1 + 0.2*0.75 + 0.9*0.80 + 0.8*0.85 + 0.3*0.80)$

sklearn provides [StackingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html), but it cannot be applied to the skorch models, so the code below performs the stacking operation directly (a quick numeric check of the example follows).
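
A quick check of the arithmetic in the example above (purely illustrative numbers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

base_probs = np.array([0.75, 0.80, 0.85, 0.80])   # rf, xgb, mlp, fm
coef = np.array([0.2, 0.9, 0.8, 0.3])
intercept = -0.1

p_stack = sigmoid(intercept + coef @ base_probs)
print(round(float(p_stack), 4))                    # 0.8442
```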
###Code
from sklearn.model_selection import KFold
from tqdm import notebook
def stack_clf(estimators, X_train, y_train, X_test, n_folds=5, shuffle=True, random_state=2022):
final_estimator = estimators[-1]
num_estimators = len(estimators)-1
num_outputs = len(np.unique(y_train))
kf = KFold(n_splits=n_folds, random_state=random_state, shuffle=shuffle)
preds = []
y_valid_list = []
# Get CV predictions
for train_idx, valid_idx in notebook.tqdm(kf.split(X_train), total=kf.get_n_splits(), desc='Stack_CV'):
X_train_train, X_valid = X_train[train_idx], X_train[valid_idx]
y_train_train, y_valid = y_train[train_idx], y_train[valid_idx]
valid_preds = []
for iter in range(num_estimators):
try:
estimators[iter].module__num_factors
except: # for other models
estimators[iter].fit(X_train_train, y_train_train)
valid_preds.append(estimators[iter].predict_proba(X_valid))
else: # for factorization machine
X_train_train_fm, X_valid_fm, _ = prepro_for_fm(X_train_train, X_valid)
estimators[iter].fit(X_train_train_fm, y_train_train.reshape(-1,1).astype(np.float32))
valid_preds.append(estimators[iter].predict_proba(X_valid_fm).reshape(-1,num_outputs))
preds.append(np.hstack((valid_preds)))
y_valid_list.append(y_valid)
cv_preds = np.vstack((preds))
cv_y = np.hstack((y_valid_list))
# Get test predictions
test_preds =[]
for iter in range(num_estimators):
try:
estimators[iter].module__num_factors
except: # for other models
estimators[iter].fit(X_train, y_train)
test_preds.append(estimators[iter].predict_proba(X_test))
else: # for factorization machine
X_train_fm, X_test_fm, _ = prepro_for_fm(X_train, X_test)
estimators[iter].fit(X_train_fm, y_train.reshape(-1,1).astype(np.float32))
test_preds.append(estimators[iter].predict_proba(X_test_fm).reshape(-1,num_outputs))
test_preds_mat = np.hstack((test_preds))
# Fit the final estimator on cv prediction values
    # and make a prediction on the test prediction values
final_estimator.fit(cv_preds, cv_y)
print(' Estimated coefficients: {} \n intercept: {}'.format(final_estimator.coef_, final_estimator.intercept_))
pred_fin = final_estimator.predict_proba(test_preds_mat)
return pred_fin
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
# Base estimators
linear = SGDClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='linear', 'best_hyper_param'].values[0]))
svm = SGDClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='svm', 'best_hyper_param'].values[0]))
rf = RandomForestClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='rf', 'best_hyper_param'].values[0]))
xgb = XGBClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='xgb', 'best_hyper_param'].values[0]))
mlp = mlp.set_params(**eval(best_cv_df.loc[best_cv_df['model']=='mlp', 'best_hyper_param'].values[0]))
fm = fm.set_params(**eval(best_cv_df.loc[best_cv_df['model']=='fm', 'best_hyper_param'].values[0]))
estimators = [rf, xgb, mlp, fm]
estimators_name = 'rf_xgb_mlp_fm'
# Final estimator
clf = LogisticRegression(penalty='l2', max_iter=1000, random_state=config['random_state'])
estimators.append(clf)
ensemble_func = stack_clf
ensemble_name = 'stack_ridge_2' + '_by_' + estimators_name
# Run CV
X = X_train
y = y_train
res_df = CV_ensemble(ensemble_name, ensemble_func, estimators, X, y, n_folds=5, shuffle=True, random_state=config['random_state'])
best_cv_df = best_cv_df.append(res_df)
###Output
_____no_output_____
###Markdown
2-4. Model Comparison based on CV results including model combination methods
The figure below shows that 'xgb' performs best among the single models, and that among the model combination methods the stacking approach ('stack_ridge_2_by_rf_xgb_mlp_fm') performs best.
###Code
fig = px.box(best_cv_df, x='model', y='accuracy', color='model', width=600)
fig.show()
best_cv_df[['log_loss', 'accuracy']] = best_cv_df[['log_loss', 'accuracy']].astype('float32')
print(best_cv_df.groupby('model').mean()['accuracy'].sort_values(ascending=False))
import os
save_path = '/content/{}/Result'.format(config['data_name'])
if not os.path.exists(save_path):
os.makedirs(save_path)
best_cv_df.to_csv(os.path.join(save_path, 'best_cv_results.csv'), index=False)
###Output
_____no_output_____
###Markdown
3. Make a prediction with the best model
###Code
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
model_name = 'xgb'
xgb = XGBClassifier(**eval(best_cv_df.loc[best_cv_df['model']=='xgb', 'best_hyper_param'].values[0]))
xgb.fit(X_train, y_train)
pred = xgb.predict(X_test)
res_df = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': pred})
res_df.to_csv('{}.csv'.format(model_name), index=False)
###Output
_____no_output_____ |
Pedersen_N07/NaluWindRun01/postpro_n07.ipynb | ###Markdown
Pedersen N07 neutral case with heat flux
Comparison between Nalu-wind and Pedersen (2014)

**Note**: To convert this notebook to PDF, use the command
```bash
$ jupyter nbconvert --TagRemovePreprocessor.remove_input_tags='{"hide_input"}' --to pdf postpro_n07.ipynb
```
###Code
%%capture
# Important header information
naluhelperdir = './'
# Import libraries
import sys
import numpy as np
import matplotlib.pyplot as plt
sys.path.insert(1, naluhelperdir)
import plotABLstats
import yaml as yaml
from IPython.display import Image
from matplotlib.lines import Line2D
import matplotlib.image as mpimg
%matplotlib inline
# Nalu-wind parameters
rundir = '/ascldap/users/lcheung/GPFS1/2020/amrcodes/testruns/neutral_n07'
statsfile = 'abl_statistics.nc'
avgtimes = [82800,86400]
# Load nalu-wind data
data = plotABLstats.ABLStatsFileClass(stats_file=rundir+'/'+statsfile);
Vprof, vheader = plotABLstats.plotvelocityprofile(data, None, tlims=avgtimes, exportdata=True)
Tprof, theader = plotABLstats.plottemperatureprofile(data, None, tlims=avgtimes, exportdata=True)
# Pedersen parameters
datadir = '../pedersen2014_data'
ped_umag = np.loadtxt(datadir+'/Pedersen2014_N07_velocity.csv', delimiter=',')
ped_T = np.loadtxt(datadir+'/Pedersen2014_N07_temperature.csv', delimiter=',')
h = 757
# Plot the velocity profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
plt.plot(Vprof[:,4], Vprof[:,0]/h, 'b', label='Nalu-wind (Smag)')
plt.plot(ped_umag[:,0], ped_umag[:,1], 'r', label='Pedersen(2014)')
# Construct a legend
plt.legend()
plt.ylim([0, 1.5]);
plt.xlim([0, 12])
plt.xlabel('Velocity [m/s]')
plt.ylabel('Z/h')
#plt.grid()
plt.title('N07 Wind speed')
# Plot the temperature profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
plt.plot(Tprof[:,1], Tprof[:,0], 'b', label='Nalu-wind (Smag)')
plt.plot(ped_T[:,0], ped_T[:,1], 'r', label='Pedersen(2014)')
# Construct a legend
plt.legend()
plt.ylim([0, 1500]);
#plt.xlim([0, 12])
plt.xlabel('Temperature [K]')
plt.ylabel('Z [m]')
#plt.grid()
plt.title('N07 Temperature')
# Export the Nalu-Wind data for other people to compare
np.savetxt('NaluWind_N07_velocity.dat', Vprof, header=vheader)
np.savetxt('NaluWind_N07_temperature.dat', Tprof, header=theader)
# Extract Utau
utau, utheader = plotABLstats.plotutauhistory(data, None, tlims=avgtimes, exportdata=True)
print('Avg Utau = %f'%np.mean(utau[:,1]))
###Output
Avg Utau = 0.383651
###Markdown
Pedersen N07 neutral case with heat flux
Comparison between Nalu-wind and Pedersen (2014)

**Note**: To convert this notebook to PDF, use the command
```bash
$ jupyter nbconvert --TagRemovePreprocessor.remove_input_tags='{"hide_input"}' --to pdf postpro_n07.ipynb
```
###Code
%%capture
# Important header information
naluhelperdir = '../../utilities/'
# Import libraries
import sys
import numpy as np
import matplotlib.pyplot as plt
sys.path.insert(1, naluhelperdir)
import plotABLstats
import yaml as yaml
from IPython.display import Image
from matplotlib.lines import Line2D
import matplotlib.image as mpimg
%matplotlib inline
# Nalu-wind parameters
rundir = '/ascldap/users/lcheung/GPFS1/2020/amrcodes/testruns/neutral_n07'
statsfile = 'abl_statistics.nc'
avgtimes = [82800,86400]
# Load nalu-wind data
data = plotABLstats.ABLStatsFileClass(stats_file=rundir+'/'+statsfile);
Vprof, vheader = plotABLstats.plotvelocityprofile(data, None, tlims=avgtimes, exportdata=True)
Tprof, theader = plotABLstats.plottemperatureprofile(data, None, tlims=avgtimes, exportdata=True)
# Pedersen parameters
datadir = '../pedersen2014_data'
ped_umag = np.loadtxt(datadir+'/Pedersen2014_N07_velocity.csv', delimiter=',')
ped_T = np.loadtxt(datadir+'/Pedersen2014_N07_temperature.csv', delimiter=',')
h = 757
# Plot the velocity profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
plt.plot(Vprof[:,4], Vprof[:,0]/h, 'b', label='Nalu-wind (Smag)')
plt.plot(ped_umag[:,0], ped_umag[:,1], 'r', label='Pedersen(2014)')
# Construct a legend
plt.legend()
plt.ylim([0, 1.5]);
plt.xlim([0, 12])
plt.xlabel('Velocity [m/s]')
plt.ylabel('Z/h')
#plt.grid()
plt.title('N07 Wind speed')
# Plot the temperature profile comparisons
plt.figure(figsize=(10,8));
plt.rc('font', size=14)
plt.plot(Tprof[:,1], Tprof[:,0], 'b', label='Nalu-wind (Smag)')
plt.plot(ped_T[:,0], ped_T[:,1], 'r', label='Pedersen(2014)')
# Construct a legend
plt.legend()
plt.ylim([0, 1500]);
#plt.xlim([0, 12])
plt.xlabel('Temperature [K]')
plt.ylabel('Z [m]')
#plt.grid()
plt.title('N07 Temperature')
# Extract TKE and Reynolds stresses
REstresses, REheader = plotABLstats.plottkeprofile(data, None, tlims=avgtimes, exportdata=True)
# Extract the fluxes
tfluxes, tfluxheader = plotABLstats.plottfluxprofile(data, None, tlims=avgtimes, exportdata=True)
# Extract the fluxes
sfstfluxes, sfstfluxheader= plotABLstats.plottfluxsfsprofile(data, None, tlims=[avgtimes[-1]-1, avgtimes[-1]], exportdata=True)
# Extract Utau
avgutau = plotABLstats.avgutau(data, None, tlims=avgtimes)
print('Avg Utau = %f'%avgutau)
# Calculate the inversion height
zi, utauz = plotABLstats.calcInversionHeight(data, [1400.0], tlims=avgtimes)
print('zi = %f'%zi)
# Export the Nalu-Wind data for other people to compare
np.savetxt('NaluWind_N07_velocity.dat', Vprof, header=vheader)
np.savetxt('NaluWind_N07_temperature.dat', Tprof, header=theader)
np.savetxt('NaluWind_N07_reynoldsstresses.dat', REstresses, header=REheader)
np.savetxt('NaluWind_N07_temperaturefluxes.dat', tfluxes, header=tfluxheader)
np.savetxt('NaluWind_N07_sfstemperaturefluxes.dat', sfstfluxes, header=sfstfluxheader)
# Write the YAML file with integrated quantities
import yaml
savedict={'zi':float(zi), 'ustar':float(avgutau)}
f=open('istats.yaml','w')
f.write('# Averaged quantities from %f to %f\n'%(avgtimes[0], avgtimes[1]))
f.write(yaml.dump(savedict, default_flow_style=False))
f.close()
# Extract Utau
utau, utheader = plotABLstats.plotutauhistory(data, None, tlims=avgtimes, exportdata=True)
print('Avg Utau = %f'%np.mean(utau[:,1]))
###Output
Avg Utau = 0.383651
|
notebooks/CompoundCalculator.ipynb | ###Markdown
CompoundCalculator 20210705
Given a list of base compound names and molecular weights, this notebook calculates possible derivatives (e.g. metabolites) and adds adducts and losses, resulting in a list of possible ion masses with labels. These target ion lists are used with the Match notebook to explain peaks in peak lists and are part of Multi-layered Analysis (MLA).
In MLA, a peak list is matched to a target ion list; matches are visualized/verified and the residual spectrum is matched against a new target list modified by adding more compounds or adducts. These are added so that combinations with earlier targets are generated.
###Code
from collections import defaultdict
from itertools import groupby
import datetime
import os
import re
###Output
_____no_output_____
###Markdown
Class and function definitions
------------------------------
The basic entity is a 'Composition'... NB. this is not an elemental composition but simply a text label, a count, a root name and a mass. When compositions are combined, the labels are concatenated (using a specified separator character) and the masses are added. The root name is used to track compounds and can be updated following modification.
###Code
from dataclasses import dataclass
@dataclass
class Composition:
Name: str = ""
Count: int = 1
Mass: float=-1
Root: str = "" #the composition this on is based on - for tracking
# For adducts the mass values are the 'Effective Adduct Mass'
# The dictionary can be changed with: Composition.Mods = {new dictionary}
Mods = {'OH':15.99492,
'COOH':29.97418, #COOH is CH3->COOH, i.e. +O2, -H2)
'Gluc':176.032088,
'Sulphate':79.956815,
'Hex':180.0633,
'C6H10O5':162.052823,
'H2O':-18.010565, # losses are negative
'CO2':-43.989829,
'CO':-27.994915,
'HCOOH':-46.005479,
'HCl':-35.976678,
'H2':-2.015650,
'Rib':-132.0425,
'C2H4O2':60.021129, # neutrals can be added directly
'CH2O2':46.005479,
'CHO2': 44.997654,
'NH3':17.026549, #'Effective adduct masses'
'Na-H':21.981944,
'K-H':37.955881,'K*H': 39.9540, #41K - H
'Ca-2H': 37.946941,
'Ba-2H':135.889597,
'Fe-3H':52.910913,
'Fe-2H': 53.919286,
'Be-2H':6.99653,
'Mg-2H':21.96939,
'Al-3H':23.95806,
'Ti-2H':45.93229,
'V-2H':48.92831,
'Mn-2H':52.92239,
'Ni-2H':55.91969,
'Co-2H':56.91754,
'Cu-2H':60.91395,
'Zn-2H':61.91349,
'Ge-2H':71.90553,
'Sr-2H':85.88996,
'Zr-4H':85.87339,
'Mo-3H':94.88193,
'Ag-H':105.89727,
'Cd-2H':111.88771,
'Tl-5H':199.93530,
'Pb-2H':205.96100,
'Bi-3H':205.95692,
}
def __init__(self, name, count, mass=None, root=None):
self.Name = f'{name}' if count == 1 else f'({name}){count}'
        self.Count = 1 # there's only one of these even if the 'count' (really a multiplier) is greater
self.Mass = mass if mass else self.Mods[name]*count
if root:
self.Root = root
else:
self.Root = name
# Make the Composition from a (Name, Count) tuple
@classmethod
def from_tuple(cls, t):
return Composition(t[0],t[1])
# make a composition from a list of (Name,Count)tuples
@classmethod
def from_tuple_list(cls, t_list):
comp = None
for t in t_list:
if not comp:
comp = Composition.from_tuple(t) #create a comp from the first in the list so we can append others to it
else:
comp2 = Composition.from_tuple(t)
comp = comp.add_comp(comp2, sep='.')
return comp
@classmethod
def proton(cls):
return Composition('H+', 1, 1.00727)
# Some basic sanity checks...
@classmethod
def test(cls):
print('Proton: ', Composition.proton())
a = Composition('Na-H',2)
print('Normal init:', a)
b = Composition.from_tuple(('K-H',2))
print('From tuple:', b)
ab = a.add_comp(b, sep='.')
print('From merge:', ab)
t_list = [('Na-H',2),('K-H',2), ('NH3', 1)]
abc = Composition.from_tuple_list(t_list)
print('From tuple list:', abc)
# prints the current list of available modifications
@classmethod
def get_mods_as_strings(cls):
for label in cls.Mods:
print(label, cls.Mods[label])
def get_proton_comp(self, z):
if z == 1:
name = 'H+'
else:
name = f'{z}H+'
comp_p = Composition(name, 1, 1.00727 * z)
return comp_p
def protonate(self):
return self.add_comp(Composition('H+', 1, 1.00727), sep='.')
def deprotonate(self):
return self.add_comp(Composition('[-H+]-', 1, -1.00727), sep='.')
def make_copy(self, mult=1):
return Composition(self.Name, self.Count*mult, self.Mass*mult, self.Root)
def label(self):
return self.Name
# Merge two compositions to generate a new one with a new mass
def add_comp(self, comp1, sep='_', z=1):
new_name = self.label() + sep + comp1.label()
new_mass = (self.Mass + comp1.Mass)/z
# print(self.Mass, comp1.Mass, new_name, z)
return Composition( new_name, 1, root=self.Root, mass=new_mass)
Composition.test()
#Composition.get_mods_as_strings()
# recursive routine to find combinations
def get_combs(maxima, item_count, pos, seed, take, res):
"""
The idea is that the number of each composition to evaluate can be written as a list of integers, e.g. [1,0,0], [0,1,0].
We process each entry successively, setting the value to the number we need to take or the maximum allowed for that entry;
the number to take for the subsequnt entry is based on the number remaining from the first. E.g. if we are to take 5 and 3
are used for the first entry, we pass 2 to the next. We stop when tke gets to 0 or when we run out of entries,
"""
if take == 0:
return
elif take > maxima[pos]:
this_take = maxima[pos]
else:
this_take = take
while this_take >= 0:
# clear the rest of the seed and set this position's value
for i in range(pos, len(seed)): seed[i] = 0
seed[pos] = this_take
# set up for next level
next_take = take - this_take
next_pos = pos + 1
if not next_take: # or next_pos == item_count: # nothing more to add, so save a copy of the current seed
res.append(list(seed)) # copy the seed
elif next_pos == item_count:
break
else:
get_combs(maxima, item_count, next_pos, seed, next_take, res)
this_take -=1
def get_comps_as_str(cleaned_list):
"""
Process the list of adduct limits (cleaned to remove duplicates and entries with zero counts) to generate
    a simple string of adducts and counts. We remove hydrogen losses if present - these are identified by "-H"
    where the "H" can be followed or preceded by a number, n. Note: we require the minus sign since an adduct may contain
    H atoms, e.g. HCOOH or C2H3O2. If the last char is a digit (as in C2H3O2) and the count is > 1,
we add 'x' between the adduct and count
"""
res_str = ""
for adduct, count in cleaned_list:
if not count: continue
a = re.sub("-\d*H\d*", "", adduct) # this removes and optional number of H from after or before the H
if count == 1:
res_str += a
elif a[-1].isdigit():
res_str += f'{a}x{count}'
else:
res_str += f'{a}{count}'
return res_str
def make_combinations(limit_list, max_combinations):
"""
Sets up for the recursive routine by getting and cleaning the list of limits, generating a list of integers
corresponding to the maximum number of each composition, and calling get_combs with take counts of 1, 2, 3...max
Returns a list of (adduct, count) tuples and a string that summarizes the list, generated by removing hydrogen losses from
the adduct strings but only if there is a minus sign. The summary string is in the order the adducts are encountered in the
limit_list and is not sorted further
"""
# first we make sure the compositions are unique and limits are non-zero
    # this is needed because the user may specify the same composition more than once, which would
# cause it to be treated as a separate limit
cleaned = defaultdict(int)
# create a dictionary of {comp:limit}; if the comp is already present the limit will be added
for (c,l) in limit_list:
if l > 0:
cleaned[c] += l
# convert the cleaned dict to a list and then into lists of comps and maxima
clean_list = [(c, cleaned[c]) for c in cleaned]
print(clean_list)
comps, limits = zip(*clean_list)
# get a list of combinations; each combination is a list of the counts for the composition at that index
item_count = len(limits) # number of entries in the limit list
seed = [0]*item_count
res = [] # this will hold the lists of integers representing the count of each Composition
# take 1, 2, 3...max_combinations items and append to res[]
for take in range(1, max_combinations+1):
get_combs(limits, item_count, 0, seed, take, res)
# finally generate a list of the actual compostions, i.e [('x',2), ('y',3)] etc.
# by combining the compositions and each list of counts
combs=[]
for r in res:
c = [(comps[i], r[i]) for i in range(item_count) if r[i] > 0]
combs.append(c)
return combs,get_comps_as_str(clean_list)
# Test code
# Note: x is deliberately present twice
combs, comps_as_str = make_combinations([('x-2H', 2), ('y-H', 2), ('CH2O2', 2), ('q-H3',2), ('x-2H',1)], 3)
print(len(combs), 'should be 31')
print(comps_as_str, 'should be x3y2CH2O2x2q2')
# for c in combs:
# print(c)
def add_mods(compounds, limits, sep='_', update_root=False):
"""
Adds modifications to each compound in the list returning the new compound list.
    The modifications are provided as a list of (mod, max count) tuples.
    By default the root is not updated, so it stays the same as the original compound, but if update_root is True it
    is changed to the new compound. This allows the root to reflect the compounds at a different level, e.g. after phase 1.
"""
mods = []
# Make the compounds by copying the base and adding the possible mods
for c in compounds:
for l in limits:
for i in range(l[1]):
new_comp = c.make_copy().add_comp(Composition(l[0], i+1), sep=sep)
if update_root:
new_comp.Root = new_comp.Name
mods.append(new_comp)
#print(new_comp)
compounds += mods
return compounds
# convert compositions to a printable string
def limits_as_string(limits):
"""
    Converts the composition limits for a particular type (adducts, losses, phase 1...) to a string.
Compositions can be switched off by setting the limit to zero so we skip those
"""
    non_zero_limits = [l for l in limits if l[1] > 0] # a list of compositions with limit > 0
if len(non_zero_limits) == 0:
return ""
else:
desc = ",".join([f'{l}' for l in non_zero_limits])
return desc
def get_comp_adduct_str(comp_names, mult_limit, hetero_dimers, adduct_str, max_adducts):
"""
Builds a string describing the compounds and adducts
"""
#Build the output name
c_a_str = f'{comp_names}'
c_a_str += f'_m{mult_limit}' if mult_limit else ""
c_a_str += 'h' if hetero_dimers else ""
c_a_str += f'_{max_adducts}-{adduct_str}' if adduct_str else ""
return c_a_str
# generates a unique file name given the parameters and a string representing the date
# if xic_width is non zero the user wants a list of masses and widthes for use with PeakView
def get_ouput_file_name(comp_names, ionization, time_str, include_date_in_file_name, \
mult_limit, hetero_dimers, adduct_str, max_adducts, xic_width):
"""
Generates a file name based on the compounds used (as a string Comp1_comp2.. etc.) and the polarity
with additions indicating the file is intended to extract XICs in PeakView and the date/time if
required; the format used by the main code is YYMMDD_HHMMSS
"""
polarity = 'neg' if ionization == "negative" else 'pos'
wants_xic = xic_width > 0
#Build the output name
base_name = get_comp_adduct_str(comp_names, mult_limit, hetero_dimers, adduct_str, max_adducts)
base_name += f' {polarity}'
if wants_xic:
base_name += ' xic'
if include_date_in_file_name:
base_name += ' ' + time_str
return wants_xic, base_name + '.txt'
###Output
_____no_output_____
###Markdown
Setup
-----
Provide the base compound information and other parameters. The base compounds are supplied as a list of (name, mass) tuples. The mass need not be that of a real known compound but can be an observed and unexplained peak, so that its potential derivatives are generated. All user-defined parameters are set here so that, once they are set, the code can be executed with 'Run selected cell and all below'.

Shared path
###Code
# Define shared path for data files
# This allows the Calculator and Match notebooks to easily share data
# This is a platform independent way of defining a path, but Windows users must start with 'C:'
shared_path = os.sep + os.path.join('Users','ronbonner','Data', 'SharedData')
print(shared_path)
###Output
/Users/ronbonner/Data/SharedData
###Markdown
Compounds and adducts
###Code
# Define the compound(s) we want to work with
# can be known compounds or unknown observed peaks, here treated as MH+ by subtracting the mass of H+
base_compounds = [('DiMeSA', 146.057909)] # must be a list
# base_compounds = [('Guan', 283.091669), # Guanosine
# base_compounds = [('x116', 116.0711-1.00727), ('x114', 114.0668-1.00727),('x132', 132.07690-1.00727),
# ('y114', 114.09040-1.00727), ('x190', 190.11950-1.00727)
# ]
# Define the limits for metabolites and adducts...
# Defining this way is not required but allows metabolite and adduct sets to be easily changed depending on polarity.
# Unwanted compositions can be removed or the limit can be set to zero
ionization = 'positive' # only 'negative' changes the settings...anything else is 'positive'
phase1_limits = [('OH', 0), ('COOH', 0)] # metabolite modifications - phase 1
if ionization == 'negative':
phase2_limits = [('Gluc', 1), ('Sulphate', 0)]
adduct_limits = [('Na-H', 2), ('K-H', 2), ('C2H4O2',1), ('CH2O2', 1)]
loss_limits = [('H2O',0), ('CO2',0)]
else:
phase2_limits = [('Gluc', 0)]
adduct_limits = [('Na-H', 3), ('K-H',0), ('K*H',0), ('NH3',0), ('Ca-2H', 2)]
loss_limits = [('H2O',1), ('HCOOH', 0), ('Am', 0), ('Rib', 0)]
# it can be useful to summarize here - to allow review before proceeding
print('Adducts:', get_comps_as_str(adduct_limits))
print('Losses:', get_comps_as_str(loss_limits))
###Output
Adducts: Na3Ca2
Losses: H2O
###Markdown
Parameters
###Code
multimer_limit = 3 # maximum multimer count
max_adduct_count = 5 # total number of adducts allowed
include_hetero_dimers = False # if True, calculate dimers of *different* compounds
###Output
_____no_output_____
###Markdown
Output
###Code
output_mass_limit = 1000 # masses greater than this are not written to the file
xic_width = 0.0 # if 0 the normal output form is used...alternative, e.g. 0.01, to generate the PeakView compatible form
save_ion_list = True # write the results a file (or print thm here)
include_date_in_file_name = False #include the date_time in the file name
# Generate the output_path; optional - add a subfolder to the shared path
# otherwise use: data_path = shared_path
data_path = os.path.join(shared_path,'Test')
print(data_path)
###Output
/Users/ronbonner/Data/SharedData/Test
###Markdown
Step 1 - Adduct generation---------------------------Generate a list of possible adduct forms by building all combinations of adducts (up to the specified limits) and keeping only the unique forms (i.e. as far as we are concerned, a+b+a is the same as a+a+b). Note: this approach would also work if we wanted to allow combinations of the metabolites. These forms will be added to each compound.
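As a rough illustration of the deduplication idea (a standalone sketch only; the `make_combinations` helper used in the next cell is defined earlier in this notebook and may differ in detail), unique adduct multisets can be enumerated with `itertools`. For the non-zero positive-mode limits used below, this simplified enumeration also yields 11 forms:

```python
# Sketch: enumerate unordered adduct combinations up to a total count, honouring
# per-adduct limits. Zero-limit adducts are omitted since they can never appear.
from itertools import combinations_with_replacement

adduct_limits = [('Na-H', 3), ('Ca-2H', 2)]   # non-zero limits from the positive-mode settings
max_adduct_count = 5

names = [name for name, _ in adduct_limits]
limits = dict(adduct_limits)

forms = []
for n in range(1, max_adduct_count + 1):
    for combo in combinations_with_replacement(names, n):
        if all(combo.count(name) <= limits[name] for name in set(combo)):
            forms.append(combo)

print(len(forms), 'unique adduct forms')      # 11 for these limits
```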
###Code
adduct_combs, adducts_as_str = make_combinations(adduct_limits, max_adduct_count)
adducts_as_str += '-' + get_comps_as_str(loss_limits) # add losses
adduct_comps = [Composition.from_tuple_list(c) for c in adduct_combs]
adduct_comps = sorted(adduct_comps, key=lambda x: x.Mass)
print(len(adduct_comps),'adduct forms')
print(adducts_as_str)
# for a in adduct_comps: # to view compositions
# print(a)
###Output
[('Na-H', 3), ('Ca-2H', 2)]
11 adduct forms
Na3Ca2-H2O
###Markdown
Step 2 - Compound generation-----------------------------We convert the base compound list to a list of compositions and then successively apply the various modifications, generating extended compound lists, in phase order. Finally we generate the multimers and, if desired, the heterodimers.
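For the heterodimer step, the nested index loop in the next cell visits each unordered pair of distinct compounds exactly once; the same pairing can be expressed with `itertools.combinations`, shown here as a sketch with placeholder names standing in for the notebook's `Composition` objects:

```python
# Sketch: unique unordered pairs of distinct compounds.
from itertools import combinations

compound_names = ['A', 'B', 'C']              # placeholders for Composition objects
hetero_pairs = list(combinations(compound_names, 2))
print(hetero_pairs)                            # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```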
###Code
# Make the compounds by copying the base and adding the possible mods
# The root is set equal to the name unless explicitly specified
compounds = [Composition(name, 1, mass) for name, mass in base_compounds]
# If update_root is True, the root name is changed to the modified name; otherwise it is left alone
# This allows the user to choose whether to keep the root as the base compound or change it to a modified form
compounds = add_mods(compounds, phase1_limits, update_root=True)
print(len(compounds), 'compounds after phase 1')
compounds = add_mods(compounds, phase2_limits, update_root=True)
print(len(compounds), 'after phase 2')
multimers = []
for c in compounds:
for m in range(2, multimer_limit+1):
new_comp = c.make_copy(m)
multimers.append(new_comp)
if include_hetero_dimers:
for i, c in enumerate(compounds):
for j in range(i+1, len(compounds)):
new_comp = c.make_copy()
new_comp_2 = compounds[j].make_copy()
new_comp = new_comp.add_comp(new_comp_2, sep='+')
multimers.append(new_comp)
compounds += multimers
print(len(compounds), 'with multimers')
compounds = add_mods(compounds, loss_limits, sep='-')
print (len(compounds), 'after losses')
# for c in compounds:
# print(c)
###Output
1 compounds after phase 1
1 after phase 2
3 with multimers
6 after losses
###Markdown
Step 3 - Generate ion forms----------------------------We now add all the adduct forms to each of the compounds. The approach relies on adducts being formed by replacing labile protons, so the forms are independent of the polarity; the final ion is determined by applying a charging agent, i.e. adding or subtracting protons.
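For singly charged ions the charging step is simple arithmetic: add the mass of a proton for positive mode, subtract it for negative mode. A minimal sanity-check sketch (using the standard proton mass rather than anything taken from the `Composition` class):

```python
# Sketch: expected m/z for singly protonated/deprotonated species.
PROTON_MASS = 1.007276   # Da, standard value

def mz_positive(neutral_mass):
    """[M+H]+ of a neutral molecule."""
    return neutral_mass + PROTON_MASS

def mz_negative(neutral_mass):
    """[M-H]- of a neutral molecule."""
    return neutral_mass - PROTON_MASS

dimesa = 146.057909      # DiMeSA neutral mass from the Setup cell
print(f'[M+H]+ {mz_positive(dimesa):.4f}   [M-H]- {mz_negative(dimesa):.4f}')
```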
###Code
ion_forms = []
# now we add each compound on its own and then with the adducts
for c in compounds:
# add the base compound, with a proton added or subtracted depending on the ionization mode
new_comp = c.make_copy()
if ionization == 'negative':
new_comp = new_comp.deprotonate()
else:
new_comp = new_comp.protonate()
ion_forms.append(new_comp)
# then add the adduct forms
for a in adduct_comps:
new_comp = c.make_copy().add_comp(a, sep='.')
if ionization == 'negative':
new_comp = new_comp.deprotonate()
else:
new_comp = new_comp.protonate()
ion_forms.append(new_comp)
print(len(ion_forms), 'ion forms')
###Output
72 ion forms
###Markdown
Step 4 - Summarize results and conditions-----------------------------------------
###Code
# summarize calculations
current_time = datetime.datetime.now()
time_str = current_time.strftime('%y%m%d_%H%M%S')
comp_names = '-'.join([f'{c}' for (c,m) in base_compounds]) # a string of the compound names separated by '-'
# first line of file is a summary of the compounds and adducts
cond_str = get_comp_adduct_str(comp_names, multimer_limit, include_hetero_dimers, adducts_as_str, max_adduct_count)
print(cond_str)
print (time_str)
cond_str += f';Time:{time_str}'
print('Compounds:', comp_names)
cond_str += f';Compounds:{comp_names}'
if multimer_limit > 1:
print(f'Up to {multimer_limit} multimers')
cond_str += f';Multimer_limit:{multimer_limit}'
if include_hetero_dimers:
print(f'Include heterodimers')
cond_str += f';Heterodimers:True'
print(f'{ionization} mode')
cond_str += f';Polarity:{ionization}'
desc = limits_as_string(phase1_limits)
if desc:
print(f'Phase 1: {desc}')
cond_str += f';Phase_1:{desc}'
desc = limits_as_string(phase2_limits)
if desc:
print(f'Phase 2: {desc}')
cond_str += f';Phase_2:{desc}'
desc = limits_as_string(adduct_limits)
if desc:
print(f'Adducts: {desc}, max count = {max_adduct_count}')
cond_str += f';Adducts:{desc}; Max adduct count:{max_adduct_count}'
desc = limits_as_string(loss_limits)
if desc:
print(f'Losses: {desc}')
cond_str += f';Losses:{desc}'
print(len(ion_forms), 'ion forms')
print(cond_str)
###Output
DiMeSA_m3_5-Na3Ca2-H2O
210705_085506
Compounds: DiMeSA
Up to 3 multimers
positive mode
Adducts: ('Na-H', 3),('Ca-2H', 2), max count = 5
Losses: ('H2O', 1)
72 ion forms
DiMeSA_m3_5-Na3Ca2-H2O;Time:210705_085506;Compounds:DiMeSA;Multimer_limit:3;Polarity:positive;Adducts:('Na-H', 3),('Ca-2H', 2); Max adduct count:5;Losses:('H2O', 1)
###Markdown
Step 5 - Save the mass/name list--------------------------------Optionally save the ion forms as a simple tab-delimited text file. The main format is: mass, root, label. An additional format (mass, XIC width, name) is intended to be used with PeakView Extract XIC (by importing it). The list can also be truncated at an upper mass limit. To be sure the file exists, we re-open it and count the number of lines.
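Since the saved list is plain tab-delimited text with a single `#`-prefixed conditions line, it can be reloaded for inspection with standard tools. A sketch, assuming pandas is installed and using a hypothetical path in place of the one printed by the save cell below:

```python
# Sketch: reload the saved ion list as a table; the '#' conditions line is skipped.
import pandas as pd

list_path = '/Users/ronbonner/Data/SharedData/Test/example pos.txt'   # hypothetical path
ions = pd.read_csv(list_path, sep='\t', comment='#',
                   names=['mass', 'root_or_width', 'name'])
print(ions.head())
```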
###Code
# Set up file names and paths...
if save_ion_list:
wants_xic, out_name = get_ouput_file_name(comp_names, ionization, time_str, include_date_in_file_name, \
multimer_limit, include_hetero_dimers, adducts_as_str, max_adduct_count, xic_width)
line_count = 1 # first line is conditions
ion_forms = sorted(ion_forms, key=lambda x: x.Mass)
output_path = os.path.join(data_path, out_name)
print (output_path)
with open(output_path, 'w') as f:
print(f'#{cond_str}', file=f)
for ion in ion_forms:
if ion.Mass > output_mass_limit:
break
if wants_xic:
print(f'{ion.Mass:10.4f}\t{xic_width}\t{ion.Name}', file=f)
else:
print(f'{ion.Mass:10.4f}\t{ion.Root}\t{ion.Name}', file=f)
line_count += 1
f.close()
print(time_str)
print(line_count, 'lines written to', output_path)
with open(output_path, 'r') as f:
lines_read = f.readlines()
f.close()
print(len(lines_read), 'read', lines_read[0])
# if lines_read[0][0] == '#':
# print("Conditions:")
# print(lines_read[0][1:])
else:
for ion in sorted(ion_forms, key=lambda x:x.Mass): #sort list by mass
print(f'{ion.Mass:12.4f} {ion.Root:14} {ion.Name}')
###Output
/Users/ronbonner/Data/SharedData/Test/DiMeSA_m3_5-Na3Ca2-H2O pos.txt
210705_085506
73 lines written to /Users/ronbonner/Data/SharedData/Test/DiMeSA_m3_5-Na3Ca2-H2O pos.txt
73 read #DiMeSA_m3_5-Na3Ca2-H2O;Time:210705_085506;Compounds:DiMeSA;Multimer_limit:3;Polarity:positive;Adducts:('Na-H', 3),('Ca-2H', 2); Max adduct count:5;Losses:('H2O', 1)
|
Logistic regression to predict diabetes patient.ipynb | ###Markdown
Predicting diabetes based on BMI
###Code
from sklearn.model_selection import train_test_split
X = data[['BMI']]
y = data[['Outcome']]
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train,y_train)
y_predicted = model.predict(X_test)
model.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
model.coef_ gives the value of 'm' in the y = m*x + b equation
###Code
model.coef_
###Output
_____no_output_____
###Markdown
model.intercept_ gives the value of 'b' in the y = m*x + b equation
###Code
model.intercept_
###Output
_____no_output_____
###Markdown
Let's define the sigmoid function now and do the math by hand
###Code
import math
def sigmoid(x):
return 1 / (1 + math.exp(-x))
def prediction_based_on_bmi(bmi):
z = 0.0981 * bmi - 3.808
y = sigmoid(z)
return y
bmi = 33
prediction_based_on_bmi(bmi)
###Output
_____no_output_____
###Markdown
As the probability is 0.36, which is less than 0.5, we predict that a person with BMI 33 will not be diabetic
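The decision rule implied here can be written out explicitly. A small sketch using the `prediction_based_on_bmi` function defined above; 0.5 is the conventional cut-off, which is also what `LogisticRegression.predict` effectively uses for a binary problem:

```python
# Sketch: turn a predicted probability into a class label with a 0.5 threshold.
def classify(probability, threshold=0.5):
    return 1 if probability >= threshold else 0

print(classify(prediction_based_on_bmi(33)))   # 0 -> not diabetic (p ~ 0.36)
print(classify(prediction_based_on_bmi(56)))   # 1 -> diabetic (p ~ 0.84)
```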
###Code
bmi = 56
prediction_based_on_bmi(bmi)
###Output
_____no_output_____
###Markdown
Predicting diabetes based on BP
###Code
X = data[['BloodPressure']]
y = data[['Outcome']]
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7)
model = LogisticRegression()
model.fit(X_train,y_train)
model.score(X_test,y_test)
model.coef_
model.intercept_
def prediction_based_on_bp(bp):
    # Note: these are the same hard-coded values used for the BMI model above; for the
    # blood-pressure model they should be taken from this model's coef_ and intercept_.
    z = 0.0981 * bp - 3.808
    y = sigmoid(z)
    return y
bp = 30
prediction_based_on_bp(bp)
bp = 40
prediction_based_on_bp(bp)
bp = 55
prediction_based_on_bp(bp)
bp = 10
prediction_based_on_bp(bp)
bp = 60
prediction_based_on_bp(bp)
bp = 88
prediction_based_on_bp(bp)
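# Cross-check (sketch): the fitted blood-pressure model can report probabilities directly
# via predict_proba, without hard-coding coefficients; column 1 is P(Outcome == 1).
# Exact values depend on the random train/test split used above.
probs = model.predict_proba([[30], [40], [55], [60], [88]])[:, 1]
print(probs)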
###Output
_____no_output_____ |
Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning/Week 4 - Using Real-world Images/Exercise4-Question.ipynb | ###Markdown
Below is code with a link to a happy or sad dataset which contains 80 images, 40 happy and 40 sad. Create a convolutional neural network that trains to 100% accuracy on these images, and which cancels training upon hitting a training accuracy of >0.999. Hint -- it will work best with 3 convolutional layers.
###Code
import tensorflow as tf
import os
import zipfile
from os import path, getcwd, chdir
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab happy-or-sad.zip from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/happy-or-sad.zip"
zip_ref = zipfile.ZipFile(path, 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()
# GRADED FUNCTION: train_happy_sad_model
def train_happy_sad_model():
# Please write your code only where you are indicated.
# please do not remove # model fitting inline comments.
DESIRED_ACCURACY = 0.999
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs.get('acc') > DESIRED_ACCURACY:
print('\nReached 99.9% accuracy so cancelling training!')
self.model.stop_training = True
callbacks = myCallback()
# This Code Block should Define and Compile the Model. Please assume the images are 150 X 150 in your implementation.
model = tf.keras.models.Sequential([
# Your Code Here
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1/255.0)
# Please use a target_size of 150 X 150.
train_generator = train_datagen.flow_from_directory(
'/tmp/h-or-s',
target_size=(150, 150),
batch_size=32,
class_mode='binary'
)
# Expected output: 'Found 80 images belonging to 2 classes'
# This code block should call model.fit_generator and train for
# a number of epochs.
# model fitting
history = model.fit_generator(
train_generator,
steps_per_epoch=1,
epochs=30,
verbose=1,
callbacks=[callbacks]
)
# model fitting
return history.history['acc'][-1]
# The Expected output: "Reached 99.9% accuracy so cancelling training!""
train_happy_sad_model()
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
Prototypical_Nets/prototypical-net.ipynb | ###Markdown
Few-Shot Learning With Prototypical Networks
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from matplotlib import pyplot as plt
import cv2
from tqdm import tqdm
import multiprocessing as mp
tqdm.pandas(desc="my bar!")
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
###Output
['omniglot']
###Markdown
Data Reading and Augmentation. The Omniglot data set is designed for developing more human-like learning algorithms. It contains 1623 different handwritten characters from 50 different alphabets. To increase the number of classes, all the images are rotated by 90, 180 and 270 degrees, with each rotation treated as an additional class. This brings the total class count to 6492 (1623 * 4). Images from 4200 classes are used as training data and the rest go to the test set.
###Code
train_dir = os.listdir('../input/omniglot/images_background/')
datax = np.array([])
def image_rotate(img, angle):
"""
    Rotates an image by the given angle; used for data augmentation.
"""
rows,cols, _ = img.shape
M = cv2.getRotationMatrix2D((cols/2 ,rows/2),angle,1)
dst = cv2.warpAffine(img,M,(cols,rows))
return np.expand_dims(dst, 0)
def read_alphabets(alphabet_directory, directory):
"""
    Reads all the characters from alphabet_directory and augments each image with 90, 180 and 270 degree rotations.
"""
datax = None
datay = []
characters = os.listdir(alphabet_directory)
for character in characters:
images = os.listdir(alphabet_directory + character + '/')
for img in images:
image = cv2.resize(cv2.imread(alphabet_directory + character + '/' + img), (28,28))
image90 = image_rotate(image, 90)
image180 = image_rotate(image, 180)
image270 = image_rotate(image, 270)
image = np.expand_dims(image, 0)
if datax is None:
datax = np.vstack([image, image90, image180, image270])
else:
datax = np.vstack([datax, image, image90, image180, image270])
datay.append(directory + '_' + character + '_0')
datay.append(directory + '_' + character + '_90')
datay.append(directory + '_' + character + '_180')
datay.append(directory + '_' + character + '_270')
return datax, np.array(datay)
def read_images(base_directory):
"""
    Reads all alphabet directories using a multiprocessing pool (note: pool.apply is blocking, so the calls run sequentially; apply_async would allow true parallelism).
"""
datax = None
datay = []
pool = mp.Pool(mp.cpu_count())
results = [pool.apply(read_alphabets, args=(base_directory + '/' + directory + '/', directory, )) for directory in os.listdir(base_directory)]
pool.close()
for result in results:
if datax is None:
datax = result[0]
datay = result[1]
else:
datax = np.vstack([datax, result[0]])
datay = np.concatenate([datay, result[1]])
return datax, datay
%time trainx, trainy = read_images('../input/omniglot/images_background/')
%time testx, testy = read_images('../input/omniglot/images_evaluation/')
trainx.shape, trainy.shape, testx.shape, testy.shape
###Output
_____no_output_____
###Markdown
Model
###Code
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from tqdm import trange
from time import sleep
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
use_gpu = torch.cuda.is_available()
trainx = torch.from_numpy(trainx).float()
#trainy = torch.from_numpy(trainy)
testx = torch.from_numpy(testx).float()
#testy = torch.from_numpy(testy)
if use_gpu:
trainx = trainx.cuda()
testx = testx.cuda()
trainx.size(), testx.size()
trainx = trainx.permute(0,3,1,2)
testx = testx.permute(0,3,1,2)
class Net(nn.Module):
"""
    Image2Vector CNN which takes an image of dimension (28x28x3) and returns a feature vector of length 64
"""
def sub_block(self, in_channels, out_channels=64, kernel_size=3):
block = torch.nn.Sequential(
torch.nn.Conv2d(kernel_size=kernel_size, in_channels=in_channels, out_channels=out_channels, padding=1),
torch.nn.BatchNorm2d(out_channels),
torch.nn.ReLU(),
torch.nn.MaxPool2d(kernel_size=2)
)
return block
def __init__(self):
super(Net, self).__init__()
self.convnet1 = self.sub_block(3)
self.convnet2 = self.sub_block(64)
self.convnet3 = self.sub_block(64)
self.convnet4 = self.sub_block(64)
def forward(self, x):
x = self.convnet1(x)
x = self.convnet2(x)
x = self.convnet3(x)
x = self.convnet4(x)
x = torch.flatten(x, start_dim=1)
return x
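# Optional sanity check (sketch): four conv+pool blocks take a 28x28x3 image down to
# 1x1x64, so the flattened embedding has length 64.
_check = Net()(torch.zeros(2, 3, 28, 28))
assert _check.shape == (2, 64), _check.shape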
class PrototypicalNet(nn.Module):
def __init__(self, use_gpu=False):
super(PrototypicalNet, self).__init__()
self.f = Net()
self.gpu = use_gpu
if self.gpu:
self.f = self.f.cuda()
def forward(self, datax, datay, Ns,Nc, Nq, total_classes):
"""
Implementation of one episode in Prototypical Net
datax: Training images
datay: Corresponding labels of datax
Nc: Number of classes per episode
Ns: Number of support data per class
Nq: Number of query data per class
total_classes: Total classes in training set
"""
k = total_classes.shape[0]
K = np.random.choice(total_classes, Nc, replace=False)
Query_x = torch.Tensor()
if(self.gpu):
Query_x = Query_x.cuda()
Query_y = []
Query_y_count = []
centroid_per_class = {}
class_label = {}
label_encoding = 0
for cls in K:
S_cls, Q_cls = self.random_sample_cls(datax, datay, Ns, Nq, cls)
            centroid_per_class[cls] = self.get_centroid(S_cls, Ns)  # prototype = mean over the Ns support embeddings
class_label[cls] = label_encoding
label_encoding += 1
Query_x = torch.cat((Query_x, Q_cls), 0) # Joining all the query set together
Query_y += [cls]
Query_y_count += [Q_cls.shape[0]]
Query_y, Query_y_labels = self.get_query_y(Query_y, Query_y_count, class_label)
Query_x = self.get_query_x(Query_x, centroid_per_class, Query_y_labels)
return Query_x, Query_y
def random_sample_cls(self, datax, datay, Ns, Nq, cls):
"""
Randomly samples Ns examples as support set and Nq as Query set
"""
data = datax[(datay == cls).nonzero()]
perm = torch.randperm(data.shape[0])
idx = perm[:Ns]
S_cls = data[idx]
idx = perm[Ns : Ns+Nq]
Q_cls = data[idx]
if self.gpu:
S_cls = S_cls.cuda()
Q_cls = Q_cls.cuda()
return S_cls, Q_cls
    def get_centroid(self, S_cls, Ns):
        """
        Returns the centroid (prototype) vector of the support set for a class,
        i.e. the mean of the Ns support embeddings
        """
        return torch.sum(self.f(S_cls), 0).unsqueeze(1).transpose(0,1) / Ns
def get_query_y(self, Qy, Qyc, class_label):
"""
Returns labeled representation of classes of Query set and a list of labels.
"""
labels = []
m = len(Qy)
for i in range(m):
labels += [Qy[i]] * Qyc[i]
labels = np.array(labels).reshape(len(labels), 1)
label_encoder = LabelEncoder()
Query_y = torch.Tensor(label_encoder.fit_transform(labels).astype(int)).long()
if self.gpu:
Query_y = Query_y.cuda()
Query_y_labels = np.unique(labels)
return Query_y, Query_y_labels
def get_centroid_matrix(self, centroid_per_class, Query_y_labels):
"""
Returns the centroid matrix where each column is a centroid of a class.
"""
centroid_matrix = torch.Tensor()
if(self.gpu):
centroid_matrix = centroid_matrix.cuda()
for label in Query_y_labels:
centroid_matrix = torch.cat((centroid_matrix, centroid_per_class[label]))
if self.gpu:
centroid_matrix = centroid_matrix.cuda()
return centroid_matrix
def get_query_x(self, Query_x, centroid_per_class, Query_y_labels):
"""
Returns distance matrix from each Query image to each centroid.
"""
centroid_matrix = self.get_centroid_matrix(centroid_per_class, Query_y_labels)
Query_x = self.f(Query_x)
m = Query_x.size(0)
n = centroid_matrix.size(0)
        # The expressions below expand both matrices so that they become compatible with each other in order to calculate the L2 distance.
centroid_matrix = centroid_matrix.expand(m, centroid_matrix.size(0), centroid_matrix.size(1)) # Expanding centroid matrix to "m".
Query_matrix = Query_x.expand(n, Query_x.size(0), Query_x.size(1)).transpose(0,1) # Expanding Query matrix "n" times
Qx = torch.pairwise_distance(centroid_matrix.transpose(1,2), Query_matrix.transpose(1,2))
return Qx
protonet = PrototypicalNet(use_gpu=use_gpu)
optimizer = optim.SGD(protonet.parameters(), lr = 0.01, momentum=0.99)
###Output
_____no_output_____
###Markdown
Training
###Code
def train_step(datax, datay, Ns,Nc, Nq):
optimizer.zero_grad()
Qx, Qy= protonet(datax, datay, Ns, Nc, Nq, np.unique(datay))
pred = torch.log_softmax(Qx, dim=-1)
loss = F.nll_loss(pred, Qy)
loss.backward()
optimizer.step()
acc = torch.mean((torch.argmax(pred, 1) == Qy).float())
return loss, acc
num_episode = 16000
frame_size = 1000
frame_loss = 0
frame_acc = 0
for i in range(num_episode):
loss, acc = train_step(trainx, trainy, 5, 60, 5)
frame_loss += loss.data
frame_acc += acc.data
if( (i+1) % frame_size == 0):
print("Frame Number:", ((i+1) // frame_size), 'Frame Loss: ', frame_loss.data.cpu().numpy().tolist()/ frame_size, 'Frame Accuracy:', (frame_acc.data.cpu().numpy().tolist() * 100) / frame_size)
frame_loss = 0
frame_acc = 0
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/preprocessing/label.py:235: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Testing
###Code
def test_step(datax, datay, Ns,Nc, Nq):
Qx, Qy= protonet(datax, datay, Ns, Nc, Nq, np.unique(datay))
pred = torch.log_softmax(Qx, dim=-1)
loss = F.nll_loss(pred, Qy)
acc = torch.mean((torch.argmax(pred, 1) == Qy).float())
return loss, acc
num_test_episode = 2000
avg_loss = 0
avg_acc = 0
for _ in range(num_test_episode):
loss, acc = test_step(testx, testy, 5, 60, 15)
avg_loss += loss.data
avg_acc += acc.data
print('Avg Loss: ', avg_loss.data.cpu().numpy().tolist() / num_test_episode , 'Avg Accuracy:', (avg_acc.data.cpu().numpy().tolist() * 100) / num_test_episode)
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/preprocessing/label.py:235: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
|
2-Convolutional-Neural-Networks-in-Tensorflow/week3-transfer learning/GradedExercise_3_Horses_vs_humans_using_Transfer_Learning_Question-FINAL.ipynb | ###Markdown
Submission Instructions
###Code
# Now click the 'Submit Assignment' button above.
###Output
_____no_output_____
###Markdown
When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners.
###Code
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
docs/synapse/userguides/storm_adv_vars.ipynb | ###Markdown
.. highlight:: none.. _storm-adv-vars:Storm Reference - Advanced - Variables======================================Storm supports the use of **variables.** A :ref:`gloss-variable` is a value that can change depending on conditions or on information passed to the Storm query. (Contrast this with a :ref:`gloss-constant`, which is a value that is fixed and does not change.)Variables can be used in a variety of ways, from providing simpler or more efficient ways to reference node properties, to facilitating bulk operations, to performing complex tasks or writing extensions to Synapse in Storm.These documents approach variables and their use from a **user** standpoint and aim to provide sufficient background for users to understand and begin to use variables. They do not provide an in-depth discussion of variables and their use from a fully developer-oriented perspective.- `Storm Operating Concepts`_- `Variable Concepts`_ - `Variable Scope`_ - `Call Frame`_ - `Runtsafe vs. Non-Runtsafe`_- `Types of Variables`_ - `Built-In Variables`_ - `User-Defined Variables`_.. _op-concepts:Storm Operating Concepts------------------------When leveraging variables in Storm, it is important to keep in mind the high-level :ref:`storm-op-concepts`. Specifically:- Storm operations (e.g., lifts, filters, pivots, etc.) are performed on **nodes.**- Operations can be **chained** and are executed in order from left to right.- Storm acts as an **execution pipeline,** with each node passed individually and independently through the chain of Storm operations.- Most Storm operations **consume** nodes — that is, a given operation (such as a filter or pivot) acts upon the inbound node in some way and returns only the node or set of nodes that result from that operation.These principles apply to variables that reference nodes (or node properties) in Storm just as they apply to nodes, and so affect the way variables behave within Storm queries... _var-concepts:Variable Concepts-----------------.. _var-scope:Variable Scope++++++++++++++A variable’s **scope** is its lifetime and under what conditions it may be accessed. There are two dimensions that impact a variable’s scope: its **call frame** and its **runtime safety** ("runtsafety")... _var-call-frame:Call Frame++++++++++A variable’s **call frame** is where the variable is used. The main Storm query starts with its own call frame, and each call to a "pure" Storm command, function, or subquery creates a new call frame. The new call frame gets a copy of all the variables from the calling call frame. Changes to existing variables or the creation of new variables within the new call frame do not impact the calling scope.Runtsafe vs. Non-Runtsafe+++++++++++++++++++++++++An important distinction to keep in mind when using variables in Storm is whether the variable is runtime-safe (":ref:`gloss-runtsafe`") or non-runtime safe (":ref:`gloss-non-runtsafe`").A variable that is **runtsafe** has a value independent of any nodes passing through the Storm pipeline. For example, a variable whose value is explicitly set, such as ``$string = mystring`` or ``$ipv4 = 8.8.8.8`` is considered runtsafe because the value does not change / is not affected by the specific node passing through the Storm pipeline.A variable that is **non-runtsafe** has a value derived from a node passing through the Storm pipeline. For example, a variable whose value is set to a node property value may change based on the specific node passing through the Storm pipeline. 
In other words, if your Storm query is operating on a set of DNS A nodes (``inet:dns:a``) and you define the variable ``$fqdn = :fqdn`` (setting the variable to the value of the ``:fqdn`` secondary property), the value of the variable will change based on the specific value of that property for each ``inet:dns:a`` node in the pipeline.All non-runtsafe variables are **scoped** to an individual node as it passes through the Storm pipeline. This means that a variable’s value based on a given node is not available when processing a different node (at least not without using special commands, methods, or libraries). In other words, the path of a particular node as it passes through the Storm pipeline is its own scope.The "safe" in non-runtsafe should **not** be interpreted as meaning the use of non-runtsafe variables is somehow "risky" or involves insecure programming or processing of data. It simply means the value of the variable is not safe from changing (i.e., it may change) as the Storm pipeline progresses... _var-types:Types of Variables------------------Storm supports two types of variables:- **Built-in variables.** Built-in variables facilitate many common Storm operations. They may vary in their scope and in the context in which they can be used.- **User-defined variables** User-defined variables are named and defined by the user. They are most often limited in scope and facilitate operations within a specific Storm query... _vars-builtin:Built-In Variables++++++++++++++++++Storm includes a set of built-in variables and associated variable methods (:ref:`storm-adv-methods`) and libraries (:ref:`storm-adv-libs` and the :ref:`stormtypes-libs-header` technical documentation) that facilitate Cortex-wide, node-specific, and context-specific operations.Built-in variables differ from user-defined variables in that built-in variable names:- are initialized at Cortex start,- are reserved,- can be accessed automatically (i.e., without needing to define them) from within Storm, and- persist across user sessions and Cortex reboots... _vars-global:Global Variables~~~~~~~~~~~~~~~~Global variables operate independently of any node. That is, they can be invoked in a Storm query in the absence of any nodes in the Storm execution pipeline (though they can also be leveraged when performing operations on nodes)... _vars-global-lib:$libThe library variable ( ``$lib`` ) is a built-in variable that provides access to the global Storm library. In Storm, libraries are accessed using built-in variable names (e.g., ``$lib.print()``).See the :ref:`stormtypes-libs-header` technical documentation for descriptions of the libraries available within Storm... _vars-node:Node-Specific Variables~~~~~~~~~~~~~~~~~~~~~~~Storm includes node-specific variables that are designed to operate on or in conjunction with nodes and require one or more nodes in the Storm pipeline... NOTE:: Node-specific variables are always non-runtsafe... 
_vars-node-node:$nodeThe node variable (``$node``) is a built-in Storm variable that **references the current node in the Storm query.** Specifically, this variable contains the inbound node’s node object, and provides access to the node’s attributes, properties, and associated attribute and property values.Invoking this variable during a Storm query is useful when you want to:- access the raw and entire node object,- store the value of the current node before pivoting to another node, or- use an aspect of the current node in subsequent query operations.The ``$node`` variable supports a number of built-in methods that can be used to access specific data or properties associated with a node. See the technical documentation for the :ref:`stormprims-storm-node` object or the :ref:`meth-node` section of the :ref:`storm-adv-methods` user documentation for additional detail and examples... _vars-node-path:$pathThe path variable (``$path``) is a built-in Storm variable that **references the path of a node as it travels through the pipeline of a Storm query.**The ``$path`` variable is not used on its own, but in conjunction with its methods. See the technical documentattion for the :ref:`stormprims-storm-path` object or the :ref:`meth-path` section of the :ref:`storm-adv-methods` user documentation for additional detail and examples... _vars-trigger:Trigger-Specific Variables~~~~~~~~~~~~~~~~~~~~~~~~~~A :ref:`gloss-trigger` is used to support automation within a Cortex. Triggers use events (such as the creation of a node, setting the value of a node’s property, or applying a tag to a node) to fire ("trigger") the execution of a predefined Storm query. Storm uses a built-in variable specifically within the context of trigger-initiated Storm queries... _vars-trigger-tag:$tagWithin the context of triggers that fire on ``tag:add`` events, the ``$tag`` variable represents the name of the tag that caused the trigger to fire.For example:You write a trigger to fire when any tag matching the expression ``foo.bar.*`` is added to a ``file:bytes`` node. The trigger executes the following Storm command:.. parsed-literal:: -> hash:md5 [ +$tag ]Because the trigger uses a wildcard expression, it will fire on any tag that matches that expression (e.g., ``foo.bar.hurr``, ``foo.bar.derp``, etc.). The Storm snippet above will take the inbound ``file:bytes`` node, pivot to the file’s associated MD5 node (``hash:md5``), and apply the same tag that fired the trigger to the MD5.See the :ref:`auto-triggers` section of the :ref:`storm-ref-automation` document and the Storm :ref:`storm-trigger` command for a more detailed discussion of triggers and associated Storm commands... _vars-csvtool:CSVTool-Specific Variables~~~~~~~~~~~~~~~~~~~~~~~~~~Synapse's **CSVTool** is used to ingest (import) data into or export data from a Cortex using comma-separated value (CSV) format. Storm includes a built-in variable to facilitate bulk data ingest using CSV... _vars-csvtool-rows:$rowsThe ``$rows`` variable refers to the set of rows in a CSV file. When ingesting data into a Cortex, CSVTool reads a CSV file and a file containing a Storm query that tells CSVTool how to process the CSV data. The Storm query is typically constructed to iterate over the set of rows (``$rows``) using a "for" loop that uses user-defined variables to reference each field (column) in the CSV data.For example:.. 
parsed-literal:: for ($var1, $var2, $var3, $var4) in $rows { }See :ref:`syn-tools-csvtool` for a more detailed discussion of CSVTool use and associated Storm syntax... _vars-user:User-Defined Variables++++++++++++++++++++++User-defined variables can be defined in one of two ways:- At runtime (i.e., within the scope of a specific Storm query). This is the most common use for user-defined variables.- Mapped via options passed to the Storm runtime (i.e., when using the ``--optifle`` option from Synapse cmdr or via Cortex API access). This method is less common. When defined in this manner, user-defined variables will behave as though they are built-in variables that are runtsafe... _vars-names:Variable Names~~~~~~~~~~~~~~All variable names in Storm (including built-in variables) begin with a dollar sign ( ``$`` ). A variable name can be any alphanumeric string, **except for** the name of a built-in variable (see :ref:`vars-builtin`), as those names are reserved. Variable names are case-sensitive; the variable ``$MyVar`` is different from ``$myvar``... NOTE:: Storm will not prevent you from using the name of a built-in variable to define a variable (such as ``$node = 7``). However, doing so may result in undesired effects or unexpected errors due to the variable name collision... _vars-define:Defining Variables~~~~~~~~~~~~~~~~~~Within Storm, a user-defined variable is defined using the syntax:.. parsed-literal:: $ = The variable name must be specified first, followed by the equals sign and the value of the variable itself.```` can be:- an explicit value / literal,- a node secondary or universal property,- a tag or tag property,- a built-in variable or method,- a library function,- a mathematical expression / "dollar expression", or- an embedded query.Examples~~~~~~~~Two types of examples are used below:- **Demonstrative example:** the ``$lib.print()`` library function is used to display the value of the user-defined variable being set. This is done for illustrative purposes only; ``$lib.print()`` is not required in order to use variables or methods. Keep Storm's operation chaining, pipeline, and node consumption aspects in mind when reviewing the demonstrative examples below. When using ``$lib.print()`` to display the value of a variable, the queries below will: - Lift the specified node(s). - Assign the variable. Note that assigning a variable has no impact on the nodes themselves. - Print the variable's value. - Return any nodes still in the pipeline. Because variable assignment doesn't impact the node(s), they are not consumed and so are returned (displayed) at the CLI. The effect of this process is that for each node in the Storm query pipeline, the output of ``$lib.print()`` is displayed, followed by the relevant node.- **Use-case example:** the user-defined variable is used in one or more sample queries to illustrate possible practical use cases. These represent exemplar Storm queries for how a variable or method might be used in practice. While we have attempted to use relatively simple examples for clarity, some examples may leverage additional Storm features such as subqueries, subquery filters, or flow control elements such as "for" loops or "switch" statements.*Assign a literal to a user-defined variable:*- Assign the value 5 to the variable ``$threshold``:
###Code
# Define and print test query
q = '$threshold=5 $lib.print($threshold)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
###Markdown
- Tag any ``file:bytes`` nodes that have a number of AV signature hits higher than a given threshold for review:
###Code
# Make some nodes
q = '[file:bytes=sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4 it:av:filehit=(sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4, (0bfef0179bf358f3fe7bad67fa529c77, trojan.gen.2)) it:av:filehit=(sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4, (325cd5a01724fa0c63907eac044f4961, trojan.agent/gen-onlinegames)) it:av:filehit=(sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4, (ac8d9645c6cdf123683a73a02e231052, w32/imestartup.a.gen!eldorado))]'
q1 = '[file:bytes=sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (be9793d772d23269ab0c165af819e74a, troj_gen.r002c0gkj17)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (eef2ccb70945fb28a45c7f14f2a0f11d, malicious.1b8fb7)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (ce4e34d2f9207095aa7351986bbad357, trojan-ddos.win32.stormattack.c)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (ed344310e3203ec4348c4ee549a3b188, "trojan ( 00073eb11 )")) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (f5b5daeda10e487fccc07463d9df6b47, tool.stormattack.win32.10)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (a0f25a5ba637d5c8e7c42911c4336085, trojan/w32.agent.61440.eii))]'
podes = await core.eval(q, num=4, cmdr=False)
podes = await core.eval(q1, num=7, cmdr=False)
# Define and print test query
q = '$threshold=5 file:bytes +{ -> it:av:filehit } >= $threshold [ +#review ]'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a node secondary property to a user-defined variable:*- Assign the ``:user`` property from an Internet-based account (``inet:web:acct``) to the variable ``$user``:
###Code
# Make a node
q = '[inet:web:acct=(twitter.com,bert) :[email protected]]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:web:acct=(twitter.com,bert) $user=:user $lib.print($user)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Find email addresses associated with a set of Internet accounts where the username of the email address is the same as the username of the Internet account:
###Code
# Make another node
q = '[inet:web:acct=(twitter.com,ernie) :[email protected]]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:web:acct $user=:user -> inet:email +:user=$user'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a node universal property to a user-defined variable:*- Assign the ``.seen`` universal property from a DNS A node to the variable ``$time``:
###Code
# Make a node
q = '[inet:dns:a=(woot.com,1.2.3.4) .seen=("2018/11/27 03:28:14","2019/08/15 18:32:47")]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:dns:a=(woot.com,1.2.3.4) $time=.seen $lib.print($time)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
.. NOTE:: In the example above, the raw value of the ``.seen`` property is assigned to the ``$time`` variable. ``.seen`` is an interval (:ref:`type-ival`) type, consisting of a pair of minimum and maximum time values. These values are stored in Unix epoch millis, which are the values shown by the output of the ``$lib.print()`` function. - Given a DNS A record, find other DNS A records that pointed to the same IP address in the same time window:
###Code
# Make some moar nodes
q = '[ ( inet:dns:a=(hurr.net,1.2.3.4) .seen=("2018/12/09 06:02:53","2019/01/03 11:27:01") ) ( inet:dns:a=(derp.org,1.2.3.4) .seen=("2019/09/03 01:11:23","2019/12/14 14:22:00"))]'
podes = await core.eval(q, num=2, cmdr=False)
# Define and print test query
q = 'inet:dns:a=(woot.com,1.2.3.4) $time=.seen -> inet:ipv4 -> inet:dns:a +.seen@=$time'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a tag to a user-defined variable:*- Assign the explicit tag value ``cno.infra.anon.tor`` to the variable ``$tortag``:
###Code
# Define and print test query
q = '$tortag=cno.infra.anon.tor $lib.print($tortag)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
###Markdown
- Tag IP addresses that Shodan says are associated with Tor with the ``cno.infra.anon.tor`` tag:
###Code
# Make some nodes
q = '[ inet:ipv4=84.140.90.95 inet:ipv4=54.38.219.150 inet:ipv4=46.105.100.149 +#rep.shodan.tor ]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = '$tortag=cno.infra.anon.tor inet:ipv4#rep.shodan.tor [ +#$tortag ]'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a tag timestamp to a user-defined variable:*- Assign the times associated with Threat Group 20’s use of a malicious domain to the variable ``$time``:
###Code
# Make a node
q = '[inet:fqdn=evildomain.com +#cno.threat.t20.tc=(2015/09/08,2017/09/08)]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:fqdn=evildomain.com $time=#cno.threat.t20.tc $lib.print($time)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Find DNS A records for any subdomain associated with a Threat Group 20 zone during the time they controlled the zone:
###Code
# Make some moar nodes
q = '[ (inet:dns:a=(www.evildomain.com,1.2.3.4) .seen=(2016/07/12,2016/12/13)) (inet:dns:a=(smtp.evildomain.com,5.6.7.8) .seen=(2016/04/04,2016/08/02)) (inet:dns:a=(evildomain.com,12.13.14.15) .seen=(2017/12/22,2019/12/22))]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = 'inet:fqdn#cno.threat.t20.tc $time=#cno.threat.t20.tc -> inet:fqdn:zone -> inet:dns:a +.seen@=$time'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a tag property to a user-defined variable:*- Assign the risk value assigned by DomainTools to an FQDN to the variable ``$risk``:
###Code
# Create a custom tag property
await core.core.addTagProp('risk', ('int', {'min': 0, 'max': 100}), {'doc': 'Risk score'})
# Make a node
q = '[inet:fqdn=badsite.org +#rep.domaintools:risk=85]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:fqdn=badsite.org $risk=#rep.domaintools:risk $lib.print($risk)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Given an FQDN with a risk score, find all FQDNs with an equal or higher risk score:
###Code
# Make some moar nodes:
q = '[ (inet:fqdn=stillprettybad.com +#rep.domaintools:risk=92) (inet:fqdn=notsobad.net +#rep.domaintools:risk=67)]'
podes = await core.eval(q, num=2, cmdr=False)
# Define and print test query
q = 'inet:fqdn=badsite.org $risk=#rep.domaintools:risk inet:fqdn#rep.domaintools:risk>=$risk'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a built-in variable to a user-defined variable:*- Assign a ``ps:person`` node to the variable ``$person``:
###Code
# Make a node
q = '[ps:person="0040a7600a7a4b59297a287d11173d5c"]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'ps:person=0040a7600a7a4b59297a287d11173d5c $person=$node $lib.print($person)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- For a given person, find all objects the person "has" and all the news articles that reference that person (uses the Storm :ref:`storm-tee` command):
###Code
# Make some moar nodes:
q = '[ (edge:has=((ps:person,0040a7600a7a4b59297a287d11173d5c),(inet:web:acct,(twitter.com,mytwitter)))) (edge:refs=((media:news,00076a3f20808a14cbaa01ad51111edc),(ps:person,0040a7600a7a4b59297a287d11173d5c)))]'
podes = await core.eval(q, num=2, cmdr=False)
# Define and print test query
q = 'ps:person=0040a7600a7a4b59297a287d11173d5c $person = $node | tee { edge:has:n1=$person -> * } { edge:refs:n2=$person <- * +media:news }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)
###Output
_____no_output_____
###Markdown
.. NOTE:: See the technical documentation for the :ref:`stormprims-storm-node` object or the :ref:`meth-node` section of the :ref:`storm-adv-methods` user documentation for additional detail and examples when using the ``$node`` built-in variable. *Assign a built-in variable method to a user-defined variable:*- Assign the value of a domain node to the variable ``$fqdn``:
###Code
# Make a node:
q = '[ inet:fqdn=mail.mydomain.com ]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:fqdn=mail.mydomain.com $fqdn=$node.value() $lib.print($fqdn)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Find the DNS A records associated with a given domain where the PTR record for the IP matches the FQDN:
###Code
# Make some moar nodes:
q = '[ inet:dns:a=(mail.mydomain.com,11.12.13.14) inet:dns:a=(mail.mydomain.com,25.25.25.25) ( inet:ipv4=25.25.25.25 :dns:rev=mail.mydomain.com ) ]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = 'inet:fqdn=mail.mydomain.com $fqdn=$node.value() -> inet:dns:a +{ -> inet:ipv4 +:dns:rev=$fqdn }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a library function to a user-defined variable:*- Assign a value to the variable ``$mytag`` using a library function:
###Code
# Define and print test query
q = '$mytag = $lib.str.format("cno.mal.sofacy") $lib.print($mytag)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
###Markdown
- Assign a value to the variable ``$mytag`` using a library function (example 2):
###Code
# Make a node:
q = '[ file:bytes=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 +#code.fam.sofacy ]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'file:bytes=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 for $tag in $node.tags(code.fam.*) { $malfam=$tag.split(".").index(2) $mytag=$lib.str.format("cno.mal.{malfam}", malfam=$malfam) $lib.print($mytag) }'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
The above example leverages: - three variables (``$tag``, ``$malfam``, and ``$mytag``); - the :ref:`meth-node-tags` method; - the ``$lib.split()``, ``$lib.index()``, and ``$lib.str.format()`` library functions; as well as - a "for" loop. - If a file is tagged as part of a malicious code (malware) family, then also tag the file to indicate it is part of that malware's ecosystem:
###Code
# Define and print test query
q = 'file:bytes=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 for $tag in $node.tags(code.fam.*) { $malfam=$tag.split(".").index(2) $mytag=$lib.str.format("cno.mal.{malfam}", malfam=$malfam) [ +#$mytag ] }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
.. NOTE: The above query could be written as a **trigger** (:ref:`auto-triggers`) so that any time a ``code.fam.`` tag was applied to a file, the corresponding ``cno.mal.`` tag would be applied automatically. *Use a mathematical expression / "dollar expression" as a variable:*- Use a mathematical expression to increment the variable ``$x``:
###Code
# Define and print test query
q = '$x=5 $x=$($x + 1) $lib.print($x)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
###Markdown
- For any domain with a "risk" score from Talos, tag those with a score greater than 75 as "high risk":
###Code
# Make some nodes:
q = '[ ( inet:fqdn=woot.com +#rep.talos:risk=36 ) ( inet:fqdn=derp.net +#rep.talos:risk=78 ) ( inet:fqdn=hurr.org +#rep.talos:risk=92 ) ]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = 'inet:fqdn#rep.talos:risk $risk=#rep.talos:risk if $($risk > 75) { [ +#high.risk ] }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)
###Output
_____no_output_____
###Markdown
.. NOTE:: In the examples above, the mathematical expressions ``$($x + 1)`` and ``$($risk > 75)`` are not themselves variables, despite starting with a dollar sign ( ``$`` ). The syntax convention of "dollar expression" ( ``$( )`` ) allows Storm to support the use of variables (like ``$x`` and ``$risk``) in mathematical and logical operations.
###Code
# Close cortex because done
_ = await core.fini()
###Output
_____no_output_____
###Markdown
.. highlight:: none.. _storm-adv-vars:Storm Reference - Advanced - Variables======================================Storm supports the use of **variables.** A :ref:`gloss-variable` is a value that can change depending on conditions or on information passed to the Storm query. (Contrast this with a :ref:`gloss-constant`, which is a value that is fixed and does not change.)Variables can be used in a variety of ways, from providing simpler or more efficient ways to reference node properties, to facilitating bulk operations, to performing complex tasks or writing extensions to Synapse in Storm.These documents approach variables and their use from a **user** standpoint and aim to provide sufficient background for users to understand and begin to use variables. They do not provide an in-depth discussion of variables and their use from a fully developer-oriented perspective.- `Storm Operating Concepts`_- `Variable Concepts`_ - `Variable Scope`_ - `Call Frame`_ - `Runtsafe vs. Non-Runtsafe`_- `Types of Variables`_ - `Built-In Variables`_ - `User-Defined Variables`_.. _op-concepts:Storm Operating Concepts------------------------When leveraging variables in Storm, it is important to keep in mind the high-level :ref:`storm-op-concepts`. Specifically:- Storm operations (e.g., lifts, filters, pivots, etc.) are performed on **nodes.**- Operations can be **chained** and are executed in order from left to right.- Storm acts as an **execution pipeline,** with each node passed individually and independently through the chain of Storm operations.- Most Storm operations **consume** nodes — that is, a given operation (such as a filter or pivot) acts upon the inbound node in some way and returns only the node or set of nodes that result from that operation.These principles apply to variables that reference nodes (or node properties) in Storm just as they apply to nodes, and so affect the way variables behave within Storm queries... _var-concepts:Variable Concepts-----------------.. _var-scope:Variable Scope++++++++++++++A variable’s **scope** is its lifetime and under what conditions it may be accessed. There are two dimensions that impact a variable’s scope: its **call frame** and its **runtime safety** ("runtsafety")... _var-call-frame:Call Frame++++++++++A variable’s **call frame** is where the variable is used. The main Storm query starts with its own call frame, and each call to a "pure" Storm command, function, or subquery creates a new call frame. The new call frame gets a copy of all the variables from the calling call frame. Changes to existing variables or the creation of new variables within the new call frame do not impact the calling scope.Runtsafe vs. Non-Runtsafe+++++++++++++++++++++++++An important distinction to keep in mind when using variables in Storm is whether the variable is runtime-safe (":ref:`gloss-runtsafe`") or non-runtime safe (":ref:`gloss-non-runtsafe`").A variable that is **runtsafe** has a value independent of any nodes passing through the Storm pipeline. For example, a variable whose value is explicitly set, such as ``$string = mystring`` or ``$ipv4 = 8.8.8.8`` is considered runtsafe because the value does not change / is not affected by the specific node passing through the Storm pipeline.A variable that is **non-runtsafe** has a value derived from a node passing through the Storm pipeline. For example, a variable whose value is set to a node property value may change based on the specific node passing through the Storm pipeline. 
In other words, if your Storm query is operating on a set of DNS A nodes (``inet:dns:a``) and you define the variable ``$fqdn = :fqdn`` (setting the variable to the value of the ``:fqdn`` secondary property), the value of the variable will change based on the specific value of that property for each ``inet:dns:a`` node in the pipeline.All non-runtsafe variables are **scoped** to an individual node as it passes through the Storm pipeline. This means that a variable’s value based on a given node is not available when processing a different node (at least not without using special commands, methods, or libraries). In other words, the path of a particular node as it passes through the Storm pipeline is its own scope.The "safe" in non-runtsafe should **not** be interpreted as meaning the use of non-runtsafe variables is somehow "risky" or involves insecure programming or processing of data. It simply means the value of the variable is not safe from changing (i.e., it may change) as the Storm pipeline progresses... _var-types:Types of Variables------------------Storm supports two types of variables:- **Built-in variables.** Built-in variables facilitate many common Storm operations. They may vary in their scope and in the context in which they can be used.- **User-defined variables** User-defined variables are named and defined by the user. They are most often limited in scope and facilitate operations within a specific Storm query... _vars-builtin:Built-In Variables++++++++++++++++++Storm includes a set of built-in variables and associated variable methods (:ref:`storm-adv-methods`) and libraries (:ref:`storm-adv-libs`) that facilitate Cortex-wide, node-specific, and context-specific operations.Built-in variables differ from user-defined variables in that built-in variable names:- are initialized at Cortex start,- are reserved,- can be accessed automatically (i.e., without needing to define them) from within Storm, and- persist across user sessions and Cortex reboots... _vars-global:Global Variables~~~~~~~~~~~~~~~~Global variables operate independently of any node. That is, they can be invoked in a Storm query in the absence of any nodes in the Storm execution pipeline (though they can also be leveraged when performing operations on nodes)... _vars-global-lib:$libThe library variable ( ``$lib`` ) is a built-in variable that provides access to the global Storm library. In Storm, libraries are accessed using built-in variable names (e.g., ``$lib.print()``).See :ref:`storm-adv-libs` for descriptions of the libraries available within Storm... _vars-node:Node-Specific Variables~~~~~~~~~~~~~~~~~~~~~~~Storm includes node-specific variables that are designed to operate on or in conjunction with nodes and require one or more nodes in the Storm pipeline... NOTE:: Node-specific variables are always non-runtsafe... _vars-node-node:$nodeThe node variable (``$node``) is a built-in Storm variable that **references the current node in the Storm query.** Specifically, this variable contains the inbound node’s node object, and provides access to the node’s attributes, properties, and associated attribute and property values.Invoking this variable during a Storm query is useful when you want to:- access the raw and entire node object,- store the value of the current node before pivoting to another node, or- use an aspect of the current node in subsequent query operations.The ``$node`` variable supports a number of built-in methods that can be used to access specific data or properties associated with a node. 
See the :ref:`meth-node` section of the :ref:`storm-adv-methods` document for additional detail and examples... _vars-node-path:$pathThe path variable (``$path``) is a built-in Storm variable that **references the path of a node as it travels through the pipeline of a Storm query.**The ``$path`` variable is not used on its own, but in conjunction with its methods. See the :ref:`meth-path` section of the :ref:`storm-adv-methods` document for additional detail and examples... _vars-trigger:Trigger-Specific Variables~~~~~~~~~~~~~~~~~~~~~~~~~~A :ref:`gloss-trigger` is used to support automation within a Cortex. Triggers use events (such as the creation of a node, setting the value of a node’s property, or applying a tag to a node) to fire ("trigger") the execution of a predefined Storm query. Storm uses a built-in variable specifically within the context of trigger-initiated Storm queries... _vars-trigger-tag:$tagWithin the context of triggers that fire on ``tag:add`` events, the ``$tag`` variable represents the name of the tag that caused the trigger to fire.For example:You write a trigger to fire when any tag matching the expression ``foo.bar.*`` is added to a ``file:bytes`` node. The trigger executes the following Storm command:.. parsed-literal:: -> hash:md5 [ +$tag ]Because the trigger uses a wildcard expression, it will fire on any tag that matches that expression (e.g., ``foo.bar.hurr``, ``foo.bar.derp``, etc.). The Storm snippet above will take the inbound ``file:bytes`` node, pivot to the file’s associated MD5 node (``hash:md5``), and apply the same tag that fired the trigger to the MD5.See the :ref:`auto-triggers` section of the :ref:`storm-ref-automation` document and the Storm :ref:`storm-trigger` command for a more detailed discussion of triggers and associated Storm commands... _vars-csvtool:CSVTool-Specific Variables~~~~~~~~~~~~~~~~~~~~~~~~~~Synapse's **CSVTool** is used to ingest (import) data into or export data from a Cortex using comma-separated value (CSV) format. Storm includes a built-in variable to facilitate bulk data ingest using CSV... _vars-csvtool-rows:$rowsThe ``$rows`` variable refers to the set of rows in a CSV file. When ingesting data into a Cortex, CSVTool reads a CSV file and a file containing a Storm query that tells CSVTool how to process the CSV data. The Storm query is typically constructed to iterate over the set of rows (``$rows``) using a "for" loop that uses user-defined variables to reference each field (column) in the CSV data.For example:.. parsed-literal:: for ($var1, $var2, $var3, $var4) in $rows { }See :ref:`syn-tools-csvtool` for a more detailed discussion of CSVTool use and associated Storm syntax... _vars-user:User-Defined Variables++++++++++++++++++++++User-defined variables can be defined in one of two ways:- At runtime (i.e., within the scope of a specific Storm query). This is the most common use for user-defined variables.- Mapped via options passed to the Storm runtime (i.e., when using the ``--optsfile`` option from Synapse cmdr or via Cortex API access). This method is less common. When defined in this manner, user-defined variables will behave as though they are built-in variables that are runtsafe... _vars-names:Variable Names~~~~~~~~~~~~~~All variable names in Storm (including built-in variables) begin with a dollar sign ( ``$`` ). A variable name can be any alphanumeric string, **except for** the name of a built-in variable (see :ref:`vars-builtin`), as those names are reserved. 
Variable names are case-sensitive; the variable ``$MyVar`` is different from ``$myvar``... NOTE:: Storm will not prevent you from using the name of a built-in variable to define a variable (such as ``$node = 7``). However, doing so may result in undesired effects or unexpected errors due to the variable name collision... _vars-define:Defining Variables~~~~~~~~~~~~~~~~~~Within Storm, a user-defined variable is defined using the syntax:.. parsed-literal:: $ = The variable name must be specified first, followed by the equals sign and the value of the variable itself.```` can be:- an explicit value / literal,- a node secondary or universal property,- a tag or tag property,- a built-in variable or method,- a library function,- a mathematical expression / "dollar expression", or- an embedded query.Examples~~~~~~~~Two types of examples are used below:- **Demonstrative example:** the ``$lib.print()`` library function is used to display the value of the user-defined variable being set. This is done for illustrative purposes only; ``$lib.print()`` is not required in order to use variables or methods. Keep Storm's operation chaining, pipeline, and node consumption aspects in mind when reviewing the demonstrative examples below. When using ``$lib.print()`` to display the value of a variable, the queries below will: - Lift the specified node(s). - Assign the variable. Note that assigning a variable has no impact on the nodes themselves. - Print the variable's value. - Return any nodes still in the pipeline. Because variable assignment doesn't impact the node(s), they are not consumed and so are returned (displayed) at the CLI. The effect of this process is that for each node in the Storm query pipeline, the output of ``$lib.print()`` is displayed, followed by the relevant node.- **Use-case example:** the user-defined variable is used in one or more sample queries to illustrate possible practical use cases. These represent exemplar Storm queries for how a variable or method might be used in practice. While we have attempted to use relatively simple examples for clarity, some examples may leverage additional Storm features such as subqueries, subquery filters, or flow control elements such as "for" loops or "switch" statements.*Assign a literal to a user-defined variable:*- Assign the value 5 to the variable ``$threshold``:
###Code
# Define and print test query
q = '$threshold=5 $lib.print($threshold)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
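###Markdown
The runtsafe assignment ``$string = mystring`` mentioned in the discussion of runtime safety above can be demonstrated the same way; this is a minimal variation on the example above, and because the value is a literal, no nodes are lifted or consumed:
###Code
# Define and print test query
q = '$string=mystring $lib.print($string)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes). Zero nodes expected.
podes = await core.eval(q, num=0, cmdr=True)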
###Markdown
- Tag any ``file:bytes`` nodes that have a number of AV signature hits higher than a given threshold for review:
###Code
# Make some nodes
q = '[file:bytes=sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4 it:av:filehit=(sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4, (0bfef0179bf358f3fe7bad67fa529c77, trojan.gen.2)) it:av:filehit=(sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4, (325cd5a01724fa0c63907eac044f4961, trojan.agent/gen-onlinegames)) it:av:filehit=(sha256:0000746c55336cd8d34885545f9347d96607d0391fbd3e76dae7f2b3447775b4, (ac8d9645c6cdf123683a73a02e231052, w32/imestartup.a.gen!eldorado))]'
q1 = '[file:bytes=sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (be9793d772d23269ab0c165af819e74a, troj_gen.r002c0gkj17)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (eef2ccb70945fb28a45c7f14f2a0f11d, malicious.1b8fb7)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (ce4e34d2f9207095aa7351986bbad357, trojan-ddos.win32.stormattack.c)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (ed344310e3203ec4348c4ee549a3b188, "trojan ( 00073eb11 )")) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (f5b5daeda10e487fccc07463d9df6b47, tool.stormattack.win32.10)) it:av:filehit=(sha256:00007694135237ec8dc5234007043814608f239befdfc8a61b992e4d09e0cf3f, (a0f25a5ba637d5c8e7c42911c4336085, trojan/w32.agent.61440.eii))]'
podes = await core.eval(q, num=4, cmdr=False)
podes = await core.eval(q1, num=7, cmdr=False)
# Define and print test query
q = '$threshold=5 file:bytes +{ -> it:av:filehit } >= $threshold [ +#review ]'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a node secondary property to a user-defined variable:*- Assign the ``:user`` property from an Internet-based account (``inet:web:acct``) to the variable ``$user``:
###Code
# Make a node
q = '[inet:web:acct=(twitter.com,bert) :[email protected]]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:web:acct=(twitter.com,bert) $user=:user $lib.print($user)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Find email addresses associated with a set of Internet accounts where the username of the email address is the same as the username of the Internet account:
###Code
# Make another node
q = '[inet:web:acct=(twitter.com,ernie) :[email protected]]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:web:acct $user=:user -> inet:email +:user=$user'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a node universal property to a user-defined variable:*- Assign the ``.seen`` universal property from a DNS A node to the variable ``$time``:
###Code
# Make a node
q = '[inet:dns:a=(woot.com,1.2.3.4) .seen=("2018/11/27 03:28:14","2019/08/15 18:32:47")]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:dns:a=(woot.com,1.2.3.4) $time=.seen $lib.print($time)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
.. NOTE:: In the example above, the raw value of the ``.seen`` property is assigned to the ``$time`` variable. ``.seen`` is an interval (:ref:`type-ival`) type, consisting of a pair of minimum and maximum time values. These values are stored in Unix epoch millis, which are the values shown by the output of the ``$lib.print()`` function. - Given a DNS A record, find other DNS A records that pointed to the same IP address in the same time window:
###Code
# Make some moar nodes
q = '[ ( inet:dns:a=(hurr.net,1.2.3.4) .seen=("2018/12/09 06:02:53","2019/01/03 11:27:01") ) ( inet:dns:a=(derp.org,1.2.3.4) .seen=("2019/09/03 01:11:23","2019/12/14 14:22:00"))]'
podes = await core.eval(q, num=2, cmdr=False)
# Define and print test query
q = 'inet:dns:a=(woot.com,1.2.3.4) $time=.seen -> inet:ipv4 -> inet:dns:a +.seen@=$time'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)
###Output
_____no_output_____
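###Markdown
As noted above, the ``.seen`` values printed by ``$lib.print()`` are Unix epoch millis. The cell below is a plain Python sketch (independent of Storm) showing how the minimum ``.seen`` time from the example node maps to and from that representation:
###Code
# Convert a human-readable UTC time to epoch millis and back again.
import datetime
dt = datetime.datetime(2018, 11, 27, 3, 28, 14, tzinfo=datetime.timezone.utc)
millis = int(dt.timestamp() * 1000)
print(millis)
print(datetime.datetime.fromtimestamp(millis / 1000, tz=datetime.timezone.utc).isoformat())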
###Markdown
*Assign a tag to a user-defined variable:*- Assign the explicit tag value ``cno.infra.anon.tor`` to the variable ``$tortag``:
###Code
# Define and print test query
q = '$tortag=cno.infra.anon.tor $lib.print($tortag)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
###Markdown
- Tag IP addresses that Shodan says are associated with Tor with the ``cno.infra.anon.tor`` tag:
###Code
# Make some nodes
q = '[ inet:ipv4=84.140.90.95 inet:ipv4=54.38.219.150 inet:ipv4=46.105.100.149 +#rep.shodan.tor ]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = '$tortag=cno.infra.anon.tor inet:ipv4#rep.shodan.tor [ +#$tortag ]'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)
###Output
_____no_output_____
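###Markdown
As a quick check (assuming the tag was applied by the query above as expected), the IP addresses can be lifted by form and the new tag:
###Code
# Lift the IPs by form and tag; the count below is the expected number of tagged nodes.
q = 'inet:ipv4#cno.infra.anon.tor'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)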
###Markdown
*Assign a tag timestamp to a user-defined variable:*- Assign the times associated with Threat Group 20’s use of a malicious domain to the variable ``$time``:
###Code
# Make a node
q = '[inet:fqdn=evildomain.com +#cno.threat.t20.tc=(2015/09/08,2017/09/08)]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:fqdn=evildomain.com $time=#cno.threat.t20.tc $lib.print($time)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Find DNS A records for any subdomain associated with a Threat Group 20 zone during the time they controlled the zone:
###Code
# Make some moar nodes
q = '[ (inet:dns:a=(www.evildomain.com,1.2.3.4) .seen=(2016/07/12,2016/12/13)) (inet:dns:a=(smtp.evildomain.com,5.6.7.8) .seen=(2016/04/04,2016/08/02)) (inet:dns:a=(evildomain.com,12.13.14.15) .seen=(2017/12/22,2019/12/22))]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = 'inet:fqdn#cno.threat.t20.tc $time=#cno.threat.t20.tc -> inet:fqdn:zone -> inet:dns:a +.seen@=$time'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a tag property to a user-defined variable:*- Assign the risk value assigned by DomainTools to an FQDN to the variable ``$risk``:
###Code
# Create a custom tag property
await core.core.addTagProp('risk', ('int', {'min': 0, 'max': 100}), {'doc': 'Risk score'})
# Make a node
q = '[inet:fqdn=badsite.org +#rep.domaintools:risk=85]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:fqdn=badsite.org $risk=#rep.domaintools:risk $lib.print($risk)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Given an FQDN with a risk score, find all FQDNs with an equal or higher risk score:
###Code
# Make some moar nodes:
q = '[ (inet:fqdn=stillprettybad.com +#rep.domaintools:risk=92) (inet:fqdn=notsobad.net +#rep.domaintools:risk=67)]'
podes = await core.eval(q, num=2, cmdr=False)
# Define and print test query
q = 'inet:fqdn=badsite.org $risk=#rep.domaintools:risk inet:fqdn#rep.domaintools:risk>=$risk'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)
###Output
_____no_output_____
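###Markdown
The same tag property can also be read across every FQDN that carries it. The sketch below (assuming only the three nodes created above have the tag property) prints each node's ``rep.domaintools:risk`` value, illustrating that ``$risk`` is non-runtsafe and is re-set for each inbound node:
###Code
# Define and print test query
q = 'inet:fqdn#rep.domaintools:risk $risk=#rep.domaintools:risk $lib.print($risk)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes). Three nodes expected.
podes = await core.eval(q, num=3, cmdr=True)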
###Markdown
*Assign a built-in variable to a user-defined variable:*- Assign a ``ps:person`` node to the variable ``$person``:
###Code
# Make a node
q = '[ps:person="0040a7600a7a4b59297a287d11173d5c"]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'ps:person=0040a7600a7a4b59297a287d11173d5c $person=$node $lib.print($person)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- For a given person, find all objects the person "has" and all the news articles that reference that person (uses the Storm :ref:`storm-tee` command):
###Code
# Make some moar nodes:
q = '[ (edge:has=((ps:person,0040a7600a7a4b59297a287d11173d5c),(inet:web:acct,(twitter.com,mytwitter)))) (edge:refs=((media:news,00076a3f20808a14cbaa01ad51111edc),(ps:person,0040a7600a7a4b59297a287d11173d5c)))]'
podes = await core.eval(q, num=2, cmdr=False)
# Define and print test query
q = 'ps:person=0040a7600a7a4b59297a287d11173d5c $person = $node | tee { edge:has:n1=$person -> * } { edge:refs:n2=$person <- * +media:news }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)
###Output
_____no_output_____
###Markdown
.. NOTE:: See the :ref:`meth-node` section of the :ref:`storm-adv-methods` document for additional detail and examples when using the ``$node`` built-in variable and related methods. *Assign a built-in variable method to a user-defined variable:*- Assign the value of a domain node to the variable ``$fqdn``:
###Code
# Make a node:
q = '[ inet:fqdn=mail.mydomain.com ]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'inet:fqdn=mail.mydomain.com $fqdn=$node.value() $lib.print($fqdn)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
- Find the DNS A records associated with a given domain where the PTR record for the IP matches the FQDN:
###Code
# Make some moar nodes:
q = '[ inet:dns:a=(mail.mydomain.com,11.12.13.14) inet:dns:a=(mail.mydomain.com,25.25.25.25) ( inet:ipv4=25.25.25.25 :dns:rev=mail.mydomain.com ) ]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = 'inet:fqdn=mail.mydomain.com $fqdn=$node.value() -> inet:dns:a +{ -> inet:ipv4 +:dns:rev=$fqdn }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
###Markdown
*Assign a library function to a user-defined variable:*- Assign a value to the variable ``$mytag`` using a library function:
###Code
# Define and print test query
q = '$mytag = $lib.str.format("cno.mal.sofacy") $lib.print($mytag)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
###Markdown
- Assign a value to the variable ``$mytag`` using a library function (example 2):
###Code
# Make a node:
q = '[ file:bytes=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 +#code.fam.sofacy ]'
podes = await core.eval(q, num=1, cmdr=False)
# Define and print test query
q = 'file:bytes=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 for $tag in $node.tags(code.fam.*) { $malfam=$tag.split(".").index(2) $mytag=$lib.str.format("cno.mal.{malfam}", malfam=$malfam) $lib.print($mytag) }'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=True)
###Output
_____no_output_____
###Markdown
The above example leverages: - three variables (``$tag``, ``$malfam``, and ``$mytag``); - the :ref:`meth-node-tags` method; - the ``split()`` and ``index()`` string methods and the ``$lib.str.format()`` library function; as well as - a "for" loop. - If a file is tagged as part of a malicious code (malware) family, then also tag the file to indicate it is part of that malware's ecosystem:
###Code
# Define and print test query
q = 'file:bytes=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 for $tag in $node.tags(code.fam.*) { $malfam=$tag.split(".").index(2) $mytag=$lib.str.format("cno.mal.{malfam}", malfam=$malfam) [ +#$mytag ] }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)
###Output
_____no_output_____
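###Markdown
As a quick check (assuming the query above applied the tag as expected), the file can be lifted by the newly created ``cno.mal.sofacy`` tag:
###Code
# Lift the file by form and the tag applied in the previous example; one node expected.
q = 'file:bytes#cno.mal.sofacy'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=1, cmdr=False)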
###Markdown
.. NOTE:: The above query could be written as a **trigger** (:ref:`auto-triggers`) so that any time a ``code.fam.`` tag was applied to a file, the corresponding ``cno.mal.`` tag would be applied automatically. *Use a mathematical expression / "dollar expression" as a variable:*- Use a mathematical expression to increment the variable ``$x``:
###Code
# Define and print test query
q = '$x=5 $x=$($x + 1) $lib.print($x)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=0, cmdr=True)
###Output
_____no_output_____
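###Markdown
Dollar expressions are not limited to incrementing; a minimal sketch using the same syntax performs a multiplication and assigns the result to a second variable:
###Code
# Define and print test query
q = '$x=5 $y=$($x * 2) $lib.print($y)'
q1 = '\n'
print(q + q1)
# Execute the query to test it and get the packed nodes (podes). Zero nodes expected.
podes = await core.eval(q, num=0, cmdr=True)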
###Markdown
- For any domain with a "risk" score from Talos, tag those with a score greater than 75 as "high risk":
###Code
# Make some nodes:
q = '[ ( inet:fqdn=woot.com +#rep.talos:risk=36 ) ( inet:fqdn=derp.net +#rep.talos:risk=78 ) ( inet:fqdn=hurr.org +#rep.talos:risk=92 ) ]'
podes = await core.eval(q, num=3, cmdr=False)
# Define and print test query
q = 'inet:fqdn#rep.talos:risk $risk=#rep.talos:risk if $($risk > 75) { [ +#high.risk ] }'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=3, cmdr=False)
###Output
_____no_output_____
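###Markdown
Given the Talos scores above (78 and 92 exceed the threshold of 75; 36 does not), lifting by the new tag should return two of the three FQDNs:
###Code
# Lift FQDNs by the high.risk tag applied above; the count below is the expected number.
q = 'inet:fqdn#high.risk'
print(q)
# Execute the query to test it and get the packed nodes (podes).
podes = await core.eval(q, num=2, cmdr=False)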
###Markdown
.. NOTE:: In the examples above, the mathematical expressions ``$($x + 1)`` and ``$($risk > 75)`` are not themselves variables, despite starting with a dollar sign ( ``$`` ). The syntax convention of "dollar expression" ( ``$( )`` ) allows Storm to support the use of variables (like ``$x`` and ``$risk``) in mathematical and logical operations. *Assign an embedded query to a user-defined variable:*- TBD
###Code
# Close cortex because done
_ = await core.fini()
###Output
_____no_output_____
_doc/notebooks/exemples/basic_example.ipynb | ###Markdown
Notebook examplesAn example of a matrix with pandas.
###Code
import pandas
df = pandas.DataFrame ( [ dict(x=1.0, y=1.0), dict(x=1.5, y=2)] )
df
###Output
_____no_output_____
###Markdown
Notebook examplesA very simple example that creates a dataframe.
###Code
import numpy
from pandas import DataFrame
df = DataFrame ( [ dict(x=1.0, y=1.0), dict(x=1.5, y=2)] )
df
###Output
_____no_output_____ |
interactive_graph_layout.ipynb | ###Markdown
GraphDDP
###Code
# Myeloid dataset (overridden by the intestine paths below; swap the comments to switch datasets)
target_fname = 'data/myeloid/partition_my.csv'
data_fname = 'data/myeloid/fdata_my.csv'
# Intestine dataset (these later assignments are the ones actually used)
target_fname = 'data/intestine/partition_ADULT.csv'
data_fname = 'data/intestine/fdata_ADULT.csv'
from graph_embed import pre_process
res = pre_process(data_fname,
target_fname,
correlation_transformation=True,
normalization=True,
feature_selection=True,
min_threshold=5)
data_matrix, target, target_dict = res
from display import interactive_widget
w = interactive_widget(data_matrix, target_dict, target)
from IPython.display import display
display(w)
###Output
_____no_output_____ |
cfraud_vs_models.ipynb | ###Markdown
**Credit card Fraud versus a set of Models** _This notebook contains an example of running several models against a credit card fraud dataset pulled from Kaggle._ Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
###Output
_____no_output_____
###Markdown
Load Data I loaded the card fraud data from an S3 bucket on AWS for you.
###Code
import pandas as pd
cfraud=pd.read_csv("https://s3.amazonaws.com/www.ruxton.ai/creditcardfraud.zip")
cfraud.head()
###Output
_____no_output_____
###Markdown
Some Minimal EDA
###Code
from string import ascii_letters
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white")
# Generate a large random dataset
rs = np.random.RandomState(33)
d = pd.DataFrame(data=rs.normal(size=(100, 26)),
columns=list(ascii_letters[26:]))
# Compute the correlation matrix
corr = cfraud.corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=np.bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
use=list(cfraud.columns.values[[1,2,3,4,5,6,7,9,10,11,12,14,16,17,18,19,28]]) # select a subset of the V* predictor columns for this example
print(use)
###Output
['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V9', 'V10', 'V11', 'V12', 'V14', 'V16', 'V17', 'V18', 'V19', 'V28']
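###Markdown
Before fitting anything, it is also worth checking how imbalanced the target is. The sketch below assumes the standard `Class` label column of this dataset (1 = fraud, 0 = legitimate):
###Code
# Count the two classes and report the fraud rate.
class_counts = cfraud['Class'].value_counts()
print(class_counts)
print("Fraud rate: {:.4%}".format(cfraud['Class'].mean()))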
###Markdown
EDA: Before fitting models, you should do EDA. Do you want to add any features as combos of others? Transform data here. That looks awful. Let's try and identify predictors that are intrinsic to the banks' balance sheets. That looks better. Now try some other methods like random forest, SVM, xgboost, decision trees. Try tuning them. Which do you choose?
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from xgboost import XGBClassifier
from sklearn import metrics
h = .02 # step size in the mesh
names = [ "Linear SVM", "Logistic",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA","XGBoost"]
classifiers = [
SVC(kernel="linear", C=0.025),
LogisticRegression(),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis(),
XGBClassifier()]
X, y = make_classification(n_features=5, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
i = 1 # figure counter (an integer, since it is used as a subplot index below)
# preprocess dataset, split into training and test part
X, y = cfraud[use],cfraud["Class"]
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.3, random_state=42)
# iterate over classifiers
for name, clf in zip(names, classifiers):
figure = plt.figure(num=i,figsize=(108, 6))
ax = plt.subplot(1, len(classifiers) + 1, i)
clf.fit(X_train, y_train)
fpr, tpr, _ = metrics.roc_curve(y_test, clf.predict(X_test))
roc_auc = metrics.auc(fpr, tpr)
# Plot of a ROC curve for a specific class
# plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC for '+ name )
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
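###Markdown
One way to act on the "try tuning them" suggestion above is a small grid search over a single model. The sketch below reuses the `X_train`/`y_train` split created earlier; the grid values are illustrative placeholders, not recommendations, and the search may take a while on the full training set:
###Code
# Hedged tuning sketch: a small, illustrative grid over one classifier.
from sklearn.model_selection import GridSearchCV
param_grid = {
    'n_estimators': [50, 100],
    'max_depth': [5, 10],
}
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring='roc_auc',
    cv=3,
    n_jobs=-1,
)
grid.fit(X_train, y_train)
print("Best params:", grid.best_params_)
print("Best CV ROC AUC: {:.4f}".format(grid.best_score_))
# Score the held-out test set with predicted probabilities (better suited to ROC AUC
# than the hard predict() labels used in the comparison loop above).
print("Test ROC AUC: {:.4f}".format(
    metrics.roc_auc_score(y_test, grid.predict_proba(X_test)[:, 1])))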
benchmarks/pad/analysis/pad-ufes-20-analysis.ipynb | ###Markdown
PAD-UFES-20 dataset analysisIn this kernel, we perform some EDA for the PAD-UFES-20 dataset. For more information, please refer to the paper.
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
data = pd.read_csv("/home/patcha/Datasets/PAD-UFES-20/metadata.csv")
data_ = data.replace({'True': 'YES', 'False': 'NO', True: 'YES', False: 'NO'})
###Output
_____no_output_____
###Markdown
StatsGetting the number of samples for each diagnosis and the % of biopsy-proven
###Code
grouped = data.groupby(['diagnostic'])
total = 0
total_bio = 0
for g in grouped:
print("-"*10)
print("Diagnostic:", g[0])
g = g[1]
_total = len(g)
_total_bio = len(g[g['biopsed']])
print("# of samples:", _total)
print("% biopsed:", 100 * (_total_bio/_total))
print("-"*10)
total += _total
total_bio += _total_bio
print("*"*10)
print("# Total:", total)
print("% biopsed:", 100 * (total_bio/total))
###Output
----------
Diagnostic: ACK
# of samples: 730
% biopsed: 24.383561643835616
----------
----------
Diagnostic: BCC
# of samples: 845
% biopsed: 100.0
----------
----------
Diagnostic: MEL
# of samples: 52
% biopsed: 100.0
----------
----------
Diagnostic: NEV
# of samples: 244
% biopsed: 24.59016393442623
----------
----------
Diagnostic: SCC
# of samples: 192
% biopsed: 100.0
----------
----------
Diagnostic: SEK
# of samples: 235
% biopsed: 6.382978723404255
----------
**********
# Total: 2298
% biopsed: 58.39860748476936
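###Markdown
The same per-diagnosis counts and biopsy percentages can be computed more compactly with a groupby aggregation (assuming `biopsed` is a boolean column, as the loop above implies):
###Code
# Compact equivalent of the loop above: samples per diagnosis and % biopsy-proven.
summary = data.groupby('diagnostic')['biopsed'].agg(['size', 'mean'])
summary = summary.rename(columns={'size': 'samples', 'mean': 'pct_biopsed'})
summary['pct_biopsed'] = 100 * summary['pct_biopsed']
print(summary)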
###Markdown
Age distributionChecking the age distribution for all patients and split by gender
###Code
kde = True
male = data[ data['gender'] == 'MALE' ]
female = data[ data['gender'] == 'FEMALE' ]
sns.distplot(data['age'], color="lime", label='All', kde=kde, bins=15)
sns.distplot(male['age'], color="navy", label='Male', kde=kde, bins=15)
sns.distplot(female['age'], color="coral", label='Female', kde=kde, bins=15)
plt.grid(color='black', linestyle='dotted', linewidth=0.7)
plt.xlabel("Age")
plt.legend(['All', 'Male', 'Female'], loc='upper left')
plt.savefig("figures/age_distribution.png", dpi=200)
plt.figure()
###Output
_____no_output_____
###Markdown
Age boxplotsChecking the age boxplots per diagnostic
###Code
sns.boxplot(y='age', x='diagnostic', data=data, palette="Blues_d")
plt.grid(color='black', linestyle='dotted', linewidth=0.7)
plt.xlabel("Diagnostic")
plt.ylabel("Age")
plt.savefig("figures/age_boxplot.png", dpi=200)
plt.figure()
###Output
_____no_output_____
###Markdown
Gender
###Code
x = sns.countplot(x="diagnostic", hue="gender", data=data, palette="Blues_d")
plt.savefig('figures/gender_per_diag.png', dpi = 200)
###Output
_____no_output_____
###Markdown
Fitzpatrick. Checking the Fitzpatrick skin type (stored in the dataset as the fitspatrick column) per diagnostic.
###Code
x = sns.countplot(x="diagnostic", hue="fitspatrick", data=data)
plt.legend(loc='upper right')
plt.savefig('figures/fitspatrick.png', dpi = 200)
###Output
_____no_output_____
###Markdown
Anatomical region. Checking the frequency of each anatomical region per diagnostic.
###Code
x = sns.countplot(x="diagnostic", hue="region", data=data)
plt.legend(loc='right', prop={'size': 7})
plt.tight_layout()
plt.savefig('figures/regions_per_diag.png', dpi = 200)
###Output
_____no_output_____
###Markdown
Checking the frequency of each anatomical region
###Code
x = data.groupby(['region']).count()['diagnostic'].sort_values(ascending=False)
sns.barplot(x.values, x.index, palette="Blues_d", orient='h')
plt.grid(color='black', linestyle='dotted', linewidth=0.7)
plt.xlabel("Frequency")
plt.ylabel("Region")
plt.tight_layout()
plt.savefig("figures/region_frequency.png", dpi=200)
plt.figure()
###Output
_____no_output_____
###Markdown
Family background. Father:
###Code
x = data.groupby(['background_father']).count()['diagnostic'].sort_values(ascending=False)
sns.barplot(x.values, x.index, palette="Blues_d", orient='h')
plt.grid(color='black', linestyle='dotted', linewidth=0.7)
plt.xlabel("Frequency")
plt.ylabel("Region")
plt.savefig("figures/fam_back_father_frequency.png", dpi=200)
plt.figure()
###Output
_____no_output_____
###Markdown
Mother:
###Code
x = data.groupby(['background_mother']).count()['diagnostic'].sort_values(ascending=False)
sns.barplot(x.values, x.index, palette="Blues_d", orient='h')
plt.grid(color='black', linestyle='dotted', linewidth=0.7)
plt.xlabel("Frequency")
plt.ylabel("Region")
plt.savefig("figures/fam_back_mother_frequency.png", dpi=200)
plt.figure()
###Output
_____no_output_____
###Markdown
Diameters
###Code
diam = data.dropna(subset = ['diameter_1', 'diameter_2'])
diam = diam[['diameter_1', 'diameter_2', 'diagnostic']]
g = sns.pairplot(diam, hue="diagnostic")
plt.savefig("figures/diameters.png", dpi=200)
plt.figure()
###Output
_____no_output_____
###Markdown
Boolean features
###Code
_feats = ['smoke', 'drink', 'pesticide', 'skin_cancer_history', 'cancer_history', 'has_piped_water',
'has_sewage_system', 'itch', 'grew', 'hurt', 'changed', 'bleed', 'elevation']
def plot_count (_feat):
sub_data = data_[[_feat, 'diagnostic']]
g = sns.FacetGrid(sub_data, col="diagnostic")
g.map(sns.countplot, _feat, order=['YES', 'NO', 'UNK'], palette="Blues_d")
g.savefig("figures/count_{}.png".format(_feat), dpi=200)
plt.figure()
for _feat in _feats:
plot_count(_feat)
###Output
/home/patcha/.local/lib/python3.6/site-packages/seaborn/axisgrid.py:324: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
fig, axes = plt.subplots(nrow, ncol, **kwargs)
/home/patcha/.local/lib/python3.6/site-packages/ipykernel_launcher.py:9: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
if __name__ == '__main__':
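###Markdown
The warning above is matplotlib complaining about the number of open figures. One way to avoid it (a small sketch, not part of the original analysis) is to close each FacetGrid figure once it has been saved:
###Code
def plot_count_and_close(_feat):
    # same as plot_count above, but releases the figure after saving it
    sub_data = data_[[_feat, 'diagnostic']]
    g = sns.FacetGrid(sub_data, col="diagnostic")
    g.map(sns.countplot, _feat, order=['YES', 'NO', 'UNK'], palette="Blues_d")
    g.savefig("figures/count_{}.png".format(_feat), dpi=200)
    plt.close(g.fig)  # free the figure so matplotlib does not keep it open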
###Markdown
Solo plots
###Code
def solo_plot_count(_feat):
    g = sns.countplot(x="diagnostic", hue=_feat, data=data_, palette="Blues_d")
    plt.legend(loc='upper right')  # place the legend before saving so the saved figure uses it
    plt.savefig('figures/solo_count_{}.png'.format(_feat), dpi=200)
    plt.figure()
for _feat in _feats:
solo_plot_count(_feat)
###Output
_____no_output_____ |
examples/notebooks/solution-data-and-processed-variables.ipynb | ###Markdown
A look at solution data and processed variables. Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# create geometry
geometry = model.default_geometry
# load parameter values and process model and geometry
param = model.default_parameter_values
param.process_model(model)
param.process_geometry(geometry)
# set mesh
mesh = pybamm.Mesh(geometry, model.default_submesh_types, model.default_var_pts)
# discretise model
disc = pybamm.Discretisation(mesh, model.default_spatial_methods)
disc.process_model(model)
# solve model
solver = model.default_solver
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = solver.solve(model, t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
_____no_output_____
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and it has also stored the results for the solution steps.
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
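###Markdown
Since solution.data is simply a dictionary of NumPy arrays keyed by variable name, the purely time-dependent entries can be gathered into a pandas DataFrame for further analysis. This is just an illustrative sketch, not part of the PyBaMM API:
###Code
import pandas as pd

# keep only the variables stored as 1D arrays with one value per solution time
scalar_data = {
    name: values
    for name, values in solution.data.items()
    if values.ndim == 1 and len(values) == len(solution.t)
}
df = pd.DataFrame(scalar_data, index=solution.t)
df.head()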
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
['Active material volume fraction', 'Ambient temperature', 'Ambient temperature [K]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner negative electrode sei concentration [mol.m-3]', 'Inner negative electrode sei interfacial current density', 'Inner negative electrode sei interfacial current density [A.m-2]', 'Inner negative electrode sei thickness', 'Inner negative electrode sei thickness [m]', 'Inner positive electrode sei concentration [mol.m-3]', 'Inner positive electrode sei interfacial current density', 'Inner positive electrode sei interfacial current density [A.m-2]', 'Inner positive electrode sei thickness', 'Inner positive electrode sei thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order active material volume fraction', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode active material volume fraction', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode active material volume fraction', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator active material volume fraction', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode active material volume fraction', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode active material volume fraction', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator active material volume fraction', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to negative electrode sei [mol]', 'Loss of lithium to positive electrode sei [mol]', 'Measured battery open circuit 
voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active volume fraction', 'Negative electrode average extent of lithiation', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'Negative electrode sei film overpotential', 'Negative electrode sei film overpotential [V]', 'Negative electrode sei interfacial current density', 'Negative electrode sei interfacial current density [A.m-2]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative sei concentration [mol.m-3]', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer negative electrode sei 
concentration [mol.m-3]', 'Outer negative electrode sei interfacial current density', 'Outer negative electrode sei interfacial current density [A.m-2]', 'Outer negative electrode sei thickness', 'Outer negative electrode sei thickness [m]', 'Outer positive electrode sei concentration [mol.m-3]', 'Outer positive electrode sei interfacial current density', 'Outer positive electrode sei interfacial current density [A.m-2]', 'Outer positive electrode sei thickness', 'Outer positive electrode sei thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active volume fraction', 'Positive electrode average extent of lithiation', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode sei film overpotential', 'Positive electrode sei film overpotential [V]', 'Positive electrode sei interfacial current density', 'Positive electrode sei interfacial current density [A.m-2]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive 
electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive sei concentration [mol.m-3]', 'Pressure', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator active material volume fraction', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total negative electrode sei thickness', 'Total negative electrode sei thickness [m]', 'Total positive electrode sei thickness', 'Total positive electrode sei thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged 
electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode sei concentration [mol.m-3]', 'X-averaged inner negative electrode sei interfacial current density', 'X-averaged inner negative electrode sei interfacial current density [A.m-2]', 'X-averaged inner negative electrode sei thickness', 'X-averaged inner negative electrode sei thickness [m]', 'X-averaged inner positive electrode sei concentration [mol.m-3]', 'X-averaged inner positive electrode sei interfacial current density', 'X-averaged inner positive electrode sei interfacial current density [A.m-2]', 'X-averaged inner positive electrode sei thickness', 'X-averaged inner positive electrode sei thickness [m]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode sei concentration [mol.m-3]', 'X-averaged negative electrode sei film overpotential', 'X-averaged negative electrode sei film overpotential [V]', 'X-averaged negative electrode sei interfacial current density', 'X-averaged negative electrode sei interfacial current density [A.m-2]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative 
electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode sei concentration [mol.m-3]', 'X-averaged outer negative electrode sei interfacial current density', 'X-averaged outer negative electrode sei interfacial current density [A.m-2]', 'X-averaged outer negative electrode sei thickness', 'X-averaged outer negative electrode sei thickness [m]', 'X-averaged outer positive electrode sei concentration [mol.m-3]', 'X-averaged outer positive electrode sei interfacial current density', 'X-averaged outer positive electrode sei interfacial current density [A.m-2]', 'X-averaged outer positive electrode sei thickness', 'X-averaged outer positive electrode sei thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode sei 
concentration [mol.m-3]', 'X-averaged positive electrode sei film overpotential', 'X-averaged positive electrode sei film overpotential [V]', 'X-averaged positive electrode sei interfacial current density', 'X-averaged positive electrode sei interfacial current density [A.m-2]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged sei film overpotential', 'X-averaged sei film overpotential [V]', 'X-averaged separator active material volume fraction', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode sei thickness', 'X-averaged total negative electrode sei thickness [m]', 'X-averaged total positive electrode sei thickness', 'X-averaged total positive electrode sei thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the method with specified time(s) in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged. Saving the solution. The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab"
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
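###Markdown
The files written above can be read back with standard tools. A minimal sketch (assuming the pickle holds a dictionary of output arrays, as written by save_data):
###Code
import pickle
import pandas as pd
from scipy.io import loadmat

with open("outputs.pickle", "rb") as f:
    pickled_outputs = pickle.load(f)      # dictionary of output arrays

matlab_outputs = loadmat("outputs.mat")   # dict-like container of the saved variables
csv_outputs = pd.read_csv("outputs.csv")  # DataFrame of the time-dependent outputs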
###Markdown
Stepping the solver. This solution was created in one go with the solver's solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_solver = model.default_solver
step_solution = None
while time < end_time:
step_solution = step_solver.step(step_solution, model, dt=dt, npts=2)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77057107 3.71259241]
Time 360
[3.77057107 3.71259241 3.68218316]
Time 720
[3.77057107 3.71259241 3.68218316 3.66126923]
Time 1080
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555]
Time 1440
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633]
Time 1800
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298]
Time 2160
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658]
Time 2520
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658 3.58048923]
Time 2880
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658 3.58048923 3.55051681]
Time 3240
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658 3.58048923 3.55051681 3.14247468]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
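###Markdown
To confirm numerically that the stepped and continuous solves agree, we can interpolate the stepped voltage onto the continuous solution's time points. A small sketch using NumPy only:
###Code
# interpolate the stepped-solve voltage onto the continuous-solve time points
t_cont = solution["Time [h]"].entries
t_step = step_solution["Time [h]"].entries
step_voltage_interp = np.interp(t_cont, t_step, step_voltage)
max_abs_diff = np.max(np.abs(voltage - step_voltage_interp))
max_abs_diff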
###Markdown
A look at solution data and processed variables. Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# create geometry
geometry = model.default_geometry
# load parameter values and process model and geometry
param = model.default_parameter_values
param.process_model(model)
param.process_geometry(geometry)
# set mesh
mesh = pybamm.Mesh(geometry, model.default_submesh_types, model.default_var_pts)
# discretise model
disc = pybamm.Discretisation(mesh, model.default_spatial_methods)
disc.process_model(model)
# solve model
solver = model.default_solver
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = solver.solve(model, t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
_____no_output_____
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and it has also stored the results for the solution steps.
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see, there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the actual data in one of two ways, first by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
Secondly by calling the method with a specific solution time, which is non-dimensional
###Code
solution.t
solution['Time [h]'](solution.t)
###Output
_____no_output_____
###Markdown
And we can also interpolate between the stored solution times:
###Code
interp_t = (solution.t[0] + solution.t[1])/2
solution['Time [h]'](interp_t)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](interp_t)
###Output
_____no_output_____
###Markdown
Saving the solution. The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab"
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
###Markdown
Stepping the solver. This solution was created in one go with the solver's solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_solver = model.default_solver
step_solution = None
while time < end_time:
step_solution = step_solver.step(step_solution, model, dt=dt, npts=2)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77057107 3.71259842]
Time 360
[3.77057107 3.71259842 3.68218919]
Time 720
[3.77057107 3.71259842 3.68218919 3.66127527]
Time 1080
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161]
Time 1440
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241]
Time 1800
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908]
Time 2160
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 ]
Time 2520
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 3.58049537]
Time 2880
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 3.58049537 3.55052297]
Time 3240
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 3.58049537 3.55052297 3.14248086]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"]
step_voltage = step_solution["Terminal voltage [V]"]
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage(solution.t), "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage(step_solution.t), "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
A look at solution data and processed variables. Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and it has also stored the results for the solution steps.
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
['Ambient temperature', 'Ambient temperature [K]', 'Average negative particle concentration', 'Average negative particle concentration [mol.m-3]', 'Average positive particle concentration', 'Average positive particle concentration [mol.m-3]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Change in measured open circuit voltage', 'Change in measured open circuit voltage [V]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner SEI concentration [mol.m-3]', 'Inner SEI interfacial current density', 'Inner SEI interfacial current density [A.m-2]', 'Inner SEI thickness', 'Inner SEI thickness [m]', 'Inner positive electrode SEI concentration [mol.m-3]', 'Inner positive electrode SEI interfacial current density', 'Inner positive electrode SEI interfacial current density [A.m-2]', 'Inner positive electrode SEI thickness', 'Inner positive electrode SEI thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local ECM resistance', 'Local ECM resistance [Ohm]', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to SEI [mol]', 'Loss of lithium to positive electrode SEI [mol]', 'Maximum negative particle concentration', 'Maximum negative particle concentration [mol.m-3]', 'Maximum negative particle surface concentration', 'Maximum negative particle surface concentration [mol.m-3]', 'Maximum positive particle concentration', 'Maximum positive particle concentration [mol.m-3]', 'Maximum 
positive particle surface concentration', 'Maximum positive particle surface concentration [mol.m-3]', 'Measured battery open circuit voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Minimum negative particle concentration', 'Minimum negative particle concentration [mol.m-3]', 'Minimum negative particle surface concentration', 'Minimum negative particle surface concentration [mol.m-3]', 'Minimum positive particle concentration', 'Minimum positive particle concentration [mol.m-3]', 'Minimum positive particle surface concentration', 'Minimum positive particle surface concentration [mol.m-3]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active material volume fraction change', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode extent of lithiation', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'SEI film overpotential', 'SEI film overpotential [V]', 'SEI interfacial current density', 'SEI interfacial current density [A.m-2]', 'Negative electrode surface area to volume ratio', 'Negative electrode surface area to volume ratio [m-1]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative 
electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle radius', 'Negative particle radius [m]', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative SEI concentration [mol.m-3]', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer SEI concentration [mol.m-3]', 'Outer SEI interfacial current density', 'Outer SEI interfacial current density [A.m-2]', 'Outer SEI thickness', 'Outer SEI thickness [m]', 'Outer positive electrode SEI concentration [mol.m-3]', 'Outer positive electrode SEI interfacial current density', 'Outer positive electrode SEI interfacial current density [A.m-2]', 'Outer positive electrode SEI thickness', 'Outer positive electrode SEI thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active material volume fraction change', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode extent of lithiation', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode SEI film overpotential', 'Positive electrode SEI film overpotential [V]', 'Positive electrode SEI interfacial current density', 'Positive electrode SEI interfacial current density [A.m-2]', 'Positive electrode surface area to volume ratio', 'Positive electrode surface area to volume ratio [m-1]', 'Positive electrode surface 
potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle radius', 'Positive particle radius [m]', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive SEI concentration [mol.m-3]', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total concentration in electrolyte [mol]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total lithium in negative electrode [mol]', 'Total lithium in positive electrode [mol]', 'Total SEI thickness', 'Total SEI 
thickness [m]', 'Total positive electrode SEI thickness', 'Total positive electrode SEI thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner SEI concentration [mol.m-3]', 'X-averaged inner SEI interfacial current density', 'X-averaged inner SEI interfacial current density [A.m-2]', 'X-averaged inner SEI thickness', 'X-averaged inner SEI thickness [m]', 'X-averaged inner positive electrode SEI concentration [mol.m-3]', 'X-averaged inner positive electrode SEI interfacial current density', 'X-averaged inner positive electrode SEI interfacial current density [A.m-2]', 'X-averaged inner positive electrode SEI thickness', 'X-averaged inner positive electrode SEI thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode active material volume fraction change', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode extent of lithiation', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode 
oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged SEI concentration [mol.m-3]', 'X-averaged SEI film overpotential', 'X-averaged SEI film overpotential [V]', 'X-averaged SEI interfacial current density', 'X-averaged SEI interfacial current density [A.m-2]', 'X-averaged negative electrode surface area to volume ratio', 'X-averaged negative electrode surface area to volume ratio [m-1]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer SEI concentration [mol.m-3]', 'X-averaged outer SEI interfacial current density', 'X-averaged outer SEI interfacial current density [A.m-2]', 'X-averaged outer SEI thickness', 'X-averaged outer SEI thickness [m]', 'X-averaged outer positive electrode SEI concentration [mol.m-3]', 'X-averaged outer positive electrode SEI interfacial current density', 'X-averaged outer positive electrode SEI interfacial current density [A.m-2]', 'X-averaged outer positive electrode SEI thickness', 'X-averaged outer positive electrode SEI thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode active material volume fraction change', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange 
current density per volume [A.m-3]', 'X-averaged positive electrode extent of lithiation', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode SEI concentration [mol.m-3]', 'X-averaged positive electrode SEI film overpotential', 'X-averaged positive electrode SEI film overpotential [V]', 'X-averaged positive electrode SEI interfacial current density', 'X-averaged positive electrode SEI interfacial current density [A.m-2]', 'X-averaged positive electrode surface area to volume ratio', 'X-averaged positive electrode surface area to volume ratio [m-1]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction 
overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged SEI film overpotential', 'X-averaged SEI film overpotential [V]', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total SEI thickness', 'X-averaged total SEI thickness [m]', 'X-averaged total positive electrode SEI thickness', 'X-averaged total positive electrode SEI thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable, you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
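###Markdown
As a small illustration (an addition, not part of the original notebook), the same fuzzy search works for any other keyword; for example, searching for "voltage" would list the voltage-related variables.
###Code
# hypothetical extra search, using the same `search` method shown above
model.variables.search("voltage")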
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
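###Markdown
A minimal check (an addition, not from the original notebook): since processed variables are cached in solution.data, a simple key lookup confirms that the new entry is there.
###Code
# solution.data behaves like a dictionary, so membership can be tested directly
'Time [h]' in solution.data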
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the processed variable at a specified time (or times), given in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
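###Markdown
As a quick sanity check (not in the original notebook), multiplying the returned values, which are in hours, by 3600 should approximately recover the requested times in seconds.
###Code
# the processed variable returns time in hours; convert back to seconds
solution['Time [h]'](time_in_seconds) * 3600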
###Markdown
If the variable has not already been processed, it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged. Saving the solution: The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
# need to give variable names without space
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab",
short_names={
"Time [h]": "t", "Current [A]": "I", "Terminal voltage [V]": "V", "Electrolyte concentration [mol.m-3]": "c_e",
}
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
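###Markdown
For completeness, here is a minimal sketch (not from the original notebook) of how the saved files could be read back, assuming the pickle file contains a dictionary of arrays, the CSV has one column per variable, and pandas is available.
###Code
import pickle
import pandas as pd

# load the pickled dictionary of output arrays (assumed structure)
with open("outputs.pickle", "rb") as f:
    saved_outputs = pickle.load(f)
print(saved_outputs.keys())

# load the CSV of time-dependent outputs with pandas
df = pd.read_csv("outputs.csv")
print(df.head())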
###Markdown
Stepping the solver: The previous solution was created in one go with the solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250693]
Time 360
[3.77047806 3.71250693 3.68215218]
Time 720
[3.77047806 3.71250693 3.68215218 3.66125574]
Time 1080
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942]
Time 1440
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857]
Time 1800
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
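###Markdown
To back up the visual comparison, a small numerical check (an addition, not in the original notebook) interpolates the continuous solution onto the stepped time points and prints the largest voltage difference between the two solves.
###Code
# interpolate the continuously-solved voltage onto the stepped time points
t_step = step_solution["Time [h]"].entries
voltage_on_step_times = np.interp(t_step, solution["Time [h]"].entries, voltage)
# maximum absolute difference between the continuous and stepped solves
print(np.max(np.abs(voltage_on_step_times - step_voltage)))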
###Markdown
References: The relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Scott G. Marquis, Valentin Sulzer, Robert Timms, Colin P. Please, and S. Jon Chapman. An asymptotic derivation of a single particle model with electrolyte. Journal of The Electrochemical Society, 166(15):A3693–A3706, 2019. doi:10.1149/2.0341915jes.
[4] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj.
###Markdown
A look at solution data and processed variables: Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
WARNING: You are using pip version 20.2.1; however, version 20.2.4 is available.
You should consider upgrading via the '/Users/vsulzer/Documents/Energy_storage/PyBaMM/.tox/dev/bin/python -m pip install --upgrade pip' command.
Note: you may need to restart the kernel to use updated packages.
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First, let's find a few more variables to look at. As you will see, there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative sei concentration [mol.m-3]', 'Negative surface area per unit volume distribution in x', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer negative electrode sei concentration [mol.m-3]', 'Outer negative electrode sei interfacial current density', 'Outer negative electrode sei interfacial current density [A.m-2]', 'Outer negative electrode sei thickness', 'Outer negative electrode sei thickness [m]', 'Outer positive electrode sei concentration [mol.m-3]', 'Outer positive electrode sei interfacial current density', 'Outer positive electrode sei interfacial current density [A.m-2]', 'Outer positive electrode sei thickness', 'Outer positive electrode sei thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active volume fraction', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode extent of lithiation', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode 
oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode sei film overpotential', 'Positive electrode sei film overpotential [V]', 'Positive electrode sei interfacial current density', 'Positive electrode sei interfacial current density [A.m-2]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive sei concentration [mol.m-3]', 'Positive surface area per unit volume distribution in x', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator active material volume fraction', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source 
terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total concentration in electrolyte [mol]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total lithium in negative electrode [mol]', 'Total lithium in positive electrode [mol]', 'Total negative electrode sei thickness', 'Total negative electrode sei thickness [m]', 'Total positive electrode sei thickness', 'Total positive electrode sei thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode sei concentration [mol.m-3]', 'X-averaged inner negative electrode sei interfacial current density', 'X-averaged inner negative electrode sei interfacial current density [A.m-2]', 'X-averaged inner negative electrode sei thickness', 'X-averaged inner negative electrode sei thickness [m]', 'X-averaged inner positive electrode sei concentration [mol.m-3]', 'X-averaged inner positive electrode sei interfacial current density', 'X-averaged inner positive electrode sei interfacial current density [A.m-2]', 'X-averaged inner positive electrode sei thickness', 'X-averaged inner positive electrode sei thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 
'X-averaged negative electrode extent of lithiation', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged negative electrode sei concentration [mol.m-3]', 'X-averaged negative electrode sei film overpotential', 'X-averaged negative electrode sei film overpotential [V]', 'X-averaged negative electrode sei interfacial current density', 'X-averaged negative electrode sei interfacial current density [A.m-2]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode sei concentration [mol.m-3]', 'X-averaged outer negative electrode sei interfacial current density', 'X-averaged 
outer negative electrode sei interfacial current density [A.m-2]', 'X-averaged outer negative electrode sei thickness', 'X-averaged outer negative electrode sei thickness [m]', 'X-averaged outer positive electrode sei concentration [mol.m-3]', 'X-averaged outer positive electrode sei interfacial current density', 'X-averaged outer positive electrode sei interfacial current density [A.m-2]', 'X-averaged outer positive electrode sei thickness', 'X-averaged outer positive electrode sei thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode extent of lithiation', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode sei concentration [mol.m-3]', 'X-averaged positive electrode sei film overpotential', 'X-averaged positive electrode sei film overpotential [V]', 'X-averaged positive electrode sei interfacial current density', 'X-averaged positive electrode sei interfacial current density [A.m-2]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode 
volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged sei film overpotential', 'X-averaged sei film overpotential [V]', 'X-averaged separator active material volume fraction', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode sei thickness', 'X-averaged total negative electrode sei thickness [m]', 'X-averaged total positive electrode sei thickness', 'X-averaged total positive electrode sei thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
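###Markdown
As a quick addition (not in the original notebook), counting the sorted keys gives a sense of just how many variables the model provides.
###Code
# number of model variables available for post-processing
len(keys)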
###Markdown
If you want to find a particular variable, you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the processed variable at a specified time (or times), given in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed, it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged. Saving the solution: The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
# need to give variable names without space
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab",
short_names={
"Time [h]": "t", "Current [A]": "I", "Terminal voltage [V]": "V", "Electrolyte concentration [mol.m-3]": "c_e",
}
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
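###Markdown
A brief sketch (not part of the original notebook) of reading the MATLAB file back into Python, assuming SciPy is available and that the short names given above were used as the stored variable names.
###Code
from scipy.io import loadmat

# load the .mat file written above; the keys are assumed to include t, I, V and c_e
mat_data = loadmat("outputs.mat")
print(mat_data.keys())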
###Markdown
Stepping the solver: The previous solution was created in one go with the solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250683]
Time 360
[3.77047806 3.71250683 3.68215217]
Time 720
[3.77047806 3.71250683 3.68215217 3.66125574]
Time 1080
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 ]
Time 1440
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857]
Time 1800
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
A look at solution data and processed variables: Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
WARNING: You are using pip version 20.2.1; however, version 20.2.4 is available.
You should consider upgrading via the '/Users/vsulzer/Documents/Energy_storage/PyBaMM/.tox/dev/bin/python -m pip install --upgrade pip' command.
Note: you may need to restart the kernel to use updated packages.
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First, let's find a few more variables to look at. As you will see, there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative sei concentration [mol.m-3]', 'Negative surface area to volume ratio distribution in x', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer negative electrode sei concentration [mol.m-3]', 'Outer negative electrode sei interfacial current density', 'Outer negative electrode sei interfacial current density [A.m-2]', 'Outer negative electrode sei thickness', 'Outer negative electrode sei thickness [m]', 'Outer positive electrode sei concentration [mol.m-3]', 'Outer positive electrode sei interfacial current density', 'Outer positive electrode sei interfacial current density [A.m-2]', 'Outer positive electrode sei thickness', 'Outer positive electrode sei thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active volume fraction', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode extent of lithiation', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode 
oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode sei film overpotential', 'Positive electrode sei film overpotential [V]', 'Positive electrode sei interfacial current density', 'Positive electrode sei interfacial current density [A.m-2]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive sei concentration [mol.m-3]', 'Positive surface area to volume ratio distribution in x', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator active material volume fraction', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source 
terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total concentration in electrolyte [mol]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total lithium in negative electrode [mol]', 'Total lithium in positive electrode [mol]', 'Total negative electrode sei thickness', 'Total negative electrode sei thickness [m]', 'Total positive electrode sei thickness', 'Total positive electrode sei thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode sei concentration [mol.m-3]', 'X-averaged inner negative electrode sei interfacial current density', 'X-averaged inner negative electrode sei interfacial current density [A.m-2]', 'X-averaged inner negative electrode sei thickness', 'X-averaged inner negative electrode sei thickness [m]', 'X-averaged inner positive electrode sei concentration [mol.m-3]', 'X-averaged inner positive electrode sei interfacial current density', 'X-averaged inner positive electrode sei interfacial current density [A.m-2]', 'X-averaged inner positive electrode sei thickness', 'X-averaged inner positive electrode sei thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 
'X-averaged negative electrode extent of lithiation', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged negative electrode sei concentration [mol.m-3]', 'X-averaged negative electrode sei film overpotential', 'X-averaged negative electrode sei film overpotential [V]', 'X-averaged negative electrode sei interfacial current density', 'X-averaged negative electrode sei interfacial current density [A.m-2]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode sei concentration [mol.m-3]', 'X-averaged outer negative electrode sei interfacial current density', 'X-averaged 
outer negative electrode sei interfacial current density [A.m-2]', 'X-averaged outer negative electrode sei thickness', 'X-averaged outer negative electrode sei thickness [m]', 'X-averaged outer positive electrode sei concentration [mol.m-3]', 'X-averaged outer positive electrode sei interfacial current density', 'X-averaged outer positive electrode sei interfacial current density [A.m-2]', 'X-averaged outer positive electrode sei thickness', 'X-averaged outer positive electrode sei thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode extent of lithiation', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode sei concentration [mol.m-3]', 'X-averaged positive electrode sei film overpotential', 'X-averaged positive electrode sei film overpotential [V]', 'X-averaged positive electrode sei interfacial current density', 'X-averaged positive electrode sei interfacial current density [A.m-2]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode 
volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged sei film overpotential', 'X-averaged sei film overpotential [V]', 'X-averaged separator active material volume fraction', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode sei thickness', 'X-averaged total negative electrode sei thickness [m]', 'X-averaged total positive electrode sei thickness', 'X-averaged total positive electrode sei thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable, you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the processed variable with specified time(s) in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed, it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
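###Markdown
For an isothermal simulation these values should all coincide; a quick NumPy check (a sketch, reusing var and time_in_seconds from the cell above):
###Code
# All interpolated temperatures should be identical when the model is isothermal
temps = solution[var](time_in_seconds)
print(temps)
print("constant:", np.allclose(temps, temps[0]))
###Output
_____no_output_____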
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged.
Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
# matlab variable names cannot contain spaces, so short names are provided
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab",
short_names={
"Time [h]": "t", "Current [A]": "I", "Terminal voltage [V]": "V", "Electrolyte concentration [mol.m-3]": "c_e",
}
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
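###Markdown
As a quick check, the CSV written above can be read straight back with pandas (a sketch; it assumes pandas is installed and that outputs.csv is the file saved in the previous cell):
###Code
import pandas as pd

# Read the time-dependent outputs back from the CSV written above
df = pd.read_csv("outputs.csv")
print(df.columns.tolist())
print(df.head())
###Output
_____no_output_____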
###Markdown
Stepping the solver
The previous solution was created in one go with the solve method, but it is also possible to step the solver and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250683]
Time 360
[3.77047806 3.71250683 3.68215217]
Time 720
[3.77047806 3.71250683 3.68215217 3.66125574]
Time 1080
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 ]
Time 1440
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857]
Time 1800
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
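###Markdown
To quantify the agreement rather than judging it by eye, both solutions can be evaluated at a common set of times and compared directly (a sketch; the times and the simple maximum-difference metric are illustrative choices):
###Code
# Evaluate both voltage variables at the same times (in seconds) and compare
t_common = np.linspace(0, 3000, 11)
v_continuous = solution["Terminal voltage [V]"](t_common)
v_stepped = step_solution["Terminal voltage [V]"](t_common)
print("max abs difference [V]:", np.max(np.abs(v_continuous - v_stepped)))
###Output
_____no_output_____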
###Markdown
A look at solution data and processed variables
Once you have run a simulation, the first thing you will want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Behind the scenes the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
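###Markdown
To see exactly what has been cached, we can loop over the stored entries and print their shapes (a minimal sketch that only inspects the dictionary shown above):
###Code
# Each cached entry is a NumPy array; its shape shows any spatial dimension alongside time
for name, data in solution.data.items():
    print(name, data.shape)
###Output
_____no_output_____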
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First, let's find a few more variables to look at. As you will see, there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
['Active material volume fraction', 'Ambient temperature', 'Ambient temperature [K]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner negative electrode sei concentration [mol.m-3]', 'Inner negative electrode sei interfacial current density', 'Inner negative electrode sei interfacial current density [A.m-2]', 'Inner negative electrode sei thickness', 'Inner negative electrode sei thickness [m]', 'Inner positive electrode sei concentration [mol.m-3]', 'Inner positive electrode sei interfacial current density', 'Inner positive electrode sei interfacial current density [A.m-2]', 'Inner positive electrode sei thickness', 'Inner positive electrode sei thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order active material volume fraction', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode active material volume fraction', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode active material volume fraction', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator active material volume fraction', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode active material volume fraction', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode active material volume fraction', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator active material volume fraction', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to negative electrode sei [mol]', 'Loss of lithium to positive electrode sei [mol]', 'Measured battery open circuit 
voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active volume fraction', 'Negative electrode average extent of lithiation', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'Negative electrode sei film overpotential', 'Negative electrode sei film overpotential [V]', 'Negative electrode sei interfacial current density', 'Negative electrode sei interfacial current density [A.m-2]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative sei concentration [mol.m-3]', 'Negative surface area per unit volume distribution in x', 'Ohmic heating', 
'Ohmic heating [W.m-3]', 'Outer negative electrode sei concentration [mol.m-3]', 'Outer negative electrode sei interfacial current density', 'Outer negative electrode sei interfacial current density [A.m-2]', 'Outer negative electrode sei thickness', 'Outer negative electrode sei thickness [m]', 'Outer positive electrode sei concentration [mol.m-3]', 'Outer positive electrode sei interfacial current density', 'Outer positive electrode sei interfacial current density [A.m-2]', 'Outer positive electrode sei thickness', 'Outer positive electrode sei thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active volume fraction', 'Positive electrode average extent of lithiation', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode sei film overpotential', 'Positive electrode sei film overpotential [V]', 'Positive electrode sei interfacial current density', 'Positive electrode sei interfacial current density [A.m-2]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode 
volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive sei concentration [mol.m-3]', 'Positive surface area per unit volume distribution in x', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator active material volume fraction', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total negative electrode sei thickness', 'Total negative electrode sei thickness [m]', 'Total positive electrode sei thickness', 'Total positive electrode sei thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged 
reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode sei concentration [mol.m-3]', 'X-averaged inner negative electrode sei interfacial current density', 'X-averaged inner negative electrode sei interfacial current density [A.m-2]', 'X-averaged inner negative electrode sei thickness', 'X-averaged inner negative electrode sei thickness [m]', 'X-averaged inner positive electrode sei concentration [mol.m-3]', 'X-averaged inner positive electrode sei interfacial current density', 'X-averaged inner positive electrode sei interfacial current density [A.m-2]', 'X-averaged inner positive electrode sei thickness', 'X-averaged inner positive electrode sei thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction 
overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged negative electrode sei concentration [mol.m-3]', 'X-averaged negative electrode sei film overpotential', 'X-averaged negative electrode sei film overpotential [V]', 'X-averaged negative electrode sei interfacial current density', 'X-averaged negative electrode sei interfacial current density [A.m-2]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode sei concentration [mol.m-3]', 'X-averaged outer negative electrode sei interfacial current density', 'X-averaged outer negative electrode sei interfacial current density [A.m-2]', 'X-averaged outer negative electrode sei thickness', 'X-averaged outer negative electrode sei thickness [m]', 'X-averaged outer positive electrode sei concentration [mol.m-3]', 'X-averaged outer positive electrode sei interfacial current density', 'X-averaged outer positive electrode sei interfacial current density [A.m-2]', 'X-averaged outer positive electrode sei thickness', 'X-averaged outer positive electrode sei thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen 
interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode sei concentration [mol.m-3]', 'X-averaged positive electrode sei film overpotential', 'X-averaged positive electrode sei film overpotential [V]', 'X-averaged positive electrode sei interfacial current density', 'X-averaged positive electrode sei interfacial current density [A.m-2]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged sei film overpotential', 'X-averaged sei film overpotential [V]', 'X-averaged separator active material volume fraction', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity 
[m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode sei thickness', 'X-averaged total negative electrode sei thickness [m]', 'X-averaged total positive electrode sei thickness', 'X-averaged total positive electrode sei thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable, you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the processed variable with specified time(s) in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
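###Markdown
As a sanity check, converting the interpolated hours back to seconds should recover the times we asked for (a sketch):
###Code
# Hours returned by the processed variable, converted back to seconds
hours = solution['Time [h]'](time_in_seconds)
print(hours * 3600)
print("matches requested times:", np.allclose(hours * 3600, time_in_seconds))
###Output
_____no_output_____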
###Markdown
If the variable has not already been processed, it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged.
Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab"
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
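###Markdown
The pickle written above can be loaded back with the standard library. This is a sketch that assumes the default format stores the selected variables as a dictionary of arrays; it prints the type first in case the format differs:
###Code
import pickle

# Load the data saved above and inspect what came back
with open("outputs.pickle", "rb") as f:
    saved = pickle.load(f)
print(type(saved))
if isinstance(saved, dict):
    print(list(saved.keys()))
###Output
_____no_output_____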
###Markdown
Stepping the solver
The previous solution was created in one go with the solve method, but it is also possible to step the solver and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250683]
Time 360
[3.77047806 3.71250683 3.68215217]
Time 720
[3.77047806 3.71250683 3.68215217 3.66125574]
Time 1080
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 ]
Time 1440
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857]
Time 1800
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250683 3.68215217 3.66125574 3.6433094 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
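###Markdown
Because stepping hands control back to us between steps, we can also stop on a condition instead of a fixed end time, for example when the voltage drops below a cutoff (a sketch; the 3.3 V cutoff, the fresh simulation and the step limit are illustrative choices):
###Code
# Step a fresh simulation until the terminal voltage falls below a cutoff
cutoff_voltage = 3.3   # illustrative cutoff
max_steps = 20         # safety guard so the loop cannot run forever
cutoff_simulation = pybamm.Simulation(model)
cutoff_solution = cutoff_simulation.step(dt)
for _ in range(max_steps):
    if cutoff_solution["Terminal voltage [V]"].entries[-1] < cutoff_voltage:
        break
    cutoff_solution = cutoff_simulation.step(dt)
print("final voltage [V]:", cutoff_solution["Terminal voltage [V]"].entries[-1])
###Output
_____no_output_____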
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
A look at solution data and processed variables
Once you have run a simulation, the first thing you will want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
Note: you may need to restart the kernel to use updated packages.
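###Markdown
QuickPlot can also be given an explicit list of output variables instead of the defaults; a sketch plotting just the voltage and the electrolyte concentration (the choice of variables here is illustrative):
###Code
# Plot a custom selection of variables instead of the default set
custom_plot = pybamm.QuickPlot(
    solution, ["Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
custom_plot.dynamic_plot();
###Output
_____no_output_____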
###Markdown
Behind the scenes the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
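###Markdown
A quick way to relate the cached arrays to the solver output is to compare their trailing dimension with the number of time points (a sketch; it assumes time is the last axis of each stored array):
###Code
# Compare each cached array's last dimension with the number of solver time points
n_t = len(solution.t)
for name, data in solution.data.items():
    print(name, data.shape, data.shape[-1] == n_t)
###Output
_____no_output_____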
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First, let's find a few more variables to look at. As you will see, there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
['Ambient temperature', 'Ambient temperature [K]', 'Average negative particle concentration', 'Average negative particle concentration [mol.m-3]', 'Average positive particle concentration', 'Average positive particle concentration [mol.m-3]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Change in measured open circuit voltage', 'Change in measured open circuit voltage [V]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner negative electrode sei concentration [mol.m-3]', 'Inner negative electrode sei interfacial current density', 'Inner negative electrode sei interfacial current density [A.m-2]', 'Inner negative electrode sei thickness', 'Inner negative electrode sei thickness [m]', 'Inner positive electrode sei concentration [mol.m-3]', 'Inner positive electrode sei interfacial current density', 'Inner positive electrode sei interfacial current density [A.m-2]', 'Inner positive electrode sei thickness', 'Inner positive electrode sei thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local ECM resistance', 'Local ECM resistance [Ohm]', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to negative electrode sei [mol]', 'Loss of lithium to positive electrode sei [mol]', 'Maximum negative particle concentration', 'Maximum negative particle concentration [mol.m-3]', 'Maximum negative particle surface concentration', 'Maximum negative particle surface concentration 
[mol.m-3]', 'Maximum positive particle concentration', 'Maximum positive particle concentration [mol.m-3]', 'Maximum positive particle surface concentration', 'Maximum positive particle surface concentration [mol.m-3]', 'Measured battery open circuit voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Minimum negative particle concentration', 'Minimum negative particle concentration [mol.m-3]', 'Minimum negative particle surface concentration', 'Minimum negative particle surface concentration [mol.m-3]', 'Minimum positive particle concentration', 'Minimum positive particle concentration [mol.m-3]', 'Minimum positive particle surface concentration', 'Minimum positive particle surface concentration [mol.m-3]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active material volume fraction change', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode extent of lithiation', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'Negative electrode sei film overpotential', 'Negative electrode sei film overpotential [V]', 'Negative electrode sei interfacial current density', 'Negative electrode sei interfacial current density [A.m-2]', 'Negative electrode surface area to volume ratio', 'Negative electrode surface area to volume ratio [m-1]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode 
volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle radius', 'Negative particle radius [m]', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative sei concentration [mol.m-3]', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer negative electrode sei concentration [mol.m-3]', 'Outer negative electrode sei interfacial current density', 'Outer negative electrode sei interfacial current density [A.m-2]', 'Outer negative electrode sei thickness', 'Outer negative electrode sei thickness [m]', 'Outer positive electrode sei concentration [mol.m-3]', 'Outer positive electrode sei interfacial current density', 'Outer positive electrode sei interfacial current density [A.m-2]', 'Outer positive electrode sei thickness', 'Outer positive electrode sei thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active material volume fraction change', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode extent of lithiation', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode sei film overpotential', 'Positive 
electrode sei film overpotential [V]', 'Positive electrode sei interfacial current density', 'Positive electrode sei interfacial current density [A.m-2]', 'Positive electrode surface area to volume ratio', 'Positive electrode surface area to volume ratio [m-1]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle radius', 'Positive particle radius [m]', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive sei concentration [mol.m-3]', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time 
[min]', 'Time [s]', 'Total concentration in electrolyte [mol]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total lithium in negative electrode [mol]', 'Total lithium in positive electrode [mol]', 'Total negative electrode sei thickness', 'Total negative electrode sei thickness [m]', 'Total positive electrode sei thickness', 'Total positive electrode sei thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode sei concentration [mol.m-3]', 'X-averaged inner negative electrode sei interfacial current density', 'X-averaged inner negative electrode sei interfacial current density [A.m-2]', 'X-averaged inner negative electrode sei thickness', 'X-averaged inner negative electrode sei thickness [m]', 'X-averaged inner positive electrode sei concentration [mol.m-3]', 'X-averaged inner positive electrode sei interfacial current density', 'X-averaged inner positive electrode sei interfacial current density [A.m-2]', 'X-averaged inner positive electrode sei thickness', 'X-averaged inner positive electrode sei thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode active material volume fraction change', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode extent of lithiation', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative 
electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged negative electrode sei concentration [mol.m-3]', 'X-averaged negative electrode sei film overpotential', 'X-averaged negative electrode sei film overpotential [V]', 'X-averaged negative electrode sei interfacial current density', 'X-averaged negative electrode sei interfacial current density [A.m-2]', 'X-averaged negative electrode surface area to volume ratio', 'X-averaged negative electrode surface area to volume ratio [m-1]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode sei concentration [mol.m-3]', 'X-averaged outer negative electrode sei interfacial current density', 'X-averaged outer negative electrode sei interfacial current density [A.m-2]', 'X-averaged outer negative electrode sei thickness', 'X-averaged outer negative electrode sei thickness [m]', 'X-averaged outer positive electrode sei concentration [mol.m-3]', 
'X-averaged outer positive electrode sei interfacial current density', 'X-averaged outer positive electrode sei interfacial current density [A.m-2]', 'X-averaged outer positive electrode sei thickness', 'X-averaged outer positive electrode sei thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode active material volume fraction change', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode extent of lithiation', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode sei concentration [mol.m-3]', 'X-averaged positive electrode sei film overpotential', 'X-averaged positive electrode sei film overpotential [V]', 'X-averaged positive electrode sei interfacial current density', 'X-averaged positive electrode sei interfacial current density [A.m-2]', 'X-averaged positive electrode surface area to volume ratio', 'X-averaged positive electrode surface area to volume ratio [m-1]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive 
electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged sei film overpotential', 'X-averaged sei film overpotential [V]', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode sei thickness', 'X-averaged total negative electrode sei thickness [m]', 'X-averaged total positive electrode sei thickness', 'X-averaged total positive electrode sei thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable, you can search the variables dictionary.
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the processed variable at specified time(s), given in SI units of seconds.
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed, it will be created behind the scenes.
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
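# Illustrative extra step, not from the original notebook: spatially varying variables
# are processed on demand in the same way. Here we only inspect the shape of the cached
# entries array (assumption: the first axis is space and the second is time).
c_e = solution["Electrolyte concentration [mol.m-3]"]
c_e.entries.shape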
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged.

Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
# need to give variable names without space
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab",
short_names={
"Time [h]": "t", "Current [A]": "I", "Terminal voltage [V]": "V", "Electrolyte concentration [mol.m-3]": "c_e",
}
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
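# Minimal sketch, not from the original notebook, of reading the saved data back in.
# Assumption: the pickle file written above contains a dictionary-like object mapping
# the requested variable names to arrays.
import pickle
with open("outputs.pickle", "rb") as f:
    saved = pickle.load(f)
print(list(saved.keys()))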
###Output
_____no_output_____
###Markdown
Stepping the solver
The previous solution was created in one go with the solve method, but it is also possible to step the simulation and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250693]
Time 360
[3.77047806 3.71250693 3.68215218]
Time 720
[3.77047806 3.71250693 3.68215218 3.66125574]
Time 1080
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942]
Time 1440
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857]
Time 1800
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
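# Optional cosmetic additions (not part of the original notebook): label the axes
# so the comparison between the continuous and stepped solves is easier to read.
plt.xlabel("Time [h]")
plt.ylabel("Terminal voltage [V]")
plt.show()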
###Output
_____no_output_____
###Markdown
References
The relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Scott G. Marquis, Valentin Sulzer, Robert Timms, Colin P. Please, and S. Jon Chapman. An asymptotic derivation of a single particle model with electrolyte. Journal of The Electrochemical Society, 166(15):A3693–A3706, 2019. doi:10.1149/2.0341915jes.
[4] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj.
###Markdown
A look at solution data and processed variables
Once you have run a simulation, the first thing you will want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
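# Illustrative variation, not from the original notebook: QuickPlot can also be given
# an explicit list of output variables (assumption: these names exist in the model, as
# they appear in the variable list printed later in this notebook).
custom_plot = pybamm.QuickPlot(solution, ["Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"])
custom_plot.dynamic_plot();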
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Behind the scenes the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps.
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
['Ambient temperature', 'Ambient temperature [K]', 'Average negative particle concentration', 'Average negative particle concentration [mol.m-3]', 'Average positive particle concentration', 'Average positive particle concentration [mol.m-3]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Change in measured open circuit voltage', 'Change in measured open circuit voltage [V]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner SEI concentration [mol.m-3]', 'Inner SEI interfacial current density', 'Inner SEI interfacial current density [A.m-2]', 'Inner SEI thickness', 'Inner SEI thickness [m]', 'Inner positive electrode SEI concentration [mol.m-3]', 'Inner positive electrode SEI interfacial current density', 'Inner positive electrode SEI interfacial current density [A.m-2]', 'Inner positive electrode SEI thickness', 'Inner positive electrode SEI thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local ECM resistance', 'Local ECM resistance [Ohm]', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to SEI [mol]', 'Loss of lithium to positive electrode SEI [mol]', 'Maximum negative particle concentration', 'Maximum negative particle concentration [mol.m-3]', 'Maximum negative particle surface concentration', 'Maximum negative particle surface concentration [mol.m-3]', 'Maximum positive particle concentration', 'Maximum positive particle concentration [mol.m-3]', 'Maximum 
positive particle surface concentration', 'Maximum positive particle surface concentration [mol.m-3]', 'Measured battery open circuit voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Minimum negative particle concentration', 'Minimum negative particle concentration [mol.m-3]', 'Minimum negative particle surface concentration', 'Minimum negative particle surface concentration [mol.m-3]', 'Minimum positive particle concentration', 'Minimum positive particle concentration [mol.m-3]', 'Minimum positive particle surface concentration', 'Minimum positive particle surface concentration [mol.m-3]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active material volume fraction change', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode extent of lithiation', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'SEI film overpotential', 'SEI film overpotential [V]', 'SEI interfacial current density', 'SEI interfacial current density [A.m-2]', 'Negative electrode surface area to volume ratio', 'Negative electrode surface area to volume ratio [m-1]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative 
electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle radius', 'Negative particle radius [m]', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative SEI concentration [mol.m-3]', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer SEI concentration [mol.m-3]', 'Outer SEI interfacial current density', 'Outer SEI interfacial current density [A.m-2]', 'Outer SEI thickness', 'Outer SEI thickness [m]', 'Outer positive electrode SEI concentration [mol.m-3]', 'Outer positive electrode SEI interfacial current density', 'Outer positive electrode SEI interfacial current density [A.m-2]', 'Outer positive electrode SEI thickness', 'Outer positive electrode SEI thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active material volume fraction change', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode extent of lithiation', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode SEI film overpotential', 'Positive electrode SEI film overpotential [V]', 'Positive electrode SEI interfacial current density', 'Positive electrode SEI interfacial current density [A.m-2]', 'Positive electrode surface area to volume ratio', 'Positive electrode surface area to volume ratio [m-1]', 'Positive electrode surface 
potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle radius', 'Positive particle radius [m]', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive SEI concentration [mol.m-3]', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total concentration in electrolyte [mol]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total lithium in negative electrode [mol]', 'Total lithium in positive electrode [mol]', 'Total SEI thickness', 'Total SEI 
thickness [m]', 'Total positive electrode SEI thickness', 'Total positive electrode SEI thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner SEI concentration [mol.m-3]', 'X-averaged inner SEI interfacial current density', 'X-averaged inner SEI interfacial current density [A.m-2]', 'X-averaged inner SEI thickness', 'X-averaged inner SEI thickness [m]', 'X-averaged inner positive electrode SEI concentration [mol.m-3]', 'X-averaged inner positive electrode SEI interfacial current density', 'X-averaged inner positive electrode SEI interfacial current density [A.m-2]', 'X-averaged inner positive electrode SEI thickness', 'X-averaged inner positive electrode SEI thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode active material volume fraction change', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode extent of lithiation', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode 
oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged SEI concentration [mol.m-3]', 'X-averaged SEI film overpotential', 'X-averaged SEI film overpotential [V]', 'X-averaged SEI interfacial current density', 'X-averaged SEI interfacial current density [A.m-2]', 'X-averaged negative electrode surface area to volume ratio', 'X-averaged negative electrode surface area to volume ratio [m-1]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer SEI concentration [mol.m-3]', 'X-averaged outer SEI interfacial current density', 'X-averaged outer SEI interfacial current density [A.m-2]', 'X-averaged outer SEI thickness', 'X-averaged outer SEI thickness [m]', 'X-averaged outer positive electrode SEI concentration [mol.m-3]', 'X-averaged outer positive electrode SEI interfacial current density', 'X-averaged outer positive electrode SEI interfacial current density [A.m-2]', 'X-averaged outer positive electrode SEI thickness', 'X-averaged outer positive electrode SEI thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode active material volume fraction change', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange 
current density per volume [A.m-3]', 'X-averaged positive electrode extent of lithiation', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode SEI concentration [mol.m-3]', 'X-averaged positive electrode SEI film overpotential', 'X-averaged positive electrode SEI film overpotential [V]', 'X-averaged positive electrode SEI interfacial current density', 'X-averaged positive electrode SEI interfacial current density [A.m-2]', 'X-averaged positive electrode surface area to volume ratio', 'X-averaged positive electrode surface area to volume ratio [m-1]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction 
overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged SEI film overpotential', 'X-averaged SEI film overpotential [V]', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total SEI thickness', 'X-averaged total SEI thickness [m]', 'X-averaged total positive electrode SEI thickness', 'X-averaged total positive electrode SEI thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable, you can search the variables dictionary.
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
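# Quick illustrative check, not from the original notebook (assumption: solution.data
# behaves like a dictionary): the variable processed above is now cached on the solution.
print('Time [h]' in solution.data)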
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the processed variable at specified time(s), given in SI units of seconds.
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed, it will be created behind the scenes.
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged.

Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
# need to give variable names without space
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab",
short_names={
"Time [h]": "t", "Current [A]": "I", "Terminal voltage [V]": "V", "Electrolyte concentration [mol.m-3]": "c_e",
}
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
###Markdown
Stepping the solver
The previous solution was created in one go with the solve method, but it is also possible to step the simulation and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250693]
Time 360
[3.77047806 3.71250693 3.68215218]
Time 720
[3.77047806 3.71250693 3.68215218 3.66125574]
Time 1080
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942]
Time 1440
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857]
Time 1800
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
References
The relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Scott G. Marquis, Valentin Sulzer, Robert Timms, Colin P. Please, and S. Jon Chapman. An asymptotic derivation of a single particle model with electrolyte. Journal of The Electrochemical Society, 166(15):A3693–A3706, 2019. doi:10.1149/2.0341915jes.
[4] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj.
###Markdown
A look at solution data and processed variables
Once you have run a simulation, the first thing you will want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data, and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# create geometry
geometry = model.default_geometry
# load parameter values and process model and geometry
param = model.default_parameter_values
param.process_model(model)
param.process_geometry(geometry)
# set mesh
mesh = pybamm.Mesh(geometry, model.default_submesh_types, model.default_var_pts)
# discretise model
disc = pybamm.Discretisation(mesh, model.default_spatial_methods)
disc.process_model(model)
# solve model
solver = model.default_solver
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = solver.solve(model, t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
_____no_output_____
###Markdown
Behind the scenes the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps.
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
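# Small illustrative addition, not from the original notebook: count how many
# variables the model provides rather than reading the full list.
print(len(keys))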
###Output
['Active material volume fraction', 'Ambient temperature', 'Ambient temperature [K]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner negative electrode sei concentration [mol.m-3]', 'Inner negative electrode sei interfacial current density', 'Inner negative electrode sei interfacial current density [A.m-2]', 'Inner negative electrode sei thickness', 'Inner negative electrode sei thickness [m]', 'Inner positive electrode sei concentration [mol.m-3]', 'Inner positive electrode sei interfacial current density', 'Inner positive electrode sei interfacial current density [A.m-2]', 'Inner positive electrode sei thickness', 'Inner positive electrode sei thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order active material volume fraction', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode active material volume fraction', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode active material volume fraction', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator active material volume fraction', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode active material volume fraction', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode active material volume fraction', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator active material volume fraction', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to negative electrode sei [mol]', 'Loss of lithium to positive electrode sei [mol]', 'Measured battery open circuit 
voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active volume fraction', 'Negative electrode average extent of lithiation', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'Negative electrode sei film overpotential', 'Negative electrode sei film overpotential [V]', 'Negative electrode sei interfacial current density', 'Negative electrode sei interfacial current density [A.m-2]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative sei concentration [mol.m-3]', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer negative electrode sei 
concentration [mol.m-3]', 'Outer negative electrode sei interfacial current density', 'Outer negative electrode sei interfacial current density [A.m-2]', 'Outer negative electrode sei thickness', 'Outer negative electrode sei thickness [m]', 'Outer positive electrode sei concentration [mol.m-3]', 'Outer positive electrode sei interfacial current density', 'Outer positive electrode sei interfacial current density [A.m-2]', 'Outer positive electrode sei thickness', 'Outer positive electrode sei thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active volume fraction', 'Positive electrode average extent of lithiation', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode sei film overpotential', 'Positive electrode sei film overpotential [V]', 'Positive electrode sei interfacial current density', 'Positive electrode sei interfacial current density [A.m-2]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive 
electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive sei concentration [mol.m-3]', 'Pressure', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator active material volume fraction', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time [min]', 'Time [s]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total negative electrode sei thickness', 'Total negative electrode sei thickness [m]', 'Total positive electrode sei thickness', 'Total positive electrode sei thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged 
electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode sei concentration [mol.m-3]', 'X-averaged inner negative electrode sei interfacial current density', 'X-averaged inner negative electrode sei interfacial current density [A.m-2]', 'X-averaged inner negative electrode sei thickness', 'X-averaged inner negative electrode sei thickness [m]', 'X-averaged inner positive electrode sei concentration [mol.m-3]', 'X-averaged inner positive electrode sei interfacial current density', 'X-averaged inner positive electrode sei interfacial current density [A.m-2]', 'X-averaged inner positive electrode sei thickness', 'X-averaged inner positive electrode sei thickness [m]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode sei concentration [mol.m-3]', 'X-averaged negative electrode sei film overpotential', 'X-averaged negative electrode sei film overpotential [V]', 'X-averaged negative electrode sei interfacial current density', 'X-averaged negative electrode sei interfacial current density [A.m-2]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative 
electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode sei concentration [mol.m-3]', 'X-averaged outer negative electrode sei interfacial current density', 'X-averaged outer negative electrode sei interfacial current density [A.m-2]', 'X-averaged outer negative electrode sei thickness', 'X-averaged outer negative electrode sei thickness [m]', 'X-averaged outer positive electrode sei concentration [mol.m-3]', 'X-averaged outer positive electrode sei interfacial current density', 'X-averaged outer positive electrode sei interfacial current density [A.m-2]', 'X-averaged outer positive electrode sei thickness', 'X-averaged outer positive electrode sei thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode sei 
concentration [mol.m-3]', 'X-averaged positive electrode sei film overpotential', 'X-averaged positive electrode sei film overpotential [V]', 'X-averaged positive electrode sei interfacial current density', 'X-averaged positive electrode sei interfacial current density [A.m-2]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged sei film overpotential', 'X-averaged sei film overpotential [V]', 'X-averaged separator active material volume fraction', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode sei thickness', 'X-averaged total negative electrode sei thickness [m]', 'X-averaged total positive electrode sei thickness', 'X-averaged total positive electrode sei thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the method with specified time(s) in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged.
Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab"
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
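###Markdown
As a quick check (not part of the original notebook), the time-series CSV written above can be read back with pandas. This is a minimal sketch, assuming the cell above ran successfully and wrote "outputs.csv" to the current working directory.
###Code
import pandas as pd

# read back the saved time-series outputs and inspect the columns
df = pd.read_csv("outputs.csv")
print(df.columns.tolist())
print(df.head())
###Output
_____no_output_____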
###Markdown
Stepping the solver
This solution was created in one go with the solver's solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_solver = model.default_solver
step_solution = None
while time < end_time:
step_solution = step_solver.step(step_solution, model, dt=dt, npts=2)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77057107 3.71259241]
Time 360
[3.77057107 3.71259241 3.68218316]
Time 720
[3.77057107 3.71259241 3.68218316 3.66126923]
Time 1080
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555]
Time 1440
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633]
Time 1800
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298]
Time 2160
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658]
Time 2520
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658 3.58048923]
Time 2880
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658 3.58048923 3.55051681]
Time 3240
[3.77057107 3.71259241 3.68218316 3.66126923 3.64327555 3.61158633
3.59708298 3.58820658 3.58048923 3.55051681 3.14247468]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
A look at solution data and processed variables
Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# create geometry
geometry = model.default_geometry
# load parameter values and process model and geometry
param = model.default_parameter_values
param.process_model(model)
param.process_geometry(geometry)
# set mesh
mesh = pybamm.Mesh(geometry, model.default_submesh_types, model.default_var_pts)
# discretise model
disc = pybamm.Discretisation(mesh, model.default_spatial_methods)
disc.process_model(model)
# solve model
solver = model.default_solver
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = solver.solve(model, t_eval)
quick_plot = pybamm.QuickPlot(solution)
import ipywidgets as widgets
widgets.interact(quick_plot.plot, t=widgets.FloatSlider(min=0,max=quick_plot.max_t,step=0.05,value=0));
###Output
_____no_output_____
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps
###Code
solution.data.keys()
solution.data['Negative particle surface concentration'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see, there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the actual data in one of two ways: first, by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
Secondly, by calling the method with a specific solution time, which is non-dimensional
###Code
solution.t
solution['Time [h]'](solution.t)
###Output
_____no_output_____
###Markdown
And interpolated
###Code
interp_t = (solution.t[0] + solution.t[1])/2
solution['Time [h]'](interp_t)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](interp_t)
###Output
_____no_output_____
###Markdown
Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab"
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
###Markdown
Stepping the solver
This solution was created in one go with the solver's solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_solver = model.default_solver
step_solution = None
while time < end_time:
step_solution = step_solver.step(step_solution, model, dt=dt, npts=2)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77057107 3.71259842]
Time 360
[3.77057107 3.71259842 3.68218919]
Time 720
[3.77057107 3.71259842 3.68218919 3.66127527]
Time 1080
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161]
Time 1440
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241]
Time 1800
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908]
Time 2160
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 ]
Time 2520
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 3.58049537]
Time 2880
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 3.58049537 3.55052297]
Time 3240
[3.77057107 3.71259842 3.68218919 3.66127527 3.64328161 3.61159241
3.59708908 3.5882127 3.58049537 3.55052297 3.14248086]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"]
step_voltage = step_solution["Terminal voltage [V]"]
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage(solution.t), "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage(step_solution.t), "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
A look at solution data and processed variables
Once you have run a simulation, the first thing you want to do is have a look at the data. Most of the examples so far have made use of PyBaMM's handy QuickPlot function, but there are other ways to access the data and this notebook will explore them. First off, we will generate a standard SPMe model and use QuickPlot to view the default variables.
###Code
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
import os
import matplotlib.pyplot as plt
os.chdir(pybamm.__path__[0]+'/..')
# load model
model = pybamm.lithium_ion.SPMe()
# set up and solve simulation
simulation = pybamm.Simulation(model)
dt = 90
t_eval = np.arange(0, 3600, dt) # time in seconds
solution = simulation.solve(t_eval)
quick_plot = pybamm.QuickPlot(solution)
quick_plot.dynamic_plot();
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Behind the scenes, the QuickPlot class has created some processed variables, which can interpolate the model variables for our solution, and has also stored the results for the solution steps
###Code
solution.data.keys()
solution.data['Negative particle surface concentration [mol.m-3]'].shape
solution.t.shape
###Output
_____no_output_____
###Markdown
Notice that the dictionary keys are in the same order as the subplots in the QuickPlot figure. We can add new processed variables to the solution by simply using it like a dictionary. First let's find a few more variables to look at. As you will see there are quite a few:
###Code
keys = list(model.variables.keys())
keys.sort()
print(keys)
###Output
['Ambient temperature', 'Ambient temperature [K]', 'Average negative particle concentration', 'Average negative particle concentration [mol.m-3]', 'Average positive particle concentration', 'Average positive particle concentration [mol.m-3]', 'Battery voltage [V]', 'C-rate', 'Cell temperature', 'Cell temperature [K]', 'Change in measured open circuit voltage', 'Change in measured open circuit voltage [V]', 'Current [A]', 'Current collector current density', 'Current collector current density [A.m-2]', 'Discharge capacity [A.h]', 'Electrode current density', 'Electrode tortuosity', 'Electrolyte concentration', 'Electrolyte concentration [Molar]', 'Electrolyte concentration [mol.m-3]', 'Electrolyte current density', 'Electrolyte current density [A.m-2]', 'Electrolyte flux', 'Electrolyte flux [mol.m-2.s-1]', 'Electrolyte potential', 'Electrolyte potential [V]', 'Electrolyte tortuosity', 'Exchange current density', 'Exchange current density [A.m-2]', 'Exchange current density per volume [A.m-3]', 'Gradient of electrolyte potential', 'Gradient of negative electrode potential', 'Gradient of negative electrolyte potential', 'Gradient of positive electrode potential', 'Gradient of positive electrolyte potential', 'Gradient of separator electrolyte potential', 'Inner negative electrode SEI concentration [mol.m-3]', 'Inner negative electrode SEI interfacial current density', 'Inner negative electrode SEI interfacial current density [A.m-2]', 'Inner negative electrode SEI thickness', 'Inner negative electrode SEI thickness [m]', 'Inner positive electrode SEI concentration [mol.m-3]', 'Inner positive electrode SEI interfacial current density', 'Inner positive electrode SEI interfacial current density [A.m-2]', 'Inner positive electrode SEI thickness', 'Inner positive electrode SEI thickness [m]', 'Interfacial current density', 'Interfacial current density [A.m-2]', 'Interfacial current density per volume [A.m-3]', 'Irreversible electrochemical heating', 'Irreversible electrochemical heating [W.m-3]', 'Leading-order current collector current density', 'Leading-order electrode tortuosity', 'Leading-order electrolyte tortuosity', 'Leading-order negative electrode porosity', 'Leading-order negative electrode tortuosity', 'Leading-order negative electrolyte tortuosity', 'Leading-order porosity', 'Leading-order positive electrode porosity', 'Leading-order positive electrode tortuosity', 'Leading-order positive electrolyte tortuosity', 'Leading-order separator porosity', 'Leading-order separator tortuosity', 'Leading-order x-averaged negative electrode porosity', 'Leading-order x-averaged negative electrode porosity change', 'Leading-order x-averaged negative electrode tortuosity', 'Leading-order x-averaged negative electrolyte tortuosity', 'Leading-order x-averaged positive electrode porosity', 'Leading-order x-averaged positive electrode porosity change', 'Leading-order x-averaged positive electrode tortuosity', 'Leading-order x-averaged positive electrolyte tortuosity', 'Leading-order x-averaged separator porosity', 'Leading-order x-averaged separator porosity change', 'Leading-order x-averaged separator tortuosity', 'Local ECM resistance', 'Local ECM resistance [Ohm]', 'Local voltage', 'Local voltage [V]', 'Loss of lithium to negative electrode SEI [mol]', 'Loss of lithium to positive electrode SEI [mol]', 'Maximum negative particle concentration', 'Maximum negative particle concentration [mol.m-3]', 'Maximum negative particle surface concentration', 'Maximum negative particle surface concentration 
[mol.m-3]', 'Maximum positive particle concentration', 'Maximum positive particle concentration [mol.m-3]', 'Maximum positive particle surface concentration', 'Maximum positive particle surface concentration [mol.m-3]', 'Measured battery open circuit voltage [V]', 'Measured open circuit voltage', 'Measured open circuit voltage [V]', 'Minimum negative particle concentration', 'Minimum negative particle concentration [mol.m-3]', 'Minimum negative particle surface concentration', 'Minimum negative particle surface concentration [mol.m-3]', 'Minimum positive particle concentration', 'Minimum positive particle concentration [mol.m-3]', 'Minimum positive particle surface concentration', 'Minimum positive particle surface concentration [mol.m-3]', 'Negative current collector potential', 'Negative current collector potential [V]', 'Negative current collector temperature', 'Negative current collector temperature [K]', 'Negative electrode active material volume fraction', 'Negative electrode active material volume fraction change', 'Negative electrode current density', 'Negative electrode current density [A.m-2]', 'Negative electrode entropic change', 'Negative electrode exchange current density', 'Negative electrode exchange current density [A.m-2]', 'Negative electrode exchange current density per volume [A.m-3]', 'Negative electrode extent of lithiation', 'Negative electrode interfacial current density', 'Negative electrode interfacial current density [A.m-2]', 'Negative electrode interfacial current density per volume [A.m-3]', 'Negative electrode ohmic losses', 'Negative electrode ohmic losses [V]', 'Negative electrode open circuit potential', 'Negative electrode open circuit potential [V]', 'Negative electrode oxygen exchange current density', 'Negative electrode oxygen exchange current density [A.m-2]', 'Negative electrode oxygen exchange current density per volume [A.m-3]', 'Negative electrode oxygen interfacial current density', 'Negative electrode oxygen interfacial current density [A.m-2]', 'Negative electrode oxygen interfacial current density per volume [A.m-3]', 'Negative electrode oxygen open circuit potential', 'Negative electrode oxygen open circuit potential [V]', 'Negative electrode oxygen reaction overpotential', 'Negative electrode oxygen reaction overpotential [V]', 'Negative electrode porosity', 'Negative electrode porosity change', 'Negative electrode potential', 'Negative electrode potential [V]', 'Negative electrode pressure', 'Negative electrode reaction overpotential', 'Negative electrode reaction overpotential [V]', 'Negative electrode SEI film overpotential', 'Negative electrode SEI film overpotential [V]', 'Negative electrode SEI interfacial current density', 'Negative electrode SEI interfacial current density [A.m-2]', 'Negative electrode surface area to volume ratio', 'Negative electrode surface area to volume ratio [m-1]', 'Negative electrode surface potential difference', 'Negative electrode surface potential difference [V]', 'Negative electrode temperature', 'Negative electrode temperature [K]', 'Negative electrode tortuosity', 'Negative electrode transverse volume-averaged acceleration', 'Negative electrode transverse volume-averaged acceleration [m.s-2]', 'Negative electrode transverse volume-averaged velocity', 'Negative electrode transverse volume-averaged velocity [m.s-2]', 'Negative electrode volume-averaged acceleration', 'Negative electrode volume-averaged acceleration [m.s-1]', 'Negative electrode volume-averaged concentration', 'Negative electrode 
volume-averaged concentration [mol.m-3]', 'Negative electrode volume-averaged velocity', 'Negative electrode volume-averaged velocity [m.s-1]', 'Negative electrolyte concentration', 'Negative electrolyte concentration [Molar]', 'Negative electrolyte concentration [mol.m-3]', 'Negative electrolyte current density', 'Negative electrolyte current density [A.m-2]', 'Negative electrolyte potential', 'Negative electrolyte potential [V]', 'Negative electrolyte tortuosity', 'Negative particle concentration', 'Negative particle concentration [mol.m-3]', 'Negative particle flux', 'Negative particle radius', 'Negative particle radius [m]', 'Negative particle surface concentration', 'Negative particle surface concentration [mol.m-3]', 'Negative SEI concentration [mol.m-3]', 'Ohmic heating', 'Ohmic heating [W.m-3]', 'Outer negative electrode SEI concentration [mol.m-3]', 'Outer negative electrode SEI interfacial current density', 'Outer negative electrode SEI interfacial current density [A.m-2]', 'Outer negative electrode SEI thickness', 'Outer negative electrode SEI thickness [m]', 'Outer positive electrode SEI concentration [mol.m-3]', 'Outer positive electrode SEI interfacial current density', 'Outer positive electrode SEI interfacial current density [A.m-2]', 'Outer positive electrode SEI thickness', 'Outer positive electrode SEI thickness [m]', 'Oxygen exchange current density', 'Oxygen exchange current density [A.m-2]', 'Oxygen exchange current density per volume [A.m-3]', 'Oxygen interfacial current density', 'Oxygen interfacial current density [A.m-2]', 'Oxygen interfacial current density per volume [A.m-3]', 'Porosity', 'Porosity change', 'Positive current collector potential', 'Positive current collector potential [V]', 'Positive current collector temperature', 'Positive current collector temperature [K]', 'Positive electrode active material volume fraction', 'Positive electrode active material volume fraction change', 'Positive electrode current density', 'Positive electrode current density [A.m-2]', 'Positive electrode entropic change', 'Positive electrode exchange current density', 'Positive electrode exchange current density [A.m-2]', 'Positive electrode exchange current density per volume [A.m-3]', 'Positive electrode extent of lithiation', 'Positive electrode interfacial current density', 'Positive electrode interfacial current density [A.m-2]', 'Positive electrode interfacial current density per volume [A.m-3]', 'Positive electrode ohmic losses', 'Positive electrode ohmic losses [V]', 'Positive electrode open circuit potential', 'Positive electrode open circuit potential [V]', 'Positive electrode oxygen exchange current density', 'Positive electrode oxygen exchange current density [A.m-2]', 'Positive electrode oxygen exchange current density per volume [A.m-3]', 'Positive electrode oxygen interfacial current density', 'Positive electrode oxygen interfacial current density [A.m-2]', 'Positive electrode oxygen interfacial current density per volume [A.m-3]', 'Positive electrode oxygen open circuit potential', 'Positive electrode oxygen open circuit potential [V]', 'Positive electrode oxygen reaction overpotential', 'Positive electrode oxygen reaction overpotential [V]', 'Positive electrode porosity', 'Positive electrode porosity change', 'Positive electrode potential', 'Positive electrode potential [V]', 'Positive electrode pressure', 'Positive electrode reaction overpotential', 'Positive electrode reaction overpotential [V]', 'Positive electrode SEI film overpotential', 'Positive 
electrode SEI film overpotential [V]', 'Positive electrode SEI interfacial current density', 'Positive electrode SEI interfacial current density [A.m-2]', 'Positive electrode surface area to volume ratio', 'Positive electrode surface area to volume ratio [m-1]', 'Positive electrode surface potential difference', 'Positive electrode surface potential difference [V]', 'Positive electrode temperature', 'Positive electrode temperature [K]', 'Positive electrode tortuosity', 'Positive electrode transverse volume-averaged acceleration', 'Positive electrode transverse volume-averaged acceleration [m.s-2]', 'Positive electrode transverse volume-averaged velocity', 'Positive electrode transverse volume-averaged velocity [m.s-2]', 'Positive electrode volume-averaged acceleration', 'Positive electrode volume-averaged acceleration [m.s-1]', 'Positive electrode volume-averaged concentration', 'Positive electrode volume-averaged concentration [mol.m-3]', 'Positive electrode volume-averaged velocity', 'Positive electrode volume-averaged velocity [m.s-1]', 'Positive electrolyte concentration', 'Positive electrolyte concentration [Molar]', 'Positive electrolyte concentration [mol.m-3]', 'Positive electrolyte current density', 'Positive electrolyte current density [A.m-2]', 'Positive electrolyte potential', 'Positive electrolyte potential [V]', 'Positive electrolyte tortuosity', 'Positive particle concentration', 'Positive particle concentration [mol.m-3]', 'Positive particle flux', 'Positive particle radius', 'Positive particle radius [m]', 'Positive particle surface concentration', 'Positive particle surface concentration [mol.m-3]', 'Positive SEI concentration [mol.m-3]', 'Pressure', 'R-averaged negative particle concentration', 'R-averaged negative particle concentration [mol.m-3]', 'R-averaged positive particle concentration', 'R-averaged positive particle concentration [mol.m-3]', 'Reversible heating', 'Reversible heating [W.m-3]', 'Sei interfacial current density', 'Sei interfacial current density [A.m-2]', 'Sei interfacial current density per volume [A.m-3]', 'Separator electrolyte concentration', 'Separator electrolyte concentration [Molar]', 'Separator electrolyte concentration [mol.m-3]', 'Separator electrolyte potential', 'Separator electrolyte potential [V]', 'Separator porosity', 'Separator porosity change', 'Separator pressure', 'Separator temperature', 'Separator temperature [K]', 'Separator tortuosity', 'Separator transverse volume-averaged acceleration', 'Separator transverse volume-averaged acceleration [m.s-2]', 'Separator transverse volume-averaged velocity', 'Separator transverse volume-averaged velocity [m.s-2]', 'Separator volume-averaged acceleration', 'Separator volume-averaged acceleration [m.s-1]', 'Separator volume-averaged velocity', 'Separator volume-averaged velocity [m.s-1]', 'Sum of electrolyte reaction source terms', 'Sum of interfacial current densities', 'Sum of negative electrode electrolyte reaction source terms', 'Sum of negative electrode interfacial current densities', 'Sum of positive electrode electrolyte reaction source terms', 'Sum of positive electrode interfacial current densities', 'Sum of x-averaged negative electrode electrolyte reaction source terms', 'Sum of x-averaged negative electrode interfacial current densities', 'Sum of x-averaged positive electrode electrolyte reaction source terms', 'Sum of x-averaged positive electrode interfacial current densities', 'Terminal power [W]', 'Terminal voltage', 'Terminal voltage [V]', 'Time', 'Time [h]', 'Time 
[min]', 'Time [s]', 'Total concentration in electrolyte [mol]', 'Total current density', 'Total current density [A.m-2]', 'Total heating', 'Total heating [W.m-3]', 'Total lithium in negative electrode [mol]', 'Total lithium in positive electrode [mol]', 'Total negative electrode SEI thickness', 'Total negative electrode SEI thickness [m]', 'Total positive electrode SEI thickness', 'Total positive electrode SEI thickness [m]', 'Transverse volume-averaged acceleration', 'Transverse volume-averaged acceleration [m.s-2]', 'Transverse volume-averaged velocity', 'Transverse volume-averaged velocity [m.s-2]', 'Volume-averaged Ohmic heating', 'Volume-averaged Ohmic heating [W.m-3]', 'Volume-averaged acceleration', 'Volume-averaged acceleration [m.s-1]', 'Volume-averaged cell temperature', 'Volume-averaged cell temperature [K]', 'Volume-averaged irreversible electrochemical heating', 'Volume-averaged irreversible electrochemical heating[W.m-3]', 'Volume-averaged reversible heating', 'Volume-averaged reversible heating [W.m-3]', 'Volume-averaged total heating', 'Volume-averaged total heating [W.m-3]', 'Volume-averaged velocity', 'Volume-averaged velocity [m.s-1]', 'X-averaged Ohmic heating', 'X-averaged Ohmic heating [W.m-3]', 'X-averaged battery concentration overpotential [V]', 'X-averaged battery electrolyte ohmic losses [V]', 'X-averaged battery open circuit voltage [V]', 'X-averaged battery reaction overpotential [V]', 'X-averaged battery solid phase ohmic losses [V]', 'X-averaged cell temperature', 'X-averaged cell temperature [K]', 'X-averaged concentration overpotential', 'X-averaged concentration overpotential [V]', 'X-averaged electrolyte concentration', 'X-averaged electrolyte concentration [Molar]', 'X-averaged electrolyte concentration [mol.m-3]', 'X-averaged electrolyte ohmic losses', 'X-averaged electrolyte ohmic losses [V]', 'X-averaged electrolyte overpotential', 'X-averaged electrolyte overpotential [V]', 'X-averaged electrolyte potential', 'X-averaged electrolyte potential [V]', 'X-averaged inner negative electrode SEI concentration [mol.m-3]', 'X-averaged inner negative electrode SEI interfacial current density', 'X-averaged inner negative electrode SEI interfacial current density [A.m-2]', 'X-averaged inner negative electrode SEI thickness', 'X-averaged inner negative electrode SEI thickness [m]', 'X-averaged inner positive electrode SEI concentration [mol.m-3]', 'X-averaged inner positive electrode SEI interfacial current density', 'X-averaged inner positive electrode SEI interfacial current density [A.m-2]', 'X-averaged inner positive electrode SEI thickness', 'X-averaged inner positive electrode SEI thickness [m]', 'X-averaged irreversible electrochemical heating', 'X-averaged irreversible electrochemical heating [W.m-3]', 'X-averaged negative electrode active material volume fraction', 'X-averaged negative electrode active material volume fraction change', 'X-averaged negative electrode entropic change', 'X-averaged negative electrode exchange current density', 'X-averaged negative electrode exchange current density [A.m-2]', 'X-averaged negative electrode exchange current density per volume [A.m-3]', 'X-averaged negative electrode extent of lithiation', 'X-averaged negative electrode interfacial current density', 'X-averaged negative electrode interfacial current density [A.m-2]', 'X-averaged negative electrode interfacial current density per volume [A.m-3]', 'X-averaged negative electrode ohmic losses', 'X-averaged negative electrode ohmic losses [V]', 'X-averaged negative 
electrode open circuit potential', 'X-averaged negative electrode open circuit potential [V]', 'X-averaged negative electrode oxygen exchange current density', 'X-averaged negative electrode oxygen exchange current density [A.m-2]', 'X-averaged negative electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged negative electrode oxygen interfacial current density', 'X-averaged negative electrode oxygen interfacial current density [A.m-2]', 'X-averaged negative electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged negative electrode oxygen open circuit potential', 'X-averaged negative electrode oxygen open circuit potential [V]', 'X-averaged negative electrode oxygen reaction overpotential', 'X-averaged negative electrode oxygen reaction overpotential [V]', 'X-averaged negative electrode porosity', 'X-averaged negative electrode porosity change', 'X-averaged negative electrode potential', 'X-averaged negative electrode potential [V]', 'X-averaged negative electrode pressure', 'X-averaged negative electrode reaction overpotential', 'X-averaged negative electrode reaction overpotential [V]', 'X-averaged negative electrode resistance [Ohm.m2]', 'X-averaged negative electrode SEI concentration [mol.m-3]', 'X-averaged negative electrode SEI film overpotential', 'X-averaged negative electrode SEI film overpotential [V]', 'X-averaged negative electrode SEI interfacial current density', 'X-averaged negative electrode SEI interfacial current density [A.m-2]', 'X-averaged negative electrode surface area to volume ratio', 'X-averaged negative electrode surface area to volume ratio [m-1]', 'X-averaged negative electrode surface potential difference', 'X-averaged negative electrode surface potential difference [V]', 'X-averaged negative electrode temperature', 'X-averaged negative electrode temperature [K]', 'X-averaged negative electrode tortuosity', 'X-averaged negative electrode total interfacial current density', 'X-averaged negative electrode total interfacial current density [A.m-2]', 'X-averaged negative electrode total interfacial current density per volume [A.m-3]', 'X-averaged negative electrode transverse volume-averaged acceleration', 'X-averaged negative electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged negative electrode transverse volume-averaged velocity', 'X-averaged negative electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged negative electrode volume-averaged acceleration', 'X-averaged negative electrode volume-averaged acceleration [m.s-1]', 'X-averaged negative electrolyte concentration', 'X-averaged negative electrolyte concentration [mol.m-3]', 'X-averaged negative electrolyte potential', 'X-averaged negative electrolyte potential [V]', 'X-averaged negative electrolyte tortuosity', 'X-averaged negative particle concentration', 'X-averaged negative particle concentration [mol.m-3]', 'X-averaged negative particle flux', 'X-averaged negative particle surface concentration', 'X-averaged negative particle surface concentration [mol.m-3]', 'X-averaged open circuit voltage', 'X-averaged open circuit voltage [V]', 'X-averaged outer negative electrode SEI concentration [mol.m-3]', 'X-averaged outer negative electrode SEI interfacial current density', 'X-averaged outer negative electrode SEI interfacial current density [A.m-2]', 'X-averaged outer negative electrode SEI thickness', 'X-averaged outer negative electrode SEI thickness [m]', 'X-averaged outer positive electrode SEI concentration [mol.m-3]', 
'X-averaged outer positive electrode SEI interfacial current density', 'X-averaged outer positive electrode SEI interfacial current density [A.m-2]', 'X-averaged outer positive electrode SEI thickness', 'X-averaged outer positive electrode SEI thickness [m]', 'X-averaged positive electrode active material volume fraction', 'X-averaged positive electrode active material volume fraction change', 'X-averaged positive electrode entropic change', 'X-averaged positive electrode exchange current density', 'X-averaged positive electrode exchange current density [A.m-2]', 'X-averaged positive electrode exchange current density per volume [A.m-3]', 'X-averaged positive electrode extent of lithiation', 'X-averaged positive electrode interfacial current density', 'X-averaged positive electrode interfacial current density [A.m-2]', 'X-averaged positive electrode interfacial current density per volume [A.m-3]', 'X-averaged positive electrode ohmic losses', 'X-averaged positive electrode ohmic losses [V]', 'X-averaged positive electrode open circuit potential', 'X-averaged positive electrode open circuit potential [V]', 'X-averaged positive electrode oxygen exchange current density', 'X-averaged positive electrode oxygen exchange current density [A.m-2]', 'X-averaged positive electrode oxygen exchange current density per volume [A.m-3]', 'X-averaged positive electrode oxygen interfacial current density', 'X-averaged positive electrode oxygen interfacial current density [A.m-2]', 'X-averaged positive electrode oxygen interfacial current density per volume [A.m-3]', 'X-averaged positive electrode oxygen open circuit potential', 'X-averaged positive electrode oxygen open circuit potential [V]', 'X-averaged positive electrode oxygen reaction overpotential', 'X-averaged positive electrode oxygen reaction overpotential [V]', 'X-averaged positive electrode porosity', 'X-averaged positive electrode porosity change', 'X-averaged positive electrode potential', 'X-averaged positive electrode potential [V]', 'X-averaged positive electrode pressure', 'X-averaged positive electrode reaction overpotential', 'X-averaged positive electrode reaction overpotential [V]', 'X-averaged positive electrode resistance [Ohm.m2]', 'X-averaged positive electrode SEI concentration [mol.m-3]', 'X-averaged positive electrode SEI film overpotential', 'X-averaged positive electrode SEI film overpotential [V]', 'X-averaged positive electrode SEI interfacial current density', 'X-averaged positive electrode SEI interfacial current density [A.m-2]', 'X-averaged positive electrode surface area to volume ratio', 'X-averaged positive electrode surface area to volume ratio [m-1]', 'X-averaged positive electrode surface potential difference', 'X-averaged positive electrode surface potential difference [V]', 'X-averaged positive electrode temperature', 'X-averaged positive electrode temperature [K]', 'X-averaged positive electrode tortuosity', 'X-averaged positive electrode total interfacial current density', 'X-averaged positive electrode total interfacial current density [A.m-2]', 'X-averaged positive electrode total interfacial current density per volume [A.m-3]', 'X-averaged positive electrode transverse volume-averaged acceleration', 'X-averaged positive electrode transverse volume-averaged acceleration [m.s-2]', 'X-averaged positive electrode transverse volume-averaged velocity', 'X-averaged positive electrode transverse volume-averaged velocity [m.s-2]', 'X-averaged positive electrode volume-averaged acceleration', 'X-averaged positive 
electrode volume-averaged acceleration [m.s-1]', 'X-averaged positive electrolyte concentration', 'X-averaged positive electrolyte concentration [mol.m-3]', 'X-averaged positive electrolyte potential', 'X-averaged positive electrolyte potential [V]', 'X-averaged positive electrolyte tortuosity', 'X-averaged positive particle concentration', 'X-averaged positive particle concentration [mol.m-3]', 'X-averaged positive particle flux', 'X-averaged positive particle surface concentration', 'X-averaged positive particle surface concentration [mol.m-3]', 'X-averaged reaction overpotential', 'X-averaged reaction overpotential [V]', 'X-averaged reversible heating', 'X-averaged reversible heating [W.m-3]', 'X-averaged SEI film overpotential', 'X-averaged SEI film overpotential [V]', 'X-averaged separator electrolyte concentration', 'X-averaged separator electrolyte concentration [mol.m-3]', 'X-averaged separator electrolyte potential', 'X-averaged separator electrolyte potential [V]', 'X-averaged separator porosity', 'X-averaged separator porosity change', 'X-averaged separator pressure', 'X-averaged separator temperature', 'X-averaged separator temperature [K]', 'X-averaged separator tortuosity', 'X-averaged separator transverse volume-averaged acceleration', 'X-averaged separator transverse volume-averaged acceleration [m.s-2]', 'X-averaged separator transverse volume-averaged velocity', 'X-averaged separator transverse volume-averaged velocity [m.s-2]', 'X-averaged separator volume-averaged acceleration', 'X-averaged separator volume-averaged acceleration [m.s-1]', 'X-averaged solid phase ohmic losses', 'X-averaged solid phase ohmic losses [V]', 'X-averaged total heating', 'X-averaged total heating [W.m-3]', 'X-averaged total negative electrode SEI thickness', 'X-averaged total negative electrode SEI thickness [m]', 'X-averaged total positive electrode SEI thickness', 'X-averaged total positive electrode SEI thickness [m]', 'X-averaged volume-averaged acceleration', 'X-averaged volume-averaged acceleration [m.s-1]', 'r_n', 'r_n [m]', 'r_p', 'r_p [m]', 'x', 'x [m]', 'x_n', 'x_n [m]', 'x_p', 'x_p [m]', 'x_s', 'x_s [m]']
###Markdown
If you want to find a particular variable you can search the variables dictionary
###Code
model.variables.search("time")
###Output
Time
Time [h]
Time [min]
Time [s]
###Markdown
We'll use the time in hours
###Code
solution['Time [h]']
###Output
_____no_output_____
###Markdown
This created a new processed variable and stored it on the solution object
###Code
solution.data.keys()
###Output
_____no_output_____
###Markdown
We can see the data by simply accessing the entries attribute of the processed variable
###Code
solution['Time [h]'].entries
###Output
_____no_output_____
###Markdown
We can also call the method with specified time(s) in SI units of seconds
###Code
time_in_seconds = np.array([0, 600, 900, 1700, 3000 ])
solution['Time [h]'](time_in_seconds)
###Output
_____no_output_____
###Markdown
If the variable has not already been processed it will be created behind the scenes
###Code
var = 'X-averaged negative electrode temperature [K]'
solution[var](time_in_seconds)
###Output
_____no_output_____
###Markdown
In this example the simulation was isothermal, so the temperature remains unchanged.
Saving the solution
The solution can be saved in a number of ways:
###Code
# to a pickle file (default)
solution.save_data(
"outputs.pickle", ["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"]
)
# to a matlab file
# MATLAB variable names cannot contain spaces, so short names are provided
solution.save_data(
"outputs.mat",
["Time [h]", "Current [A]", "Terminal voltage [V]", "Electrolyte concentration [mol.m-3]"],
to_format="matlab",
short_names={
"Time [h]": "t", "Current [A]": "I", "Terminal voltage [V]": "V", "Electrolyte concentration [mol.m-3]": "c_e",
}
)
# to a csv file (time-dependent outputs only, no spatial dependence allowed)
solution.save_data(
"outputs.csv", ["Time [h]", "Current [A]", "Terminal voltage [V]"], to_format="csv"
)
###Output
_____no_output_____
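###Markdown
As an illustration (not part of the original notebook), the MATLAB file written above can be loaded back with SciPy. This is a minimal sketch, assuming SciPy is installed and that the cell above wrote "outputs.mat" using the short names t, I, V and c_e.
###Code
from scipy.io import loadmat

# load the saved .mat file and inspect the short-named arrays
data = loadmat("outputs.mat")
t = data["t"]  # time [h]
V = data["V"]  # terminal voltage [V]
print(t.shape, V.shape)
###Output
_____no_output_____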
###Markdown
Stepping the solver
The previous solution was created in one go with the solve method, but it is also possible to step the solution and look at the results as we go. In doing so, the results are automatically updated at each step.
###Code
dt = 360
time = 0
end_time = solution["Time [s]"].entries[-1]
step_simulation = pybamm.Simulation(model)
while time < end_time:
step_solution = step_simulation.step(dt)
print('Time', time)
print(step_solution["Terminal voltage [V]"].entries)
time += dt
###Output
Time 0
[3.77047806 3.71250693]
Time 360
[3.77047806 3.71250693 3.68215218]
Time 720
[3.77047806 3.71250693 3.68215218 3.66125574]
Time 1080
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942]
Time 1440
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857]
Time 1800
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451]
Time 2160
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334]
Time 2520
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055]
Time 2880
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694]
Time 3240
[3.77047806 3.71250693 3.68215218 3.66125574 3.64330942 3.61166857
3.59709451 3.58821334 3.58056055 3.55158694 3.16842636]
###Markdown
We can plot the voltages and see that the solutions are the same
###Code
voltage = solution["Terminal voltage [V]"].entries
step_voltage = step_solution["Terminal voltage [V]"].entries
plt.figure()
plt.plot(solution["Time [h]"].entries, voltage, "b-", label="SPMe (continuous solve)")
plt.plot(
step_solution["Time [h]"].entries, step_voltage, "ro", label="SPMe (stepped solve)"
)
plt.legend()
###Output
_____no_output_____
###Markdown
References
The relevant papers for this notebook are:
###Code
pybamm.print_citations()
###Output
[1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4.
[2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2.
[3] Scott G. Marquis, Valentin Sulzer, Robert Timms, Colin P. Please, and S. Jon Chapman. An asymptotic derivation of a single particle model with electrolyte. Journal of The Electrochemical Society, 166(15):A3693–A3706, 2019. doi:10.1149/2.0341915jes.
[4] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj.
|
docs/soft/maixpy3/zh/usage/AI_net/number_recognize.ipynb | ###Markdown
Number recognition
| Updated | Maintainer | Changes | Notes |
| --- | --- | --- | --- |
| 2022-01-04 | Rui | First draft of the document | ---- |
| 2022-01-18 | dianjixz | Changed how the document is written | Written as a Jupyter notebook |
| 2022-01-19 | Rui | Revised the document and added result images | Tested on MaixII-Dock with MaixPy3 0.4.0 |
> Content based on Neutree's blog post [Detecting number cards with V831 AI](https://neucrack.com/p/384)
Background
Number recognition comes from problem F of the 2021 National Undergraduate Electronic Design Contest (电赛), the **intelligent medicine-delivery cart**; the **digits to be recognized**, excerpted from the problem statement, are shown below.
Result
Preparation
- Get the model files from [MaixHub](https://www.maixhub.com/modelInfo?modelId=32)
- Make sure the MaixPy3 version is 0.4.0 or later
- The hardware used is MaixII-Dock
- The SD card holds the latest image
- Insert the card and boot the board
Number recognition
Load the model into the Python environment!
###Code
class Number_recognition:
labels = ["1", "2", "3", "4", "5", "6", "7", "8"]
anchors = [2.44, 2.25, 5.03, 4.91, 3.5 , 3.53, 4.16, 3.94, 2.97, 2.84]
model = {
"param": "/root/number_awnn.param",
"bin": "/root/number_awnn.bin"
}
options = {
"model_type": "awnn",
"inputs": {
"input0": (224, 224, 3)
},
"outputs": {
"output0": (7, 7, (1+4+len(labels))*5)
},
"mean": [127.5, 127.5, 127.5],
"norm": [0.0078125, 0.0078125, 0.0078125],
}
w = options["inputs"]["input0"][1]
h = options["inputs"]["input0"][0]
def __init__(self):
from maix import nn
from maix.nn import decoder
self.m = nn.load(self.model, opt=self.options)
self.yolo2_decoder = decoder.Yolo2(len(self.labels), self.anchors, net_in_size=(self.w, self.h), net_out_size=(7, 7))
    def map_face(self, box):  # map box coordinates from the 224x224 model input space to the 240x240 display space
def tran(x):
return int(x/224*240)
box = list(map(tran, box))
return box
global number_recognition
number_recognition = Number_recognition()
print("init over")
###Output
[ rpyc-kernel ]( running at Wed Jan 19 19:32:13 2022 )
init over
###Markdown
Start number recognition
###Code
from maix import camera, display, nn, image
from maix.nn import decoder
import time
while True:
t = time.time()
img = camera.capture()
if not img:
time.sleep(0.01)
continue
AI_img = img.copy().resize(224, 224)
t = time.time()
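    # run inference on the resized 224x224 image (HWC layout, quantized input)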
out = number_recognition.m.forward(AI_img.tobytes(), quantize=True, layout="hwc")
t = time.time()
boxes, probs = number_recognition.yolo2_decoder.run(out, nms=0.3, threshold=0.5, img_size=(240, 240))
t = time.time()
for i, box in enumerate(boxes):
class_id = probs[i][0]
prob = probs[i][1][class_id]
disp_str = "{}:{:.2f}%".format(number_recognition.labels[class_id], prob*100)
font_wh = img.get_string_size(disp_str)
box = number_recognition.map_face(box)
img.draw_rectangle(box[0], box[1], box[0] + box[2], box[1] + box[3], color = (255, 0, 0), thickness=2)
img.draw_rectangle(box[0], box[1] - font_wh[1], box[0] + font_wh[0], box[1], color= (255, 0, 0))
img.draw_string(box[0], box[1] - font_wh[1], disp_str, color= (255, 0, 0))
t = time.time()
display.show(img)
###Output
_____no_output_____
###Markdown
Number recognition
| Updated | Maintainer | Changes | Notes |
| --- | --- | --- | --- |
| 2022-01-04 | Rui | First draft of the document | ---- |
| 2022-01-18 | dianjixz | Changed how the document is written | Written as a Jupyter notebook |
| 2022-01-19 | Rui | Revised the document and added result images | Tested on MaixII-Dock with MaixPy3 0.4.0 |
> Content based on Neutree's blog post [Detecting number cards with V831 AI](https://neucrack.com/p/384)
Background
Number recognition comes from problem F of the 2021 National Undergraduate Electronic Design Contest (电赛), the **intelligent medicine-delivery cart**; the **digits to be recognized**, excerpted from the problem statement, are shown below.
Result
Preparation
- Get the model files from [MaixHub](https://www.maixhub.com/modelInfo?modelId=32) and copy them to a USB drive
- Make sure the MaixPy3 version is 0.4.0 or later
- The hardware used is MaixII-Dock
- The SD card holds the latest image
- Insert the card and boot the board
Number recognition
Load the model into the Python environment!
###Code
class Number_recognition:
labels = ["1", "2", "3", "4", "5", "6", "7", "8"]
anchors = [2.44, 2.25, 5.03, 4.91, 3.5 , 3.53, 4.16, 3.94, 2.97, 2.84]
model = {
"param": "/root/number_awnn.param",
"bin": "/root/number_awnn.bin"
}
options = {
"model_type": "awnn",
"inputs": {
"input0": (224, 224, 3)
},
"outputs": {
"output0": (7, 7, (1+4+len(labels))*5)
},
"mean": [127.5, 127.5, 127.5],
"norm": [0.0078125, 0.0078125, 0.0078125],
}
w = options["inputs"]["input0"][1]
h = options["inputs"]["input0"][0]
def __init__(self):
from maix import nn
from maix.nn import decoder
self.m = nn.load(self.model, opt=self.options)
self.yolo2_decoder = decoder.Yolo2(len(self.labels), self.anchors, net_in_size=(self.w, self.h), net_out_size=(7, 7))
    def map_face(self, box):  # map box coordinates from the 224x224 model input space to the 240x240 display space
def tran(x):
return int(x/224*240)
box = list(map(tran, box))
return box
global number_recognition
number_recognition = Number_recognition()
print("init over")
###Output
[ rpyc-kernel ]( running at Wed Jan 19 19:32:13 2022 )
init over
###Markdown
Run digit recognition
###Code
from maix import camera, display, nn, image
from maix.nn import decoder
import time
while True:
t = time.time()
img = camera.capture()
if not img:
time.sleep(0.01)
continue
AI_img = img.copy().resize(224, 224)
t = time.time()
out = number_recognition.m.forward(AI_img.tobytes(), quantize=True, layout="hwc")
t = time.time()
boxes, probs = number_recognition.yolo2_decoder.run(out, nms=0.3, threshold=0.5, img_size=(240, 240))
t = time.time()
for i, box in enumerate(boxes):
class_id = probs[i][0]
prob = probs[i][1][class_id]
disp_str = "{}:{:.2f}%".format(number_recognition.labels[class_id], prob*100)
font_wh = img.get_string_size(disp_str)
box = number_recognition.map_face(box)
img.draw_rectangle(box[0], box[1], box[0] + box[2], box[1] + box[3], color = (255, 0, 0), thickness=2)
img.draw_rectangle(box[0], box[1] - font_wh[1], box[0] + font_wh[0], box[1], color= (255, 0, 0))
img.draw_string(box[0], box[1] - font_wh[1], disp_str, color= (255, 0, 0))
t = time.time()
display.show(img)
###Output
_____no_output_____ |
tests/practice/hml_10_introduction_to_artificial_neural_networks.ipynb | ###Markdown
**Chapter 10 – Introduction to Artificial Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 10._
###Code
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
###Output
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Using TensorFlow backend.
/srv/venv/lib/python3.6/site-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
if d.decorator_argspec is not None), _inspect.getargspec(target))
###Markdown
Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
###Output
_____no_output_____
###Markdown
Perceptrons
###Code
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(np.int)
per_clf = Perceptron(random_state=42)
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]
axes = [0, 5, 0, 2]
x0, x1 = np.meshgrid(
np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)
save_fig("perceptron_iris_plot")
plt.show()
###Output
Saving figure perceptron_iris_plot
###Markdown
Activation functions
###Code
def logit(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step")
plt.plot(z, logit(z), "g--", linewidth=2, label="Logit")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=2, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(logit, z), "g--", linewidth=2, label="Logit")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
def heaviside(z):
return (z >= 0).astype(z.dtype)
def sigmoid(z):
return 1/(1+np.exp(-z))
def mlp_xor(x1, x2, activation=heaviside):
return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)
z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
###Output
_____no_output_____
###Markdown
FNN for MNIST using tf.learn
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_train = mnist.train.images
X_test = mnist.test.images
y_train = mnist.train.labels.astype("int")
y_test = mnist.test.labels.astype("int")
import tensorflow as tf
config = tf.contrib.learn.RunConfig(tf_random_seed=42) # not shown in the config
feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(X_train)
dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols, config=config)
dnn_clf = tf.contrib.learn.SKCompat(dnn_clf) # if TensorFlow >= 1.1
dnn_clf.fit(X_train, y_train, batch_size=50, steps=40000)
from sklearn.metrics import accuracy_score
y_pred = dnn_clf.predict(X_test)
accuracy_score(y_test, y_pred['classes'])
from sklearn.metrics import log_loss
y_pred_proba = y_pred['probabilities']
log_loss(y_test, y_pred_proba)
###Output
_____no_output_____
###Markdown
Using plain TensorFlow
###Code
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: mnist.validation.images,
y: mnist.validation.labels})
print(epoch, "Train accuracy:", acc_train, "Val accuracy:", acc_val)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = mnist.test.images[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", mnist.test.labels[:20])
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Using `dense()` instead of `neuron_layer()` Note: the book uses `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function, except for a few minor differences:* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.* the default `activation` is now `None` rather than `tf.nn.relu`.* a few more differences are presented in chapter 11.
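###Markdown
As a hedged illustration (not from the book), the renaming boils down to the sketch below; `X_demo`, the layer size, and the names are arbitrary placeholders.
###Code
# TF 1.x sketch of the renamed arguments:
#   old: tf.contrib.layers.fully_connected(X_demo, 300, scope="h1", activation_fn=tf.nn.relu)
#   new: tf.layers.dense(X_demo, 300, name="h1", activation=tf.nn.relu)   # default activation is None
demo_graph = tf.Graph()
with demo_graph.as_default():
    X_demo = tf.placeholder(tf.float32, shape=(None, 20), name="X_demo")
    hidden_demo = tf.layers.dense(X_demo, 300, name="h1_demo", activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
The same MNIST network as before, rebuilt with `tf.layers.dense()`: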
###Code
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Exercise solutions 1. to 8. See appendix A. 9. _Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on)._ First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a `tf.summary.scalar()` to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.
###Code
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int64, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
loss_summary = tf.summary.scalar('log_loss', loss)
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
Now we need to define the directory to write the TensorBoard logs to:
###Code
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
logdir = log_dir("mnist_dnn")
###Output
_____no_output_____
###Markdown
Now we can create the `FileWriter` that we will use to write the TensorBoard logs:
###Code
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Hey! Why don't we implement early stopping? For this, we are going to need a validation set. Luckily, the dataset returned by TensorFlow's `input_data()` function (see above) is already split into a training set (55,000 instances, already shuffled for us), a validation set (5,000 instances) and a test set (10,000 instances). So we can easily define `X_valid` and `y_valid`:
###Code
X_valid = mnist.validation.images
y_valid = mnist.validation.labels
m, n = X_train.shape
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"
best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
file_writer.add_summary(accuracy_summary_str, epoch)
file_writer.add_summary(loss_summary_str, epoch)
if epoch % 5 == 0:
print("Epoch:", epoch,
"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
"\tLoss: {:.5f}".format(loss_val))
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
if loss_val < best_loss:
saver.save(sess, final_model_path)
best_loss = loss_val
else:
epochs_without_progress += 5
if epochs_without_progress > max_epochs_without_progress:
print("Early stopping")
break
os.remove(checkpoint_epoch_path)
with tf.Session() as sess:
saver.restore(sess, final_model_path)
accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
accuracy_val
###Output
_____no_output_____ |
notebooks/Run Self Defined Problem with Modified Model.ipynb | ###Markdown
Default Model ArchitectureBy default, the model only contains BERT model and a dense layer for each problem. If you want to add things between BERT and dense layers, you can modify hidden method of BertMultiTask class. Here's an example of adding a cudnn GRU on top of BERT.
###Code
import tensorflow as tf
from tensorflow import keras
from bert_multitask_learning import (get_or_make_label_encoder, FullTokenizer,
create_single_problem_generator, train_bert_multitask,
eval_bert_multitask, DynamicBatchSizeParams, TRAIN, EVAL, PREDICT, BertMultiTask)
import pickle
import types
import os
cd ../
# define new problem
new_problem_type = {'imdb_cls': 'cls'}
def imdb_cls(params, mode):
tokenizer = FullTokenizer(vocab_file=params.vocab_file)
# get data
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=10000)
    label_encoder = get_or_make_label_encoder(params, 'imdb_cls', mode, list(train_labels) + list(test_labels))
word_to_id = keras.datasets.imdb.get_word_index()
index_from=3
word_to_id = {k:(v+index_from) for k,v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
id_to_word = {value:key for key,value in word_to_id.items()}
train_data = [[id_to_word[i] for i in sentence] for sentence in train_data]
test_data = [[id_to_word[i] for i in sentence] for sentence in test_data]
if mode == TRAIN:
input_list = train_data
target_list = train_labels
else:
input_list = test_data
target_list = test_labels
if mode == PREDICT:
return input_list, target_list, label_encoder
return create_single_problem_generator('imdb_cls', input_list, target_list, label_encoder, params, tokenizer, mode)
new_problem_process_fn_dict = {'imdb_cls': imdb_cls}
# create params and model
params = DynamicBatchSizeParams()
params.init_checkpoint = 'models/cased_L-12_H-768_A-12'
tf.logging.set_verbosity(tf.logging.DEBUG)
model = BertMultiTask(params)
def cudnngru_hidden(self, features, hidden_feature, mode):
# with shape (batch_size, seq_len, hidden_size)
seq_hidden_feature = hidden_feature['seq']
cudnn_gru_layer = tf.keras.layers.CuDNNGRU(
units=self.params.bert_config.hidden_size,
return_sequences=True,
return_state=False,
)
gru_logit = cudnn_gru_layer(seq_hidden_feature)
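    # Note (added): as written, gru_logit is computed but not passed on below; to actually
    # feed the GRU output to the per-problem heads you would likely need to replace
    # hidden_feature['seq'] with gru_logit before calling get_features_for_problem.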
return_features = {}
return_hidden_feature = {}
for problem_dict in self.params.run_problem_list:
for problem in problem_dict:
# for slightly faster training
return_features[problem], return_hidden_feature[problem] = self.get_features_for_problem(
features, hidden_feature, problem, mode)
return return_features, return_hidden_feature
model.hidden = types.MethodType(cudnngru_hidden, model)
# train model
tf.logging.set_verbosity(tf.logging.DEBUG)
train_bert_multitask(problem='imdb_cls', num_gpus=1,
num_epochs=10, params=params,
problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict,
model=model, model_dir='models/ibdm_gru')
# evaluate model
print(eval_bert_multitask(problem='imdb_cls', num_gpus=1,
params=params, eval_scheme='acc',
problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict,
                     model_dir='models/ibdm_gru', model = model))
###Output
Params problem assigned. Problem list: ['imdb_cls']
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:1
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:2
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:3
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:1
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:2
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:3
INFO:tensorflow:Configured nccl all-reduce.
INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Using config: {'_model_dir': 'models/ibdm_gru', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7f1acbec6668>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7f1acbec6668>, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f1acbec6400>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /data3/yjp/bert-multitask-learning/bert_multitask_learning/bert/modeling.py:673: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
DEBUG:tensorflow:Converted call: <function stop_grad at 0x7f1acc1c6ea0>; owner: None
DEBUG:tensorflow:Converting <function stop_grad at 0x7f1acc1c6ea0>
DEBUG:tensorflow:Compiled output of <function stop_grad at 0x7f1acc1c6ea0>:
def stop_grad(global_step, tensor, freeze_step):
try:
with ag__.function_scope('stop_grad'):
cond_1 = ag__.gt(freeze_step, 0)
def if_true_1():
with ag__.function_scope('if_true_1'):
tensor_2, = tensor,
cond = ag__.lt_e(global_step, freeze_step)
def if_true():
with ag__.function_scope('if_true'):
tensor_1, = tensor_2,
tensor_1 = tf.stop_gradient(tensor_1)
return tensor_1
def if_false():
with ag__.function_scope('if_false'):
return tensor_2
tensor_2 = ag__.if_stmt(cond, if_true, if_false)
return tensor_2
def if_false_1():
with ag__.function_scope('if_false_1'):
return tensor
tensor = ag__.if_stmt(cond_1, if_true_1, if_false_1)
return tensor
except:
ag__.rewrite_graph_construction_error(ag_source_map__)
stop_grad.autograph_info__ = {}
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from models/ibdm_gru/model.ckpt-7812
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
###Markdown
Default Model ArchitectureBy default, the model only contains BERT model and a dense layer for each problem. If you want to add things between BERT and dense layers, you can modify hidden method of BertMultiTask class. Here's an example of adding a cudnn GRU on top of BERT.
###Code
import tensorflow as tf
from tensorflow import keras
from bert_multitask_learning import (get_or_make_label_encoder, FullTokenizer,
create_single_problem_generator, train_bert_multitask,
eval_bert_multitask, DynamicBatchSizeParams, TRAIN, EVAL, PREDICT, BertMultiTask,preprocessing_fn)
import pickle
import types
import os
cd ../
# define new problem
new_problem_type = {'imdb_cls': 'cls'}
@preprocessing_fn
def imdb_cls(params, mode):
# get data
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=10000)
    label_encoder = get_or_make_label_encoder(params, 'imdb_cls', mode, list(train_labels) + list(test_labels))
word_to_id = keras.datasets.imdb.get_word_index()
index_from=3
word_to_id = {k:(v+index_from) for k,v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
id_to_word = {value:key for key,value in word_to_id.items()}
train_data = [[id_to_word[i] for i in sentence] for sentence in train_data]
test_data = [[id_to_word[i] for i in sentence] for sentence in test_data]
if mode == TRAIN:
input_list = train_data
target_list = train_labels
else:
input_list = test_data
target_list = test_labels
return input_list, target_list
new_problem_process_fn_dict = {'imdb_cls': imdb_cls}
# create params and model
params = DynamicBatchSizeParams()
params.init_checkpoint = 'models/cased_L-12_H-768_A-12'
tf.logging.set_verbosity(tf.logging.DEBUG)
model = BertMultiTask(params)
def cudnngru_hidden(self, features, hidden_feature, mode):
# with shape (batch_size, seq_len, hidden_size)
seq_hidden_feature = hidden_feature['seq']
cudnn_gru_layer = tf.keras.layers.CuDNNGRU(
units=self.params.bert_config.hidden_size,
return_sequences=True,
return_state=False,
)
gru_logit = cudnn_gru_layer(seq_hidden_feature)
return_features = {}
return_hidden_feature = {}
for problem_dict in self.params.run_problem_list:
for problem in problem_dict:
# for slightly faster training
return_features[problem], return_hidden_feature[problem] = self.get_features_for_problem(
features, hidden_feature, problem, mode)
return return_features, return_hidden_feature
model.hidden = types.MethodType(cudnngru_hidden, model)
# train model
tf.logging.set_verbosity(tf.logging.DEBUG)
train_bert_multitask(problem='imdb_cls', num_gpus=1,
num_epochs=10, params=params,
problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict,
model=model, model_dir='models/ibdm_gru')
# evaluate model
print(eval_bert_multitask(problem='imdb_cls', num_gpus=1,
params=params, eval_scheme='acc',
problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict,
                     model_dir='models/ibdm_gru', model = model))
###Output
Params problem assigned. Problem list: ['imdb_cls']
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:1
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:2
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:3
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:1
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:2
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:3
INFO:tensorflow:Configured nccl all-reduce.
INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Using config: {'_model_dir': 'models/ibdm_gru', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7f1acbec6668>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7f1acbec6668>, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f1acbec6400>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /data3/yjp/bert-multitask-learning/bert_multitask_learning/bert/modeling.py:673: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
DEBUG:tensorflow:Converted call: <function stop_grad at 0x7f1acc1c6ea0>; owner: None
DEBUG:tensorflow:Converting <function stop_grad at 0x7f1acc1c6ea0>
DEBUG:tensorflow:Compiled output of <function stop_grad at 0x7f1acc1c6ea0>:
def stop_grad(global_step, tensor, freeze_step):
try:
with ag__.function_scope('stop_grad'):
cond_1 = ag__.gt(freeze_step, 0)
def if_true_1():
with ag__.function_scope('if_true_1'):
tensor_2, = tensor,
cond = ag__.lt_e(global_step, freeze_step)
def if_true():
with ag__.function_scope('if_true'):
tensor_1, = tensor_2,
tensor_1 = tf.stop_gradient(tensor_1)
return tensor_1
def if_false():
with ag__.function_scope('if_false'):
return tensor_2
tensor_2 = ag__.if_stmt(cond, if_true, if_false)
return tensor_2
def if_false_1():
with ag__.function_scope('if_false_1'):
return tensor
tensor = ag__.if_stmt(cond_1, if_true_1, if_false_1)
return tensor
except:
ag__.rewrite_graph_construction_error(ag_source_map__)
stop_grad.autograph_info__ = {}
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from models/ibdm_gru/model.ckpt-7812
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
###Markdown
Default Model Architecture**Modified model is not supported yet. This notebook DOES NOT work now.**By default, the model only contains BERT model and a dense layer for each problem. If you want to add things between BERT and dense layers, you can modify hidden method of BertMultiTask class. Here's an example of adding a cudnn GRU on top of BERT.
###Code
import tensorflow as tf
from tensorflow import keras
from bert_multitask_learning import (get_or_make_label_encoder, FullTokenizer,
create_single_problem_generator, train_bert_multitask,
eval_bert_multitask, DynamicBatchSizeParams, TRAIN, EVAL, PREDICT, BertMultiTask,preprocessing_fn)
import pickle
import types
import os
cd ../
# define new problem
new_problem_type = {'imdb_cls': 'cls'}
@preprocessing_fn
def imdb_cls(params, mode):
# get data
(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=10000)
    label_encoder = get_or_make_label_encoder(params, 'imdb_cls', mode, list(train_labels) + list(test_labels))
word_to_id = keras.datasets.imdb.get_word_index()
index_from=3
word_to_id = {k:(v+index_from) for k,v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
id_to_word = {value:key for key,value in word_to_id.items()}
train_data = [[id_to_word[i] for i in sentence] for sentence in train_data]
test_data = [[id_to_word[i] for i in sentence] for sentence in test_data]
if mode == TRAIN:
input_list = train_data
target_list = train_labels
else:
input_list = test_data
target_list = test_labels
return input_list, target_list
new_problem_process_fn_dict = {'imdb_cls': imdb_cls}
# create params and model
params = DynamicBatchSizeParams()
params.init_checkpoint = 'models/cased_L-12_H-768_A-12'
tf.logging.set_verbosity(tf.logging.DEBUG)
model = BertMultiTask(params)
def cudnngru_hidden(self, features, hidden_feature, mode):
# with shape (batch_size, seq_len, hidden_size)
seq_hidden_feature = hidden_feature['seq']
cudnn_gru_layer = tf.keras.layers.CuDNNGRU(
units=self.params.bert_config.hidden_size,
return_sequences=True,
return_state=False,
)
gru_logit = cudnn_gru_layer(seq_hidden_feature)
return_features = {}
return_hidden_feature = {}
for problem_dict in self.params.run_problem_list:
for problem in problem_dict:
# for slightly faster training
return_features[problem], return_hidden_feature[problem] = self.get_features_for_problem(
features, hidden_feature, problem, mode)
return return_features, return_hidden_feature
model.hidden = types.MethodType(cudnngru_hidden, model)
# train model
tf.logging.set_verbosity(tf.logging.DEBUG)
train_bert_multitask(problem='imdb_cls', num_gpus=1,
num_epochs=10, params=params,
problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict,
model=model, model_dir='models/ibdm_gru')
# evaluate model
print(eval_bert_multitask(problem='imdb_cls', num_gpus=1,
params=params, eval_scheme='acc',
problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict,
                     model_dir='models/ibdm_gru', model = model))
###Output
Params problem assigned. Problem list: ['imdb_cls']
INFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:1
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:2
INFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:3
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:1
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:2
INFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:3
INFO:tensorflow:Configured nccl all-reduce.
INFO:tensorflow:Initializing RunConfig with distribution strategies.
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Using config: {'_model_dir': 'models/ibdm_gru', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7f1acbec6668>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7f1acbec6668>, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f1acbec6400>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /data3/yjp/bert-multitask-learning/bert_multitask_learning/bert/modeling.py:673: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
DEBUG:tensorflow:Converted call: <function stop_grad at 0x7f1acc1c6ea0>; owner: None
DEBUG:tensorflow:Converting <function stop_grad at 0x7f1acc1c6ea0>
DEBUG:tensorflow:Compiled output of <function stop_grad at 0x7f1acc1c6ea0>:
def stop_grad(global_step, tensor, freeze_step):
try:
with ag__.function_scope('stop_grad'):
cond_1 = ag__.gt(freeze_step, 0)
def if_true_1():
with ag__.function_scope('if_true_1'):
tensor_2, = tensor,
cond = ag__.lt_e(global_step, freeze_step)
def if_true():
with ag__.function_scope('if_true'):
tensor_1, = tensor_2,
tensor_1 = tf.stop_gradient(tensor_1)
return tensor_1
def if_false():
with ag__.function_scope('if_false'):
return tensor_2
tensor_2 = ag__.if_stmt(cond, if_true, if_false)
return tensor_2
def if_false_1():
with ag__.function_scope('if_false_1'):
return tensor
tensor = ag__.if_stmt(cond_1, if_true_1, if_false_1)
return tensor
except:
ag__.rewrite_graph_construction_error(ag_source_map__)
stop_grad.autograph_info__ = {}
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
WARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from models/ibdm_gru/model.ckpt-7812
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
|
03-machine-learning-tabular-crossection/14 - Pipelines/pipelines-1.ipynb | ###Markdown
Pipelines to Automate Machine Learning Workflows There are standard workflows in applied machine learning — standard because they overcome common problems such as data leakage into your test harness. Python's scikit-learn provides a Pipeline utility to help automate machine learning workflows. Pipelines work by allowing a linear sequence of data transformations to be chained together, ending in a modeling step that can be evaluated. The goal is to ensure that every step of the pipeline is restricted to the data available for that evaluation, such as the training dataset or each fold of the cross-validation procedure. You can learn more about Pipelines in scikit-learn by reading the Pipeline section of the user guide, and by reviewing the API documentation for the Pipeline and FeatureUnion classes in the pipeline module. Pipeline 1: data preparation and modeling An easy trap to fall into in applied machine learning is leaking data from the training dataset into the test dataset. To avoid this trap you need a robust test harness with a strong separation between training and testing — and that includes data preparation. Data preparation is an easy way to leak knowledge of the whole training dataset to the algorithm. For example, normalizing or standardizing your data on the entire training dataset before learning would not be a valid test, because the training dataset would have been influenced by the scale of the data in the test set (see the leaky anti-pattern sketched below). Pipelines help prevent this leakage in your test harness by ensuring that data preparation such as standardization is constrained to each fold of the cross-validation procedure. The example below demonstrates this important data preparation and model evaluation workflow. The pipeline is defined with two steps: standardize the data, then learn a Linear Discriminant Analysis model. The pipeline is then evaluated using 10-fold cross-validation.
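###Markdown
As a hedged aside (not from the original article), the cell below sketches the leaky anti-pattern on synthetic stand-in data: the scaler is fit on the full dataset before cross-validation, so every evaluation fold has already influenced the preprocessing.
###Code
# Leaky anti-pattern (illustrative only, synthetic data): do NOT do this.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
rng = np.random.RandomState(7)
X_demo = rng.normal(size=(100, 5))                  # stand-in features
y_demo = rng.randint(0, 2, size=100)                # stand-in binary target
X_leaky = StandardScaler().fit_transform(X_demo)    # fit on ALL rows -> leaks into every CV fold
leaky_scores = cross_val_score(LinearDiscriminantAnalysis(), X_leaky, y_demo, cv=10)
###Output
_____no_output_____
###Markdown
The leak-free version below chains standardization and the model in a single Pipeline, so the scaler is re-fit inside each fold of the 10-fold cross-validation: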
###Code
# Create a pipeline that standardizes the data then creates a model
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# load data
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
# create pipeline
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('lda', LinearDiscriminantAnalysis()))
model = Pipeline(estimators)
# evaluate pipeline
seed = 7
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
###Output
0.773462064251538
###Markdown
Pipeline 2: Feature Extraction and Modeling Feature extraction is another procedure that is susceptible to data leakage. Like data preparation, feature extraction procedures must be restricted to the data in your training dataset. The pipeline provides a useful tool called FeatureUnion, which allows the results of multiple feature selection and extraction procedures to be combined into a larger dataset on which a model can be trained. Importantly, all of the feature extraction and the feature union happen within each fold of the cross-validation procedure (a small shape-check sketch follows). The example below demonstrates a pipeline defined with four steps: feature extraction with Principal Component Analysis (3 features), feature extraction with statistical selection (6 features), a feature union, and a logistic regression model. The pipeline is then evaluated using 10-fold cross-validation.
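###Markdown
As a hedged aside (not from the original article), FeatureUnion simply concatenates the columns produced by its transformers, so 3 PCA components plus 6 selected features yield 9 columns for the classifier; the sketch below checks that on synthetic stand-in data.
###Code
# Shape-check sketch for FeatureUnion on synthetic stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
rng = np.random.RandomState(7)
X_demo = rng.normal(size=(30, 8))
y_demo = rng.randint(0, 2, size=30)
fu_demo = FeatureUnion([('pca', PCA(n_components=3)), ('select_best', SelectKBest(k=6))])
combined = fu_demo.fit_transform(X_demo, y_demo)
assert combined.shape == (30, 9)   # 3 PCA components + 6 selected columns
###Output
_____no_output_____
###Markdown
The full pipeline below combines the feature union with the classifier and evaluates everything with 10-fold cross-validation: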
###Code
# Create a pipeline that extracts features from the data then creates a model
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
# load data
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
# create feature union
features = []
features.append(('pca', PCA(n_components=3, random_state=42)))
features.append(('select_best', SelectKBest(k=6)))
feature_union = FeatureUnion(features)
# create pipeline
estimators = []
estimators.append(('feature_union', feature_union))
estimators.append(('logistic', LogisticRegression()))
model = Pipeline(estimators)
# evaluate pipeline
seed = 7
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
###Output
0.7760423786739576
|
_notebooks/2020-03-30-covid19-overview-australia.ipynb | ###Markdown
COVID-19 Status in Australia> Tracking new, confirmed, and recovered cases and deaths by state.- comments: true- author: IamAshKS- categories: [overview, interactive, australia]- hide: true- permalink: /covid-overview-australia/
###Code
#hide
from IPython.display import HTML
from pathlib import Path
import jinja2 as jj
import numpy as np
import pandas as pd
import requests as rq
#hide
def do_dev_tasks(html):
file_url = '.local'
if Path(file_url).is_dir():
with open(f'{file_url}/index.html', 'w') as f:
f.write(html)
def get_dataframe(name):
data_url = ('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/'
f'csse_covid_19_time_series/time_series_covid19_{name}_global.csv')
return pd.read_csv(data_url)
def get_css_asset(name):
data_url = f'https://raw.githubusercontent.com/iamashks/covid19-australia/master/assets/css/{name}.css'
file_url = f'assets/css/{name}.css'
if Path(file_url).is_file():
asset = f'<link rel="stylesheet" type="text/css" href="../{file_url}" />\n'
else:
asset = f'<style>{rq.get(data_url).text}</style>'
return asset
def get_template(name):
data_url = f'https://raw.githubusercontent.com/iamashks/covid19-australia/master/templates/{name}.html'
file_url = f'templates/{name}.html'
if Path(file_url).is_file():
templateLoader = jj.FileSystemLoader(searchpath='./')
templateEnv = jj.Environment(loader=templateLoader)
template = templateEnv.get_template(file_url)
else:
template = jj.Template(rq.get(data_url).text)
return template
#hide
COL_COUNTRY = 'Country/Region'
COL_STATE = 'Province/State'
COUNTRY = 'Australia'
dft_confirm = get_dataframe('confirmed')
dft_confirm = dft_confirm[dft_confirm[COL_COUNTRY] == COUNTRY]
dft_demised = get_dataframe('deaths')
dft_demised = dft_demised[dft_demised[COL_COUNTRY] == COUNTRY]
dft_recover = get_dataframe('recovered')
dft_recover = dft_recover[dft_recover[COL_COUNTRY] == COUNTRY]
#hide
COL_TODAY = dft_confirm.columns[-1]
COL_1DAY = dft_confirm.columns[-1 - 1]
COL_5DAY = dft_confirm.columns[-1 - 5]
COL_50DAY = dft_confirm.columns[-1 - 50]
df_table = pd.DataFrame({'State': dft_confirm[COL_STATE], 'Cases': dft_confirm[COL_TODAY],
'Recover': dft_recover[COL_TODAY], 'Deaths': dft_demised[COL_TODAY]})
df_table['Cases (5D)'] = np.array(dft_confirm[COL_TODAY]) - np.array(dft_confirm[COL_5DAY])
df_table['Recover (5D)'] = np.array(dft_recover[COL_TODAY]) - np.array(dft_recover[COL_5DAY])
df_table['Deaths (5D)'] = np.array(dft_demised[COL_TODAY]) - np.array(dft_demised[COL_5DAY])
df_table['Cases (1D)'] = np.array(dft_confirm[COL_TODAY]) - np.array(dft_confirm[COL_1DAY])
df_table['Recover (1D)'] = np.array(dft_recover[COL_TODAY]) - np.array(dft_recover[COL_1DAY])
df_table['Deaths (1D)'] = np.array(dft_demised[COL_TODAY]) - np.array(dft_demised[COL_1DAY])
df_table['Fatality Rate'] = (100 * df_table['Deaths'] / df_table['Cases']).round(1)
df_table = df_table.sort_values(by=['Cases', 'Deaths'], ascending=[False, False])
df_table = df_table.reset_index()
df_table.index += 1
del df_table['index'] # del duplicate index
df_table.head(8)
#hide
dt_cols = dft_confirm.columns[~dft_confirm.columns.isin([COL_STATE, COL_COUNTRY, 'Lat', 'Long'])]
dft_cases = dft_confirm.groupby(COL_STATE)[dt_cols].sum()
dft_cases_new = dft_cases.diff(axis=1).fillna(0).astype(int)
include_cols = ['Cases', 'Recover', 'Deaths', 'Cases (5D)', 'Recover (5D)', 'Deaths (5D)']
summary_nsw = df_table[df_table['State'].eq('New South Wales')][include_cols].sum().add_prefix('NSW ')
summary_vic = df_table[df_table['State'].eq('Victoria')][include_cols].sum().add_prefix('VIC ')
summary_qld = df_table[df_table['State'].eq('Queensland')][include_cols].sum().add_prefix('QLD ')
summary_time = {'updated': pd.to_datetime(COL_TODAY), 'since': pd.to_datetime(COL_5DAY)}
summary = {**summary_time, **df_table[include_cols].sum(), **summary_nsw, **summary_vic, **summary_qld}
summary
#hide_input
html_text = get_template('overview').render(D=summary, table=df_table, newcases=dft_cases_new,
np=np, pd=pd, enumerate=enumerate)
html_text = f'<div>{get_css_asset("keen")}{html_text}</div>'
do_dev_tasks(html=html_text)
HTML(html_text)
###Output
_____no_output_____
###Markdown
COVID-19 Status in Australia> Tracking new, confirmed, and recovered cases and deaths by state.- comments: true- author: IamAshKS- categories: [overview, interactive, australia]- hide: true- permalink: /covid-overview-australia/
###Code
#hide
from IPython.display import HTML
from pathlib import Path
import jinja2 as jj
import numpy as np
import pandas as pd
import requests as rq
#hide
def do_dev_tasks(html):
file_url = '.local'
if Path(file_url).is_dir():
with open(f'{file_url}/index.html', 'w') as f:
f.write(html)
def get_dataframe(name):
data_url = ('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/'
f'csse_covid_19_time_series/time_series_covid19_{name}_global.csv')
return pd.read_csv(data_url)
def get_css_asset(name):
data_url = f'https://raw.githubusercontent.com/iamashks/covid19-australia/master/assets/css/{name}.css'
file_url = f'assets/css/{name}.css'
if Path(file_url).is_file():
asset = f'<link rel="stylesheet" type="text/css" href="../{file_url}" />\n'
else:
asset = f'<style>{rq.get(data_url).text}</style>'
return asset
def get_template(name):
data_url = f'https://raw.githubusercontent.com/iamashks/covid19-australia/master/templates/{name}.html'
file_url = f'templates/{name}.html'
if Path(file_url).is_file():
templateLoader = jj.FileSystemLoader(searchpath='./')
templateEnv = jj.Environment(loader=templateLoader)
template = templateEnv.get_template(file_url)
else:
template = jj.Template(rq.get(data_url).text)
return template
#hide
COL_COUNTRY = 'Country/Region'
COL_STATE = 'Province/State'
COUNTRY = 'Australia'
dft_confirm = get_dataframe('confirmed')
dft_confirm = dft_confirm[dft_confirm[COL_COUNTRY] == COUNTRY]
dft_demised = get_dataframe('deaths')
dft_demised = dft_demised[dft_demised[COL_COUNTRY] == COUNTRY]
dft_recover = get_dataframe('recovered')
dft_recover = dft_recover[dft_recover[COL_COUNTRY] == COUNTRY]
#hide
COL_TODAY = dft_confirm.columns[-1]
COL_1DAY = dft_confirm.columns[-1 - 1]
COL_5DAY = dft_confirm.columns[-1 - 5]
COL_50DAY = dft_confirm.columns[-1 - 50]
df_table = pd.DataFrame({'State': dft_confirm[COL_STATE], 'Cases': dft_confirm[COL_TODAY],
'Recover': dft_recover[COL_TODAY], 'Deaths': dft_demised[COL_TODAY]})
df_table['Cases (5D)'] = np.array(dft_confirm[COL_TODAY]) - np.array(dft_confirm[COL_5DAY])
df_table['Recover (5D)'] = np.array(dft_recover[COL_TODAY]) - np.array(dft_recover[COL_5DAY])
df_table['Deaths (5D)'] = np.array(dft_demised[COL_TODAY]) - np.array(dft_demised[COL_5DAY])
df_table['Cases (1D)'] = np.array(dft_confirm[COL_TODAY]) - np.array(dft_confirm[COL_1DAY])
df_table['Recover (1D)'] = np.array(dft_recover[COL_TODAY]) - np.array(dft_recover[COL_1DAY])
df_table['Deaths (1D)'] = np.array(dft_demised[COL_TODAY]) - np.array(dft_demised[COL_1DAY])
df_table['Fatality Rate'] = (100 * df_table['Deaths'] / df_table['Cases']).round(1)
df_table = df_table.sort_values(by=['Cases', 'Deaths'], ascending=[False, False])
df_table = df_table.reset_index()
df_table.index += 1
del df_table['index'] # del duplicate index
df_table.head(8)
#hide
dt_cols = dft_confirm.columns[~dft_confirm.columns.isin([COL_STATE, COL_COUNTRY, 'Lat', 'Long'])]
dft_cases = dft_confirm.groupby(COL_STATE)[dt_cols].sum()
dft_cases_new = dft_cases.diff(axis=1).fillna(0).astype(int)
include_cols = ['Cases', 'Recover', 'Deaths', 'Cases (5D)', 'Recover (5D)', 'Deaths (5D)']
summary_nsw = df_table[df_table['State'].eq('New South Wales')][include_cols].sum().add_prefix('NSW ')
summary_vic = df_table[df_table['State'].eq('Victoria')][include_cols].sum().add_prefix('VIC ')
summary_qld = df_table[df_table['State'].eq('Queensland')][include_cols].sum().add_prefix('QLD ')
summary_time = {'updated': pd.to_datetime(COL_TODAY), 'since': pd.to_datetime(COL_5DAY)}
summary = {**summary_time, **df_table[include_cols].sum(), **summary_nsw, **summary_vic, **summary_qld}
summary
#hide_input
html_text = get_template('overview').render(D=summary, table=df_table, newcases=dft_cases_new,
np=np, pd=pd, enumerate=enumerate)
html_text = f'<div>{get_css_asset("keen")}{html_text}</div>'
do_dev_tasks(html=html_text)
HTML(html_text)
###Output
_____no_output_____ |
Autodrive/Autodrive.ipynb | ###Markdown
AutodriveKeep an autonomous car driving between two white lines.
###Code
%matplotlib inline
import keras
from keras.models import Model, load_model
from keras.layers import Input, Convolution2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
import tensorflow as tf
import os
import urllib
import urllib.request
import pickle
import requests
import sys
import matplotlib.pyplot
import numpy as np
import sklearn
import pandas as pd
datapath = os.path.join(os.getcwd(), 'indoor_lanes.pkl')
if os.path.exists(datapath):
print('Using local data file.')
else:
link = "https://s3.amazonaws.com/donkey_resources/indoor_lanes.pkl"
file_name = "indoor_lanes.pkl"
with open(file_name, "wb") as f:
print("Downloading %s" % file_name)
response = requests.get(link, stream=True)
total_length = response.headers.get('content-length')
if total_length is None: # no content length header
f.write(response.content)
else:
dl = 0
total_length = int(total_length)
for data in response.iter_content(chunk_size=4096):
dl += len(data)
f.write(data)
done = int(50 * dl / total_length)
sys.stdout.write("\r[%s%s]" % ('=' * done, ' ' * (50-done)) )
sys.stdout.flush()
print('Data file downloaded.')
with open(datapath, 'rb') as f:
X, Y = pickle.load(f)
# If downloading from the web, the progress bar printed by the loop above shows the download progress (443 MB).
print('X.shape: ', X.shape)
print('Y.shape: ', Y.shape)
matplotlib.pyplot.imshow(X[210])
#X, Y = sklearn.utils.shuffle(X, Y)
def unison_shuffled_copies(X, Y):
assert len(X) == len(Y)
p = np.random.permutation(len(X))
return X[p], Y[p]
shuffled_X, shuffled_Y = unison_shuffled_copies(X,Y)
len(shuffled_X)
cutoff_8 = int(len(X)*.8) # first 80% used for training
cutoff_9 = int(len(X)*.9) # next 10% used for validation, last 10% for test
Xtrain, Ytrain = shuffled_X[:cutoff_8], shuffled_Y[:cutoff_8]
Xval, Yval = shuffled_X[cutoff_8 :cutoff_9], shuffled_Y[cutoff_8:cutoff_9]
Xtest, Ytest = shuffled_X[cutoff_9:], shuffled_Y[cutoff_9:]
len(Xtrain)+len(Xval)+len(Xtest)
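# Sanity check (an addition, not part of the original notebook): the 80/10/10 split
# should cover the whole dataset exactly once.
assert len(Xtrain) + len(Xval) + len(Xtest) == len(X)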
###Output
_____no_output_____
###Markdown
Drive modelInput: 120x160 imageOutput: steering angle (-90 to 90 degrees)Reference: https://github.com/otaviogood/carputerNote 1: unlike the reference, the previous steering angle is not used as an input.Note 2: the model does not output a throttle value.
###Code
img_in = Input(shape=(120, 160, 3), name='img_in')
angle_in = Input(shape=(1,), name='angle_in') # defined but not used; see the note above (no previous-steering input)
x = Convolution2D(8, 3, 3)(img_in)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Convolution2D(16, 3, 3)(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Convolution2D(32, 3, 3)(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
merged = Flatten()(x)
x = Dense(256)(merged)
x = Activation('linear')(x)
x = Dropout(.2)(x)
angle_out = Dense(1, name='angle_out')(x)
model = Model(input=[img_in], output=[angle_out])
model.compile(optimizer='adam', loss='mean_squared_error')
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
img_in (InputLayer) (None, 120, 160, 3) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 118, 158, 8) 224
_________________________________________________________________
activation_5 (Activation) (None, 118, 158, 8) 0
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 59, 79, 8) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 57, 77, 16) 1168
_________________________________________________________________
activation_6 (Activation) (None, 57, 77, 16) 0
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 28, 38, 16) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 26, 36, 32) 4640
_________________________________________________________________
activation_7 (Activation) (None, 26, 36, 32) 0
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 13, 18, 32) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 7488) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 1917184
_________________________________________________________________
activation_8 (Activation) (None, 256) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 256) 0
_________________________________________________________________
angle_out (Dense) (None, 1) 257
=================================================================
Total params: 1,923,473
Trainable params: 1,923,473
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train the model. Only 4 epochs are used
###Code
from keras import callbacks
model_path = os.path.expanduser('~/best_autopilot.hdf5')
#Save the model after each epoch if the validation loss improved.
save_best = callbacks.ModelCheckpoint(model_path, monitor='val_loss', verbose=1,
save_best_only=True, mode='min')
#stop training if the validation loss doesn't improve for 5 consecutive epochs.
early_stop = callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=5,
verbose=0, mode='auto')
callbacks_list = [save_best, early_stop]
model.fit(Xtrain, Ytrain, batch_size=64, nb_epoch=4, validation_data=(Xval, Yval), callbacks=callbacks_list)
###Output
C:\Users\10\AppData\Local\Programs\Python\Python35\lib\site-packages\ipykernel_launcher.py:15: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.
from ipykernel import kernelapp as app
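###Markdown
The warning above only concerns a renamed argument. A minimal sketch of the same call with the newer `epochs` keyword (assuming Keras 2.x and that `Xtrain`, `Ytrain`, `Xval`, `Yval` and `callbacks_list` are defined as above; not re-run here):
###Code
# Equivalent fit call using the renamed argument
model.fit(Xtrain, Ytrain, batch_size=64, epochs=4,
          validation_data=(Xval, Yval), callbacks=callbacks_list)
###Output
_____no_output_____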
###Markdown
Validation. Show the scatter plot of actual vs predicted results.
###Code
model = load_model(model_path)
test_P = model.predict(Xtest)
test_P = test_P.reshape((test_P.shape[0],))
df = pd.DataFrame({'predicted':test_P, 'actual':Ytest})
ax = df.plot.scatter('predicted', 'actual')
P = model.predict(X[:700])
P = P.reshape((P.shape[0],))
ax = pd.DataFrame({'predicted':P, 'actual':Y[:700]}).plot()
ax.set_ylabel("steering angle")
###Output
_____no_output_____ |
2017-09-13-MeetIT-Torun/klimat.ipynb | ###Markdown
Introduction: It may seem that Poland is a special place when it comes to climate change. After all, sometimes it snows or gets cold, and sometimes you need a jacket in the summer, etc. Let's look at what has changed over the years. The dataset processed here describes **land temperature**, not air temperature. Land warms faster than air, so it is easier to pick out trends from it. At the beginning of the analysis I leaned a bit on [this post][1] by [Tomasz Tomaszewski][2], but in the end I understood a thing or two, added a few things of my own and dropped a few others. Note: I did not want to put the whole dataset in the repository, since it weighs about 500MB, so I limited it to Poland only. If you want to compute something on data from the whole world, you can download the full dataset [on kaggle][3]. Let's begin. First, a few initial settings and imports of the required libraries. [1]: https://medium.com/@tkwadrat/gor%C4%85cy-kwietniowy-weekend-czas-na-small-talk-w-oparciu-o-dane-c4d0c158c013 [2]: https://twitter.com/tkwadrat [3]: https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import calendar, datetime
sns.set(style='whitegrid', palette='Set2')
CITY = 'Torun'
SELECTED_CITIES = ['Torun', 'Cracow', 'Katowice', 'Olsztyn', 'Szczecin', 'Bialystok',
'Zielona Gora', 'Warsaw', 'Poznan', 'Bialystok', 'Wroclaw']
START_YEAR = 1753
MONTH = int(datetime.datetime.now().strftime("%m"))
# when global average temperature started to rise
RISE_START_YEAR = 1970
# Baseline years
BASE_YEAR_START = 1850
BASE_YEAR_END = 1900
###Output
_____no_output_____
###Markdown
Now we load the data from the provided dataset and clean it of information not needed in this analysis.
###Code
df = pd.read_csv('./data/climate/GlobalLandTemperaturesByCityPoland.csv')
date_index = pd.to_datetime(df['dt'], format='%Y-%m-%d', errors='ignore')
df['di'] = date_index
df['Year'] = df['di'].dt.year
df['Month'] = df['di'].dt.month
df['Day'] = df['di'].dt.day
poland = df
poland_clean = poland.copy()
poland_clean.drop('Latitude', axis=1, inplace=True)
poland_clean.drop('Longitude', axis=1, inplace=True)
poland_clean.drop('AverageTemperatureUncertainty', axis=1, inplace=True)
poland_clean.drop('Country', axis=1, inplace=True)
poland_clean.drop('dt', axis=1, inplace=True)
city = df[df['City'] == CITY].dropna().copy()
climate_city = city[city['Year'] >= START_YEAR].copy()
month_city = climate_city[climate_city['Month'] == MONTH].copy()
current_month_city = month_city[month_city['Year'] > RISE_START_YEAR]
baseline_month_city = month_city[
month_city['Year'].between(BASE_YEAR_START, BASE_YEAR_END)].copy()
###Output
_____no_output_____
###Markdown
The following cities in Poland were available:
###Code
poland['City'].unique()
###Output
_____no_output_____
###Markdown
Let's pick a few of them and see how their temperature changed over the years. What the plot below shows is a significant rise in average temperatures since the beginning of the 20th century. The data ends in 2013, and by that point the average temperature had risen by 1 degree. Interestingly, the coolest of the cities (Białystok) has an average temperature more than 2 degrees lower than the warmest one (Zielona Góra).
###Code
sns.lmplot(x='Year', y='AverageTemperature', hue='City', data=poland[poland['City'].isin(SELECTED_CITIES)],
scatter=False, lowess=True)
###Output
_____no_output_____
###Markdown
Analysis for Toruń: I currently live in Toruń, which is why I will run the analysis for this city. Some of the plots will concern the current month (the one in which I run the notebook, or in which you fork and run it ;) ). A sample of the Toruń dataset, after dropping the columns I was not interested in, looks like this:
###Code
climate_city.head()
###Output
_____no_output_____
###Markdown
We have complete data since 1753. You can also see that these are averages over the whole month. First, let's check the average temperature in the current month since 1753 and since 1970. In 1970 the averaged temperature of the whole world started to rise dramatically. Keep in mind that climate scientists measure deviations from the average relative to the **1850-1900** period (more on that below).
###Code
print("Average temperature in {city} in {from_year}-2013: {temperature}".format(
city=calendar.month_name[MONTH],
from_year=START_YEAR,
temperature=month_city['AverageTemperature'].mean()
))
print("Average temperature in {city} in {from_year}-2013: {temperature}".format(
city=calendar.month_name[MONTH],
from_year=RISE_START_YEAR,
temperature=current_month_city['AverageTemperature'].mean()
))
print("Average temperature in {city} in {from_year}-{to_year}: {temperature} (baseline)".format(
city=calendar.month_name[MONTH],
from_year=BASE_YEAR_START,
to_year=BASE_YEAR_END,
temperature=baseline_month_city['AverageTemperature'].mean()
))
###Output
Average temperature in September in 1753-2013: 14.08733076923077
Average temperature in September in 1970-2013: 14.224523809523811
Average temperature in September in 1850-1900: 13.92435294117647 (baseline)
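###Markdown
The three averages above can be combined into a single anomaly figure. A small sketch using only the variables already defined in this notebook:
###Code
# Warming of the current month since 1970 relative to the 1850-1900 baseline (degrees C)
anomaly = (current_month_city['AverageTemperature'].mean()
           - baseline_month_city['AverageTemperature'].mean())
print(round(anomaly, 2))
###Output
_____no_output_____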
###Markdown
The plot below shows the distribution of temperatures over the whole year and in the month defined in `MONTH`.
###Code
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=False)
sns.distplot(climate_city['AverageTemperature'], bins=50, rug=True, ax=ax1)
ax1.set_title("Whole year")
sns.distplot(month_city['AverageTemperature'], bins=50, rug=True, ax=ax2)
ax2.set_title("Current month")
plt.show()
###Output
_____no_output_____
###Markdown
The next plot shows how the average temperature changed over the years in each month. Strictly speaking it is not the average but a regression curve.
###Code
sns.lmplot(y="AverageTemperature", x="Year", data=climate_city,
lowess=True, markers=".", col="Month", col_wrap=3, sharex=False, sharey=False)
###Output
_____no_output_____
###Markdown
Below you can see the ranges of temperatures that occurred in the whole dataset over these slightly more than 200 years.
###Code
sns.swarmplot(x='Month', y='AverageTemperature', data=climate_city)
###Output
_____no_output_____
###Markdown
This way of presenting the data does not say much about what is particularly important in the context of global warming, namely the trend. Let's check what it looks like.
###Code
from scipy.stats import spearmanr
sns.jointplot(month_city['Year'], month_city['AverageTemperature'],
kind="hex", stat_func=spearmanr)
###Output
_____no_output_____
###Markdown
What does this tell us? The hexagons in the upper part of the plot getting darker as time passes indicate more and more frequent upward temperature deviations. Average temperatures in a single month: I picked one month (the current one) and checked how often a given temperature occurred. Let's compare all available data (1753-2013) with the periods after 1970 and after 1995 (shortly before El Niño). Note that if the month defined in `MONTH` is September, the change is not large. That is because this particular month has not warmed that much over the years (which you can see where there are so many plots :) ). I recommend changing the value of the `MONTH` variable and re-running the two cells below.
###Code
before_1970 = month_city[month_city['Year'] < RISE_START_YEAR]['AverageTemperature'].copy()
after_1970 = month_city[month_city['Year'] >= RISE_START_YEAR]['AverageTemperature'].copy()
after_1995 = month_city[month_city['Year'] >= 1995]['AverageTemperature'].copy()
fig, ax = plt.subplots()
sns.distplot(before_1970, hist=False, ax=ax)
sns.distplot(after_1970, hist=False, ax=ax)
sns.distplot(after_1995, hist=False, ax=ax)
###Output
_____no_output_____
###Markdown
In the second and third case there is of course not much data (43 and 19 measurements), but you can see that the probability of temperatures around 19 degrees increases. Climate dice: the climate dice is a concept meant to make it easier to understand that global warming, especially in its early phase, does not mean there will be no winters at all, only that they will happen less often. Suppose we painted the faces of a die in three colors: red, white and blue, corresponding to high, medium and low temperatures respectively. Let's adjust these ranges so that the probabilities match the chances of rolling a face painted in a given color. Global warming means that additional faces of the die get painted red. In this notebook I changed this concept a bit: instead of die faces, I show the number of months in a year that we would consider warm, normal or cold (warm for January, etc.). Let's see what it looks like.
###Code
def label_bin(row):
if row['AverageTemperature'] < (row['MonthMeanBaseline'] - row['MonthStdBaseline']):
return 'Low'
elif row['AverageTemperature'] > (row['MonthMeanBaseline'] + row['MonthStdBaseline']):
return 'High'
return 'Medium'
monthly_frames = []
for i, group in climate_city.groupby('Month'):
g2 = group.copy()
g_temp_series = g2[(g2['Year'] >= BASE_YEAR_START) & (g2['Year'] <= BASE_YEAR_END)]['AverageTemperature']
g2['MonthMeanBaseline'] = g_temp_series.mean()
g2['MonthStdBaseline'] = g_temp_series.std()
g2['TemperatureBin'] = g2.apply(label_bin, axis=1)
monthly_frames.append(g2)
all_with_bins = pd.concat(monthly_frames).sort_index()
city_grouped_with_counted_bins = all_with_bins.groupby(['Year','TemperatureBin'])['City'] \
.agg(['count']).reset_index()
sns.lmplot(x='Year', y='count', hue='TemperatureBin',
data=city_grouped_with_counted_bins,
scatter=False, lowess=True)
###Output
_____no_output_____ |
models/generator.ipynb | ###Markdown
###Code
import numpy as np
import tensorflow as tf
from tensorflow.keras.initializers import RandomUniform
from tensorflow.keras.layers import Conv2D, concatenate, Input, Add
from tensorflow.keras.models import Model
class Build_generator:
def __init__(self):
self.initializer = RandomUniform(minval=-0.05, maxval=0.05, seed=None)
self.num_RDB_blocks = 6
self.num_RDB_in_RRDB_blocks = 3
self.num_RRDB_blocks = 23
self.beta = 0.2
self.scale_factor = 4
self.path_size = None # What is the patch size
# Residual dense block
    def RDB(self, input_layer):
        x = input_layer
        for i in range(self.num_RDB_blocks):
            # each conv sees the concatenation of all previous feature maps;
            # explicit layer names are omitted so the block can be reused without duplicate-name errors
            layer = Conv2D(filters=32, kernel_size=(3, 3),
                           strides=(1, 1), padding='same',
                           activation='relu',
                           kernel_initializer=self.initializer)(x)
            x = concatenate([x, layer], axis=3)
        # Maybe use 32 filters instead of 64 / activation ?
        x = Conv2D(filters=32, kernel_size=(3, 3),
                   strides=(1, 1), padding='same',
                   kernel_initializer=self.initializer)(x)
        return x
# Residual in Residual Dense Block
    def RRDB(self, input_layer):
        x = input_layer
        for i in range(self.num_RDB_in_RRDB_blocks):
            rdb = self.RDB(x) * self.beta
            x = Add()([x, rdb])
        x = x * self.beta
        x = Add()([input_layer, x])
        return x
    def pixel_shuffle(self, input_layer):
        x = Conv2D(filters=3 * self.scale_factor ** 2, kernel_size=(3, 3),
                   strides=(1, 1), padding='same',
                   kernel_initializer=self.initializer,
                   name='Preshuffle')(input_layer)
        return tf.nn.depth_to_space(x, block_size=self.scale_factor, data_format='NHWC')
    def build_generator(self):
        LR_input = Input(shape=(self.path_size, self.path_size, 3),
                         name='LR_INPUT')
        pre_block = Conv2D(filters=32, kernel_size=(3, 3),
                           strides=(1, 1), padding='same',
                           kernel_initializer=self.initializer,
                           name='generator_preblock')(LR_input)
        x = self.RRDB(pre_block)
        for n in range(self.num_RRDB_blocks):
            x = self.RRDB(x)
        post_block = Conv2D(filters=32, kernel_size=(3, 3),
                            strides=(1, 1), padding='same',
                            kernel_initializer=self.initializer,
                            name='generator_post_block')(x)
        # Global Residual Learning
        GRL = Add(name='global_residual_learning')([post_block, pre_block])
        # Pixel shuffling
        PS = self.pixel_shuffle(GRL)
        # SR Image
        SR_output = Conv2D(filters=3, kernel_size=(3, 3),
                           strides=(1, 1), padding='same',
                           kernel_initializer=self.initializer,
                           name='SR_output')(PS)
return Model(inputs = LR_input, outputs = SR_output)
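# Usage sketch (my assumption: path_size must be set to the training patch size before
# building; 64 below is only an example value, and building 23 RRDB blocks is slow,
# so the sketch is left as comments):
#   gen = Build_generator()
#   gen.path_size = 64
#   generator_model = gen.build_generator()
#   generator_model.summary()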
###Output
_____no_output_____ |
l2/ftavares_l2.ipynb | ###Markdown
Unicamp - MO826 - Data Science and Visualization in Health. Felipe Marinho Tavares. RA: 265680. Laboratory 2 ---
###Code
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Reading the `raw.csv` file
###Code
pathraw = "raw/heart.csv"
pathprocessed = {"age": "processed/heart-missing-age.csv",
"chol": "processed/heart-missing-chol.csv",
"sex": "processed/heart-missing-sex.csv"}
dt_raw = pd.read_csv(pathraw)
dt_raw.describe()
###Output
_____no_output_____
###Markdown
Reading the `heart-missing-age.csv` file
###Code
dt_age = pd.read_csv(pathprocessed["age"])
dt_age.describe()
print("Dados de age")
plt.figure(figsize=(16,6))
dt_raw.age.sort_values().plot.hist(label="raw")
dt_age.age.sort_values().plot.hist(label="processed")
plt.legend()
plt.show()
###Output
Dados de age
###Markdown
Analyzing the statistics of the age column, apparently no values were removed for one specific value of age. The histogram shows that the removed values came from the bins with the highest frequency. Analyzing missing data in `heart-missing-age.csv`
###Code
dt_age[dt_age["age"].isna()].describe()
dt_age[dt_age["age"].isna()]
###Output
_____no_output_____ |
even-more-python-for-beginners-data-tools/14 - NumPy vs Pandas/14 - Working with numpy and pandas.ipynb | ###Markdown
Moving data from numpy arrays to pandas DataFramesIn our last notebook we trained a model and compared our actual and predicted resultsWhat may not have been evident was when we did this we were working with two different objects: a **numpy array** and a **pandas DataFrame** To explore further let's rerun the code from the previous notebook to create a trained model and get predicted values for our test data
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Load our data from the csv file
delays_df = pd.read_csv('Data/Lots_of_flight_data.csv')
# Remove rows with null values since those will crash our linear regression model training
delays_df.dropna(inplace=True)
# Move our features into the X DataFrame
X = delays_df.loc[:,['DISTANCE','CRS_ELAPSED_TIME']]
# Move our labels into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split our data into test and training DataFrames
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=42)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model using your training data
y_pred = regressor.predict(X_test) # Generate predicted values for our test data
###Output
_____no_output_____
###Markdown
In the last Notebook, you might have noticed the output displays differently when you display the contents of the predicted values in y_pred and the actual values in y_test
###Code
y_pred
y_test
###Output
_____no_output_____
###Markdown
Use **type()** to check the datatype of an object.
###Code
type(y_pred)
type(y_test)
###Output
_____no_output_____
###Markdown
* **y_pred** is a numpy array* **y_test** is a pandas DataFrameAnother way you might discover this is if you try to use the **head** method on **y_pred**. This will return an error, because **head** is a method of the DataFrame class it is not a method of numpy arrays
###Code
y_pred.head()
###Output
_____no_output_____
###Markdown
A one dimensional numpy array is similar to a pandas Series
###Code
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
###Output
0 Pearson
1 Changi
2 Narita
dtype: object
Narita
###Markdown
A two dimensional numpy array is similar to a pandas DataFrame
###Code
airports_array = np.array([
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']])
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
###Output
0 1
0 YYZ Pearson
1 SIN Changi
2 NRT Narita
YYZ
###Markdown
If you need the functionality of a DataFrame, you can move data from numpy objects to pandas objects and vice-versa.In the example below we use the DataFrame constructor to read the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*Then we can use the functionality of the DataFrame object
###Code
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
###Output
_____no_output_____
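###Markdown
For the opposite direction, a DataFrame can be turned back into a numpy array with `to_numpy` (available in recent pandas versions). A short sketch assuming `predicted_df` from the cell above:
###Code
# Convert the DataFrame back into a numpy array
back_to_array = predicted_df.to_numpy()
type(back_to_array)
###Output
_____no_output_____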
###Markdown
Moving data from numpy arrays to pandas DataFrames. In the last notebook we trained a model and compared the actual and predicted values. What may not have been evident is that while doing this we were working with two different objects: a **numpy array** and a **pandas DataFrame**. To explore this further, let's rerun the code from the previous notebook to create a trained model and get predicted values for our test data.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Load our data from the csv file
delays_df = pd.read_csv('Data/Lots_of_flight_data.csv')
# Remove rows with null values since those will crash our linear regression model training
delays_df.dropna(inplace=True)
# Move our features into the X DataFrame
X = delays_df.loc[:,['DISTANCE', 'CRS_ELAPSED_TIME']]
# Move our labels into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split our data into test and training DataFrames
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.3,
random_state=42
)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model using your training data
y_pred = regressor.predict(X_test)
###Output
_____no_output_____
###Markdown
In the last notebook, you might have noticed that the output displays differently when you display the contents of the predicted values in y_pred and the actual values in y_test.
###Code
y_pred
y_test
###Output
_____no_output_____
###Markdown
Use **type()** to check the datatype of an object.
###Code
type(y_pred)
type(y_test)
###Output
_____no_output_____
###Markdown
* **y_pred** 는 numpy 배열입니다.* **y_test** 는 pandas DataFrame입니다.타입을 확인 할 수있는 또 다른 방법은 ** y_pred **에서 ** head ** 메소드를 사용하려고하는 것입니다.** head **는 DataFrame 클래스의 메소드이므로 numpy 배열에서는 메소드가 아니므로 오류가 반환됩니다.
###Code
y_pred.head()
###Output
_____no_output_____
###Markdown
A one dimensional numpy array is similar to a pandas Series
###Code
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
###Output
0 Pearson
1 Changi
2 Narita
dtype: object
Narita
###Markdown
2 차원 numpy 배열은 pandas DataFrame과 유사합니다.
###Code
airports_array = np.array([
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']])
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
###Output
0 1
0 YYZ Pearson
1 SIN Changi
2 NRT Narita
YYZ
###Markdown
If you need the functionality of a DataFrame, you can move data from numpy objects to pandas objects and vice-versa. In the example below we use the DataFrame constructor to read the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*. Then we can use the functionality of the DataFrame object.
###Code
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
###Output
_____no_output_____
###Markdown
Moving data from numpy arrays to pandas DataFramesIn our last notebook we trained a model and compared our actual and predicted resultsWhat may not have been evident was when we did this we were working with two different objects: a **numpy array** and a **pandas DataFrame** To explore further let's rerun the code from the previous notebook to create a trained model and get predicted values for our test data
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Load our data from the csv file
delays_df = pd.read_csv('Lots_of_flight_data.csv')
# Remove rows with null values since those will crash our linear regression model training
delays_df.dropna(inplace=True)
# Move our features into the X DataFrame
X = delays_df.loc[:,['DISTANCE','CRS_ELAPSED_TIME']]
# Move our labels into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split our data into test and training DataFrames
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=42)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model using your training data
y_pred = regressor.predict(X_test) # Generate predicted values for our test data
###Output
_____no_output_____
###Markdown
In the last Notebook, you might have noticed the output displays differently when you display the contents of the predicted values in y_pred and the actual values in y_test
###Code
y_pred
y_test
###Output
_____no_output_____
###Markdown
Use **type()** to check the datatype of an object.
###Code
type(y_pred)
type(y_test)
###Output
_____no_output_____
###Markdown
* **y_pred** is a numpy array* **y_test** is a pandas DataFrameAnother way you might discover this is if you try to use the **head** method on **y_pred**. This will return an error, because **head** is a method of the DataFrame class it is not a method of numpy arrays
###Code
y_pred.head()
###Output
_____no_output_____
###Markdown
A one dimensional numpy array is similar to a pandas Series
###Code
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
###Output
0 Pearson
1 Changi
2 Narita
dtype: object
Narita
###Markdown
A two dimensional numpy array is similar to a pandas DataFrame
###Code
airports_array = np.array([
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']])
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
###Output
0 1
0 YYZ Pearson
1 SIN Changi
2 NRT Narita
YYZ
###Markdown
If you need the functionality of a DataFrame, you can move data from numpy objects to pandas objects and vice-versa.In the example below we use the DataFrame constructor to read the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*Then we can use the functionality of the DataFrame object
###Code
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
###Output
_____no_output_____
###Markdown
Moving data from numpy arrays to pandas DataFramesIn our last notebook we trained a model and compared our actual and predicted resultsWhat may not have been evident was when we did this we were working with two different objects: a **numpy array** and a **pandas DataFrame** To explore further let's rerun the code from the previous notebook to create a trained model and get predicted values for our test data
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Load our data from the csv file
delays_df = pd.read_csv('Data/Lots_of_flight_data.csv')
# Remove rows with null values since those will crash our linear regression model training
delays_df.dropna(inplace=True)
# Move our features into the X DataFrame
X = delays_df.loc[:,['DISTANCE','CRS_ELAPSED_TIME']]
# Move our labels into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split our data into test and training DataFrames
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.3,
random_state=42
)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model using your training data
y_pred = regressor.predict(X_test) # Generate predicted values for our test data
###Output
_____no_output_____
###Markdown
In the last Notebook, you might have noticed the output displays differently when you display the contents of the predicted values in y_pred and the actual values in y_test
###Code
y_pred
y_test
###Output
_____no_output_____
###Markdown
Use **type()** to check the datatype of an object.
###Code
type(y_pred)
type(y_test)
###Output
_____no_output_____
###Markdown
* **y_pred** is a numpy array* **y_test** is a pandas DataFrameAnother way you might discover this is if you try to use the **head** method on **y_pred**. This will return an error, because **head** is a method of the DataFrame class it is not a method of numpy arrays
###Code
y_pred.head()
###Output
_____no_output_____
###Markdown
A one dimensional numpy array is similar to a pandas Series
###Code
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
###Output
0 Pearson
1 Changi
2 Narita
dtype: object
Narita
###Markdown
A two dimensional numpy array is similar to a pandas DataFrame
###Code
airports_array = np.array(
[
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']
]
)
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
###Output
0 1
0 YYZ Pearson
1 SIN Changi
2 NRT Narita
YYZ
###Markdown
If you need the functionality of a DataFrame, you can move data from numpy objects to pandas objects and vice-versa.In the example below we use the DataFrame constructor to read the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*Then we can use the functionality of the DataFrame object
###Code
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
###Output
_____no_output_____
###Markdown
Moving data from numpy arrays to pandas DataFramesIn our last notebook we trained a model and compared our actual and predicted resultsWhat may not have been evident was when we did this we were working with two different objects: a **numpy array** and a **pandas DataFrame** To explore further let's rerun the code from the previous notebook to create a trained model and get predicted values for our test data
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Load our data from the csv file
delays_df = pd.read_csv('Data/Lots_of_flight_data.csv')
# Remove rows with null values since those will crash our linear regression model training
delays_df.dropna(inplace=True)
# Move our features into the X DataFrame
X = delays_df.loc[:,['DISTANCE','CRS_ELAPSED_TIME']]
# Move our labels into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split our data into test and training DataFrames
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=42)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model using your training data
y_pred = regressor.predict(X_test) # Generate predicted values for our test data
###Output
_____no_output_____
###Markdown
In the last Notebook, you might have noticed the output displays differently when you display the contents of the predicted values in y_pred and the actual values in y_test
###Code
y_pred
y_test
###Output
_____no_output_____
###Markdown
Use **type()** to check the datatype of an object.
###Code
type(y_pred)
type(y_test)
###Output
_____no_output_____
###Markdown
* **y_pred** is a numpy array* **y_test** is a pandas DataFrameAnother way you might discover this is if you try to use the **head** method on **y_pred**. This will return an error, because **head** is a method of the DataFrame class it is not a method of numpy arrays
###Code
y_pred.head()
###Output
_____no_output_____
###Markdown
A one dimensional numpy array is similar to a pandas Series
###Code
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
###Output
0 Pearson
1 Changi
2 Narita
dtype: object
Narita
###Markdown
A two dimensional numpy array is similar to a pandas DataFrame
###Code
airports_array = np.array([
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']])
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
###Output
0 1
0 YYZ Pearson
1 SIN Changi
2 NRT Narita
YYZ
###Markdown
If you need the functionality of a DataFrame, you can move data from numpy objects to pandas objects and vice-versa.In the example below we use the DataFrame constructor to read the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*Then we can use the functionality of the DataFrame object
###Code
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
###Output
_____no_output_____
###Markdown
Moving data from numpy arrays to pandas DataFrames. In the last notebook we trained a machine learning model and compared the actual label results with the predicted results. What may have been somewhat unclear is that while doing this we were working with two different objects: a **numpy array** and a **pandas DataFrame**. To explore this further, let's rerun the code from the previous notebook to build the trained model and get predicted values for the test data.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Load the data from the CSV file
delays_df = pd.read_csv('Lots_of_flight_data.csv')
# Remove null values - nulls can cause problems during training
delays_df.dropna(inplace=True)
# Move the feature columns into the X DataFrame
X = delays_df.loc[:,['DISTANCE', 'CRS_ELAPSED_TIME']]
# Move the label column into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split the data into a training dataset and a test dataset
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.3,
random_state=42
)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model
y_pred = regressor.predict(X_test)
###Output
_____no_output_____
###Markdown
In the last notebook, you may have noticed that the output displays differently when the predicted values are loaded into y_pred and the actual (label) values are loaded into y_test.
###Code
y_pred
y_test
###Output
_____no_output_____
###Markdown
You can use **type()** to check the datatype of an object.
###Code
type(y_pred)
type(y_test)
###Output
_____no_output_____
###Markdown
* **y_pred** is a numpy array. * **y_test** is a pandas DataFrame. You can also see this difference if you try to use the **head** method on **y_pred**. **head** is a method of the DataFrame class and not a method of numpy arrays, so an error is returned.
###Code
y_pred.head()
###Output
_____no_output_____
###Markdown
A one dimensional numpy array is similar to a pandas Series.
###Code
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
###Output
0 Pearson
1 Changi
2 Narita
dtype: object
Narita
###Markdown
A two dimensional numpy array is similar to a pandas DataFrame.
###Code
airports_array = np.array([
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']])
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
###Output
0 1
0 YYZ Pearson
1 SIN Changi
2 NRT Narita
YYZ
###Markdown
If you need the functionality of a DataFrame, you can convert data from numpy objects to pandas objects and vice versa. As in the example below, we use the DataFrame constructor to load the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*. Right after that, we can use the functions of the DataFrame object.
###Code
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
###Output
_____no_output_____ |
Quantum Circuit Simulator.ipynb | ###Markdown
Quantum Circuit Simulator This work is written as a part of the application for the Quantum Open Source Foundation Mentorship program. In this project we are going to be implementing a Quantum Circuit Simulator from scratch. I have written several helper functions in order to have a modular structure that will help me track and fix bugs along the way. I will describe in detail how each function works and is relevant to our task. First we are going to import the necessary functions. We are also going to define the text for the error message, in case the user chooses a gate that has not been implemented.
###Code
import numpy as np
import random
import collections
from math import log2
ni_error = "ERROR: Choose one of the implemented gates"
###Output
_____no_output_____
###Markdown
Get Ground State In the template our first objective was to implement a 'get_ground_state' function that takes 'num_qubits' as an input and returns a vector with '1' as a first element and '0' for other elements. The 'ground_state' variable is comprised of two parts:   1) First element that we can represent as a list with one element.   2) List of the length 'num_elements' - 1, where every element is equal to zero. We have generated it using list comprehension.
###Code
def get_ground_state(num_qubits):
num_elements = 2 ** num_qubits
ground_state = [1] + [0 for i in range(num_elements - 1)]
return ground_state
###Output
_____no_output_____
###Markdown
To see if it works lets create a ground state with num_qubits = X:
###Code
X = 3
get_ground_state(X)
###Output
_____no_output_____
###Markdown
Make Operator The second part of the task is the implementation of a general 'make_operator' function for our solution. We want a function that takes the 'total_qubits' of a quantum computer and generates a matrix operator of a given gate for a 'target_qubit' defined by the user. First we are going to implement a switch function for our gates. I have taken the time to hardcode unitaries for several popular gates. Additionally, I have added optional arguments for the bonus task of implementing parameterized gates. The 'get_unitary' function outputs an error string in case the user chooses a gate that has not been implemented, which will be useful for raising error messages. Finally, we also convert the string to lower case in case a user types the name of a gate with capital letters.
###Code
def get_unitary(gate_name, theta = 0, phi = 0, lam = 0):
i_ = np.complex(0, 1)
unitary_collection = {
"x" : np.array([[0, 1], [1, 0]]),
"y" : np.array([[0, -i_], [i_, 0]]),
"z" : np.array([[1, 0], [0, -1]]),
"s" : np.array([[1, 0], [0, np.exp((i_ * np.pi)/2)]]),
"sdg" : np.array([[1, 0], [0, np.exp(-1*((i_ * np.pi)/2))]]),
"h" : np.array([[1/np.sqrt(2), 1/np.sqrt(2)], [1/np.sqrt(2), -1/np.sqrt(2)]]),
"t" : np.array([[1, 0], [0, np.exp((i_ * np.pi)/4)]]),
"tdg" : np.array([[1, 0], [0, np.exp(-1*((i_ * np.pi)/4))]]),
"cx" : np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]),
"cnot": np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]),
"u3" : np.array([[np.cos(0.5*theta),-np.exp(1.0j*lam)*np.sin(0.5*theta)],
[np.exp(1.0j*phi)*np.sin(0.5*theta),np.exp(1.0j*(phi+lam))*np.cos(0.5*theta)]])
}
return unitary_collection.get(gate_name.lower(), ni_error)
###Output
_____no_output_____
###Markdown
We can show its functionality by applying it to get a unitary for any of the implemented gates:
###Code
example_gate = "x"
get_unitary(example_gate.lower())
###Output
_____no_output_____
###Markdown
We can see that to generate a matrix operator we need to create a list of length log2(total_qubits) consisting entirely of identity matrices. Then we are going to put the gate unitary in place of the target qubit. After that we need to continuously apply the Kronecker product from left to right.
###Code
def generate_identity_list(total_qubits):
I = np.identity(2)
identity_list = [I] * int(log2(total_qubits))
return identity_list
###Output
_____no_output_____
###Markdown
Let's test it:
###Code
tot_qub = 8
generate_identity_list(tot_qub)
###Output
_____no_output_____
###Markdown
The running Kronecker product function applies 'np.kron' continuously over a list. Essentially, our entire task of implementing a generalised quantum simulator now comes down to swapping the target gates in place of an identity gate.
###Code
def runningkron(identity_list):
count = 0
answer = 1
while (count != len(identity_list)):
answer = np.kron(answer, identity_list[count])
count += 1
return answer
total_qubits = 8
tester_list = generate_identity_list(total_qubits)
runningkron(tester_list)
###Output
_____no_output_____
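###Markdown
The same running product can be expressed with `functools.reduce`. A small sketch that should produce an identical matrix for the list built above:
###Code
from functools import reduce
# Fold np.kron over the list, starting from the scalar 1 (equivalent to runningkron)
reduced = reduce(np.kron, tester_list, 1)
np.array_equal(reduced, runningkron(tester_list))
###Output
_____no_output_____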
###Markdown
The CNOT gate essentially comes down to putting the control and target qubits in the correct places and then adding the running Kronecker products to each other. We are going to generate two identity_lists for the left side and the right side. Observe that the right side will always contain a target, therefore we can go ahead and replace the identity matrix of the right identity_list with the Pauli-X gate at the target location. Then we are going to insert the projectors |0><0| and |1><1| at the control location in the left and right identity_lists respectively.
###Code
def cx(total_qubits,control,target):
X = np.array([[0, 1], [1, 0]])
left = generate_identity_list(total_qubits)
right = generate_identity_list(total_qubits)
left[control] = np.array([[1, 0], [0, 0]])
right[control] = np.array([[0, 0], [0, 1]])
right[target] = X
return runningkron(left) + runningkron(right)
###Output
_____no_output_____
###Markdown
In the task description we have used a CNOT(0, 2) gate in a 3-qubit circuit. After running the cell we can see that the results are identical to the one in the task description:
###Code
tot_qub = 8
tester = [0, 2]
control = tester[0]
target = tester[1]
cx(tot_qub, control, target)
###Output
_____no_output_____
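###Markdown
As a quick sanity check (not part of the original task), the operator above is a permutation matrix and should therefore be unitary:
###Code
# CNOT is its own inverse, so U @ U.T should be the identity
cnot_op = cx(tot_qub, control, target)
np.allclose(cnot_op @ cnot_op.T, np.eye(tot_qub))
###Output
_____no_output_____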
###Markdown
We can now put all of our functions together and construct our 'make_operator' function. Take note of the error handling: we are raising the error message that was defined at the beginning of our program, taking into account scenarios where the user puts in an incorrect input. We are also extending our support for additional input formats: the user can enter their own unitary or put in a string corresponding to a gate name. To avoid getting the numpy warning "FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison" we make a type comparison instead of a direct comparison. In case the user chooses "cx" we activate the cx function described earlier. Otherwise we proceed with constructing an identity list and swapping in the gate at the target index. We apply runningkron and return a matrix operator.
###Code
def make_operator(total_qubits, gate_unitary, target_qubit, **kwargs):
# add ability to add gate_unitary in different formats
if (type(gate_unitary) == str):
GU = get_unitary(gate_unitary, **kwargs)
elif (type(gate_unitary) == list):
GU = np.array(gate_unitary)
elif (type(gate_unitary) == np.ndarray):
GU = gate_unitary
else:
raise Exception(ni_error)
# get_unitary passes a str when the user inputs a gate that has not been implemented.
if(type(GU) == str):
raise Exception(ni_error)
else:
# we only use control qubit in cx gate, hence why we can define condition for switching to cx this way
if(len(target_qubit) == 2):
if((type(GU) == np.ndarray) & (len(GU) == 4)):
controlqub = target_qubit[0]
targetqub = target_qubit[1]
return cx(total_qubits, controlqub, targetqub)
else:
raise Exception(ni_error)
# if not cx the proceed normally
elif(len(target_qubit) == 1):
if((type(GU) == np.ndarray) & (len(GU) == 2)):
identity_list = generate_identity_list(total_qubits)
identity_list[target_qubit[0]] = GU
return runningkron(identity_list)
else:
raise Exception(ni_error)
else:
raise Exception(ni_error)
###Output
_____no_output_____
###Markdown
In the task description we have used an X gate acting on third qubit in 3-qubit circuit. After running the cell we can see that the results are identical to the one in the task description:
###Code
tot_qub = 8
gate_name = "x"
tester = [2]
make_operator(tot_qub, gate_name, tester)
###Output
_____no_output_____
###Markdown
Additionally, to check the correct flow of the program we are going to apply a CNOT gate again in the same scenario as when we tested just CNOT. After comparing the two we can see that they are exactly the same:
###Code
tot_qub = 8
gate_name = "cx"
tester = [0,2]
make_operator(tot_qub, gate_name, tester)
###Output
_____no_output_____
###Markdown
Run Program We are ready to run our program. The key here is to extract the gate names and the targets from the circuit. This can be done by simply iterating through the circuit and assigning the gates and targets to variables. We can generate matrix operators using our newly extracted variables. We need to make sure that we received an array and not the error message, so we are going to raise an exception in case we don't receive an array. We are going to continuously apply a dot product for every gate until we receive our answer:
###Code
def run_program(initial_state, circuit, **kwargs):
global global_1
global global_2
total_qubs = len(initial_state)
ans = initial_state
for line in circuit:
gate = line["gate"]
targs = line["target"]
params = line["params"] if (len(circuit[0]) > 2) else {"theta":0,"phi":0,"lam":0} # use ternary operator to pass params.
matrix_operator = make_operator(total_qubs, gate, targs, **params)
if(type(matrix_operator) != np.ndarray): # raise error in case we receive a str
raise Exception(ni_error)
ans = np.dot(matrix_operator, ans)
return ans
###Output
_____no_output_____
###Markdown
Let's use some arbitrary parameters and see our results on different input formats:
###Code
test_qpu1 = get_ground_state(3)
test_circ1 = [
{ "gate": "X", "target": [0]},
{ "gate": [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], "target": [0, 1]}
]
run_program(test_qpu1, test_circ1)
test_qpu2 = get_ground_state(2)
test_circ2 = [
{ "gate": np.array([[0, 1], [1, 0]]), "target": [0]},
{ "gate": "Sdg", "target": [1]}
]
run_program(test_qpu2, test_circ2)
###Output
_____no_output_____
###Markdown
Measure All We can utilize a choices() function from the random library. It lends itself well to our task. We will generate a list and are going to pull from it using the probability distribution that we get from our state_vector. Note that we are converting probability amplitudes into probabilities. We will also convert the index into a correct format:
###Code
def measure_all(state_vector):
n = int(log2(len(state_vector)))
nums_list = list(range(len(state_vector)))
prob_dist = [np.abs(i)**2 for i in state_vector] #convert probability amplitude
raw_index = random.choices(nums_list, prob_dist)[0]
form_index = bin(raw_index)[2:].zfill(n) #convert to binary and get rid of the first two chars,
#then fill it with leading zeroes
return form_index
###Output
_____no_output_____
###Markdown
Let's demonstrate its use with an example:
###Code
test_qpu3 = get_ground_state(2)
test_circ3 = [
{ "gate": "h", "target": [0]},
{ "gate": "x", "target": [1]}
]
test_final_state1 = run_program(test_qpu3, test_circ3)
measure_all(test_final_state1)
###Output
_____no_output_____
###Markdown
Get Counts We will apply measure_all function on the state vector in the range num_shots. After which we are going to construct a dictionary using a Counter object. Finally we will sort by the key and convert it into a dictionary, to receive our final answer.
###Code
def get_counts(state_vector, num_shots):
keys = [measure_all(state_vector) for i in range(num_shots)]
answer = dict(collections.Counter(keys)) #Use Counter object to construct a dictionary with indices and values
return dict(sorted(answer.items())) # sort and output the counts
###Output
_____no_output_____
###Markdown
Let's use some arbitrary parameters to demonstrate the program in action:
###Code
test_qpu4 = get_ground_state(4)
test_circ4 = [
{ "gate": "h", "target": [2]},
{ "gate": "cx", "target": [0,3]}
]
test_final_state2 = run_program(test_qpu4, test_circ4)
get_counts(test_final_state2, 10000)
###Output
_____no_output_____
###Markdown
Result Let's use the suggestion in the task description to see if the program works as intended:
###Code
# Define program:
my_circuit = [
{ "gate": "h", "target": [0] },
{ "gate": "cx", "target": [0, 1] }
]
# Create "quantum computer" with 2 qubits (this is actually just a vector :) )
my_qpu = get_ground_state(2)
# Run circuit
final_state = run_program(my_qpu, my_circuit)
# Read results
counts = get_counts(final_state, 1000)
print(counts)
# Should print something like:
# {
# "00": 502,
# "11": 498
# }
# Voila!
###Output
{'00': 510, '11': 490}
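###Markdown
A small sketch of a check on this result, assuming the cell above was run: only the two Bell-state bitstrings should appear and the counts should add up to the number of shots.
###Code
print(set(counts.keys()) <= {'00', '11'})   # only the expected outcomes
print(sum(counts.values()) == 1000)         # counts add up to the number of shots
###Output
_____no_output_____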
###Markdown
We can see that the simulator produces the desired output. Bonus Requirements Parametric gates We can add the support for parameters by simply setting default values in the get_unitary function. We are going to unpack the contents of parameters dictionary using a ** operator. I have taken the liberty of setting default values to the ones described in the task description. We are going to observe our function at work. We will see that after rounding we will get the desired result. Observe that I have decided to put in an argument "lam" instead of "lambda". This was done because in python lambda is a keyword that signifies an anonymous function. Instead of reworking the existing code, I opted for a quick solution of slightly changing the name.
###Code
test_params_1 = { "theta": 3.1415, "phi": 1.5708, "lam": -3.1415 }
bonus_qpu_1 = get_ground_state(2)
bonus_circ_1 = [{ "gate": "u3", "params": test_params_1, "target": [0] }]
bonus_final_state_1 = run_program(bonus_qpu_1, bonus_circ_1)
get_counts(bonus_final_state_1, 10000)
###Output
_____no_output_____
###Markdown
Let's try with different parameters to see if the function indeed takes in the new values for an input.
###Code
test_params_2 = { "theta": 1, "phi": 1, "lam": 1 }
bonus_qpu_2 = get_ground_state(2)
bonus_circ_2 = [{ "gate": "u3", "params": test_params_2, "target": [0] }]
###Output
_____no_output_____
###Markdown
We have only changed the parameters and left everything else exactly the same. We got a different output, hence we achieved our goal.
###Code
bonus_final_state_2 = run_program(bonus_qpu_2, bonus_circ_2)
get_counts(bonus_final_state_2, 10000)
###Output
_____no_output_____
###Markdown
Finally, let's see if we are going to get the same values as the ones outlined in the task. We are going to unpack the dictionary within the make_operator function to get the values for theta, phi and lambda.
###Code
test_params_3 = { "theta": 3.1415, "phi": 1.5708, "lam": -3.1415 }
test_operator_1 = make_operator(2,"u3",[0], **test_params_3)
print(test_operator_1)
###Output
[[ 4.63267949e-05+0.00000000e+00j 9.99999995e-01+9.26535896e-05j]
[-3.67320510e-06+9.99999999e-01j 4.46251166e-09-4.63267947e-05j]]
###Markdown
While these numbers look like they could work, we want to get the version that will be simpler to understand and will match the one specified in the task description:```[[ 0+0j, 1+0j], [ 0+1j, 0+0j]]```We will need to round our output:
###Code
def complex_rounder(num):
return complex(round(num.real),round(num.imag))
roundedop = []
roundedop.append([complex_rounder(test_operator_1[0][0]), complex_rounder(test_operator_1[0][1])])
roundedop.append([complex_rounder(test_operator_1[1][0]), complex_rounder(test_operator_1[1][1])])
###Output
_____no_output_____
###Markdown
While our numbers are not exactly the same as the ones in the example, we see that our output is very close; the difference might be attributed to the rounding method.
###Code
print(np.array(roundedop))
###Output
[[ 0.+0.j 1.+0.j]
[-0.+1.j 0.-0.j]]
###Markdown
Variational Quantum Algorithms My implementation is not the same as specified in the task description. I have made several attempts to implement it the way it was specified, but I reasoned that it will require more source code alteration, which might cause the program to behave unpredictably.
###Code
params = [3.1415, 1.5708]
globals()["global_1"] = params[0]
globals()["global_2"] = params[1]
bonus_qpu_3 = get_ground_state(2)
bonus_circ_3 = [{ "gate": "u3", "target": [0],"params": { "theta": global_1, "phi":global_2, "lam": -3.1415 }}]
bonus_final_state_3 = run_program(bonus_qpu_3, bonus_circ_3)
get_counts(bonus_final_state_3, 10000)
###Output
_____no_output_____
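###Markdown
Before varying the parameters by hand below, note one possible way (a sketch under my own assumptions, not the exact interface suggested in the task) to plug this simulator into a classical optimizer: wrap the parameterized circuit in an objective function and hand it to `scipy.optimize.minimize`.
###Code
from scipy.optimize import minimize
def objective(params):
    # Hypothetical cost: probability lost from |00> after a u3 rotation on qubit 0
    circ = [{ "gate": "u3", "target": [0],
              "params": { "theta": params[0], "phi": params[1], "lam": params[2] }}]
    state = run_program(get_ground_state(2), circ)
    return 1 - np.abs(state[0]) ** 2
result = minimize(objective, x0=[0.1, 0.1, 0.1], method="Powell")
print(result.x)
###Output
_____no_output_____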
###Markdown
We can confirm that this works, by changing the parameters:
###Code
params = [1, 2]
globals()["global_1"] = params[0]
globals()["global_2"] = params[1]
bonus_qpu_3 = get_ground_state(2)
bonus_circ_3 = [{ "gate": "u3", "target": [0],"params": { "theta": global_1, "phi":global_2, "lam": -3.1415 }}]
bonus_final_state_3 = run_program(bonus_qpu_3, bonus_circ_3)
get_counts(bonus_final_state_3, 10000)
###Output
_____no_output_____ |
models/Digit classifier.ipynb | ###Markdown
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
from keras import optimizers
from keras.datasets import mnist
from keras.utils import to_categorical
import keras
from keras.models import Sequential
from keras.layers import Dense,MaxPool2D,Conv2D,Dropout,Flatten,Activation,Dropout
from tensorflow.keras.callbacks import EarlyStopping
# loading the MNIST handwritten digit dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# threshold for removing noise
_,X_train_th = cv2.threshold(X_train,127,255,cv2.THRESH_BINARY)
_,X_test_th = cv2.threshold(X_test,127,255,cv2.THRESH_BINARY)
X_train = X_train_th.reshape(-1,28,28,1)
X_test = X_test_th.reshape(-1,28,28,1)
y_train = to_categorical(y_train, num_classes = 10)
y_test = to_categorical(y_test, num_classes = 10)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
InputShape = (28,28,1)
model = Sequential()
model.add(Conv2D(input_shape=(InputShape), filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Flatten())
model.add(Dense(units=4080, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=4080, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=10, activation='softmax'))
model.summary()
early_stop = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=25)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])
BatchSize = 64
Epochs = 20
history = model.fit(X_train, y_train, batch_size=BatchSize, epochs=Epochs,validation_data=(X_test,y_test), callbacks=[early_stop])
def plot_model_history(model_history):
fig, axs = plt.subplots(1,2,figsize=(15,5))
axs[0].plot(range(1,len(model_history.history['accuracy'])+1),model_history.history['accuracy'])
axs[0].plot(range(1,len(model_history.history['val_accuracy'])+1),model_history.history['val_accuracy'])
axs[0].set_title('Model Accuracy')
axs[0].set_ylabel('Accuracy')
axs[0].set_xlabel('Epoch')
axs[0].set_xticks(np.arange(1,len(model_history.history['accuracy'])+1))
axs[0].legend(['train', 'val'], loc='best')
axs[1].plot(range(1,len(model_history.history['loss'])+1),model_history.history['loss'])
axs[1].plot(range(1,len(model_history.history['val_loss'])+1),model_history.history['val_loss'])
axs[1].set_title('Model Loss')
axs[1].set_ylabel('Loss')
axs[1].set_xlabel('Epoch')
axs[1].set_xticks(np.arange(1,len(model_history.history['loss'])+1))
axs[1].legend(['train', 'val'], loc='best')
fig.savefig('plot.png')
plt.show()
plot_model_history(history)
model.save('digit_classifier.h5')
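# Inference sketch (assumption, left as comments): the saved file can be reloaded later with
#   clf = keras.models.load_model('digit_classifier.h5')
#   probs = clf.predict(X_test[:1])
#   np.argmax(probs) then gives the predicted digit for the first test image.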
results = model.evaluate(X_test, y_test)
print(results)
###Output
[0.06501168012619019, 0.9879000186920166]
|
examples/paper_recreations/Wu et al. (2010)/Wu et al. (2010) - Part A.ipynb | ###Markdown
Example: Regenerating Data from [R. Wu et al. / Elec Acta 54 25 (2010) 7394–7403](http://www.sciencedirect.com/science/article/pii/S0013468610009503) Import the modules
###Code
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
import openpnm.models.geometry as gm
import openpnm.topotools as tt
%matplotlib inline
np.random.seed(10)
###Output
_____no_output_____
###Markdown
Set the workspace loglevel to not print anything
###Code
ws = op.Workspace()
ws.settings["loglevel"] = 50
###Output
_____no_output_____
###Markdown
As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel
###Code
%run shared_funcs.ipynb
###Output
_____no_output_____
###Markdown
The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. The network size is doubled in the z direction for percolation, but the diffusion calculation is effectively only performed on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one. We can inspect the source in this notebook by running a code cell with the following: simulation?? Run the simulation once for a network of size 8 x 8 x 8
###Code
x_values, y_values = simulation(n=8)
plt.figure()
plt.plot(x_values, y_values, 'ro')
plt.title('normalized diffusivity versus saturation')
plt.xlabel('saturation')
plt.ylabel('normalized diffusivity')
plt.show()
###Output
_____no_output_____
###Markdown
Example: Regenerating Data from [R. Wu et al. / Elec Acta 54 25 (2010) 7394–7403](http://www.sciencedirect.com/science/article/pii/S0013468610009503) Import the modules
###Code
import openpnm as op
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
import openpnm.models.geometry as gm
import openpnm.topotools as tt
%matplotlib inline
np.random.seed(10)
###Output
_____no_output_____
###Markdown
Set the workspace loglevel to not print anything
###Code
ws = op.Workspace()
ws.settings["loglevel"] = 50
###Output
_____no_output_____
###Markdown
As the paper requires some lengthy calculation we have split it into parts and put the function in a separate notebook to be re-used in each part. The following code runs and loads the shared functions into this kernel
###Code
%run shared_funcs.ipynb
###Output
_____no_output_____
###Markdown
The main function runs the simulation for a given network size 'n' and number of points for the relative diffusivity curve. Setting 'npts' to 1 will return the single phase diffusivity. The network size is doubled in the z direction for percolation, but the diffusion calculation is effectively only performed on the middle square section of length 'n'. This is achieved by copying the saturation distribution from the larger network to a smaller one. We can inspect the source in this notebook by running a code cell with the following: simulation?? Run the simulation once for a network of size 8 x 8 x 8
###Code
x_values, y_values = simulation(n=8)
plt.figure()
plt.plot(x_values, y_values, 'ro')
plt.title('normalized diffusivity versus saturation')
plt.xlabel('saturation')
plt.ylabel('normalized diffusivity')
plt.show()
###Output
_____no_output_____ |
V4A Example 09 - Are You in Voila.ipynb | ###Markdown
V4A Example 09 - Are You in Voila? The environment variable `SERVER_SOFTWARE` will be set to `voila/{version}` when running in Voila. Here are a few ways to use that.
###Code
import os
print(os.getenv('SERVER_SOFTWARE'))
def in_voila():
return os.getenv('SERVER_SOFTWARE', '').startswith('voila')
in_voila()
def hide_in_voila(thing):
if in_voila():
return None
else:
return thing
output = "Hide this in voila"
hide_in_voila(output)
###Output
_____no_output_____ |
Lesson 4.ipynb | ###Markdown
Course "Data Analysis Algorithms" Lesson 4. Decision tree construction algorithm Homework for Lesson 4
###Code
import matplotlib.pyplot as plt
import random
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn import model_selection
import numpy as np
# Implement the node class
class Node:
def __init__(self, index, t, true_branch, false_branch):
        self.index = index # index of the feature compared with the threshold in this node
        self.t = t # threshold value
        self.true_branch = true_branch # subtree that satisfies the condition in the node
        self.false_branch = false_branch # subtree that does not satisfy the condition in the node
# And the terminal node (leaf) class
class Leaf:
def __init__(self, data, labels):
self.data = data
self.labels = labels
self.prediction = self.predict()
def predict(self):
        # count how many objects of each class there are
        classes = {} # build a dictionary "class: number of objects"
for label in self.labels:
if label not in classes:
classes[label] = 0
classes[label] += 1
        # find the class with the largest number of objects in this leaf and return it
prediction = max(classes, key=classes.get)
return prediction
# Gini impurity calculation
def gini(labels):
    # count how many objects of each class there are
classes = {}
for label in labels:
if label not in classes:
classes[label] = 0
classes[label] += 1
    # compute the criterion
impurity = 1
for label in classes:
p = classes[label] / len(labels)
impurity -= p ** 2
return impurity
# Split quality calculation
def quality(left_labels, right_labels, current_gini):
    # fraction of the sample that went to the left subtree
p = float(left_labels.shape[0]) / (left_labels.shape[0] + right_labels.shape[0])
return current_gini - p * gini(left_labels) - (1 - p) * gini(right_labels)
# Split the dataset at a node
def split(data, labels, index, t):
left = np.where(data[:, index] <= t)
right = np.where(data[:, index] > t)
true_data = data[left]
false_data = data[right]
true_labels = labels[left]
false_labels = labels[right]
return true_data, false_data, true_labels, false_labels
# Find the best split
def find_best_split(data, labels):
    # minimum number of objects in a node
min_leaf = 5
current_gini = gini(labels)
best_quality = 0
best_t = None
best_index = None
n_features = data.shape[1]
for index in range(n_features):
        # check only unique values of the feature, skipping duplicates
t_values = np.unique([row[index] for row in data])
for t in t_values:
true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
            # skip splits that leave fewer than 5 objects in a node
if len(true_data) < min_leaf or len(false_data) < min_leaf:
continue
current_quality = quality(true_labels, false_labels, current_gini)
            # choose the threshold that gives the largest quality gain
if current_quality > best_quality:
best_quality, best_t, best_index = current_quality, t, index
return best_quality, best_t, best_index
# Build the tree with a recursive function
def build_tree(data, labels):
quality, t, index = find_best_split(data, labels)
    # Base case: stop the recursion when there is no quality gain
if quality == 0:
return Leaf(data, labels)
true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
    # Recursively build the two subtrees
true_branch = build_tree(true_data, true_labels)
false_branch = build_tree(false_data, false_labels)
    # Return the node with all its subtrees, i.e. the whole tree
return Node(index, t, true_branch, false_branch)
def classify_object(obj, node):
    # Stop the recursion once a leaf is reached
if isinstance(node, Leaf):
answer = node.prediction
return answer
if obj[node.index] <= node.t:
return classify_object(obj, node.true_branch)
else:
return classify_object(obj, node.false_branch)
def predict(data, tree):
classes = []
for obj in data:
prediction = classify_object(obj, tree)
classes.append(prediction)
return classes
# Print the structure of our tree
def print_tree(node, spacing=""):
    # If it is a leaf, print its prediction
if isinstance(node, Leaf):
print(spacing + "Прогноз:", node.prediction)
return
    # Print the feature index and the threshold at this node
print(spacing + 'Индекс', str(node.index))
print(spacing + 'Порог', str(node.t))
    # Recursive call on the true (positive) subtree
print (spacing + '--> True:')
print_tree(node.true_branch, spacing + " ")
    # Recursive call on the false (negative) subtree
print (spacing + '--> False:')
print_tree(node.false_branch, spacing + " ")
# Define an accuracy function as the share of correct answers
def accuracy_metric(actual, predicted):
correct = 0
for i in range(len(actual)):
if actual[i] == predicted[i]:
correct += 1
return correct / float(len(actual)) * 100.0
# Visualize the tree on a plot
def get_meshgrid(data, step=.05, border=1.2):
x_min, x_max = data[:, 0].min() - border, data[:, 0].max() + border
y_min, y_max = data[:, 1].min() - border, data[:, 1].max() + border
return np.meshgrid(np.arange(x_min, x_max, step), np.arange(y_min, y_max, step))
###Output
_____no_output_____
###Markdown
__Reproduce the calculation from the lesson__
###Code
# generate the data
classification_data, classification_labels = datasets.make_classification(n_features = 2, n_informative = 2,
n_classes = 2, n_redundant=0,
n_clusters_per_class=1, random_state=5)
# visualize the generated data
colors = ListedColormap(['red', 'blue'])
light_colors = ListedColormap(['lightcoral', 'lightblue'])
plt.figure(figsize=(8,8))
plt.scatter(list(map(lambda x: x[0], classification_data)), list(map(lambda x: x[1], classification_data)),
c=classification_labels, cmap=colors)
# Split the sample into training and test sets
train_data, test_data, train_labels, test_labels = model_selection.train_test_split(classification_data,
classification_labels,
test_size = 0.3,
random_state = 1)
# Build the tree on the training set
my_tree = build_tree(train_data, train_labels)
print_tree(my_tree)
# Get the predictions for the training set
train_answers = predict(train_data, my_tree)
# And get the predictions for the test set
answers = predict(test_data, my_tree)
# Accuracy on the training set
train_accuracy = accuracy_metric(train_labels, train_answers)
train_accuracy
# Accuracy on the test set
test_accuracy = accuracy_metric(test_labels, answers)
test_accuracy
plt.figure(figsize = (16, 7))
# training set plot
plt.subplot(1,2,1)
xx, yy = get_meshgrid(train_data)
mesh_predictions = np.array(predict(np.c_[xx.ravel(), yy.ravel()], my_tree)).reshape(xx.shape)
plt.pcolormesh(xx, yy, mesh_predictions, cmap = light_colors)
plt.scatter(train_data[:, 0], train_data[:, 1], c = train_labels, cmap = colors)
plt.title(f'Train accuracy={train_accuracy:.2f}')
# test set plot
plt.subplot(1,2,2)
plt.pcolormesh(xx, yy, mesh_predictions, cmap = light_colors)
plt.scatter(test_data[:, 0], test_data[:, 1], c = test_labels, cmap = colors)
plt.title(f'Test accuracy={test_accuracy:.2f}')
###Output
_____no_output_____
###Markdown
Task 1 In the code from the lesson materials, implement one or more of the stopping criteria (number of leaves, number of features used, tree depth, etc.). We rewrite build_tree with a tree-depth limit
###Code
# Stopping criterion based on tree depth.
def build_tree_by_depth(data, labels, max_tree_depth=5, tree_depth=0):
quality, t, index = find_best_split(data, labels)
    # Base case: stop the recursion when there is no quality gain or the maximum depth is reached
if quality == 0 or tree_depth >= max_tree_depth:
return Leaf(data, labels)
true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
    # Recursively build the two subtrees
true_branch = build_tree_by_depth(true_data, true_labels, max_tree_depth, tree_depth + 1)
false_branch = build_tree_by_depth(false_data, false_labels, max_tree_depth, tree_depth + 1)
    # Return the node with all its subtrees, i.e. the whole tree
return Node(index, t, true_branch, false_branch)
###Output
_____no_output_____
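###Markdown
The task also mentions other stopping criteria besides tree depth. Below is a minimal sketch of one more such criterion — a minimum number of objects required to split a node; the function name build_tree_min_samples and the parameter min_samples_split are illustrative assumptions, not part of the lesson materials.
###Code
# Stopping criterion based on a minimum number of objects required to split a node (illustrative sketch).
def build_tree_min_samples(data, labels, min_samples_split=10):
    quality, t, index = find_best_split(data, labels)
    # Stop when there is no quality gain or too few objects remain to split
    if quality == 0 or len(data) < min_samples_split:
        return Leaf(data, labels)
    true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
    true_branch = build_tree_min_samples(true_data, true_labels, min_samples_split)
    false_branch = build_tree_min_samples(false_data, false_labels, min_samples_split)
    return Node(index, t, true_branch, false_branch)
###Output
_____no_output_____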
###Markdown
Run with the default tree depth value.
###Code
my_tree = build_tree_by_depth(train_data, train_labels)
print_tree(my_tree)
###Output
Индекс 0
Порог 0.16261402870113306
--> True:
Индекс 1
Порог -1.5208896621663803
--> True:
Индекс 0
Порог -0.9478301462477035
--> True:
Прогноз: 0
--> False:
Прогноз: 1
--> False:
Прогноз: 0
--> False:
Прогноз: 1
###Markdown
Run with the tree depth limited to level 2.
###Code
my_tree = build_tree_by_depth(train_data, train_labels, max_tree_depth=2)
print_tree(my_tree)
###Output
Индекс 0
Порог 0.16261402870113306
--> True:
Индекс 1
Порог -1.5208896621663803
--> True:
Прогноз: 0
--> False:
Прогноз: 0
--> False:
Прогноз: 1
###Markdown
Task 2 Implement a tree for a regression problem. Take the tree implemented in the lesson materials as a basis, replacing the prediction mechanism in a leaf with the mean value over the sample, and the Gini criterion with the variance of the values.
###Code
class Leaf:
def __init__(self, data, labels):
self.data = data
self.labels = labels
self.prediction = self.predict()
def predict(self):
return np.mean(self.labels)
def variance(values):
return np.array(values).var()
def quality(left_labels, right_labels, current_variance):
p = float(left_labels.shape[0]) / (left_labels.shape[0] + right_labels.shape[0])
return current_variance - p * variance(left_labels) - (1 - p) * variance(right_labels)
def find_best_split(data, labels):
min_leaf = 5
current_variance = variance(labels)
best_quality = 0
best_t = None
best_index = None
n_features = data.shape[1]
for index in range(n_features):
t_values = np.unique([row[index] for row in data])
for t in t_values:
true_data, false_data, true_labels, false_labels = split(data, labels, index, t)
if len(true_data) < min_leaf or len(false_data) < min_leaf:
continue
current_quality = quality(true_labels, false_labels, current_variance)
if current_quality > best_quality:
best_quality, best_t, best_index = current_quality, t, index
return best_quality, best_t, best_index
classification_data, classification_labels = datasets.make_regression(n_features = 2, n_informative = 2)
my_tree = build_tree(train_data, train_labels)
print_tree(my_tree)
train_answers = predict(train_data, my_tree)
answers = predict(test_data, my_tree)
train_accuracy = accuracy_metric(train_labels, train_answers)
train_accuracy
test_accuracy = accuracy_metric(test_labels, answers)
test_accuracy
plt.figure(figsize = (16, 7))
# training set plot
plt.subplot(1,2,1)
xx, yy = get_meshgrid(train_data)
mesh_predictions = np.array(predict(np.c_[xx.ravel(), yy.ravel()], my_tree)).reshape(xx.shape)
plt.pcolormesh(xx, yy, mesh_predictions, cmap = light_colors)
plt.scatter(train_data[:, 0], train_data[:, 1], c = train_labels, cmap = colors)
plt.title(f'Train accuracy={train_accuracy:.2f}')
# test set plot
plt.subplot(1,2,2)
plt.pcolormesh(xx, yy, mesh_predictions, cmap = light_colors)
plt.scatter(test_data[:, 0], test_data[:, 1], c = test_labels, cmap = colors)
plt.title(f'Test accuracy={test_accuracy:.2f}')
###Output
_____no_output_____
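###Markdown
As a hedged illustration of using the regression version on actual regression data (the names reg_data, reg_tree and mse_metric are invented here, and this is only a sketch rather than a reference solution), the regression tree can be fitted to data from make_regression and scored with the mean squared error instead of accuracy.
###Code
# Fit the regression tree to generated regression data and evaluate it with MSE (sketch).
reg_data, reg_labels = datasets.make_regression(n_features=2, n_informative=2, noise=5, random_state=1)
reg_train_data, reg_test_data, reg_train_labels, reg_test_labels = model_selection.train_test_split(
    reg_data, reg_labels, test_size=0.3, random_state=1)
reg_tree = build_tree(reg_train_data, reg_train_labels)

def mse_metric(actual, predicted):
    actual = np.array(actual)
    predicted = np.array(predicted)
    return np.mean((actual - predicted) ** 2)

print('Train MSE:', mse_metric(reg_train_labels, predict(reg_train_data, reg_tree)))
print('Test MSE:', mse_metric(reg_test_labels, predict(reg_test_data, reg_tree)))
###Output
_____no_output_____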
###Markdown
Course "Machine Learning in Business" Lesson 4. Case 1. Building and evaluating a model Homework for Lesson 4
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import statsmodels.api as sm
from statsmodels.tsa.arima_model import ARIMA
import itertools
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
import os
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error, median_absolute_error, r2_score
import warnings
###Output
_____no_output_____
###Markdown
Task 1 Read the my_PJME_MW.csv database and resample it to a weekly representation.
###Code
data = pd.read_csv('my_PJME_MW.csv', index_col=[0], parse_dates=[0])
data.head()
data.describe()
plt.figure(figsize =(20,6))
plt.plot( data.index, data['PJME_MW'], 'b' )
plt.title('PJM East потребление энергии' )
plt.ylabel ( 'МВт' )
plt.show()
data_w = data['PJME_MW'].resample('W').mean()
data_w.plot(figsize=(20, 6), title='PJM East потребление энергии')
###Output
_____no_output_____
###Markdown
Task 2 Build a model predicting the 4th point ahead of the current one (h = 4), using the autocorrelation analysis results from the previous lessons.
###Code
def exponential_smoothing(series, alpha):
result = [series[0]] # first value is same as series
for n in range(1, len(series)):
result.append(alpha * series[n] + (1 - alpha) * result[n-1])
return result
def mean_absolute_percentage_error(y_true, y_pred):
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
data_exp_1 = exponential_smoothing(data['PJME_MW'], 0.05)
b_s = pd.DataFrame(data = data[['PJME_MW']])
data_exp_1 = pd.DataFrame(data = data_exp_1, index = b_s.index)
df_end= pd.DataFrame(data_exp_1)
scl = StandardScaler()
scly = StandardScaler()
# build the samples and the model
def split_data_b( data, split_date ):
return data.loc[data.index.get_level_values('Date') <= split_date].copy(), \
data.loc[data.index.get_level_values('Date') > split_date].copy()
train_b, test_b = split_data_b(df_end, '10-10-2014')
X_train_b = train_b.iloc[:-1,:]
y_train_b = train_b[df_end.columns[0]].values[1:]
X_test_b = test_b.iloc[:-1,:]
y_test_b = test_b[df_end.columns[0]].values[1:]
model_gb = GradientBoostingRegressor(max_depth=15, random_state=0, n_estimators=100)
model_gb.fit(X_train_b, y_train_b)
X_test_pred_gb = model_gb.predict(X_test_b)
h = 4
X_train_b = train_b.iloc[:-h,:]
y_train_b = train_b[df_end.columns[0]].values[h:]
X_test_b = test_b.iloc[:-h,:]
y_test_b = test_b[df_end.columns[0]].values[h:]
model_gb.fit(X_train_b, y_train_b)
X_test_pred_gb = model_gb.predict(X_test_b)
warnings.filterwarnings("ignore")
for i in range(4):
X_test_b.loc[len(X_test_b)] = model_gb.predict(X_test_b[-1:])
print('Значение предсказания четвертой точки от текущей:', X_test_b[-1:])
###Output
Значение предсказания четвертой точки от текущей: 0
Date
33415 39576.746329
###Markdown
I did not fully understand this and did it the way I understood it. Task 3 Build a model predicting the 1st point ahead of the current one (h = 1), using the autocorrelation analysis results from the previous lessons.
###Code
print('Значение предсказания первой точки от текущей:', X_test_b[-4:-3])
###Output
Значение предсказания первой точки от текущей: 0
Date
33412 40532.911571
###Markdown
Task 4 Apply auto-recursion and compare the result at the 4th point obtained by direct modelling and by recursive modelling over a sample length of 4 points.
###Code
# I did not fully understand this. I will come back to it later.
###Output
_____no_output_____
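###Markdown
Task 4 was left without a solution above. A minimal sketch of one possible comparison is given below; it is only an illustrative assumption (the names model_1step, direct_pred_4 and recursive_pred_4 are invented), not the course's reference solution: the direct model fitted above with h = 4 is compared with a 1-step model applied recursively four times.
###Code
# Direct model: predicts the point 4 steps ahead in one shot (model_gb was refitted above with h = 4).
direct_pred_4 = model_gb.predict(test_b.iloc[:1, :])[0]

# Recursive model: a 1-step-ahead model applied 4 times, feeding each prediction back in as the next input.
model_1step = GradientBoostingRegressor(max_depth=15, random_state=0, n_estimators=100)
model_1step.fit(train_b.iloc[:-1, :], train_b[df_end.columns[0]].values[1:])
current = test_b.iloc[:1, :].copy()
for _ in range(4):
    next_value = model_1step.predict(current)[0]
    current.iloc[0, 0] = next_value  # feed the prediction back as the input
recursive_pred_4 = next_value

print('Direct 4-step prediction:   ', direct_pred_4)
print('Recursive 4-step prediction:', recursive_pred_4)
###Output
_____no_output_____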
###Markdown
Task 5 Estimate the growth of the recursive model's forecasting error over the intervals from the 1st to the 10th, from the 10th to the 20th, from the 20th to the 30th, ..., from the 10*i-th to the (i+1)*10-th, ..., from the 90th to the 100th point (use averaging over each group of ten points).
###Code
# I did not fully understand this. I will come back to it later.
###Output
_____no_output_____
###Markdown
Task 6 Draw a conclusion about how the behaviour of the series prediction error changed.
###Code
# I did not fully understand this. I will come back to it later.
###Output
_____no_output_____
###Markdown
Continuation of Pandas Data Frames
###Code
import numpy as np
import pandas as pd
from numpy.random import randn
np.random.randint(2,6,3)
df=pd.DataFrame(randn(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
df
df=pd.DataFrame(randn(5,4))
df
###Output
_____no_output_____
###Markdown
Create a New Column
###Code
df['New_Column']=9
df
df.drop('New_Column',axis=1)
pwd
data=pd.read_csv('demodatasetCSV.csv')
data.head()
data=pd.read_excel('demodatasetExcel.xlsx')
data.head()
###Output
_____no_output_____
###Markdown
Review Lesson 3 Homework Game Loop Battle with Function Is there only functions in the game loop? Is there a comment for each function that explains what the function does? Lesson 4 Let the Games Begin Pygame Check out [https://www.pygame.org/news](https://www.pygame.org/news) for more information about pygame. Today's lesson We will be using **Pygame Tutorial for Beginners - Python Game Development Course** [https://youtu.be/FfWpgLFMI7w?t=155](https://youtu.be/FfWpgLFMI7w?t=155) from YouTube as our guide 1) Use the `import` statement. - Import pygame - import the pygame package via Pycharm - file -> Settings... - in settings go to "project name" - Project interpreter - Press the plus and search for pygame - Click on install package - import pygame via pip install - Open the terminal and type 'pip install pygame' 2) Create a game window 3) Create the game loop 4) Add a quit event 5) Add a title, logo, and background color
###Code
### Import pygame and initialize
# Import pygame
# if there is a red squiggle under pygame,
# you need to install pygame
import pygame
# To access anything from pygame use pygame.init()
# Initialize the pygame
pygame.init()
###Output
_____no_output_____
###Markdown
Create the Game Window For more information about pygame.display, read the documentation at [https://www.pygame.org/docs/ref/display.html](https://www.pygame.org/docs/ref/display.html)
###Code
import pygame
# Initialize the pygame
pygame.init()
# Create the game window
# Notice the tuple inside the function call,
# that is why there are two sets of brackets
screen_width = 800
screen_height = 600
# Create the Screen
screen = pygame.display.set_mode( (screen_width,screen_height) )
# This is used to show the screen
while True:
pass
###Output
_____no_output_____
###Markdown
Add the game loop Add an exit event Without the exit event, the loop would be endless. Every mouse click or keyboard press is an event. We need an event to check if the X button is pressed to close the window. Everything that happens in the game needs to happen inside the game loop.
###Code
import pygame
# Initialize the pygame
pygame.init()
# Create the game window
# Notice the tuple inside the function call,
# that is why there are two sets of brackets
screen_width = 800
screen_height = 600
# Create the Screen
screen = pygame.display.set_mode((screen_width,screen_height))
# A boolean while the game is running
running = True
# Beginning of the game loop
while running:
# Take all the events that are happening
# Check if the 'x' button is pressed
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
print("GAME OVER")
###Output
_____no_output_____
###Markdown
Change the title, logo, and background color of game window Gather an image Navigate to [www.flaticon.com](www.flaticon.com) Search for spaceship or any other icon and save the icon as ufo.png Save as **PNG** Select Size **32** pixels This is very important. Download the icon Move the png file into the same folder as `main.py` Title and Icon `pygame.display.set_caption("Seagull Invaders")` `icon = pygame.image.load('bird.png')` `pygame.display.set_icon(icon)` Change background Needs to be placed inside the game loop Notice the tuple (Red, Green, Blue) All values can go from 0 - 255 google rgb color picker for a cheat sheet. We can pre-define colors also WHITE = (200, 200, 200) BLACK = (0, 0, 0) BLUE = (30, 144, 255) GREEN = (60, 179, 113) RED = (178, 0, 0) `screen.fill((0, 0, 0))`
###Code
import pygame
# Initialize the pygame
pygame.init()
# Create the game window
# Notice the tuple inside the function call,
# that is why there are two sets of brackets
screen_width = 800
screen_height = 600
# Create the Screen
screen = pygame.display.set_mode((screen_width,screen_height))
pygame.display.set_caption("Seagull Invaders")
icon = pygame.image.load('bird.png')
pygame.display.set_icon(icon)
# A boolean while the game is running
running = True
# Beginning of the game loop
while running:
# Take all the events that are happening
# Check if the 'x' button is pressed
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
# What color is this going to display?
    screen.fill((0, 0, 255))
pygame.display.update() # This line needs to be present to make anything on the screen move.
print("GAME OVER")
###Output
_____no_output_____
###Markdown
Course "Introduction to Neural Networks" Lesson 4. Convolutional neural networks Homework for Lesson 4
###Code
from __future__ import print_function
import keras # uncomment this line to start training
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import matplotlib.pyplot as plt
import os
###Output
_____no_output_____
###Markdown
Task 1 Try to improve the image recognition accuracy on CIFAR-10 with the convolutional neural network considered in the lesson. Attach an analysis describing what improves the neural network's performance and what degrades it.
###Code
# set the neural network parameters
batch_size = 32
num_classes = 10
epochs = 1
data_augmentation = True
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# split into training and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'тренировочные примеры')
print(x_test.shape[0], 'тестовые примеры')
class_names = ['самолет', 'автомобиль', 'птица', 'кошка', 'олень', 'собака', 'лягушка', 'лошадь', 'корабль', 'грузовик']
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train[i], cmap=plt.cm.binary)
plt.xlabel(class_names[y_train[i][0]])
plt.show()
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# configure the layers of the network
model = Sequential()
# layers responsible for convolution and max-pooling
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# fully connected layers of the network
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# initialize the RMSprop optimizer
opt = keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
# compile the model
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
if not data_augmentation:
print('Не используется data augmentation')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
else:
print('Использование data augmentation в реальном времени')
    # Real-time preprocessing and data augmentation:
datagen = ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
zca_epsilon=1e-06,
rotation_range=0,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.,
zoom_range=0.,
channel_shift_range=0.,
fill_mode='nearest',
cval=0.,
horizontal_flip=True,
vertical_flip=False,
rescale=None,
preprocessing_function=None,
data_format=None,
validation_split=0.0)
    # run data augmentation via fit
#datagen.fit(x_train)
    # run data augmentation via fit_generator
model.fit_generator(datagen.flow(x_train, y_train,
batch_size=batch_size),
epochs=epochs,
validation_data=(x_test, y_test),
workers=4)
# save the model and the weights
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('сохранить обученную модель как %s ' % model_path)
# evaluate the trained model
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
313/313 [==============================] - 3s 9ms/step - loss: 1.6792 - accuracy: 0.3963
Test loss: 1.6792353391647339
Test accuracy: 0.39629998803138733
###Markdown
Let's make the improvements manually.
###Code
# configure the layers of the new network
model_2 = Sequential()
###Output
_____no_output_____
###Markdown
Use only one convolutional layer.
###Code
model_2.add(Conv2D(32, (3, 3), activation='relu', input_shape=x_train.shape[1:]))
# model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
# model.add(Activation('relu'))
# model.add(Conv2D(32, (3, 3)))
# model.add(Activation('relu'))
###Output
_____no_output_____
###Markdown
A pooling layer without dropout; the images are small anyway.
###Code
model_2.add(MaxPooling2D((2, 2)))
# model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(Dropout(0.25))
###Output
_____no_output_____
###Markdown
Another convolutional layer, again just one.
###Code
model_2.add(Conv2D(64, (3, 3), activation='relu'))
# model.add(Conv2D(64, (3, 3), padding='same'))
# model.add(Activation('relu'))
# model.add(Conv2D(64, (3, 3)))
# model.add(Activation('relu'))
###Output
_____no_output_____
###Markdown
Another pooling layer without dropout.
###Code
model_2.add(MaxPooling2D((2, 2)))
# model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(Dropout(0.25))
###Output
_____no_output_____
###Markdown
Add one more convolution.
###Code
model_2.add(Conv2D(64, (3, 3), activation='relu'))
###Output
_____no_output_____
###Markdown
Add a few fully connected layers.
###Code
model_2.add(Flatten())
# model.add(Flatten())
model_2.add(Dense(64, activation='relu'))
# model.add(Dense(512))
# model.add(Activation('relu'))
# model.add(Dropout(0.5))
model_2.add(Dense(10, activation='softmax'))
# model.add(Dense(num_classes))
# model.add(Activation('softmax'))
model_2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model_2.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model_2.evaluate(x_test, y_test, verbose=2)
print(test_acc)
###Output
0.7013999819755554
|
Breast cancer analysis - Bonus task/BONUS_Breast_Cancer_Analysis.ipynb | ###Markdown
**Loading Data**
###Code
from sklearn.datasets import load_breast_cancer
import pandas as pd

X,y = load_breast_cancer(return_X_y=True, as_frame=True)
X.head()
X.describe()
y.describe()
###Output
_____no_output_____
###Markdown
**Data cleaning and preprocessing**
###Code
all_data = pd.concat([X,y], axis=1)
all_data.isna().sum()
all_data.duplicated().sum()
###Output
_____no_output_____
###Markdown
**EDA**
###Code
import seaborn as sns
import matplotlib.pyplot as plt

print(f'Count of Malignant tumors {y.value_counts()[0]}')
print(f'Count of Benign tumors {y.value_counts()[1]}')
ax = sns.countplot(y, label='Count')
plt.show()
###Output
Count of Malignant tumors 212
Count of Benign tumors 357
###Markdown
**Checking correlation**
###Code
plt.figure(figsize=(18,10))
sns.heatmap(X.corr(), annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
**Some features are highly correlated with each other, so it's better to remove highly correlated features. The correlated pairs are:** 1. ('mean perimeter','mean radius') 2. ('mean compactness','mean concave points') 3. ('radius error','perimeter error') 4. ('compactness error','concave points error') 5. ('worst radius','worst perimeter') 6. ('worst compactness','worst concave points') 7. ('worst texture','worst area')
###Code
drop_columns = ['worst texture','worst area', 'worst compactness','worst concave points', 'worst radius','worst perimeter', 'compactness error','concave points error',
'radius error','perimeter error', 'mean compactness','mean concave points', 'mean perimeter','mean radius']
X = X.drop(drop_columns, axis=1)
###Output
_____no_output_____
###Markdown
**Data splitting**
###Code
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
###Output
_____no_output_____
###Markdown
**Modeling**
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

clf = DecisionTreeClassifier(criterion='entropy', max_depth=5)
clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
print(f'The accuracy of the model is: {accuracy_score(y_test,y_pred)*100:0.2f}')
print()
print('confusion matrix')
print(f'{confusion_matrix(y_test, y_pred)}')
###Output
The accuracy of the model is: 94.74
confusion matrix
[[41 2]
[ 4 67]]
###Markdown
THIS IS VICTORIOUS. The model accuracy is 94.74%. **Cross Validation**
###Code
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer
from imblearn.pipeline import make_pipeline
from sklearn import decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
std_slc = StandardScaler()
pca = decomposition.PCA()
pipe = Pipeline(steps=[('std_slc', std_slc),
('pca', pca),
('dec_tree', clf)])
n_components = list(range(1,X.shape[1]+1,1))
criterion = ['gini', 'entropy']
max_depth = [2,4,6,8,10,12]
parameters = dict(pca__n_components=n_components,
dec_tree__criterion=criterion,
dec_tree__max_depth=max_depth)
clf_GS = GridSearchCV(pipe, parameters)
clf_GS.fit(X_train, y_train)
print('Best Criterion:', clf_GS.best_estimator_.get_params()['dec_tree__criterion'])
print('Best max_depth:', clf_GS.best_estimator_.get_params()['dec_tree__max_depth'])
print('Best Number Of Components:', clf_GS.best_estimator_.get_params()['pca__n_components'])
print(); print(clf_GS.best_estimator_.get_params()['dec_tree'])
###Output
Best Criterion: entropy
Best max_depth: 4
Best Number Of Components: 11
DecisionTreeClassifier(criterion='entropy', max_depth=4)
###Markdown
**Running model with best parameters**
###Code
clf = DecisionTreeClassifier(criterion='entropy', max_depth=4)
clf.fit(X_train,y_train)
y_pred = clf.predict(X_test)
print(f'The accuracy of the model is: {accuracy_score(y_test,y_pred)*100:0.2f}')
print()
print('confusion matrix')
print(f'{confusion_matrix(y_test, y_pred)}')
###Output
The accuracy of the model is: 94.74
confusion matrix
[[41 2]
[ 4 67]]
|
Linear Regression Using Normal Equations Method.ipynb | ###Markdown
Linear Regression using Normal Equations Method Loading the dataset
###Code
#Importing the required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
#The first column of this file contains the population of the city and the second column contains the profits
#Reading it as a dataframe and specifying the column names
food_truck = pd.read_csv('ex1data1.txt',names = ["Population","Profits"])
food_truck.head()
###Output
_____no_output_____
###Markdown
Targets and features
###Code
#Adding a column of intercept
food_truck['intercept'] = 1
#Assigning the feature and the target vector
X = food_truck[['Population','intercept']]
y = food_truck['Profits']
#Vectorizing the features and targets
X = np.array(X)
y = np.array(y).flatten()
###Output
_____no_output_____
###Markdown
Normal Equations Method
###Code
#You can calculate using multiple features as well
theta = (np.linalg.inv(X.T.dot(X)).dot(X.T)).dot(y)
theta
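# A quick sanity check (an illustrative sketch, not part of the original exercise):
# plot the line implied by theta on top of the observed data.
y_pred = X.dot(theta)
order = np.argsort(food_truck['Population'].values)
plt.scatter(food_truck['Population'], y, label='observed')
plt.plot(food_truck['Population'].values[order], y_pred[order], color='red', label='normal equation fit')
plt.legend()
plt.show()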
###Output
_____no_output_____ |
Notebooks/MLP_GenCode_110.ipynb | ###Markdown
MLP GenCode MLP_GenCode_109 with one change. Now use NEURONS=32 instead of 128. accuracy: 96.71%, AUC: 99.95%
###Code
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
PC_TRAINS=8000
NC_TRAINS=8000
PC_TESTS=8000
NC_TESTS=8000 # Wen et al 2019 used 8000 and 2000 of each class
PC_LENS=(200,99000)
NC_LENS=(200,99000) # Wen et al 2019 used 250-3500 for lncRNA only
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=32
DROP_RATE=0.25
EPOCHS=100 # 25
SPLITS=5
FOLDS=5 # make this 5 for serious testing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/GenCodeTools.py')
with open('GenCodeTools.py', 'w') as f:
f.write(r.text)
from GenCodeTools import GenCodeLoader
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.GenCodeTools import GenCodeLoader
from SimTools.KmerTools import KmerTools
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
###Output
On Google CoLab, mount cloud-local file, get our code from GitHub.
Mounted at /content/drive/
###Markdown
Data Load Restrict mRNA to those transcripts with a recognized ORF.
###Code
PC_FILENAME='gencode.v26.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v26.lncRNA_transcripts.fa.gz'
PC_FILENAME='gencode.v38.pc_transcripts.fa.gz'
NC_FILENAME='gencode.v38.lncRNA_transcripts.fa.gz'
PC_FULLPATH=DATAPATH+PC_FILENAME
NC_FULLPATH=DATAPATH+NC_FILENAME
loader=GenCodeLoader()
loader.set_label(1)
loader.set_check_utr(False)
pcdf=loader.load_file(PC_FULLPATH)
print("PC seqs loaded:",len(pcdf))
loader.set_label(0)
loader.set_check_utr(False)
ncdf=loader.load_file(NC_FULLPATH)
print("NC seqs loaded:",len(ncdf))
show_time()
###Output
PC seqs loaded: 106143
NC seqs loaded: 48752
2021-07-19 22:30:54 UTC
###Markdown
Data Prep
###Code
def dataframe_length_filter(df,low_high):
(low,high)=low_high
# The pandas query language is strange,
# but this is MUCH faster than loop & drop.
return df[ (df['seqlen']>=low) & (df['seqlen']<=high) ]
def dataframe_shuffle(df):
# The ignore_index option is new in Pandas 1.3.
# The default (False) replicates the old behavior: shuffle the index too.
    # The new option seems more logical.
# After shuffling, df.iloc[0] has index == 0.
# return df.sample(frac=1,ignore_index=True)
return df.sample(frac=1) # Use this till CoLab upgrades Pandas
def dataframe_extract_sequence(df):
return df['sequence'].tolist()
pc_all = dataframe_extract_sequence(
#dataframe_shuffle(
dataframe_length_filter(pcdf,PC_LENS))#)
nc_all = dataframe_extract_sequence(
#dataframe_shuffle(
dataframe_length_filter(ncdf,NC_LENS))#)
#pc_all=['CAAAA','CCCCC','AAAAA','AAACC','CCCAA','CAAAA','CCCCC','AACAA','AAACC','CCCAA']
#nc_all=['GGGGG','TTTTT','GGGTT','GGGTG','TTGTG','GGGGG','TTTTT','GGTTT','GGGTG','TTGTG']
show_time()
print("PC seqs pass filter:",len(pc_all))
print("NC seqs pass filter:",len(nc_all))
# Garbage collection to reduce RAM footprint
pcdf=None
ncdf=None
# Any portion of a shuffled list is a random selection
pc_train=pc_all[:PC_TRAINS]
nc_train=nc_all[:NC_TRAINS]
pc_test=pc_all[PC_TRAINS:PC_TRAINS+PC_TESTS]
nc_test=nc_all[NC_TRAINS:NC_TRAINS+PC_TESTS]
print("PC train, NC train:",len(pc_train),len(nc_train))
print("PC test, NC test:",len(pc_test),len(nc_test))
# Garbage collection
pc_all=None
nc_all=None
def prepare_x_and_y(seqs1,seqs0):
len1=len(seqs1)
len0=len(seqs0)
total=len1+len0
L1=np.ones(len1,dtype=np.int8)
L0=np.zeros(len0,dtype=np.int8)
S1 = np.asarray(seqs1)
S0 = np.asarray(seqs0)
all_labels = np.concatenate((L1,L0))
all_seqs = np.concatenate((S1,S0))
return all_seqs,all_labels # use this to test unshuffled
# bug in next line?
X,y = shuffle(all_seqs,all_labels) # sklearn.utils.shuffle
#Doesn't fix it
#X = shuffle(all_seqs,random_state=3) # sklearn.utils.shuffle
#y = shuffle(all_labels,random_state=3) # sklearn.utils.shuffle
return X,y
Xseq,y=prepare_x_and_y(pc_train,nc_train)
print(Xseq[:3])
print(y[:3])
# Tests:
show_time()
def seqs_to_kmer_freqs(seqs,max_K):
tool = KmerTools() # from SimTools
empty = tool.make_dict_upto_K(max_K)
collection = []
for seq in seqs:
counts = empty
# Last param should be True when using Harvester.
counts = tool.update_count_one_K(counts,max_K,seq,True)
# Given counts for K=3, Harvester fills in counts for K=1,2.
counts = tool.harvest_counts_from_K(counts,max_K)
fdict = tool.count_to_frequency(counts,max_K)
freqs = list(fdict.values())
collection.append(freqs)
return np.asarray(collection)
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
show_time()
###Output
2021-07-19 22:31:10 UTC
###Markdown
Neural network
###Code
def make_DNN():
dt=np.float32
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt)) # relu doesn't work as well
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=dt))
dnn.compile(optimizer='adam', # adadelta doesn't work as well
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
return dnn
model = make_DNN()
print(model.summary())
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS,shuffle=True)
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
            model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(Xfrq,y)
from keras.models import load_model
print(pc_train[0])
Xseq,y=prepare_x_and_y(pc_train,nc_train)
print(Xseq[0])
Xfrq=seqs_to_kmer_freqs(Xseq,MAX_K)
print(Xfrq[0])
X=Xfrq
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
print("predictions.shape",bm_probs.shape)
print("first prediction",bm_probs[0])
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
###Output
predictions.shape (16000, 1)
first prediction [5.6787233e-08]
|
part-05-keywords-cleanup.ipynb | ###Markdown
Keyword cleanup
###Code
import pandas as pd
from techminer import RecordsDataFrame
rdf = RecordsDataFrame(
pd.read_json(
'part-04.json',
orient='records',
lines=True))
###Output
_____no_output_____
###Markdown
Terms with the same number of words
###Code
from techminer import Thesaurus, text_clustering
## find keywords that share the same root
th = text_clustering(rdf['keywords'], sep=';', transformer=lambda x: x.lower())
with open('thesaurus-1.json', 'w') as f:
f.write(th.__repr__())
!head -n 30 thesaurus-1.json
##
## Number of distinct text strings in the keywords
##
print(len(set([w.strip() for x in rdf['keywords'] if x is not None for w in x.split(';')])))
##
## Read the manually edited keywords
##
import json
with open('thesaurus-1-edited.json', 'r') as f:
dictionary = json.loads(' '.join(f.readlines()))
##
## Cleanup
##
from techminer import Thesaurus
th = Thesaurus(dictionary, ignore_case=False, full_match=True, use_re=False)
rdf['keywords (cleaned)'] = rdf['keywords'].map(lambda x: th.apply(x, sep=';'))
rdf['keywords (cleaned)'] = rdf['keywords (cleaned)'].map(lambda x: ';'.join(set([w.strip() for w in x.split(';')])))
rdf['keywords (cleaned)'] = rdf['keywords (cleaned)'].map(lambda x: x if x !='' else None)
##
## Number of distinct text strings in the keywords
##
print(len(set([w.strip() for x in rdf['keywords (cleaned)'] if x is not None for w in x.split(';')])))
###Output
1010
###Markdown
Substring grouping
###Code
##
## substrings in keywords
##
from techminer import text_nesting
tn = text_nesting(rdf['keywords (cleaned)'], sep=';', max_distance=1, transformer=lambda x: x.lower())
with open('thesaurus-2.json', 'w') as f:
f.write(tn.__repr__())
!head -n 60 thesaurus-2.json
!head -n 60 thesaurus-2-edited.json
with open('thesaurus-2-edited.json', 'r') as f:
dictionary = json.loads(' '.join(f.readlines()))
th = Thesaurus(dictionary, ignore_case=False, full_match=True, use_re=False)
rdf['keywords (cleaned)'] = rdf['keywords (cleaned)'].map(lambda x: th.apply(x, sep=';'))
rdf['keywords (cleaned)'] = rdf['keywords (cleaned)'].map(lambda x: ';'.join(set([w.strip() for w in x.split(';')])))
rdf['keywords (cleaned)'] = rdf['keywords (cleaned)'].map(lambda x: x if x !='' else None)
##
## Number of distinct text strings in the keywords
##
print(len(set([w.strip() for x in rdf['keywords (cleaned)'] if x is not None for w in x.split(';')])))
##
## Review
##
from techminer import display_records
display_records(rdf[['Title', 'keywords (cleaned)']].head(10))
###Output
-----------------------------------------------
Record index: 0
{
"Title": "Improving DWT-RNN model via B-spline wavelet multiresolution to forecast a high-frequency time series",
"keywords (cleaned)": "Noise reduction;Moving average;trends;long short-term memory neural network;empirical mode decomposition;time series forecasting;Meta-learning;machine learning;Nonlinear autoregressive neural network"
}
-----------------------------------------------
Record index: 1
{
"Title": "Direct marketing campaigns in retail banking with the use of deep learning and random forests",
"keywords (cleaned)": "stock prediction;Consumer price index;Tokyo Stock Exchange;Textual information;Information science;long short-term memory neural network;Newsprint;time series forecasting;Earnings;Distributed representation;costs;Novel applications;financial data"
}
-----------------------------------------------
Record index: 2
{
"Title": "Combining time-series and textual data for taxi demand prediction in event areas: A deep learning approach",
"keywords (cleaned)": "feedforward neural networks;dynamic neural networks;Self-organized neural networks;Exchange rates;recurrent neural networks;Stationary signal;intelligent computing;Self-organised;Immune algorithms;Exchange rate time series;regularization;financial time series forecasting;Bioinformatics;algorithms;financial data"
}
-----------------------------------------------
Record index: 3
{
"Title": "Stock price forecasting model based on modified convolution neural network and financial time series analysis",
"keywords (cleaned)": "stock forecasting;stock prediction;trading;Levenberg-Marquardt algorithm;NARX algorithm;Auto-regressive exogenous inputs;Big data;Learning algorithms;commerce;Risk assessment;Financial applications;Bankruptcy prediction;time series;artificial intelligence;financial time series forecasting;deep learning;Computational technology;mean square error;forecasting;Financial markets"
}
-----------------------------------------------
Record index: 4
{
"Title": "Sentiment-aware volatility forecasting",
"keywords (cleaned)": "stock forecasting;Long-term forecasting;trading;Multivariant analysis;Historical data;commerce;Market forecast;artificial neural networks;long short-term memory neural network;deep learning;time series forecasting;forecasting accuracy;Multi variate analysis;Learning architectures;forecasting;Financial markets"
}
-----------------------------------------------
Record index: 5
{
"Title": "DeepLOB: Deep convolutional neural networks for limit order books",
"keywords (cleaned)": "Short term prediction;Signal reconstruction;Dow Jones Industrial averages;High frequency HF;Topology;wavelet transforms;forecasting;High-frequency forecasting;finance;financial time series forecasting;high-frequency data;deep learning;forecasting accuracy;Directional accuracy;financial data;Metadata"
}
-----------------------------------------------
Record index: 6
{
"Title": "DeepClue: Visual interpretation of text-based deep stock prediction",
"keywords (cleaned)": "Learning systems;Directional predictions;time series;Brain;Financial markets;trading;Learning algorithms;Uranium alloys;Binary alloys;time series forecasting;machine learning;Threshold parameters;stock markets;financial time series;commerce;artificial neural networks;recurrent neural networks;long short-term memory neural network;Pattern recognition;Nearest neighbor search;Classifiers;anomaly detection;forecasting;Potassium alloys"
}
-----------------------------------------------
Record index: 7
{
"Title": "Stock Price Prediction Based on Information Entropy and Artificial Neural Network",
"keywords (cleaned)": "Stock market index;Medical data analysis;financial data;Errors;financial time series;commerce;Financial markets;Data augmentation;overfitting;investments;long short-term memory neural network;time series;deep learning;mean square error;forecasting;mean absolute error;decision making"
}
-----------------------------------------------
Record index: 8
{
"Title": "Deep Temporal Logistic Bag-of-features for Forecasting High Frequency Limit Order Book Time Series",
"keywords (cleaned)": "predictive capabilities;Recursive prediction;economics;arima;Big data;recurrent neural networks;support vector machines;Complex networks;Data mining;forecasting;time series;Multi-step ahead forecast;Comparative analysis;time series forecasting;algorithms"
}
-----------------------------------------------
Record index: 9
{
"Title": "Comparison of Predictive Algorithms: Backpropagation, SVM, LSTM and Kalman Filter for Stock Market",
"keywords (cleaned)": "Complex networks;time series;algorithms;financial data;Financial markets;trading;Learning algorithms;backpropagation;mape;Parameter estimation;Evolutionary algorithms;functions;polynomials;AMAPE;commerce;artificial neural networks;recurrent neural networks;finance;Stock indices;optimization;PFLARNN;differential evolution;IBM stock indices;forecasting"
}
###Markdown
Deletion based on keywords and other text
###Code
from techminer import display_records
display_records(rdf[['Title', 'keywords (cleaned)']].head(10))
from techminer.keywords import Keywords
kyw = Keywords()
kyw.add_keywords(['Vacuum', 'market data'])
kyw
'vacuum' in kyw
idx = rdf['keywords (cleaned)'].map(lambda x: not kyw.common(x, sep=';'))
idx[0:20]
print('Records before = ', len(rdf))
rdf = rdf[idx]
print('Records after = ', len(rdf))
rdf.to_json(
'part-05.json',
orient='records',
lines=True)
###Output
_____no_output_____ |
num_methods/second/lab2.ipynb | ###Markdown
Ordinary differential equations  Euler method
###Code
import numpy as np
import matplotlib.pyplot as plt

def euler(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
y[i] = y[i - 1] + h * f(x[i - 1], y[i - 1])
return y
###Output
_____no_output_____
###Markdown
To check correctness we are going to solve a simple differential equation$$y' = (x + y)^2,\\y(0) = 0,\\[a, b] = [0, 0.5],\\h = 0.05$$The solution here is the function $y(x) = \tan(x) - x$.
###Code
dy = lambda x, y: (x + y)**2
x = np.linspace(0, 0.5, 100)
y0 = 0
y = euler(dy, x, y0)
y_ans = np.tan(x) - x
plt.figure(figsize=(15, 10))
plt.plot(x, y, x, y_ans)
plt.legend(['euler', 'answer'], loc='best')
plt.xlabel('x')
plt.title('Euler method (Runge-Kutta 1-st order method)')
plt.show()
###Output
_____no_output_____
###Markdown
The next methods we are going to use come from the Runge-Kutta family. Actually, the Euler method is a **special case** of Runge-Kutta methods. Runge-Kutta methods family We are going to try only two of the Runge-Kutta methods: RK3 and RK4.
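###Markdown
As a small illustration of that claim (a sketch added for illustration, not part of the original lab): a Runge-Kutta scheme with a single stage, y_{i+1} = y_i + h*k1 with k1 = f(x_i, y_i), is exactly the Euler update, so the two produce identical results. The right-hand side f_demo below is an arbitrary example.
###Code
def runge_kutta1(f, x, y0):
    h = x[1] - x[0]
    y = np.empty_like(x)
    y[0] = y0
    for i in range(1, len(x)):
        k1 = h * f(x[i - 1], y[i - 1])
        y[i] = y[i - 1] + k1
    return y

f_demo = lambda x, y: x + y           # arbitrary right-hand side for the check
x_demo = np.linspace(0, 1, 11)
print(np.allclose(euler(f_demo, x_demo, 1.0), runge_kutta1(f_demo, x_demo, 1.0)))  # expect True
###Output
_____no_output_____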
###Code
def runge_kutta3(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/3, y[i - 1] + k1/3)
k3 = h * f(x[i - 1] + 2*h/3, y[i - 1] + 2*k2/3)
y[i] = y[i - 1] + (k1 + 3*k3) / 4
return y
def runge_kutta4(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/2, y[i - 1] + k1/2)
k3 = h * f(x[i - 1] + h/2, y[i - 1] + k2/2)
k4 = h * f(x[i - 1] + h, y[i - 1] + k3)
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
###Output
_____no_output_____
###Markdown
Let's solve a slightly different equation$$y' = \frac{\sin(x)}{y},\\y(0) = 1,\\[a, b] = [0, 5],\\h = 1.25$$The exact solution is $y = \sqrt{3 - 2\cos(x)}$
###Code
dy = lambda x, y: np.sin(x) / y
x = np.linspace(0, 5, 5)
y0 = 1
y3 = runge_kutta3(dy, x, y0)
y4 = runge_kutta4(dy, x, y0)
y_ans = np.sqrt(3 - 2*np.cos(x))
plt.figure(figsize=(15, 10))
plt.plot(x, y3, x, y4, x, y_ans)
plt.legend(['rk3', 'rk4', 'ans'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 3-rd and 4-th order methods')
plt.show()
###Output
_____no_output_____
###Markdown
Now let's move to system of differential equations. Runge-Kutta methods for SDE
###Code
def fmap(fs, x):
return np.array([f(*x) for f in fs])
def runge_kutta4_system(fs, x, y0):
h = x[1] - x[0]
y = np.empty((len(x), len(y0)))
y[0] = y0
for i in range(1, len(x)):
k1 = h * fmap(fs, [x[i - 1], *y[i - 1]])
k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)])
k3 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k2/2)])
k4 = h * fmap(fs, [x[i - 1] + h, *(y[i - 1] + k3)])
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
###Output
_____no_output_____
###Markdown
E.g. we have a system of differential equations$$y' = z,\\z' = \frac{2xz}{x^2+1},\\y(0) = 1,\\z(0) = 3$$Let's try to solve it using the Runge-Kutta method of order 4.
###Code
dy = lambda x, y, z: z
dz = lambda x, y, z: 2*x*z / (x*x + 1)
fs = [dy, dz]
x = np.linspace(0, 1, 10)
y0 = np.array([1, 3])
y = runge_kutta4_system(fs, x, y0)
plt.figure(figsize=(15, 10))
plt.plot(x, y[:, 0], x, y[:, 1])
plt.legend(['y(x)', 'z(x)'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 4-th order method for system of differential equations')
plt.show()
###Output
_____no_output_____
###Markdown
Predator-prey equation $$\frac{dx}{dt} = \alpha x - \beta xy\\\frac{dy}{dt} = \delta xy - \gamma y$$where $x$ is the prey population and $y$ is the predator population
###Code
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 2])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equation')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('pray')
plt.ylabel('predator')
plt.title('Parametric graph')
plt.show()
###Output
_____no_output_____
###Markdown
Equilibrium Let's look at the population equilibrium$$y = \frac{\alpha}{\beta}\\x = \frac{\gamma}{\delta}$$We will take initial values close to these to show how the system approaches equilibrium.
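###Markdown
A quick check of these equilibrium values for the coefficients used in this notebook (a short sketch added for illustration):
###Code
# For dx/dt = 2/3*x - 4/3*x*y and dy/dt = x*y - y:
alpha, beta, gamma, delta = 2/3, 4/3, 1.0, 1.0
x_eq = gamma / delta   # prey equilibrium
y_eq = alpha / beta    # predator equilibrium
print(x_eq, y_eq)      # 1.0 0.5 -- the initial condition [1, 101/200] below is close to this point
###Output
_____no_output_____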
###Code
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 101/200])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equilibrium')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('pray')
plt.ylabel('predator')
plt.title('Parametric graph of equilibrium')
plt.show()
###Output
_____no_output_____ |
docs/notebooks/Hydrofunctions_Comparing_Stream_Environments.ipynb | ###Markdown
Comparing Different Stream Environments This Jupyter Notebook compares four streams in different environments in the U.S. Using hydrofunctions, we are able to plot the flow duration graphs for all four streams and compare them.
###Code
import hydrofunctions as hf
%matplotlib inline
###Output
_____no_output_____
###Markdown
Choose four streams from different environments from HydroCloud. Import data for three years. In this example, all four streams are in places with low development:- Colorado Western Slopes: ROARING FORK RIVER NEAR ASPEN, CO.- California Mendicino National Park: MAD R AB RUTH RES NR FOREST GLEN CA- White Mountains, NH: EAST BRANCH PEMIGEWASSET RIVER AT LINCOLN, NH- PINTO CREEK NEAR MIAMI, AZ
###Code
streams = ['09073400','11480390','01074520','09498502']
sites = hf.NWIS(streams, 'dv', start_date='2001-01-01', end_date='2003-12-31')
sites
#Create a dataframe of the four sites
Q = sites.df('discharge')
#Show the first few lines of the dataframe
Q.head()
# rename the columns based on the names of the sites from HydroCloud
Q.columns=['White Mountains National Park', 'White River National Forest', 'Tonto National Forest', 'Mendicino National Park']
# show the first few rows of the data to confirm the changes
Q.head()
#use the built-in functions from hydrofunctions to create a flow duration graph for the dataframe.
hf.flow_duration(Q)
#Pull the stats for each of the four sites.
Q.describe()
###Output
_____no_output_____ |
Coursera/IBM Python 01/Course04/7.4one_layer_neural_network_MNIST.ipynb | ###Markdown
Neural Networks with One Hidden Layer Table of Contents In this lab, you will use a single layer neural network to classify handwritten digits from the MNIST database. Neural Network Module and Training Function Make Some Data Define the Neural Network, Optimizer, and Train the Model Analyze Results Estimated Time Needed: 25 min Preparation We'll need the following libraries
###Code
# Import the libraries we need for this lab
# Using the following line code to install the torchvision library
# !conda install -y torchvision
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import torch.nn.functional as F
import matplotlib.pylab as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Use the following helper functions for plotting the loss:
###Code
# Define a function to plot accuracy and loss
def plot_accuracy_loss(training_results):
plt.subplot(2, 1, 1)
plt.plot(training_results['training_loss'], 'r')
plt.ylabel('loss')
plt.title('training loss iterations')
plt.subplot(2, 1, 2)
plt.plot(training_results['validation_accuracy'])
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.show()
###Output
_____no_output_____
###Markdown
Use the following function for printing the model parameters:
###Code
# Define a function to plot model parameters
def print_model_parameters(model):
count = 0
for ele in model.state_dict():
count += 1
if count % 2 != 0:
print ("The following are the parameters for the layer ", count // 2 + 1)
if ele.find("bias") != -1:
print("The size of bias: ", model.state_dict()[ele].size())
else:
print("The size of weights: ", model.state_dict()[ele].size())
###Output
_____no_output_____
###Markdown
Use the following function to display a data sample as an image:
###Code
# Define a function to display data
def show_data(data_sample):
plt.imshow(data_sample.numpy().reshape(28, 28), cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Neural Network Module and Training Function Define the neural network module or class:
###Code
# Define a Neural Network class
class Net(nn.Module):
# Constructor
def __init__(self, D_in, H, D_out):
super(Net, self).__init__()
self.linear1 = nn.Linear(D_in, H)
self.linear2 = nn.Linear(H, D_out)
# Prediction
def forward(self, x):
x = torch.sigmoid(self.linear1(x))
x = self.linear2(x)
return x
###Output
_____no_output_____
###Markdown
Define a function to train the model. In this case, the function returns a Python dictionary to store the training loss and accuracy on the validation data.
###Code
# Define a training function to train the model
def train(model, criterion, train_loader, validation_loader, optimizer, epochs=100):
i = 0
useful_stuff = {'training_loss': [],'validation_accuracy': []}
for epoch in range(epochs):
for i, (x, y) in enumerate(train_loader):
optimizer.zero_grad()
z = model(x.view(-1, 28 * 28))
loss = criterion(z, y)
loss.backward()
optimizer.step()
#loss for every iteration
useful_stuff['training_loss'].append(loss.data.item())
correct = 0
for x, y in validation_loader:
#validation
z = model(x.view(-1, 28 * 28))
_, label = torch.max(z, 1)
correct += (label == y).sum().item()
accuracy = 100 * (correct / len(validation_dataset))
useful_stuff['validation_accuracy'].append(accuracy)
return useful_stuff
###Output
_____no_output_____
###Markdown
Make Some Data Load the training dataset by setting the parameters train to True and convert it to a tensor by placing a transform object in the argument transform.
###Code
# Create training dataset
train_dataset = dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
###Output
_____no_output_____
###Markdown
Load the testing dataset by setting the parameters train to False and convert it to a tensor by placing a transform object in the argument transform:
###Code
# Create validating dataset
validation_dataset = dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
###Output
_____no_output_____
###Markdown
Create the criterion function:
###Code
# Create criterion function
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Create the training-data loader and the validation-data loader objects:
###Code
# Create data loader for both train dataset and validation dataset
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=2000, shuffle=True)
validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=5000, shuffle=False)
###Output
_____no_output_____
###Markdown
Define the Neural Network, Optimizer, and Train the Model Create the model with 100 neurons:
###Code
# Create the model with 100 neurons
input_dim = 28 * 28
hidden_dim = 100
output_dim = 10
model = Net(input_dim, hidden_dim, output_dim)
###Output
_____no_output_____
###Markdown
Print the model parameters:
###Code
# Print the parameters for model
print_model_parameters(model)
###Output
The following are the parameters for the layer 1
The size of weights: torch.Size([100, 784])
The size of bias: torch.Size([100])
The following are the parameters for the layer 2
The size of weights: torch.Size([10, 100])
The size of bias: torch.Size([10])
###Markdown
Define the optimizer object with a learning rate of 0.01:
###Code
# Set the learning rate and the optimizer
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Train the model by using 30 epochs **(this process takes time)**:
###Code
# Train the model
training_results = train(model, criterion, train_loader, validation_loader, optimizer, epochs=30)
###Output
_____no_output_____
###Markdown
Analyze Results Plot the training total loss or cost for every iteration and plot the training accuracy for every epoch:
###Code
# Plot the accuracy and loss
plot_accuracy_loss(training_results)
###Output
_____no_output_____
###Markdown
Plot the first five misclassified samples:
###Code
# Plot the first five misclassified samples
count = 0
for x, y in validation_dataset:
z = model(x.reshape(-1, 28 * 28))
_,yhat = torch.max(z, 1)
if yhat != y:
show_data(x)
count += 1
if count >= 5:
break
###Output
_____no_output_____
###Markdown
Practice Use nn.Sequential to build exactly the same model as you just built. Use the function train to train the model and use the function plot_accuracy_loss to see the metrics. Also, try different epoch numbers.
###Code
# Practice: Use nn.Sequential to build the same model. Use plot_accuracy_loss to print out the accuracy and loss
# Type your code here
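# --- A minimal sketch of one possible solution (not the only way to do it) ---
# The same two-layer architecture built with nn.Sequential, trained with the
# train() function defined above; layer sizes, optimizer settings, and data
# loaders reuse the variables defined earlier in this notebook.
model_seq = nn.Sequential(
    nn.Linear(input_dim, hidden_dim),   # 784 -> 100
    nn.Sigmoid(),                       # same activation used in Net.forward
    nn.Linear(hidden_dim, output_dim),  # 100 -> 10 (CrossEntropyLoss expects raw logits)
)
optimizer_seq = torch.optim.SGD(model_seq.parameters(), lr=learning_rate)
training_results_seq = train(model_seq, criterion, train_loader, validation_loader,
                             optimizer_seq, epochs=10)
plot_accuracy_loss(training_results_seq)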
###Output
_____no_output_____ |
pandas/Exploration_Correspondance_Mssante.ipynb | ###Markdown
[Opikanoba.org](https://opikanoba.org) Public beta extraction of the MSSanté correspondence directory This notebook is based on the public extraction of 2018-05-28 available on the ASIP Santé website. It can be downloaded from the [Extractions en libre accès](https://annuaire.sante.fr/web/site-pro/extractions-mss) section. The file is downloaded locally into the `/tmp/asip` directory. The notebook is available as open source on GitHub: https://github.com/flrt/jupyter-notebooks
###Code
fname='/tmp/asip/Extraction_Correspondance_MSSante_201805280855.txt'
###Output
_____no_output_____
###Markdown
Redefine the keys to remove spaces and other characters; this makes them easier to work with.
###Code
KEYS_CORRESP = ["type_bal", "adresse_bal", "type_identifiant_pp", "identifiant_pp",
"identification_nationale_pp", "type_identifiant_structure", "identification_structure","service_rattachement", "civilite_exercice", "nom_exercice", "prenom_exercice",
"categorie_professionnelle", "libelle_categorie_professionelle", "code_profession",
"libelle_profession", "code_savoir_faire", "libelle_savoir_faire",
"dematerialisation", "raison_sociale_structure_bal", "enseigne_commerciale_structure_bal",
"complement_localisation_structure_bal", "complement_distribution_structure_bal",
"numero_voie_structure_bal", "complement_numero_voie_structure_bal",
"type_voie_structure_bal", "libelle_voie_structure_bal", "lieu_dit_mention_structure_bal",
"ligne_acheminement_structure_bal", "code_postal_structure_bal",
"departement_structure_bal", "pays_structure_bal"]
import pandas
df=pandas.read_csv(fname, delimiter='|', names=KEYS_CORRESP, header=0, index_col=False)
###Output
/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2728: DtypeWarning: Columns (5,6) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Number of rows in the data file
###Code
len(df)
###Output
_____no_output_____
###Markdown
Professions present in the file
###Code
df.libelle_profession.value_counts()
df['identifiant_pp'].nunique()
df.type_bal.nunique()
###Output
_____no_output_____
###Markdown
Mailbox type: PER for personal mailboxes and ORG for organizational ones
###Code
df.type_bal.value_counts()
df2=df.loc[df.type_bal=='PER']
len(df2)
###Output
_____no_output_____
###Markdown
Remove duplicate records and extract the MSSanté address providers
###Code
df3 = df.loc[df2.drop_duplicates('adresse_bal').adresse_bal.dropna().index]
mss_providers = df3.adresse_bal.str.split('@', expand=True).get(1)
mss_providers.value_counts().head(10)
#_df=df[df['adresse_bal'].str.contains('medecin.mssante.fr')]
#_df.adresse_bal.value_counts()[_df.adresse_bal.value_counts()>4]
###Output
_____no_output_____
###Markdown
Focus on the CHU (university hospital centers)
###Code
chu_mss=df3[df3['adresse_bal'].str.match('.*((ch(r?)u-)|aphm|aphp).*')]
chu_vc = chu_mss.adresse_bal.value_counts()[chu_mss.adresse_bal.value_counts()>1]
###Output
_____no_output_____
###Markdown
Check that there are no duplicates: no records in chu_vc
###Code
chu_vc
chu_vals=mss_providers.value_counts().filter(regex='.*((ch(r?)u-)|aphm|aphp).*', axis=0)
chu_vals
type(chu_vals)
chu_vals.head()
chu_vals.axes
chu_vals.data
chu_vals.values
list(chu_vals.index.get_values())
sdf = pandas.DataFrame(data={'CHU':list(chu_vals.index.get_values()), 'nb':list(chu_vals.values)})
sdf
###Output
_____no_output_____
###Markdown
Plot with seaborn
###Code
import seaborn as sns
sns.set(style="darkgrid")
sns.set(rc={'figure.figsize':(12,10)})
ax = sns.barplot(y="CHU", x='nb', data=sdf)
###Output
_____no_output_____ |
notebooks/Supervised_Retrieval_all_features.ipynb | ###Markdown
Supervised Retrieval for all models with all features. In this notebook we use supervised classification models for a supervised cross-lingual information retrieval task. We use the default settings with all features that remain after removing correlated features and features that take only one value across the whole column.
###Code
#import sys
#import os
#sys.path.append(os.path.dirname((os.path.abspath(''))))
import pandas as pd
from sklearn import preprocessing
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from src.models.predict_model import MAP_score, threshold_counts
###Output
_____no_output_____
###Markdown
I. Import Data In this section we import the feature dataframes for training and for the retrieval task.
###Code
feature_dataframe=pd.read_feather("../data/processed/feature_model_en_de.feather")
feature_retrieval=pd.read_feather("../data/processed/feature_retrieval_en_de.feather")
feature_dataframe = feature_dataframe.rename(columns={"id_source": "source_id", "id_target": "target_id"})
feature_retrieval = feature_retrieval.rename(columns={"id_source": "source_id", "id_target": "target_id"})
###Output
_____no_output_____
###Markdown
Delete all columns with only one value
###Code
column_mask = feature_dataframe.apply(threshold_counts, threshold=1)
feature_dataframe = feature_dataframe.loc[:, column_mask]
feature_retrieval = feature_retrieval.loc[:, column_mask]
len(feature_retrieval.columns)
###Output
_____no_output_____
###Markdown
II. Supervised Retrieval Drop the target label and the indexes for training and testing
###Code
target_train=feature_dataframe['Translation'].astype(float)
data_train=feature_dataframe.drop(columns=['Translation','source_id','target_id'])
target_test=feature_retrieval['Translation'].astype(float)
data_test=feature_retrieval.drop(columns=['Translation','source_id','target_id'])
###Output
_____no_output_____
###Markdown
Z-Normalization
###Code
# standardize features to zero mean and unit variance (z-normalization)
scaler = preprocessing.StandardScaler()
data_train.loc[:, data_train.columns] = scaler.fit_transform(data_train.loc[:, data_train.columns])
data_test.loc[:, data_test.columns] = scaler.transform(data_test.loc[:, data_test.columns])
###Output
_____no_output_____
###Markdown
Naive Bayes
###Code
nb = GaussianNB().fit(data_train, target_train)
prediction = nb.predict_proba(data_test)
print("The MAP score on test set: {:.4f}".format(MAP_score(feature_retrieval['source_id'],target_test,prediction)))
###Output
The MAP score on test set: 0.3244
###Markdown
MLP Classifier
###Code
mlp = MLPClassifier( verbose=True, early_stopping=True).fit(data_train, target_train)
prediction = mlp.predict_proba(data_test)
print("The MAP score on test set: {:.4f}".format(MAP_score(feature_retrieval['source_id'],target_test,prediction)))
###Output
Iteration 1, loss = 0.06539059
Validation score: 0.985682
Iteration 2, loss = 0.03628665
Validation score: 0.986500
Iteration 3, loss = 0.03410492
Validation score: 0.986182
Iteration 4, loss = 0.03367580
Validation score: 0.986545
Iteration 5, loss = 0.03275958
Validation score: 0.987045
Iteration 6, loss = 0.03221838
Validation score: 0.986682
Iteration 7, loss = 0.03168696
Validation score: 0.987000
Iteration 8, loss = 0.03085646
Validation score: 0.986682
Iteration 9, loss = 0.03065942
Validation score: 0.987091
Iteration 10, loss = 0.03035675
Validation score: 0.985864
Iteration 11, loss = 0.03027931
Validation score: 0.986818
Iteration 12, loss = 0.02986307
Validation score: 0.987409
Iteration 13, loss = 0.02938762
Validation score: 0.986818
Iteration 14, loss = 0.02902481
Validation score: 0.987227
Iteration 15, loss = 0.02894969
Validation score: 0.987136
Iteration 16, loss = 0.02849265
Validation score: 0.987182
Iteration 17, loss = 0.02908350
Validation score: 0.987273
Iteration 18, loss = 0.02840851
Validation score: 0.987000
Iteration 19, loss = 0.02889221
Validation score: 0.987227
Iteration 20, loss = 0.02848600
Validation score: 0.987409
Iteration 21, loss = 0.02811073
Validation score: 0.987136
Iteration 22, loss = 0.02823864
Validation score: 0.987136
Iteration 23, loss = 0.02786305
Validation score: 0.986909
Validation score did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
The MAP score on test set: 0.6198
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(max_iter=100000, verbose=10).fit(data_train.to_numpy(), target_train.to_numpy())
prediction = lr.predict_proba(data_test.to_numpy())
print("The MAP score on test set: {:.4f}".format(MAP_score(feature_retrieval['source_id'],target_test,prediction)))
prediction
###Output
_____no_output_____
###Markdown
XGBoost
###Code
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(data_train.to_numpy(), target_train.to_numpy())
prediction = model.predict_proba(data_test).tolist()
print("The MAP score on test set: {:.4f}".format(MAP_score(feature_retrieval['source_id'],target_test,prediction)))
prediction
###Output
_____no_output_____ |
day-2/basics-of-machine-learning.ipynb | ###Markdown
Day 2: Basics of Statistics and Basics of Machine Learning- Random Variables- Common Distributions- Linear Regression- Logistic Regression
###Code
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Day 2: Basics of Machine Learning Linear Regression Loading and inspecting the data We will use a dataset about various species of insects (Source: CrossValidated). We will be interested in explaining how the insects' wing span varies, as influenced by the other measurements in the dataset. Our first step is to take a quick look at the raw data.
###Code
!head ./data/insects.csv
###Output
continent latitude wingsize sex
1 40.5 941.111111111 0
1 42.0 924.444444444 0
1 43.6 924.204444444 0
1 45.7 915.217777778 0
1 45.9 905.471111111 0
1 47.4 896.004444444 0
1 50.0 913.0 0
1 51.8 916.44 0
1 53.8 933.417777778 0
###Markdown
**Discussion:** How would you describe this dataset? How many variables are there? How would you describe these variables? It looks like there are four columns in our dataset: `continent`, `latitude`, `wingsize`, and `sex`. Elements in rows are separated from each other using a tab character; this type of format is often called "tab-separated data".
###Code
insects_df = pd.read_csv('./data/insects.csv', sep='\t')
###Output
_____no_output_____
###Markdown
Now we have the data in a Python object:
###Code
insects_df.head()
###Output
_____no_output_____
###Markdown
We've got our four columns `continent`, `latitude`, `wingsize`, and `sex`.We can see some short descriptions of their qualities using `info`:
###Code
insects_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 42 entries, 0 to 41
Data columns (total 4 columns):
continent 42 non-null int64
latitude 42 non-null float64
wingsize 42 non-null float64
sex 42 non-null int64
dtypes: float64(2), int64(2)
memory usage: 1.4 KB
###Markdown
Looking at the data We can get a first feel for how the quantities in our data are spread out using **histograms**:
###Code
fig, axs = plt.subplots(2, 2, figsize=(10, 8))
for ax, column in zip(axs.flatten(), insects_df.columns):
ax.hist(insects_df[column])
ax.set_title(column)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**Discussion:** What did you learn from these histograms? How do they help you describe the data?Some observations:- `continent` and `sex` take only two values. There are two continents represented in the data, labeled zero and one as well as two sexes (probably Male and Female), also labeled zero and one.These zero/one columns are called **binary** or **indicator variables**, they measure a specific yes/no condition.- The values of `wingspan` cluster into two distinct groups. This is very interesting and worthy of further investigation. ScatterplotsHistograms are useful, but limited, as they do not reveal anything about the *relationships between the columns in our data*. Insread, we will turn to undoubtedly the most effective and flexible visualization: the **scatterplot**.
###Code
_, ax = plt.subplots()
ax.scatter(insects_df.latitude, insects_df.wingsize, s=40)
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Insect wing sizes at various latitudes')
###Output
_____no_output_____
###Markdown
Here we have a scatterplot of `wingsize` against `latitude`.**Discussion:** What patterns do you see in the scatterplot. Can you form any hypothesis about the data?Here are some thoughts:- The most prominent feature of this data is the two bands. There seem to be two very well defined elongated clusters of data, with the average wing size in one cluster much greater than in the other.- Within each cluster there is noticeable tendency for wing size to first decrease and then increase as latitude varies.This leads to a few questions we may wish to answer with the data.1. Are the two clusters associated with one of the other two variables in the dataset, continent or sex?2. Is the increase of wing size as latitude increases real or illusory?Let's answer each of these questions. Are the two clusters associated with either continent or sex?We can discover if the two clusters in the data are associated with either `continent` or `sex` through a well chosen visualization. Let's make the same scatterplot from before, but color each point either red or blue, according to the value of `continent` or `sex`.
###Code
_, ax = plt.subplots(figsize=(8, 8))
for continent, color in ((0, 'red'), (1, 'blue')):
df = insects_df[insects_df.continent == continent]
ax.scatter(df.latitude, df.wingsize, s=40, c=color, label='Continent {}'.format(continent))
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Are the two clusters associated with continent?')
ax.legend()
###Output
_____no_output_____
###Markdown
Values of different continents seem scattered randomly across the two clusters, so it does **not** seem like continent is associated with the clusters.
###Code
_, ax = plt.subplots(figsize=(8, 8))
for sex, color in ((0, 'red'), (1, 'blue')):
df = insects_df[insects_df.sex == sex]
ax.scatter(df.latitude, df.wingsize, s=40, c=color, label='sex {}'.format(sex))
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Are the two clusters associated with sex?')
ax.legend()
###Output
_____no_output_____
###Markdown
There we go!This is pretty definitive, the cluster of the larger insects are all female, and the cluster of smaller insects are all male. This seems like enough evidence to conclude that the sex of the insect causes the data to cluster into two groups. Is an increasing latitude associated with a larger wing size?This question is a little more sophisticated, and we need some new technology to answer it.The idea is to create an equation:$$wingsize \approx a + b \cdot latitude$$Then we can look at the number $b$, which tells us how we should expect `wingsize` to change as `latitude` changes. If we find that $b > 0$, that's evidence that an increasing latitude is associated with an increasing wingspan.I'll skip the technicalities, but the basic tool for creating equations like this is called **linear regression**.
###Code
model = smf.ols(formula='wingsize ~ latitude', data=insects_df).fit()
model.summary()
###Output
_____no_output_____
###Markdown
The linear regression estimated the equation as:$$wingsize \approx 765+ 2.54 \times latitude$$So we can expect, on average, an increase of 2.54 wing size for every additional latitude.But is this really meaningful? We need to make sure our model is representing the data honestly.One way we can visualize this is to look at the equation as the equation for a line. If we *know* the latitude that we find an insect, we can *predict* the wing span using the equation. If we plot the latitudes verses the predictions, we get a line.
###Code
_, ax = plt.subplots(figsize=(8, 8))
for sex, color in ((0, 'red'), (1, 'blue')):
df = insects_df[insects_df.sex == sex]
ax.scatter(df.latitude, df.wingsize, s=40, c=color, label='sex {}'.format(sex))
# Draw the linear regression predictions
latitudes = np.linspace(30, 60, num=250)
wingsizes_hat = model.params[0] + model.params[1] * latitudes
ax.plot(latitudes, wingsizes_hat, linewidth=2, c='black')
ax.set_xlim(30, 60)
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Insect wing sizes at various latitudes')
ax.legend()
###Output
_____no_output_____
###Markdown
This plot demonstrates two serious flaws in our model:- The model has no knowledge of the sex of the insect, so the fit line attemps to bisect the two clusters of data.- The model cannot account for the curvature in the data points. The model attempts to fit a line to data that does not have a linear shape. Accounting for the insects' sexOne way to account for the insects' sex is to modify our equation by adding a new term:$$wingsize \approx a + b \cdot latitude + c \cdot sex$$Again, linear regression can find an equation of this shape describing the data:
###Code
model = smf.ols(formula='wingsize ~ latitude + sex', data=insects_df).fit()
model.summary()
###Output
_____no_output_____
###Markdown
We now have an estimate for the number $c$ of $-88$. This means, that on average, an insect with `sex = 1` costs it about $-88$ in wingsize.The predictions from this model now depend on whether an insect is male or female. We have two lines of predictions, and the sex of the insect chooses which line to use:
###Code
_, ax = plt.subplots(figsize=(8, 8))
for sex, color in ((0, 'red'), (1, 'blue')):
df = insects_df[insects_df.sex == sex]
ax.scatter(df.latitude, df.wingsize, s=40, c=color, label='sex {}'.format(sex))
# Draw the linear regression predictions
latitudes = np.linspace(30, 60, num=250)
for sex, color in ((0, 'red'), (1, 'blue')):
wingsizes_hat = model.params[0] + model.params[1] * latitudes + model.params[2] * sex
ax.plot(latitudes, wingsizes_hat, linewidth=2, c=color)
ax.set_xlim(30, 60)
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Insect wing sizes at various latitudes')
ax.legend()
###Output
_____no_output_____
###Markdown
Accounting for the curvature of the data points We can account for the curvature of the data points by using a *polynomial regression*. This means that we fit powers of latitude bigger than one:$$wingsize \approx a + b \cdot latitude + c \cdot latitude^2 + d \cdot sex$$
###Code
model = smf.ols(formula='wingsize ~ latitude + I(latitude**2) + sex', data=insects_df).fit()
model.summary()
_, ax = plt.subplots(figsize=(8, 8))
for sex, color in ((0, 'red'), (1, 'blue')):
df = insects_df[insects_df.sex == sex]
ax.scatter(df.latitude, df.wingsize, s=40, c=color, label='sex {}'.format(sex))
# Draw the linear regression predictions
latitudes = np.linspace(30, 60, num=250)
for sex, color in ((0, 'red'), (1, 'blue')):
wingsizes_hat = model.params[0] + model.params[1] * latitudes + model.params[2] * latitudes ** 2 + model.params[3] * sex
ax.plot(latitudes, wingsizes_hat, linewidth=2, c=color)
ax.set_xlim(30, 60)
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Insect wing sizes at various latitudes')
ax.legend()
###Output
_____no_output_____
###Markdown
This now looks like a good model that represents the data honestly. How is the line fitted?How did linear regression draw these lines and curves? How were the coefficients estimated?We can think of the fitted line as a *prediction*. If we were to collect a new insect at a certain latitude, the y-coordinate of the line would be our best estiamte for the wing size of that insect.A good strategy for drawing the line would seem to be: **draw the line that minimizes the dissimilarity between the predictions and the actual wing size.**
###Code
model = smf.ols(formula='wingsize ~ latitude', data=insects_df).fit()
_, ax = plt.subplots(figsize=(8, 8))
for sex, color in ((0, 'red'), (1, 'blue')):
df = insects_df[insects_df.sex == sex]
ax.scatter(df.latitude, df.wingsize, s=40, c=color, label='sex {}'.format(sex))
latitudes = np.linspace(30, 60, num=250)
wingsizes_hat = model.params[0] + model.params[1] * latitudes
ax.plot(latitudes, wingsizes_hat, linewidth=2, c='black')
wingsizes_hat = model.params[0] + model.params[1] * insects_df.latitude
for latitude, wingsize, wingsize_hat in zip(insects_df.latitude, insects_df.wingsize, wingsizes_hat):
ax.plot((latitude, latitude), (wingsize, wingsize_hat), color='grey')
ax.set_xlim(30, 60)
ax.set_xlabel('Latitude')
ax.set_ylabel('Wing Size')
ax.set_title('Insect wing sizes at various latitudes')
ax.legend()
###Output
_____no_output_____
###Markdown
In the picture above the vertical distance between each data point $y$ and its estimated value $\hat{y}$ are highlighted.A common numeric measure of the dissimilarity is the *sum of squared residuals*:$$SSR = \sum_{i=1}^{N}{( y_i - \hat{y}_i )^2}$$**The linear regression line is the line that minimizes that sum of squared residuals.** Logistic RegressionWith linear regression we predict some quantity that has a continuous range of values. *Logistic Regression* solves a slightly different problem. What if we wanted to predict the sex of an insect from the other measurements?
###Code
_, ax = plt.subplots(figsize=(8, 4))
# Scatterplot of the data
ax.scatter(insects_df.latitude, insects_df.sex)
ax.set_xlim([30, 65])
ax.set_yticks([0, 1])
ax.set_yticklabels(['sex = 0', 'sex = 1'])
ax.set_xlabel('Latitude')
###Output
_____no_output_____
###Markdown
Logistic regression attempts to estimate the probability that an insect is male ($sex = 1$) or female ($sex = 0$).$$P(sex = 1 \mid latitude)$$
###Code
model = smf.logit(formula='sex ~ latitude', data=insects_df).fit()
model.summary()
###Output
Optimization terminated successfully.
Current function value: 0.603191
Iterations 5
###Markdown
Logistic regression attempts to draw a *curve* of predicted probabilities (not a line, as in linear regression).The above logistic regression results in the following curve:$$P(sex = 1 \mid latitude) = \frac{1}{1 + e^{-(7.3 - 0.16 \times latitude)}}$$
###Code
_, ax = plt.subplots(figsize=(8, 4))
ax.scatter(insects_df.latitude, insects_df.sex)
df = pd.DataFrame({'latitude': np.linspace(30, 65, num=250)})
ax.plot(df, model.predict(df), linewidth=2)
ax.set_xlim([30, 65])
ax.set_yticks([0, 1])
ax.set_yticklabels(['sex = 0', 'sex = 1'])
ax.set_xlabel('Latitude')
ax.set_ylabel('$P(sex = 1)$')
###Output
_____no_output_____
###Markdown
What if we need to make a binary decision, i.e., we need to use the estimated probabilities to actually classify each insect as either male or female? We can accomplish this by thresholding probabilities:
###Code
fig, ax = plt.subplots(figsize=(8, 4))
# Scatterplot of the data
ax.scatter(insects_df.latitude, insects_df.sex)
df = pd.DataFrame({'latitude': np.linspace(30, 65, num=250)})
ax.plot((30, 65), (0.5, 0.5), color='black', linestyle='--')
ax.plot((47.075, 47.075), (0, 1), color='black', linestyle='--')
ax.plot(df, model.predict(df), linewidth=2)
ax.set_xlim([30, 65])
ax.set_yticks([0, 1])
ax.set_yticklabels(['sex = 0', 'sex = 1'])
ax.set_xlabel('Latitude')
ax.set_ylabel('$P(sex = 1)$')
###Output
_____no_output_____ |
course_2/course_material/Part_4_Python/S27_L160/Python 2/Creating a Function with a Parameter - Solution_Py2.ipynb | ###Markdown
Creating a Function with a Parameter *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Define a function that returns a value equal to its argument multiplied by 2.
###Code
def multiplication_by_2(x):
print x * 2
###Output
_____no_output_____
###Markdown
Define a function that returns a float value equal to its argument divided by 2.
###Code
def division_by_2(x):
return float(x) / 2
###Output
_____no_output_____
###Markdown
or:
###Code
def division_by_2(x):
return x / 2.0
# This is new to you - yes, the divisor could be set to be a float, and that would make your output a floating point, too!
###Output
_____no_output_____ |
Chapter03/Exercise3.01/Exercise3.01_Creating_an_active_training_environment.ipynb | ###Markdown
Magic Commands Magic commands (those that start with `%`) are commands that modify a configuration of Jupyter Notebooks. A number of magic commands are available by default (see list [here](http://ipython.readthedocs.io/en/stable/interactive/magics.html))--and many more can be added with extensions. The magic command added in this section allows `matplotlib` to display our plots directly in the browser instead of having to save them on a local file.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Exercise 3.01: Creating an active training environment In this activity we learn to evaluate our LSTM model and to actively train it with new data.
###Code
import math
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
import sys
sys.path.append('scripts')
from tensorflow.keras.models import load_model
from tensorflow.keras.callbacks import TensorBoard
from datetime import datetime, timedelta
from utilities import create_groups, split_lstm_input
plt.style.use('seaborn-white')
###Output
_____no_output_____
###Markdown
Validation Data In Lesson 1, we separated our Bitcoin dataset using a 90/10 split. 90% of the data was used to train `v0` of our LSTM model and 10% was left for evaluation purposes. Let's load these datasets into this session.
###Code
train = pd.read_csv('data/train_dataset.csv')
test = pd.read_csv('data/test_dataset.csv')
len(train), len(test)
train.iso_week.nunique()
test.iso_week.nunique()
###Output
_____no_output_____
###Markdown
Re-train Model with TensorBoard We have trained our model previously using a vanilla Keras implementation. We will now retrain that same model (loaded using `load_model()`) using the same parameters, but adding the `TensorBoard` callback. This allows us to investigate how that model is performing in near-real time.
###Code
def train_model(model, X, Y, epochs=100, version=0, run_number=0):
"""
Shorthand function for training a new model.
This function names each run of the model
using the TensorBoard naming conventions.
Parameters
----------
model: Keras model instance
Compiled Keras model.
X, Y: np.array
Series of observations to be used in
the training process.
epochs: int
The number of epochs to train the
model for.
version: int
Version of the model to run.
run_number: int
The number of the run. Used in case
the same model version is run again.
"""
model_name = 'bitcoin_lstm_v{version}_run_{run_number}'.format(version=version,
run_number=run_number)
tensorboard = TensorBoard(log_dir='logs\\{}'.format(model_name), profile_batch = 100000000)
model_history = model.fit(
x=X, y=Y,
        batch_size=1, epochs=epochs,
verbose=0,
shuffle=False,
callbacks=[tensorboard])
return model_history
train_data = create_groups(
train['close_point_relative_normalization'].values, 7)
test_data = create_groups(
test['close_point_relative_normalization'].values, 7)
X_train, Y_train = split_lstm_input(train_data)
model = load_model('bitcoin_lstm_v0.h5')
X_train.shape, Y_train.shape
model_history = train_model(model=model, X=X_train, Y=Y_train, epochs=10, version=0, run_number=0)
###Output
_____no_output_____
###Markdown
Evaluate LSTM ModelLet's evaluate how our model performed against unseen data. Our model is trained in 76 weeks to predict a following weeks--that is, a sequence of 7 days. When we started this project we divided our original dataset between a test and a validation set. We will now take that originally trained network--containing 76 weeks--and use it to predict all the 19 weeks from the validation set.In order to do that we need a sequence of 76 weeks as the data used for predictions. To get that data in a continuous way, we combine the training and validation sets, then move a 76 window from the beginning of the series until its end - 1. We leave one at the end because that's the final target prediction that we can make.At each one of these iterations, our LSTM model generates a 7-day prediction. We take those predictions and separate them. We then compare the predicted series with all the weeks in the validation set. We do this by computing both MSE and MAPE on that final series.
###Code
combined_set = np.concatenate((train_data, test_data), axis=1)
combined_set.shape
evaluated_weeks = []
for i in range(0, test_data.shape[1]):
input_series = combined_set[0:,i:i+187]
X_test = input_series[0:,:-1].reshape(1, input_series.shape[1] - 1, 7)
Y_test = input_series[0:,-1:][0]
result = model.evaluate(x=X_test, y=Y_test, verbose=0)
evaluated_weeks.append(result)
evaluated_weeks
ax = pd.Series(evaluated_weeks).plot(drawstyle="steps-post",
figsize=(14,4),
linewidth=2,
color='#2c3e50',
grid=True,
title='Mean Squared Error (MSE) for Test Data')
y = [i for i in range(0, len(evaluated_weeks))]
yint = range(min(y), math.ceil(max(y))+1)
plt.xticks(yint)
ax.set_xlabel("Predicted Week")
ax.set_ylabel("MSE")
###Output
_____no_output_____
###Markdown
Interpreting the Model Results MSE is a good loss function for our problem, but its results are difficult to interpret. We use two utility functions to facilitate the interpretation of our results: Root Mean Squared Error (`rmse()`) and Mean Absolute Percentage Error (`mape()`). We will execute these functions for both the observed and predicted series. Make Predictions Now, let's make predictions for every week of the dataset using a similar technique. Instead of calling the `evaluate()` method to compute the network's MSE, we now use the `predict()` method for making future predictions.
###Code
predicted_weeks = []
for i in range(0, test_data.shape[1]):
input_series = combined_set[0:,i:i+186]
predicted_weeks.append(model.predict(input_series))
predicted_days = []
for week in predicted_weeks:
predicted_days += list(week[0])
###Output
_____no_output_____
###Markdown
Let's now create a new Pandas DataFrame with the predicted values. This will help us when plotting and manipulating data.
###Code
train.shape, test.shape
combined = pd.concat([train, test])
last_day = datetime.strptime(train['date'].max(), '%Y-%m-%d')
list_of_days = []
for days in range(1, len(predicted_days) + 1):
D = (last_day + timedelta(days=days)).strftime('%Y-%m-%d')
list_of_days.append(D)
print(len(list_of_days))
predicted = pd.DataFrame({
'date': list_of_days,
'close_point_relative_normalization': predicted_days
})
combined['date'] = combined['date'].apply(
lambda x: datetime.strptime(x, '%Y-%m-%d'))
predicted['date'] = predicted['date'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
predicted.isnull().sum()
observed = combined[combined['date'] > train['date'].max()]
###Output
_____no_output_____
###Markdown
The graph below compares the predicted value versus the real one.
###Code
def plot_two_series(A, B, variable, title):
"""
Plots two series using the same `date` index.
Parameters
----------
A, B: pd.DataFrame
Dataframe with a `date` key and a variable
passed in the `variable` parameter. Parameter A
represents the "Observed" series and B the "Predicted"
series. These will be labelled respectivelly.
variable: str
Variable to use in plot.
title: str
Plot title.
"""
plt.figure(figsize=(14,4))
plt.xlabel('Real and predicted')
ax1 = A.set_index('date')[variable].plot(
linewidth=2, color='#d35400', grid=True, label='Observed', title=title)
ax2 = B.set_index('date')[variable].plot(
linewidth=2, color='grey', grid=True, label='Predicted')
ax1.set_xlabel("Predicted Week")
ax1.set_ylabel("Predicted Values")
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(l1+l2, loc=2)
plt.show()
plot_two_series(observed, predicted,
variable='close_point_relative_normalization',
title='Normalized Predictions per Week')
###Output
_____no_output_____
###Markdown
De-normalized Predictions Let's also compare the predictions with our denormalized series.
###Code
predicted['iso_week'] = predicted.date.dt.year.astype('str') + '-' + predicted.date.dt.week.astype('str')
def denormalize(reference, series,
normalized_variable='close_point_relative_normalization',
denormalized_variable='close'):
"""
Denormalizes the values for a given series.
Parameters
----------
reference: pd.DataFrame
DataFrame to use as reference. This dataframe
contains both a week index and the USD price
reference that we are interested on.
series: pd.DataFrame
DataFrame with the predicted series. The
DataFrame must have the same columns as the
`reference` dataset.
normalized_variable: str, default 'close_point_relative_normalization'
Variable to use in normalization.
denormalized_variable: str, `close`
Variable to use in de-normalization.
Returns
-------
A modified DataFrame with the new variable provided
in `denormalized_variable` parameter.
"""
if('iso_week' in list(series.columns)):
week_values = reference[reference['iso_week'] == series['iso_week'].values[0]]
last_value = week_values[denormalized_variable].values[0]
series[denormalized_variable] = last_value * (series[normalized_variable] + 1)
return series
###Output
_____no_output_____
###Markdown
Let's now plot the predicted timeseries versus the observed one using the `close` price values.
###Code
predicted_close = predicted.groupby('iso_week').apply(lambda x: denormalize(observed, x))
print(len(predicted_close), len(observed))
plot_two_series(observed, predicted_close,
variable='close',
title='De-normalized Predictions per Week')
###Output
_____no_output_____
###Markdown
Calculate RMSE and MAPE After computing both timeseries we want to understand how close the predictions are to the real, observed data. We do this by using a modified version of our loss function (the Root Mean Squared Error instead of the Mean Squared Error) and add the Mean Absolute Percentage Error (MAPE) for readability.
###Code
from utilities import rmse, mape
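# For reference, these helpers typically reduce to something like the sketch below
# (the exact utilities.py implementation may differ):
#   rmse(A, B) = np.sqrt(np.mean((A - B) ** 2))
#   mape(A, B) = np.mean(np.abs((A - B) / A)) * 100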
print('Normalized RMSE: {:.2f}'.format(
rmse(observed['close_point_relative_normalization'][:-3],
predicted_close['close_point_relative_normalization'])))
print('De-normalized RMSE: ${:.1f} USD'.format(
rmse(observed['close'][:-3],
predicted_close['close'])))
print('De-normalized MAPE: {:.1f}%'.format(
mape(observed['close'][:-3],
predicted_close['close'])))
###Output
De-normalized RMSE: $596.6 USD
De-normalized MAPE: 4.7%
|
Lab5/Lab5.ipynb | ###Markdown
Lab 5: Stellar Spectroscopy + Fitting Models to Data If imaging data is the 'bread and butter' of astronomy (see Lab 2), then spectroscopy is the meat and potatoes. In this lab, we will guide you through reading, plotting and fitting spectra of stars in a Milky Way globular cluster. The science goal is to determine the velocity and velocity errors for a handful of stars in order to determine if they are members of the globular cluster, or foreground stars in the Milky Way. The coding goal is to apply both $\chi^2$ fitting and MCMC fitting techniques when the model is more complicated. Goals of this lab: 1. Explore a maintained software package (pypeit). 2. Read a complicated fits file and plot a spectrum. 3. Find parameters and errors via chi2 fitting when the model is not an analytic function. 4. Find parameters and errors via MCMC. 5. Fit polynomials to 2D surfaces and make corner plots. Question 1: Keck DEIMOS data We will be working with data from the Keck Telescope's DEIMOS instrument. All Keck data is publicly available on the Keck Observatory Archive (KOA) website. While we will not be directly reducing the raw data, let's take a look at these files to get a sense for what the data look like. We've selected data from the Milky Way globular cluster NGC 7006. Head to the KOA website (https://koa.ipac.caltech.edu/cgi-bin/KOA/nph-KOAlogin) and search for all files taken with DEIMOS on the night of June 3, 2011 (20110603). Search the list for files with `Target Name == n7006` and `Image or Dispersion == mos`. Find the column named `Quicklook Previews` and click on `[Raw]`. This is a single exposure of a spectroscopic mask centered on NGC 7006. You should see a hundred or so spectra in this image.
###Code
#Include a screen grab here and/or describe in words the image that you see.
###Output
_____no_output_____
###Markdown
Question 2: Spectral Reductions with PypeIt Using the raw files downloaded from the KOA above, we [the A330 instructors] have run the science and calibration frames through a spectral reduction software package called `PypeIt`: https://pypeit.readthedocs.io/en/release/. The `PypeIt` github repository can be found here: https://github.com/pypeit/PypeIt While we won't actually run `PypeIt` in this lab, we will be using its output files. This is a software project that is actively being developed, so let's look around at the code and identify some familiar pieces: On github, take a look in the /pypeit directory and click on a few of the *.py files. 1. Find one instance of PypeIt using a Class structure. 2. Find one instance of PypeIt not fully/properly populating a doc string :) 3. Find a line of code that you understand and explain what it's doing. 4. Find a line of code that you don't understand. 5. How many branches currently exist from the main `release` branch?
###Code
# Answers to 5 items above.
###Output
_____no_output_____
###Markdown
In the data access directory, we have provided a PypeIt output file which contains one-dimensional spectra for all the stars observed in the DEIMOS mask `n7006a` that you viewed above. Read in the file using the astropy.io.fits commands and view the contents using `hdu.info()`. State how many spectra are contained in this file.
###Code
from astropy.io import fits
file = 'spec1d_DE.20110603.45055-n7006a_DEIMOS_2011Jun03T123053.021.fits'
# Code to view file contents
# How many spectra are contained in this file?
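# A minimal sketch (one possible approach), assuming each 1D spectrum lives in its
# own extension after the primary HDU -- check hdu.info() to confirm for your file.
with fits.open(file) as hdu:
    hdu.info()
    print('Number of extensions (spectra):', len(hdu) - 1)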
###Output
_____no_output_____
###Markdown
Question 3: Plotting 1D PypeIt output spectra and fitting by eye We have selected 3 spectra from this file which are high signal-to-noise stars. From your fits table that you have read in, select extensions 121, 135 and 157. These can also be selected using the names 'SPAT0564-SLIT0560-DET06', 'SPAT1163-SLIT1162-DET06' and 'SPAT0288-SLIT0302-DET07'. Save the data for each spectrum separately. Plot wavelength versus counts/flux for each star. Please use the optimal extraction results ('OPT_*'). If you need additional guidance for what is in this file, see the PypeIt documentation on spec1d_* files: https://pypeit.readthedocs.io/en/release/out_spec1D.html
###Code
# For each of these three stars, plot the wavelength versus counts. Use xlim = 8300-8800 Angstrom
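# A minimal sketch, assuming the PypeIt spec1d optimal-extraction column names
# OPT_WAVE / OPT_COUNTS (check the table columns in your file if these differ).
names = ['SPAT0564-SLIT0560-DET06', 'SPAT1163-SLIT1162-DET06', 'SPAT0288-SLIT0302-DET07']
with fits.open(file) as hdu:
    for name in names:
        data = hdu[name].data
        plt.figure(figsize=(10, 3))
        plt.plot(data['OPT_WAVE'], data['OPT_COUNTS'], lw=0.7)
        plt.xlim(8300, 8800)
        plt.xlabel('Wavelength [Angstrom]')
        plt.ylabel('Counts')
        plt.title(name)
        plt.show()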
###Output
_____no_output_____
###Markdown
Extra (+0.5) To get a sense for the velocity of each star, you might try measuring a rough velocity 'by eye'. The three strongest lines in the spectra above are from Calcium II: 8500.36, 8544.44, 8664.52 Angstrom. What velocity do you estimate? Question 4: Synthetic model spectra In ASTR 255 and the extra question above, you have measured the velocity of a star by measuring the center of a known absorption line (either by eye or fitting a Gaussian) and comparing to its rest wavelength. While this process does estimate the star's velocity, it wastes much of the information present in the full spectrum. To determine more accurate velocities, we turn to "template fitting" where a spectrum with a known velocity is compared to our unknown science spectrum. A template spectrum can either be empirical (an observed spectrum of a standard star whose velocity is already known) or synthetic (numerically computed from stellar models). Here we will use synthetic templates from the PHOENIX library: https://phoenix.astro.physik.uni-goettingen.de/
###Code
template_file = 'dmost_lte_5000_3.0_-2.0_.fits'
def read_synthetic_spectrum(pfile):
'''
Function to load synthetic template file into python using vacuum wavelengths
Parameters
----------
pfile: str
        path to the synthetic fits file to load.
Returns
-------
pwave: float array
Wavelengths of synthetic spectrum
pflux: float array
        Flux of synthetic spectrum
'''
with fits.open(pfile) as hdu:
data = hdu[1].data
pflux = np.array(data['flux']).flatten()
awave = np.exp((data['wave']).flatten())
# CONVERTING AIR WAVELENGTHS TO VACUUM
s = 10**4 / awave
n = 1. + 0.00008336624212083 + \
(0.02408926869968 / (130.1065924522 - s**2)) +\
(0.0001599740894897 / (38.92568793293 - s**2))
pwave = awave*n
return pwave, pflux
# Read in synthetic spectra and plot wavelegth versus flux
###Output
_____no_output_____
###Markdown
Question 5: Synthetic model spectra -- Smoothing and Continuum fitting We will fit the synthetic spectrum to our science data with the goal of determining the velocity of our science spectrum. The synthetic spectrum is at zero velocity. To match the science data, we will need to (1) smooth the synthetic spectrum to the wavelength resolution of the science, (2) shift the synthetic spectrum to the velocity of the science data, and (3) rebin the synthetic spectrum and match continuum levels. Smoothing the templates We will first address how to smooth the synthetic spectrum to match the data. We will fit for this value below, but for the moment, let's just choose a number based on a single point estimate. The DEIMOS spectral lines are well fit by a Gaussian with a 1-$\sigma$ line width that is roughly 0.5 Angstrom. The synthetic spectra have a resolution of 0.02 Angstrom. Thus, we need to smooth the synthetic spectra with a Gaussian kernel that is 0.5/0.02 = 25 pixels. Hint: scipy has functions which do Gaussian filtering in 1D.
###Code
# Write a function to Gaussian smooth the synthetic spectrum, using a smoothing kernel of 25 pixels.
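# A minimal sketch using scipy's 1D Gaussian filter (one possible approach):
from scipy.ndimage import gaussian_filter1d

def smooth_synthetic(pflux, kernel_sigma=25):
    """Smooth the synthetic flux with a Gaussian kernel of width kernel_sigma pixels."""
    return gaussian_filter1d(pflux, kernel_sigma)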
###Output
_____no_output_____
###Markdown
Fitting the Continuum We will next address the above step (3), the overall shape and value of the spectrum which we will call the 'continuum'. Let's fit a function to the synthetic spectrum so that it is approximately the same as the science spectrum. For the section of spectrum we are working with, a **linear function** (i.e., like we fit in lab 4) is sufficient. To do this, we will first rebin the synthetic spectrum in wavelength to the same array as the data. Choose a science spectrum from above and rebin the synthetic template so that it uses the same wavelength array (consider using `np.interp()`). We need this to carry out point by point fits and comparisons between arrays. Next, determine the **linear function** (mx+b) needed to match the continuum of the synthetic spectrum to that of the science. If you wish, you may also try a second order polynomial, if you think it is a better fit.```{tip}If you just try to fit the spectrum, the absorption lines and other features will "drag" your fit away from the level of the continuum. This is easy to see by eye. There are a few ways around this. First, we could mask regions of deep absorption. Second, we could run something like `np.percentile()` on the spectrum, and remove all points farther than, say, 1 or 2-sigma from the median when doing the fit. Third, we could do an iterative fit (see the extra below). For this problem, we'll allow you to estimate the linear fit to the continuum by eye, or by any of the methods above. ```
###Code
# Write a function to rebin the synthetic template to the data wavelength array and fit the continuum.
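# A minimal sketch, assuming a linear (mx + b) continuum correction; wave/counts are
# the science arrays (hypothetical names) and pwave/pflux_smooth the smoothed
# synthetic arrays from the sketch above.
def rebin_and_match(wave, counts, pwave, pflux_smooth):
    # rebin the synthetic spectrum onto the science wavelength grid
    synth = np.interp(wave, pwave, pflux_smooth)
    # a linear fit to the science/synthetic ratio approximates the continuum scaling
    m, b = np.polyfit(wave, counts / synth, 1)
    return synth * (m * wave + b)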
###Output
_____no_output_____
###Markdown
OK, now run both functions (your smoothing function and your rebin/continuum function) on the synthetic spectrum and plot the results.
###Code
# Run both functions (smooth + rebin/continuum) and plot your smoothed, continuum-normalized synthetic spectrum
# Compare this to one of your science spectra.
###Output
_____no_output_____
###Markdown
Extra (1.0) When fitting continua, we usually want to avoid the "features" in the spectrum. We could mask them out, or drop percentiles of the data far from the median... but we could also iteratively remove them. To do this, you would fit your chosen function to the data as a start, then iterate, throwing out 3 (or 5 or whatever works) sigma distant points and re-fitting. This works because emission and absorption lines have data points far from the continuum value. Try fitting your continuum this way to get a better estimate.
###Code
# iterative fit method
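# A minimal sketch of an iterative, sigma-clipped linear fit; wave and counts are
# the assumed science arrays from above.
def iterative_continuum_fit(wave, counts, deg=1, nsigma=3.0, n_iter=5):
    mask = np.ones(len(wave), dtype=bool)
    for _ in range(n_iter):
        coeffs = np.polyfit(wave[mask], counts[mask], deg)
        resid = counts - np.polyval(coeffs, wave)
        mask = np.abs(resid) < nsigma * np.std(resid[mask])   # drop points in deep lines
    return coeffs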
###Output
_____no_output_____
###Markdown
Question 6: $\chi^2$ fitting to find velocity The science and synthetic spectra above should roughly match-- except for an unknown velocity shift. You can shift the synthetic template in velocity by changing its wavelength array *before smoothing*. Recall that $\delta \lambda = \lambda * v/c$. Write a $\chi^2$ code to find the best-fit velocity for each of the three stars above. Look up the velocity of the globular cluster NGC 7006 to justify the range of velocities to search over. Consider the wavelength resolution of your science data to determine the spacing of your grid.
###Code
# Write a chi2 algorithm to determine the best fitting velocity and error.
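# A minimal sketch of a brute-force grid search (one possible approach).  The names
# wave, counts, sigma (science wavelength, counts and uncertainty arrays) and the
# helper functions smooth_synthetic / rebin_and_match from the sketches above are
# assumptions -- substitute your own versions.
c_kms = 2.998e5                                   # speed of light in km/s
velocities = np.arange(-500.0, 200.0, 1.0)        # km/s; wide enough to bracket NGC 7006
pflux_smooth = smooth_synthetic(pflux, 25)        # smoothing held fixed for this question
chi2_values = np.zeros(len(velocities))
for i, v in enumerate(velocities):
    shifted_wave = pwave * (1.0 + v / c_kms)      # Doppler-shift the template wavelengths
    model_flux = rebin_and_match(wave, counts, shifted_wave, pflux_smooth)
    chi2_values[i] = np.sum((counts - model_flux) ** 2 / sigma ** 2)
v_best = velocities[np.argmin(chi2_values)]
good = chi2_values < chi2_values.min() + 1.0      # 1-sigma region: delta chi2 < 1
v_err = 0.5 * (velocities[good].max() - velocities[good].min())
print('v = {:.1f} +/- {:.1f} km/s'.format(v_best, v_err))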
###Output
_____no_output_____
###Markdown
Question 7: $\chi^2$ fitting with more parameters In Question 6, we fixed the smoothing value to 25 pixels and used a single function to match the synthetic to science continuum. Next, let's redo $\chi^2$, but now including these values in the fit. This will be a 2-parameter chi2 fit.
###Code
# Repeat $chi^2$ fitting searching over 2 (and bonus 4) parameters:
# velocity, smoothing, and (bonus) continuum value (m,b)
# If you use 4 parameters, this will get ugly.
#
# Calculate errors from your chi2 contours on the velocity only.
###Output
_____no_output_____
###Markdown
Question 8: MCMC to find velocity Repeat Question 7 but this time fitting with MCMC. We suggest writing a single function `make_model` which creates a single synthetic model spectrum given an input velocity and smoothing. Report your best-fit velocity and errors. You can choose to fit 2 parameters (velocity and smoothing), or as a bonus all 4 parameters (velocity, smoothing and continuum fit values).
###Code
# MCMC to find velocity only. Report your best fit velocity and errors.
# Plot full corner plots for all fitted parameters.
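# A minimal sketch using the emcee and corner packages (an assumption -- any MCMC
# sampler will do).  make_model(v, smooth) is the hypothetical helper suggested in
# the question text, returning the model flux on the science wavelength grid; wave,
# counts and sigma are the science arrays assumed in the earlier sketches.
import emcee
import corner

def log_prob(theta):
    v, smooth = theta
    if not (-600.0 < v < 300.0 and 1.0 < smooth < 100.0):
        return -np.inf                            # flat (uniform) priors
    model_flux = make_model(v, smooth)
    return -0.5 * np.sum((counts - model_flux) ** 2 / sigma ** 2)

ndim, nwalkers = 2, 32
p0 = np.array([-380.0, 25.0]) + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=True)
flat = sampler.get_chain(discard=500, flat=True)  # drop burn-in samples
lo, med, hi = np.percentile(flat[:, 0], [16, 50, 84])
print('v = {:.1f} +{:.1f}/-{:.1f} km/s'.format(med, hi - med, med - lo))
corner.corner(flat, labels=['v [km/s]', 'smoothing [pix]'])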
###Output
_____no_output_____
###Markdown
```{note}In the context of MCMC, you'll often hear people talk about "marginalization". This is a classic example. Marginalization is the process of fitting for parameters we care about, plus "nuisance parameters" that we don't (like the smoothing and continuum values), and then "marginalizing out" the nuisance parameters by taking the 1D posterior spread only of the parameter of interest.``` Question 9: MCMC convergence Confirm that your MCMC above converged and that you are discarding the appropriate number of samples when determining your parameters (that is the burnin number).
###Code
# Confirm convergence
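# A minimal sketch, assuming the emcee sampler from the previous sketch.
tau = sampler.get_autocorr_time(tol=0)            # autocorrelation time per parameter
print('autocorrelation times:', tau)
print('chain length / tau:', sampler.iteration / tau)
# Trace plot for the velocity chain: the chain should look stationary after the
# discarded burn-in, and be many (~50) autocorrelation times long.
chain = sampler.get_chain()
plt.plot(chain[:, :, 0], color='k', alpha=0.3)
plt.xlabel('step')
plt.ylabel('v [km/s]')
plt.show()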
###Output
_____no_output_____
###Markdown
Question 10: Science And finally, some science questions: 1. Do velocities agree between chi2 and MCMC within error? 2. Are the velocity errors the same? 3. Are these three stars part of NGC 7006?
###Code
# Answers to the 3 questions above
###Output
_____no_output_____
###Markdown
HMM Experiments
###Code
# Read annotated corpora with NLTK
# first download data
import nltk
from includes import *
#nltk.download()
# it will open a GUI and you have to double click in "all" to download
# this will download different types of annotated corpora
###Output
_____no_output_____
###Markdown
Here is the complete **PTB** data
###Code
# inspect PoS from Treebank
# we use the universal tagset
treebank_sents = nltk.corpus.treebank.tagged_sents(tagset='universal')
###Output
_____no_output_____
###Markdown
Now we load the ids of sentences corresponding to training, development, and test sets
###Code
training_ids = [int(i) for i in open('training.ids') if i.strip()]
development_ids = [int(i) for i in open('development.ids') if i.strip()]
test_ids = [int(i) for i in open('test.ids') if i.strip()]
print(len(training_ids), len(development_ids), len(test_ids))
###Output
3000 114 800
###Markdown
Now we separate the 3 parts of the annotation
###Code
ptb_train = [treebank_sents[i] for i in training_ids]
ptb_dev = [treebank_sents[i] for i in development_ids]
ptb_test = [treebank_sents[i] for i in test_ids]
###Output
_____no_output_____
###Markdown
1. We *suggest* you print details about the data such as number of unique words/tags, total tokens, number of sentences, etc.2. Then you can copy here your implementation from lab4 or import it from a separate python file if you want (but make sure to submit those too)3. Then you can go on with training/development and test The following are just **tips** (you need not necessarily follow them).
###Code
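# A minimal sketch of the suggested corpus statistics (one possible way to inspect the splits):
def corpus_stats(corpus, name):
    words = [w for sent in corpus for (w, t) in sent]
    tags = [t for sent in corpus for (w, t) in sent]
    print('{}: {} sentences, {} tokens, {} unique words, {} unique tags'.format(
        name, len(corpus), len(words), len(set(words)), len(set(tags))))

for split_name, split in [('train', ptb_train), ('dev', ptb_dev), ('test', ptb_test)]:
    corpus_stats(split, split_name)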
def extract_sentences(treebank_corpus):
sentences = []
for observations in treebank_corpus:
sentences.append([x for x, c in observations])
return sentences
def accuracy(gold_sequences, pred_sequences):
"""
Return percentage of instances in the test data that our tagger labeled correctly.
:param gold_sequences: a list of tag sequences that can be assumed to be correct
:param pred_sequences: a list of tag sequences predicted by Viterbi
"""
count_correct, count_total = 0, 0
for i, combined in enumerate(zip(pred_sequences, gold_sequences)):
for p, g in list(zip(*combined)):
if p == g:
count_correct += 1
count_total += 1
if count_total:
return count_correct / count_total
return None
def predict_corpus(test_set, hmm):
"""
Returns viterbi predictions for all sentences in a given corpus
:param test_set: A corpus of tagged sentences
:param hmm : A language model
"""
gold_sequences, pred_sequences = list(), list()
print('Making predictions', end='')
for i, sequence in enumerate(test_set):
if i % round(len(test_set) / 10) == 0:
print('.', end='')
sentence , tags = map(list, zip(*sequence))
viterbi_tags, _ = viterbi_recursion(sentence, hmm)
gold_sequences.append(tags)
pred_sequences.append(viterbi_tags)
print()
return gold_sequences, pred_sequences
# Grid search for hyperparameters
models = []
for alpha in [0.01, 0.1, 1., 10.]:
for beta in [0.01, 0.1, 1., 10.]:
print("a, b:", alpha, beta)
hmm = HMMLM(alpha, beta)
hmm.estimate_model(ptb_train)
models.append(hmm)
sents = extract_sentences(ptb_dev)
results = []
for model in models:
a = model._transition_alpha
b = model._emission_alpha
ppl = log_perplexity(sents, model)
acc = accuracy(*predict_corpus(ptb_dev, model))
results.append([a, b, ppl, acc])
headers = ['alpha', 'beta', 'ppl', 'acc']
print()
print(tabulate(results, headers=headers))
# Best test set model 13 with alpha 10 and beta 0.1
# Both perplexity and accuracy are best at this model
print(results[13])
# Now use test set to calc perplexity and accuracy
ppl = log_perplexity(extract_sentences(ptb_test), models[13])
acc = accuracy(*predict_corpus(ptb_test, models[13]))
print(round(ppl,3), round(acc,3))
###Output
[10.0, 0.1, 4.408709283788377, 0.881201044386423]
Making predictions..........4.41 0.886
###Markdown
You can also use *tabulate* to print some examples. For example:
###Code
def tabulate_example(sentence, gold_pos, pred_pos):
illustration = []
for w, g, p in zip(sentence, gold_pos, pred_pos):
illustration.append([w, g, p])
return illustration
for i in range(4):
sentence, gold = map(list, zip(*ptb_test[i]))
pred, _ = viterbi_recursion(sentence, models[13])
illustration = tabulate_example(sentence, gold, pred)
print()
print(tabulate(illustration, headers=['Word', 'Gold', 'Pred']))
print("accuracy:",round(accuracy([gold], [pred]),4))
###Output
Word Gold Pred
--------------- ------ ------
But CONJ ADJ
Rep. NOUN NOUN
Marge NOUN ADP
Roukema NOUN NOUN
-LRB- . .
R. NOUN NOUN
, . .
N.J NOUN NOUN
. . .
-RRB- . .
instead ADV ADV
praised VERB ADP
the DET DET
House NOUN NOUN
's PRT PRT
acceptance NOUN NOUN
of ADP ADP
a DET DET
new ADJ ADJ
youth NOUN NOUN
`` . .
training NOUN NOUN
'' . .
wage NOUN NOUN
, . .
a DET DET
subminimum NOUN NOUN
that ADP ADP
GOP NOUN NOUN
administrations NOUN PRT
have VERB VERB
sought VERB VERB
*T*-1 X X
for ADP ADP
many ADJ ADJ
years NOUN NOUN
. . .
accuracy: 0.8919
Word Gold Pred
---------- ------ ------
Reserves NOUN PRON
traded VERB VERB
* X X
among ADP ADP
commercial ADJ ADJ
banks NOUN NOUN
for ADP ADP
overnight ADJ DET
use NOUN NOUN
in ADP ADP
amounts NOUN NOUN
of ADP ADP
$ . .
1 NUM NUM
million NUM NUM
or CONJ CONJ
more ADJ ADJ
*U* X X
. . .
accuracy: 0.8947
Word Gold Pred
--------- ------ ------
But CONJ PRON
advancing VERB VERB
issues NOUN NOUN
on ADP ADP
the DET DET
New NOUN ADJ
York NOUN NOUN
Stock NOUN NOUN
Exchange NOUN NOUN
were VERB VERB
tidily ADV ADV
ahead ADV ADV
of ADP ADP
declining VERB DET
stocks NOUN NOUN
, . .
847 NUM NUM
to PRT PRT
644 NUM NUM
. . .
accuracy: 0.85
Word Gold Pred
---------- ------ ------
They PRON PRON
are VERB VERB
n't ADV ADV
accepted VERB VERB
*-1 X X
everywhere ADV ADV
, . .
however ADV ADV
. . .
accuracy: 1.0
###Markdown
Homework lab assignment No. 6 in computational mathematics. Andrei Derzhavin, group B01-909. Problem __VII.9.5(a)__
###Code
import numpy as np
x = np.array([n * 0.25 for n in range(9)])
f = np.sin(x[1:]) / x[1:]
f = np.array([1, *f])
def trap(x, f) -> float:
assert len(x) == len(f), "Incompatible lengths"
I = 0
for i in range(len(x) - 1):
I += (f[i] + f[i + 1]) / 2 * (x[i + 1] - x[i])
return I
def simp(x, f) -> float:
assert len(x) == len(f), "Incompatible lengths"
I = 0
h = x[1] - x[0]
for i, fi in enumerate(f):
if i == 0 or i == len(f) - 1:
I += fi
elif i % 2 == 0:
I += 2 * fi
else:
I += 4 * fi
I *= h / 3
return I
I_h = trap(x, f)
I_h2 = trap(x[::2], f[::2])
I_R = (4 * I_h - I_h2) / 3
I_simp = simp(x, f)
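# Note: the Richardson step above combines the trapezoidal results for steps h and 2h,
# I_R = (4*I_h - I_2h) / 3, which cancels the O(h^2) error term; this is exactly why I_R
# agrees with Simpson's rule to near machine precision in the output below.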
print(f'Trapezoidal rule I_h = {I_h}')
print(f'Trapezoidal rule with doubled step I_2h = {I_h2}')
print(f'Trapezoidal rule with Richardson extrapolation I_R = {I_R}')
print(f"Simpson's rule I_S = {I_simp}")
###Output
Trapezoidal rule I_h = 1.603143993230099
Trapezoidal rule with doubled step I_2h = 1.5963215382293798
Trapezoidal rule with Richardson extrapolation I_R = 1.6054181448970055
Simpson's rule I_S = 1.6054181448970053
###Markdown
| column name | description |
|-------------|-------------|
| pt | transverse momentum relative to beam axis |
| eta | pseudorapidity given by $\eta = -\ln(\tan(\theta/2))$ where $\theta$ is the polar angle measured with respect to the z-axis [ATLAS Collaboration (2013)](https://arxiv.org/abs/1206.5369) |
| phi | azimuthal angle measured around the beam axis |
| mass | jet mass |
| ee2, ee3 | energy correlation functions 2 and 3, used in identifying the hadronic decays of boosted Z bosons [Larkowski, A. J. 2014](https://doi.org/10.1007/JHEP12(2014)009) |
| d2 | discriminating variable to identify boosted two-prong jets given by $D^{(\beta)}_2 = \frac{e^{(\beta)}_3}{(e^{(\beta)}_2)^3}$ [Larkowski, A. J. (2014)](https://doi.org/10.1007/JHEP12(2014)009) |
| angularity | has characteristic distribution for two-body decays |
| t1 | $\tau_1$; 1-subjettiness |
| t2 | $\tau_2$; 2-subjettiness |
| t3 | $\tau_3$; 3-subjettiness |
| t21 | $\tau_2/\tau_1$ |
| t32 | $\tau_3/\tau_2$ |
| KtDeltaR | $\Delta R$ of two subjets within the large-R jet [Eur. Phys. J. C 79 (2019) 836](https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/PERF-2017-04/) |
###Code
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as stats
higgs = pd.read_pickle('higgs_100000_pt_1000_1200.pkl')
qcd = pd.read_pickle('qcd_100000_pt_1000_1200.pkl')
qcd.head(5)
qcd.keys()
keys = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1',
't2', 't3', 't21', 't32', 'KtDeltaR']
fig, ax = plt.subplots(14, 1, figsize = (10,10*14))
for i in range(len(keys)):
hist1 = ax[i].hist(qcd[keys[i]], label = 'QCD', histtype = 'step')
hist2 = ax[i].hist(higgs[keys[i]], bins = hist1[1],label = 'Higgs', histtype = 'step')
ax[i].semilogy()
ax[i].legend()
ax[i].set_xlabel(keys[i])
ax[i].set_ylabel('Particle Count')
###Output
_____no_output_____
###Markdown
Not all features provide discrimination power between signal and background. The distributions of pt, d2, t2, t3, t21, and KtDeltaR for QCD and Higgs are different, so it is possible to discriminate between signal and noise. There are some correlations among the features. For example, d2 is calculated from ee2 and ee3, and t21 and t32 are calculated from t1, t2, and t3. Next I normalize the data with the expected yields and plot it in a stacked bar plot.
###Code
# Expected Yields:
# N_higgs = 50
# N_qcd = 2000
normalization_higgs = 50/100000
normalization_qcd = 2000/100000
#print(normalization_higgs, normalization_qcd)
keys = ['pt', 'eta', 'phi', 'mass', 'ee2', 'ee3', 'd2', 'angularity', 't1',
't2', 't3', 't21', 't32', 'KtDeltaR']
fig, ax = plt.subplots(14, 1, figsize = (10,10*14))
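# Note: plt.hist is used below only to obtain bin counts and edges for the stacked bars;
# np.histogram would give the same numbers without drawing intermediate histograms into
# the current axes.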
for i in range(len(keys)):
hist_qcd = plt.hist(qcd[keys[i]])
normalized_hist_qcd = hist_qcd[0]*normalization_qcd
hist_higgs = plt.hist(higgs[keys[i]], bins = hist_qcd[1])
normalized_hist_higgs = hist_higgs[0]*normalization_higgs
width = hist_qcd[1][1]-hist_qcd[1][0]
ax[i].bar(hist_qcd[1][:10], normalized_hist_qcd, width = width, label = 'QCD')
ax[i].bar(hist_higgs[1][:10], normalized_hist_higgs, width = width, bottom = normalized_hist_qcd, label = 'Higgs')
ax[i].semilogy()
ax[i].legend()
ax[i].set_xlabel(keys[i])
ax[i].set_ylabel('Particle Count')
plt.show()
###Output
_____no_output_____
###Markdown
I have no idea why the KtDeltaR plot is doing that. It gets really hard to differentiate background from signal. I think if the background is well known, we can use pt and phi. To model the background, we can use a Poisson distribution whose mean is the expected number of QCD events. Then the signal would be N_qcd + N_higgs. The signal strength would have to be 2228 or above.
###Code
n_qcd = 2000
prob5sigma = 1/3.5e6
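# 1/3.5e6 is roughly the one-sided 5-sigma tail probability (~2.87e-7); isf returns the
# smallest count whose survival probability under Poisson(n_qcd) drops below it, i.e. the
# observed yield needed to claim a 5-sigma excess over the QCD-only expectation.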
signal = stats.poisson.isf(prob5sigma, n_qcd)
print(signal)
###Output
2228.0
|
tests/fixtures/set_columns_equal_test_case_generation.ipynb | ###Markdown
Linear (no constraint)
###Code
Xlt = np.tile(np.linspace(-0.5, 1.3, T), (p, 1)).T
y1 = X1 + Xlt
y1[~use_set] = np.nan
with sns.axes_style('white'):
plt.imshow(y1, aspect='auto', interpolation='none', cmap='plasma')
plt.colorbar()
plt.show()
plt.plot(y1)
plt.show()
c1 = [
MeanSquareSmall(size=T*p),
make_columns_equal(LinearTrend)
]
p1 = Problem(y1, c1)
p1.decompose(how='cvx', reset=True)
p1.objective_value
p1.problem.value
plt.plot(p1.estimates[1])
plt.plot(Xlt)
p1.decompose(how='admm', reset=True)
p1.objective_value
plt.plot(p1.estimates[1])
plt.plot(Xlt)
p1.decompose(how='bcd', reset=True)
p1.objective_value
plt.plot(p1.estimates[1])
plt.plot(Xlt)
###Output
_____no_output_____
###Markdown
Linear with first value constraint
###Code
y1 = X1 + Xlt
y1[~use_set] = np.nan
with sns.axes_style('white'):
plt.imshow(y1, aspect='auto', interpolation='none', cmap='plasma')
plt.colorbar()
plt.show()
plt.plot(y1)
plt.show()
c1 = [
MeanSquareSmall(size=T*p),
make_columns_equal(LinearTrend)(first_val=-0.5)
]
p1 = Problem(y1, c1)
p1.decompose(how='cvx', reset=True)
p1.objective_value
p1.problem.value
plt.plot(p1.estimates[1])
plt.plot(Xlt)
p1.decompose(how='admm', reset=True)
p1.objective_value
plt.plot(p1.estimates[1])
plt.plot(Xlt)
p1.decompose(how='bcd', reset=True)
p1.objective_value
plt.plot(p1.estimates[1])
plt.plot(Xlt)
###Output
_____no_output_____
###Markdown
Asymmetric noise
###Code
from scipy.stats import laplace_asymmetric
kappa = 2
np.random.seed(110100100)
al = laplace_asymmetric.rvs(kappa, size=T)
Xan = np.tile(al, (p, 1)).T
plt.hist(al, bins=15);
y2 = X1 + Xan
y2[~use_set] = np.nan
with sns.axes_style('white'):
plt.imshow(y2, aspect='auto', interpolation='none', cmap='plasma')
plt.colorbar()
plt.show()
plt.plot(y2)
plt.show()
c2 = [
MeanSquareSmall(size=T*p),
make_columns_equal(AsymmetricNoise)(weight=1/(T*p), tau=0.8)
]
p2 = Problem(y2, c2)
p2.decompose(how='cvx', reset=True)
p2.objective_value
p2.problem.objective.value
plt.hist(p2.estimates[1,:, 0], bins=15);
p2.decompose(how='admm', reset=True)
p2.objective_value
plt.figure()
plt.plot(p2.admm_result['obj_vals'], label='residual')
plt.axvline(p2.admm_result['it'], color='red', ls='--')
# plt.yscale('log')
plt.legend()
plt.title('objective value')
plt.show()
plt.hist(p2.estimates[1,:, 0], bins=15);
p2.decompose(how='bcd', reset=True)
p2.objective_value
plt.hist(p2.estimates[1,:, 0], bins=15);
import cvxpy as cvx
x = cvx.Variable(y2.shape)
cost = p2.components[-1].cost(x) + 1/2 * cvx.sum_squares(x[use_set] - y2[use_set])
constraints = [cvx.diff(x, axis=1) == 0]
cvx_prox = cvx.Problem(cvx.Minimize(cost), constraints)
cvx_prox.solve()
plt.plot(x.value[:, 0])
plt.plot(p2.components[-1].prox_op(y2, 1, 1, use_set=use_set)[:, 0])
###Output
_____no_output_____
###Markdown
Constant chunks
###Code
np.random.seed(110100100)
cs = 17
v = np.random.uniform(-1, 1, T // 7 + 1)
z = np.tile(v, (7, 1))
z = z.ravel(order='F')
z = z[:100]
Xch = np.tile(z, (p, 1)).T
plt.plot(z)
y3 = X1 + Xch
y3[~use_set] = np.nan
with sns.axes_style('white'):
plt.imshow(y3, aspect='auto', interpolation='none', cmap='plasma')
plt.colorbar()
plt.show()
plt.plot(y3)
plt.show()
c3 = [
MeanSquareSmall(size=T*p),
make_columns_equal(ConstantChunks)(length=7)
]
p3 = Problem(y3, c3)
p3.decompose(how='cvx', reset=True)
plt.plot(z)
plt.plot(p3.estimates[1])
p3.objective_value
p3.decompose(how='admm', reset=True)
p3.objective_value
plt.plot(z)
plt.plot(p3.estimates[1])
p3.decompose(how='bcd', reset=True)
p3.objective_value
plt.plot(z)
plt.plot(p3.estimates[1])
x = cvx.Variable(y3.shape)
cost = p3.components[-1].cost(x) + 1/2 * cvx.sum_squares(x[use_set] - y3[use_set])
constraints = [cvx.diff(x, axis=1) == 0]
constraints.extend(p3.components[-1].internal_constraints(x, T, p))
cvx_prox = cvx.Problem(cvx.Minimize(cost), constraints)
cvx_prox.solve()
plt.plot(x.value[:, 0])
plt.plot(p3.components[-1].prox_op(y3, 1, 1, use_set=use_set)[:, 0])
###Output
_____no_output_____ |
notebooks/ProLoaF applied and explained - Day-Ahead Forcasts on open data of the German load.ipynb | ###Markdown
Using ProLoaF for hourly predictions of the German power consumption of the open power system data platform 1) Data Selection, Preprocessing, and Configuration of Forecasting Model First, we need to make sure that we either use local or external data (preferably in csv or xlsx format). We make ProLoaF aware of the data path and of some details that need to be taken care of during pandas dataframe reading through our preprocessing config. For more info on the config, check out the [configuration-helper](https://acs.pages.rwth-aachen.de/public/automation/plf/proloaf/docs/files-and-scripts/config/)> **NOTE:** If you start your forecasting task with all features already integrated in one csv file, you can skip the whole preprocessing step. We now use preprocess.py to do the job of importing, interpolating, aligning the timesteps, taking the minimum joint period of input data, and generating one-hot encoding and cyclical encoding of calendric data based on the timestamp. More info on the interpolation techniques is given in [this notebook-section](./Step-by-step%20user%20guide%20on%20train.py.ipynb1.-Dealing-with-missing-values-in-the-data).
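To make the calendric-encoding step more concrete, here is a minimal sketch of cyclical encoding for a periodic feature such as the hour of day (illustration only: the actual encoding is performed inside preprocess.py, and the function name below is hypothetical):
```python
import numpy as np

def encode_cyclical(df, col, period):
    # map a periodic feature onto the unit circle so that, e.g., hour 23 stays close to hour 0
    df[col + '_sin'] = np.sin(2 * np.pi * df[col] / period)
    df[col + '_cos'] = np.cos(2 * np.pi * df[col] / period)
    return df

# df = encode_cyclical(df, 'hour', 24)
```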
###Code
!python3 ../source/preprocess.py -s opsd_24
###Output
Importing CSV Data...
Importing time_series/2020-10-06/time_series_60min_singleindex.csv ...
...Importing finished.
Importing weather_data/2020-09-16/weather_data.csv ...
...Importing finished.
Some values are NaN. They are being filled...
...interpolation finished! No missing data left.
AT_load_actual_entsoe_transparency DE_load_actual_entsoe_transparency
Time
2014-12-31 23:00:00 5946.0 41151.0
2015-01-01 00:00:00 5946.0 41151.0
2015-01-01 01:00:00 5726.0 40135.0
2015-01-01 02:00:00 5347.0 39106.0
2015-01-01 03:00:00 5249.0 38765.0
... ... ...
2020-09-30 19:00:00 6661.0 57559.0
2020-09-30 20:00:00 6336.0 54108.0
2020-09-30 21:00:00 5932.0 49845.0
2020-09-30 22:00:00 5628.0 46886.0
2020-09-30 23:00:00 5395.0 45461.0
[50401 rows x 2 columns]
No missing data
AT_temperature ... DE_radiation_diffuse_horizontal
Time ...
1980-01-01 00:00:00 -3.640 ... 0.0
1980-01-01 01:00:00 -3.803 ... 0.0
1980-01-01 02:00:00 -3.969 ... 0.0
1980-01-01 03:00:00 -4.076 ... 0.0
1980-01-01 04:00:00 -4.248 ... 0.0
... ... ... ...
2019-12-31 19:00:00 -1.386 ... 0.0
2019-12-31 20:00:00 -1.661 ... 0.0
2019-12-31 21:00:00 -1.986 ... 0.0
2019-12-31 22:00:00 -2.184 ... 0.0
2019-12-31 23:00:00 -2.271 ... 0.0
[350640 rows x 6 columns]
[ AT_load_actual_entsoe_transparency DE_load_actual_entsoe_transparency
Time
2014-12-31 23:00:00 5946.0 41151.0
2015-01-01 00:00:00 5946.0 41151.0
2015-01-01 01:00:00 5726.0 40135.0
2015-01-01 02:00:00 5347.0 39106.0
2015-01-01 03:00:00 5249.0 38765.0
... ... ...
2020-09-30 19:00:00 6661.0 57559.0
2020-09-30 20:00:00 6336.0 54108.0
2020-09-30 21:00:00 5932.0 49845.0
2020-09-30 22:00:00 5628.0 46886.0
2020-09-30 23:00:00 5395.0 45461.0
[50401 rows x 2 columns], AT_temperature ... DE_radiation_diffuse_horizontal
Time ...
1980-01-01 00:00:00 -3.640 ... 0.0
1980-01-01 01:00:00 -3.803 ... 0.0
1980-01-01 02:00:00 -3.969 ... 0.0
1980-01-01 03:00:00 -4.076 ... 0.0
1980-01-01 04:00:00 -4.248 ... 0.0
... ... ... ...
2019-12-31 19:00:00 -1.386 ... 0.0
2019-12-31 20:00:00 -1.661 ... 0.0
2019-12-31 21:00:00 -1.986 ... 0.0
2019-12-31 22:00:00 -2.184 ... 0.0
2019-12-31 23:00:00 -2.271 ... 0.0
[350640 rows x 6 columns]]
No missing data
###Markdown
2) Training You can use the shell command that allows interactive options to set hyperparameter tuning on or off and to decide whether or not to overwrite the configuration files with the final parameters. ```python3 ../source/train.py -s opsd_24``` You can also currently use a workaround with the CI flag to turn off all upcoming queries. Simply add --ci when calling the script. Upon successful run of train.py, it will generate a trained RNN model that is stored in the path 'oracles'. A step-by-step guide to what is done in train.py follows in [this notebook](./Step-by-step%20user%20guide%20on%20train.py.ipynb). 3) ProLoaF Performance Check We now want to take a deeper look into the 24h forecast the RNN is able to generate. Read in config file and set active paths. Assuming opsd_24 exists in 'targets' and the model has been trained already, so it is stored in 'oracles' (see step 2)
###Code
config_path = 'opsd_24'
PAR = read_config(config_path, main_path="../")
torch.manual_seed(1)
model_name = PAR["model_name"]
data_path = PAR["data_path"]
INFILE = os.path.join("../", data_path) # input factsheet
INMODEL = os.path.join("../", PAR["output_path"], model_name)
OUTDIR = os.path.join("../", PAR["evaluation_path"])
DEVICE = "cpu"
target_id = PAR["target_id"]
SPLIT_RATIO = PAR["validation_split"]
HISTORY_HORIZON = PAR["history_horizon"]
FORECAST_HORIZON = PAR["forecast_horizon"]
feature_groups = PAR["feature_groups"]
if not os.path.exists(OUTDIR):
os.makedirs(OUTDIR)
###Output
_____no_output_____
###Markdown
Load Data, Interpolate any missing values, and Split to use Test-Set only
###Code
# Read load data
df = pd.read_csv(INFILE, sep=";")
df = dh.fill_if_missing(df)
split = int(len(df.index) * SPLIT_RATIO)
#Originally we have trained the model to predict 24h ahead.
adj_forecast_horizon = widgets.IntSlider(value=PAR["forecast_horizon"])
display(adj_forecast_horizon)
FORECAST_HORIZON = adj_forecast_horizon.value
###Output
_____no_output_____
###Markdown
Re-load Trained RNN and Generate Tensor using ProLoaF Dataloader
###Code
with torch.no_grad():
net = torch.load(os.path.join("../",PAR["output_path"], model_name), map_location=torch.device(DEVICE)) # mapping to CPU
df_new, _ = dh.scale_all(df, scalers=net.scalers, **PAR)
df_new.index = df['Time']
df_test = df_new.iloc[int(len(df.index) * PAR["validation_split"]):]
_,_,test_data_loader = dh.transform(
df_new,
device=DEVICE,
**PAR,)
print("Number of Test Samples (1st dimension): ", test_data_loader.dataset.inputs2.shape[0])
print("Forecast Horizon (2nd dimension): ", test_data_loader.dataset.inputs2.shape[1])
print("Number of Features (3rd dimension): ", test_data_loader.dataset.inputs2.shape[2])
###Output
Number of Test Samples (1st dimension): 8595
Forecast Horizon (2nd dimension): 24
Number of Features (3rd dimension): 43
###Markdown
Imagine the 3D-tensor to have these dimensions as mentioned above Fetch Predictions
###Code
# fetch forecast horizon from the shape of the targets (check)
horizon = test_data_loader.dataset.targets.shape[1]
number_of_targets = test_data_loader.dataset.targets.shape[2]
record_targets, record_output = mh.get_prediction(net, test_data_loader,
horizon, number_of_targets)
net.eval()
###Output
_____no_output_____
###Markdown
Fetch prediction upper bounds, lower bounds and mean expected values This process is score-dependent. We refer to the score with the term "criterion". By default, ProLoaF trains with GNLL (the negative log-likelihood under the assumption of a Normal (Gaussian) distribution).
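For reference, and independent of ProLoaF's exact implementation: for a predicted mean $\mu$ and standard deviation $\sigma$, the Gaussian negative log-likelihood of an observation $y$ is $-\log p(y\mid\mu,\sigma) = \tfrac{1}{2}\log(2\pi\sigma^2) + \tfrac{(y-\mu)^2}{2\sigma^2}$, and a symmetric prediction interval can then be formed as $\mu \pm z\,\sigma$ for a chosen quantile factor $z$.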
###Code
criterion = net.criterion
y_pred_upper, y_pred_lower, record_expected_values = mh.get_pred_interval(
record_output, criterion, record_targets)
###Output
_____no_output_____
###Markdown
Let's Calculate the Deterministic and Probabilistic Forecast Metrics on the Test Set
###Code
analyzed_metrics=[
"mse",
"rmse",
"sharpness",
"picp",
"rae",
"mae",
"mis",
"mase",
"pinball_loss",
"residuals"
]
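# Brief reminders for the less common metrics above:
#   sharpness    - average width of the prediction interval (narrower is better at equal coverage)
#   picp         - prediction interval coverage probability, i.e. the fraction of observations inside the interval
#   mis          - mean interval score, penalising wide intervals and observations falling outside them
#   mase / rae   - mean absolute error scaled by a naive benchmark / relative absolute error
#   pinball_loss - quantile (pinball) loss of the predicted quantiles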
###Output
_____no_output_____
###Markdown
Choose time steps of interest and plot those
###Code
testhours = [-840]
#h=-840 is a Wednesday
# Fetch the actual time from the datetimeindex in the pandas dataframe
actual_time = pd.to_datetime(df.loc[HISTORY_HORIZON+split:, "Time"])
for i in testhours:
hours = actual_time.iloc[i : i + FORECAST_HORIZON]
plot.plot_timestep(
record_targets[i].detach().numpy(),
record_expected_values[i].detach().numpy(),
y_pred_upper[i].detach().numpy(),
y_pred_lower[i].detach().numpy(),
i,
OUTDIR,
limit=None,
actual_time=hours,
)
# Plot for days with possible congestion
print("One can also plot a desired limit, to visually indicate congestion in high load situations.")
testhours = [314]
#h=314 is a Sunday
actual_time = pd.to_datetime(df.loc[HISTORY_HORIZON+split:, "Time"])
for i in testhours:
hours = actual_time.iloc[i : i + FORECAST_HORIZON]
plot.plot_timestep(
record_targets[i].detach().numpy(),
record_expected_values[i].detach().numpy(),
y_pred_upper[i].detach().numpy(),
y_pred_lower[i].detach().numpy(),
i,
OUTDIR,
limit=1,
actual_time=hours,
draw_limit = True,
)
###Output
_____no_output_____
###Markdown
How does the forecast model perform on average?
###Code
results = metrics.fetch_metrics(
targets=record_targets,
expected_values=record_expected_values,
y_pred_upper=y_pred_upper,
y_pred_lower=y_pred_lower,
analyzed_metrics=analyzed_metrics #all above listes metrics are fetched
)
print(metrics.results_table(model_name,results,save_to_disc=OUTDIR))
###Output
mse rmse sharpness picp mis \
opsd_24_LSTM_gnll 0.007613 0.087251 0.399651 98.649902 0.473824
mase rae mae
opsd_24_LSTM_gnll 0.113817 0.150793 0.056311
###Markdown
Does the prediction perform equally well or poorly over the whole test set? We have fetched the predictions over the chosen forecast horizon for every hour in the test set. Let us visualize how this is done. Since every sample is shifted by a time delta, or let's say a sample frequency (here: frequency=1), we obtain as many samples as possible from the given data set.
###Code
print("A boxplot will serve as the most practical plot to show how much the error metrics deviate\nfrom one forecast situation to another. We refer to each forecast situation as one sample.")
# BOXPLOTS
with torch.no_grad():
plot.plot_boxplot(
targets=record_targets,
expected_values=record_expected_values,
y_pred_upper=y_pred_upper,
y_pred_lower=y_pred_lower,
analyzed_metrics=['rmse', 'mae'],
sample_frequency=24,
save_to_disc=OUTDIR,
)
print("Remember that the number of test samples is", test_data_loader.dataset.inputs2.shape[0], ".")
print("The period ranges from",actual_time.iloc[0].strftime("%a, %Y-%m-%d"), "to", actual_time.iloc[-1].strftime("%a, %Y-%m-%d"), ".")
print("For sake of clarity we have plotted the samples in 24h intervals resulting in:", test_data_loader.dataset.inputs2.shape[0]/24, "days in the test set.")
###Output
A boxplot will serve as the most practical plot to show how much the error metrics deviate
from one forecast situation to another. We refer to each forecast situation as one sample.
###Markdown
How does the forecast perform on average over the forecast period (=24 hours)?
###Code
#first fetch the metrics per time step on the forecast horizon (=24 hours in our case)
results_per_timestep = metrics.fetch_metrics(
targets=record_targets,
expected_values=record_expected_values,
y_pred_upper=y_pred_upper,
y_pred_lower=y_pred_lower,
analyzed_metrics=["rmse",
"sharpness",
"picp",
"mis"],
total=False,
)
# plot metrics
plot.plot_metrics(
results_per_timestep["rmse"],
results_per_timestep["sharpness"],
results_per_timestep["picp"],
results_per_timestep["mis"],
OUTDIR,
title="How does the forecast perform on average over the forecast period (=24 hours)?"
)
###Output
_____no_output_____ |
w_00/ex_00.ipynb | ###Markdown
Exercise notebook Play with Python *Tested with `Python >= 3.8`, may not work otherwise!* Exercise 1 Find out which Python function gets input data from the user. Write some code to read an integer and return the sum of all **even** integers smaller than the input number.
###Code
# Function
def sum_evenint_lt(bound: int) -> int:
return sum([elem for elem in range(bound) if not elem % 2])
def ask_for_int() -> int:
return int((input("Insert an integer number: ")))
# Test
print(sum_evenint_lt(ask_for_int()))
###Output
Insert an integer number: 22
###Markdown
Exercise 2 Write a Python function to calculate the number of days between two dates. Sample dates: `(2014, 7, 2)`, `(2014, 7, 11)`
###Code
# Imports
from typing import Tuple
from datetime import datetime
# Function
def simple_datediff(start: Tuple[int], stop: Tuple[int]) -> int:
return abs((datetime(*start) - datetime(*stop)).days)
# Test
print(simple_datediff((1995, 9, 10), (2021, 3, 6)))
###Output
9309
###Markdown
Exercise 3 Write a Python function to create all possible strings that use each of the characters `'a'`, `'e'`, `'i'`, `'o'`, `'u'` exactly once.
###Code
# Imports
from itertools import permutations
# Functions
def all_perms_list(lst: list) -> tuple:
return permutations(lst)
def permprint_list(lst) -> None:
for elemstr in all_perms_list(lst):
print("".join(elemstr))
# Test
permprint_list("aeiou")
###Output
aeiou
aeiuo
aeoiu
aeoui
aeuio
aeuoi
aieou
aieuo
aioeu
aioue
aiueo
aiuoe
aoeiu
aoeui
aoieu
aoiue
aouei
aouie
aueio
aueoi
auieo
auioe
auoei
auoie
eaiou
eaiuo
eaoiu
eaoui
eauio
eauoi
eiaou
eiauo
eioau
eioua
eiuao
eiuoa
eoaiu
eoaui
eoiau
eoiua
eouai
eouia
euaio
euaoi
euiao
euioa
euoai
euoia
iaeou
iaeuo
iaoeu
iaoue
iaueo
iauoe
ieaou
ieauo
ieoau
ieoua
ieuao
ieuoa
ioaeu
ioaue
ioeau
ioeua
iouae
iouea
iuaeo
iuaoe
iueao
iueoa
iuoae
iuoea
oaeiu
oaeui
oaieu
oaiue
oauei
oauie
oeaiu
oeaui
oeiau
oeiua
oeuai
oeuia
oiaeu
oiaue
oieau
oieua
oiuae
oiuea
ouaei
ouaie
oueai
oueia
ouiae
ouiea
uaeio
uaeoi
uaieo
uaioe
uaoei
uaoie
ueaio
ueaoi
ueiao
ueioa
ueoai
ueoia
uiaeo
uiaoe
uieao
uieoa
uioae
uioea
uoaei
uoaie
uoeai
uoeia
uoiae
uoiea
###Markdown
Exercise 4 Write a Python program to find the second-smallest number in a list.
###Code
# Imports
from typing import List, Union
# Function
def second_smallest_in_list(x: List[Union[int, float]]) -> Union[int, float]:
return sorted(x)[1]
print(second_smallest_in_list([1.2, -8.2, 47, 8.7, 5]))
###Output
1.2
|
Activities/03-FR/FR-networkx.ipynb | ###Markdown
FR Animation using networkx and matplotlib
###Code
import networkx as nx;
import matplotlib.pyplot as plt;
import numpy as np;
import math;
from matplotlib.animation import FuncAnimation, writers
%matplotlib widget
# network = nx.gnp_random_graph(10,0.75)
#generating a geometric network
network = nx.random_geometric_graph(100,0.2);
iterations = 500;
viewSize = 50;
viscosity = 0.15;
alpha = 0.5;
a = 0.001;
b = 1.0;
deltaT = 1.0;
verticesCount = network.number_of_nodes();
edges = network.edges();
positionsX = viewSize*np.random.random(verticesCount)-viewSize/2.0;
positionsY = viewSize*np.random.random(verticesCount)-viewSize/2.0;
velocitiesX = np.zeros(verticesCount);
velocitiesY = np.zeros(verticesCount);
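# Force model used in iterate(): every vertex pair repels with magnitude ~ b / d^2, every
# edge pulls its endpoints together with magnitude ~ a * d^2 (spring-like), and velocities
# are damped by `viscosity` at each explicit integration step of size deltaT.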
def iterate(iterationCount):
global positionsX,positionsY,velocitiesX,velocitiesY;
for iteration in range(iterationCount):
forcesX = np.zeros(verticesCount);
forcesY = np.zeros(verticesCount);
# repulsive forces
for vertex1 in range(verticesCount):
for vertex2 in range(vertex1):
x1 = positionsX[vertex1];
y1 = positionsY[vertex1];
x2 = positionsX[vertex2];
y2 = positionsY[vertex2];
distance = math.sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1)) + alpha;
rx = (x2-x1)/distance;
ry = (y2-y1)/distance;
Fx = -b*rx/distance/distance;
Fy = -b*ry/distance/distance;
forcesX[vertex1] += Fx;
forcesY[vertex1] += Fy;
forcesX[vertex2] -= Fx;
forcesY[vertex2] -= Fy;
#attractive forces;
for vFrom,vTo in edges:
x1 = positionsX[vFrom];
y1 = positionsY[vFrom];
x2 = positionsX[vTo];
y2 = positionsY[vTo];
distance = math.sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1));
Rx = (x2-x1);
Ry = (y2-y1);
Fx = a*Rx*distance;
Fy = a*Ry*distance;
forcesX[vFrom] += Fx;
forcesY[vFrom] += Fy;
forcesX[vTo] -= Fx;
forcesY[vTo] -= Fy;
velocitiesX+=forcesX*deltaT;
velocitiesX*=(1.0-viscosity);
velocitiesY+=forcesY*deltaT;
velocitiesY*=(1.0-viscosity);
positionsX += velocitiesX*deltaT;
positionsY += velocitiesY*deltaT;
# def drawGraph(i):
# linesX = []
# linesY = []
# for edge in edges:
# fx = positionsX[edge[0]];
# fy = positionsY[edge[0]];
# tx = positionsX[edge[1]];
# ty = positionsY[edge[1]];
# linesX.append(fx)
# linesX.append(tx)
# linesX.append(None)
# linesY.append(fy)
# linesY.append(ty)
# linesY.append(None)
# plt.plot(linesX,linesY);
# plt.scatter(positionsX,positionsY,marker="o",s=50);
# for i in range(10):
# iterate(10);
# drawGraph(i);
# plt.savefig("iteration%04d.png"%i);
# plt.close();
def displayEdges():
linesX = []
linesY = []
for edge in edges:
fx = positionsX[edge[0]];
fy = positionsY[edge[0]];
tx = positionsX[edge[1]];
ty = positionsY[edge[1]];
linesX.append(fx)
linesX.append(tx)
linesX.append(None)
linesY.append(fy)
linesY.append(ty)
linesY.append(None)
return linesX,linesY;
linesX,linesY = displayEdges();
fig = plt.figure(figsize=(8,8));
pltEdges, = plt.plot(linesX,linesY);
pltNodes, = plt.plot(positionsX,positionsY,marker="o",c="#888888",ms=5,linestyle = 'None');
def update(frame):
iterate(1);
linesX,linesY = displayEdges();
pltEdges.set_data(linesX,linesY);
pltNodes.set_data(positionsX,positionsY);
plt.xlim(min(np.min(positionsX),-viewSize),max(np.max(positionsX),viewSize));
plt.ylim(min(np.min(positionsY),-viewSize),max(np.max(positionsY),viewSize));
ani = FuncAnimation(fig, update, frames=np.linspace(0,1,500), interval=10, blit=False,repeat=False);
plt.plot()
###Output
_____no_output_____ |
tests/notebooks/gimvi_tutorial.ipynb | ###Markdown
Introduction to gimVI Impute missing genes in Spatial Data from Sequencing Data
###Code
import sys
sys.path.append("../../")
sys.path.append("../")
def allow_notebook_for_test():
print("Testing the gimvi notebook")
test_mode = False
save_path = "data/"
def if_not_test_else(x, y):
if not test_mode:
return x
else:
return y
if not test_mode:
save_path = "../../data"
from scvi.dataset import (
PreFrontalCortexStarmapDataset,
FrontalCortexDropseqDataset,
SmfishDataset,
CortexDataset,
)
from scvi.models import JVAE, Classifier
from scvi.inference import JVAETrainer
import notebooks.utils.gimvi_tutorial as gimvi_utils
import numpy as np
import copy
###Output
_____no_output_____
###Markdown
Load two datasets: one with spatial data, one from sequencing Here we load: - **Cortex**: a scRNA-seq dataset of 3,005 mouse somatosensory cortex cells (Zeisel et al., 2015)- **osmFISH**: a smFISH dataset of 4,462 cells and 33 genes from the same tissue (Codeluppi et al., 2018)
###Code
data_spatial = SmfishDataset(save_path=save_path)
data_seq = CortexDataset(
save_path=save_path, total_genes=None
)
# make sure gene names have the same case
data_spatial.make_gene_names_lower()
data_seq.make_gene_names_lower()
# filters genes by gene_names
data_seq.filter_genes_by_attribute(data_spatial.gene_names)
if test_mode:
data_seq = data_spatial
###Output
INFO:scvi.dataset.dataset:File ../../data/osmFISH_SScortex_mouse_all_cell.loom already downloaded
INFO:scvi.dataset.smfish:Loading smFISH dataset
../../scvi/dataset/dataset.py:1276: RuntimeWarning: divide by zero encountered in log
log_counts = np.log(data.sum(axis=1))
/home/achille/miniconda3/envs/scvi-update/lib/python3.7/site-packages/numpy/core/_methods.py:117: RuntimeWarning: invalid value encountered in subtract
x = asanyarray(arr - arrmean)
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Downsampled from 6471 to 4530 cells
INFO:scvi.dataset.dataset:Remapping labels to [0,N]
INFO:scvi.dataset.dataset:Remapping batch_indices to [0,N]
INFO:scvi.dataset.dataset:File ../../data/expression.bin already downloaded
INFO:scvi.dataset.cortex:Loading Cortex data
INFO:scvi.dataset.cortex:Finished preprocessing Cortex data
INFO:scvi.dataset.dataset:Remapping labels to [0,N]
INFO:scvi.dataset.dataset:Remapping batch_indices to [0,N]
###Markdown
- **FrontalCortexDropseq**: a scRNA-seq dataset of 71,639 mouse frontal cortex cells (Saunders et al., 2018)- **PreFrontalCortexStarmap**: a starMAP dataset of 3,704 cells and 166 genes from the mouse pre-frontal cortex (Wang et al., 2018)
###Code
# data_spatial = PreFrontalCortexStarmapDataset(save_path=save_path)
# data_seq = FrontalCortexDropseqDataset(
# save_path=save_path, genes_to_keep=data_spatial.gene_names
# )
# data_seq.subsample_cells(5000)
###Output
_____no_output_____
###Markdown
**Hide some genes in the osFISH dataset to score the imputation**
###Code
data_seq.filter_cells_by_count(1)
data_spatial.filter_cells_by_count(1)
train_size = 0.8
gene_names_rnaseq = data_seq.gene_names
np.random.seed(0)
n_genes = len(gene_names_rnaseq)
gene_ids_train = sorted(
np.random.choice(range(n_genes), int(n_genes * train_size), False)
)
gene_ids_test = sorted(set(range(n_genes)) - set(gene_ids_train))
gene_names_fish = gene_names_rnaseq[gene_ids_train]
# Create copy of the fish dataset with hidden genes
data_spatial_partial = copy.deepcopy(data_spatial)
data_spatial_partial.filter_genes_by_attribute(gene_names_fish)
data_spatial_partial.batch_indices += data_seq.n_batches
###Output
INFO:scvi.dataset.dataset:Downsampling from 33 to 26 genes
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Filtering non-expressing cells.
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Downsampled from 4530 to 4530 cells
###Markdown
**Configure the Joint Model** The joint model can take multiple datasets with potentially different observed genes. All datasets will be encoded and decoded with the union of all genes. It requires: - The gene mappings from each dataset to the common decoded vector: * *E.g.: dataset1 has genes ['a', 'b'] and dataset2 has genes ['b', 'c'], then a possible output can be ['b', 'a', 'c'] such that the mappings are [1, 0] and [0, 2]* * *Usually, if the genes of dataset2 are included in dataset1, it is much more efficient to keep the order of dataset1 in the output and use `slice(None)` as a mapping for dataset1* - The number of inputs (i.e. the number of genes) in each dataset - The distributions to use for the generative process: usually scRNA-seq is modelled with ZINB (because of technical dropout) and FISH with NB or even Poisson - Whether to model the library size with a latent variable or use the observed value
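A tiny sketch of that mapping convention, using the hypothetical genes from the example above:
```python
import numpy as np

common = np.array(['b', 'a', 'c'])   # union of genes = order of the decoded vector
mapping_1 = np.array([1, 0])         # dataset1 genes ['a', 'b'] -> positions in `common`
mapping_2 = np.array([0, 2])         # dataset2 genes ['b', 'c'] -> positions in `common`
print(common[mapping_1])             # ['a' 'b']
print(common[mapping_2])             # ['b' 'c']
```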
###Code
datasets = [data_seq, data_spatial_partial]
generative_distributions = ["zinb", "nb"]
gene_mappings = [slice(None), np.array(gene_ids_train)]
n_inputs = [d.nb_genes for d in datasets]
total_genes = data_seq.nb_genes
n_batches = sum([d.n_batches for d in datasets])
model_library_size = [True, False]
n_latent = 8
kappa = 1
import torch
torch.manual_seed(0)
model = JVAE(
n_inputs,
total_genes,
gene_mappings,
generative_distributions,
model_library_size,
n_layers_decoder_individual=0,
n_layers_decoder_shared=0,
n_layers_encoder_individual=1,
n_layers_encoder_shared=1,
dim_hidden_encoder=64,
dim_hidden_decoder_shared=64,
dropout_rate_encoder=0.2,
dropout_rate_decoder=0.2,
n_batch=n_batches,
n_latent=n_latent,
)
discriminator = Classifier(n_latent, 32, 2, 3, logits=True)
trainer = JVAETrainer(model, discriminator, datasets, 0.95, frequency=1, kappa=kappa)
n_epochs = if_not_test_else(200, 1)
trainer.train(n_epochs=n_epochs)
gimvi_utils.plot_umap(trainer)
gimvi_utils.imputation_score(trainer, data_spatial, gene_ids_test, True)
###Output
_____no_output_____
###Markdown
Plot imputation for *LAMP5*, hidden in the training
###Code
gimvi_utils.plot_gene_spatial(trainer, data_spatial, 9)
###Output
_____no_output_____
###Markdown
Inspect classification accuracy (we expect a uniform matrix). If the matrix is diagonal, the `kappa` needs to be scaled up to ensure mixing.
###Code
discriminator_classification = trainer.get_discriminator_confusion()
discriminator_classification
import pandas as pd
results = pd.DataFrame(
trainer.get_loss_magnitude(),
index=["reconstruction", "kl_divergence", "discriminator"],
columns=["Sequencing", "Spatial"],
)
results.columns.name = "Dataset"
results.index.name = "Loss"
results
###Output
_____no_output_____
###Markdown
Basic gimVI tutorial Impute missing genes in Spatial Data from Sequencing Data
###Code
import sys
sys.path.append("../../")
sys.path.append("../")
def allow_notebook_for_test():
print("Testing the gimvi notebook")
test_mode = False
save_path = "data/"
def if_not_test_else(x, y):
if not test_mode:
return x
else:
return y
if not test_mode:
save_path = "../../data"
from scvi.dataset import (
PreFrontalCortexStarmapDataset,
FrontalCortexDropseqDataset,
SmfishDataset,
CortexDataset,
)
from scvi.models import JVAE, Classifier
from scvi.inference import JVAETrainer
import notebooks.utils.gimvi_tutorial as gimvi_utils
import numpy as np
import copy
###Output
_____no_output_____
###Markdown
Load two datasets: one with spatial data, one from sequencing Here we load: - **Cortex**: a scRNA-seq dataset of 3,005 mouse somatosensory cortex cells (Zeisel et al., 2015)- **osmFISH**: a smFISH dataset of 4,462 cells and 33 genes from the same tissue (Codeluppi et al., 2018)
###Code
data_spatial = SmfishDataset(save_path=save_path)
data_seq = CortexDataset(
save_path=save_path, total_genes=None
)
# make sure gene names have the same case
data_spatial.make_gene_names_lower()
data_seq.make_gene_names_lower()
# filters genes by gene_names
data_seq.filter_genes_by_attribute(data_spatial.gene_names)
if test_mode:
data_seq = data_spatial
###Output
INFO:scvi.dataset.dataset:File ../../data/osmFISH_SScortex_mouse_all_cell.loom already downloaded
INFO:scvi.dataset.smfish:Loading smFISH dataset
../../scvi/dataset/dataset.py:1276: RuntimeWarning: divide by zero encountered in log
log_counts = np.log(data.sum(axis=1))
/home/achille/miniconda3/envs/scvi-update/lib/python3.7/site-packages/numpy/core/_methods.py:117: RuntimeWarning: invalid value encountered in subtract
x = asanyarray(arr - arrmean)
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Downsampled from 6471 to 4530 cells
INFO:scvi.dataset.dataset:Remapping labels to [0,N]
INFO:scvi.dataset.dataset:Remapping batch_indices to [0,N]
INFO:scvi.dataset.dataset:File ../../data/expression.bin already downloaded
INFO:scvi.dataset.cortex:Loading Cortex data
INFO:scvi.dataset.cortex:Finished preprocessing Cortex data
INFO:scvi.dataset.dataset:Remapping labels to [0,N]
INFO:scvi.dataset.dataset:Remapping batch_indices to [0,N]
###Markdown
- **FrontalCortexDropseq**: a scRNA-seq dataset of 71,639 mouse frontal cortex cells (Saunders et al., 2018)- **PreFrontalCortexStarmap**: a starMAP dataset of 3,704 cells and 166 genes from the mouse pre-frontal cortex (Wang et al., 2018)
###Code
# data_spatial = PreFrontalCortexStarmapDataset(save_path=save_path)
# data_seq = FrontalCortexDropseqDataset(
# save_path=save_path, genes_to_keep=data_spatial.gene_names
# )
# data_seq.subsample_cells(5000)
###Output
_____no_output_____
###Markdown
**Hide some genes in the osFISH dataset to score the imputation**
###Code
data_seq.filter_cells_by_count(1)
data_spatial.filter_cells_by_count(1)
train_size = 0.8
gene_names_rnaseq = data_seq.gene_names
np.random.seed(0)
n_genes = len(gene_names_rnaseq)
gene_ids_train = sorted(
np.random.choice(range(n_genes), int(n_genes * train_size), False)
)
gene_ids_test = sorted(set(range(n_genes)) - set(gene_ids_train))
gene_names_fish = gene_names_rnaseq[gene_ids_train]
# Create copy of the fish dataset with hidden genes
data_spatial_partial = copy.deepcopy(data_spatial)
data_spatial_partial.filter_genes_by_attribute(gene_names_fish)
data_spatial_partial.batch_indices += data_seq.n_batches
###Output
INFO:scvi.dataset.dataset:Downsampling from 33 to 26 genes
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Filtering non-expressing cells.
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Downsampled from 4530 to 4530 cells
###Markdown
**Configure the Joint Model** The joint model can take multiple datasets with potentially different observed genes. All datasets will be encoded and decoded with the union of all genes. It requires: - The gene mappings from each dataset to the common decoded vector: * *E.g.: dataset1 has genes ['a', 'b'] and dataset2 has genes ['b', 'c'], then a possible output can be ['b', 'a', 'c'] such that the mappings are [1, 0] and [0, 2]* * *Usually, if the genes of dataset2 are included in dataset1, it is much more efficient to keep the order of dataset1 in the output and use `slice(None)` as a mapping for dataset1* - The number of inputs (i.e. the number of genes) in each dataset - The distributions to use for the generative process: usually scRNA-seq is modelled with ZINB (because of technical dropout) and FISH with NB or even Poisson - Whether to model the library size with a latent variable or use the observed value
###Code
datasets = [data_seq, data_spatial_partial]
generative_distributions = ["zinb", "nb"]
gene_mappings = [slice(None), np.array(gene_ids_train)]
n_inputs = [d.nb_genes for d in datasets]
total_genes = data_seq.nb_genes
n_batches = sum([d.n_batches for d in datasets])
model_library_size = [True, False]
n_latent = 8
kappa = 1
import torch
torch.manual_seed(0)
model = JVAE(
n_inputs,
total_genes,
gene_mappings,
generative_distributions,
model_library_size,
n_layers_decoder_individual=0,
n_layers_decoder_shared=0,
n_layers_encoder_individual=1,
n_layers_encoder_shared=1,
dim_hidden_encoder=64,
dim_hidden_decoder_shared=64,
dropout_rate_encoder=0.2,
dropout_rate_decoder=0.2,
n_batch=n_batches,
n_latent=n_latent,
)
discriminator = Classifier(n_latent, 32, 2, 3, logits=True)
trainer = JVAETrainer(model, discriminator, datasets, 0.95, frequency=1, kappa=kappa)
n_epochs = if_not_test_else(200, 1)
trainer.train(n_epochs=n_epochs)
gimvi_utils.plot_umap(trainer)
gimvi_utils.imputation_score(trainer, data_spatial, gene_ids_test, True)
###Output
_____no_output_____
###Markdown
Plot imputation for *LAMP5*, hidden in the training
###Code
gimvi_utils.plot_gene_spatial(trainer, data_spatial, 9)
###Output
_____no_output_____
###Markdown
Inspect classification accuracy (we expect a uniform matrix). If the matrix is diagonal, the `kappa` needs to be scaled up to ensure mixing.
###Code
discriminator_classification = trainer.get_discriminator_confusion()
discriminator_classification
import pandas as pd
results = pd.DataFrame(
trainer.get_loss_magnitude(),
index=["reconstruction", "kl_divergence", "discriminator"],
columns=["Sequencing", "Spatial"],
)
results.columns.name = "Dataset"
results.index.name = "Loss"
results
###Output
_____no_output_____
###Markdown
Basic gimVI tutorial Impute missing genes in Spatial Data from Sequencing Data
###Code
import sys
sys.path.append("../../")
sys.path.append("../")
def allow_notebook_for_test():
print("Testing the gimvi notebook")
test_mode = False
save_path = "data/"
def if_not_test_else(x, y):
if not test_mode:
return x
else:
return y
if not test_mode:
save_path = "../../data"
from scvi.dataset import (
PreFrontalCortexStarmapDataset,
FrontalCortexDropseqDataset,
SmfishDataset,
CortexDataset,
)
from scvi.models import JVAE, Classifier
from scvi.inference import JVAETrainer
import notebooks.utils.gimvi_tutorial as gimvi_utils
import numpy as np
import copy
###Output
_____no_output_____
###Markdown
Load two datasets: one with spatial data, one from sequencing Here we load: - **Cortex**: a scRNA-seq dataset of 3,005 mouse somatosensory cortex cells (Zeisel et al., 2015)- **osmFISH**: a smFISH dataset of 4,462 cells and 33 genes from the same tissue (Codeluppi et al., 2018)
###Code
data_spatial = SmfishDataset(save_path=save_path)
data_seq = CortexDataset(
save_path=save_path, genes_to_keep=data_spatial.gene_names, total_genes=None
)
if test_mode:
data_seq = data_spatial
###Output
INFO:scvi.dataset.dataset:File ../../data/osmFISH_SScortex_mouse_all_cell.loom already downloaded
INFO:scvi.dataset.smfish:Loading smFISH dataset
../../scvi/dataset/dataset.py:1276: RuntimeWarning: divide by zero encountered in log
log_counts = np.log(data.sum(axis=1))
/home/achille/miniconda3/envs/scvi-update/lib/python3.7/site-packages/numpy/core/_methods.py:117: RuntimeWarning: invalid value encountered in subtract
x = asanyarray(arr - arrmean)
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Downsampled from 6471 to 4530 cells
INFO:scvi.dataset.dataset:Remapping labels to [0,N]
INFO:scvi.dataset.dataset:Remapping batch_indices to [0,N]
INFO:scvi.dataset.dataset:File ../../data/expression.bin already downloaded
INFO:scvi.dataset.cortex:Loading Cortex data
INFO:scvi.dataset.cortex:Finished preprocessing Cortex data
INFO:scvi.dataset.dataset:Remapping labels to [0,N]
INFO:scvi.dataset.dataset:Remapping batch_indices to [0,N]
###Markdown
- **FrontalCortexDropseq**: a scRNA-seq dataset of 71,639 mouse frontal cortex cells (Saunders et al., 2018)- **PreFrontalCortexStarmap**: a starMAP dataset of 3,704 cells and 166 genes from the mouse pre-frontal cortex (Wang et al., 2018)
###Code
# data_spatial = PreFrontalCortexStarmapDataset(save_path=save_path)
# data_seq = FrontalCortexDropseqDataset(
# save_path=save_path, genes_to_keep=data_spatial.gene_names
# )
# data_seq.subsample_cells(5000)
###Output
_____no_output_____
###Markdown
**Hide some genes in the osFISH dataset to score the imputation**
###Code
data_seq.filter_cells_by_count(1)
data_spatial.filter_cells_by_count(1)
train_size = 0.8
gene_names_rnaseq = data_seq.gene_names
np.random.seed(0)
n_genes = len(gene_names_rnaseq)
gene_ids_train = sorted(
np.random.choice(range(n_genes), int(n_genes * train_size), False)
)
gene_ids_test = sorted(set(range(n_genes)) - set(gene_ids_train))
gene_names_fish = gene_names_rnaseq[gene_ids_train]
# Create copy of the fish dataset with hidden genes
data_spatial_partial = copy.deepcopy(data_spatial)
data_spatial_partial.filter_genes_by_attribute(gene_names_fish)
data_spatial_partial.batch_indices += data_seq.n_batches
###Output
INFO:scvi.dataset.dataset:Downsampling from 33 to 26 genes
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Filtering non-expressing cells.
INFO:scvi.dataset.dataset:Computing the library size for the new data
INFO:scvi.dataset.dataset:Downsampled from 4530 to 4530 cells
###Markdown
**Configure the Joint Model** The joint model can take multiple datasets with potentially different observed genes. All datasets will be encoded and decoded with the union of all genes. It requires: - The gene mappings from each dataset to the common decoded vector: * *E.g.: dataset1 has genes ['a', 'b'] and dataset2 has genes ['b', 'c'], then a possible output can be ['b', 'a', 'c'] such that the mappings are [1, 0] and [0, 2]* * *Usually, if the genes of dataset2 are included in dataset1, it is much more efficient to keep the order of dataset1 in the output and use `slice(None)` as a mapping for dataset1* - The number of inputs (i.e. the number of genes) in each dataset - The distributions to use for the generative process: usually scRNA-seq is modelled with ZINB (because of technical dropout) and FISH with NB or even Poisson - Whether to model the library size with a latent variable or use the observed value
###Code
datasets = [data_seq, data_spatial_partial]
generative_distributions = ["zinb", "nb"]
gene_mappings = [slice(None), np.array(gene_ids_train)]
n_inputs = [d.nb_genes for d in datasets]
total_genes = data_seq.nb_genes
n_batches = sum([d.n_batches for d in datasets])
model_library_size = [True, False]
n_latent = 8
kappa = 1
import torch
torch.manual_seed(0)
model = JVAE(
n_inputs,
total_genes,
gene_mappings,
generative_distributions,
model_library_size,
n_layers_decoder_individual=0,
n_layers_decoder_shared=0,
n_layers_encoder_individual=1,
n_layers_encoder_shared=1,
dim_hidden_encoder=64,
dim_hidden_decoder_shared=64,
dropout_rate_encoder=0.2,
dropout_rate_decoder=0.2,
n_batch=n_batches,
n_latent=n_latent,
)
discriminator = Classifier(n_latent, 32, 2, 3, logits=True)
trainer = JVAETrainer(model, discriminator, datasets, 0.95, frequency=1, kappa=kappa)
n_epochs = if_not_test_else(200, 1)
trainer.train(n_epochs=n_epochs)
gimvi_utils.plot_umap(trainer)
gimvi_utils.imputation_score(trainer, data_spatial, gene_ids_test, True)
###Output
_____no_output_____
###Markdown
Plot imputation for *LAMP5*, hidden in the training
###Code
gimvi_utils.plot_gene_spatial(trainer, data_spatial, 9)
###Output
_____no_output_____
###Markdown
Inspect classification accuracy (we expect a uniform matrix). If the matrix is diagonal, the `kappa` needs to be scaled up to ensure mixing.
###Code
discriminator_classification = trainer.get_discriminator_confusion()
discriminator_classification
import pandas as pd
results = pd.DataFrame(
trainer.get_loss_magnitude(),
index=["reconstruction", "kl_divergence", "discriminator"],
columns=["Sequencing", "Spatial"],
)
results.columns.name = "Dataset"
results.index.name = "Loss"
results
###Output
_____no_output_____ |
pytorch-tabular-solution-francesco-pochetti.ipynb | ###Markdown
PyTorch for Tabular Data: Predicting NYC Taxi Fares_FULL CREDIT TO FRANCESCO POCHETTI_ _http://francescopochetti.com/pytorch-for-tabular-data-predicting-nyc-taxi-fares/_ Imports
###Code
%matplotlib inline
import pathlib
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = 8, 6
import pandas as pd
import numpy as np
import seaborn as sns
pd.set_option('display.max_columns', 500)
from collections import defaultdict
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity='all'
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error
pd.options.mode.chained_assignment = None
from torch.nn import init
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F
from torch.utils import data
from torch.optim import lr_scheduler
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
from tqdm import tqdm, tqdm_notebook, tnrange
tqdm.pandas(desc='Progress')
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def haversine_distance(df, start_lat, end_lat, start_lng, end_lng, prefix):
"""
calculates haversine distance between 2 sets of GPS coordinates in df
"""
R = 6371 #radius of earth in kilometers
phi1 = np.radians(df[start_lat])
phi2 = np.radians(df[end_lat])
delta_phi = np.radians(df[end_lat]-df[start_lat])
delta_lambda = np.radians(df[end_lng]-df[start_lng])
a = np.sin(delta_phi / 2.0) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda / 2.0) ** 2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
d = (R * c) #in kilometers
df[prefix+'distance_km'] = d
def add_datepart(df, col, prefix):
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
attr = attr + ['Hour', 'Minute', 'Second']
for n in attr: df[prefix + n] = getattr(df[col].dt, n.lower())
df[prefix + 'Elapsed'] = df[col].astype(np.int64) // 10 ** 9
df.drop(col, axis=1, inplace=True)
def reject_outliers(data, m = 2.):
d = np.abs(data - np.median(data))
mdev = np.median(d)
s = d/(mdev if mdev else 1.)
return s<m
def parse_gps(df, prefix):
lat = prefix + '_latitude'
lon = prefix + '_longitude'
df[prefix + '_x'] = np.cos(df[lat]) * np.cos(df[lon])
df[prefix + '_y'] = np.cos(df[lat]) * np.sin(df[lon])
df[prefix + '_z'] = np.sin(df[lat])
df.drop([lat, lon], axis=1, inplace=True)
def prepare_dataset(df):
df['pickup_datetime'] = pd.to_datetime(df.pickup_datetime, infer_datetime_format=True)
add_datepart(df, 'pickup_datetime', 'pickup')
haversine_distance(df, 'pickup_latitude', 'dropoff_latitude', 'pickup_longitude', 'dropoff_longitude', '')
parse_gps(df, 'pickup')
parse_gps(df, 'dropoff')
df.dropna(inplace=True)
y = np.log(df.fare_amount)
df.drop(['key', 'fare_amount'], axis=1, inplace=True)
return df, y
def split_features(df):
catf = ['pickupYear', 'pickupMonth', 'pickupWeek', 'pickupDay', 'pickupDayofweek',
'pickupDayofyear', 'pickupHour', 'pickupMinute', 'pickupSecond', 'pickupIs_month_end',
'pickupIs_month_start', 'pickupIs_quarter_end', 'pickupIs_quarter_start',
'pickupIs_year_end', 'pickupIs_year_start']
numf = [col for col in df.columns if col not in catf]
for c in catf:
df[c] = df[c].astype('category').cat.as_ordered()
df[c] = df[c].cat.codes+1
return catf, numf
def numericalize(df, col, name):
# convert an ordered categorical column to integer codes (unused helper; split_features already applies this)
df[name] = col.cat.codes+1
def split_dataset(df, y): return train_test_split(df, y, test_size=0.25, random_state=42)
def inv_y(y): return np.exp(y)
def get_numf_scaler(train): return preprocessing.StandardScaler().fit(train)
def scale_numf(df, num, scaler):
cols = numf
index = df.index
scaled = scaler.transform(df[numf])
scaled = pd.DataFrame(scaled, columns=cols, index=index)
return pd.concat([scaled, df.drop(numf, axis=1)], axis=1)
class RegressionColumnarDataset(data.Dataset):
def __init__(self, df, cats, y):
self.dfcats = df[cats]
self.dfconts = df.drop(cats, axis=1)
self.cats = np.stack([c.values for n, c in self.dfcats.items()], axis=1).astype(np.int64)
self.conts = np.stack([c.values for n, c in self.dfconts.items()], axis=1).astype(np.float32)
self.y = y.values.astype(np.float32)
def __len__(self): return len(self.y)
def __getitem__(self, idx):
return [self.cats[idx], self.conts[idx], self.y[idx]]
def rmse(targ, y_pred):
return np.sqrt(mean_squared_error(inv_y(y_pred), inv_y(targ))) #.detach().numpy()
def emb_init(x):
x = x.weight.data
sc = 2/(x.size(1)+1)
x.uniform_(-sc,sc)
class MixedInputModel(nn.Module):
def __init__(self, emb_szs, n_cont, emb_drop, out_sz, szs, drops, y_range, use_bn=True):
super().__init__()
for i,(c,s) in enumerate(emb_szs): assert c > 1, f"cardinality must be >=2, got emb_szs[{i}]: ({c},{s})"
self.embs = nn.ModuleList([nn.Embedding(c, s) for c,s in emb_szs])
for emb in self.embs: emb_init(emb)
n_emb = sum(e.embedding_dim for e in self.embs)
self.n_emb, self.n_cont=n_emb, n_cont
szs = [n_emb+n_cont] + szs
self.lins = nn.ModuleList([nn.Linear(szs[i], szs[i+1]) for i in range(len(szs)-1)])
self.bns = nn.ModuleList([nn.BatchNorm1d(sz) for sz in szs[1:]])
for o in self.lins: nn.init.kaiming_normal_(o.weight.data)
self.outp = nn.Linear(szs[-1], out_sz)
nn.init.kaiming_normal_(self.outp.weight.data)
self.emb_drop = nn.Dropout(emb_drop)
self.drops = nn.ModuleList([nn.Dropout(drop) for drop in drops])
self.bn = nn.BatchNorm1d(n_cont)
self.use_bn,self.y_range = use_bn,y_range
def forward(self, x_cat, x_cont):
if self.n_emb != 0:
x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
x = torch.cat(x, 1)
x = self.emb_drop(x)
if self.n_cont != 0:
x2 = self.bn(x_cont)
x = torch.cat([x, x2], 1) if self.n_emb != 0 else x2
for l,d,b in zip(self.lins, self.drops, self.bns):
x = F.relu(l(x))
if self.use_bn: x = b(x)
x = d(x)
x = self.outp(x)
if self.y_range:
x = torch.sigmoid(x)
x = x*(self.y_range[1] - self.y_range[0])
x = x+self.y_range[0]
return x.squeeze()
def fit(model, train_dl, val_dl, loss_fn, opt, scheduler, epochs=3):
num_batch = len(train_dl)
for epoch in tnrange(epochs):
y_true_train = list()
y_pred_train = list()
total_loss_train = 0
t = tqdm_notebook(iter(train_dl), leave=False, total=num_batch)
for cat, cont, y in t:
cat = cat.cuda()
cont = cont.cuda()
y = y.cuda()
t.set_description(f'Epoch {epoch}')
opt.zero_grad()
pred = model(cat, cont)
loss = loss_fn(pred, y)
loss.backward()
lr[epoch].append(opt.param_groups[0]['lr'])
tloss[epoch].append(loss.item())
scheduler.step()
opt.step()
t.set_postfix(loss=loss.item())
y_true_train += list(y.cpu().data.numpy())
y_pred_train += list(pred.cpu().data.numpy())
total_loss_train += loss.item()
train_acc = rmse(y_true_train, y_pred_train)
train_loss = total_loss_train/len(train_dl)
if val_dl:
y_true_val = list()
y_pred_val = list()
total_loss_val = 0
for cat, cont, y in tqdm_notebook(val_dl, leave=False):
cat = cat.cuda()
cont = cont.cuda()
y = y.cuda()
pred = model(cat, cont)
loss = loss_fn(pred, y)
y_true_val += list(y.cpu().data.numpy())
y_pred_val += list(pred.cpu().data.numpy())
total_loss_val += loss.item()
vloss[epoch].append(loss.item())
valacc = rmse(y_true_val, y_pred_val)
valloss = total_loss_val/len(valdl)
print(f'Epoch {epoch}: train_loss: {train_loss:.4f} train_rmse: {train_acc:.4f} | val_loss: {valloss:.4f} val_rmse: {valacc:.4f}')
else:
print(f'Epoch {epoch}: train_loss: {train_loss:.4f} train_rmse: {train_acc:.4f}')
return lr, tloss, vloss
###Output
_____no_output_____
###Markdown
Preparing the data Tha data is a half-million random sample from the 55M original training set from the [New York City Taxi Fare Prediction](https://www.kaggle.com/c/new-york-city-taxi-fare-prediction/data) Kaggle's challenge
###Code
PATH='../input/'
names = ['key','fare_amount','pickup_datetime','pickup_longitude',
'pickup_latitude','dropoff_longitude','dropoff_latitude','passenger_count']
df = pd.read_csv(f'{PATH}train.csv', nrows=6000000)
print(df.shape)
print(df.head())
print(df.passenger_count.describe())
print(df.passenger_count.quantile([.85, .99]))
print(df.fare_amount.describe())
print(df.fare_amount.quantile([.85, .99]))
df = df.loc[(df.fare_amount > 0) & (df.passenger_count < 6) & (df.fare_amount < 53),:]
df, y = prepare_dataset(df)
print(df.shape)
print(df.head())
ax = y.hist(bins=20, figsize=(8,6))
_ = ax.set_xlabel("Ride Value (EUR)")
_ = ax.set_ylabel("# Rides")
_ = ax.set_title('Ditribution of Ride Values (USD)')
catf, numf = split_features(df)
len(catf)
catf
len(numf)
numf
df.head()
y_range = (0, y.max()*1.2)
y_range
y = y.clip(y_range[0], y_range[1])
X_train, X_test, y_train, y_test = split_dataset(df, y)
X_train.shape
X_test.shape
scaler = get_numf_scaler(X_train[numf])
X_train_sc = scale_numf(X_train, numf, scaler)
X_train_sc.std(axis=0)
X_test_sc = scale_numf(X_test, numf, scaler)
X_train_sc.shape
X_test_sc.shape
X_test_sc.std(axis=0)
###Output
_____no_output_____
###Markdown
Defining pytorch datasets and dataloaders
###Code
trainds = RegressionColumnarDataset(X_train_sc, catf, y_train)
valds = RegressionColumnarDataset(X_test_sc, catf, y_test)
class RegressionColumnarDataset(data.Dataset):
def __init__(self, df, cats, y):
self.dfcats = df[cats]
self.dfconts = df.drop(cats, axis=1)
self.cats = np.stack([c.values for n, c in self.dfcats.items()], axis=1).astype(np.int64)
self.conts = np.stack([c.values for n, c in self.dfconts.items()], axis=1).astype(np.float32)
self.y = y.values.astype(np.float32)
def __len__(self): return len(self.y)
def __getitem__(self, idx):
return [self.cats[idx], self.conts[idx], self.y[idx]]
params = {'batch_size': 128,
'shuffle': True,
'num_workers': 8}
traindl = data.DataLoader(trainds, **params)
valdl = data.DataLoader(valds, **params)
###Output
_____no_output_____
###Markdown
Defining model and related variables
###Code
cat_sz = [(c, df[c].max()+1) for c in catf]
cat_sz
emb_szs = [(c, min(50, (c+1)//2)) for _,c in cat_sz]
emb_szs
m = MixedInputModel(emb_szs=emb_szs,
n_cont=len(df.columns)-len(catf),
emb_drop=0.04,
out_sz=1,
szs=[1000,500,250],
drops=[0.001,0.01,0.01],
y_range=y_range).to(device)
opt = optim.Adam(m.parameters(), 1e-2)
lr_cosine = lr_scheduler.CosineAnnealingLR(opt, 1000)
lr = defaultdict(list)
tloss = defaultdict(list)
vloss = defaultdict(list)
m
###Output
_____no_output_____
###Markdown
Training the model
###Code
epoch_n = 12
lr, tloss, vloss = fit(model=m, train_dl=traindl, val_dl=valdl, loss_fn=F.mse_loss, opt=opt, scheduler=lr_cosine, epochs=epoch_n)
_ = plt.plot(lr[0])
_ = plt.title('Learning Rate Cosine Annealing over Train Batches Iterations (Epoch 0)')
t = [np.mean(tloss[el]) for el in tloss]
v = [np.mean(vloss[el]) for el in vloss]
p = pd.DataFrame({'Train Loss': t, 'Validation Loss': v, 'Epochs': range(1, epoch_n+1)})
_ = p.plot(x='Epochs', y=['Train Loss', 'Validation Loss'],
title='Train and Validation Loss over Epochs')
###Output
_____no_output_____ |
hw2/neural_summarization_pegasus.ipynb | ###Markdown
###Code
#@title MIT License
#
# Copyright (c) 2022 Maxwell Weinzierl
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Neural Summarization with PEGASUS This notebook utilizes PEGASUS, a State-Of-The-Art (SOTA) neural transformer model pre-trained on extracted gap-sentences for abstractive summarization. The paper which introduces PEGASUS can be found here: https://arxiv.org/pdf/1912.08777.pdf Packages and Libraries We will utilize the deep learning library PyTorch this time as opposed to TensorFlow. PyTorch (https://pytorch.org/) has become the most popular deep learning library for research to date: http://horace.io/pytorch-vs-tensorflow/ HuggingFace Transformers Next we will install the `transformers` library, built by HuggingFace. This library makes it extremely easy to use SOTA neural NLP models with PyTorch. See the HuggingFace website to browse all the publicly available models: https://huggingface.co/models HuggingFace Datasets HuggingFace also provides a library called `datasets` for downloading and utilizing common NLP datasets: https://huggingface.co/datasets SentencePiece Tokenizer The SentencePiece tokenizer library is required for the PEGASUS model. Model Summary TorchInfo is a nice little library to provide a summary of model sizes and layers. We install it below to visualize the size of our models.
###Code
!pip install transformers datasets sentencepiece torchinfo
import torch
import transformers
import datasets
from torchinfo import summary
from textwrap import wrap
print(torch.__version__)
print('CUDA Enabled: ', torch.cuda.is_available())
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if torch.cuda.is_available():
print(f' {device} - ' + torch.cuda.get_device_name(0))
else:
print(f' {device}')
###Output
_____no_output_____
###Markdown
The above cell should include a torch library with "+cu..." to denote PyTorch is installed with CUDA capabilities. CUDA should be enabled with at least one device. Typically a Tesla K80 is the GPU I get on Google Colab, but others may be assigned as resources are made available. If you are unable to reserve a GPU instance then the device will be "cpu" and the code will run much slower, but still work. Neural Summarization Models Below we load our neural summarization model. We load the model and the tokenizer specified by `model_name` from HuggingFace. The library will automatically download all required model weights, config files, and tokenizers. We then move the model to the `cuda:0` device (our GPU) and turn on eval mode to avoid dropout randomness. Finally, we print a summary of our model.
###Code
#@title Model
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
model_name = 'google/pegasus-xsum' #@param ["google/pegasus-xsum"]
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
# move model to GPU device
model.to(device)
# turn on EVAL mode so drop-out layers do not randomize outputs
model.eval()
# create model summary
summary(model)
###Output
_____no_output_____
###Markdown
Summarization Datasets We will examine the Extreme Summarization (XSum) Dataset: https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset Feel free to play around with the other summarization datasets, or find your own on HuggingFace Datasets: https://huggingface.co/datasets?task_categories=task_categories:question-answering&sort=downloads
###Code
from datasets import load_dataset
#@title Dataset
dataset = 'xsum' #@param ["xsum", "cnn_dailymail"]
data = load_dataset(dataset)
ds = data['validation']
data_size = len(ds)
print(ds)
###Output
_____no_output_____
###Markdown
Inspecting the Dataset We can look at individual examples in the validation collection of XSum to get a feeling for the types of documents and annotated summaries.
###Code
#@title Example { run: "auto" }
example_index = 0 #@param {type:"slider", min:0, max:11331, step:1}
example = ds[example_index]
print('Document: ')
for line in wrap(example['document'], 50):
print(f' {line}')
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Specific Example We will use the example below to follow the prediction process of the model.
###Code
example_index = 2161
example = ds[example_index]
print('Document: ')
for line in wrap(example['document'], 50):
print(f' {line}')
print('Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Tokenization We will tokenize the above example using the HuggingFace tokenizer:
###Code
# we will tokenize a single example question and context,
# and we will move these tensors to the GPU device:
inputs = tokenizer(example['document'], return_tensors="pt", truncation=True).to(device)
print('Inputs to model: ')
print(f' {inputs.keys()}')
# the inputs to the model will contain a few tensors, but the most
# important tensor is the "input_ids":
input_ids = inputs['input_ids'][0]
print(input_ids)
# these are the token ids of the input. We can convert back to text tokens like so:
input_tokens = tokenizer.convert_ids_to_tokens(input_ids)
for line in wrap(str(input_tokens), 50):
print(line)
# Notice that we have added a </s> token to denote the end of the sequence,
# and the SentencePiece tokenizer has split some words up into pieces, such as
# '▁PG', '&', 'E' from PG&E and
# '▁shutoff', 's' from shutoffs
###Output
_____no_output_____
###Markdown
Running the Model Next we will run the model on the above example.
###Code
# the outputs will contain decoded token ids
# based on the estimated most likely summary sequence
# using greedy decoding
summary_ids = model.generate(
**inputs
)[0]
print(summary_ids)
# we convert these token ids back to tokens:
summary_tokens = tokenizer.convert_ids_to_tokens(summary_ids)
for line in wrap(str(summary_tokens), 50):
print(line)
# we can then transform these tokens to a normal string:
summary = tokenizer.convert_tokens_to_string(summary_tokens)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Generated Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Sampling Summaries We will first define a `run_model` function to do all of the above for an example.
###Code
# Re-run this cell when you swap models
def run_model(example, **generate_args):
# we will tokenize a single example document,
# and we will move these tensors to the GPU device:
inputs = tokenizer(example['document'], return_tensors="pt", truncation=True).to(device)
# the outputs will contain decoded token ids
# based on the estimated most likely summary sequence
# using various decoding options
multi_summary_ids = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
**generate_args
)
# converts token ids back to strings for multiple summaries
summaries = tokenizer.batch_decode(
multi_summary_ids,
skip_special_tokens=True
)
return summaries
###Output
_____no_output_____
###Markdown
Generating Strategies There are various ways to produce samples from a sequence-generating model. Above we utilized greedy search, which picks the maximum-probability token at every opportunity. This can miss out on tokens which have a lower conditional probability but lead to a higher joint sentence probability after further token generation. The following article summarizes many popular generating strategies: https://huggingface.co/blog/how-to-generate Greedy Search
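Before generating a greedy summary with PEGASUS, here is a tiny toy sketch (independent of PEGASUS; the probabilities are made up for illustration) of why picking the argmax token at each step can miss the sequence with the highest joint probability:
###Code
# Toy illustration only (no PEGASUS involved): greedy decoding can miss the
# sequence with the highest joint probability.
step1 = {'A': 0.6, 'B': 0.4}                        # P(w1)
step2 = {'A': {'C': 0.5, 'D': 0.5},                 # P(w2 | w1)
         'B': {'C': 0.9, 'D': 0.1}}
# Greedy: pick the argmax token at every step.
w1 = max(step1, key=step1.get)
w2 = max(step2[w1], key=step2[w1].get)
print('greedy sequence:', (w1, w2), 'joint prob:', round(step1[w1] * step2[w1][w2], 2))
# Exhaustive search over all two-token sequences (what beam search approximates).
best_seq, best_prob = max(
    (((a, b), step1[a] * step2[a][b]) for a in step1 for b in step2[a]),
    key=lambda pair: pair[1])
print('best sequence  :', best_seq, 'joint prob:', round(best_prob, 2))
###Output
_____no_output_____
###Markdown
Now we generate a summary from PEGASUS with greedy decoding: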
###Code
summary = run_model(example)[0]
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Greedy Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Beam Search
###Code
summaries = run_model(
example,
num_beams=10,
num_return_sequences=5,
early_stopping=True
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Beam Summaries:')
for beam, summary in enumerate(summaries, start=1):
print(f' Beam #{beam} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Problems with Beam Search Beam search and all deterministic generating approaches are rarely surprising. This leads to an almost robotic-sounding result, where only high-probability English words are selected. In reality, language is often surprising, with unlikely words showing up all the time! Therefore, we want to consider approaches which randomly sample from the conditional distribution produced by our model. Sampling Now there are multiple approaches to random sampling from the $P(w_t|w_{1:t-1})$ conditional distribution. The first approach is just to sample directly from it, as the toy sketch below illustrates:
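This sketch is illustrative only (a toy vocabulary and made-up probabilities, unrelated to PEGASUS):
###Code
import numpy as np
# Illustrative sketch only: direct sampling from a toy next-token distribution.
vocab = ['the', 'cat', 'sat', 'flew']
probs = [0.5, 0.3, 0.15, 0.05]
rng = np.random.default_rng(0)
print(rng.choice(vocab, size=10, p=probs))  # low-probability tokens can still show up
###Output
_____no_output_____
###Markdown
Now we sample summaries directly from PEGASUS: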
###Code
summaries = run_model(
example,
do_sample=True,
num_return_sequences=5,
top_k=0
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Sampled Summaries:')
for sample, summary in enumerate(summaries, start=1):
print(f' Sample #{sample} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Temperature We can modify $P(w_t|w_{1:t-1})$ to be more or less "surprising" by making the distribution sharper or flatter with the `temperature` parameter. A lower temperature ($t<1.0$) leads to a sharper distribution, which has a higher probability of sampling high-probability tokens.
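The following is a small illustrative sketch (made-up logits, not part of the PEGASUS pipeline) of how temperature reshapes a toy next-token distribution:
###Code
import numpy as np
# Illustrative sketch only: how temperature reshapes a toy next-token distribution.
logits = np.array([2.0, 1.0, 0.5, 0.1])
def softmax_with_temperature(logits, t):
    scaled = logits / t
    exp = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    return exp / exp.sum()
for t in (0.7, 1.0, 1.3):
    print(f't={t}:', np.round(softmax_with_temperature(logits, t), 3))
# Lower t concentrates probability on the top token; higher t flattens the distribution.
###Output
_____no_output_____
###Markdown
Now we sample from PEGASUS with a lower temperature: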
###Code
summaries = run_model(
example,
do_sample=True,
num_return_sequences=5,
top_k=0,
temperature=0.7
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Low Temperature Sampled Summaries:')
for sample, summary in enumerate(summaries, start=1):
print(f' Sample #{sample} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
A higher temperature ($t>1.0$) leads to a flatter distribution, which will have a higher probability of sampling from low probability tokens.
###Code
summaries = run_model(
example,
do_sample=True,
num_return_sequences=5,
top_k=0,
temperature=1.3
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('High Temperature Sampled Summaries:')
for sample, summary in enumerate(summaries, start=1):
print(f' Sample #{sample} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Top-K Sampling Top-K sampling restricts $P(w_t|w_{1:t-1})$ to only allow sampling from the top-k highest-probability tokens. In effect, this rebalances $P(w_t|w_{1:t-1})$: the probability mass of the non-top-k tokens is redistributed over the top-k tokens, so that only top-k tokens get sampled. This avoids sampling extremely low-probability tokens and thus potentially ruining the sequence.
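The sketch below (made-up probabilities, not part of the PEGASUS pipeline) shows the filtering step on a toy distribution:
###Code
import numpy as np
# Illustrative sketch only: top-k filtering of a toy next-token distribution with k=3.
probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
k = 3
keep = np.argsort(probs)[::-1][:k]  # indices of the k most likely tokens
filtered = np.zeros_like(probs)
filtered[keep] = probs[keep]
filtered /= filtered.sum()          # redistribute the removed mass over the kept tokens
print(np.round(filtered, 3))        # only the 3 most likely tokens can now be sampled
###Output
_____no_output_____
###Markdown
Now we run Top-K sampling with PEGASUS: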
###Code
summaries = run_model(
example,
do_sample=True,
num_return_sequences=5,
top_k=50,
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Top-K Sampling Summaries:')
for sample, summary in enumerate(summaries, start=1):
print(f' Sample #{sample} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Top-P Sampling Top-P sampling restricts $P(w_t|w_{1:t-1})$ so that sampling is only allowed from the smallest set of tokens whose total probability mass reaches p. In other words, the probabilities $P(w_t|w_{1:t-1})$ are sorted from largest to smallest, and only the tokens within the first top-p probability mass are available to be sampled; the probability mass is then redistributed among these top-p tokens.
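The sketch below (made-up probabilities, not part of the PEGASUS pipeline) shows the nucleus computation on a toy distribution:
###Code
import numpy as np
# Illustrative sketch only: nucleus (top-p) filtering of a toy distribution with p=0.9.
probs = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])  # sorted, most likely first
cutoff = np.searchsorted(np.cumsum(probs), 0.9) + 1      # smallest prefix holding >= 0.9 mass
nucleus = probs[:cutoff] / probs[:cutoff].sum()          # renormalize within the nucleus
print(cutoff, np.round(nucleus, 3))                      # 4 tokens survive the filter
###Output
_____no_output_____
###Markdown
Now we run Top-P sampling with PEGASUS: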
###Code
summaries = run_model(
example,
do_sample=True,
num_return_sequences=5,
top_p=0.90,
top_k=0
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Top-P Sampling Summaries:')
for sample, summary in enumerate(summaries, start=1):
print(f' Sample #{sample} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Top-P and Top-K Sampling We can also perform both Top-P and Top-K sampling together, which places multiple constraints on which tokens we can sample from $P(w_t|w_{1:t-1})$.
###Code
summaries = run_model(
example,
do_sample=True,
num_return_sequences=5,
top_p=0.90,
top_k=50,
)
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
print('Top-P AND Top-K Sampling Summaries:')
for sample, summary in enumerate(summaries, start=1):
print(f' Sample #{sample} Summary:')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Examine Summaries Now we will perform the above process for a few examples. We will first define a `generate` function to do all of the above for an example.
###Code
def generate(example, strategy):
if strategy == 'greedy':
summary = run_model(example)[0]
elif strategy == 'beam':
summary = run_model(
example,
num_beams=10,
num_return_sequences=1,
early_stopping=True
)[0]
elif strategy == 'sample':
summary = run_model(
example,
do_sample=True,
num_return_sequences=1,
top_k=0,
)[0]
elif strategy == 'top-k':
summary = run_model(
example,
do_sample=True,
num_return_sequences=1,
top_k=50,
)[0]
elif strategy == 'top-p':
summary = run_model(
example,
do_sample=True,
num_return_sequences=1,
top_p=0.90,
top_k=0,
)[0]
elif strategy == 'top-p-k':
summary = run_model(
example,
do_sample=True,
num_return_sequences=1,
top_p=0.90,
top_k=50,
)[0]
else:
raise ValueError(f'Unknown generator strategy: {strategy}')
return summary
###Output
_____no_output_____
###Markdown
Evaluation Change the example index and view the model's predictions below for 10 different examples. For each example, compare the results for each strategy. Manually judge whether each strategy is correct for each of the 10 examples, for a total of 60 judgements. Discuss how accurately the model summarized the documents and whether the summaries lined up with the annotated summaries of the examples. Include the results in your report.
###Code
#@title Example { run: "auto" }
example_index = 0 #@param {type:"slider", min:0, max:11331, step:1}
strategy = 'greedy' #@param ["greedy", "beam", "sample", "top-k", "top-p", "top-p-k"]
example = ds[example_index]
print('Document: ')
for line in wrap(example['document'], 50):
print(f' {line}')
print('Annotated Summary: ')
for line in wrap(example['summary'], 50):
print(f' {line}')
summary = generate(example, strategy)
print(f'Generated Summary: ')
for line in wrap(summary, 50):
print(f' {line}')
###Output
_____no_output_____
###Markdown
Report Format You should have the following in your report:
| Strategy | Accuracy |
| ----------- | ----------- |
| greedy | ... |
| beam | ... |
| sample | ... |
| top-k | ... |
| top-p | ... |
| top-p-k | ... |
Calculate the accuracy of each summary strategy by adding up the number of correct examples (by your own judgement) and dividing by 10 (the total number of examples you should evaluate). Also include an example prediction that has a judged answer and compare it to the predictions by each strategy. Try to find an example where the strategies differ.
###Code
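# Optional helper (a sketch; the counts below are hypothetical placeholders to fill in):
# tally your manual judgements per strategy and print each accuracy over the 10 examples.
judged_correct = {'greedy': 0, 'beam': 0, 'sample': 0, 'top-k': 0, 'top-p': 0, 'top-p-k': 0}
for strategy, correct in judged_correct.items():
    print(f'{strategy}: {correct / 10:.1f}')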
###Output
_____no_output_____ |
Flowers Recognition/code/obsolete/oldest/gans-n-roses.ipynb | ###Markdown
GANs n´ Roses by Peterson Katagiri Zilli I will show how to build and train a simple _Generative Adversarial Network (GAN)_ and a _Deep Convolutional GAN (DCGAN)_ on a rose image dataset for generating realistic rose images. See also: * https://medium.com/@jonathan_hui/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b * https://github.com/Zackory/Keras-MNIST-GAN/blob/master/mnist_gan.py * https://github.com/eriklindernoren/Keras-GAN The Data First we configure the paths and get the rose image filenames.
###Code
import numpy as np
import os
from glob import glob
import cv2
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D, UpSampling2D, Conv2D
from keras.layers.advanced_activations import LeakyReLU
from keras.models import Sequential, Model
from keras.optimizers import Adam
%matplotlib inline
import matplotlib.pyplot as plt
#PATH = os.path.abspath(os.path.join('..','input', 'flowers', 'flowers', 'rose'))
PATH = os.path.abspath(os.path.join('..','input', 'roseimages', 'roseimages'))
IMGS = glob(os.path.join(PATH, "*.jpg"))
print(len(IMGS)) # number of the rose images
print(IMGS[:10]) # rose images filenames
###Output
_____no_output_____
###Markdown
Then we resize the images to WIDTH pixels wide, HEIGHT pixels high, and DEPTH color channels.
###Code
WIDTH = 28
HEIGHT = 28
DEPTH = 3
def procImages(images):
processed_images = []
# set depth
depth = None
if DEPTH == 1:
depth = cv2.IMREAD_GRAYSCALE
elif DEPTH == 3:
depth = cv2.IMREAD_COLOR
else:
print('DEPTH must be set to 1 or to 3.')
return None
#resize images
for img in images:
base = os.path.basename(img)
full_size_image = cv2.imread(img, depth)
processed_images.append(cv2.resize(full_size_image, (WIDTH, HEIGHT), interpolation=cv2.INTER_CUBIC))
processed_images = np.asarray(processed_images)
# rescale images to [-1, 1]
processed_images = np.divide(processed_images, 127.5) - 1
return processed_images
processed_images = procImages(IMGS)
processed_images.shape
fig, axs = plt.subplots(5, 5)
count = 0
for i in range(5):
for j in range(5):
img = processed_images[count, :, :, :] * 127.5 + 127.5
img = np.asarray(img, dtype=np.uint8)
if DEPTH == 3:
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
axs[i, j].imshow(img)
axs[i, j].axis('off')
count += 1
plt.show()
###Output
_____no_output_____
###Markdown
Building a Simple GAN Model Below we create functions for building a simple dense generator model and a discriminator model.
###Code
# GAN parameters
LATENT_DIM = 100
G_LAYERS_DIM = [256, 512, 1024]
D_LAYERS_DIM = [1024, 512, 256]
BATCH_SIZE = 16
EPOCHS = 1000
LR = 0.0002
BETA_1 = 0.5
def buildGenerator(img_shape):
def addLayer(model, dim):
model.add(Dense(dim))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
model = Sequential()
model.add(Dense(G_LAYERS_DIM[0], input_dim=LATENT_DIM))
model.add(LeakyReLU(alpha=0.2))
model.add(BatchNormalization(momentum=0.8))
for layer_dim in G_LAYERS_DIM[1:]:
addLayer(model, layer_dim)
model.add(Dense(np.prod(img_shape), activation='tanh'))
model.add(Reshape(img_shape))
model.summary()
noise = Input(shape=(LATENT_DIM,))
img = model(noise)
return Model(noise, img)
#g = buildGenerator(processed_images.shape[1:])
def buildDiscriminator(img_shape):
def addLayer(model, dim):
model.add(Dense(dim))
model.add(LeakyReLU(alpha=0.2))
model = Sequential()
model.add(Flatten(input_shape=img_shape))
for layer_dim in D_LAYERS_DIM:
addLayer(model, layer_dim)
model.add(Dense(1, activation='sigmoid'))
model.summary()
img = Input(shape=img_shape)
classification = model(img)
return Model(img, classification)
#d = buildDiscriminator(processed_images.shape[1:])
def buildCombined(g, d):
# fix d for training g in the combined model
d.trainable = False
# g gets z as input and outputs fake_img
z = Input(shape=(LATENT_DIM,))
fake_img = g(z)
# gets the classification of the fake image
gan_output = d(fake_img)
# the combined model for training generator g to fool discriminator d
model = Model(z, gan_output)
model.summary()
return model
def sampleImages(generator):
rows, columns = 5, 5
noise = np.random.normal(0, 1, (rows * columns, LATENT_DIM))
generated_imgs = generator.predict(noise)
fig, axs = plt.subplots(rows, columns)
count = 0
for i in range(rows):
for j in range(columns):
img = generated_imgs[count, :, :, :] * 127.5 + 127.5
img = np.asarray(img, dtype=np.uint8)
if DEPTH == 3:
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
axs[i, j].imshow(img)
axs[i, j].axis('off')
count += 1
plt.show()
#sampleImages(g)
#instantiate the optimizer
optimizer = Adam(LR, BETA_1)
#build the discriminator
d = buildDiscriminator(processed_images.shape[1:])
d.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
#build generator
g = buildGenerator(processed_images.shape[1:])
g.compile(loss='binary_crossentropy', optimizer=optimizer)
#build combined model
c = buildCombined(g, d)
c.compile(loss='binary_crossentropy', optimizer=optimizer)
#training
SAMPLE_INTERVAL = WARNING_INTERVAL = 100
YDis = np.zeros(2 * BATCH_SIZE)
YDis[:BATCH_SIZE] = .9 #Label smoothing
YGen = np.ones(BATCH_SIZE)
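# YGen is all ones because the generator (trained through the frozen discriminator in the
# combined model) tries to make its fake images be classified as real.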
for epoch in range(EPOCHS):
# get a batch of real images
idx = np.random.randint(0, processed_images.shape[0], BATCH_SIZE)
real_imgs = processed_images[idx]
# generate a batch of fake images
noise = np.random.normal(0, 1, (BATCH_SIZE, LATENT_DIM))
fake_imgs = g.predict(noise)
X = np.concatenate([real_imgs, fake_imgs])
# Train discriminator
d.trainable = True
d_loss = d.train_on_batch(X, YDis)
# Train the generator
d.trainable = False
#noise = np.random.normal(0, 1, (BATCH_SIZE, LATENT_DIM))
g_loss = c.train_on_batch(noise, YGen)
# Progress
if (epoch+1) % WARNING_INTERVAL == 0 or epoch == 0:
print ("%d [Discriminator Loss: %f, Acc.: %.2f%%] [Generator Loss: %f]" % (epoch, d_loss[0], 100. * d_loss[1], g_loss))
# If at save interval => save generated image samples
if (epoch+1) % SAMPLE_INTERVAL == 0 or epoch == 0:
sampleImages(g)
###Output
_____no_output_____
###Markdown
Building Deep Convolutional GAN Model
###Code
def buildGeneratorDC(img_shape):
model = Sequential()
model.add(Dense(128 * 7 * 7, activation="relu", input_dim=LATENT_DIM))
model.add(Reshape((7, 7, 128)))
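    # Spatial progression: 7x7 -> UpSampling2D -> 14x14 -> UpSampling2D -> 28x28,
    # matching the WIDTH x HEIGHT x DEPTH images produced by procImages above.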
model.add(UpSampling2D())
model.add(Conv2D(128, kernel_size=3, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(Activation("relu"))
model.add(UpSampling2D())
model.add(Conv2D(64, kernel_size=3, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(Activation("relu"))
model.add(Conv2D(DEPTH, kernel_size=3, padding="same"))
model.add(Activation("tanh"))
model.summary()
noise = Input(shape=(LATENT_DIM,))
img = model(noise)
return Model(noise, img)
def buildDiscriminatorDC(img_shape):
model = Sequential()
model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=img_shape, padding="same"))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
model.add(ZeroPadding2D(padding=((0,1),(0,1))))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.summary()
img = Input(shape=img_shape)
classification = model(img)
return Model(img, classification)
#build the discriminator
dDC = buildDiscriminatorDC(processed_images.shape[1:])
dDC.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
#build generator
gDC = buildGeneratorDC(processed_images.shape[1:])
gDC.compile(loss='binary_crossentropy', optimizer=optimizer)
#build combined model
cDC = buildCombined(gDC, dDC)
cDC.compile(loss='binary_crossentropy', optimizer=optimizer)
#training DC GAN
SAMPLE_INTERVAL = WARNING_INTERVAL = 100
YDis = np.zeros(2 * BATCH_SIZE)
YDis[:BATCH_SIZE] = .9 #Label smoothing
YGen = np.ones(BATCH_SIZE)
for epoch in range(EPOCHS):
# get a batch of real images
idx = np.random.randint(0, processed_images.shape[0], BATCH_SIZE)
real_imgs = processed_images[idx]
# generate a batch of fake images
noise = np.random.normal(0, 1, (BATCH_SIZE, LATENT_DIM))
fake_imgs = gDC.predict(noise)
X = np.concatenate([real_imgs, fake_imgs])
# Train discriminator
dDC.trainable = True
d_loss = dDC.train_on_batch(X, YDis)
# Train the generator
dDC.trainable = False
#noise = np.random.normal(0, 1, (BATCH_SIZE, LATENT_DIM))
g_loss = cDC.train_on_batch(noise, YGen)
# Progress
if (epoch+1) % WARNING_INTERVAL == 0 or epoch == 0:
print ("%d [Discriminator Loss: %f, Acc.: %.2f%%] [Generator Loss: %f]" % (epoch, d_loss[0], 100. * d_loss[1], g_loss))
# If at save interval => save generated image samples
if (epoch+1) % SAMPLE_INTERVAL == 0 or epoch == 0:
sampleImages(gDC)
###Output
_____no_output_____ |