Visualize the new (upsampled) raw data:
fig, axes = plt.subplots(nrows=2, figsize=(8, 6), sharex=True)
iiplot = np.arange(0, 60 * upsampfactor)  # bins of stimulus to plot
ttplot = iiplot * dtStimhi                # time bins of stimulus
axes[0].plot(ttplot, Stimhi[iiplot])
axes[0].set_title('raw stimulus (fine time bins)')
axes[0].set_ylabel('stim intensity')
# Should notice stimulus now constant for many bins in a row

sps, _ = np.histogram(tsp, ttgridhi)  # Bin the spike train and replot binned counts
axes[1].stem(ttplot, sps[iiplot])
axes[1].set_title('binned spike counts')
axes[1].set_ylabel('spike count')
axes[1].set_xlabel('time (s)')
axes[1].set_xlim(ttplot[0], ttplot[-1])
<ipython-input-36-97db7153fd27>:9: UserWarning: In Matplotlib 3.3 individual lines on a stem plot will be added as a LineCollection instead of individual lines. This significantly improves the performance of a stem plot. To remove this warning and switch to the new behaviour, set the "use_line_collection" keyword argument to True. axes[1].stem(ttplot,sps[iiplot])
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Divide data into "training" and "test" sets for cross-validation
trainfrac = .8                                # fraction of data to use for training
ntrain = int(np.ceil(nThi * trainfrac))       # number of training samples
ntest = int(nThi - ntrain)                    # number of test samples
iitest = np.arange(ntest).astype(int)         # time indices for test
iitrain = np.arange(ntest, nThi).astype(int)  # time indices for training
stimtrain = Stimhi[iitrain]                   # training stimulus
stimtest = Stimhi[iitest]                     # test stimulus
spstrain = sps[iitrain]
spstest = sps[iitest]

print('Dividing data into training and test sets:\n')
print('Training: %d samples (%d spikes) \n' % (ntrain, sum(spstrain)))
print('    Test: %d samples (%d spikes)\n' % (ntest, sum(spstest)))
Dividing data into training and test sets: Training: 28800 samples (2109 spikes) Test: 7200 samples (557 spikes)
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Set the number of time bins of stimulus to use for predicting spikes
ntfilt = 20*upsampfactor # Try varying this, to see how performance changes!
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Build the design matrix for training data
Xtrain = np.c_[ np.ones((ntrain,1)), hankel(np.r_[np.zeros(ntfilt-1),stimtrain[:-ntfilt+1]].reshape(-1,1),stimtrain[-ntfilt:])]
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
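To make the `hankel` construction above concrete, here is a small self-contained toy sketch (`stim_toy` and `ntfilt_toy` are hypothetical names and values introduced only for illustration): each row of the resulting design matrix holds the current stimulus bin plus the preceding `ntfilt-1` bins, zero-padded at the start.

```python
# Toy illustration of the lagged-stimulus design matrix built with scipy.linalg.hankel.
import numpy as np
from scipy.linalg import hankel

stim_toy = np.arange(1., 6.)     # toy stimulus: [1, 2, 3, 4, 5]
ntfilt_toy = 3                   # number of stimulus lags per row

first_col = np.r_[np.zeros(ntfilt_toy - 1), stim_toy[:-ntfilt_toy + 1]]
last_row = stim_toy[-ntfilt_toy:]
X_toy = hankel(first_col, last_row)   # row t = [s(t-2), s(t-1), s(t)]
print(X_toy)
```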
Build design matrix for test data
Xtest = np.c_[ np.ones((ntest,1)), hankel(np.r_[np.zeros(ntfilt-1),stimtest[:-ntfilt+1]].reshape(-1,1),stimtest[-ntfilt:])]
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Fit Poisson GLM using ML. Compute the maximum likelihood estimate (using `scipy.optimize.minimize` instead of `sm.GLM`)
sta = (Xtrain.T@spstrain)/np.sum(spstrain) # compute STA for initialization
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Make loss function and minimize
jac_neglogli_poissGLM?

lossfun = lambda prs: neglogli_poissGLM(prs, Xtrain, spstrain, dtStimhi)
jacfun = lambda prs: jac_neglogli_poissGLM(prs, Xtrain, spstrain, dtStimhi)
hessfun = lambda prs: hess_neglogli_poissGLM(prs, Xtrain, spstrain, dtStimhi)
filtML = minimize(lossfun, x0=sta, method='trust-ncg', jac=jacfun, hess=hessfun).x

ttk = np.arange(-ntfilt + 1, 1) * dtStimhi
fig, axes = plt.subplots()
axes.plot(ttk, ttk * 0, 'k')
axes.plot(ttk, filtML[1:])
axes.set_xlabel('time before spike')
axes.set_ylabel('coefficient')
axes.set_title('Maximum likelihood filter estimate')
# Looks bad due to lack of regularization!
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Ridge regression prior

Now let's regularize by adding a penalty on the sum of squared filter coefficients w(i) of the form: penalty(lambda) = lambda*(sum_i w(i)^2), where lambda is known as the "ridge" parameter. As noted in tutorial 3, this is equivalent to placing an iid zero-mean Gaussian prior on the RF coefficients with variance equal to 1/lambda. Lambda is thus the inverse variance or "precision" of the prior.

To set lambda, we'll try a grid of values and use cross-validation (test error) to select which is best.

Set up grid of lambda values (ridge parameters)
lamvals = 2.0 ** np.arange(0, 11, 1)  # it's common to use a log-spaced set of values
nlam = len(lamvals)
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
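The MAP fits below call `neglogposterior`, `jac_neglogposterior`, and `hessian_neglogposterior`, which are defined elsewhere in this tutorial. As a hedged sketch of what such penalized-objective helpers typically look like (the exact signatures and names here are assumptions, suffixed with `_sketch` to avoid clashing with the real ones): the penalty adds a quadratic term 0.5 w' Cinv w to the negative log-likelihood, Cinv w to its gradient, and Cinv to its Hessian.

```python
# Hedged sketch of ridge/smoothing-penalized objective helpers (assumed forms).
import numpy as np

def neglogposterior_sketch(prs, negloglifun, Cinv):
    # negative log-posterior = negative log-likelihood + Gaussian (quadratic) penalty
    return negloglifun(prs) + 0.5 * prs @ Cinv @ prs

def jac_neglogposterior_sketch(prs, jac_negloglifun, Cinv):
    # gradient of the quadratic penalty is Cinv @ prs
    return jac_negloglifun(prs) + Cinv @ prs

def hessian_neglogposterior_sketch(prs, hess_negloglifun, Cinv):
    # Hessian of the quadratic penalty is Cinv itself
    return hess_negloglifun(prs) + Cinv
```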
Precompute some quantities (X'X and X'*y) for training and test data
Imat = np.eye(ntfilt + 1)  # identity matrix of size of filter + const
Imat[0, 0] = 0             # remove penalty on constant dc offset
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Allocate space for train and test errors
negLtrain = np.zeros(nlam)              # training error
negLtest = np.zeros(nlam)               # test error
w_ridge = np.zeros((ntfilt + 1, nlam))  # filters for each lambda
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Define train and test log-likelihood funcs
negLtrainfun = lambda prs: neglogli_poissGLM(prs, Xtrain, spstrain, dtStimhi)
jac_negLtrainfun = lambda prs: jac_neglogli_poissGLM(prs, Xtrain, spstrain, dtStimhi)
hess_negLtrainfun = lambda prs: hess_neglogli_poissGLM(prs, Xtrain, spstrain, dtStimhi)

negLtestfun = lambda prs: neglogli_poissGLM(prs, Xtest, spstest, dtStimhi)
jac_negLtestfun = lambda prs: jac_neglogli_poissGLM(prs, Xtest, spstest, dtStimhi)
hess_negLtestfun = lambda prs: hess_neglogli_poissGLM(prs, Xtest, spstest, dtStimhi)
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Now compute MAP estimate for each ridge parameter
wmap = filtML  # initialize parameter estimate
fig, axes = plt.subplots()
axes.plot(ttk, ttk * 0, 'k')  # initialize plot
for jj in range(nlam):
    # Compute ridge-penalized MAP estimate
    Cinv = lamvals[jj] * Imat  # set inverse prior covariance
    lossfun = lambda prs: neglogposterior(prs, negLtrainfun, Cinv)
    jacfun = lambda prs: jac_neglogposterior(prs, jac_negLtrainfun, Cinv)
    hessfun = lambda prs: hessian_neglogposterior(prs, hess_negLtrainfun, Cinv)
    wmap = minimize(lossfun, x0=wmap, method='trust-ncg', jac=jacfun, hess=hessfun).x

    # Compute negative logli
    negLtrain[jj] = negLtrainfun(wmap)  # training loss
    negLtest[jj] = negLtestfun(wmap)    # test loss

    # store the filter
    w_ridge[:, jj] = wmap

    # plot it
    axes.plot(ttk, wmap[1:])
    axes.set_title('ridge estimate: lambda = %.2f' % lamvals[jj])
    axes.set_xlabel('time before spike (s)')

# note that the estimate "shrinks" down as we increase lambda
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Plot filter estimates and errors for ridge estimates
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(8, 8))
axes[0, 1].plot(ttk, w_ridge[1:, :])
axes[0, 1].set_title('all ridge estimates')
axes[0, 0].semilogx(lamvals, -negLtrain, 'o-')
axes[0, 0].set_title('training logli')
axes[1, 0].semilogx(lamvals, -negLtest, 'o-')
axes[1, 0].set_title('test logli')
axes[1, 0].set_xlabel('lambda')
# Notice that training error gets monotonically worse as we increase lambda
# However, test error has a dip at some optimal, intermediate value.

# Determine which lambda is best by selecting the one with lowest test error
imin = np.argmin(negLtest)
filt_ridge = w_ridge[1:, imin]
axes[1, 1].plot(ttk, ttk * 0, 'k--')
axes[1, 1].plot(ttk, filt_ridge)  # plot the selected (best) ridge filter
axes[1, 1].set_xlabel('time before spike (s)')
axes[1, 1].set_title('best ridge estimate')
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
L2 smoothing prior

Use a penalty on the squared differences between filter coefficients, penalizing large jumps between successive filter elements. This is equivalent to placing an iid zero-mean Gaussian prior on the increments between filter coeffs. (See tutorial 3 for a visualization of the prior covariance.)

This matrix computes differences between adjacent coeffs
Dx1 = (np.diag(-np.ones(ntfilt), 0) + np.diag(np.ones(ntfilt - 1), 1))[:-1, :]
Dx = Dx1.T @ Dx1  # computes squared diffs
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
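As a quick sanity check on the penalty matrix above, here is a small self-contained sketch (toy filter length and weights, all hypothetical values) verifying that the quadratic form w' Dx w equals the sum of squared differences between adjacent coefficients.

```python
# Toy check that w @ Dx @ w is the sum of squared adjacent-coefficient differences.
import numpy as np

ntfilt_toy = 4
Dx1_toy = (np.diag(-np.ones(ntfilt_toy), 0) + np.diag(np.ones(ntfilt_toy - 1), 1))[:-1, :]
Dx_toy = Dx1_toy.T @ Dx1_toy

w_toy = np.array([0.0, 1.0, 3.0, 2.0])
print(w_toy @ Dx_toy @ w_toy)        # 1 + 4 + 1 = 6
print(np.sum(np.diff(w_toy) ** 2))   # same value, computed directly
```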
Select smoothing penalty by cross-validation
lamvals = 2 ** np.arange(1, 15)  # grid of lambda values (smoothing parameters)
nlam = len(lamvals)
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Embed `Dx` matrix in matrix with one extra row/column for constant coeff
D = block_diag(0,Dx)
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Allocate space for train and test errors
negLtrain_sm = np.zeros(nlam)            # training error
negLtest_sm = np.zeros(nlam)             # test error
w_smooth = np.zeros((ntfilt + 1, nlam))  # filters for each lambda
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Now compute MAP estimate for each smoothing parameter
fig, axes = plt.subplots()
axes.plot(ttk, ttk * 0, 'k')  # initialize plot
wmap = filtML                 # initialize with ML fit
for jj in range(nlam):
    # Compute MAP estimate
    Cinv = lamvals[jj] * D  # set inverse prior covariance
    lossfun = lambda prs: neglogposterior(prs, negLtrainfun, Cinv)
    jacfun = lambda prs: jac_neglogposterior(prs, jac_negLtrainfun, Cinv)
    hessfun = lambda prs: hessian_neglogposterior(prs, hess_negLtrainfun, Cinv)
    wmap = minimize(lossfun, x0=wmap, method='trust-ncg', jac=jacfun, hess=hessfun).x

    # Compute negative logli
    negLtrain_sm[jj] = negLtrainfun(wmap)  # training loss
    negLtest_sm[jj] = negLtestfun(wmap)    # test loss

    # store the filter
    w_smooth[:, jj] = wmap

    # plot it
    axes.plot(ttk, wmap[1:])
    axes.set_title('smoothing estimate: lambda = %.2f' % lamvals[jj])
    axes.set_xlabel('time before spike (s)')
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Plot filter estimates and errors for smoothing estimates
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(8, 8))
axes[0, 1].plot(ttk, w_smooth[1:, :])
axes[0, 1].set_title('all smoothing estimates')
axes[0, 0].semilogx(lamvals, -negLtrain_sm, 'o-')
axes[0, 0].set_title('training LL')
axes[1, 0].semilogx(lamvals, -negLtest_sm, 'o-')
axes[1, 0].set_title('test LL')
axes[1, 0].set_xlabel('lambda')
# Notice that training error gets monotonically worse as we increase lambda
# However, test error has a dip at some optimal, intermediate value.

# Determine which lambda is best by selecting the one with lowest test error
imin = np.argmin(negLtest_sm)
filt_smooth = w_smooth[1:, imin]
axes[1, 1].plot(ttk, ttk * 0, 'k--')
axes[1, 1].plot(ttk, filt_ridge, label='ridge')
axes[1, 1].plot(ttk, filt_smooth, label='L2 smoothing')
axes[1, 1].set_xlabel('time before spike (s)')
axes[1, 1].set_title('best smoothing estimate')
axes[1, 1].legend()
# clearly the "L2 smoothing" filter looks better by eye!
_____no_output_____
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Last, let's see which one actually achieved lower test error
print('\nBest ridge test error:     %.5f' % (-min(negLtest)))
print('Best smoothing test error: %.5f' % (-min(negLtest_sm)))
Best ridge test error: 2093.80432 Best smoothing test error: 2095.67887
MIT
mypython/t4_regularization_PoissonGLM.ipynb
disadone/GLMspiketraintutorial
Bank Note Authentication

Data were extracted from images taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400x400 pixels. Due to the object lens and the distance to the investigated object, gray-scale pictures with a resolution of about 660 dpi were obtained. A Wavelet Transform tool was used to extract features from the images.
# dataset link: https://kaggle.com/ritesaluja/bank-note-authentication-uci-data
import pandas as pd
import numpy as np

df = pd.read_csv('BankNote_Authentication.csv')
df.head()
df.tail()
df.describe()

# y = dependent and x = independent features
x = df.iloc[:, :-1]  # everything in the dataset except the last column
y = df.iloc[:, -1]   # only the last column of the dataset
x.head()
y.head()

# train test split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

# Implementing a Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier()
classifier.fit(x_train, y_train)

# prediction
y_pred = classifier.predict(x_test)

# checking for accuracy
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, y_pred)
score

# creating a pickle file using serialization
import pickle
pickle_file = open('./classifier.pkl', 'wb')
pickle.dump(classifier, pickle_file)
pickle_file.close()

classifier.predict([[2, 3, 4, 1]])
_____no_output_____
MIT
bank_note_auth.ipynb
josh-boat365/docker-ml
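Since the cell above serializes the model with pickle, a natural follow-up is reloading it for inference. The sketch below assumes the `classifier.pkl` file written above exists and that the feature order matches the CSV columns (variance, skewness, curtosis, entropy); the sample input is the same toy values used above.

```python
# Hedged sketch: reload the pickled model and reuse it for a prediction.
import pickle

with open('./classifier.pkl', 'rb') as f:
    loaded_classifier = pickle.load(f)

sample = [[2, 3, 4, 1]]   # same toy input as above
print(loaded_classifier.predict(sample))
```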
Python Programming Course

Contents 📚

This course is split into several notebooks (chapters):
* [00](00.ipynb) Introduction to Python and how to start running it in Google Colab
* [01](01.ipynb) Basic data types and operations (Numbers and Strings)
* [02](02.ipynb) String manipulation
* [03](03.ipynb) Data structures: Lists and Tuples
* [04](04.ipynb) Data structures (continued): dictionaries
* [05](05.ipynb) Flow control: if, for, while, and try statements
* [06](06.ipynb) Functions
* [07](07.ipynb) Classes and basic Object-Oriented Programming (OOP)

This is an introductory tutorial to Python 3. For a cheat sheet of all of Python's syntax, the [Quick Reference Card](http://www.cs.put.poznan.pl/csobaniec/software/python/py-qrc.html) may be useful. A more detailed version of this tutorial is available in the official Python documentation: https://docs.python.org/3/tutorial/

Introduction: Google Colab

![Foo](https://raw.githubusercontent.com/domingo2000/Python-Lectures/master/pictures/colab_logo.png)

Installation 🖥️

For the purposes of this course we will work with an online tool called Google Colab, which has many advantages over a local Python installation, including:
1. An online service that is the same for everyone, regardless of operating system
2. Version control: you can always go back, just like in a Google Doc
3. Text and code in one place: you can explain your programs with "Markdown" and write code in a single environment
4. Performance: Google Colab runs the code for you and gives you resources, no matter how weak your processor or how little RAM you have
5. It's free

![Foo](https://github.com/domingo2000/Python-Lectures/blob/master/pictures/its_free.png?raw=true)

Opening a Notebook from Google Colab 🟠⚪

To open a notebook from the course material, follow these instructions.

Instructions to open a notebook from the material:
1. Go to https://colab.research.google.com in a new tab
2. Select the "GitHub" option in the window that appears (if you do not see that window, go to step 2.1 below and then continue with the remaining steps)
![tutorial_colab_1-2.png](attachment:tutorial_colab_1-2.png)
3. Copy the following link https://github.com/domingo2000/Python-Lectures and paste it into the first text field, then press the magnifying glass.
![tutorial_colab_2.png](attachment:tutorial_colab_2.png)
4. Finally, select the .ipynb file you want to open, in this case 00. Done, now just keep reading, but in Google Colab!
![tutorial_colab_3.png](attachment:tutorial_colab_3.png)
2.1. Select "Archivo" or "File": File > Open Notebook, then go back to step 2.
![tutorial_colab_4.png](attachment:tutorial_colab_4.png)

Jupyter Notebooks and Markdown 📕

When you work in Colab you are, under the hood, running a Jupyter notebook. Notebooks work like a workbook in which you can write text or program code in separate cells.

My first code cell ⌨️

First we will clear everything that runs automatically when you open the file; to do so, click the following menus in this order.
Edit > Clear all Output
![tutorial_colab_5.png](attachment:tutorial_colab_5.png)

Then, to run each code cell in Colab you just have to click the button ![Foo](https://github.com/domingo2000/Python-Lectures/blob/master/pictures/boton_play_colab.png?raw=true). Alternatively, you can press `shift + enter ↵` to run the cell.

Below is your first Python code, with the classic ``Hello World!``
print("Hello World!")
Hello World!
CC-BY-3.0
00.ipynb
domingo2000/Python-Lectures
Great, you just ran your first Python program! 😀 💻💻

Markdown #️⃣

Markdown is a text formatting language that aims to make text easy to read both in the "source" and in the output it produces. All of the text in this tutorial is written in Markdown so you get an idea of what can be done with it. If you double-click on this cell you will see how the Markdown is structured; just like code cells, you can run it by pressing `shift + enter ↵` or by clicking on a different cell.

Basic structure

You can write plain text like this. You can write headings of different sizes using the "#" character. With this you should already be able to express your ideas in text blocks, but with the following you can add more emphasis and do a better job:
* You can make your text **bold** using \*\*text\*\* (double-click this cell to see how these tags are written)
* You can make your text *italic* using \*text\*
* You can apply ***both*** using \*\*\*text\*\*\*

You can make lists of things using 1. and *:
1. Item 1
2. Item 2
3. Item 3
* Item 3
* Item 2
* Item 1

If you want to look up more things you can do in Markdown, go to https://www.markdownguide.org/basic-syntax/. If you want to know more about Markdown, you can find more information at https://es.wikipedia.org/wiki/Markdown

What is Python?

![Foo](https://github.com/domingo2000/Python-Lectures/blob/master/pictures/python_logo.png?raw=true)

Python is a modern, robust, high-level programming language that today is widely used in a variety of scientific applications and in data analysis. It has the advantage of being quite intuitive and easy to use even if you are new to programming. On the other hand, it has the drawback of not being as fast as languages like C++ or Java, but it is much faster to write because of its syntax (the way code is written).

Next, if you run the following cell, it will display a text with the philosophy behind Python, which underpins the language and is also great advice encouraging you to write **better code**.
import this
The Zen of Python, by Tim Peters Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!
CC-BY-3.0
00.ipynb
domingo2000/Python-Lectures
Josephson Junction (Dolan) We'll be creating a Dolan style Josephson Junction.
# So, let us dive right in. For convenience, let's begin by enabling
# automatic reloading of modules when they change.
%load_ext autoreload
%autoreload 2

import qiskit_metal as metal
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, open_docs

# Each time you create a new quantum circuit design,
# you start by instantiating a QDesign class.
# The design class `DesignPlanar` is best for 2D circuit designs.
design = designs.DesignPlanar()

# Launch Qiskit Metal GUI to interactively view, edit, and simulate QDesign: Metal GUI
gui = MetalGUI(design)
_____no_output_____
Apache-2.0
docs/circuit-examples/A.Qubits/08-JJ-Dolan.ipynb
Antonio-Aguiar/qiskit-metal
A Dolan-style Josephson junction

You can create a Dolan-style Josephson junction from the QComponent library, `qiskit_metal.qlibrary.qubits`. `jj_dolan.py` is the file containing our Josephson junction, so `jj_dolan` is the module we import. The `jj_dolan` class is our Josephson junction. Like all quantum components, `jj_dolan` inherits from `QComponent`.
from qiskit_metal.qlibrary.qubits.JJ_Dolan import jj_dolan

# Be aware of the default_options that can be overridden by the user.
design.overwrite_enabled = True
jj2 = jj_dolan(design, 'JJ2', options=dict(x_pos="0.1", y_pos="0.0"))
gui.rebuild()
gui.autoscale()
gui.zoom_on_components(['JJ2'])

# Save screenshot as a .png formatted file.
gui.screenshot()

# Screenshot the canvas only as a .png formatted file.
gui.figure.savefig('shot.png')

from IPython.display import Image, display
_disp_ops = dict(width=500)
display(Image('shot.png', **_disp_ops))
_____no_output_____
Apache-2.0
docs/circuit-examples/A.Qubits/08-JJ-Dolan.ipynb
Antonio-Aguiar/qiskit-metal
Closing the Qiskit Metal GUI
gui.main_window.close()
_____no_output_____
Apache-2.0
docs/circuit-examples/A.Qubits/08-JJ-Dolan.ipynb
Antonio-Aguiar/qiskit-metal
Creating Variables

Unlike other programming languages, Python has no command for declaring a variable. A variable is created the moment you first assign a value to it.
x = 5
y = "I TRAIN TECHNOLOGY"
print(x)
print(y)

# Variables do not need to be declared with any particular type
# and can even change type after they have been set.
x = 4        # x is of type int
x = "Sally"  # x is now of type str
print(x)
Sally
MIT
1. Python Variables.ipynb
sivacheetas/matplotlib
Variable Names

1. A variable can have a short name (like x and y) or a more descriptive name (age, carname, total_volume).

Rules for Python variables:
2. A variable name must start with a letter or the underscore character
3. A variable name cannot start with a number
4. A variable name can only contain alpha-numeric characters and underscores (A-z, 0-9, and _)
5. Variable names are case-sensitive (age, Age and AGE are three different variables)

NOTE: Remember that variables are case-sensitive
# Output Variables
# The Python print statement is often used to output variables.
# To combine both text and a variable, Python uses the + character:
x = "Scripting Programing"
print("Python is ", x, "Language")

x = "Python is "
y = "awesome"
z = x + y
print(z)

# For numbers, the + character works as a mathematical operator:
x = 5
y = 10
print(x + y)

# Create a variable named carname and assign the value Volvo to it.
car = "Volvo"
print(car)
Volvo
MIT
1. Python Variables.ipynb
sivacheetas/matplotlib
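As a small illustration of rule 5 above (this is an added sketch, not part of the original notebook), the three names below are independent variables because Python treats names case-sensitively.

```python
# Case sensitivity: age, Age and AGE are three different variables.
age = 25
Age = 30
AGE = 35
print(age, Age, AGE)   # 25 30 35
```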
Converting Rating.dat to Rating.csv
import pandas as pd  # pandas is assumed to be imported earlier in the notebook

ratings_dataframe = pd.read_table("ratings.dat", sep="::")
ratings_dataframe.to_csv("ratings.csv", index=False)
ratings_dataframe = pd.read_csv("ratings.csv", header=None)
ratings_dataframe.columns = ["UserID", "MovieID", "Rating", "Timestamp"]
ratings_dataframe.columns
print(ratings_dataframe.shape)
ratings_dataframe.to_csv("ratings.csv", index=False)
_____no_output_____
MIT
DatfiletoCSV.ipynb
karangupta26/Movie-Recommendation-system
Converting Movies.dat to Movies.csv
movies_dataframe = pd.read_table("movies.dat", sep="::")
movies_dataframe.to_csv("movies.csv", index=False)
movies_dataframe = pd.read_csv("movies.csv", header=None)
movies_dataframe.columns = ["MovieID", "Title", "Genres"]
movies_dataframe.columns
print(movies_dataframe.shape)
movies_dataframe.to_csv("movies.csv", index=False)
(3883, 3)
MIT
DatfiletoCSV.ipynb
karangupta26/Movie-Recommendation-system
Converting User.dat to User.csv
users_dataframe = pd.read_table("users.dat", sep="::")
users_dataframe.to_csv("users.csv", index=False)
users_dataframe = pd.read_csv("users.csv", header=None)
users_dataframe.columns = ["UserID", "Gender", "Age", "Occupation", "Zip-code"]
users_dataframe.columns
users_dataframe.to_csv("users.csv", index=False)
_____no_output_____
MIT
DatfiletoCSV.ipynb
karangupta26/Movie-Recommendation-system
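The cells above write the CSV and then read it back just to attach column names. A more direct single-pass variant is sketched below (the column names are the ones assumed above from the MovieLens-style layout, and `engine="python"` is used because the multi-character "::" separator is handled by the Python parser).

```python
# Hedged sketch: one-step .dat -> .csv conversion with named columns.
import pandas as pd

users = pd.read_csv(
    "users.dat",
    sep="::",
    engine="python",
    header=None,
    names=["UserID", "Gender", "Age", "Occupation", "Zip-code"],
)
users.to_csv("users.csv", index=False)
```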
WeatherPy
----
Observations:
1. In the northern hemisphere the temperature decreases as latitude increases, so as we move away from the equator toward the north the temperature drops.
2. In the southern hemisphere, the temperature increases as you get closer to the equator.
3. In the northern hemisphere, the humidity increases as you move away from the equator (0 latitude).
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress

# Import API key
from api_keys import weather_api_key

# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy

# Save config information.
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "Imperial"

# Build partial query URL
query_url = f"{url}appid={weather_api_key}&units={units}&q="

# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)

# Output File (CSV)
output_data_file = "cities.csv"
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Generate Cities List
# Lists for holding lat_lngs and cities
lat_lngs = []
cities = []

# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)

# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
    city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name

    # If the city is unique, then add it to our cities list
    if city not in cities:
        cities.append(city)

# Print the city count to confirm sufficient count
len(cities)
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
# set lists for the dataframe
city_two = []
cloudinesses = []
dates = []
humidities = []
lats = []
lngs = []
max_temps = []
wind_speeds = []
countries = []

# set initial count quantities for organization
count_one = 0
set_one = 1

# loops for creating dataframe columns
for city in cities:
    try:
        response = requests.get(query_url + city.replace(" ", "&")).json()
        cloudinesses.append(response['clouds']['all'])
        countries.append(response['sys']['country'])
        dates.append(response['dt'])
        humidities.append(response['main']['humidity'])
        lats.append(response['coord']['lat'])
        lngs.append(response['coord']['lon'])
        max_temps.append(response['main']['temp_max'])
        wind_speeds.append(response['wind']['speed'])
        if count_one > 48:
            count_one = 1
            set_one += 1
            city_two.append(city)
        else:
            count_one += 1
            city_two.append(city)
        print(f"Processing Record {count_one} of Set {set_one} | {city}")
    except Exception:
        print("City not found. Skipping...")

print("------------------------------\nData Retrieval Complete\n------------------------------")
Processing Record 1 of Set 1 | vaini Processing Record 2 of Set 1 | punta arenas Processing Record 3 of Set 1 | cruzilia Processing Record 4 of Set 1 | mahibadhoo Processing Record 5 of Set 1 | mys shmidta Processing Record 6 of Set 1 | castro Processing Record 7 of Set 1 | lebu Processing Record 8 of Set 1 | butaritari Processing Record 9 of Set 1 | rikitea Processing Record 10 of Set 1 | turangi Processing Record 11 of Set 1 | norman wells Processing Record 12 of Set 1 | ushuaia Processing Record 13 of Set 1 | san cristobal Processing Record 14 of Set 1 | san luis Processing Record 15 of Set 1 | saint-philippe Processing Record 16 of Set 1 | mugumu Processing Record 17 of Set 1 | port alfred Processing Record 18 of Set 1 | dikson Processing Record 19 of Set 1 | naryan-mar City not found. Skipping... Processing Record 20 of Set 1 | ogembo Processing Record 21 of Set 1 | port hedland City not found. Skipping... Processing Record 22 of Set 1 | thompson Processing Record 23 of Set 1 | provideniya Processing Record 24 of Set 1 | sukhumi Processing Record 25 of Set 1 | airai Processing Record 26 of Set 1 | arraial do cabo Processing Record 27 of Set 1 | esperance City not found. Skipping... City not found. Skipping... Processing Record 28 of Set 1 | kaitangata City not found. Skipping... Processing Record 29 of Set 1 | ponta do sol Processing Record 30 of Set 1 | ribeira grande City not found. Skipping... Processing Record 31 of Set 1 | hermanus Processing Record 32 of Set 1 | bredasdorp Processing Record 33 of Set 1 | verkh-usugli Processing Record 34 of Set 1 | mahebourg Processing Record 35 of Set 1 | severo-kurilsk Processing Record 36 of Set 1 | mataura Processing Record 37 of Set 1 | haines junction Processing Record 38 of Set 1 | port hardy Processing Record 39 of Set 1 | praia Processing Record 40 of Set 1 | ancud Processing Record 41 of Set 1 | albany City not found. Skipping... Processing Record 42 of Set 1 | upernavik Processing Record 43 of Set 1 | necocli Processing Record 44 of Set 1 | ambulu Processing Record 45 of Set 1 | souillac Processing Record 46 of Set 1 | san patricio Processing Record 47 of Set 1 | kushmurun Processing Record 48 of Set 1 | busselton Processing Record 49 of Set 1 | vao Processing Record 1 of Set 2 | inverness City not found. Skipping... Processing Record 2 of Set 2 | gweta Processing Record 3 of Set 2 | longyearbyen Processing Record 4 of Set 2 | clyde river Processing Record 5 of Set 2 | usvyaty Processing Record 6 of Set 2 | grand river south east Processing Record 7 of Set 2 | amahai Processing Record 8 of Set 2 | kodiak Processing Record 9 of Set 2 | kapaa Processing Record 10 of Set 2 | lavrentiya Processing Record 11 of Set 2 | cherskiy Processing Record 12 of Set 2 | hobart Processing Record 13 of Set 2 | sept-iles Processing Record 14 of Set 2 | penapolis Processing Record 15 of Set 2 | carutapera Processing Record 16 of Set 2 | tecoanapa Processing Record 17 of Set 2 | deer lake Processing Record 18 of Set 2 | itarema Processing Record 19 of Set 2 | bambous virieux Processing Record 20 of Set 2 | sasykoli Processing Record 21 of Set 2 | hilo City not found. Skipping... 
Processing Record 22 of Set 2 | qaanaaq Processing Record 23 of Set 2 | yellowknife Processing Record 24 of Set 2 | cacapava do sul Processing Record 25 of Set 2 | fukue Processing Record 26 of Set 2 | andra Processing Record 27 of Set 2 | bengkulu Processing Record 28 of Set 2 | yangshe Processing Record 29 of Set 2 | gondanglegi Processing Record 30 of Set 2 | les cayes Processing Record 31 of Set 2 | bluff Processing Record 32 of Set 2 | katherine Processing Record 33 of Set 2 | yuzhno-kurilsk Processing Record 34 of Set 2 | luderitz Processing Record 35 of Set 2 | yokadouma Processing Record 36 of Set 2 | khatanga City not found. Skipping... Processing Record 37 of Set 2 | atuona Processing Record 38 of Set 2 | bethel Processing Record 39 of Set 2 | ilulissat Processing Record 40 of Set 2 | saint-pierre Processing Record 41 of Set 2 | sayyan Processing Record 42 of Set 2 | moose factory Processing Record 43 of Set 2 | geraldton Processing Record 44 of Set 2 | puerto ayora City not found. Skipping... Processing Record 45 of Set 2 | tuktoyaktuk Processing Record 46 of Set 2 | chapleau City not found. Skipping... Processing Record 47 of Set 2 | munirabad Processing Record 48 of Set 2 | tevriz Processing Record 49 of Set 2 | saskylakh Processing Record 1 of Set 3 | jamestown Processing Record 2 of Set 3 | camana Processing Record 3 of Set 3 | srikakulam Processing Record 4 of Set 3 | lalawigan Processing Record 5 of Set 3 | cape town Processing Record 6 of Set 3 | hami City not found. Skipping... Processing Record 7 of Set 3 | montoro Processing Record 8 of Set 3 | sistranda Processing Record 9 of Set 3 | georgetown Processing Record 10 of Set 3 | muisne Processing Record 11 of Set 3 | burnie Processing Record 12 of Set 3 | hambantota City not found. Skipping... Processing Record 13 of Set 3 | port elizabeth Processing Record 14 of Set 3 | apeldoorn Processing Record 15 of Set 3 | saint-augustin Processing Record 16 of Set 3 | kohlu Processing Record 17 of Set 3 | nabire Processing Record 18 of Set 3 | bonavista City not found. Skipping... Processing Record 19 of Set 3 | aranos Processing Record 20 of Set 3 | simao Processing Record 21 of Set 3 | tautira Processing Record 22 of Set 3 | umkomaas Processing Record 23 of Set 3 | torbay Processing Record 24 of Set 3 | tingi City not found. Skipping... Processing Record 25 of Set 3 | hithadhoo Processing Record 26 of Set 3 | nikolskoye Processing Record 27 of Set 3 | pangnirtung Processing Record 28 of Set 3 | cabo san lucas City not found. Skipping... Processing Record 29 of Set 3 | vanimo Processing Record 30 of Set 3 | zhucheng Processing Record 31 of Set 3 | burgeo Processing Record 32 of Set 3 | grand gaube Processing Record 33 of Set 3 | ukiah City not found. Skipping... Processing Record 34 of Set 3 | sitka Processing Record 35 of Set 3 | pacasmayo Processing Record 36 of Set 3 | timizart Processing Record 37 of Set 3 | guanambi Processing Record 38 of Set 3 | matagami Processing Record 39 of Set 3 | pochutla Processing Record 40 of Set 3 | karratha Processing Record 41 of Set 3 | marquette Processing Record 42 of Set 3 | chuy Processing Record 43 of Set 3 | kokstad Processing Record 44 of Set 3 | banda aceh Processing Record 45 of Set 3 | dingle Processing Record 46 of Set 3 | leh Processing Record 47 of Set 3 | hualmay Processing Record 48 of Set 3 | new norfolk Processing Record 49 of Set 3 | avarua Processing Record 1 of Set 4 | padang Processing Record 2 of Set 4 | bari Processing Record 3 of Set 4 | turukhansk City not found. 
Skipping... City not found. Skipping... Processing Record 4 of Set 4 | kruisfontein Processing Record 5 of Set 4 | fernie Processing Record 6 of Set 4 | enshi City not found. Skipping... Processing Record 7 of Set 4 | salalah Processing Record 8 of Set 4 | rodas Processing Record 9 of Set 4 | shatura Processing Record 10 of Set 4 | kampot City not found. Skipping... Processing Record 11 of Set 4 | luxor Processing Record 12 of Set 4 | moerai Processing Record 13 of Set 4 | srednekolymsk Processing Record 14 of Set 4 | novita City not found. Skipping... Processing Record 15 of Set 4 | bikin Processing Record 16 of Set 4 | norton shores Processing Record 17 of Set 4 | warmbad Processing Record 18 of Set 4 | kazachinskoye Processing Record 19 of Set 4 | namibe Processing Record 20 of Set 4 | yanji Processing Record 21 of Set 4 | poplar bluff City not found. Skipping... Processing Record 22 of Set 4 | miranda City not found. Skipping... Processing Record 23 of Set 4 | byron bay Processing Record 24 of Set 4 | ngunguru Processing Record 25 of Set 4 | chongwe Processing Record 26 of Set 4 | shaoxing Processing Record 27 of Set 4 | maceio Processing Record 28 of Set 4 | korla Processing Record 29 of Set 4 | jakar Processing Record 30 of Set 4 | mineiros Processing Record 31 of Set 4 | chapais Processing Record 32 of Set 4 | soyo Processing Record 33 of Set 4 | sarkand Processing Record 34 of Set 4 | stonewall Processing Record 35 of Set 4 | nicoya Processing Record 36 of Set 4 | barrow Processing Record 37 of Set 4 | mwense Processing Record 38 of Set 4 | bati
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
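To see which parts of the OpenWeatherMap JSON response the loop above actually reads, the sketch below issues a single request for one example city (the city name is just an illustrative choice; `requests` and `query_url` come from the setup cell, and a valid API key is assumed).

```python
# Hedged sketch: inspect the response fields used in the collection loop.
import requests

sample_city = "london"   # example city, hypothetical choice
sample = requests.get(query_url + sample_city).json()

print(sample["coord"]["lat"], sample["coord"]["lon"])           # latitude / longitude
print(sample["main"]["temp_max"], sample["main"]["humidity"])   # max temp / humidity
print(sample["clouds"]["all"], sample["wind"]["speed"], sample["sys"]["country"])
```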
Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
# create a dictionary for establishing the dataframe
weather_dict = {
    "City": city_two,
    "Cloudiness": cloudinesses,
    "Country": countries,
    "Date": dates,
    "Humidity": humidities,
    "Lat": lats,
    "Lng": lngs,
    "Max Temp": max_temps,
    "Wind Speed": wind_speeds,
}
weather_dict

# establish dataframe
weather_dataframe = pd.DataFrame(weather_dict)
weather_dataframe.to_csv(output_data_file, index=False)

# show the top of the dataframe
weather_dataframe.head()
weather_dataframe.count()
weather_dataframe.head()
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.

Latitude vs. Temperature Plot
time.strftime('%x')
plt.scatter(weather_dataframe["Lat"], weather_dataframe["Max Temp"],
            edgecolors="black", facecolors="skyblue")
plt.title(f"City Latitude vs. Max Temperature {time.strftime('%x')}")
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("Figures/fig1.png")
plt.show()
# This graph analyzes latitude vs temperature
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Latitude vs. Humidity Plot
plt.scatter(weather_dataframe["Lat"], weather_dataframe["Humidity"],
            edgecolors="black", facecolors="skyblue")
plt.title("City Latitude vs. Humidity (%s)" % time.strftime('%x'))
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.ylim(15, 105)
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("Figures/fig2.png")
plt.show()
# Analyzing Humidity vs Latitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Latitude vs. Cloudiness Plot
plt.scatter(weather_dataframe["Lat"], weather_dataframe["Cloudiness"],
            edgecolors="black", facecolors="skyblue")
plt.title("City Latitude vs. Cloudiness (%s)" % time.strftime('%x'))
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("Figures/fig3.png")
plt.show()
# Analyzes the latitude and cloudiness of the cities
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Latitude vs. Wind Speed Plot
plt.scatter(weather_dataframe["Lat"], weather_dataframe["Wind Speed"],
            edgecolors="black", facecolors="skyblue")
plt.title("City Latitude vs. Wind Speed (%s)" % time.strftime('%x'))
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.ylim(-2, 34)
plt.grid(b=True, which="major", axis="both", linestyle="-", color="lightgrey")
plt.savefig("Figures/fig4.png")
plt.show()
# This shows the relationship between latitude and wind speed
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Linear Regression
# Define x and y values
x_values = weather_dataframe['Lat']
y_values = weather_dataframe['Max Temp']

# Perform a linear regression on temperature vs. latitude
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)

# Get regression values
regress_values = x_values * slope + intercept
print(regress_values)

# Create line equation string
line_eq = "y = " + str(round(slope, 2)) + "x +" + str(round(intercept, 2))
print(line_eq)

# Create Plot
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")

# Label plot and annotate the line equation
plt.xlabel('Latitude')
plt.ylabel('Temperature')
plt.annotate(line_eq, (-40, 0), fontsize=15, color="red")

# Print r-squared value (square of the correlation coefficient)
print(f"The r-squared is: {rvalue ** 2}")

# Show plot
plt.show()

# Create Northern and Southern Hemisphere DataFrames
south_df = weather_dataframe[weather_dataframe["Lat"] < 0]
north_df = weather_dataframe[weather_dataframe["Lat"] >= 0]
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Northern Hemisphere - Max Temp vs. Latitude Linear Regression
def linear_agression(x, y):
    # index 2 of the linregress result is the r value; square it for r-squared
    print(f'The r-squared is : {round(linregress(x, y)[2] ** 2, 2)}')
    (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y)
    regress_values = x * slope + intercept
    line_eq = 'y = ' + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2))
    plt.scatter(x, y)
    plt.plot(x, regress_values, 'r-')
    return line_eq

# Define a function for annotating
def annotate(line_eq, a, b):
    plt.annotate(line_eq, (a, b), fontsize=15, color='red')

# Call function #1
equation = linear_agression(north_df['Lat'], north_df['Max Temp'])

# Call function #2
annotate(equation, 0, 0)

# Set a title
plt.title('Northern Hemisphere - Max Temp vs. Latitude Linear Regression')

# Set xlabel
plt.xlabel('Latitude')

# Set ylabel
plt.ylabel('Max Temp')

# Save the figure
plt.savefig('Northern Hemisphere - Max Temp vs. Latitude Linear Regression.png')

# This graph shows the relationship in the Northern Hemisphere between Max Temp and latitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Southern Hemisphere - Max Temp vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(south_df['Lat'], south_df['Max Temp']) # Call an function #2 annotate(equation, -30, 50) # Set a title plt.title('Southern Hemisphere - Max Temp vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Max Temp') # Save the figure plt.savefig('Southern Hemisphere - Max Temp vs. Latitude Linear Regression.png') # This graph shows the relationship in the southern Hemisphere between Temp vs. latitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(north_df['Lat'], north_df['Humidity']) # Call an function #2 annotate(equation, 40, 20) # Set a title plt.title('Northern Hemisphere - Humidity vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Humidity') # Save the figure plt.savefig('Northern Hemisphere - Humidity vs. Latitude Linear Regression.png')
The r-squared is : 0.41
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(south_df['Lat'], south_df['Humidity']) # Call an function #2 annotate(equation,-30, 20) # Set a title plt.title('Southern Hemisphere - Humidity vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Humidity') # Save the figure plt.savefig('Southern Hemisphere - Humidity vs. Latitude Linear Regression.png') # This graph shows the relationship in the Southern Hemisphere between humidity vs. lattitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(north_df['Lat'], north_df['Humidity']) # Call an function #2 annotate(equation, 40, 20) # Set a title plt.title('Northern Hemisphere - Humidity vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Humidity') # Save the figure plt.savefig('Northern Hemisphere - Humidity vs. Latitude Linear Regression.png') # This graph shows the relationship in the Northern Hemisphere between humidity vs. lattitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(south_df['Lat'], south_df['Cloudiness']) # Call an function #2 annotate(equation,-50, 10) # Set a title plt.title('Southern Hemisphere - Cloudiness vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Cloudiness') # Save the figure plt.savefig('Southern Hemisphere - Cloudiness vs. Lattude Linear Regression.png') #This grpah shows the relationship in the southern hempisphere with cloudiness vs lattitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(north_df['Lat'], north_df['Wind Speed']) # Call an function #2 annotate(equation,10, 35) # Set a title plt.title('Northern Hemisphere - Wind Speed vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Wind Speed') # Save the figure plt.savefig('Northern Hemisphere - Wind Speed vs. Latitude Linear Regression.png') #The northern hemisphere is wind speed vs lattitude
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
def linear_agression(x,y): print(f'The r-squared is : {round(linregress(x, y)[0],2)}') (slope, intercept, rvalue, pvalue, stderr) = linregress(x, y) regress_values = x * slope + intercept line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2)) plt.scatter(x, y) plt.plot(x,regress_values,'r-') return line_eq # Define a fuction for annotating def annotate(line_eq, a, b): plt.annotate(line_eq,(a,b),fontsize=15,color='red') # Call an function #1 equation = linear_agression(south_df['Lat'], south_df['Wind Speed']) # Call an function #2 annotate(equation, -50, 25) # Set a title plt.title('Southern Hemisphere - Wind Speed vs. Latitude Linear Regression') # Set xlabel plt.xlabel('Latitude') # Set ylabel plt.ylabel('Wind Speed') # Save the figure plt.savefig('Southern Hemisphere - Wind Speed vs. Latitude Linear Regression.png') # This graph compares the southern hemisphere the wind speed vs lattitude.
_____no_output_____
ADSL
starter_code/WeatherPy.ipynb
sruelle/python-api-challenge
[Module 6.1] Developing the Model Deployment Pipeline (all SageMaker Model Building Pipeline steps)

This notebook proceeds with the following table of contents. Running everything end to end takes **about 5 minutes**.
- 0. Overview of the SageMaker Model Building Pipeline
- 1. Pipeline variables and environment setup
- 2. Defining the pipeline steps
  - (1) Lambda step that changes the model approval status
  - (2) Creating the SageMaker model step for deployment
  - (3) Creating the Lambda step that deploys the model endpoint
- 3. Defining and running the model building pipeline
- 4. Running with pipeline caching and parameters
- 5. Cleanup

---

0. [Module 6.1] Overview of the model deployment pipeline
- This notebook builds a pipeline for the following situation:
  - There are several model package groups in the model registry.
  - We pick a specific model package group and select the most recently registered model version.
  - We change the "model approval status" of the selected model version from "Pending" to "Approved".
  - We create a SageMaker model for that model version.
  - We create an endpoint based on the SageMaker model.

1. Pipeline variables and environment setup
import boto3
import sagemaker
import pandas as pd

region = boto3.Session().region_name
sagemaker_session = sagemaker.session.Session()
role = sagemaker.get_execution_role()
sm_client = boto3.client('sagemaker', region_name=region)

%store -r
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
(1) Creating the model building pipeline parameters

There are broadly three kinds of variables that can be passed to the pipeline as arguments; here we use:
- the model approval status value applied when registering the model in the model registry
from sagemaker.workflow.parameters import (
    ParameterInteger,
    ParameterString,
    ParameterFloat,
)

model_approval_status = ParameterString(
    name="ModelApprovalStatus", default_value="PendingManualApproval"
)
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
2. Defining the pipeline steps

(1) Lambda step that changes the model approval status
- Look up the relevant model package group in the model registry and change the 'model approval status' of the most recent model version.

[Error] If the error below occurs, run the policy-addition section of `0.0.Setup-Environment.ipynb` first.
```
ClientError: An error occurred (AccessDenied) when calling the CreateRole operation: User: arn:aws:sts::0287032915XX:assumed-role/AmazonSageMaker-ExecutionRole-20210827T141955/SageMaker is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::0287032915XX:role/lambda-deployment-role
```
from src.iam_helper import create_lambda_role

lambda_role = create_lambda_role("lambda-deployment-role")
print("lambda_role: \n", lambda_role)

from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.lambda_step import (
    LambdaStep,
    LambdaOutput,
    LambdaOutputTypeEnum,
)
import time

current_time = time.strftime("%m-%d-%H-%M-%S", time.localtime())
function_name = "sagemaker-lambda-step-approve-model-deployment-" + current_time
print("function_name: \n", function_name)

# Lambda helper class can be used to create the Lambda function
func_approve_model = Lambda(
    function_name=function_name,
    execution_role_arn=lambda_role,
    script="src/iam_change_model_approval.py",
    handler="iam_change_model_approval.lambda_handler",
)

output_param_1 = LambdaOutput(output_name="statusCode", output_type=LambdaOutputTypeEnum.String)
output_param_2 = LambdaOutput(output_name="body", output_type=LambdaOutputTypeEnum.String)
output_param_3 = LambdaOutput(output_name="other_key", output_type=LambdaOutputTypeEnum.String)

step_approve_lambda = LambdaStep(
    name="LambdaApproveModelStep",
    lambda_func=func_approve_model,
    inputs={
        "model_package_group_name": model_package_group_name,
        "ModelApprovalStatus": "Approved",
    },
    outputs=[output_param_1, output_param_2, output_param_3],
)
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
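The actual handler lives in `src/iam_change_model_approval.py` in the repository. As a hedged sketch of what such a handler might look like (the exact contents are an assumption; the event keys match the LambdaStep inputs above, and the newest model package is listed first by default):

```python
# Hypothetical sketch of an approval-status Lambda handler (not the repo's actual file).
import json
import boto3

def lambda_handler(event, context):
    sm_client = boto3.client("sagemaker")
    group_name = event["model_package_group_name"]

    # The most recently created model package is returned first.
    packages = sm_client.list_model_packages(ModelPackageGroupName=group_name)
    latest_arn = packages["ModelPackageSummaryList"][0]["ModelPackageArn"]

    # Flip the approval status of the latest model version.
    sm_client.update_model_package(
        ModelPackageArn=latest_arn,
        ModelApprovalStatus=event["ModelApprovalStatus"],
    )

    return {
        "statusCode": "200",
        "body": json.dumps(f"Updated approval status of {latest_arn}"),
        "other_key": "example",
    }
```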
(2) Creating the SageMaker model step for deployment
- For the model whose "model approval status" was changed by the Lambda step above, retrieve the Docker container image and the model artifact location stored in the model registry.
- Then create a SageMaker model from these two arguments.
import boto3
sm_client = boto3.client('sagemaker')

# Pass the model_package_group_name created earlier as the argument.
response = sm_client.list_model_packages(ModelPackageGroupName=model_package_group_name)
ModelPackageArn = response['ModelPackageSummaryList'][0]['ModelPackageArn']
sm_client.describe_model_package(ModelPackageName=ModelPackageArn)

response = sm_client.describe_model_package(ModelPackageName=ModelPackageArn)
image_uri_approved = response["InferenceSpecification"]["Containers"][0]["Image"]
ModelDataUrl_approved = response["InferenceSpecification"]["Containers"][0]["ModelDataUrl"]
print("image_uri_approved: ", image_uri_approved)
print("ModelDataUrl_approved: ", ModelDataUrl_approved)

from sagemaker.model import Model

model = Model(
    image_uri=image_uri_approved,
    model_data=ModelDataUrl_approved,
    sagemaker_session=sagemaker_session,
    role=role,
)

from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep

inputs = CreateModelInput(
    instance_type="ml.m5.large",
    # accelerator_type="ml.eia1.medium",
)

step_create_best_model = CreateModelStep(
    name="CreateFraudhModel",
    model=model,
    inputs=inputs,
)
step_create_best_model.add_depends_on([step_approve_lambda])  # run after step_approve_lambda completes
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
(3) Creating the Lambda step that deploys the model endpoint
- The Lambda function takes the SageMaker model, the endpoint config name, and the endpoint name as inputs and creates the endpoint.
# model_name = project_prefix + "-lambda-model" + current_time
endpoint_config_name = "lambda-deploy-endpoint-config-" + current_time
endpoint_name = "lambda-deploy-endpoint-" + current_time
function_name = "sagemaker-lambda-step-endpoint-deploy-" + current_time

# print("model_name: \n", model_name)
print("endpoint_config_name: \n", endpoint_config_name)
print("endpoint_config_name: \n", len(endpoint_config_name))
print("endpoint_name: \n", endpoint_name)
print("function_name: \n", function_name)

# Lambda helper class can be used to create the Lambda function
func_deploy_model = Lambda(
    function_name=function_name,
    execution_role_arn=lambda_role,
    script="src/iam_create_endpoint.py",
    handler="iam_create_endpoint.lambda_handler",
    timeout=900,  # the default is 120 seconds; extended to 10 minutes here
)

output_param_1 = LambdaOutput(output_name="statusCode", output_type=LambdaOutputTypeEnum.String)
output_param_2 = LambdaOutput(output_name="body", output_type=LambdaOutputTypeEnum.String)
output_param_3 = LambdaOutput(output_name="other_key", output_type=LambdaOutputTypeEnum.String)

step_deploy_lambda = LambdaStep(
    name="LambdaDeployStep",
    lambda_func=func_deploy_model,
    inputs={
        "model_name": step_create_best_model.properties.ModelName,
        "endpoint_config_name": endpoint_config_name,
        "endpoint_name": endpoint_name,
    },
    outputs=[output_param_1, output_param_2, output_param_3],
)
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
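The deployment handler lives in `src/iam_create_endpoint.py`. A hedged sketch of what it might do follows (an assumption for illustration; the instance type is an illustrative choice, and the event keys mirror the LambdaStep inputs above):

```python
# Hypothetical sketch of an endpoint-deployment Lambda handler (not the repo's actual file).
import json
import boto3

def lambda_handler(event, context):
    sm_client = boto3.client("sagemaker")

    # Create an endpoint configuration pointing at the SageMaker model created upstream.
    sm_client.create_endpoint_config(
        EndpointConfigName=event["endpoint_config_name"],
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": event["model_name"],
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",   # illustrative choice
        }],
    )

    # Create the endpoint from that configuration.
    sm_client.create_endpoint(
        EndpointName=event["endpoint_name"],
        EndpointConfigName=event["endpoint_config_name"],
    )

    return {
        "statusCode": "200",
        "body": json.dumps("Endpoint creation started"),
        "other_key": "example",
    }
```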
3. Defining and running the model building pipeline

Define the pipeline using the steps defined above.
- steps=[step_process, step_train, step_create_model, step_deploy],
- The cell below takes about 1 minute. Afterwards, you can check the execution result in Studio as shown below.
- ![deployment-pipeline.png](img/deployment-pipeline.png)
from sagemaker.workflow.pipeline import Pipeline

project_prefix = 'sagemaker-pipeline-phase2-deployment-step-by-step'
pipeline_name = project_prefix

pipeline = Pipeline(
    name=pipeline_name,
    parameters=[
        model_approval_status,
    ],
    steps=[step_approve_lambda, step_create_best_model, step_deploy_lambda],
)

import json
definition = json.loads(pipeline.definition())
# definition
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
Submit the pipeline to SageMaker and run it
pipeline.upsert(role_arn=role)
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
Run the pipeline using the default parameter values.
execution = pipeline.start()
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
Pipeline operations: waiting for the pipeline and checking its execution status

Examine the workflow's execution status and wait until the execution completes.
execution.wait()
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
List the executed steps. This shows the steps started or completed by the pipeline's step-execution service.
execution.list_steps()
_____no_output_____
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
5. Cleanup: saving variables
depolyment_endpoint_name = endpoint_name
%store depolyment_endpoint_name

all_deployment_pipeline_name = pipeline_name
%store all_deployment_pipeline_name
Stored 'depolyment_endpoint_name' (str) Stored 'all_deployment_pipeline_name' (str)
Apache-2.0
phase02/6.1.deployment-pipeline.ipynb
gonsoomoon-ml/SageMaker-Pipelines-Step-By-Step
**Chapter 9 – Up and running with TensorFlow** _This notebook contains all the sample code and solutions to the exercises in chapter 9._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import os

# to make this notebook's output stable across runs
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "tensorflow"

def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Creating and running a graph
import tensorflow as tf reset_graph() x = tf.Variable(3, name="x") y = tf.Variable(4, name="y") f = x*x*y + y + 2 f sess = tf.Session() sess.run(x.initializer) sess.run(y.initializer) result = sess.run(f) print(result) sess.close() with tf.Session() as sess: x.initializer.run() y.initializer.run() result = f.eval() result init = tf.global_variables_initializer() with tf.Session() as sess: init.run() result = f.eval() result init = tf.global_variables_initializer() sess = tf.InteractiveSession() init.run() result = f.eval() print(result) sess.close() result
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Managing graphs
reset_graph() x1 = tf.Variable(1) x1.graph is tf.get_default_graph() graph = tf.Graph() with graph.as_default(): x2 = tf.Variable(2) x2.graph is graph x2.graph is tf.get_default_graph() w = tf.constant(3) x = w + 2 y = x + 5 z = x * 3 with tf.Session() as sess: print(y.eval()) # 10 print(z.eval()) # 15 with tf.Session() as sess: y_val, z_val = sess.run([y, z]) print(y_val) # 10 print(z_val) # 15
10 15
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Linear Regression Using the Normal Equation
import numpy as np from sklearn.datasets import fetch_california_housing reset_graph() housing = fetch_california_housing() m, n = housing.data.shape housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data] X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X") y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") XT = tf.transpose(X) theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y) with tf.Session() as sess: theta_value = theta.eval() theta_value
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Compare with pure NumPy
X = housing_data_plus_bias y = housing.target.reshape(-1, 1) theta_numpy = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) print(theta_numpy)
[[ -3.69419202e+01] [ 4.36693293e-01] [ 9.43577803e-03] [ -1.07322041e-01] [ 6.45065694e-01] [ -3.97638942e-06] [ -3.78654266e-03] [ -4.21314378e-01] [ -4.34513755e-01]]
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Compare with Scikit-Learn
from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(housing.data, housing.target.reshape(-1, 1)) print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])
[[ -3.69419202e+01] [ 4.36693293e-01] [ 9.43577803e-03] [ -1.07322041e-01] [ 6.45065694e-01] [ -3.97638942e-06] [ -3.78654265e-03] [ -4.21314378e-01] [ -4.34513755e-01]]
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Using Batch Gradient Descent Gradient Descent requires scaling the feature vectors first. We could do this using TF, but let's just use Scikit-Learn for now.
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaled_housing_data = scaler.fit_transform(housing.data) scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data] print(scaled_housing_data_plus_bias.mean(axis=0)) print(scaled_housing_data_plus_bias.mean(axis=1)) print(scaled_housing_data_plus_bias.mean()) print(scaled_housing_data_plus_bias.shape)
[ 1.00000000e+00 6.60969987e-17 5.50808322e-18 6.60969987e-17 -1.06030602e-16 -1.10161664e-17 3.44255201e-18 -1.07958431e-15 -8.52651283e-15] [ 0.38915536 0.36424355 0.5116157 ..., -0.06612179 -0.06360587 0.01359031] 0.111111111111 (20640, 9)
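For completeness, the same standardization could also be expressed with TensorFlow ops instead of Scikit-Learn. A minimal sketch, assuming `housing`, `m`, and `tf` from the cells above (the book keeps the Scikit-Learn version):

# Standardize the features with TensorFlow ops (population mean/variance,
# matching StandardScaler's behaviour).
X_raw = tf.constant(housing.data, dtype=tf.float32)
mean, variance = tf.nn.moments(X_raw, axes=[0])
X_std = (X_raw - mean) / tf.sqrt(variance)
with tf.Session() as sess:
    scaled_tf = sess.run(X_std)
scaled_tf_plus_bias = np.c_[np.ones((m, 1)), scaled_tf]
print(scaled_tf_plus_bias.shape)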
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Manually computing the gradients
reset_graph() n_epochs = 1000 learning_rate = 0.01 X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") error = y_pred - y mse = tf.reduce_mean(tf.square(error), name="mse") gradients = 2/m * tf.matmul(tf.transpose(X), error) training_op = tf.assign(theta, theta - learning_rate * gradients) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): if epoch % 100 == 0: print("Epoch", epoch, "MSE =", mse.eval()) sess.run(training_op) best_theta = theta.eval() best_theta
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Using autodiff Same as above except for the `gradients = ...` line:
reset_graph() n_epochs = 1000 learning_rate = 0.01 X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") error = y_pred - y mse = tf.reduce_mean(tf.square(error), name="mse") gradients = tf.gradients(mse, [theta])[0] training_op = tf.assign(theta, theta - learning_rate * gradients) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): if epoch % 100 == 0: print("Epoch", epoch, "MSE =", mse.eval()) sess.run(training_op) best_theta = theta.eval() print("Best theta:") print(best_theta)
Epoch 0 MSE = 9.16154 Epoch 100 MSE = 0.714501 Epoch 200 MSE = 0.566705 Epoch 300 MSE = 0.555572 Epoch 400 MSE = 0.548812 Epoch 500 MSE = 0.543636 Epoch 600 MSE = 0.539629 Epoch 700 MSE = 0.536509 Epoch 800 MSE = 0.534068 Epoch 900 MSE = 0.532147 Best theta: [[ 2.06855249] [ 0.88740271] [ 0.14401658] [-0.34770882] [ 0.36178368] [ 0.00393811] [-0.04269556] [-0.66145277] [-0.6375277 ]]
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
How could you find the partial derivatives of the following function with regards to `a` and `b`?
def my_func(a, b): z = 0 for i in range(100): z = a * np.cos(z + i) + z * np.sin(b - i) return z my_func(0.2, 0.3) reset_graph() a = tf.Variable(0.2, name="a") b = tf.Variable(0.3, name="b") z = tf.constant(0.0, name="z0") for i in range(100): z = a * tf.cos(z + i) + z * tf.sin(b - i) grads = tf.gradients(z, [a, b]) init = tf.global_variables_initializer()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Let's compute the function at $a=0.2$ and $b=0.3$, and the partial derivatives at that point with regards to $a$ and with regards to $b$:
with tf.Session() as sess: init.run() print(z.eval()) print(sess.run(grads))
-0.212537 [-1.1388494, 0.19671395]
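As a quick sanity check (a sketch using the pure-NumPy `my_func()` defined above), finite differences should give approximately the same partial derivatives:

# Finite-difference approximation of df/da and df/db at (0.2, 0.3).
eps = 1e-6
a0, b0 = 0.2, 0.3
df_da = (my_func(a0 + eps, b0) - my_func(a0, b0)) / eps
df_db = (my_func(a0, b0 + eps) - my_func(a0, b0)) / eps
print(df_da, df_db)  # should be close to the tf.gradients values above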
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Using a `GradientDescentOptimizer`
reset_graph() n_epochs = 1000 learning_rate = 0.01 X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") error = y_pred - y mse = tf.reduce_mean(tf.square(error), name="mse") optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) training_op = optimizer.minimize(mse) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): if epoch % 100 == 0: print("Epoch", epoch, "MSE =", mse.eval()) sess.run(training_op) best_theta = theta.eval() print("Best theta:") print(best_theta)
Epoch 0 MSE = 9.16154 Epoch 100 MSE = 0.714501 Epoch 200 MSE = 0.566705 Epoch 300 MSE = 0.555572 Epoch 400 MSE = 0.548812 Epoch 500 MSE = 0.543636 Epoch 600 MSE = 0.539629 Epoch 700 MSE = 0.536509 Epoch 800 MSE = 0.534068 Epoch 900 MSE = 0.532147 Best theta: [[ 2.06855249] [ 0.88740271] [ 0.14401658] [-0.34770882] [ 0.36178368] [ 0.00393811] [-0.04269556] [-0.66145277] [-0.6375277 ]]
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Using a momentum optimizer
reset_graph() n_epochs = 1000 learning_rate = 0.01 X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") error = y_pred - y mse = tf.reduce_mean(tf.square(error), name="mse") optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9) training_op = optimizer.minimize(mse) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): sess.run(training_op) best_theta = theta.eval() print("Best theta:") print(best_theta)
Best theta: [[ 2.06855798] [ 0.82962859] [ 0.11875337] [-0.26554456] [ 0.30571091] [-0.00450251] [-0.03932662] [-0.89986444] [-0.87052065]]
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Feeding data to the training algorithm Placeholder nodes
reset_graph() A = tf.placeholder(tf.float32, shape=(None, 3)) B = A + 5 with tf.Session() as sess: B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]}) B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]}) print(B_val_1) print(B_val_2)
[[ 9. 10. 11.] [ 12. 13. 14.]]
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Mini-batch Gradient Descent
n_epochs = 1000 learning_rate = 0.01 reset_graph() X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X") y = tf.placeholder(tf.float32, shape=(None, 1), name="y") theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") error = y_pred - y mse = tf.reduce_mean(tf.square(error), name="mse") optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) training_op = optimizer.minimize(mse) init = tf.global_variables_initializer() n_epochs = 10 batch_size = 100 n_batches = int(np.ceil(m / batch_size)) def fetch_batch(epoch, batch_index, batch_size): np.random.seed(epoch * n_batches + batch_index) # not shown in the book indices = np.random.randint(m, size=batch_size) # not shown X_batch = scaled_housing_data_plus_bias[indices] # not shown y_batch = housing.target.reshape(-1, 1)[indices] # not shown return X_batch, y_batch with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): for batch_index in range(n_batches): X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) best_theta = theta.eval() best_theta
_____no_output_____
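Note that `fetch_batch()` samples indices with replacement, so some instances may appear twice in an epoch and others not at all. An alternative sketch (not from the book) shuffles the indices once per epoch and slices them instead:

def fetch_batch_shuffled(epoch, batch_index, batch_size):
    # One deterministic shuffle per epoch; each instance appears exactly once.
    rnd = np.random.RandomState(epoch)
    shuffled_indices = rnd.permutation(m)
    indices = shuffled_indices[batch_index * batch_size : (batch_index + 1) * batch_size]
    X_batch = scaled_housing_data_plus_bias[indices]
    y_batch = housing.target.reshape(-1, 1)[indices]
    return X_batch, y_batch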
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Saving and restoring a model
reset_graph() n_epochs = 1000 # not shown in the book learning_rate = 0.01 # not shown X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X") # not shown y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y") # not shown theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") # not shown error = y_pred - y # not shown mse = tf.reduce_mean(tf.square(error), name="mse") # not shown optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) # not shown training_op = optimizer.minimize(mse) # not shown init = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session() as sess: sess.run(init) for epoch in range(n_epochs): if epoch % 100 == 0: print("Epoch", epoch, "MSE =", mse.eval()) # not shown save_path = saver.save(sess, "/tmp/my_model.ckpt") sess.run(training_op) best_theta = theta.eval() save_path = saver.save(sess, "/tmp/my_model_final.ckpt") best_theta with tf.Session() as sess: saver.restore(sess, "/tmp/my_model_final.ckpt") best_theta_restored = theta.eval() # not shown in the book np.allclose(best_theta, best_theta_restored)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
If you want to have a saver that loads and restores `theta` with a different name, such as `"weights"`:
saver = tf.train.Saver({"weights": theta})
_____no_output_____
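A minimal usage sketch for this renamed saver (the checkpoint path is illustrative): it writes and reads `theta` under the checkpoint key "weights" instead of "theta".

init = tf.global_variables_initializer()
with tf.Session() as sess:
    init.run()
    save_path = saver.save(sess, "/tmp/my_renamed_model.ckpt")  # stores theta as "weights"
    saver.restore(sess, save_path)                              # reads it back under that key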
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
By default the saver also saves the graph structure itself in a second file with the extension `.meta`. You can use the function `tf.train.import_meta_graph()` to restore the graph structure. This function loads the graph into the default graph and returns a `Saver` that can then be used to restore the graph state (i.e., the variable values):
reset_graph() # notice that we start with an empty graph. saver = tf.train.import_meta_graph("/tmp/my_model_final.ckpt.meta") # this loads the graph structure theta = tf.get_default_graph().get_tensor_by_name("theta:0") # not shown in the book with tf.Session() as sess: saver.restore(sess, "/tmp/my_model_final.ckpt") # this restores the graph's state best_theta_restored = theta.eval() # not shown in the book np.allclose(best_theta, best_theta_restored)
_____no_output_____
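Since the graph structure itself was restored, you can inspect it without any of the original construction code. A small sketch, assuming the `import_meta_graph()` call above succeeded:

# List the first few operations of the restored default graph.
for op in tf.get_default_graph().get_operations()[:10]:
    print(op.name)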
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
This means that you can import a pretrained model without having to have the corresponding Python code to build the graph. This is very handy when you keep tweaking and saving your model: you can load a previously saved model without having to search for the version of the code that built it. Visualizing the graph inside Jupyter
from IPython.display import clear_output, Image, display, HTML def strip_consts(graph_def, max_const_size=32): """Strip large constant values from graph_def.""" strip_def = tf.GraphDef() for n0 in graph_def.node: n = strip_def.node.add() n.MergeFrom(n0) if n.op == 'Const': tensor = n.attr['value'].tensor size = len(tensor.tensor_content) if size > max_const_size: tensor.tensor_content = b"<stripped %d bytes>"%size return strip_def def show_graph(graph_def, max_const_size=32): """Visualize TensorFlow graph.""" if hasattr(graph_def, 'as_graph_def'): graph_def = graph_def.as_graph_def() strip_def = strip_consts(graph_def, max_const_size=max_const_size) code = """ <script> function load() {{ document.getElementById("{id}").pbtxt = {data}; }} </script> <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()> <div style="height:600px"> <tf-graph-basic id="{id}"></tf-graph-basic> </div> """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand())) iframe = """ <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe> """.format(code.replace('"', '&quot;')) display(HTML(iframe)) show_graph(tf.get_default_graph())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Using TensorBoard
reset_graph() from datetime import datetime now = datetime.utcnow().strftime("%Y%m%d%H%M%S") root_logdir = "tf_logs" logdir = "{}/run-{}/".format(root_logdir, now) n_epochs = 1000 learning_rate = 0.01 X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X") y = tf.placeholder(tf.float32, shape=(None, 1), name="y") theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta") y_pred = tf.matmul(X, theta, name="predictions") error = y_pred - y mse = tf.reduce_mean(tf.square(error), name="mse") optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) training_op = optimizer.minimize(mse) init = tf.global_variables_initializer() mse_summary = tf.summary.scalar('MSE', mse) file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph()) n_epochs = 10 batch_size = 100 n_batches = int(np.ceil(m / batch_size)) with tf.Session() as sess: # not shown in the book sess.run(init) # not shown for epoch in range(n_epochs): # not shown for batch_index in range(n_batches): X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size) if batch_index % 10 == 0: summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch}) step = epoch * n_batches + batch_index file_writer.add_summary(summary_str, step) sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) best_theta = theta.eval() # not shown file_writer.close() best_theta
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Name scopes
reset_graph()

now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)

n_epochs = 1000
learning_rate = 0.01

X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")

# Define the loss inside a name scope; this must come before the optimizer,
# since `mse` is what gets minimized below.
with tf.name_scope("loss") as scope:
    error = y_pred - y
    mse = tf.reduce_mean(tf.square(error), name="mse")

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)

init = tf.global_variables_initializer()

mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())

n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        for batch_index in range(n_batches):
            X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
            if batch_index % 10 == 0:
                summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
                step = epoch * n_batches + batch_index
                file_writer.add_summary(summary_str, step)
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})

    best_theta = theta.eval()

file_writer.flush()
file_writer.close()
print("Best theta:")
print(best_theta)

print(error.op.name)  # loss/sub
print(mse.op.name)    # loss/mse

reset_graph()

a1 = tf.Variable(0, name="a")      # name == "a"
a2 = tf.Variable(0, name="a")      # name == "a_1"

with tf.name_scope("param"):       # name == "param"
    a3 = tf.Variable(0, name="a")  # name == "param/a"

with tf.name_scope("param"):       # name == "param_1"
    a4 = tf.Variable(0, name="a")  # name == "param_1/a"

for node in (a1, a2, a3, a4):
    print(node.op.name)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Modularity First, some ugly flat code:
reset_graph() n_features = 3 X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") w1 = tf.Variable(tf.random_normal((n_features, 1)), name="weights1") w2 = tf.Variable(tf.random_normal((n_features, 1)), name="weights2") b1 = tf.Variable(0.0, name="bias1") b2 = tf.Variable(0.0, name="bias2") z1 = tf.add(tf.matmul(X, w1), b1, name="z1") z2 = tf.add(tf.matmul(X, w2), b2, name="z2") relu1 = tf.maximum(z1, 0., name="relu1") relu2 = tf.maximum(z1, 0., name="relu2") # Oops, cut&paste error! Did you spot it? output = tf.add(relu1, relu2, name="output")
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Much better, using a function to build the ReLUs:
reset_graph() def relu(X): w_shape = (int(X.get_shape()[1]), 1) w = tf.Variable(tf.random_normal(w_shape), name="weights") b = tf.Variable(0.0, name="bias") z = tf.add(tf.matmul(X, w), b, name="z") return tf.maximum(z, 0., name="relu") n_features = 3 X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") relus = [relu(X) for i in range(5)] output = tf.add_n(relus, name="output") file_writer = tf.summary.FileWriter("logs/relu1", tf.get_default_graph())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Even better using name scopes:
reset_graph() def relu(X): with tf.name_scope("relu"): w_shape = (int(X.get_shape()[1]), 1) # not shown in the book w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown b = tf.Variable(0.0, name="bias") # not shown z = tf.add(tf.matmul(X, w), b, name="z") # not shown return tf.maximum(z, 0., name="max") # not shown n_features = 3 X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") relus = [relu(X) for i in range(5)] output = tf.add_n(relus, name="output") file_writer = tf.summary.FileWriter("logs/relu2", tf.get_default_graph()) file_writer.close()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Sharing Variables Sharing a `threshold` variable the classic way, by defining it outside of the `relu()` function then passing it as a parameter:
reset_graph() def relu(X, threshold): with tf.name_scope("relu"): w_shape = (int(X.get_shape()[1]), 1) # not shown in the book w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown b = tf.Variable(0.0, name="bias") # not shown z = tf.add(tf.matmul(X, w), b, name="z") # not shown return tf.maximum(z, threshold, name="max") threshold = tf.Variable(0.0, name="threshold") X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") relus = [relu(X, threshold) for i in range(5)] output = tf.add_n(relus, name="output") reset_graph() def relu(X): with tf.name_scope("relu"): if not hasattr(relu, "threshold"): relu.threshold = tf.Variable(0.0, name="threshold") w_shape = int(X.get_shape()[1]), 1 # not shown in the book w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown b = tf.Variable(0.0, name="bias") # not shown z = tf.add(tf.matmul(X, w), b, name="z") # not shown return tf.maximum(z, relu.threshold, name="max") X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") relus = [relu(X) for i in range(5)] output = tf.add_n(relus, name="output") reset_graph() with tf.variable_scope("relu"): threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0)) with tf.variable_scope("relu", reuse=True): threshold = tf.get_variable("threshold") with tf.variable_scope("relu") as scope: scope.reuse_variables() threshold = tf.get_variable("threshold") reset_graph() def relu(X): with tf.variable_scope("relu", reuse=True): threshold = tf.get_variable("threshold") w_shape = int(X.get_shape()[1]), 1 # not shown w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown b = tf.Variable(0.0, name="bias") # not shown z = tf.add(tf.matmul(X, w), b, name="z") # not shown return tf.maximum(z, threshold, name="max") X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") with tf.variable_scope("relu"): threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0)) relus = [relu(X) for relu_index in range(5)] output = tf.add_n(relus, name="output") file_writer = tf.summary.FileWriter("logs/relu6", tf.get_default_graph()) file_writer.close() reset_graph() def relu(X): with tf.variable_scope("relu"): threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0)) w_shape = (int(X.get_shape()[1]), 1) w = tf.Variable(tf.random_normal(w_shape), name="weights") b = tf.Variable(0.0, name="bias") z = tf.add(tf.matmul(X, w), b, name="z") return tf.maximum(z, threshold, name="max") X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") with tf.variable_scope("", default_name="") as scope: first_relu = relu(X) # create the shared variable scope.reuse_variables() # then reuse it relus = [first_relu] + [relu(X) for i in range(4)] output = tf.add_n(relus, name="output") file_writer = tf.summary.FileWriter("logs/relu8", tf.get_default_graph()) file_writer.close() reset_graph() def relu(X): threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0)) w_shape = (int(X.get_shape()[1]), 1) # not shown in the book w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown b = tf.Variable(0.0, name="bias") # not shown z = tf.add(tf.matmul(X, w), b, name="z") # not shown return tf.maximum(z, threshold, name="max") X = tf.placeholder(tf.float32, shape=(None, n_features), name="X") relus = [] for relu_index in range(5): with tf.variable_scope("relu", reuse=(relu_index >= 1)) as scope: relus.append(relu(X)) output = 
tf.add_n(relus, name="output")

file_writer = tf.summary.FileWriter("logs/relu9", tf.get_default_graph())
file_writer.close()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Extra material
reset_graph() with tf.variable_scope("my_scope"): x0 = tf.get_variable("x", shape=(), initializer=tf.constant_initializer(0.)) x1 = tf.Variable(0., name="x") x2 = tf.Variable(0., name="x") with tf.variable_scope("my_scope", reuse=True): x3 = tf.get_variable("x") x4 = tf.Variable(0., name="x") with tf.variable_scope("", default_name="", reuse=True): x5 = tf.get_variable("my_scope/x") print("x0:", x0.op.name) print("x1:", x1.op.name) print("x2:", x2.op.name) print("x3:", x3.op.name) print("x4:", x4.op.name) print("x5:", x5.op.name) print(x0 is x3 and x3 is x5)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
The first `variable_scope()` block first creates the shared variable `x0`, named `my_scope/x`. For all operations other than shared variables (including non-shared variables), the variable scope acts like a regular name scope, which is why the two variables `x1` and `x2` have a name with a prefix `my_scope/`. Note however that TensorFlow makes their names unique by adding an index: `my_scope/x_1` and `my_scope/x_2`.The second `variable_scope()` block reuses the shared variables in scope `my_scope`, which is why `x0 is x3`. Once again, for all operations other than shared variables it acts as a named scope, and since it's a separate block from the first one, the name of the scope is made unique by TensorFlow (`my_scope_1`) and thus the variable `x4` is named `my_scope_1/x`.The third block shows another way to get a handle on the shared variable `my_scope/x` by creating a `variable_scope()` at the root scope (whose name is an empty string), then calling `get_variable()` with the full name of the shared variable (i.e. `"my_scope/x"`). Strings
reset_graph() text = np.array("Do you want some café?".split()) text_tensor = tf.constant(text) with tf.Session() as sess: print(text_tensor.eval())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Implementing a Home-Made Computation Graph
class Const(object): def __init__(self, value): self.value = value def evaluate(self): return self.value def __str__(self): return str(self.value) class Var(object): def __init__(self, init_value, name): self.value = init_value self.name = name def evaluate(self): return self.value def __str__(self): return self.name class BinaryOperator(object): def __init__(self, a, b): self.a = a self.b = b class Add(BinaryOperator): def evaluate(self): return self.a.evaluate() + self.b.evaluate() def __str__(self): return "{} + {}".format(self.a, self.b) class Mul(BinaryOperator): def evaluate(self): return self.a.evaluate() * self.b.evaluate() def __str__(self): return "({}) * ({})".format(self.a, self.b) x = Var(3, name="x") y = Var(4, name="y") f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2 print("f(x,y) =", f) print("f(3,4) =", f.evaluate())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Computing gradients Mathematical differentiation
df_dx = Mul(Const(2), Mul(x, y)) # df/dx = 2xy df_dy = Add(Mul(x, x), Const(1)) # df/dy = x² + 1 print("df/dx(3,4) =", df_dx.evaluate()) print("df/dy(3,4) =", df_dy.evaluate())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Numerical differentiation
def gradients(func, vars_list, eps=0.0001): partial_derivatives = [] base_func_eval = func.evaluate() for var in vars_list: original_value = var.value var.value = var.value + eps tweaked_func_eval = func.evaluate() var.value = original_value derivative = (tweaked_func_eval - base_func_eval) / eps partial_derivatives.append(derivative) return partial_derivatives df_dx, df_dy = gradients(f, [x, y]) print("df/dx(3,4) =", df_dx) print("df/dy(3,4) =", df_dy)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Symbolic differentiation
Const.derive = lambda self, var: Const(0) Var.derive = lambda self, var: Const(1) if self is var else Const(0) Add.derive = lambda self, var: Add(self.a.derive(var), self.b.derive(var)) Mul.derive = lambda self, var: Add(Mul(self.a, self.b.derive(var)), Mul(self.a.derive(var), self.b)) x = Var(3.0, name="x") y = Var(4.0, name="y") f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2 df_dx = f.derive(x) # 2xy df_dy = f.derive(y) # x² + 1 print("df/dx(3,4) =", df_dx.evaluate()) print("df/dy(3,4) =", df_dy.evaluate())
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Automatic differentiation (autodiff) – forward mode
class DualNumber(object): def __init__(self, value=0.0, eps=0.0): self.value = value self.eps = eps def __add__(self, b): return DualNumber(self.value + self.to_dual(b).value, self.eps + self.to_dual(b).eps) def __radd__(self, a): return self.to_dual(a).__add__(self) def __mul__(self, b): return DualNumber(self.value * self.to_dual(b).value, self.eps * self.to_dual(b).value + self.value * self.to_dual(b).eps) def __rmul__(self, a): return self.to_dual(a).__mul__(self) def __str__(self): if self.eps: return "{:.1f} + {:.1f}ε".format(self.value, self.eps) else: return "{:.1f}".format(self.value) def __repr__(self): return str(self) @classmethod def to_dual(cls, n): if hasattr(n, "value"): return n else: return cls(n)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
$3 + (3 + 4 \epsilon) = 6 + 4\epsilon$
3 + DualNumber(3, 4)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
$(3 + 4ε)\times(5 + 7ε) = 3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε = 15 + 21ε + 20ε + 28ε^2 = 15 + 41ε + 28 \times 0 = 15 + 41ε$
DualNumber(3, 4) * DualNumber(5, 7) x.value = DualNumber(3.0) y.value = DualNumber(4.0) f.evaluate() x.value = DualNumber(3.0, 1.0) # 3 + ε y.value = DualNumber(4.0) # 4 df_dx = f.evaluate().eps x.value = DualNumber(3.0) # 3 y.value = DualNumber(4.0, 1.0) # 4 + ε df_dy = f.evaluate().eps df_dx df_dy
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Autodiff – Reverse mode
class Const(object): def __init__(self, value): self.value = value def evaluate(self): return self.value def backpropagate(self, gradient): pass def __str__(self): return str(self.value) class Var(object): def __init__(self, init_value, name): self.value = init_value self.name = name self.gradient = 0 def evaluate(self): return self.value def backpropagate(self, gradient): self.gradient += gradient def __str__(self): return self.name class BinaryOperator(object): def __init__(self, a, b): self.a = a self.b = b class Add(BinaryOperator): def evaluate(self): self.value = self.a.evaluate() + self.b.evaluate() return self.value def backpropagate(self, gradient): self.a.backpropagate(gradient) self.b.backpropagate(gradient) def __str__(self): return "{} + {}".format(self.a, self.b) class Mul(BinaryOperator): def evaluate(self): self.value = self.a.evaluate() * self.b.evaluate() return self.value def backpropagate(self, gradient): self.a.backpropagate(gradient * self.b.value) self.b.backpropagate(gradient * self.a.value) def __str__(self): return "({}) * ({})".format(self.a, self.b) x = Var(3, name="x") y = Var(4, name="y") f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2 result = f.evaluate() f.backpropagate(1.0) print("f(x,y) =", f) print("f(3,4) =", result) print("df_dx =", x.gradient) print("df_dy =", y.gradient)
_____no_output_____
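As a sanity check (a sketch; it assumes the numerical-differentiation `gradients()` helper defined earlier is still in scope), finite differences should roughly agree with the backpropagated gradients:

# Numerical cross-check of the reverse-mode results above (approximate values).
df_dx_approx, df_dy_approx = gradients(f, [x, y])
print("df/dx ~", df_dx_approx)  # expect approximately 24
print("df/dy ~", df_dy_approx)  # expect approximately 10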
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Autodiff – reverse mode (using TensorFlow)
reset_graph() x = tf.Variable(3., name="x") y = tf.Variable(4., name="y") f = x*x*y + y + 2 gradients = tf.gradients(f, [x, y]) init = tf.global_variables_initializer() with tf.Session() as sess: init.run() f_val, gradients_val = sess.run([f, gradients]) f_val, gradients_val
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Exercise solutions 1. to 11. See appendix A. 12. Logistic Regression with Mini-Batch Gradient Descent using TensorFlow First, let's create the moons dataset using Scikit-Learn's `make_moons()` function:
from sklearn.datasets import make_moons m = 1000 X_moons, y_moons = make_moons(m, noise=0.1, random_state=42)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Let's take a peek at the dataset:
plt.plot(X_moons[y_moons == 1, 0], X_moons[y_moons == 1, 1], 'go', label="Positive") plt.plot(X_moons[y_moons == 0, 0], X_moons[y_moons == 0, 1], 'r^', label="Negative") plt.legend() plt.show()
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
We must not forget to add an extra bias feature ($x_0 = 1$) to every instance. For this, we just need to add a column full of 1s on the left of the input matrix $\mathbf{X}$:
X_moons_with_bias = np.c_[np.ones((m, 1)), X_moons]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Let's check:
X_moons_with_bias[:5]
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science
Looks good. Now let's reshape `y_moons` to make it a column vector (i.e. a 2D array with a single column):
y_moons_column_vector = y_moons.reshape(-1, 1)
_____no_output_____
Apache-2.0
09_up_and_running_with_tensorflow.ipynb
JeffRisberg/SciKit_and_Data_Science